Result for 017603F9415698FDD2F860E2A499B977239AE579

Query result

FileName: ./usr/lib/python3.9/site-packages/unidecode/__pycache__/x0d3.cpython-39.opt-1.pyc
FileSize: 1775
MD5: 30C7121605865CA0D06DA65D7C3A3DD2
SHA-1: 017603F9415698FDD2F860E2A499B977239AE579
SHA-256: C71693D7E4D869C30DFEEE48A4ED633A6016D63031F92A098B321254FD68EA1A
SSDEEP: 48:5WETTMUTmoHbR9SiFYzBcgGTD9aRYQR1cLlaMUUz2C9nwLZsfnM:bTTPmod9SiyzBcgGn9aRfypoUzld/U
TLSH: T1D1314A14F064E84093135AB53C5B772260CA3B40D66E4DF45733D90EB38BEAAA8E5B39
tar:gname: root
tar:uname: root
hashlookup:parent-total: 10
hashlookup:trust: 100
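
A record like this can be reproduced programmatically. Below is a minimal sketch in Python, assuming the public hashlookup.circl.lu instance and the requests library; the helper sha1_of() and any local file path are purely illustrative:

    import hashlib
    import requests

    def sha1_of(path):
        # Hash a local file in chunks so large files need not fit in memory.
        h = hashlib.sha1()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest().upper()

    # SHA-1 taken from the record above; sha1_of() could compute it from a
    # local copy of the .pyc instead.
    digest = "017603F9415698FDD2F860E2A499B977239AE579"
    resp = requests.get(f"https://hashlookup.circl.lu/lookup/sha1/{digest}", timeout=10)
    resp.raise_for_status()
    record = resp.json()
    print(record.get("FileName"), record.get("hashlookup:trust"))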


Parents (Total: 10)

The searched file hash is included in 10 parent files, which are packages known to and seen by hashlookup. A sample is included below:

MD5: 3A29A1A4B91F06EEE31487C5F804EDAB
PackageArch: noarch
PackageDescription: It often happens that you have text data in Unicode, but you need to represent it in ASCII. For example when integrating with legacy code that doesn't support Unicode, or for ease of entry of non-Roman names on a US keyboard, or when constructing ASCII machine identifiers from human-readable Unicode strings that should still be somewhat intelligible (a popular example of this is when making a URL slug from an article title). In most of these examples you could represent Unicode characters as "???" or "\\15BA\\15A0\\1610", to mention two extreme cases. But that's nearly useless to someone who actually wants to read what the text says. What Unidecode provides is a middle road: the function unidecode() takes Unicode data and tries to represent it in ASCII characters (i.e., the universally displayable characters between 0x00 and 0x7F), where the compromises taken when mapping between two character sets are chosen to be near what a human with a US keyboard would choose. The quality of the resulting ASCII representation varies. For languages of western origin it should be between perfect and good. On the other hand, transliteration (i.e., conveying, in Roman letters, the pronunciation expressed by the text in some other writing system) of languages like Chinese, Japanese or Korean is a very complex issue and this library does not even attempt to address it. It draws the line at context-free character-by-character mapping. So a good rule of thumb is that the further the script you are transliterating is from the Latin alphabet, the worse the transliteration will be. Note that this module generally produces better results than simply stripping accents from characters (which can be done in Python with built-in functions). It is based on hand-tuned character mappings that, for example, also contain ASCII approximations for symbols and non-Latin alphabets. This is a Python port of the Text::Unidecode Perl module by Sean M. Burke <sburke@cpan.org>.
PackageName: python39-Unidecode
PackageRelease: 28.4
PackageVersion: 1.3.1
SHA-1: 1D1C55856E2D574D590B99053302E6DB062B6299
SHA-256: 4DCDDB40CA3D32BE147BDBCA2E83B59519402AAF5DC74860AC3902596A5701E0
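
The description above centers on the unidecode() function. A minimal usage sketch, assuming the Unidecode package is installed; the sample strings and expected outputs follow the project's own documentation:

    from unidecode import unidecode

    # Latin scripts with diacritics transliterate nearly losslessly.
    print(unidecode("kožušček"))   # -> "kozuscek"

    # Non-Latin scripts get a context-free, character-by-character mapping.
    print(unidecode("北京"))       # -> "Bei Jing "

    # A typical use: building an ASCII-only URL slug from a title.
    print(unidecode("Hëllo Wörld").lower().replace(" ", "-"))  # -> "hello-world"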

MD5: 55540D65BD4F21F0D96A72095B330026
PackageArch: noarch
PackageDescription: (identical to the description in the first parent record above)
PackageName: python39-Unidecode
PackageRelease: 29.6
PackageVersion: 1.3.2
SHA-1: D251A78D42647637C84FAAE8E1205DD307BF3614
SHA-256: 092D8D22948543835CC7980E7B7001B5FEB10D1788A02452A555C2DB5355B926

MD5: DD13427C150A9171B74166E769391F4F
PackageArch: noarch
PackageDescription: (identical to the description in the first parent record above)
PackageName: python39-Unidecode
PackageRelease: 1.2
PackageVersion: 1.2.0
SHA-1: 2A6BA900350D72E61D29868A083196094C493B3D
SHA-256: 1BCDAE85CD6A3CD5AEFEA9DB8A9425A96136B267FE8F782B364044E47FA3C31C

MD5: 61489D9FB02A943E30AF1E0BF0509941
PackageArch: noarch
PackageDescription: (identical to the description in the first parent record above)
PackageName: python39-Unidecode
PackageRelease: 29.12
PackageVersion: 1.3.2
SHA-1: 984CC4A1D2B036F83A7C4D4A2A56F2FD5E605920
SHA-256: C9DE97C00ABFB022CA5C3D32E0C905C7D1F69A3AB9210901E55501A53FAC963D

MD5: 9FF2AAE0A32A4B20AE34A6EDE859FEB5
PackageArch: noarch
PackageDescription: (identical to the description in the first parent record above)
PackageName: python39-Unidecode
PackageRelease: 29.13
PackageVersion: 1.3.2
SHA-1: 1AD67DBA7353C38E7B262408BD8E356864D9E939
SHA-256: B2C2B5D100BA73E8AF539765180C78218241A56608B00E4D1CC7A3B8E215CC77

MD5: 45318A1AC68E322179E8410207138FFB
PackageArch: noarch
PackageDescription: (identical to the description in the first parent record above)
PackageName: python39-Unidecode
PackageRelease: 28.5
PackageVersion: 1.3.1
SHA-1: ED982FC217F83EE19C05B676CB0C8C41189CCEED
SHA-256: 7C32F75EE25F3269A73E1C5908B63060A2F62DD5DE9CB467E5E0D54F56ECD89E

FileName: http://archlinux.mirror.root.lu//pool//community//python-unidecode-1.1.2-1-any.pkg.tar.zst
MD5: 34AE3D1591A8B12652F68B8DC757E60F
SHA-1: C64E285D77A06FB3ED2EC0B2E71C751B1603FDC6
SHA-256: 872D025BC1B2D0785124267BF5B6208882F7CA8A50264EFBA12785DC8CA2F94D
SSDEEP: 6144:5VRqIe4PvLrK4/FY194TXxYIJ/0lN3Nc5exEIdgof:5uIe47ru94TXx7/0lN3Nccxf
TLSH: T1E8342374CCBDCD82F95D64EE27D12133265A6E1B76DED9821F7F9DED0C00648890AA8C

FileName: http://archlinux.mirror.root.lu//pool//community//python-unidecode-1.2.0-1-any.pkg.tar.zst
MD5: 536EF48997799AFC20813303815FD669
SHA-1: C1961CBE74B89226BA1E6B38871FA869D28C0F9E
SHA-256: 86B7678C976BC13F30447249701A9785B38A32D168FC1D2D03AE9CFFA7CF7B77
SSDEEP: 6144:vBsM39Xn+HLTP8RLhSej/5U9PmudsghtRcOjjgU0Ex7:v3tX2zedUhCghtR3XgUX
TLSH: T12334222E9193A515718CF4D3687127F0336D2827B34DF6EE8D217E3BE1249DAA7050E6

MD5: 6B4982F1DAF70F5AB86E381C1B24936C
PackageArch: noarch
PackageDescription: (identical to the description in the first parent record above)
PackageMaintainer: https://bugs.opensuse.org
PackageName: python39-Unidecode
PackageRelease: 1.3
PackageVersion: 1.3.2
SHA-1: 487F96ACFC4FAD0F71898D2471E65A3D68393FC8
SHA-256: A6CB6297C2A70D78838E7DCBD4FAC66245633EF02CBA061CA418E2701F41AF68

MD5: 06BEFF93A90101E6DA3B84995A407035
PackageArch: noarch
PackageDescription: (identical to the description in the first parent record above)
PackageName: python39-Unidecode
PackageRelease: 29.10
PackageVersion: 1.3.2
SHA-1: 9809B8519A8CA2EEDC01639AAE2854BE524E238D
SHA-256: D7EFB2141652AD6664679CC18489AB7B43018BE4BBF12D1F6BDC5BF3872CD413
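
Only a sample of the 10 parents is shown above. The full list can be paged from the API; the following sketch assumes the /parents/sha1/<sha1>/<count>/<cursor> endpoint and a top-level "parents" key in the response, both taken from the hashlookup API documentation rather than confirmed against this output:

    import requests

    SHA1 = "017603F9415698FDD2F860E2A499B977239AE579"
    # Request up to 10 parents starting at cursor 0; endpoint shape assumed
    # from the hashlookup API documentation.
    url = f"https://hashlookup.circl.lu/parents/sha1/{SHA1}/10/0"
    data = requests.get(url, timeout=10).json()
    for parent in data.get("parents", []):
        print(parent)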