Result for 02036613D85E16431C38808B4BCF1C66C998A6CD

Query result

Key                      Value
FileName                 ./usr/lib/python3.9/site-packages/unidecode/__pycache__/x017.cpython-39.pyc
FileSize                 710
MD5                      803D1D378ED29D86D792589FBC58CACC
SHA-1                    02036613D85E16431C38808B4BCF1C66C998A6CD
SHA-256                  641783064DD1153E034D8E36FBE9FC8C57CB85D0FC4BB5DDE713624876D033DD
SSDEEP                   12:eM/8eUIhviFXVYTo9/ZpJlTSeen4qvV01afHEBJiKTEO26:h/rJcVm2/Hzfs01EciAEi
TLSH                     T159019EE3F599C09FFEDAFB711121C9ACC1581283D30508963B3960A87C097E1D8158D7
hashlookup:parent-total  5
hashlookup:trust         75
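
This record can also be retrieved programmatically. A minimal sketch against the public CIRCL hashlookup REST API (the /lookup/sha1 route; the fields printed simply mirror the table above):

    import json
    import urllib.request

    # SHA-1 of the file, taken from the record above
    SHA1 = "02036613D85E16431C38808B4BCF1C66C998A6CD"

    # Public hashlookup endpoint; no authentication required
    with urllib.request.urlopen(f"https://hashlookup.circl.lu/lookup/sha1/{SHA1}") as resp:
        record = json.load(resp)

    # Print the same fields shown in the table
    for key in ("FileName", "FileSize", "MD5", "SHA-256",
                "hashlookup:parent-total", "hashlookup:trust"):
        print(f"{key}: {record.get(key)}")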

Network graph view

Parents (Total: 5)

The searched file hash is included in 5 parent files, each corresponding to a package known to and indexed by hashlookup. The parent records are listed below.
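
These parents can also be enumerated programmatically. A minimal sketch using the hashlookup /parents route; the route is part of the public API, but treat the exact response layout (a "parents" list of SHA-1 strings) as an assumption to verify:

    import json
    import urllib.request

    SHA1 = "02036613D85E16431C38808B4BCF1C66C998A6CD"
    BASE = "https://hashlookup.circl.lu"

    # List the SHA-1 hashes of the parent files containing this file
    with urllib.request.urlopen(f"{BASE}/parents/sha1/{SHA1}") as resp:
        parents = json.load(resp).get("parents", [])

    # Resolve each parent hash to its package record
    for parent_sha1 in parents:
        with urllib.request.urlopen(f"{BASE}/lookup/sha1/{parent_sha1}") as resp:
            parent = json.load(resp)
        print(parent.get("PackageName"), parent.get("PackageRelease"))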

Key                 Value
MD5                 55540D65BD4F21F0D96A72095B330026
PackageArch         noarch
PackageDescription  It often happens that you have text data in Unicode, but you need to represent it in ASCII. For example when integrating with legacy code that doesn't support Unicode, or for ease of entry of non-Roman names on a US keyboard, or when constructing ASCII machine identifiers from human-readable Unicode strings that should still be somewhat intelligible (a popular example of this is when making an URL slug from an article title). In most of these examples you could represent Unicode characters as "???" or "\\15BA\\15A0\\1610", to mention two extreme cases. But that's nearly useless to someone who actually wants to read what the text says. What Unidecode provides is a middle road: function unidecode() takes Unicode data and tries to represent it in ASCII characters (i.e., the universally displayable characters between 0x00 and 0x7F), where the compromises taken when mapping between two character sets are chosen to be near what a human with a US keyboard would choose. The quality of resulting ASCII representation varies. For languages of western origin it should be between perfect and good. On the other hand transliteration (i.e., conveying, in Roman letters, the pronunciation expressed by the text in some other writing system) of languages like Chinese, Japanese or Korean is a very complex issue and this library does not even attempt to address it. It draws the line at context-free character-by-character mapping. So a good rule of thumb is that the further the script you are transliterating is from Latin alphabet, the worse the transliteration will be. Note that this module generally produces better results than simply stripping accents from characters (which can be done in Python with built-in functions). It is based on hand-tuned character mappings that for example also contain ASCII approximations for symbols and non-Latin alphabets. This is a Python port of Text::Unidecode Perl module by Sean M. Burke <sburke@cpan.org>.
PackageName         python39-Unidecode
PackageRelease      29.6
PackageVersion      1.3.2
SHA-1               D251A78D42647637C84FAAE8E1205DD307BF3614
SHA-256              092D8D22948543835CC7980E7B7001B5FEB10D1788A02452A555C2DB5355B926
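
The PackageDescription above is the upstream Unidecode documentation. A minimal sketch of the behaviour it describes; the exact output strings are typical for Unidecode 1.3.x and may vary between versions:

    from unidecode import unidecode

    # Accented Latin text maps almost losslessly to ASCII
    print(unidecode("Žluťoučký kůň"))  # Zlutoucky kun

    # Non-Latin scripts get a rough, context-free transliteration;
    # the CJK tables append a trailing space per character
    print(unidecode("北京"))  # 'Bei Jing '

    # Symbols receive ASCII approximations instead of being dropped
    print(unidecode("© Unidecode"))  # (c) Unidecode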

Key                 Value
MD5                 61489D9FB02A943E30AF1E0BF0509941
PackageArch         noarch
PackageDescription  (identical to the description in the first parent record)
PackageName         python39-Unidecode
PackageRelease      29.12
PackageVersion      1.3.2
SHA-1               984CC4A1D2B036F83A7C4D4A2A56F2FD5E605920
SHA-256             C9DE97C00ABFB022CA5C3D32E0C905C7D1F69A3AB9210901E55501A53FAC963D

Key                 Value
MD5                 9FF2AAE0A32A4B20AE34A6EDE859FEB5
PackageArch         noarch
PackageDescription  (identical to the description in the first parent record)
PackageName         python39-Unidecode
PackageRelease      29.13
PackageVersion      1.3.2
SHA-1               1AD67DBA7353C38E7B262408BD8E356864D9E939
SHA-256             B2C2B5D100BA73E8AF539765180C78218241A56608B00E4D1CC7A3B8E215CC77

Key                 Value
MD5                 6B4982F1DAF70F5AB86E381C1B24936C
PackageArch         noarch
PackageDescription  (identical to the description in the first parent record)
PackageMaintainer   https://bugs.opensuse.org
PackageName         python39-Unidecode
PackageRelease      1.3
PackageVersion      1.3.2
SHA-1               487F96ACFC4FAD0F71898D2471E65A3D68393FC8
SHA-256             A6CB6297C2A70D78838E7DCBD4FAC66245633EF02CBA061CA418E2701F41AF68

Key                 Value
MD5                 06BEFF93A90101E6DA3B84995A407035
PackageArch         noarch
PackageDescription  (identical to the description in the first parent record)
PackageName         python39-Unidecode
PackageRelease      29.10
PackageVersion      1.3.2
SHA-1               9809B8519A8CA2EEDC01639AAE2854BE524E238D
SHA-256             D7EFB2141652AD6664679CC18489AB7B43018BE4BBF12D1F6BDC5BF3872CD413
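
A local copy of the file can be checked against this result by recomputing the digests with Python's standard hashlib. The path below is the FileName from the query result and only exists where the package is installed:

    import hashlib
    from pathlib import Path

    # FileName from the record; adjust if the package lives elsewhere
    path = Path("/usr/lib/python3.9/site-packages/unidecode/__pycache__/x017.cpython-39.pyc")
    data = path.read_bytes()

    # hashlookup stores digests as uppercase hex
    print("MD5    ", hashlib.md5(data).hexdigest().upper())
    print("SHA-1  ", hashlib.sha1(data).hexdigest().upper())
    print("SHA-256", hashlib.sha256(data).hexdigest().upper())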