Result for 0062A3846BB95867DBDE6DD8D202D761FC0A5653

Query result

FileName: usr/lib/python3.12/site-packages/unidecode/x05d.py
FileSize: 4660
MD5: CEF9212EA61D03A37A88BBD262FDCC81
RDS:package_id: 302124
SHA-1: 0062A3846BB95867DBDE6DD8D202D761FC0A5653
SHA-256: 5AE75B6FBC4EB56AF2E17BB9C66F63F34BC592280279D1AAE6E1DC600625D28F
SHA-512: B8A03DFE54B6F1DF24B4CEEE3F04A641E6CC65662E52EEF31C7E3BD6F143413501F5E9CB3FCB0697F20C7DD9E3910FDD8948E44469ED613D3B398B68C1CA6ECE
SSDEEP: 96:KUjkc6iacokkebBiq2Vgrhkf2Z8E42EUjrm:/jp6ifoQiq22hkf2Z8Epdjq
TLSH: T1A0A1D6B46A9622CC451EFEA2D214DEE3EC9780530BF0806D7EFDD824675E84DCB79498
insert-timestamp: 1728991461.2075975
mimetype: text/plain
source: snap:MZhNqq04Y1yzup4ZCsVhgBeWKdlvtBsL_39
tar:gname: root
tar:uname: root
hashlookup:parent-total: 90
hashlookup:trust: 100
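The digests in the record above (MD5, SHA-1, SHA-256, SHA-512) can be reproduced locally with Python's standard `hashlib` module, for example to verify a file before looking it up. A minimal sketch (the file path in the usage comment is just the FileName from this record and is assumed to exist on your system):

```python
import hashlib

def file_digests(path):
    """Compute the digests shown in a hashlookup-style record for a local file."""
    hashes = {
        "MD5": hashlib.md5(),
        "SHA-1": hashlib.sha1(),
        "SHA-256": hashlib.sha256(),
        "SHA-512": hashlib.sha512(),
    }
    with open(path, "rb") as f:
        # Read in chunks so large package files do not need to fit in memory.
        for chunk in iter(lambda: f.read(65536), b""):
            for h in hashes.values():
                h.update(chunk)
    # Uppercase hex, matching the formatting used in the record above.
    return {name: h.hexdigest().upper() for name, h in hashes.items()}

# e.g. file_digests("/usr/lib/python3.12/site-packages/unidecode/x05d.py")
```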

Network graph view

Parents (Total: 90)

The searched file hash is included in 90 parent files, i.e. packages known to and seen by the lookup service. A sample is included below:
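This output resembles a record from a hashlookup-style REST service such as CIRCL hashlookup; that attribution is an assumption, since the originating service is not named here. Under that assumption, a lookup like the one above can be sketched with the standard library (the base URL is the hypothetical part):

```python
import json
import urllib.request

# Assumption: the record came from a CIRCL-hashlookup-compatible service.
BASE_URL = "https://hashlookup.circl.lu"

def lookup_url(sha1_hex):
    """Build the SHA-1 lookup endpoint URL for a digest."""
    return f"{BASE_URL}/lookup/sha1/{sha1_hex.lower()}"

def lookup_sha1(sha1_hex):
    """Query the service and return the decoded JSON record (network required)."""
    with urllib.request.urlopen(lookup_url(sha1_hex)) as resp:
        return json.load(resp)

# e.g. lookup_sha1("0062A3846BB95867DBDE6DD8D202D761FC0A5653")
```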

FileName: http://dl-cdn.alpinelinux.org/alpine/latest-stable//community//riscv64//py3-unidecode-1.3.8-r1.apk
MD5: 6B75C66BA213AC60E33957BF9EC4B540
SHA-1: 050B80C70DC1220B351CCB80FC2C044B92C9E553
SHA-256: B8C2D03D018D2E97453777F7360B4A9AD704AE93D1325C5970CD87211744016E
SSDEEP: 6144:aCW1c0togHGW/7RZz1JU9aCfhbKGxVV5jmr:QCuoM/chpeG9Ur
TLSH: T1DC24125328CDA5DC84ECAE8B5A4946B48670932085B797716FCC8732F4FD4E8EB2D274

MD5: 8063F9D4F68A648D0E0E74333B0C9EAE
PackageArch: noarch
PackageDescription: It often happens that you have text data in Unicode, but you need to represent it in ASCII. For example when integrating with legacy code that doesn't support Unicode, or for ease of entry of non-Roman names on a US keyboard, or when constructing ASCII machine identifiers from human-readable Unicode strings that should still be somewhat intelligible (a popular example of this is when making an URL slug from an article title). In most of these examples you could represent Unicode characters as "???" or "\\15BA\\15A0\\1610", to mention two extreme cases. But that's nearly useless to someone who actually wants to read what the text says. What Unidecode provides is a middle road: function unidecode() takes Unicode data and tries to represent it in ASCII characters (i.e., the universally displayable characters between 0x00 and 0x7F), where the compromises taken when mapping between two character sets are chosen to be near what a human with a US keyboard would choose. The quality of resulting ASCII representation varies. For languages of western origin it should be between perfect and good. On the other hand transliteration (i.e., conveying, in Roman letters, the pronunciation expressed by the text in some other writing system) of languages like Chinese, Japanese or Korean is a very complex issue and this library does not even attempt to address it. It draws the line at context-free character-by-character mapping. So a good rule of thumb is that the further the script you are transliterating is from Latin alphabet, the worse the transliteration will be. Note that this module generally produces better results than simply stripping accents from characters (which can be done in Python with built-in functions). It is based on hand-tuned character mappings that for example also contain ASCII approximations for symbols and non-Latin alphabets. This is a Python port of Text::Unidecode Perl module by Sean M. Burke <sburke@cpan.org>.
PackageName: python3-Unidecode
PackageRelease: 2.1
PackageVersion: 1.3.2
SHA-1: 110E9F2067C942D7B7EA0E9F292754B58C70B2A9
SHA-256: A3FF76DDB5AAB7D5F3B28679671BF58337598545C8BFA7EFE29E1850543B93ED
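The package description above contrasts Unidecode's hand-tuned character mappings with "simply stripping accents from characters (which can be done in Python with built-in functions)". A sketch of that built-in baseline, using only the standard `unicodedata` module, shows both the technique and its limits; this is not Unidecode's own algorithm, just the approach the description says it improves on:

```python
import unicodedata

def strip_accents(text):
    """Baseline from the description: decompose with NFKD, drop combining marks.

    Characters with no decomposition (like the German sharp s) pass through
    unchanged, which is exactly where a hand-tuned mapping does better.
    """
    decomposed = unicodedata.normalize("NFKD", text)
    return "".join(ch for ch in decomposed if not unicodedata.combining(ch))

print(strip_accents("Café déjà vu"))  # -> Cafe deja vu
print(strip_accents("straße"))       # -> straße ('ß' is untouched; Unidecode
                                     #    maps it to an ASCII approximation)
```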

FileName: http://dl-cdn.alpinelinux.org/alpine/latest-stable//community//armhf//py3-unidecode-1.3.6-r3.apk
MD5: BAEFBB529A34C3E77E07B95D15D098F2
SHA-1: 1302CC20FBF390A3DB270B896B903293ECC95C7E
SHA-256: A059899D21EDDD6E8FD17568CDF0D1B419272A63DA43758A9294526A38AAEC4D
SSDEEP: 3072:PVsXjAgsSZLOW0tjAjIBlcF3Xe6yXmlAz03Wj+Vbwh+uQ4EEM+SJFDVAHQZfLObG:Pkj5sgqW0xlUFAz4iCo22m6qNROdk
TLSH: T1171423D3A8D0EECADBBA659217C1D04FB1C6F490BDE7647081DC599E28C3E9872F114A

FileName: http://dl-cdn.alpinelinux.org/alpine/latest-stable//community//armhf//py3-unidecode-1.3.8-r1.apk
MD5: 8DAF56F171AC1C74534285467EEED61E
SHA-1: 148847D7E27586BF62DFDAA1B4F3F9AEF363FA0B
SHA-256: D390238CB8C9186ED2233EF8E2EB22C720B6D636940936CC89AA6700C79B0890
SSDEEP: 6144:KCW1c0togHGW/7RZz1JU9aCfhbKGxVV5jmr:ACuoM/chpeG9Ur
TLSH: T1F624135728CDA5CC84ECA98B5A4946F48A709310C9A797716FCC8636F8FD4F8E62C264

MD5: D74D8A963EFA5C0BAAF8A260913D0B9D
PackageArch: noarch
PackageDescription: (identical to the description in the first package record above)
PackageMaintainer: https://www.suse.com/
PackageName: python3-Unidecode
PackageRelease: lp154.29.1
PackageVersion: 1.3.2
SHA-1: 174BF5A90A33F37F702840FB99107344C485DBA9
SHA-256: 26B978B9532C357313D452DC1C231FDD56A475AE41CB9B594293097DB031C8BB

MD5: B0258C9272884B847AD83B9E00DA6DA1
PackageArch: noarch
PackageDescription: (identical to the description in the first package record above)
PackageName: python3-Unidecode
PackageRelease: 2.1
PackageVersion: 1.3.2
SHA-1: 1977A6446B98D545BF38DCC73D81282732B94951
SHA-256: 599B72E6189629B87538FA31A20BAF14B21CF185F258C5B02D9AF8263C048CB1

MD5: 9FF2AAE0A32A4B20AE34A6EDE859FEB5
PackageArch: noarch
PackageDescription: (identical to the description in the first package record above)
PackageName: python39-Unidecode
PackageRelease: 29.13
PackageVersion: 1.3.2
SHA-1: 1AD67DBA7353C38E7B262408BD8E356864D9E939
SHA-256: B2C2B5D100BA73E8AF539765180C78218241A56608B00E4D1CC7A3B8E215CC77

MD5: 3A29A1A4B91F06EEE31487C5F804EDAB
PackageArch: noarch
PackageDescription: (identical to the description in the first package record above)
PackageName: python39-Unidecode
PackageRelease: 28.4
PackageVersion: 1.3.1
SHA-1: 1D1C55856E2D574D590B99053302E6DB062B6299
SHA-256: 4DCDDB40CA3D32BE147BDBCA2E83B59519402AAF5DC74860AC3902596A5701E0

MD5: 9384579CD68FD4A247DA90D02D470948
PackageArch: noarch
PackageDescription: (identical to the description in the first package record above)
PackageName: python3-Unidecode
PackageRelease: 30.2
PackageVersion: 1.3.2
SHA-1: 1EE15ECA52B5BCFB93B368786F25A66E29E1DFB4
SHA-256: 26F4801852A73C126D8FE0EDDB832EA5443D9E9E28C304DA8011AB3237983EF0

MD5: 5EE2E9BC2CD927A29A69C4C16D3398FE
PackageArch: noarch
PackageDescription: (identical to the description in the first package record above)
PackageName: python38-Unidecode
PackageRelease: 29.12
PackageVersion: 1.3.2
SHA-1: 205290E9BC4304CCA798B975ED0FED0605701EBD
SHA-256: E5D9B6EFB8B64EDB96E1B0490F268D8E2F68612F811857F4A8628B1B27740C70