Result for 04019992818C4CF7B38021B9EFE28ECC8429B80A

Query result

Key                      Value
FileName                 usr/lib/python3.12/site-packages/unidecode/x002.py
FileSize                 3871
MD5                      2B5F47F17E087D20CE76EA7D05EA0665
RDS:package_id           302124
SHA-1                    04019992818C4CF7B38021B9EFE28ECC8429B80A
SHA-256                  356A2B77C9B28B68D857863094D6456CABBA1E06DBA2558B3423A5B1E4775AC6
SHA-512                  BDBD11E76C76F42196F1C9B33B1C50B87012201D2125215C9F1CC9332406064D53F758674AAD6FB49A2D6DB19538577C3491D2CC060BBE87757BA6E2FC2F2563
SSDEEP                   48:N1WOCLIpt5saYwJGPN3neng1LYw8DZNem4k8az//ePZMEsFu:NKI6aYneg1LYwo0mRpyZMHu
TLSH                     T15881E234659A222EEB4A3F31EB51EC91628B82871DE4587EFADDE810FF0F64C98551C4
insert-timestamp         1728991461.8757503
mimetype                 text/plain
source                   snap:MZhNqq04Y1yzup4ZCsVhgBeWKdlvtBsL_39
tar:gname                root
tar:uname                root
hashlookup:parent-total  92
hashlookup:trust         100
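A record like this can be cross-checked locally by hashing the file with Python's standard library. A minimal sketch (the path in the comment is the FileName from the record and is only illustrative):

```python
import hashlib

def file_digests(path):
    """Compute MD5, SHA-1 and SHA-256 of a file in one pass,
    uppercased to match the record format above."""
    digests = {name: hashlib.new(name) for name in ("md5", "sha1", "sha256")}
    with open(path, "rb") as f:
        # Read in fixed-size chunks so large files don't load into memory.
        for chunk in iter(lambda: f.read(65536), b""):
            for d in digests.values():
                d.update(chunk)
    return {name: d.hexdigest().upper() for name, d in digests.items()}

# Illustrative use against the file named in the record:
# file_digests("/usr/lib/python3.12/site-packages/unidecode/x002.py")
```

Matching digests confirm the local file is byte-identical to the one indexed above.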


Parents (Total: 92)

The searched file hash is included in 92 parent files, which are packages known and seen by metalookup. A sample is included below:
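A lookup like this can be reproduced against the public CIRCL hashlookup service. A hedged sketch: the endpoint layout below is assumed from the service's public documentation, and the actual HTTP call is left commented out since it needs network access:

```python
import json
import urllib.request

HASHLOOKUP_BASE = "https://hashlookup.circl.lu"  # public CIRCL instance

def lookup_url(algo, digest):
    """Build the lookup URL for a given hash algorithm ("sha1",
    "sha256", "md5") and hex digest."""
    return f"{HASHLOOKUP_BASE}/lookup/{algo}/{digest.lower()}"

url = lookup_url("sha1", "04019992818C4CF7B38021B9EFE28ECC8429B80A")
# To actually perform the query:
# with urllib.request.urlopen(url) as resp:
#     record = json.load(resp)
#     print(record.get("hashlookup:parent-total"))
```

The response is a JSON object with the same keys shown in the tables here (FileName, SHA-256, hashlookup:parent-total, and so on).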

Key       Value
FileName  http://dl-cdn.alpinelinux.org/alpine/latest-stable//community//armhf//py3-unidecode-1.3.8-r1.apk
MD5       3A4433DF093E5DAE1A235CC03514A511
SHA-1     0348A1141DAB902C54A48C154405A45BED96475E
SHA-256   39A58AD0835CDF65047D368B650A921FFCA73B6163EEB4CCAC9F6144B3A70284
SSDEEP    6144:mEYRdZd8x4xHOoK3ZPrga6Sc6uP8JS5hHvpr:mZRdQC0ZjUnEY5hHl
TLSH      T1661413685630CFBCD9CC9976AB2EC314EB9D14C4A28273116FD7408A6826FF51E67D0D
Key       Value
FileName  http://dl-cdn.alpinelinux.org/alpine/latest-stable//community//riscv64//py3-unidecode-1.3.8-r1.apk
MD5       6B75C66BA213AC60E33957BF9EC4B540
SHA-1     050B80C70DC1220B351CCB80FC2C044B92C9E553
SHA-256   B8C2D03D018D2E97453777F7360B4A9AD704AE93D1325C5970CD87211744016E
SSDEEP    6144:aCW1c0togHGW/7RZz1JU9aCfhbKGxVV5jmr:QCuoM/chpeG9Ur
TLSH      T1DC24125328CDA5DC84ECAE8B5A4946B48670932085B797716FCC8732F4FD4E8EB2D274
Key                 Value
MD5                 8063F9D4F68A648D0E0E74333B0C9EAE
PackageArch         noarch
PackageDescription  It often happens that you have text data in Unicode, but you need to represent it in ASCII. For example when integrating with legacy code that doesn't support Unicode, or for ease of entry of non-Roman names on a US keyboard, or when constructing ASCII machine identifiers from human-readable Unicode strings that should still be somewhat intelligible (a popular example of this is when making an URL slug from an article title). In most of these examples you could represent Unicode characters as "???" or "\\15BA\\15A0\\1610", to mention two extreme cases. But that's nearly useless to someone who actually wants to read what the text says. What Unidecode provides is a middle road: function unidecode() takes Unicode data and tries to represent it in ASCII characters (i.e., the universally displayable characters between 0x00 and 0x7F), where the compromises taken when mapping between two character sets are chosen to be near what a human with a US keyboard would choose. The quality of resulting ASCII representation varies. For languages of western origin it should be between perfect and good. On the other hand transliteration (i.e., conveying, in Roman letters, the pronunciation expressed by the text in some other writing system) of languages like Chinese, Japanese or Korean is a very complex issue and this library does not even attempt to address it. It draws the line at context-free character-by-character mapping. So a good rule of thumb is that the further the script you are transliterating is from Latin alphabet, the worse the transliteration will be. Note that this module generally produces better results than simply stripping accents from characters (which can be done in Python with built-in functions). It is based on hand-tuned character mappings that for example also contain ASCII approximations for symbols and non-Latin alphabets. This is a Python port of Text::Unidecode Perl module by Sean M. Burke <sburke@cpan.org>.
PackageName         python3-Unidecode
PackageRelease      2.1
PackageVersion      1.3.2
SHA-1               10E9F2067C942D7B7EA0E9F292754B58C70B2A9
SHA-256             A3FF76DDB5AAB7D5F3B28679671BF58337598545C8BFA7EFE29E1850543B93ED
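The PackageDescription above contrasts Unidecode's hand-tuned mappings with simply stripping accents "with built-in functions". That built-in approach can be sketched with the standard library alone:

```python
import unicodedata

def strip_accents(text):
    """Naive ASCII folding: NFKD-decompose, then drop combining marks.
    Works for Latin accents, but characters with no decomposition
    (e.g. CJK, or letters like the Danish o-slash) pass through
    unchanged instead of being transliterated as unidecode() would."""
    decomposed = unicodedata.normalize("NFKD", text)
    return "".join(c for c in decomposed if not unicodedata.combining(c))

print(strip_accents("café"))       # -> "cafe"
print(strip_accents("Unidecode"))  # plain ASCII passes through unchanged
```

This is the baseline the package claims to improve on: Unidecode's mapping tables also cover symbols and non-Latin alphabets that NFKD stripping leaves untouched.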
Key       Value
FileName  http://dl-cdn.alpinelinux.org/alpine/latest-stable//community//armhf//py3-unidecode-1.3.6-r3.apk
MD5       BAEFBB529A34C3E77E07B95D15D098F2
SHA-1     1302CC20FBF390A3DB270B896B903293ECC95C7E
SHA-256   A059899D21EDDD6E8FD17568CDF0D1B419272A63DA43758A9294526A38AAEC4D
SSDEEP    3072:PVsXjAgsSZLOW0tjAjIBlcF3Xe6yXmlAz03Wj+Vbwh+uQ4EEM+SJFDVAHQZfLObG:Pkj5sgqW0xlUFAz4iCo22m6qNROdk
TLSH      T1171423D3A8D0EECADBBA659217C1D04FB1C6F490BDE7647081DC599E28C3E9872F114A
Key       Value
FileName  http://dl-cdn.alpinelinux.org/alpine/latest-stable//community//armhf//py3-unidecode-1.3.8-r1.apk
MD5       8DAF56F171AC1C74534285467EEED61E
SHA-1     148847D7E27586BF62DFDAA1B4F3F9AEF363FA0B
SHA-256   D390238CB8C9186ED2233EF8E2EB22C720B6D636940936CC89AA6700C79B0890
SSDEEP    6144:KCW1c0togHGW/7RZz1JU9aCfhbKGxVV5jmr:ACuoM/chpeG9Ur
TLSH      T1F624135728CDA5CC84ECA98B5A4946F48A709310C9A797716FCC8636F8FD4F8E62C264
Key                 Value
MD5                 D74D8A963EFA5C0BAAF8A260913D0B9D
PackageArch         noarch
PackageDescription  (identical to the description quoted above)
PackageMaintainer   https://www.suse.com/
PackageName         python3-Unidecode
PackageRelease      lp154.29.1
PackageVersion      1.3.2
SHA-1               174BF5A90A33F37F702840FB99107344C485DBA9
SHA-256             26B978B9532C357313D452DC1C231FDD56A475AE41CB9B594293097DB031C8BB
Key                 Value
MD5                 B0258C9272884B847AD83B9E00DA6DA1
PackageArch         noarch
PackageDescription  (identical to the description quoted above)
PackageName         python3-Unidecode
PackageRelease      2.1
PackageVersion      1.3.2
SHA-1               1977A6446B98D545BF38DCC73D81282732B94951
SHA-256             599B72E6189629B87538FA31A20BAF14B21CF185F258C5B02D9AF8263C048CB1
Key                 Value
MD5                 9FF2AAE0A32A4B20AE34A6EDE859FEB5
PackageArch         noarch
PackageDescription  (identical to the description quoted above)
PackageName         python39-Unidecode
PackageRelease      29.13
PackageVersion      1.3.2
SHA-1               1AD67DBA7353C38E7B262408BD8E356864D9E939
SHA-256             B2C2B5D100BA73E8AF539765180C78218241A56608B00E4D1CC7A3B8E215CC77
Key                 Value
MD5                 3A29A1A4B91F06EEE31487C5F804EDAB
PackageArch         noarch
PackageDescription  (identical to the description quoted above)
PackageName         python39-Unidecode
PackageRelease      28.4
PackageVersion      1.3.1
SHA-1               1D1C55856E2D574D590B99053302E6DB062B6299
SHA-256             4DCDDB40CA3D32BE147BDBCA2E83B59519402AAF5DC74860AC3902596A5701E0
Key                 Value
MD5                 9384579CD68FD4A247DA90D02D470948
PackageArch         noarch
PackageDescription  (identical to the description quoted above)
PackageName         python3-Unidecode
PackageRelease      30.2
PackageVersion      1.3.2
SHA-1               1EE15ECA52B5BCFB93B368786F25A66E29E1DFB4
SHA-256             26F4801852A73C126D8FE0EDDB832EA5443D9E9E28C304DA8011AB3237983EF0