Key | Value |
---|---|
FileName | usr/lib/python3.12/site-packages/unidecode/x05d.py |
FileSize | 4660 |
MD5 | CEF9212EA61D03A37A88BBD262FDCC81 |
RDS:package_id | 302124 |
SHA-1 | 0062A3846BB95867DBDE6DD8D202D761FC0A5653 |
SHA-256 | 5AE75B6FBC4EB56AF2E17BB9C66F63F34BC592280279D1AAE6E1DC600625D28F |
SHA-512 | B8A03DFE54B6F1DF24B4CEEE3F04A641E6CC65662E52EEF31C7E3BD6F143413501F5E9CB3FCB0697F20C7DD9E3910FDD8948E44469ED613D3B398B68C1CA6ECE |
SSDEEP | 96:KUjkc6iacokkebBiq2Vgrhkf2Z8E42EUjrm:/jp6ifoQiq22hkf2Z8Epdjq |
TLSH | T1A0A1D6B46A9622CC451EFEA2D214DEE3EC9780530BF0806D7EFDD824675E84DCB79498 |
insert-timestamp | 1728991461.2075975 |
mimetype | text/plain |
source | snap:MZhNqq04Y1yzup4ZCsVhgBeWKdlvtBsL_39 |
tar:gname | root |
tar:uname | root |
hashlookup:parent-total | 90 |
hashlookup:trust | 100 |

The searched file hash is included in 90 parent files, which correspond to packages known to and seen by metalookup. A sample is included below:
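Records of this shape can be retrieved programmatically. Assuming the CIRCL hashlookup HTTP API (the endpoint URL and exact response fields below are assumptions, not taken from this record), a lookup by digest is a single GET returning JSON with fields such as `hashlookup:parent-total` and a `parents` sample:

```python
import json
import urllib.request

# Assumed service endpoint; adjust for your hashlookup instance.
HASHLOOKUP_BASE = "https://hashlookup.circl.lu/lookup"


def lookup_url(algorithm: str, digest: str) -> str:
    """Build the lookup URL, e.g. .../lookup/sha256/<hex digest>."""
    algo = algorithm.lower().replace("-", "")  # "SHA-256" -> "sha256"
    return f"{HASHLOOKUP_BASE}/{algo}/{digest.lower()}"


def lookup(algorithm: str, digest: str) -> dict:
    """Query the service and return the decoded JSON record."""
    with urllib.request.urlopen(lookup_url(algorithm, digest)) as resp:
        return json.load(resp)


# Usage (network call, digest from the file record above):
# record = lookup("SHA-256",
#                 "5AE75B6FBC4EB56AF2E17BB9C66F63F34BC592280279D1AAE6E1DC600625D28F")
# record.get("hashlookup:parent-total")  # expected to report 90 here
```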

Key | Value |
---|---|
FileName | http://dl-cdn.alpinelinux.org/alpine/latest-stable//community//riscv64//py3-unidecode-1.3.8-r1.apk |
MD5 | 6B75C66BA213AC60E33957BF9EC4B540 |
SHA-1 | 050B80C70DC1220B351CCB80FC2C044B92C9E553 |
SHA-256 | B8C2D03D018D2E97453777F7360B4A9AD704AE93D1325C5970CD87211744016E |
SSDEEP | 6144:aCW1c0togHGW/7RZz1JU9aCfhbKGxVV5jmr:QCuoM/chpeG9Ur |
TLSH | T1DC24125328CDA5DC84ECAE8B5A4946B48670932085B797716FCC8732F4FD4E8EB2D274 |

Key | Value |
---|---|
MD5 | 8063F9D4F68A648D0E0E74333B0C9EAE |
PackageArch | noarch |
PackageDescription | It often happens that you have text data in Unicode, but you need to represent it in ASCII. For example when integrating with legacy code that doesn't support Unicode, or for ease of entry of non-Roman names on a US keyboard, or when constructing ASCII machine identifiers from human-readable Unicode strings that should still be somewhat intelligible (a popular example of this is when making an URL slug from an article title). In most of these examples you could represent Unicode characters as "???" or "\\15BA\\15A0\\1610", to mention two extreme cases. But that's nearly useless to someone who actually wants to read what the text says. What Unidecode provides is a middle road: function unidecode() takes Unicode data and tries to represent it in ASCII characters (i.e., the universally displayable characters between 0x00 and 0x7F), where the compromises taken when mapping between two character sets are chosen to be near what a human with a US keyboard would choose. The quality of resulting ASCII representation varies. For languages of western origin it should be between perfect and good. On the other hand transliteration (i.e., conveying, in Roman letters, the pronunciation expressed by the text in some other writing system) of languages like Chinese, Japanese or Korean is a very complex issue and this library does not even attempt to address it. It draws the line at context-free character-by-character mapping. So a good rule of thumb is that the further the script you are transliterating is from Latin alphabet, the worse the transliteration will be. Note that this module generally produces better results than simply stripping accents from characters (which can be done in Python with built-in functions). It is based on hand-tuned character mappings that for example also contain ASCII approximations for symbols and non-Latin alphabets. This is a Python port of Text::Unidecode Perl module by Sean M. Burke <sburke@cpan.org>. |
PackageName | python3-Unidecode |
PackageRelease | 2.1 |
PackageVersion | 1.3.2 |
SHA-1 | 110E9F2067C942D7B7EA0E9F292754B58C70B2A9 |
SHA-256 | A3FF76DDB5AAB7D5F3B28679671BF58337598545C8BFA7EFE29E1850543B93ED |
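The package description above contrasts Unidecode's hand-tuned mappings with the simpler built-in approach of stripping accents. That baseline can be sketched with the standard `unicodedata` module (this is the naive approach the description mentions, not the Unidecode implementation):

```python
import unicodedata


def strip_accents(text: str) -> str:
    """Naive ASCII folding: decompose characters (NFKD), then drop
    anything (combining marks, unmapped scripts) that is not ASCII."""
    decomposed = unicodedata.normalize("NFKD", text)
    return decomposed.encode("ascii", "ignore").decode("ascii")


# strip_accents("Café déjà vu") -> "Cafe deja vu"
# Unlike unidecode.unidecode(), characters with no decomposition are
# dropped entirely rather than transliterated, e.g. "北京" becomes "".
```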

Key | Value |
---|---|
FileName | http://dl-cdn.alpinelinux.org/alpine/latest-stable//community//armhf//py3-unidecode-1.3.6-r3.apk |
MD5 | BAEFBB529A34C3E77E07B95D15D098F2 |
SHA-1 | 1302CC20FBF390A3DB270B896B903293ECC95C7E |
SHA-256 | A059899D21EDDD6E8FD17568CDF0D1B419272A63DA43758A9294526A38AAEC4D |
SSDEEP | 3072:PVsXjAgsSZLOW0tjAjIBlcF3Xe6yXmlAz03Wj+Vbwh+uQ4EEM+SJFDVAHQZfLObG:Pkj5sgqW0xlUFAz4iCo22m6qNROdk |
TLSH | T1171423D3A8D0EECADBBA659217C1D04FB1C6F490BDE7647081DC599E28C3E9872F114A |

Key | Value |
---|---|
FileName | http://dl-cdn.alpinelinux.org/alpine/latest-stable//community//armhf//py3-unidecode-1.3.8-r1.apk |
MD5 | 8DAF56F171AC1C74534285467EEED61E |
SHA-1 | 148847D7E27586BF62DFDAA1B4F3F9AEF363FA0B |
SHA-256 | D390238CB8C9186ED2233EF8E2EB22C720B6D636940936CC89AA6700C79B0890 |
SSDEEP | 6144:KCW1c0togHGW/7RZz1JU9aCfhbKGxVV5jmr:ACuoM/chpeG9Ur |
TLSH | T1F624135728CDA5CC84ECA98B5A4946F48A709310C9A797716FCC8636F8FD4F8E62C264 |

Key | Value |
---|---|
MD5 | D74D8A963EFA5C0BAAF8A260913D0B9D |
PackageArch | noarch |
PackageDescription | (identical to the PackageDescription in the first package record above) |
PackageMaintainer | https://www.suse.com/ |
PackageName | python3-Unidecode |
PackageRelease | lp154.29.1 |
PackageVersion | 1.3.2 |
SHA-1 | 174BF5A90A33F37F702840FB99107344C485DBA9 |
SHA-256 | 26B978B9532C357313D452DC1C231FDD56A475AE41CB9B594293097DB031C8BB |

Key | Value |
---|---|
MD5 | B0258C9272884B847AD83B9E00DA6DA1 |
PackageArch | noarch |
PackageDescription | (identical to the PackageDescription in the first package record above) |
PackageName | python3-Unidecode |
PackageRelease | 2.1 |
PackageVersion | 1.3.2 |
SHA-1 | 1977A6446B98D545BF38DCC73D81282732B94951 |
SHA-256 | 599B72E6189629B87538FA31A20BAF14B21CF185F258C5B02D9AF8263C048CB1 |

Key | Value |
---|---|
MD5 | 9FF2AAE0A32A4B20AE34A6EDE859FEB5 |
PackageArch | noarch |
PackageDescription | (identical to the PackageDescription in the first package record above) |
PackageName | python39-Unidecode |
PackageRelease | 29.13 |
PackageVersion | 1.3.2 |
SHA-1 | 1AD67DBA7353C38E7B262408BD8E356864D9E939 |
SHA-256 | B2C2B5D100BA73E8AF539765180C78218241A56608B00E4D1CC7A3B8E215CC77 |

Key | Value |
---|---|
MD5 | 3A29A1A4B91F06EEE31487C5F804EDAB |
PackageArch | noarch |
PackageDescription | (identical to the PackageDescription in the first package record above) |
PackageName | python39-Unidecode |
PackageRelease | 28.4 |
PackageVersion | 1.3.1 |
SHA-1 | 1D1C55856E2D574D590B99053302E6DB062B6299 |
SHA-256 | 4DCDDB40CA3D32BE147BDBCA2E83B59519402AAF5DC74860AC3902596A5701E0 |

Key | Value |
---|---|
MD5 | 9384579CD68FD4A247DA90D02D470948 |
PackageArch | noarch |
PackageDescription | (identical to the PackageDescription in the first package record above) |
PackageName | python3-Unidecode |
PackageRelease | 30.2 |
PackageVersion | 1.3.2 |
SHA-1 | 1EE15ECA52B5BCFB93B368786F25A66E29E1DFB4 |
SHA-256 | 26F4801852A73C126D8FE0EDDB832EA5443D9E9E28C304DA8011AB3237983EF0 |

Key | Value |
---|---|
MD5 | 5EE2E9BC2CD927A29A69C4C16D3398FE |
PackageArch | noarch |
PackageDescription | (identical to the PackageDescription in the first package record above) |
PackageName | python38-Unidecode |
PackageRelease | 29.12 |
PackageVersion | 1.3.2 |
SHA-1 | 205290E9BC4304CCA798B975ED0FED0605701EBD |
SHA-256 | E5D9B6EFB8B64EDB96E1B0490F268D8E2F68612F811857F4A8628B1B27740C70 |