loongson/pypi/: charset-normalizer-2.0.4 metadata and description


The Real First Universal Charset Detector. Open, modern and actively maintained alternative to Chardet.

author Ahmed TAHRI @Ousret
author_email ahmed.tahri@cloudnursery.dev
  • License :: OSI Approved :: MIT License
  • Intended Audience :: Developers
  • Topic :: Software Development :: Libraries :: Python Modules
  • Operating System :: OS Independent
  • Programming Language :: Python
  • Programming Language :: Python :: 3
  • Programming Language :: Python :: 3.5
  • Programming Language :: Python :: 3.6
  • Programming Language :: Python :: 3.7
  • Programming Language :: Python :: 3.8
  • Programming Language :: Python :: 3.9
  • Programming Language :: Python :: 3.10
  • Topic :: Text Processing :: Linguistic
  • Topic :: Utilities
  • Programming Language :: Python :: Implementation :: PyPy
description_content_type text/markdown
keywords encoding,i18n,txt,text,charset,charset-detector,normalization,unicode,chardet
license MIT
provides_extras unicode_backport
  • unicodedata2 ; extra == 'unicode_backport'
requires_python >=3.5.0

Because this project isn't in the mirror_whitelist, no releases from root/pypi are included.

File: Python wheel (36 KB)

Charset Detection, for Everyone πŸ‘‹

The Real First Universal Charset Detector

A library that helps you read text from an unknown charset encoding.
Motivated by chardet, I'm trying to resolve the issue by taking a new approach. All IANA character set names for which the Python core library provides codecs are supported.
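For example, a minimal sketch of that idea using this package's from_bytes helper (the sample payload is fabricated for illustration):

from charset_normalizer import from_bytes

# A payload whose encoding we pretend not to know.
payload = 'Bonjour, tout le monde ! Ça va ?'.encode('cp1252')

best_guess = from_bytes(payload).best()
if best_guess is not None:
    print(best_guess.encoding)  # an encoding that decodes the payload sensibly
    print(str(best_guess))      # the content, decoded to Unicode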

>>>>> πŸ‘‰ Try Me Online Now, Then Adopt Me πŸ‘ˆ <<<<<

This project offers you an alternative to Universal Charset Encoding Detector, also known as Chardet.

| Feature | Chardet | Charset Normalizer | cChardet |
| ------- | ------- | ------------------ | -------- |
| Fast | ❌ | ✅ | ✅ |
| Universal** | ❌ | ✅ | ❌ |
| Reliable without distinguishable standards | ❌ | ✅ | ✅ |
| Reliable with distinguishable standards | ✅ | ✅ | ✅ |
| Free & Open | ✅ | ✅ | ✅ |
| License | LGPL-2.1 | MIT | MPL-1.1 |
| Native Python | ✅ | ✅ | ❌ |
| Detect spoken language | ❌ | ✅ | N/A |
| Supported Encoding | 30 | 🎉 93 | 40 |


** : They clearly use encoding-specific code, even if it covers most of the encodings in common use.

⚑ Performance

This package offers better performance than its counterpart, Chardet. Here are some numbers.

| Package | Accuracy | Mean per file (ms) | File per sec (est) |
| ------- | -------- | ------------------ | ------------------ |
| chardet | 93.0 % | 150 ms | 7 file/sec |
| charset-normalizer | 95.0 % | 36 ms | 28 file/sec |

| Package | 99th percentile | 95th percentile | 50th percentile |
| ------- | --------------- | --------------- | --------------- |
| chardet | 647 ms | 250 ms | 24 ms |
| charset-normalizer | 354 ms | 202 ms | 16 ms |

Chardet's performance on larger files (1 MB+) is very poor. Expect a huge difference on large payloads.

Stats are generated using 400+ files with default parameters. For more details on the files used, see the GHA workflows. And yes, these results might change at any time; the dataset can be updated to include more files.

cChardet is a faster, non-native (C++ binding) alternative. If speed is the most important factor, you should try it.

Your support

Please ⭐ this repository if this project helped you!

✨ Installation

Using PyPI for the latest stable release

pip install charset-normalizer

Or directly from dev-master for the latest preview

pip install git+https://github.com/Ousret/charset_normalizer.git

If you want a more up-to-date unicodedata database than the one available in your Python setup:

pip install charset-normalizer[unicode_backport]
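To see which Unicode database version your interpreter currently bundles, a quick standard-library check:

import unicodedata

print(unicodedata.unidata_version)  # e.g. "13.0.0" on Python 3.9

The unicodedata2 package pulled in by this extra exposes the same interface, built against a more recent Unicode database.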

πŸš€ Basic Usage


This package comes with a CLI.

usage: normalizer [-h] [-v] [-a] [-n] [-m] [-r] [-f] [-t THRESHOLD]
                  file [file ...]

The Real First Universal Charset Detector. Discover originating encoding used
on text file. Normalize text to unicode.

positional arguments:
  files                 File(s) to be analysed

optional arguments:
  -h, --help            show this help message and exit
  -v, --verbose         Display complementary information about file if any.
                        Stdout will contain logs about the detection process.
  -a, --with-alternative
                        Output complementary possibilities if any. Top-level
                        JSON WILL be a list.
  -n, --normalize       Permit to normalize input file. If not set, program
                        does not write anything.
  -m, --minimal         Only output the charset detected to STDOUT. Disabling
                        JSON output.
  -r, --replace         Replace file when trying to normalize it instead of
                        creating a new one.
  -f, --force           Replace file without asking if you are sure, use this
                        flag with caution.
  -t THRESHOLD, --threshold THRESHOLD
                        Define a custom maximum amount of chaos allowed in
                        decoded content. 0. <= chaos <= 1.
  --version             Show version information and exit.
normalizer ./data/sample.1.fr.srt

🎉 Since version 1.4.0 the CLI produces an easily usable stdout result in JSON format.

    "path": "/home/default/projects/charset_normalizer/data/sample.1.fr.srt",
    "encoding": "cp1252",
    "encoding_aliases": [
    "alternative_encodings": [
    "language": "French",
    "alphabets": [
        "Basic Latin",
        "Latin-1 Supplement"
    "has_sig_or_bom": false,
    "chaos": 0.149,
    "coherence": 97.152,
    "unicode_path": null,
    "is_preferred": true


Just print out normalized text

from charset_normalizer import from_path

results = from_path('./my_subtitle.srt')

print(str(results.best()))
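Building on that, a short sketch of inspecting the winning match, assuming the 2.0 object API (best(), .encoding, .language):

from charset_normalizer import from_path

results = from_path('./my_subtitle.srt')  # hypothetical file path
best_guess = results.best()  # most plausible match, or None if nothing fit

if best_guess is None:
    print('No suitable encoding found.')
else:
    print(best_guess.encoding)  # e.g. "cp1252"
    print(best_guess.language)  # e.g. "French"
    print(str(best_guess))      # decoded Unicode content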


Normalize any text file

from charset_normalizer import normalize

try:
    normalize('./my_subtitle.srt')  # should write to disk my_subtitle-***.srt
except IOError as e:
    print('Sadly, we are unable to perform charset normalization.', str(e))

Upgrade your code without effort

from charset_normalizer import detect

The above code will behave the same as chardet. We ensure that we offer the best (reasonable) backward-compatible result possible.
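For instance, a short sketch of the drop-in usage (the file path is hypothetical); detect returns a chardet-style dict rather than an object:

from charset_normalizer import detect

with open('./my_subtitle.srt', 'rb') as fp:
    payload = fp.read()

result = detect(payload)
print(result['encoding'], result['confidence'])  # keys mirror chardet's output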

See the docs for advanced usage: readthedocs.io

πŸ˜‡ Why

When I started using Chardet, I noticed it did not meet my expectations, and I wanted to propose a reliable alternative using a completely different method. Also, I never back down from a good challenge!

I don't care about the originating charset encoding, because two different tables can produce two identical files. What I want is to get readable text, the best I can.

In a way, I'm brute forcing text decoding. How cool is that? 😎

Don't confuse the ftfy package with charset-normalizer or chardet. ftfy's goal is to repair broken Unicode strings, whereas charset-normalizer converts raw files in unknown encodings to Unicode.

🍰 How

Wait a minute, what are chaos/mess and coherence according to YOU?

Chaos: I opened hundreds of text files, written by humans, with the wrong encoding table. I observed, then established some ground rules about what is obvious when it looks like a mess. I know that my interpretation of what is chaotic is very subjective; feel free to contribute in order to improve or rewrite it.

Coherence: For each language there is on earth, we have computed ranked letter-appearance occurrences (the best we can). So I figured that intel is worth something here, and I use those records against decoded text to check whether I can detect intelligent design.
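As a rough illustration only, not the library's actual implementation: the coherence idea boils down to comparing the letter-frequency ranking of decoded text against a precomputed ranking for a candidate language. The reference ranking below is a made-up stand-in:

from collections import Counter

# Hypothetical reference: most frequent letters for French, best first.
FRENCH_LETTER_RANK = "esaitnrulodcpmévqfbghjàxèyêzçôùâûîwkïë"

def coherence_ratio(decoded_text, reference_rank=FRENCH_LETTER_RANK, top_n=10):
    """Fraction of the text's top-N letters also found in the language's
    top-N letters; a crude stand-in for the real coherence metric."""
    letters = [c for c in decoded_text.lower() if c.isalpha()]
    if not letters:
        return 0.0
    most_common = [letter for letter, _ in Counter(letters).most_common(top_n)]
    reference_top = set(reference_rank[:top_n])
    return sum(1 for letter in most_common if letter in reference_top) / len(most_common)

print(coherence_ratio("Le texte est lisible et semble naturel."))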

⚑ Known limitations

πŸ‘€ Contributing

Contributions, issues and feature requests are very much welcome.
Feel free to check the issues page if you want to contribute.

πŸ“ License

Copyright Β© 2019 Ahmed TAHRI @Ousret.
This project is MIT licensed.

Character frequencies used in this project © 2012 Denny Vrandečić