loongson/pypi/: evaluate-0.4.0 metadata and description


HuggingFace community-driven open-source library of evaluation

author HuggingFace Inc.
author_email leandro@huggingface.co
classifiers
  • Development Status :: 5 - Production/Stable
  • Intended Audience :: Developers
  • Intended Audience :: Education
  • Intended Audience :: Science/Research
  • License :: OSI Approved :: Apache Software License
  • Operating System :: OS Independent
  • Programming Language :: Python :: 3
  • Programming Language :: Python :: 3.7
  • Programming Language :: Python :: 3.8
  • Programming Language :: Python :: 3.9
  • Programming Language :: Python :: 3.10
  • Topic :: Scientific/Engineering :: Artificial Intelligence
description_content_type text/markdown
download_url https://github.com/huggingface/evaluate/tags
keywords metrics machine learning evaluate evaluation
license Apache 2.0
platform
  • UNKNOWN
provides_extras torch
requires_dist
  • datasets (>=2.0.0)
  • numpy (>=1.17)
  • dill
  • pandas
  • requests (>=2.19.0)
  • tqdm (>=4.62.1)
  • xxhash
  • multiprocess
  • fsspec[http] (>=2021.05.0)
  • huggingface-hub (>=0.7.0)
  • packaging
  • responses (<0.19)
  • importlib-metadata ; python_version < "3.8"
  • absl-py ; extra == 'dev'
  • charcut (>=1.1.1) ; extra == 'dev'
  • cer (>=1.2.0) ; extra == 'dev'
  • nltk ; extra == 'dev'
  • pytest ; extra == 'dev'
  • pytest-datadir ; extra == 'dev'
  • pytest-xdist ; extra == 'dev'
  • tensorflow (!=2.6.0,!=2.6.1,<=2.10,>=2.3) ; extra == 'dev'
  • torch ; extra == 'dev'
  • bert-score (>=0.3.6) ; extra == 'dev'
  • rouge-score (>=0.1.2) ; extra == 'dev'
  • sacrebleu ; extra == 'dev'
  • sacremoses ; extra == 'dev'
  • scipy ; extra == 'dev'
  • seqeval ; extra == 'dev'
  • scikit-learn ; extra == 'dev'
  • jiwer ; extra == 'dev'
  • sentencepiece ; extra == 'dev'
  • transformers ; extra == 'dev'
  • mauve-text ; extra == 'dev'
  • trectools ; extra == 'dev'
  • toml (>=0.10.1) ; extra == 'dev'
  • requests-file (>=1.5.1) ; extra == 'dev'
  • tldextract (>=3.1.0) ; extra == 'dev'
  • texttable (>=1.6.3) ; extra == 'dev'
  • unidecode (>=1.3.4) ; extra == 'dev'
  • Werkzeug (>=1.0.1) ; extra == 'dev'
  • six (~=1.15.0) ; extra == 'dev'
  • black (~=22.0) ; extra == 'dev'
  • flake8 (>=3.8.3) ; extra == 'dev'
  • isort (>=5.0.0) ; extra == 'dev'
  • pyyaml (>=5.3.1) ; extra == 'dev'
  • s3fs ; extra == 'docs'
  • transformers ; extra == 'evaluator'
  • scipy (>=1.7.1) ; extra == 'evaluator'
  • black (~=22.0) ; extra == 'quality'
  • flake8 (>=3.8.3) ; extra == 'quality'
  • isort (>=5.0.0) ; extra == 'quality'
  • pyyaml (>=5.3.1) ; extra == 'quality'
  • cookiecutter ; extra == 'template'
  • gradio (>=3.0.0) ; extra == 'template'
  • tensorflow (!=2.6.0,!=2.6.1,>=2.2.0) ; extra == 'tensorflow'
  • tensorflow-gpu (!=2.6.0,!=2.6.1,>=2.2.0) ; extra == 'tensorflow_gpu'
  • absl-py ; extra == 'tests'
  • charcut (>=1.1.1) ; extra == 'tests'
  • cer (>=1.2.0) ; extra == 'tests'
  • nltk ; extra == 'tests'
  • pytest ; extra == 'tests'
  • pytest-datadir ; extra == 'tests'
  • pytest-xdist ; extra == 'tests'
  • tensorflow (!=2.6.0,!=2.6.1,<=2.10,>=2.3) ; extra == 'tests'
  • torch ; extra == 'tests'
  • bert-score (>=0.3.6) ; extra == 'tests'
  • rouge-score (>=0.1.2) ; extra == 'tests'
  • sacrebleu ; extra == 'tests'
  • sacremoses ; extra == 'tests'
  • scipy ; extra == 'tests'
  • seqeval ; extra == 'tests'
  • scikit-learn ; extra == 'tests'
  • jiwer ; extra == 'tests'
  • sentencepiece ; extra == 'tests'
  • transformers ; extra == 'tests'
  • mauve-text ; extra == 'tests'
  • trectools ; extra == 'tests'
  • toml (>=0.10.1) ; extra == 'tests'
  • requests-file (>=1.5.1) ; extra == 'tests'
  • tldextract (>=3.1.0) ; extra == 'tests'
  • texttable (>=1.6.3) ; extra == 'tests'
  • unidecode (>=1.3.4) ; extra == 'tests'
  • Werkzeug (>=1.0.1) ; extra == 'tests'
  • six (~=1.15.0) ; extra == 'tests'
  • torch ; extra == 'torch'
requires_python >=3.7.0
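
The extra == '...' markers in requires_dist above map to pip extras: installing an extra pulls in the dependencies guarded by that marker. For example, going by the markers listed here:

pip install "evaluate[evaluator]"   # also installs transformers and scipy (>=1.7.1)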

Because this project isn't in the mirror_whitelist, no releases from root/pypi are included.
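
If the package should be served through this index, devpi allows whitelisting it per index; a hedged sketch, assuming admin rights on the loongson/pypi index and the devpi-client list-append syntax:

devpi index loongson/pypi "mirror_whitelist+=evaluate"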

File: evaluate-0.4.0-py3-none-any.whl
Size: 80 KB
Type: Python Wheel
Python: 3




🤗 Evaluate is a library that makes evaluating and comparing models and reporting their performance easier and more standardized.

It currently contains:

  • implementations of dozens of popular metrics: these cover a range of tasks from NLP to Computer Vision and include dataset-specific metrics. Each can be loaded with a single line, e.g. accuracy = evaluate.load("accuracy"), and used with models from any framework (NumPy/Pandas/PyTorch/TensorFlow/JAX).
  • comparisons and measurements: comparisons measure the difference between models, and measurements are tools to evaluate datasets.
  • an easy way of adding new evaluation modules to the 🤗 Hub: new modules can be created and pushed to a dedicated Space on the Hub with evaluate-cli create [metric name].

🎓 Documentation

🔎 Find a metric, comparison, measurement on the Hub

🌟 Add a new evaluation module

🤗 Evaluate also has lots of useful features like:

  • Type checking: input types are checked to make sure you are using the right input format for each metric.
  • Metric cards: each metric comes with a card that describes its values, limitations, and ranges, along with examples of usage.
  • Community metrics: metrics live on the Hugging Face Hub, so you can easily add your own or collaborate with others.

Installation

With pip

🤗 Evaluate can be installed from PyPI and should be installed in a virtual environment (venv or conda, for instance):

pip install evaluate
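
You can check that the install worked by importing the package and printing its version:

python -c "import evaluate; print(evaluate.__version__)"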

Usage

🤗 Evaluate's main methods are:

  • evaluate.list_evaluation_modules() to list the available metrics, comparisons, and measurements
  • evaluate.load(module_name, **kwargs) to instantiate an evaluation module
  • results = module.compute(*kwargs) to compute the result of an evaluation module
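
A minimal end-to-end example tying these together ("accuracy" is a real module on the Hub; the predictions and references are illustrative values):

import evaluate

# Show a few of the evaluation modules available on the Hub
print(evaluate.list_evaluation_modules()[:5])

# Load the accuracy metric and compute a score
accuracy = evaluate.load("accuracy")
results = accuracy.compute(predictions=[0, 1, 1, 0], references=[0, 1, 0, 0])
print(results)  # {'accuracy': 0.75}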

Adding a new evaluation module

First, install the necessary dependencies for creating a new metric with the following command:

pip install evaluate[template]

Then you can get started with the following command, which will create a new folder for your metric and display the necessary steps:

evaluate-cli create "Awesome Metric"

See this step-by-step guide in the documentation for detailed instructions.
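
For orientation, an evaluation module is a subclass of evaluate.Metric that implements _info() and _compute(). The sketch below is not the exact template output, just a minimal illustration of that interface (the class name and scoring logic are made up):

import datasets
import evaluate

class AwesomeMetric(evaluate.Metric):
    def _info(self):
        # Declares the module's description and expected input types
        return evaluate.MetricInfo(
            description="Toy metric: fraction of exact matches.",
            citation="",
            inputs_description="predictions and references as lists of integers",
            features=datasets.Features(
                {
                    "predictions": datasets.Value("int64"),
                    "references": datasets.Value("int64"),
                }
            ),
        )

    def _compute(self, predictions, references):
        # Returns a dict of named scores
        matches = sum(p == r for p, r in zip(predictions, references))
        return {"exact_match": matches / len(references)}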

Credits

Thanks to @marella for letting us use the evaluate namespace on PyPI, previously used by his library.