loongson/pypi/: seqeval-1.0.0 metadata and description


Testing framework for sequence labeling

author Hironsan
author_email hiroki.nakayama.py@gmail.com
classifiers
  • License :: OSI Approved :: MIT License
  • Programming Language :: Python
  • Programming Language :: Python :: 2.6
  • Programming Language :: Python :: 2.7
  • Programming Language :: Python :: 3
  • Programming Language :: Python :: 3.3
  • Programming Language :: Python :: 3.4
  • Programming Language :: Python :: 3.5
  • Programming Language :: Python :: 3.6
  • Programming Language :: Python :: Implementation :: CPython
  • Programming Language :: Python :: Implementation :: PyPy
description_content_type text/markdown
license MIT
platform
  • UNKNOWN
provides_extras gpu
requires_dist
  • numpy (==1.19.2)
  • scikit-learn (==0.23.2)

Because this project isn't in the mirror_whitelist, no releases from root/pypi are included.

Files

seqeval-1.0.0-py3-none-any.whl
  • Size: 14 KB
  • Type: Python Wheel
  • Python: 3

seqeval

seqeval is a Python framework for sequence labeling evaluation. It can evaluate the performance of chunking tasks such as named-entity recognition, part-of-speech tagging, and semantic role labeling.

seqeval is well tested against the Perl script conlleval, which can be used to measure the performance of a system that has processed the CoNLL-2000 shared task data.

Supported features

seqeval supports the following annotation schemes:

  • IOB1
  • IOB2
  • IOE1
  • IOE2
  • IOBES (strict mode only)
  • BILOU (strict mode only)

and the following metrics:

  • accuracy_score(y_true, y_pred): Compute the accuracy.
  • precision_score(y_true, y_pred): Compute the precision.
  • recall_score(y_true, y_pred): Compute the recall.
  • f1_score(y_true, y_pred): Compute the F1 score, also known as the balanced F-score or F-measure.
  • classification_report(y_true, y_pred, digits=2): Build a text report showing the main classification metrics. digits sets the number of digits used when formatting floating-point values (default 2).
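For instance, to widen the formatting, pass a larger digits value (a minimal sketch; the tag lists here are invented for illustration):

from seqeval.metrics import classification_report

y_true = [['B-PER', 'I-PER', 'O']]
y_pred = [['B-PER', 'I-PER', 'O']]

# Same report layout as in the Usage section below,
# but scores formatted with four decimal places.
print(classification_report(y_true, y_pred, digits=4))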

Usage

Behold, the power of seqeval:

>>> from seqeval.metrics import accuracy_score
>>> from seqeval.metrics import classification_report
>>> from seqeval.metrics import f1_score
>>> 
>>> y_true = [['O', 'O', 'O', 'B-MISC', 'I-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']]
>>> y_pred = [['O', 'O', 'B-MISC', 'I-MISC', 'I-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']]
>>>
>>> f1_score(y_true, y_pred)
0.5
>>> accuracy_score(y_true, y_pred)
0.8
>>> classification_report(y_true, y_pred)
              precision    recall  f1-score   support

        MISC       0.00      0.00      0.00         1
         PER       1.00      1.00      1.00         1

   micro avg       0.50      0.50      0.50         2
   macro avg       0.50      0.50      0.50         2
weighted avg       0.50      0.50      0.50         2

Note that evaluation is at the entity level: the predicted MISC span covers tokens 2-5 while the true span covers tokens 3-5, so MISC scores 0.00 despite the large token overlap.

If you want to explicitly specify the evaluation scheme, use mode='strict' together with a scheme such as IOB2:

>>> from seqeval.scheme import IOB2
>>> classification_report(y_true, y_pred, mode='strict', scheme=IOB2)
              precision    recall  f1-score   support

        MISC       0.00      0.00      0.00         1
         PER       1.00      1.00      1.00         1

   micro avg       0.50      0.50      0.50         2
   macro avg       0.50      0.50      0.50         2
weighted avg       0.50      0.50      0.50         2

Note: the behavior of strict mode differs from the default mode, which is designed to simulate conlleval.
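As a hedged sketch of that difference (the tag sequences are invented, and the expected scores in the comments reflect my reading of the two modes, so they are worth verifying against your installed version): a chunk that opens with an I- tag is still counted as an entity by the lenient, conlleval-style default, but is treated as invalid under strict IOB2.

from seqeval.metrics import f1_score
from seqeval.scheme import IOB2

# The predicted PER chunk starts with I-PER instead of B-PER.
y_true = [['B-PER', 'I-PER']]
y_pred = [['I-PER', 'I-PER']]

print(f1_score(y_true, y_pred))                              # expected: 1.0 (default credits the chunk)
print(f1_score(y_true, y_pred, mode='strict', scheme=IOB2))  # expected: 0.0 (strict IOB2 rejects it)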

Installation

To install seqeval, simply run:

$ pip install seqeval
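The metadata above also lists a gpu extra; assuming the standard pip extras syntax applies to it, the optional GPU dependencies can be pulled in with:

$ pip install seqeval[gpu]

(In zsh, the brackets may need quoting: pip install 'seqeval[gpu]'.)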