seqeval-1.0.0 metadata and description
Testing framework for sequence labeling
field | value |
---|---|
author | Hironsan |
author_email | hiroki.nakayama.py@gmail.com |
classifiers | |
description_content_type | text/markdown |
license | MIT |
platform | |
provides_extras | gpu |
requires_dist | |
Because this project isn't in the mirror_whitelist, no releases from root/pypi are included.
File | Tox results | History |
---|---|---|
seqeval-1.0.0-py3-none-any.whl | | |
seqeval
seqeval is a Python framework for sequence labeling evaluation. It can evaluate the performance of chunking tasks such as named-entity recognition, part-of-speech tagging, and semantic role labeling.
seqeval is well-tested against the Perl script conlleval, which can be used to measure the performance of a system that has processed the CoNLL-2000 shared task data.
Supported features
seqeval supports the following formats:
- IOB1
- IOB2
- IOE1
- IOE2
- IOBES
and the following metrics:
metrics | description |
---|---|
accuracy_score(y_true, y_pred) | Compute the accuracy. |
precision_score(y_true, y_pred) | Compute the precision. |
recall_score(y_true, y_pred) | Compute the recall. |
f1_score(y_true, y_pred) | Compute the F1 score, also known as balanced F-score or F-measure. |
classification_report(y_true, y_pred, digits=2) | Build a text report showing the main classification metrics. digits is the number of digits used to format floating point output values; the default is 2. |
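For intuition about the tagging formats listed above, here is the same two-entity sentence hand-tagged under two of the schemes (an illustration, not seqeval output):

```python
# The sentence "Alex lives in New York" tagged under two schemes.
tokens = ["Alex", "lives", "in", "New", "York"]

# IOB2: every entity begins with B-; continuation tokens use I-.
iob2 = ["B-PER", "O", "O", "B-LOC", "I-LOC"]

# IOBES: single-token entities use S-; multi-token entities start
# with B-, continue with I-, and end with E-.
iobes = ["S-PER", "O", "O", "B-LOC", "E-LOC"]

assert len(tokens) == len(iob2) == len(iobes)
```

Both sequences encode the same two entities (a PER and a LOC); only the tag vocabulary differs.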
Usage
Behold, the power of seqeval:
```python
>>> from seqeval.metrics import accuracy_score
>>> from seqeval.metrics import classification_report
>>> from seqeval.metrics import f1_score
>>>
>>> y_true = [['O', 'O', 'O', 'B-MISC', 'I-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']]
>>> y_pred = [['O', 'O', 'B-MISC', 'I-MISC', 'I-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']]
>>>
>>> f1_score(y_true, y_pred)
0.50
>>> accuracy_score(y_true, y_pred)
0.80
>>> classification_report(y_true, y_pred)
              precision    recall  f1-score   support

        MISC       0.00      0.00      0.00         1
         PER       1.00      1.00      1.00         1

   micro avg       0.50      0.50      0.50         2
   macro avg       0.50      0.50      0.50         2
weighted avg       0.50      0.50      0.50         2
```
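To see where the numbers in the usage example come from, here is a simplified re-computation in plain Python. It assumes well-formed IOB2 tags; seqeval's actual implementation also handles the other schemes and the conlleval edge cases:

```python
y_true = [['O', 'O', 'O', 'B-MISC', 'I-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']]
y_pred = [['O', 'O', 'B-MISC', 'I-MISC', 'I-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']]

def spans(tags):
    """Extract (type, start, end) entity spans from well-formed IOB2 tags."""
    out, etype, start = [], None, None
    for i, tag in enumerate(tags + ["O"]):  # sentinel "O" closes a trailing entity
        if etype and (tag == "O" or tag.startswith("B-") or tag[2:] != etype):
            out.append((etype, start, i))
            etype = None
        if tag.startswith("B-"):
            etype, start = tag[2:], i
    return out

# Chunk-level scores: an entity counts only if type AND boundaries match exactly.
true_set = {(i,) + s for i, sent in enumerate(y_true) for s in spans(sent)}
pred_set = {(i,) + s for i, sent in enumerate(y_pred) for s in spans(sent)}
tp = len(true_set & pred_set)               # only the PER entity matches
precision = tp / len(pred_set)              # 1 / 2
recall = tp / len(true_set)                 # 1 / 2
f1 = 2 * precision * recall / (precision + recall)
print(f1)        # 0.5, as in the example above

# accuracy_score, by contrast, is plain token-level accuracy.
pairs = [(t, p) for ts, ps in zip(y_true, y_pred) for t, p in zip(ts, ps)]
accuracy = sum(t == p for t, p in pairs) / len(pairs)
print(accuracy)  # 0.8: 8 of the 10 tokens agree
```

This is why MISC scores 0.00 in the report: the predicted MISC span starts one token early, so at the chunk level it is a complete miss even though most of its tokens overlap.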
If you want to explicitly specify the evaluation scheme, use `mode='strict'`:
```python
>>> from seqeval.scheme import IOB2
>>> classification_report(y_true, y_pred, mode='strict', scheme=IOB2)
              precision    recall  f1-score   support

        MISC       0.00      0.00      0.00         1
         PER       1.00      1.00      1.00         1

   micro avg       0.50      0.50      0.50         2
   macro avg       0.50      0.50      0.50         2
weighted avg       0.50      0.50      0.50         2
```
Note: the behavior of strict mode differs from the default mode, which is designed to simulate conlleval.
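One way the two modes can diverge is in how they treat tags that are invalid under the declared scheme. The sketch below (plain Python, not seqeval's code) contrasts a conlleval-style lenient reading, where a dangling I- tag opens a new entity, with a strict IOB2 reading, where entities may only begin with B-:

```python
tags = ["I-PER", "I-PER", "O"]  # no B-PER: invalid under strict IOB2

def lenient_spans(tags):
    """conlleval-style: an I- tag with no open entity starts one anyway."""
    out, etype, start = [], None, None
    for i, tag in enumerate(tags + ["O"]):  # sentinel "O" closes a trailing entity
        if etype and (tag == "O" or tag.startswith("B-") or tag[2:] != etype):
            out.append((etype, start, i))
            etype = None
        if etype is None and tag != "O":
            etype, start = tag[2:], i
    return out

def strict_spans(tags):
    """Strict IOB2: entities may only begin with a B- tag."""
    out, etype, start = [], None, None
    for i, tag in enumerate(tags + ["O"]):
        if etype and (tag == "O" or tag.startswith("B-") or tag[2:] != etype):
            out.append((etype, start, i))
            etype = None
        if tag.startswith("B-"):
            etype, start = tag[2:], i
    return out

print(lenient_spans(tags))  # [('PER', 0, 2)]
print(strict_spans(tags))   # []
```

On real data with such malformed sequences, the two modes can therefore report different scores for the same predictions.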
Installation
To install seqeval, simply run:
```bash
$ pip install seqeval
```