transformers-stream-generator-0.0.5 metadata and description
This is a text generation method which returns a generator, streaming out each token in real-time during inference, based on Huggingface/Transformers.
| author | LowinLi |
| author_email | lowinli@outlook.com |
| classifiers | |
| description_content_type | text/markdown |
| keywords | GPT, stream, transformers, NLP, model hub, transformer, text generation, summarization, translation, q&a, qg, machine learning, CausalLM |
| license | MIT License |
| project_urls | |
| requires_dist | |
| requires_python | >=3.5 |
| File | Tox results | History |
|---|---|---|
| transformers_stream_generator-0.0.5-py3-none-any.whl | | |
transformers-stream-generator
Description
This is a text generation method which returns a generator, streaming out each token in real-time during inference, based on Huggingface/Transformers.
Web Demo
- original
- stream
Installation

```shell
pip install transformers-stream-generator
```
Usage
- just add two lines of code before your original code:

```python
from transformers_stream_generator import init_stream_support
init_stream_support()
```
- add `do_stream=True` in the `model.generate` function and keep `do_sample=True`, then you can get a generator:

```python
generator = model.generate(input_ids, do_stream=True, do_sample=True)
for token in generator:
    word = tokenizer.decode(token)
    print(word)
```
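The consumption pattern above can be sketched without downloading a model. The snippet below is a minimal illustration in which `fake_generate` and `decode` are invented stand-ins for `model.generate(..., do_stream=True)` and `tokenizer.decode` (the token ids and toy vocabulary are made up for this example); the point is that the caller receives tokens one at a time and can emit text as it arrives:

```python
# Invented stub standing in for model.generate(input_ids, do_stream=True,
# do_sample=True): with do_stream=True the call returns a generator that
# yields one token id at a time instead of the finished sequence.
def fake_generate():
    for token_id in [17, 42, 7]:
        yield token_id

# Toy vocabulary standing in for the tokenizer (hypothetical values).
VOCAB = {17: "hello", 42: " world", 7: "!"}

def decode(token_id):
    # Stands in for tokenizer.decode(token)
    return VOCAB[token_id]

text = ""
for token in fake_generate():
    word = decode(token)
    print(word, end="", flush=True)  # stream each piece as soon as it arrives
    text += word
print()
```

With the real library, the loop body is identical; only `fake_generate` and `decode` are replaced by the actual `model.generate` call and `tokenizer.decode`.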