loongson/pypi/: transformers-stream-generator-0.0.5 metadata and description
This is a text generation method that returns a generator, streaming out each token in real time during inference, based on Hugging Face Transformers.
| Field | Value |
|---|---|
| author | LowinLi |
| author_email | lowinli@outlook.com |
| classifiers | |
| description_content_type | text/markdown |
| keywords | GPT, stream, transformers, NLP, model hub, transformer, text generation, summarization, translation, q&a, qg, machine learning, CausalLM |
| license | MIT License |
| project_urls | |
| requires_dist | |
| requires_python | >=3.5 |
Because this project isn't in the mirror_whitelist, no releases from root/pypi are included.
| File | Tox results | History |
|---|---|---|
| transformers_stream_generator-0.0.5-py3-none-any.whl | | |
transformers-stream-generator
Description
This is a text generation method that returns a generator, streaming out each token in real time during inference, based on Hugging Face Transformers.
Web Demo
- original
- stream
Installation
```
pip install transformers-stream-generator
```
Usage
- Just add two lines of code before your original code:

```python
from transformers_stream_generator import init_stream_support
init_stream_support()
```

- Add `do_stream=True` to the `model.generate` call and keep `do_sample=True`; you then get a generator:

```python
generator = model.generate(input_ids, do_stream=True, do_sample=True)
for token in generator:
    word = tokenizer.decode(token)
    print(word)
```
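The consumption pattern above can be sketched without downloading a model by substituting stand-ins for the generator and the tokenizer. Everything below (`simulated_generate`, `decode`, the token-id-to-word table) is a hypothetical stand-in chosen only to keep the sketch self-contained; in real use the generator comes from `model.generate(input_ids, do_stream=True, do_sample=True)` after calling `init_stream_support()`, and decoding is done by `tokenizer.decode`.

```python
def simulated_generate():
    # Stand-in for the streaming generator: yields one token id at a time,
    # the way model.generate(..., do_stream=True, do_sample=True) does.
    for token_id in [15496, 11, 995]:
        yield token_id

# Toy id-to-text table standing in for the tokenizer's vocabulary.
VOCAB = {15496: "Hello", 11: ",", 995: " world"}

def decode(token_id):
    # Stand-in for tokenizer.decode on a single token id.
    return VOCAB[token_id]

def stream_text():
    # Consume tokens as they arrive, printing each piece immediately
    # instead of waiting for the full sequence to finish.
    pieces = []
    for token in simulated_generate():
        word = decode(token)
        print(word, end="", flush=True)  # appears in real time
        pieces.append(word)
    return "".join(pieces)
```

The point of the pattern is that each `print` happens as soon as its token is produced, so the caller sees output incrementally rather than after generation completes.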