loongson/pypi/: transformers-stream-generator-0.0.5 metadata and description

Homepage Simple index

This is a text generation method which returns a generator, streaming out each token in real-time during inference, based on Huggingface/Transformers.

author LowinLi
author_email lowinli@outlook.com
classifiers
  • Intended Audience :: Developers
  • Topic :: Scientific/Engineering :: Artificial Intelligence
  • License :: OSI Approved :: Apache Software License
  • Programming Language :: Python :: 3
  • Programming Language :: Python :: 3.5
  • Programming Language :: Python :: 3.6
  • Programming Language :: Python :: 3.7
  • Programming Language :: Python :: 3.8
  • Programming Language :: Python :: 3.9
  • Programming Language :: Python :: 3.10
description_content_type text/markdown
keywords GPT,stream,transformers,NLP,model hub,transformer,text generation,summarization,translation,q&a,qg,machine learning,CausalLM
license MIT License
project_urls
  • Repo, https://github.com/LowinLi/transformers-stream-generator
  • Bug Tracker, https://github.com/LowinLi/transformers-stream-generator/issues
requires_dist
  • transformers>=4.26.1
requires_python >=3.5

Because this project isn't in the mirror_whitelist, no releases from root/pypi are included.

File
transformers_stream_generator-0.0.5-py3-none-any.whl
  • Size: 12 KB
  • Type: Python Wheel
  • Python: 3

transformers-stream-generator


Description

A text-generation method, built on Hugging Face Transformers, that returns a generator and streams out each token in real time during inference.

Web Demo

Installation

pip install transformers-stream-generator

Usage

  1. Add two lines of code before your original code:
from transformers_stream_generator import init_stream_support
init_stream_support()
  2. Add do_stream=True to the model.generate call and keep do_sample=True; generate then returns a generator:
generator = model.generate(input_ids, do_stream=True, do_sample=True)
for token in generator:
    word = tokenizer.decode(token)
    print(word)
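The pattern the two steps above enable can be illustrated with a plain-Python sketch that needs no model: generation yields one token id at a time, and the caller decodes and prints each piece as it arrives instead of waiting for the full output. The toy vocabulary, toy_decode, and toy_generate_stream below are stand-ins for illustration only, not part of this library's API.

```python
# Illustration of streaming generation: yield token ids one at a time so the
# caller can decode and display text incrementally. The toy "model" and
# vocabulary are stand-ins, not the real tokenizer/model.

TOY_VOCAB = {0: "Hello", 1: ",", 2: " world", 3: "!"}

def toy_decode(token_id):
    """Stand-in for tokenizer.decode: map one token id to its text."""
    return TOY_VOCAB[token_id]

def toy_generate_stream(token_ids):
    """Stand-in for model.generate(..., do_stream=True): yield ids one by one."""
    for token_id in token_ids:
        yield token_id

streamed = []
for token in toy_generate_stream([0, 1, 2, 3]):
    word = toy_decode(token)
    streamed.append(word)
    print(word, end="", flush=True)  # text appears as each token arrives
print()
```

The real library follows the same shape: the for-loop body runs once per generated token, so a UI can render partial output long before generation finishes.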

Example