Metadata-Version: 2.1
Name: pleonasty
Version: 0.1.2
Summary: A very simple abstraction for LLMs to get single responses to a given input.
Author-email: "Ryan L. Boyd" <ryan@ryanboyd.io>
Project-URL: Homepage, https://github.com/ryanboyd/pleonasty
Project-URL: Issues, https://github.com/ryanboyd/pleonasty/issues
Classifier: Programming Language :: Python :: 3
Classifier: License :: OSI Approved :: MIT License
Classifier: Operating System :: OS Independent
Requires-Python: >=3.10
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: tqdm >=4.66.5
Requires-Dist: transformers >=4.44.2
Requires-Dist: accelerate >=0.33.0
Requires-Dist: sentencepiece >=0.2.0
Requires-Dist: protobuf >=5.28.0

# pleonasty

A very, very, very, very basic library to abstract interactions with an LLM for single-response purposes. Like, if you're a computer scientist, this package will probably make things harder rather than easier. But, if you're like me, then this is great!

In essence, this is a library that makes it a bit easier to load up a "chat" or "instruct" LLM and then have it sequentially provide a single response to multiple input texts. For example, if you want to use an LLM to "code" or annotate texts in the same way that a human would, you might want to give it the same instructions before batch coding an entire dataset. This makes it relatively easy to do so, saving the output as a CSV file.
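The batch-annotation workflow described above can be sketched generically. Note that this is a minimal illustration of the idea, not pleonasty's actual API — the function names and the placeholder `fake_llm` callable are assumptions for demonstration; see the example notebook in this repo for real usage.

```python
import csv
import io

def annotate_texts(texts, instructions, ask_llm):
    """Apply the same instructions to each text and collect one response per text."""
    rows = []
    for text in texts:
        # each text gets a fresh, single-turn response under the same instructions
        response = ask_llm(instructions, text)
        rows.append({"text": text, "response": response})
    return rows

def rows_to_csv(rows):
    """Serialize the annotated rows to a CSV string."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=["text", "response"])
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

# Placeholder "LLM" used purely for illustration
fake_llm = lambda instructions, text: f"label for: {text}"

rows = annotate_texts(["I love this!", "Meh."], "Rate the sentiment.", fake_llm)
print(rows_to_csv(rows))
```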

## Installation

The easiest way to get up and running with `pleonasty` is to install via `pip`, e.g.,

```shell
pip install pleonasty
```

Note that if you plan to use a GPU for accelerated inference, you will need to have your CUDA environment properly configured before using this package. This includes having the appropriate version of [PyTorch installed with CUDA support](https://pytorch.org/get-started/locally/).

To use pleonasty, ensure you have the following installed:

- Python 3.10 or higher (might work with older versions, but not tested)
- PyTorch with CUDA support (if using a GPU)

All other requirements can be found in the `pyproject.toml` file.
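A quick way to check whether your environment is ready for GPU inference is the snippet below; it uses only PyTorch's standard `torch.cuda` API and degrades gracefully if PyTorch is missing.

```python
def check_gpu():
    """Return a short status string describing GPU readiness."""
    try:
        import torch
    except ImportError:
        return "PyTorch is not installed"
    if torch.cuda.is_available():
        # report the first visible CUDA device
        return f"CUDA ready: {torch.cuda.get_device_name(0)}"
    return "PyTorch installed, but no CUDA device detected"

print(check_gpu())
```

If this reports no CUDA device, pleonasty will still run, but inference will fall back to CPU and be considerably slower.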

## How to Use

An example notebook included in this repo shows how the library can be used. I have also included a "chat mode" that lets you load up an LLM and have back-and-forth interactions with it; an example of this is also provided in a sample notebook.
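For intuition, "chat mode" boils down to maintaining a growing message history across turns. The sketch below illustrates that pattern generically — it is not pleonasty's actual interface (the function names and the placeholder model callable are assumptions; see the chat-mode notebook for real usage).

```python
def chat_turn(history, user_message, ask_llm):
    """Append the user's message, query the model with the full history,
    and return the history extended with the assistant's reply."""
    history = history + [{"role": "user", "content": user_message}]
    reply = ask_llm(history)
    return history + [{"role": "assistant", "content": reply}]

# Placeholder model that just echoes the latest user message
echo_llm = lambda history: f"You said: {history[-1]['content']}"

history = []
history = chat_turn(history, "Hello there!", echo_llm)
history = chat_turn(history, "How are you?", echo_llm)
print(history[-1]["content"])
```

Because the full history is passed on every turn, the model sees the entire conversation so far — which is what distinguishes chat mode from the single-response batch coding described earlier.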
