Metadata-Version: 2.1
Name: python-pilot
Version: 0.0.11
Summary: A python terminal with coding copilot inside
Home-page: https://github.com/roy-pstr/python-pilot
Author: Roy Pasternak
Author-email: roy@larium.ai
Requires-Python: >=3.9,<4.0
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Provides-Extra: llama
Requires-Dist: huggingface-hub (>=0.21.4,<0.22.0) ; extra == "llama"
Requires-Dist: llama-cpp-python[server] (>=0.2.56,<0.3.0) ; extra == "llama"
Requires-Dist: numpy (>=1.26.4,<2.0.0)
Requires-Dist: openai (>=1.13.3,<2.0.0)
Requires-Dist: python-dotenv (>=1.0.1,<2.0.0)
Requires-Dist: tiktoken (>=0.6.0,<0.7.0)
Description-Content-Type: text/markdown

# PyPilot
A Python terminal with a coding copilot inside.<br/>
It behaves like a regular Python terminal, except that your comments are sent to the copilot as requests.<br/>
Don't forget to set the API key (only OpenAI is supported for now).

## Demo
<img src="./assets/demo.gif" />

## Features
- Code generation inside the python terminal.
- Your comments are used to communicate with the copilot.
- The copilot is aware of the terminal history and locals.
- Both code and chat responses are supported.
- Supports system commands from within the terminal (e.g. `!pip install <package_name>`).
- Supports all OpenAI models.

## Installation
```bash
$ pip install python-pilot
```

## Usage
```bash
$ pypilot --api-key sk-....
```
or
```bash
$ export OPENAI_API_KEY=sk-... 
$ pypilot
```
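For illustration, a session might look like the following. This is a hypothetical transcript, not actual output: the copilot's suggestions depend on the model, and the exact prompt and display format may differ from what is shown here.

```python
$ pypilot
>>> nums = [3, 1, 2]
>>> # sort nums in descending order
nums.sort(reverse=True)  # code suggested by the copilot in response to the comment
>>> nums
[3, 2, 1]
>>> !pip install requests  # system commands also work inside the terminal
```

The key idea is the second line: instead of being ignored, the comment is forwarded to the copilot, which answers with code (or chat) in the context of the terminal's history and locals.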

## TODO
- add a way to include only function headers in the history context
- docker containers
- add a selector step that decides what context the next LLM prompt should have (full terminal history vs. locals only) and whether the output should be code or chat:
    - history: code executed (with/without expressions), errors, LLM requests
    - locals: vars, functions, modules
- add support for an LLM config file
