Metadata-Version: 2.4
Name: esperanto
Version: 1.2.0
Summary: A unified interface for various AI model providers
Project-URL: repository, https://github.com/lfnovo/esperanto
Project-URL: homepage, https://github.com/lfnovo/esperanto
Project-URL: documentation, https://github.com/lfnovo/esperanto#readme
Author-email: LUIS NOVO <lfnovo@gmail.com>
License: MIT
License-File: LICENSE
Keywords: ai,anthropic,elevenlabs,google,llm,openai,speech-to-text,text-to-speech
Requires-Python: <3.14,>=3.9
Requires-Dist: pydantic>=2.0.0
Requires-Dist: python-dotenv>=1.0.1
Requires-Dist: requests>=2.31.0
Provides-Extra: all
Requires-Dist: anthropic>=0.39.0; extra == 'all'
Requires-Dist: elevenlabs>=0.3.0; extra == 'all'
Requires-Dist: google-cloud-texttospeech>=2.14.0; extra == 'all'
Requires-Dist: google-genai>=1.8.0; extra == 'all'
Requires-Dist: groq>=0.12.0; extra == 'all'
Requires-Dist: ollama>=0.4.1; extra == 'all'
Requires-Dist: openai>=1.55.1; extra == 'all'
Requires-Dist: tokenizers>=0.15.2; extra == 'all'
Requires-Dist: torch>=2.2.2; extra == 'all'
Requires-Dist: transformers>=4.40.0; extra == 'all'
Requires-Dist: vertexai>=1.71.1; extra == 'all'
Requires-Dist: voyageai>=0.1.4; extra == 'all'
Provides-Extra: all-with-langchain
Requires-Dist: anthropic>=0.39.0; extra == 'all-with-langchain'
Requires-Dist: elevenlabs>=0.3.0; extra == 'all-with-langchain'
Requires-Dist: google-cloud-texttospeech>=2.14.0; extra == 'all-with-langchain'
Requires-Dist: google-genai>=1.8.0; extra == 'all-with-langchain'
Requires-Dist: groq>=0.12.0; extra == 'all-with-langchain'
Requires-Dist: langchain-anthropic>=0.3.0; extra == 'all-with-langchain'
Requires-Dist: langchain-core<0.4.0,>=0.3.29; extra == 'all-with-langchain'
Requires-Dist: langchain-google-genai>=2.1.2; extra == 'all-with-langchain'
Requires-Dist: langchain-groq>=0.2.1; extra == 'all-with-langchain'
Requires-Dist: langchain-ollama>=0.2.0; extra == 'all-with-langchain'
Requires-Dist: langchain-openai>=0.2.9; extra == 'all-with-langchain'
Requires-Dist: langchain<0.4.0,>=0.3.8; extra == 'all-with-langchain'
Requires-Dist: ollama>=0.4.1; extra == 'all-with-langchain'
Requires-Dist: openai>=1.55.1; extra == 'all-with-langchain'
Requires-Dist: tokenizers>=0.15.2; extra == 'all-with-langchain'
Requires-Dist: torch>=2.2.2; extra == 'all-with-langchain'
Requires-Dist: transformers>=4.40.0; extra == 'all-with-langchain'
Requires-Dist: vertexai>=1.71.1; extra == 'all-with-langchain'
Requires-Dist: voyageai>=0.1.4; extra == 'all-with-langchain'
Provides-Extra: anthropic
Requires-Dist: anthropic>=0.39.0; extra == 'anthropic'
Provides-Extra: dev
Requires-Dist: build; extra == 'dev'
Requires-Dist: ipykernel>=6.29.5; extra == 'dev'
Requires-Dist: ipywidgets>=8.1.5; extra == 'dev'
Requires-Dist: mypy>=1.11.1; extra == 'dev'
Requires-Dist: pytest-asyncio>=0.23.5; extra == 'dev'
Requires-Dist: pytest-cov>=6.0.0; extra == 'dev'
Requires-Dist: pytest-mock>=3.12.0; extra == 'dev'
Requires-Dist: pytest>=8.3.3; extra == 'dev'
Requires-Dist: ruff>=0.5.5; extra == 'dev'
Requires-Dist: twine; extra == 'dev'
Requires-Dist: types-requests>=2.32.0.20241016; extra == 'dev'
Provides-Extra: elevenlabs
Requires-Dist: elevenlabs>=0.3.0; extra == 'elevenlabs'
Provides-Extra: google
Requires-Dist: google-cloud-texttospeech>=2.14.0; extra == 'google'
Requires-Dist: google-genai>=1.8.0; extra == 'google'
Provides-Extra: groq
Requires-Dist: groq>=0.12.0; extra == 'groq'
Provides-Extra: langchain
Requires-Dist: langchain-anthropic>=0.3.0; extra == 'langchain'
Requires-Dist: langchain-core<0.4.0,>=0.3.29; extra == 'langchain'
Requires-Dist: langchain-google-genai>=2.1.2; extra == 'langchain'
Requires-Dist: langchain-groq>=0.2.1; extra == 'langchain'
Requires-Dist: langchain-ollama>=0.2.0; extra == 'langchain'
Requires-Dist: langchain-openai>=0.2.9; extra == 'langchain'
Requires-Dist: langchain<0.4.0,>=0.3.8; extra == 'langchain'
Provides-Extra: ollama
Requires-Dist: ollama>=0.4.1; extra == 'ollama'
Provides-Extra: openai
Requires-Dist: openai>=1.55.1; extra == 'openai'
Provides-Extra: transformers
Requires-Dist: bitsandbytes>=0.43.0; extra == 'transformers'
Requires-Dist: tokenizers>=0.15.2; extra == 'transformers'
Requires-Dist: torch>=2.2.2; extra == 'transformers'
Requires-Dist: transformers>=4.40.0; extra == 'transformers'
Provides-Extra: vertex
Requires-Dist: google-genai>=1.8.0; extra == 'vertex'
Requires-Dist: vertexai>=1.71.1; extra == 'vertex'
Provides-Extra: voyage
Requires-Dist: voyageai>=0.1.4; extra == 'voyage'
Description-Content-Type: text/markdown

# Esperanto 🌐

[![PyPI version](https://badge.fury.io/py/esperanto.svg)](https://badge.fury.io/py/esperanto)
[![PyPI Downloads](https://img.shields.io/pypi/dm/esperanto)](https://pypi.org/project/esperanto/)
[![Coverage](https://img.shields.io/badge/coverage-87%25-brightgreen)](https://github.com/lfnovo/esperanto)
[![Python Versions](https://img.shields.io/pypi/pyversions/esperanto)](https://pypi.org/project/esperanto/)
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)

Esperanto is a Python library that provides a unified interface for interacting with various AI model providers. It simplifies working with the APIs of different model types (LLMs, embedders, transcribers, and speech synthesizers) by offering a consistent interface while preserving provider-specific optimizations.

## Features ✨

- **Unified Interface**: Work with multiple AI model providers using a consistent API
- **Provider Support**:
  - OpenAI (GPT-4, GPT-3.5, o1, Whisper, TTS)
  - Anthropic (Claude 3)
  - OpenRouter (Access to multiple models)
  - xAI (Grok)
  - Groq (Mixtral, Llama, Whisper)
  - Google GenAI (Gemini, Text-to-Speech, Embeddings)
  - Vertex AI (Google Cloud)
  - Ollama (Local deployment)
  - Transformers (Local Hugging Face models)
  - ElevenLabs (Text-to-Speech)
- **Embedding Support**: Multiple embedding providers for vector representations
- **Speech-to-Text Support**: Transcribe audio using multiple providers
- **Text-to-Speech Support**: Generate speech using multiple providers
- **Async Support**: Both synchronous and asynchronous API calls
- **Streaming**: Support for streaming responses
- **Structured Output**: JSON output formatting (where supported)
- **LangChain Integration**: Easy conversion to LangChain chat models

For detailed information about our providers, check out:
- [LLM Providers Documentation](https://github.com/lfnovo/esperanto/blob/main/docs/llm.md)
- [Embedding Providers Documentation](https://github.com/lfnovo/esperanto/blob/main/docs/embedding.md)
- [Speech-to-Text Providers Documentation](https://github.com/lfnovo/esperanto/blob/main/docs/speech_to_text.md)
- [Text-to-Speech Providers Documentation](https://github.com/lfnovo/esperanto/blob/main/docs/text_to_speech.md)

## Installation 🚀

Install Esperanto using pip:

```bash
pip install esperanto
```

For specific providers, install with their extras:

```bash
# For OpenAI support
pip install "esperanto[openai]"

# For Anthropic support
pip install "esperanto[anthropic]"

# For Google (GenAI) support
pip install "esperanto[google]"

# For Vertex AI support
pip install "esperanto[vertex]"

# For Groq support
pip install "esperanto[groq]"

# For Ollama support
pip install "esperanto[ollama]"

# For Transformers support
pip install "esperanto[transformers]"

# For ElevenLabs support
pip install "esperanto[elevenlabs]"

# For LangChain integration
pip install "esperanto[langchain]"

# For all providers without LangChain
pip install "esperanto[all]"

# For all providers including LangChain
pip install "esperanto[all-with-langchain]"
```
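If you install only a subset of extras, you can check at runtime whether a provider's SDK is importable before constructing a model. A minimal sketch using only the standard library; the `PROVIDER_SDKS` mapping and `available_sdks` helper are our own illustration (not part of Esperanto), with import names mirroring the dependency list above:

```python
from importlib.util import find_spec

# Map provider extras to the import name of the SDK they pull in.
# These pairings follow the optional dependencies listed above.
PROVIDER_SDKS = {
    "openai": "openai",
    "anthropic": "anthropic",
    "groq": "groq",
    "ollama": "ollama",
    "elevenlabs": "elevenlabs",
}

def available_sdks() -> list[str]:
    """Return the provider extras whose SDK is importable in this environment."""
    return [name for name, module in PROVIDER_SDKS.items()
            if find_spec(module) is not None]
```

This lets an application fail fast with a clear message instead of hitting an `ImportError` deep inside a provider class.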

## Provider Support Matrix

| Provider     | LLM Support | Embedding Support | Speech-to-Text | Text-to-Speech | JSON Mode |
|--------------|-------------|------------------|----------------|----------------|-----------|
| OpenAI       | ✅          | ✅               | ✅             | ✅             | ✅        |
| Anthropic    | ✅          | ❌               | ❌             | ❌             | ✅        |
| Groq         | ✅          | ❌               | ✅             | ❌             | ✅        |
| Google (GenAI) | ✅          | ✅               | ❌             | ✅             | ✅        |
| Vertex AI    | ✅          | ✅               | ❌             | ❌             | ❌        |
| Ollama       | ✅          | ✅               | ❌             | ❌             | ❌        |
| Transformers | ❌          | ✅               | ❌             | ❌             | ❌        |
| ElevenLabs   | ❌          | ❌               | ❌             | ✅             | ❌        |

## Quick Start 🏃‍♂️

You can use Esperanto in two ways: directly with provider-specific classes or through the AI Factory.

### Using AI Factory

```python
from esperanto.factory import AIFactory

# Get available providers for each model type
providers = AIFactory.get_available_providers()
print(providers)
# Output:
# {
#     'language': ['openai', 'anthropic', 'google', 'groq', 'ollama', 'openrouter', 'xai'],
#     'embedding': ['openai', 'google', 'ollama', 'vertex', 'transformers'],
#     'speech_to_text': ['openai', 'groq'],
#     'text_to_speech': ['openai', 'elevenlabs', 'google']
# }

# Create a language model instance with structured output (JSON)
model = AIFactory.create_language(
    "openai", 
    "gpt-3.5-turbo",
    structured={"type": "json"}
)
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What's the capital of France?"},
]
response = model.chat_complete(messages)  # Response will be in JSON format

# Create an embedding instance
model = AIFactory.create_embedding("openai", "text-embedding-3-small")
texts = ["Hello, world!", "Another text"]
embeddings = model.embed(texts)
```
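Because the factory takes plain provider and model strings, a common pattern is to drive them from configuration so you can swap providers without code changes. A sketch of one way to do that; the `ESPERANTO_*` environment variable names and the `model_config` helper are our own convention, not part of Esperanto:

```python
import os

def model_config(kind: str, default_provider: str, default_model: str) -> tuple[str, str]:
    """Resolve a (provider, model) pair for a model kind from the environment."""
    provider = os.environ.get(f"ESPERANTO_{kind.upper()}_PROVIDER", default_provider)
    model = os.environ.get(f"ESPERANTO_{kind.upper()}_MODEL", default_model)
    return provider, model

# Then feed the result to the factory, e.g.:
# provider, model_name = model_config("language", "openai", "gpt-3.5-turbo")
# llm = AIFactory.create_language(provider, model_name)
```

Setting `ESPERANTO_LANGUAGE_PROVIDER=groq` in the environment would then switch providers with no code edit.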

## Standardized Responses

All providers in Esperanto return standardized response objects, making it easy to work with different models without changing your code.

### LLM Responses

```python
from esperanto.factory import AIFactory

model = AIFactory.create_language("openai", "gpt-3.5-turbo")
messages = [{"role": "user", "content": "Hello!"}]

# All LLM responses follow this structure
response = model.chat_complete(messages)
print(response.choices[0].message.content)  # The actual response text
print(response.choices[0].message.role)     # 'assistant'
print(response.model)                       # The model used
print(response.usage.total_tokens)          # Token usage information

# For streaming responses
for chunk in model.chat_complete(messages):
    print(chunk.choices[0].delta.content)   # Partial response text
```

### Embedding Responses

```python
from esperanto.factory import AIFactory

model = AIFactory.create_embedding("openai", "text-embedding-3-small")
texts = ["Hello, world!", "Another text"]

# All embedding responses follow this structure
response = model.embed(texts)
print(response.data[0].embedding)     # Vector for first text
print(response.data[0].index)         # Index of the text (0)
print(response.model)                 # The model used
print(response.usage.total_tokens)    # Token usage information
```
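Because each embedding comes back as a plain list of floats, you can feed it to any vector math you like. For instance, a quick cosine similarity between two returned vectors, in pure Python with no extra dependencies:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# With a real embedding response:
# cosine_similarity(response.data[0].embedding, response.data[1].embedding)
```

Values close to 1.0 indicate semantically similar texts; values near 0.0 indicate unrelated ones.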

The standardized response objects ensure consistency across different providers, making it easy to:
- Switch between providers without changing your application code
- Handle responses in a uniform way
- Access common attributes like token usage and model information
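As a concrete illustration of that uniformity, a helper that extracts the assistant text relies only on the standardized shape, so it works unchanged for any provider. The sketch below exercises it with a hand-built stub; the stub itself is our own test double mimicking the response fields shown above, not an Esperanto object:

```python
from types import SimpleNamespace

def response_text(response) -> str:
    """Pull the assistant text out of any standardized LLM response."""
    return response.choices[0].message.content

# Stub mimicking the standardized response shape documented above.
stub = SimpleNamespace(
    choices=[SimpleNamespace(
        message=SimpleNamespace(role="assistant", content="Paris"))],
    model="stub-model",
    usage=SimpleNamespace(total_tokens=12),
)
```

Stubs like this are also handy for unit-testing application code without making real API calls.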

## Links 🔗

- **Documentation**: [GitHub Documentation](https://github.com/lfnovo/esperanto#readme)
- **Source Code**: [GitHub Repository](https://github.com/lfnovo/esperanto)
- **Issue Tracker**: [GitHub Issues](https://github.com/lfnovo/esperanto/issues)

## License 📄

This project is licensed under the MIT License - see the [LICENSE](https://github.com/lfnovo/esperanto/blob/main/LICENSE) file for details.
