Metadata-Version: 2.4
Name: agent-framework-lib
Version: 0.3.1
Summary: A comprehensive Python framework for building and serving conversational AI agents with FastAPI
Author-email: Sebastian Pavel <sebastian@cinco.ai>, Elliott Girard <elliott.girard@icloud.com>
Maintainer-email: Sebastian Pavel <sebastian@cinco.ai>
License: MIT
Project-URL: Homepage, https://github.com/Cinco-AI/AgentFramework
Project-URL: Repository, https://github.com/Cinco-AI/AgentFramework.git
Project-URL: Issues, https://github.com/Cinco-AI/AgentFramework/issues
Project-URL: Documentation, https://github.com/Cinco-AI/AgentFramework/blob/main/README.md
Project-URL: Changelog, https://github.com/Cinco-AI/AgentFramework/blob/main/docs/CHANGELOG.md
Project-URL: Bug Tracker, https://github.com/Cinco-AI/AgentFramework/issues
Project-URL: Source Code, https://github.com/Cinco-AI/AgentFramework
Keywords: ai,agents,fastapi,llamaindex,framework,conversational-ai,multi-agent,llm,openai,gemini,chatbot,session-management,framework-agnostic
Classifier: Development Status :: 4 - Beta
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: MIT License
Classifier: Operating System :: OS Independent
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Programming Language :: Python :: 3.13
Classifier: Programming Language :: Python :: 3 :: Only
Classifier: Topic :: Software Development :: Libraries :: Python Modules
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Classifier: Topic :: Communications :: Chat
Classifier: Topic :: Internet :: WWW/HTTP :: HTTP Servers
Classifier: Framework :: FastAPI
Classifier: Environment :: Web Environment
Classifier: Typing :: Typed
Requires-Python: <3.14,>=3.10
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: aiofiles>=24.1.0
Requires-Dist: fastapi>=0.115.12
Requires-Dist: uvicorn>=0.34.2
Requires-Dist: fastmcp>=2.2.7
Requires-Dist: mcp-python-interpreter
Requires-Dist: pyyaml>=6.0.2
Requires-Dist: pydantic>=2.0.0
Requires-Dist: opentelemetry-sdk>=1.33.1
Requires-Dist: opentelemetry-api>=1.33.1
Requires-Dist: opentelemetry-exporter-otlp-proto-grpc>=1.33.1
Requires-Dist: pymongo>=4.10.1
Requires-Dist: motor>=3.6.0
Requires-Dist: black>=25.1.0
Requires-Dist: markitdown[all]>=0.1.2
Requires-Dist: psutil>=7.0.0
Requires-Dist: weasyprint>=60.0
Requires-Dist: markdown>=3.5
Provides-Extra: llamaindex
Requires-Dist: llama-index-core>=0.13.3; extra == "llamaindex"
Requires-Dist: llama-index>=0.13.3; extra == "llamaindex"
Requires-Dist: llama-index-llms-openai>=0.4.7; extra == "llamaindex"
Requires-Dist: llama-index-llms-gemini>=0.4.7; extra == "llamaindex"
Requires-Dist: llama-index-llms-anthropic>=0.4.7; extra == "llamaindex"
Provides-Extra: microsoft
Provides-Extra: dev
Requires-Dist: pytest>=8.4.0; extra == "dev"
Requires-Dist: pytest-asyncio>=0.21.0; extra == "dev"
Requires-Dist: pytest-cov>=6.2.1; extra == "dev"
Requires-Dist: pytest-mock>=3.10.0; extra == "dev"
Requires-Dist: pytest-benchmark>=4.0.0; extra == "dev"
Requires-Dist: pytest-xdist>=3.3.0; extra == "dev"
Requires-Dist: black>=25.1.0; extra == "dev"
Requires-Dist: flake8>=6.0.0; extra == "dev"
Requires-Dist: mypy>=1.5.0; extra == "dev"
Requires-Dist: ruff>=0.1.0; extra == "dev"
Requires-Dist: pre-commit>=3.0.0; extra == "dev"
Requires-Dist: aiohttp>=3.12.13; extra == "dev"
Requires-Dist: httpx>=0.28.1; extra == "dev"
Requires-Dist: coverage>=7.0.0; extra == "dev"
Provides-Extra: mongodb
Requires-Dist: pymongo>=4.10.1; extra == "mongodb"
Requires-Dist: motor>=3.6.0; extra == "mongodb"
Provides-Extra: s3
Requires-Dist: boto3>=1.34.0; extra == "s3"
Requires-Dist: botocore>=1.34.0; extra == "s3"
Provides-Extra: minio
Requires-Dist: minio>=7.2.0; extra == "minio"
Provides-Extra: multimodal
Requires-Dist: pillow>=10.0.0; extra == "multimodal"
Requires-Dist: opencv-python>=4.8.0; extra == "multimodal"
Requires-Dist: pytesseract>=0.3.10; extra == "multimodal"
Provides-Extra: all
Requires-Dist: agent-framework-lib[dev,llamaindex,microsoft,minio,mongodb,multimodal,s3]; extra == "all"
Dynamic: license-file

# Agent Framework Library

A comprehensive Python framework for building and serving conversational AI agents with FastAPI. It features a framework-agnostic architecture supporting multiple AI frameworks (LlamaIndex, Microsoft Agent Framework), automatic multi-provider routing (OpenAI, Anthropic, Gemini), dynamic configuration, session management, streaming responses, and a rich web interface.

**🎉 NEW: PyPI Package** - The Agent Framework is now available as a pip-installable package from PyPI, making it easy to integrate into any Python project.

## Installation

```bash
# Install base framework
uv add agent-framework-lib

# Install with LlamaIndex support (quote extras so the shell doesn't glob the brackets)
uv add "agent-framework-lib[llamaindex]"

# Install with all frameworks
uv add "agent-framework-lib[all]"

# Install with development dependencies
uv add "agent-framework-lib[dev]"
```

### PDF Generation System Dependencies

The framework includes PDF generation tools that require system-level dependencies:

**macOS (via Homebrew):**
```bash
brew install pango gdk-pixbuf libffi

# Add to your shell profile (~/.zshrc or ~/.bash_profile):
export DYLD_LIBRARY_PATH="/opt/homebrew/lib:$DYLD_LIBRARY_PATH"
```

**Linux (Ubuntu/Debian):**
```bash
sudo apt-get install libpango-1.0-0 libpangoft2-1.0-0 libgdk-pixbuf2.0-0 libffi-dev
```

**Linux (Fedora/RHEL):**
```bash
sudo dnf install pango gdk-pixbuf2 libffi-devel
```

For detailed installation instructions, see the [Installation Guide](docs/installation.md).

## 🚀 Features

### Core Capabilities

- **Framework-Agnostic Architecture**: Support for multiple AI agent frameworks (LlamaIndex, Microsoft)
- **Multi-Provider Support**: Automatic routing between OpenAI, Anthropic, and Gemini APIs
- **Dynamic System Prompts**: Session-based system prompt control
- **Agent Configuration**: Runtime model parameter adjustment
- **Session Management**: Persistent conversation handling with structured workflow
- **Session Workflow**: Initialize/end session lifecycle with immutable configurations
- **User Feedback System**: Message-level thumbs up/down and session-level flags
- **Media Detection**: Automatic detection and handling of generated images/videos
- **Web Interface**: Built-in test application with rich UI controls
- **Debug Logging**: Comprehensive logging for system prompts and model configuration

### Advanced Features

- **Model Auto-Detection**: Automatic provider selection based on model name
- **Parameter Filtering**: Provider-specific parameter validation
- **Configuration Validation**: Built-in validation and status endpoints
- **Correlation & Conversation Tracking**: Link sessions across agents and track individual exchanges
- **Manager Agent Support**: Built-in coordination features for multi-agent workflows
- **Persistent Session Storage**: MongoDB integration for scalable session persistence
- **Agent Identity Support**: Multi-agent deployment support with automatic agent identification
- **File Storage System**: Persistent file management with multiple storage backends (Local, S3, MinIO)
- **Generated File Tracking**: Automatic distinction between user-uploaded and agent-generated files
- **Multi-Storage Architecture**: Route different file types to appropriate storage systems
- **Markdown Conversion**: Automatic conversion of uploaded files (PDF, DOCX, TXT, etc.) to Markdown
- **Reverse Proxy Support**: Automatic path prefix detection for deployment behind reverse proxies
- **Backward Compatibility**: Existing implementations continue to work
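
The model auto-detection feature above can be sketched as a simple prefix match on the model name. This is an illustrative sketch under stated assumptions, not the framework's actual implementation; `detect_provider` and the prefix lists are hypothetical:

```python
# Hypothetical sketch of model-name-based provider routing;
# the framework's real detection logic may differ.
def detect_provider(model_name: str) -> str:
    """Guess the API provider from a model name prefix."""
    name = model_name.lower()
    if name.startswith(("gpt-", "o1", "o3")):
        return "openai"
    if name.startswith("claude"):
        return "anthropic"
    if name.startswith("gemini"):
        return "gemini"
    raise ValueError(f"No known provider for model: {model_name}")

print(detect_provider("gpt-4o-mini"))  # -> openai
```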

## 🚀 Quick Start

### Option 1: LlamaIndex Agents (Recommended)

The fastest way to create agents with LlamaIndex:

```python
from typing import List
from agent_framework import LlamaIndexAgent, create_basic_agent_server
from llama_index.core.tools import FunctionTool

class MyAgent(LlamaIndexAgent):
    def __init__(self):
        super().__init__()
        # Required: Unique agent ID for session isolation
        self.agent_id = "my_calculator_agent"
    
    def get_agent_prompt(self) -> str:
        return "You are a helpful assistant that can perform calculations."
  
    def get_agent_tools(self) -> List[FunctionTool]:
        def add(a: float, b: float) -> float:
            """Add two numbers together."""
            return a + b
        
        def subtract(a: float, b: float) -> float:
            """Subtract one number from another."""
            return a - b
        
        return [
            FunctionTool.from_defaults(fn=add),
            FunctionTool.from_defaults(fn=subtract)
        ]

# Start server with one line - includes streaming, session management, etc.
create_basic_agent_server(MyAgent, port=8000)
```

**✨ Benefits:**

- **Minimal code** - Focus on your agent logic
- **Built-in streaming** - Real-time responses
- **Session management** - Automatic state persistence
- **10-15 minutes** to create a full-featured agent

### Option 2: Generic Agent Interface

For custom implementations or other frameworks:

```python
from agent_framework import AgentInterface, StructuredAgentInput, StructuredAgentOutput, create_basic_agent_server

class MyAgent(AgentInterface):
    async def get_metadata(self):
        return {"name": "My Agent", "version": "1.0.0"}
  
    async def handle_message(self, session_id: str, agent_input: StructuredAgentInput):
        return StructuredAgentOutput(response_text=f"Hello! You said: {agent_input.query}")

# Start server with one line
create_basic_agent_server(MyAgent, port=8000)
```

## 📚 Documentation

Quick access to all documentation:

### Getting Started
- **[Installation Guide](docs/installation.md)** - Detailed installation instructions for all platforms and configurations
- **[Getting Started](docs/GETTING_STARTED.md)** - Quick start guide to help you choose between LlamaIndex and BaseAgent
- **[Creating Agents](docs/CREATING_AGENTS.md)** - Comprehensive guide for building agents with LlamaIndex or custom frameworks

### Guides
- **[Tools and MCP Integration](docs/TOOLS_AND_MCP_GUIDE.md)** - How to add tools and integrate Model Context Protocol servers
- **[AI Content Management](docs/AI_CONTENT_MANAGEMENT_GUIDE.md)** - Managing AI-generated content and artifacts
- **[Multimodal Tools](docs/MULTIMODAL_TOOLS_GUIDE.md)** - Working with images, audio, and video
- **[Testing Guide](docs/UV_TESTING_GUIDE.md)** - Testing best practices with UV and pytest

### API Reference
- **[API Reference](docs/api-reference.md)** - Complete API documentation for all components
- **[Architecture](ARCHITECTURE.md)** - System architecture and design principles

### Examples
- **[simple_agent.py](examples/simple_agent.py)** - Basic LlamaIndex agent with calculator tools
- **[agent_with_file_storage.py](examples/agent_with_file_storage.py)** - Agent with file upload/download capabilities
- **[agent_with_mcp.py](examples/agent_with_mcp.py)** - Agent with MCP server integration
- **[custom_framework_agent.py](examples/custom_framework_agent.py)** - BaseAgent example for custom frameworks

---

## 📋 Table of Contents

- [Installation](#installation)
- [Features](#-features)
- [Quick Start](#-quick-start)
- [Documentation](#-documentation)
- [Architecture](#-architecture)
- [Development](#️-development)
- [Testing](#-testing)
- [Configuration](#️-configuration)
- [API Reference](#-api-reference)
- [Client Examples](#-client-examples)
- [Web Interface](#-web-interface)
- [Advanced Usage](#-advanced-usage)
- [Authentication](#-authentication)
- [Examples](#-examples)
- [Contributing](#-contributing)
- [License](#-license)
- [Support](#-support)

## 🏗️ Architecture

The framework follows a clean, modular architecture with clear separation of concerns:

```
agent_framework/
├── core/                    # Framework-agnostic core components
│   ├── agent_interface.py   # Abstract agent interface
│   ├── base_agent.py        # Generic base agent
│   ├── agent_provider.py    # Agent lifecycle management
│   ├── state_manager.py     # State management & compression
│   ├── model_config.py      # Multi-provider configuration
│   └── model_clients.py     # LLM client factory
│
├── session/                 # Session management
│   └── session_storage.py   # Session persistence (Memory, MongoDB)
│
├── storage/                 # File storage
│   ├── file_storages.py     # Storage backends (Local, S3, MinIO)
│   ├── file_system_management.py
│   └── storage_optimizer.py
│
├── processing/              # Content processing
│   ├── markdown_converter.py
│   ├── multimodal_integration.py
│   └── ai_content_management.py
│
├── tools/                   # Reusable tools
│   └── multimodal_tools.py
│
├── monitoring/              # Performance & monitoring
│   ├── performance_monitor.py
│   ├── progress_tracker.py
│   ├── resource_manager.py
│   ├── error_handling.py
│   └── error_logging.py
│
├── web/                     # Web server & UI
│   ├── server.py            # FastAPI server
│   ├── modern_ui.html
│   └── test_app.html
│
├── implementations/         # Framework-specific agents
│   ├── llamaindex_agent.py  # LlamaIndex implementation
│   └── microsoft_agent.py   # Microsoft Agent Framework
│
└── utils/                   # Utilities
    └── special_blocks.py
```

### Key Design Principles

1. **Framework-Agnostic Core**: The core framework (server, session management, state persistence) works with any agent implementation
2. **Interface-Driven**: All agents implement `AgentInterface`, ensuring consistent behavior
3. **Modular Architecture**: Clear separation between core, storage, processing, and implementations
4. **Extensible**: Easy to add new agent frameworks without modifying core code

For detailed architecture documentation, see [ARCHITECTURE.md](ARCHITECTURE.md).

## 🛠️ Development

### 1. Installation

For detailed installation instructions, see the [Installation Guide](docs/installation.md).

```bash
# Quick install
uv add "agent-framework-lib[llamaindex]"

# Or clone for development
git clone https://github.com/Cinco-AI/AgentFramework.git
cd AgentFramework
uv sync --group dev
```

### 2. Configuration

```bash
# Copy configuration template
cp env-template.txt .env

# Edit .env with your API keys
```

**Minimal .env setup:**

```env
# At least one API key required
OPENAI_API_KEY=sk-your-openai-key-here

# Set default model
DEFAULT_MODEL=gpt-4o-mini
```

For complete configuration options, see the [Installation Guide](docs/installation.md).

### 3. Start the Server

**Option A: Using convenience function (recommended)**

```python
# In your agent file
from agent_framework import create_basic_agent_server
create_basic_agent_server(MyAgent, port=8000)
```

**Option B: Traditional method**

```bash
# Start the development server
uv run python agent.py

# Or using uvicorn directly
export AGENT_CLASS_PATH="agent:Agent"
uvicorn server:app --reload --host 0.0.0.0 --port 8000
```

### 4. Test the Agent

Open your browser to `http://localhost:8000/ui` or make API calls:

```bash
# Without authentication
curl -X POST http://localhost:8000/message \
  -H "Content-Type: application/json" \
  -d '{"query": "Hello, how are you?"}'

# With API Key authentication
curl -X POST http://localhost:8000/message \
  -H "Content-Type: application/json" \
  -H "X-API-Key: sk-your-secure-api-key-123" \
  -d '{"query": "Hello, how are you?"}'
```

## 🧪 Testing

The project includes a comprehensive test suite built with `pytest` and optimized for UV-based testing.

**🚀 Quick Start with UV (Recommended):**

```bash
# Install test dependencies
uv sync --group test

# Run all tests
uv run pytest

# Run tests with coverage
uv run pytest --cov=agent_framework --cov-report=html

# Run specific test types
uv run pytest -m unit          # Fast unit tests
uv run pytest -m integration   # Integration tests
uv run pytest -m "not slow"    # Skip slow tests
```

**📚 Comprehensive Testing Guide:**

For detailed instructions, see [UV Testing Guide](docs/UV_TESTING_GUIDE.md)

**📊 Test Categories:**

- `unit` - Fast, isolated component tests
- `integration` - Multi-component workflow tests  
- `performance` - Benchmark and performance tests
- `multimodal` - Tests requiring AI vision/audio capabilities
- `storage` - File storage backend tests
- `slow` - Long-running tests (excluded from fast runs)
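
These categories are standard pytest markers. Their registration in `pyproject.toml` would typically look like the following fragment (illustrative; check the repository's actual pytest configuration):

```toml
[tool.pytest.ini_options]
markers = [
    "unit: fast, isolated component tests",
    "integration: multi-component workflow tests",
    "performance: benchmark and performance tests",
    "multimodal: tests requiring AI vision/audio capabilities",
    "storage: file storage backend tests",
    "slow: long-running tests (excluded from fast runs)",
]
```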

## ⚙️ Configuration

### Session Storage Configuration

Configure persistent session storage (optional):

```env
# === Session Storage ===
# Use "memory" (default) for in-memory storage or "mongodb" for persistent storage
SESSION_STORAGE_TYPE=memory

# MongoDB configuration (only required when SESSION_STORAGE_TYPE=mongodb)
MONGODB_CONNECTION_STRING=mongodb://localhost:27017
MONGODB_DATABASE_NAME=agent_sessions
MONGODB_COLLECTION_NAME=sessions
```
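
In code, the backend selection driven by these variables might look like the following sketch; `select_session_storage` is a hypothetical helper, but the variable names match the ones above:

```python
import os

def select_session_storage() -> dict:
    """Pick a session storage backend from environment variables
    (illustrative helper, not the framework's actual API)."""
    storage_type = os.environ.get("SESSION_STORAGE_TYPE", "memory")
    if storage_type == "mongodb":
        return {
            "type": "mongodb",
            # Connection string is required when MongoDB is selected
            "uri": os.environ["MONGODB_CONNECTION_STRING"],
            "database": os.environ.get("MONGODB_DATABASE_NAME", "agent_sessions"),
            "collection": os.environ.get("MONGODB_COLLECTION_NAME", "sessions"),
        }
    return {"type": "memory"}
```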

### File Storage Configuration

```env
# Local Storage (always enabled)
LOCAL_STORAGE_PATH=./file_storage

# AWS S3 (optional)
AWS_S3_BUCKET=my-agent-files
AWS_REGION=us-east-1
S3_AS_DEFAULT=false

# MinIO (optional)  
MINIO_ENDPOINT=localhost:9000
MINIO_ACCESS_KEY=minioadmin
MINIO_SECRET_KEY=minioadmin
MINIO_BUCKET=agent-files

# Routing Rules
IMAGE_STORAGE_BACKEND=s3
VIDEO_STORAGE_BACKEND=s3
FILE_ROUTING_RULES=image/:s3,video/:minio
```
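
`FILE_ROUTING_RULES` maps MIME-type prefixes to storage backends. Its comma-separated `prefix:backend` format can be parsed like this (an illustrative sketch; `parse_routing_rules` is a hypothetical helper):

```python
def parse_routing_rules(rules: str) -> dict:
    """Parse a FILE_ROUTING_RULES string like 'image/:s3,video/:minio'
    into a {mime-prefix: backend} mapping (illustrative helper)."""
    mapping = {}
    for rule in rules.split(","):
        # Split on the first ':' so backend names may contain colons
        prefix, _, backend = rule.partition(":")
        mapping[prefix.strip()] = backend.strip()
    return mapping

print(parse_routing_rules("image/:s3,video/:minio"))
# -> {'image/': 's3', 'video/': 'minio'}
```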

## 📚 API Reference

### Core Endpoints

#### Send Message

Send a message to the agent and receive a complete response.

**Endpoint:** `POST /message`

**Request Body:**

```json
{
  "query": "Your message here",
  "parts": [],
  "system_prompt": "Optional custom system prompt",
  "agent_config": {
    "temperature": 0.8,
    "max_tokens": 1000,
    "model_selection": "gpt-4"
  },
  "session_id": "optional-session-id",
  "correlation_id": "optional-correlation-id"
}
```

**Response:**

```json
{
  "response_text": "Agent's response",
  "parts": [{"type": "text", "text": "Agent's response"}],
  "session_id": "generated-or-provided-session-id",
  "user_id": "user1",
  "correlation_id": "correlation-id-if-provided",
  "conversation_id": "unique-id-for-this-exchange"
}
```

#### Session Management

**Initialize Session:** `POST /init`

```json
{
  "user_id": "string",
  "correlation_id": "string",
  "session_id": "string",
  "configuration": {
    "system_prompt": "string",
    "model_name": "string",
    "model_config": {
      "temperature": 0.7,
      "token_limit": 1000
    }
  }
}
```

**End Session:** `POST /end`

```json
{
  "session_id": "string"
}
```

**List Sessions:** `GET /sessions`

**Get History:** `GET /sessions/{session_id}/history`

**Find Sessions by Correlation ID:** `GET /sessions/by-correlation/{correlation_id}`

### Configuration Endpoints

**Get Model Configuration:** `GET /config/models`

**Validate Model:** `GET /config/validate/{model_name}`

**Get System Prompt:** `GET /system-prompt`

### File Storage Endpoints

**Upload File:** `POST /files/upload`

**Download File:** `GET /files/{file_id}/download`

**Get File Metadata:** `GET /files/{file_id}/metadata`

**List Files:** `GET /files`

**Delete File:** `DELETE /files/{file_id}`

**Storage Statistics:** `GET /files/stats`

For complete API documentation, visit `http://localhost:8000/docs` when the server is running.

## 💻 Client Examples

### Python Client

```python
import requests

class AgentClient:
    def __init__(self, base_url="http://localhost:8000"):
        self.base_url = base_url
        self.session = requests.Session()
  
    def send_message(self, message, session_id=None, correlation_id=None):
        """Send a message and get complete response."""
        payload = {"query": message, "parts": []}
        if session_id:
            payload["session_id"] = session_id
        if correlation_id:
            payload["correlation_id"] = correlation_id
    
        response = self.session.post(f"{self.base_url}/message", json=payload)
        response.raise_for_status()
        return response.json()
  
    def init_session(self, user_id, configuration, correlation_id=None):
        """Initialize a new session with configuration."""
        payload = {"user_id": user_id, "configuration": configuration}
        if correlation_id:
            payload["correlation_id"] = correlation_id
    
        response = self.session.post(f"{self.base_url}/init", json=payload)
        response.raise_for_status()
        return response.json()

# Usage
client = AgentClient()
session_data = client.init_session(
    user_id="user123",
    configuration={
        "system_prompt": "You are a helpful assistant",
        "model_name": "gpt-4",
        "model_config": {"temperature": 0.7}
    }
)

response = client.send_message("Hello!", session_id=session_data["session_id"])
print(response["response_text"])
```

### JavaScript Client

```javascript
class AgentClient {
    constructor(baseUrl = 'http://localhost:8000') {
        this.baseUrl = baseUrl;
    }
  
    async sendMessage(message, options = {}) {
        const payload = {query: message, parts: [], ...options};
        const response = await fetch(`${this.baseUrl}/message`, {
            method: 'POST',
            headers: {'Content-Type': 'application/json'},
            body: JSON.stringify(payload)
        });
        return response.json();
    }
  
    async initSession(userId, configuration, options = {}) {
        const payload = {user_id: userId, configuration, ...options};
        const response = await fetch(`${this.baseUrl}/init`, {
            method: 'POST',
            headers: {'Content-Type': 'application/json'},
            body: JSON.stringify(payload)
        });
        return response.json();
    }
}

// Usage
const client = new AgentClient();
const session = await client.initSession('user123', {
    system_prompt: 'You are a helpful assistant',
    model_name: 'gpt-4',
    model_config: {temperature: 0.7}
});

const response = await client.sendMessage('Hello!', {session_id: session.session_id});
console.log(response.response_text);
```

## 🌐 Web Interface

Access the web interface at `http://localhost:8000/ui` for interactive testing with:

- Real-time message streaming
- Session management
- System prompt configuration
- Model selection and parameter tuning
- File upload and management
- Conversation history

## 🔧 Advanced Usage

### Creating Custom Agents

#### LlamaIndex Agents

```python
from agent_framework import LlamaIndexAgent
from llama_index.core.tools import FunctionTool
from typing import List

class MyLlamaAgent(LlamaIndexAgent):
    def get_agent_prompt(self) -> str:
        return "You are a specialized assistant for data analysis."
  
    def get_agent_tools(self) -> List[FunctionTool]:
        def analyze_data(data: str) -> str:
            """Analyze the provided data."""
            return f"Analysis of {data}"
        
        return [FunctionTool.from_defaults(fn=analyze_data)]
```

#### Microsoft Agent Framework

```python
from agent_framework import MicrosoftAgent

class MyMicrosoftAgent(MicrosoftAgent):
    # Implement Microsoft-specific agent logic
    pass
```

#### Generic Custom Agent

```python
from agent_framework import BaseAgent, StructuredAgentInput, StructuredAgentOutput

class MyCustomAgent(BaseAgent):
    async def handle_message(self, session_id: str, agent_input: StructuredAgentInput):
        # Your custom logic here
        return StructuredAgentOutput(response_text="Custom response")
  
    async def get_state(self, session_id: str):
        # Return agent state for persistence
        return {"session_id": session_id, "data": {}}
  
    async def load_state(self, state: dict):
        # Load agent state from persistence
        pass
```

### System Prompt Configuration

```python
# Server-level default
class MyAgent(AgentInterface):
    def get_system_prompt(self) -> str:
        return "You are a helpful assistant."

# Per-session override
response = client.send_message(
    "Help me with coding",
    system_prompt="You are a coding expert specializing in Python."
)
```

### Multi-Modal Support

```python
# Send image with message
payload = {
    "query": "What's in this image?",
    "parts": [{
        "type": "image_url",
        "image_url": {"url": "data:image/jpeg;base64,/9j/4AAQ..."}
    }]
}
```
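
Such a part can be assembled from raw image bytes with the standard library; `image_part` is a hypothetical helper, but the payload shape matches the example above:

```python
import base64

def image_part(image_bytes: bytes, mime: str = "image/jpeg") -> dict:
    """Wrap raw image bytes in the data-URL message-part shape shown above
    (illustrative helper)."""
    encoded = base64.b64encode(image_bytes).decode("ascii")
    return {
        "type": "image_url",
        "image_url": {"url": f"data:{mime};base64,{encoded}"},
    }

# Stand-in bytes; in practice read them from an image file
part = image_part(b"\xff\xd8\xff\xe0")
payload = {"query": "What's in this image?", "parts": [part]}
```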

## 🔒 Authentication

The framework supports two authentication methods:

### 1. Basic Authentication

```env
REQUIRE_AUTH=true
BASIC_AUTH_USERNAME=admin
BASIC_AUTH_PASSWORD=your-secure-password
```

```bash
curl -X POST http://localhost:8000/message \
  -u admin:your-secure-password \
  -H "Content-Type: application/json" \
  -d '{"query": "Hello!"}'
```

### 2. API Key Authentication

```env
REQUIRE_AUTH=true
API_KEYS=sk-key-1,sk-key-2,sk-key-3
```

```bash
curl -H "Authorization: Bearer sk-key-1" \
  http://localhost:8000/message \
  -H "Content-Type: application/json" \
  -d '{"query": "Hello!"}'
```

### Security Best Practices

1. Use strong API keys (generate with `openssl rand -base64 32`)
2. Rotate keys regularly
3. Never hardcode credentials
4. Always use HTTPS in production
5. Minimize key scope
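
Point 1 can also be done without openssl, using Python's `secrets` module (comparable strength, ~32 bytes of entropy; the `sk-` prefix is just a convention):

```python
import secrets

# Generate a strong, URL-safe API key (~32 bytes of entropy),
# similar in spirit to `openssl rand -base64 32`.
api_key = "sk-" + secrets.token_urlsafe(32)
print(api_key)
```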

## 📚 Documentation

For complete documentation, see the [Documentation](#-documentation) section at the top of this README.

## 📖 Examples

The `examples/` directory contains working examples for different use cases:

- **[simple_agent.py](examples/simple_agent.py)** - Basic LlamaIndex agent with tools
- **[agent_with_file_storage.py](examples/agent_with_file_storage.py)** - Agent with file upload/download
- **[agent_with_mcp.py](examples/agent_with_mcp.py)** - Agent with MCP server integration
- **[custom_framework_agent.py](examples/custom_framework_agent.py)** - BaseAgent example for custom frameworks

Each example is self-contained and runnable. See the file headers for usage instructions.

## 📝 Contributing

1. Fork the repository
2. Create a feature branch
3. Make your changes
4. Add tests for new functionality
5. Submit a pull request

## 📄 License

This project is licensed under the MIT License. See the [LICENSE](LICENSE) file for details.

## 🤝 Support

- **Documentation**: See docs/ folder for detailed guides
- **Examples**: Check examples/ folder for usage examples
- **Issues**: Report bugs via GitHub Issues
- **API Docs**: Visit `http://localhost:8000/docs` when server is running

---

**Quick Links:**

- [Web Interface](http://localhost:8000/ui) - Interactive testing
- [API Documentation](http://localhost:8000/docs) - OpenAPI/Swagger docs
- [Configuration Test](http://localhost:8000/config/models) - Validate setup
