Metadata-Version: 2.1
Name: llama-index-readers-lilac
Version: 0.0.1
Summary: llama-index readers lilac integration
License: MIT
Author: Your Name
Author-email: you@example.com
Requires-Python: >=3.9,<3.12
Classifier: License :: OSI Approved :: MIT License
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Requires-Dist: lilac (>=0.1.5,<0.2.0)
Requires-Dist: llama-index-core (>=0.9.32,<0.10.0)
Description-Content-Type: text/markdown

# Lilac reader

[Lilac](https://lilacml.com/) is an open-source product that helps you analyze, enrich, and clean unstructured data with AI.

You can use it to analyze, clean, structure, and label data for downstream LlamaIndex and LangChain applications.

## Lilac projects

This reader assumes you've already run Lilac locally and have a project directory with a dataset. For more details on Lilac projects, see [Lilac Projects](https://lilacml.com/projects/projects.html).

You can use any LlamaIndex loader to load data into Lilac, clean data, and then bring it back into LlamaIndex Documents.
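Cleaning for downstream retrieval often starts with simple steps such as dropping empty rows and near-duplicate texts before (or after) a Lilac pass. A minimal, library-free sketch of that kind of pre-filter (the function name and normalization rules here are illustrative, not part of Lilac's API):

```python
def dedupe_texts(texts: list[str]) -> list[str]:
    """Drop empty strings and case/whitespace-insensitive duplicates,
    keeping the first occurrence of each text."""
    seen = set()
    cleaned = []
    for text in texts:
        # Normalize whitespace and case so trivially different copies collide.
        key = " ".join(text.lower().split())
        if not key or key in seen:
            continue
        seen.add(key)
        cleaned.append(text)
    return cleaned


texts = ["Deep nets", "deep  nets", "", "ImageNet labels"]
print(dedupe_texts(texts))  # ['Deep nets', 'ImageNet labels']
```

In practice you would apply a filter like this to the `text` field of your loaded Documents; Lilac's UI then handles the heavier enrichment and labeling.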

## Usage

### LlamaIndex => Lilac

See [this notebook](https://github.com/lilacai/lilac/blob/main/notebooks/LlamaIndexLoader.ipynb) for getting data into Lilac from LlamaHub.

```python
import lilac as ll

# Requires the arXiv reader integration:
#   pip install llama-index-readers-papers
# See: https://llamahub.ai/l/papers-arxiv
from llama_index.readers.papers import ArxivReader

loader = ArxivReader()
documents = loader.load_data(search_query="au:Karpathy")

# Set the project directory for Lilac.
ll.set_project_dir("./data")

# This assumes you already have a lilac project set up.
# If you don't, use ll.init(project_dir='./data')
ll.create_dataset(
    config=ll.DatasetConfig(
        namespace="local",
        name="arxiv-karpathy",
        source=ll.LlamaIndexDocsSource(
            # documents comes from the loader.load_data call in the previous cell.
            documents=documents
        ),
    )
)

# Start a Lilac server to view and clean the dataset.
# Once you've cleaned it, you can load it back into LlamaIndex.
ll.start_server(project_dir="./data")
```

### Lilac => LlamaIndex Documents

```python
from llama_index.core import VectorStoreIndex
from llama_index.readers.lilac import LilacReader

loader = LilacReader()
documents = loader.load_data(
    project_dir="~/my_project",
    # The name of your dataset in the project dir.
    dataset="local/arxiv-karpathy",
)

index = VectorStoreIndex.from_documents(documents)

query_engine = index.as_query_engine()
response = query_engine.query("How are ImageNet labels validated?")
```

This loader is designed to load cleaned Lilac data back into [LlamaIndex](https://github.com/run-llama/llama_index/tree/main/llama_index) and/or for subsequent use in a [LangChain](https://github.com/hwchase17/langchain) agent.

