Metadata-Version: 2.1
Name: lexoid
Version: 0.1.12
Summary: 
Requires-Python: >=3.10,<4.0
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Programming Language :: Python :: 3.13
Requires-Dist: bs4 (>=0.0.2,<0.0.3)
Requires-Dist: docx2pdf (>=0.1.8,<0.2.0)
Requires-Dist: google-generativeai (>=0.8.1,<0.9.0)
Requires-Dist: huggingface-hub (>=0.27.0,<0.28.0)
Requires-Dist: loguru (>=0.7.2,<0.8.0)
Requires-Dist: markdown (>=3.7,<4.0)
Requires-Dist: markdownify (>=0.13.1,<0.14.0)
Requires-Dist: nest-asyncio (>=1.6.0,<2.0.0)
Requires-Dist: openai (>=1.47.0,<2.0.0)
Requires-Dist: opencv-python (>=4.10.0.84,<5.0.0.0)
Requires-Dist: openpyxl (>=3.1.5,<4.0.0)
Requires-Dist: pandas (>=2.2.3,<3.0.0)
Requires-Dist: pdfplumber (>=0.11.4,<0.12.0)
Requires-Dist: pikepdf (>=9.3.0,<10.0.0)
Requires-Dist: playwright (>=1.49.0,<2.0.0)
Requires-Dist: pptx2md (>=2.0.6,<3.0.0)
Requires-Dist: pypdfium2 (>=4.30.0,<5.0.0)
Requires-Dist: pyqt5 (>=5.15.11,<6.0.0) ; platform_system != "debian"
Requires-Dist: pyqtwebengine (>=5.15.7,<6.0.0) ; platform_system != "debian"
Requires-Dist: python-docx (>=1.1.2,<2.0.0)
Requires-Dist: python-dotenv (>=1.0.0,<2.0.0)
Requires-Dist: tabulate (>=0.9.0,<0.10.0)
Requires-Dist: together (>=1.4.0,<2.0.0)
Description-Content-Type: text/markdown

<div align="center">
  
```
 ___      _______  __   __  _______  ___   ______  
|   |    |       ||  |_|  ||       ||   | |      | 
|   |    |    ___||       ||   _   ||   | |  _    |
|   |    |   |___ |       ||  | |  ||   | | | |   |
|   |___ |    ___| |     | |  |_|  ||   | | |_|   |
|       ||   |___ |   _   ||       ||   | |       |
|_______||_______||__| |__||_______||___| |______| 
                                                                                                    
```
  
</div>

[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/oidlabs-com/Lexoid/blob/main/examples/example_notebook_colab.ipynb)
[![GitHub license](https://img.shields.io/badge/License-Apache_2.0-blue.svg)](https://github.com/oidlabs-com/Lexoid/blob/main/LICENSE)
[![PyPI](https://img.shields.io/pypi/v/lexoid)](https://pypi.org/project/lexoid/)
[![Docs](https://github.com/oidlabs-com/Lexoid/actions/workflows/deploy_docs.yml/badge.svg)](https://oidlabs-com.github.io/Lexoid/)

Lexoid is an efficient document parsing library that supports both LLM-based and non-LLM-based (static) parsing of PDF documents.

[Documentation](https://oidlabs-com.github.io/Lexoid/)

## Motivation

- Leverage the multi-modal capabilities of modern LLMs
- Make document parsing convenient for users
- Encourage collaboration through a permissive license

## Installation

### Installing with pip

```shell
pip install lexoid
```

To use LLM-based parsing, set the following environment variables, or create a `.env` file that defines them:

```
OPENAI_API_KEY=""
GOOGLE_API_KEY=""
```
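As an alternative to a `.env` file, the keys can also be set programmatically before importing Lexoid; a minimal sketch (the values shown are placeholders, not real keys):

```python
import os

# Set the API keys in the process environment before importing lexoid.
# setdefault leaves any key that is already exported untouched.
os.environ.setdefault("OPENAI_API_KEY", "your-openai-key")
os.environ.setdefault("GOOGLE_API_KEY", "your-google-key")
```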

Optionally, to use `Playwright` for retrieving web content (instead of the `requests` library):

```shell
playwright install --with-deps --only-shell chromium
```

### Building `.whl` from source

```shell
make build
```

### Creating a local installation

To install dependencies:

```shell
make install
```

or, to install with dev dependencies:

```shell
make dev
```

To activate the virtual environment:

```shell
source .venv/bin/activate
```

## Usage

[Example Notebook](https://github.com/oidlabs-com/Lexoid/blob/main/examples/example_notebook.ipynb)

[Example Colab Notebook](https://colab.research.google.com/github/oidlabs-com/Lexoid/blob/main/examples/example_notebook_colab.ipynb)

Here's a quick example to parse documents using Lexoid:

```python
from lexoid.api import parse

parsed_md = parse("https://www.justice.gov/eoir/immigration-law-advisor", parser_type="LLM_PARSE")["raw"]
# or
pdf_path = "path/to/immigration-law-advisor.pdf"
parsed_md = parse(pdf_path, parser_type="LLM_PARSE")["raw"]

print(parsed_md)
```

### Parameters

- `path` (str): The file path or URL to parse.
- `parser_type` (str, optional): The parser to use, `"LLM_PARSE"` or `"STATIC_PARSE"`. Defaults to `"AUTO"`, which selects a parser automatically.
- `pages_per_split` (int, optional): Number of pages per split when chunking the document. Defaults to 4.
- `max_threads` (int, optional): Maximum number of threads for parallel processing. Defaults to 4.
- `**kwargs`: Additional arguments passed to the parser.
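The parameters above can be wrapped in a small helper; a sketch (the helper name and the specific settings are illustrative, not part of the Lexoid API):

```python
def parse_static(pdf_path: str, pages_per_split: int = 2, max_threads: int = 8):
    """Illustrative helper: statically parse a text-based PDF without LLM
    calls, splitting it into 2-page chunks processed on up to 8 threads."""
    from lexoid.api import parse  # imported lazily so the sketch is self-contained

    return parse(
        pdf_path,
        parser_type="STATIC_PARSE",
        pages_per_split=pages_per_split,
        max_threads=max_threads,
    )
```

Because `parser_type` is forced to `"STATIC_PARSE"`, a call like `parse_static("path/to/document.pdf")` would run entirely locally, with no API keys required.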

## Supported API Providers
* Google
* OpenAI
* Hugging Face
* Together AI
* OpenRouter

## Benchmark

Results are aggregated over 5 iterations for each of 5 documents.

_Note:_ Benchmarks are currently run in a zero-shot setting.

| Rank | Model                                                 | Mean Similarity | Std. Dev. | Time (s) | Cost($)  |
| ---- | ----------------------------------------------------- | --------------- | --------- | -------- | -------- |
| 1    | gemini-2.0-flash                                      | 0.829           | 0.102     | 7.41     | 0.000480 |
| 2    | gemini-2.0-flash-001                                  | 0.814           | 0.176     | 6.85     | 0.000421 |
| 3    | gemini-1.5-flash                                      | 0.797           | 0.143     | 9.54     | 0.000238 |
| 4    | gemini-2.0-pro-exp                                    | 0.764           | 0.227     | 11.95    |   TBA    |
| 5    | gemini-2.0-flash-thinking-exp                         | 0.746           | 0.266     | 10.46    |   TBA    |
| 6    | gemini-1.5-pro                                        | 0.732           | 0.265     | 11.44    | 0.003332 |
| 7    | gpt-4o                                                | 0.687           | 0.247     | 10.16    | 0.004736 |
| 8    | gpt-4o-mini                                           | 0.642           | 0.213     | 9.71     | 0.000275 |
| 9    | gemma-3-27b-it (via OpenRouter)                       | 0.628           | 0.299     | 18.79    | 0.000096 |
| 10   | gemini-1.5-flash-8b                                   | 0.551           | 0.223     | 3.91     | 0.000055 |
| 11   | Llama-Vision-Free (via Together AI)                   | 0.531           | 0.198     | 6.93     | 0        |
| 12   | Llama-3.2-11B-Vision-Instruct-Turbo (via Together AI) | 0.524           | 0.192     | 3.68     | 0.000060 |
| 13   | qwen/qwen-2.5-vl-7b-instruct (via OpenRouter)         | 0.482           | 0.209     | 11.53    | 0.000052 |
| 14   | Llama-3.2-90B-Vision-Instruct-Turbo (via Together AI) | 0.461           | 0.306     | 19.26    | 0.000426 |
| 15   | Llama-3.2-11B-Vision-Instruct (via Hugging Face)      | 0.451           | 0.257     | 4.54     |   0      |
| 16   | microsoft/phi-4-multimodal-instruct (via OpenRouter)  | 0.366           | 0.287     | 10.80    | 0.000019 |

