Metadata-Version: 2.1
Name: search-llm-actions
Version: 0.0.1
Summary: search_llm_actions provides a simple template to collect action trajectories from local (e.g. vllm servers) or remote (e.g. togetherai api) llms by search.
Home-page: https://github.com/bebetterest/search_llm_actions
Author: Betterest
Author-email: bebetterest@outlook.com
License: MIT
Classifier: License :: OSI Approved :: MIT License
Classifier: Programming Language :: Python
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.6
Classifier: Programming Language :: Python :: Implementation :: CPython
Classifier: Programming Language :: Python :: Implementation :: PyPy
Requires-Python: >=3.6.0
Description-Content-Type: text/markdown
Requires-Dist: vllm
Requires-Dist: tqdm
Requires-Dist: together
Requires-Dist: jsonlines
Requires-Dist: transformers


# search_llm_actions

search_llm_actions provides a simple template for collecting action trajectories via search, from either local llms (vllm servers on multiple gpus are supported) or remote llms (the togetherai api is supported).

## Installation

```bash
pip install search_llm_actions
```

## Customization

- You can find a minimal example in `search_llm_actions/main.py`.
- You need to override the `init_root_node`, `take_parallel_actions`, `simulate` & `detect_end` functions in `search_llm_actions/search.py` to adapt to your own task.
- You need to override `deploy_vllm_multi.sh` & `test_vllm_multi.sh` in `search_llm_actions/scripts` to adapt to your own llm server.
- You need to implement a new subclass of `Caller` in `search_llm_actions/llm_caller.py` to adapt to your own llm server.
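As a rough illustration of the last point, a custom `Caller` subclass might look like the sketch below. The single-method interface shown here (a `call` method taking a prompt and returning text) is an assumption for illustration only; check the actual base class in `search_llm_actions/llm_caller.py` for the real signature before adapting it.

```python
# Hypothetical sketch -- the stand-in base class below mimics what
# `search_llm_actions.llm_caller.Caller` might look like; the real
# interface may differ.
class Caller:
    """Stand-in for the package's Caller base class (assumed interface)."""

    def call(self, prompt: str) -> str:
        raise NotImplementedError


class EchoCaller(Caller):
    """Toy subclass that 'queries' an llm by echoing the prompt back.

    Replace the body of `call` with a request to your own llm server
    (e.g. an HTTP call to a local vllm endpoint).
    """

    def call(self, prompt: str) -> str:
        # In a real subclass, send `prompt` to your server and return
        # the generated completion instead of this echo.
        return f"echo: {prompt}"


caller = EchoCaller()
print(caller.call("hello"))  # -> echo: hello
```

The point of the subclass boundary is that the search loop only needs a way to turn a prompt into a completion, so swapping llm backends should not require touching the search code itself.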

enjoy:)
🤯
