Metadata-Version: 2.1
Name: ai-care
Version: 0.1.2
Summary: AI-Care endows AI with the capability to speak proactively. With simple settings, AI-Care allows your AI to proactively care for you.
Author-email: Xueao Chao <chaoxueao@gmail.com>
License: MIT License
        
        Copyright (c) 2023 Xueao Chao
        
        Permission is hereby granted, free of charge, to any person obtaining a copy
        of this software and associated documentation files (the "Software"), to deal
        in the Software without restriction, including without limitation the rights
        to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
        copies of the Software, and to permit persons to whom the Software is
        furnished to do so, subject to the following conditions:
        
        The above copyright notice and this permission notice shall be included in all
        copies or substantial portions of the Software.
        
        THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
        IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
        FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
        AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
        LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
        OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
        SOFTWARE.
        
Project-URL: Homepage, https://github.com/happyapplehorse/ai-care
Project-URL: Issues, https://github.com/happyapplehorse/ai-care/issues
Keywords: AI,Proactive AI,LLM
Classifier: Development Status :: 3 - Alpha
Classifier: Intended Audience :: Developers
Classifier: Topic :: Software Development
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Classifier: Topic :: Software Development :: Libraries
Classifier: Topic :: Software Development :: Libraries :: Python Modules
Classifier: Topic :: Utilities
Classifier: Programming Language :: Python :: 3
Classifier: License :: OSI Approved :: MIT License
Classifier: Operating System :: OS Independent
Requires-Python: >=3.10
Description-Content-Type: text/markdown
License-File: LICENSE

# AI-Care
![GitHub License](https://img.shields.io/github/license/happyapplehorse/ai-care)
![PyPI - Version](https://img.shields.io/pypi/v/ai-care)
![GitHub Workflow Status (with event)](https://img.shields.io/github/actions/workflow/status/happyapplehorse/ai-care/python-publish.yml?logo=pypi)
![GitHub Workflow Status (with event)](https://img.shields.io/github/actions/workflow/status/happyapplehorse/ai-care/codecov.yml?logo=pytest&label=test)
[![codecov](https://codecov.io/gh/happyapplehorse/ai-care/graph/badge.svg?token=G091OOWBDF)](https://codecov.io/gh/happyapplehorse/ai-care)
![Zero Dependencies](https://img.shields.io/badge/dependencies-zero-brightgreen)

Current AI models are only capable of passively responding to user inquiries
and lack the ability to initiate conversations.
AI-Care endows AI with the capability to speak proactively.
With simple settings, AI-Care allows your AI to proactively care for you.

## Highlights✨

1. 💵Low Cost: AI-Care is designed to minimize both token usage and API call frequency.
It operates with O(1) cost complexity, meaning that costs do not grow with how long it stays active.
2. 🕊️Low Intrusiveness: AI-Care provides its services alongside existing systems,
with virtually zero intrusion into the original code.
This allows for easy integration of AI-Care services into existing systems.
3. 🌍Model Universality: Compatible with all LLMs (Large Language Models),
AI-Care does not rely on function-call features or on any specific way the model is used.

## Usage

1. Define the `to_llm` and `to_user` interfaces. AICare uses the `to_llm` interface to send
messages to the LLM, and the `to_user` interface to send messages to the user:
```python
from typing import Generator

from ai_care import AICareContext


def to_llm_method(chat_context, to_llm_messages: list[AICareContext]) -> str | Generator[str, None, None]:
    # Convert `to_llm_messages` into the message format of the LLM you are using
    # and send them to the LLM.
    # Return the LLM's reply: a plain string in non-stream mode,
    # or a string generator if you use stream mode.
    # If stream mode is used here, AICare will also automatically stream
    # its messages to the user.
    pass


def to_user_method(to_user_message: str | Generator[str, None, None]) -> None:
    # Process messages from AICare just as you would a normal LLM reply.
    # If `to_llm_method` uses stream mode, this method must also be able to
    # receive and handle a string generator.
    pass
```

2. Instantiate AICare:
```python
ai_care = AICare()
```

3. Register "to_llm" and "to_user" methods:
```python
ai_care.register_to_llm_method(to_llm_method)
ai_care.register_to_user_method(to_user_method)
```

4. Use AICare:
```python
# After each round of conversation, or whenever the AICare service is needed
ai_care.chat_update(chat_context)
```
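To see the full call sequence end to end, here is a runnable sketch. `MiniAICare` is a hypothetical stand-in for the real `AICare` class, used only to make the wiring executable here; it routes one reply immediately rather than scheduling a delayed, proactive message:

```python
from typing import Callable


class MiniAICare:
    """Hypothetical stand-in for ai_care.AICare, showing only the call order."""

    def __init__(self) -> None:
        self.to_llm: Callable | None = None
        self.to_user: Callable | None = None

    def register_to_llm_method(self, method: Callable) -> None:
        self.to_llm = method

    def register_to_user_method(self, method: Callable) -> None:
        self.to_user = method

    def chat_update(self, chat_context) -> None:
        # The real AICare waits for the configured delay before speaking;
        # this stand-in just routes one reply straight through.
        reply = self.to_llm(chat_context, [])
        self.to_user(reply)


care = MiniAICare()
care.register_to_llm_method(lambda ctx, msgs: "Are you still there?")
care.register_to_user_method(print)
care.chat_update(chat_context=None)  # prints "Are you still there?"
```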

## AI-Care settings
```python
# Set how long to wait before AI-Care is activated.
ai_care.set_config(key="delay", value=60)

# Set how many times AI-Care may select "ASK_LATER"; setting it to 0 disables this option.
ai_care.set_config(key="ask_later_count_limit", value=1)

# Set the system's default recursion depth for ask.
ai_care.set_config(key="ask_depth", value=1)

# Set the maximum number of chat intervals the system automatically records.
ai_care.set_config(key="n_chat_intervals", value=20)
```

## License

This project is licensed under the [MIT License](./LICENSE).
