Metadata-Version: 2.1
Name: rwkv_simple
Version: 0.1.0
Summary: Easy-to-use RWKV-6 (x060) implementation
Author-email: Ronsor Labs <ronsor@ronsor.com>
Project-URL: Homepage, https://github.com/ronsor/rwkv-simple
Project-URL: Issues, https://github.com/ronsor/rwkv-simple/issues
Classifier: Programming Language :: Python :: 3
Classifier: License :: OSI Approved :: Apache Software License
Classifier: Operating System :: OS Independent
Requires-Python: >=3.10
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: torch >=2.1.0
Requires-Dist: einops >=0.7.0
Provides-Extra: fla
Requires-Dist: fla ; extra == 'fla'

# rwkv_simple

`rwkv_simple` is an easy-to-use implementation of RWKV-6 (x060). We support
multiple WKV kernels (a Triton-based one from Flash Linear Attention, the official
CUDA kernel, and a pure-PyTorch one).
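For reference, the WKV6 recurrence that all three kernels compute can be sketched in a few lines of plain PyTorch. This is a minimal per-timestep version for illustration (the tensor layout `(B, H, T, D)` and argument names are assumptions here, not necessarily `rwkv_simple`'s actual API), and it is far slower than the Triton or CUDA kernels:

```python
import torch

def wkv6_naive(r, k, v, w, u):
    """Naive O(T*D^2) WKV6 recurrence, for illustration only.

    r, k, v, w: (B, H, T, D) receptance/key/value/decay tensors,
                with per-channel decay w in (0, 1).
    u:          (H, D) per-head "bonus" applied to the current token.
    """
    B, H, T, D = r.shape
    S = torch.zeros(B, H, D, D, dtype=r.dtype, device=r.device)
    out = torch.empty_like(v)
    for t in range(T):
        # outer product k_t v_t^T, shape (B, H, D, D)
        kv = k[:, :, t, :, None] * v[:, :, t, None, :]
        # output reads the previous state plus the u-weighted current token
        out[:, :, t] = torch.einsum(
            'bhi,bhij->bhj', r[:, :, t], S + u[None, :, :, None] * kv
        )
        # decay the state per key channel, then accumulate the new pair
        S = w[:, :, t, :, None] * S + kv
    return out
```

A fused kernel produces the same result without materializing the state at every step, which is where the performance difference between the backends comes from.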

## Flash Linear Attention

We use the WKV6 kernel from the Flash Linear Attention (FLA) project by default,
so for best performance, please install FLA directly from its repository:

```shell
pip install -U git+https://github.com/sustcsonglin/flash-linear-attention
```

See: <https://github.com/sustcsonglin/flash-linear-attention>.
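Because FLA is an optional dependency, backend selection typically boils down to a guarded import. The sketch below shows that pattern; the `KERNEL` name is hypothetical and not part of `rwkv_simple`'s actual interface:

```python
# Hypothetical kernel-selection sketch; rwkv_simple's real module layout may differ.
try:
    # FLA ships its Triton WKV6 kernels under fla.ops.rwkv6
    from fla.ops.rwkv6 import chunk_rwkv6  # noqa: F401
    HAS_FLA = True
except ImportError:
    HAS_FLA = False

# fall back to the CUDA or pure-PyTorch kernel when FLA is unavailable
KERNEL = "fla" if HAS_FLA else "torch"
```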

## License

Copyright © 2024 Ronsor Labs. Licensed under the Apache License, version 2.0.
