loongson/pypi/: webrtcvad-2.0.10 metadata and description
Python interface to the Google WebRTC Voice Activity Detector (VAD)
author | John Wiseman |
author_email | jjwiseman@gmail.com |
classifiers | |
keywords | speechrecognition asr voiceactivitydetection vad webrtc |
license | MIT |
provides_extras | dev |
requires_dist | |
Because this project isn't in the mirror_whitelist, no releases from root/pypi are included.
File | Tox results | History
---|---|---
webrtcvad-2.0.10-cp38-cp38-linux_loongarch64.whl | |
py-webrtcvad
This is a Python interface to the WebRTC Voice Activity Detector (VAD). It is compatible with Python 2 and Python 3.
A VAD classifies a piece of audio data as being voiced or unvoiced. It can be useful for telephony and speech recognition.
The VAD that Google developed for the WebRTC project is reportedly one of the best available, being fast, modern and free.
How to use it
Install the webrtcvad module:
pip install webrtcvad
Create a Vad object:
import webrtcvad
vad = webrtcvad.Vad()
Optionally, set its aggressiveness mode, which is an integer between 0 and 3. 0 is the least aggressive about filtering out non-speech, 3 is the most aggressive. (You can also set the mode when you create the VAD, e.g. vad = webrtcvad.Vad(3)):
vad.set_mode(1)
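For example, passing the mode to the constructor and calling set_mode afterwards configure the same aggressiveness; the short sketch below just shows the two equivalent forms side by side:

import webrtcvad
vad_a = webrtcvad.Vad(3)   # set aggressiveness when creating the VAD
vad_b = webrtcvad.Vad()
vad_b.set_mode(3)          # or set it afterwards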
Give it a short segment (“frame”) of audio. The WebRTC VAD only accepts 16-bit mono PCM audio, sampled at 8000, 16000, or 32000 Hz. A frame must be either 10, 20, or 30 ms in duration:
# Run the VAD on 10 ms of silence. The result should be False.
sample_rate = 16000
frame_duration = 10  # ms
frame = b'\x00\x00' * int(sample_rate * frame_duration / 1000)
print('Contains speech: %s' % (vad.is_speech(frame, sample_rate)))
See example.py for a more detailed example that will process a .wav file, find the voiced segments, and write each one as a separate .wav.
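As a rough illustration of that workflow (this is a minimal sketch, not the bundled example.py), the code below splits a buffer of raw 16-bit mono PCM into 30 ms frames and checks each one; the pcm_data buffer here is an assumed placeholder of silence standing in for audio read from a file:

import webrtcvad

vad = webrtcvad.Vad(2)

sample_rate = 16000      # Hz; must be 8000, 16000, or 32000
frame_ms = 30            # frame length; must be 10, 20, or 30 ms
bytes_per_frame = int(sample_rate * frame_ms / 1000) * 2  # 16-bit samples = 2 bytes each

# Placeholder: ten frames of silence. In practice this would be raw PCM
# read from a .wav file (e.g. with the standard-library wave module).
pcm_data = b'\x00\x00' * int(sample_rate * frame_ms / 1000) * 10

for offset in range(0, len(pcm_data) - bytes_per_frame + 1, bytes_per_frame):
    frame = pcm_data[offset:offset + bytes_per_frame]
    if vad.is_speech(frame, sample_rate):
        print('speech frame at byte offset %d' % offset)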
How to run unit tests
To run unit tests:
pip install -e ".[dev]"
python setup.py test