# Python for Scientific Audio

[![Awesome](https://cdn.rawgit.com/sindresorhus/awesome/d7305f38d29fed78fa85652e3a63e154dd8e8829/media/badge.svg)](https://github.com/sindresorhus/awesome) [![Build Status](https://github.com/faroit/awesome-python-scientific-audio/workflows/CI/badge.svg)](https://github.com/faroit/awesome-python-scientific-audio/actions?query=workflow%3ACI+branch%3Amaster+event%3Apush)

The aim of this repository is to create a comprehensive, curated list of Python software and tools related to and used for scientific research in audio/music applications.

## Contents

* [Audio Related Packages](#audio-related-packages)
  - [Read-Write](#read-write)
  - [Transformations - General DSP](#transformations---general-dsp)
  - [Feature extraction](#feature-extraction)
  - [Data augmentation](#data-augmentation)
  - [Speech Processing](#speech-processing)
  - [Environmental Sounds](#environmental-sounds)
  - [Perceptual Models - Auditory Models](#perceptual-models---auditory-models)
  - [Source Separation](#source-separation)
  - [Music Information Retrieval](#music-information-retrieval)
  - [Deep Learning](#deep-learning)
  - [Symbolic Music - MIDI - Musicology](#symbolic-music---midi---musicology)
  - [Realtime applications](#realtime-applications)
  - [Web Audio](#web-audio)
  - [Audio Dataset and Dataloaders](#audio-dataset-and-dataloaders)
  - [Wrappers for Audio Plugins](#wrappers-for-audio-plugins)
* [Tutorials](#tutorials)
* [Books](#books)
* [Scientific Papers](#scientific-papers)
* [Other Resources](#other-resources)
* [Related lists](#related-lists)
* [Contributing](#contributing)
* [License](#license)

## Audio Related Packages

- Total number of packages: 66

### Read-Write

* [audiolazy](https://github.com/danilobellini/audiolazy) [:octocat:](https://github.com/danilobellini/audiolazy) [:package:](https://pypi.python.org/pypi/audiolazy/) - Expressive Digital Signal Processing (DSP) package for Python.
* [audioread](https://github.com/beetbox/audioread) [:octocat:](https://github.com/beetbox/audioread) [:package:](https://pypi.python.org/pypi/audioread/) - Cross-library (GStreamer + Core Audio + MAD + FFmpeg) audio decoding.
* [mutagen](https://mutagen.readthedocs.io/) [:octocat:](https://github.com/quodlibet/mutagen) [:package:](https://pypi.python.org/pypi/mutagen) - Reads and writes all kinds of audio metadata for various formats.
* [pyAV](http://docs.mikeboers.com/pyav/) [:octocat:](https://github.com/mikeboers/PyAV) - Pythonic bindings for FFmpeg/Libav.
* [(Py)Soundfile](http://pysoundfile.readthedocs.io/) [:octocat:](https://github.com/bastibe/PySoundFile) [:package:](https://pypi.python.org/pypi/SoundFile) - Library based on libsndfile, CFFI, and NumPy (see the example below).
* [pySox](https://github.com/rabitt/pysox) [:octocat:](https://github.com/rabitt/pysox) [:package:](https://pypi.python.org/pypi/pysox/) - Wrapper for SoX.
* [stempeg](https://github.com/faroit/stempeg) [:octocat:](https://github.com/faroit/stempeg) [:package:](https://pypi.python.org/pypi/stempeg/) - Read/write of STEMS multistream audio.
* [tinytag](https://github.com/devsnd/tinytag) [:octocat:](https://github.com/devsnd/tinytag) [:package:](https://pypi.python.org/pypi/tinytag/) - Reads music metadata of MP3, OGG, FLAC and Wave files.

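
As a quick orientation for the read/write packages above, here is a minimal sketch of round-tripping a file with (Py)Soundfile; the file names are placeholders:

```python
import soundfile as sf

# read an audio file into a float NumPy array plus its sample rate
data, samplerate = sf.read("input.wav")

# e.g. attenuate by 6 dB and write the result back to disk
sf.write("output.wav", 0.5 * data, samplerate)
```
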
### Transformations - General DSP

* [acoustics](http://python-acoustics.github.io/python-acoustics/) [:octocat:](https://github.com/python-acoustics/python-acoustics/) [:package:](https://pypi.python.org/pypi/acoustics) - Useful tools for acousticians.
* [AudioTK](https://github.com/mbrucher/AudioTK) [:octocat:](https://github.com/mbrucher/AudioTK) - DSP filter toolbox (lots of filters).
* [AudioTSM](https://audiotsm.readthedocs.io/) [:octocat:](https://github.com/Muges/audiotsm) [:package:](https://pypi.python.org/pypi/audiotsm/) - Real-time audio time-scale modification procedures.
* [Gammatone](https://github.com/detly/gammatone) [:octocat:](https://github.com/detly/gammatone) - Gammatone filterbank implementation.
* [pyFFTW](http://pyfftw.github.io/pyFFTW/) [:octocat:](https://github.com/pyFFTW/pyFFTW) [:package:](https://pypi.python.org/pypi/pyFFTW/) - Wrapper for FFTW(3).
* [NSGT](https://grrrr.org/research/software/nsgt/) [:octocat:](https://github.com/grrrr/nsgt) [:package:](https://pypi.python.org/pypi/nsgt) - Non-stationary Gabor transform (constant-Q).
* [matchering](https://github.com/sergree/matchering) [:octocat:](https://github.com/sergree/matchering) [:package:](https://pypi.org/project/matchering/) - Automated reference audio mastering.
* [MDCT](https://github.com/nils-werner/mdct) [:octocat:](https://github.com/nils-werner/mdct) [:package:](https://pypi.python.org/pypi/mdct) - MDCT transform.
* [pydub](http://pydub.com) [:octocat:](https://github.com/jiaaro/pydub) [:package:](https://pypi.python.org/pypi/pydub) - Manipulate audio with a simple and easy high-level interface.
* [pytftb](http://tftb.nongnu.org) [:octocat:](https://github.com/scikit-signal/pytftb) - Implementation of the MATLAB Time-Frequency Toolbox.
* [pyroomacoustics](https://github.com/LCAV/pyroomacoustics) [:octocat:](https://github.com/LCAV/pyroomacoustics) [:package:](https://pypi.python.org/pypi/pyroomacoustics) - Room acoustics simulation (RIR generator).
* [PyRubberband](https://github.com/bmcfee/pyrubberband) [:octocat:](https://github.com/bmcfee/pyrubberband) [:package:](https://pypi.python.org/pypi/pyrubberband/) - Wrapper for [rubberband](http://breakfastquay.com/rubberband/) to do pitch-shifting and time-stretching.
* [PyWavelets](http://pywavelets.readthedocs.io) [:octocat:](https://github.com/PyWavelets/pywt) [:package:](https://pypi.python.org/pypi/PyWavelets) - Discrete Wavelet Transform in Python.
* [Resampy](http://resampy.readthedocs.io) [:octocat:](https://github.com/bmcfee/resampy) [:package:](https://pypi.python.org/pypi/resampy) - Sample rate conversion (see the example below).
* [SFS-Python](http://www.sfstoolbox.org) [:octocat:](https://github.com/sfstoolbox/sfs-python) [:package:](https://pypi.python.org/pypi/sfs/) - Sound Field Synthesis Toolbox.
* [sound_field_analysis](https://appliedacousticschalmers.github.io/sound_field_analysis-py/) [:octocat:](https://github.com/AppliedAcousticsChalmers/sound_field_analysis-py) [:package:](https://pypi.org/project/sound-field-analysis/) - Analyze, visualize and process sound field data recorded by spherical microphone arrays.
* [STFT](http://stft.readthedocs.io) [:octocat:](https://github.com/nils-werner/stft) [:package:](https://pypi.python.org/pypi/stft) - Standalone package for Short-Time Fourier Transform.

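
A minimal sketch of sample rate conversion with Resampy, using a synthetic mono signal as a stand-in for real audio:

```python
import numpy as np
import resampy

sr_orig, sr_new = 44100, 16000
x = np.random.randn(sr_orig)  # one second of noise as a stand-in signal

# resample from 44.1 kHz to 16 kHz
y = resampy.resample(x, sr_orig, sr_new)
print(y.shape)  # roughly sr_new samples
```
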
### Feature extraction

* [aubio](http://aubio.org/) [:octocat:](https://github.com/aubio/aubio) [:package:](https://pypi.python.org/pypi/aubio) - Feature extractor, written in C, with a Python interface.
* [audioFlux](https://github.com/libAudioFlux/audioFlux) [:octocat:](https://github.com/libAudioFlux/audioFlux) [:package:](https://pypi.python.org/pypi/audioflux) - A library for audio and music analysis and feature extraction.
* [audiolazy](https://github.com/danilobellini/audiolazy) [:octocat:](https://github.com/danilobellini/audiolazy) [:package:](https://pypi.python.org/pypi/audiolazy/) - Realtime audio processing lib, general purpose.
* [essentia](http://essentia.upf.edu) [:octocat:](https://github.com/MTG/essentia) - Music related low level and high level feature extractor, C++ based, includes Python bindings.
* [python_speech_features](https://github.com/jameslyons/python_speech_features) [:octocat:](https://github.com/jameslyons/python_speech_features) [:package:](https://pypi.python.org/pypi/python_speech_features) - Common speech features for ASR (see the example below).
* [pyYAAFE](https://github.com/Yaafe/Yaafe) [:octocat:](https://github.com/Yaafe/Yaafe) - Python bindings for the YAAFE feature extractor.
* [speechpy](https://github.com/astorfi/speechpy) [:octocat:](https://github.com/astorfi/speechpy) [:package:](https://pypi.python.org/pypi/speechpy) - Library for speech processing and recognition, mostly feature extraction for now.
* [spafe](https://github.com/SuperKogito/spafe) [:octocat:](https://github.com/SuperKogito/spafe) [:package:](https://pypi.org/project/spafe/) - Python library for feature extraction from audio files.

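
A minimal sketch of extracting MFCCs with python_speech_features; the sine tone merely stands in for real speech, and the frame settings mentioned in the comment are the library defaults:

```python
import numpy as np
from python_speech_features import mfcc

# one second of a 440 Hz tone as a stand-in for real speech
rate = 16000
t = np.arange(rate) / rate
signal = np.sin(2 * np.pi * 440 * t)

# 13 MFCCs per 25 ms frame with a 10 ms hop (the defaults)
features = mfcc(signal, samplerate=rate)
print(features.shape)  # (number of frames, 13)
```
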
### Data augmentation

* [audiomentations](https://github.com/iver56/audiomentations) [:octocat:](https://github.com/iver56/audiomentations) [:package:](https://pypi.org/project/audiomentations/) - Audio data augmentation (see the example below).
* [muda](https://muda.readthedocs.io/en/latest/) [:octocat:](https://github.com/bmcfee/muda) [:package:](https://pypi.python.org/pypi/muda) - Musical data augmentation.
* [pydiogment](https://github.com/SuperKogito/pydiogment) [:octocat:](https://github.com/SuperKogito/pydiogment) [:package:](https://pypi.org/project/pydiogment/) - Audio data augmentation.

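
A minimal sketch of a randomized augmentation chain with audiomentations; the transform names and parameter ranges below are one plausible configuration, not a recommendation:

```python
import numpy as np
from audiomentations import AddGaussianNoise, Compose, PitchShift, TimeStretch

# chain of randomized transforms, each applied with probability p
augment = Compose([
    AddGaussianNoise(min_amplitude=0.001, max_amplitude=0.015, p=0.5),
    TimeStretch(min_rate=0.8, max_rate=1.25, p=0.5),
    PitchShift(min_semitones=-4, max_semitones=4, p=0.5),
])

samples = np.random.randn(16000).astype(np.float32)  # stand-in for real audio
augmented = augment(samples=samples, sample_rate=16000)
```
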
### Speech Processing

* [aeneas](https://www.readbeyond.it/aeneas/) [:octocat:](https://github.com/readbeyond/aeneas/) [:package:](https://pypi.python.org/pypi/aeneas/) - Forced aligner, based on MFCC+DTW, 35+ languages.
* [deepspeech](https://github.com/mozilla/DeepSpeech) [:octocat:](https://github.com/mozilla/DeepSpeech) [:package:](https://pypi.org/project/deepspeech/) - Pretrained automatic speech recognition.
* [gentle](https://github.com/lowerquality/gentle) [:octocat:](https://github.com/lowerquality/gentle) - Forced-aligner built on Kaldi.
* [Parselmouth](https://github.com/YannickJadoul/Parselmouth) [:octocat:](https://github.com/YannickJadoul/Parselmouth) [:package:](https://pypi.org/project/praat-parselmouth/) - Python interface to the [Praat](http://www.praat.org) phonetics and speech analysis, synthesis, and manipulation software.
* [persephone](https://persephone.readthedocs.io/en/latest/) [:octocat:](https://github.com/persephone-tools/persephone) [:package:](https://pypi.org/project/persephone/) - Automatic phoneme transcription tool.
* [pyannote.audio](https://github.com/pyannote/pyannote-audio) [:octocat:](https://github.com/pyannote/pyannote-audio) [:package:](https://pypi.org/project/pyannote-audio/) - Neural building blocks for speaker diarization.
* [pyAudioAnalysis](https://github.com/tyiannak/pyAudioAnalysis) [:octocat:](https://github.com/tyiannak/pyAudioAnalysis) [:package:](https://pypi.python.org/pypi/pyAudioAnalysis/) - Feature extraction, classification, diarization.
* [py-webrtcvad](https://github.com/wiseman/py-webrtcvad) [:octocat:](https://github.com/wiseman/py-webrtcvad) [:package:](https://pypi.python.org/pypi/webrtcvad/) - Interface to the WebRTC Voice Activity Detector (see the example below).
* [pypesq](https://github.com/vBaiCai/python-pesq) [:octocat:](https://github.com/vBaiCai/python-pesq) - Wrapper for the PESQ score calculation.
* [pystoi](https://github.com/mpariente/pystoi) [:octocat:](https://github.com/mpariente/pystoi) [:package:](https://pypi.org/project/pystoi) - Short Term Objective Intelligibility measure (STOI).
* [PyWorldVocoder](https://github.com/JeremyCCHsu/Python-Wrapper-for-World-Vocoder) [:octocat:](https://github.com/JeremyCCHsu/Python-Wrapper-for-World-Vocoder) - Wrapper for Morise's World Vocoder.
* [Montreal Forced Aligner](https://montrealcorpustools.github.io/Montreal-Forced-Aligner/) [:octocat:](https://github.com/MontrealCorpusTools/Montreal-Forced-Aligner) - Forced aligner, based on Kaldi (HMM), English (others can be trained).
* [SIDEKIT](http://lium.univ-lemans.fr/sidekit/) [:package:](https://pypi.python.org/pypi/SIDEKIT/) - Speaker and language recognition.
* [SpeechRecognition](https://github.com/Uberi/speech_recognition) [:octocat:](https://github.com/Uberi/speech_recognition) [:package:](https://pypi.python.org/pypi/SpeechRecognition/) - Wrapper for several ASR engines and APIs, online and offline.

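
A minimal sketch of frame-wise voice activity detection with py-webrtcvad; the WebRTC VAD expects 16-bit mono PCM frames of 10, 20 or 30 ms at 8, 16, 32 or 48 kHz, and the silent frame below is just a placeholder:

```python
import webrtcvad

vad = webrtcvad.Vad(2)  # aggressiveness from 0 (least) to 3 (most)

sample_rate = 16000
frame_ms = 30
n_samples = sample_rate * frame_ms // 1000

# 30 ms of silence as 16-bit mono PCM bytes; real frames would come from a file or mic
frame = b"\x00\x00" * n_samples

print(vad.is_speech(frame, sample_rate))  # expected to be False for silence
```
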
### Environmental Sounds

* [sed_eval](http://tut-arg.github.io/sed_eval) [:octocat:](https://github.com/TUT-ARG/sed_eval) [:package:](https://pypi.org/project/sed_eval/) - Evaluation toolbox for Sound Event Detection.

### Perceptual Models - Auditory Models

* [cochlea](https://github.com/mrkrd/cochlea) [:octocat:](https://github.com/mrkrd/cochlea) [:package:](https://pypi.python.org/pypi/cochlea/) - Inner ear models.
* [Brian2](http://briansimulator.org/) [:octocat:](https://github.com/brian-team/brian2) [:package:](https://pypi.python.org/pypi/Brian2) - Spiking neural network simulator, includes cochlea model.
* [Loudness](https://github.com/deeuu/loudness) [:octocat:](https://github.com/deeuu/loudness) - Perceived loudness, includes Zwicker and Moore/Glasberg models.
* [pyloudnorm](https://www.christiansteinmetz.com/projects-blog/pyloudnorm) [:octocat:](https://github.com/csteinmetz1/pyloudnorm) - Audio loudness meter and normalization, implements ITU-R BS.1770-4 (see the example below).
* [Sound Field Synthesis Toolbox](http://www.sfstoolbox.org) [:octocat:](https://github.com/sfstoolbox/sfs-python) [:package:](https://pypi.python.org/pypi/sfs/) - Sound Field Synthesis Toolbox.

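
A minimal sketch of loudness measurement and normalization with pyloudnorm; the noise signal and the -23 LUFS target are placeholders:

```python
import numpy as np
import pyloudnorm as pyln

rate = 48000
data = 0.1 * np.random.randn(rate)  # one second of noise as a stand-in signal

meter = pyln.Meter(rate)                    # BS.1770 loudness meter
loudness = meter.integrated_loudness(data)  # integrated loudness in LUFS

# gain the signal to a -23 LUFS target
normalized = pyln.normalize.loudness(data, loudness, -23.0)
```
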
### Source Separation

* [commonfate](https://github.com/aliutkus/commonfate) [:octocat:](https://github.com/aliutkus/commonfate) [:package:](https://pypi.python.org/pypi/commonfate) - Common Fate Model and Transform.
* [NTFLib](https://github.com/stitchfix/NTFLib) [:octocat:](https://github.com/stitchfix/NTFLib) - Sparse Beta-Divergence Tensor Factorization.
* [NUSSL](https://interactiveaudiolab.github.io/project/nussl.html) [:octocat:](https://github.com/interactiveaudiolab/nussl) [:package:](https://pypi.python.org/pypi/nussl) - Holistic source separation framework including DSP methods and deep learning methods.
* [NIMFA](http://nimfa.biolab.si) [:octocat:](https://github.com/marinkaz/nimfa) [:package:](https://pypi.python.org/pypi/nimfa) - Several flavors of non-negative matrix factorization (see the sketch below).

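
NMF on a magnitude spectrogram is a common separation baseline; a rough sketch with NIMFA, where a random non-negative matrix stands in for a real spectrogram and the rank, seed and iteration count are arbitrary:

```python
import nimfa
import numpy as np

# random non-negative matrix as a stand-in for a magnitude spectrogram (freq x time)
V = np.abs(np.random.randn(513, 200))

nmf = nimfa.Nmf(V, rank=8, max_iter=100, seed="random_vcol")
fit = nmf()

W = fit.basis()  # spectral templates, shape (513, 8)
H = fit.coef()   # activations, shape (8, 200)
```
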
### Music Information Retrieval

* [Catchy](https://github.com/jvbalen/catchy) [:octocat:](https://github.com/jvbalen/catchy) - Corpus analysis tools for computational hook discovery.
* [chord-detection](https://github.com/sevagh/chord-detection) [:octocat:](https://github.com/sevagh/chord-detection) - Algorithms for chord detection and key estimation.
* [Madmom](https://madmom.readthedocs.io/en/latest/) [:octocat:](https://github.com/CPJKU/madmom) [:package:](https://pypi.python.org/pypi/madmom) - MIR package with a strong focus on beat detection, onset detection and chord recognition.
* [mir_eval](http://craffel.github.io/mir_eval/) [:octocat:](https://github.com/craffel/mir_eval) [:package:](https://pypi.python.org/pypi/mir_eval) - Common scores for various MIR tasks. Also includes a bss_eval implementation.
* [msaf](http://pythonhosted.org/msaf/) [:octocat:](https://github.com/urinieto/msaf) [:package:](https://pypi.python.org/pypi/msaf) - Music Structure Analysis Framework.
* [librosa](http://librosa.github.io/librosa/) [:octocat:](https://github.com/librosa/librosa) [:package:](https://pypi.python.org/pypi/librosa) - General audio and music analysis (see the example below).

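
A minimal sketch of beat tracking and MFCC extraction with librosa, assuming a recent version that ships the `librosa.example` downloader:

```python
import librosa

# librosa can fetch a short bundled example clip; replace with any local file path
y, sr = librosa.load(librosa.example("trumpet"))

tempo, beats = librosa.beat.beat_track(y=y, sr=sr)
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)

print(tempo, mfcc.shape)  # estimated tempo in BPM and (13, frames)
```
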
### Deep Learning

* [Kapre](https://github.com/keunwoochoi/kapre) [:octocat:](https://github.com/keunwoochoi/kapre) [:package:](https://pypi.python.org/pypi/kapre) - Keras Audio Preprocessors.
* [TorchAudio](https://github.com/pytorch/audio) [:octocat:](https://github.com/pytorch/audio) - PyTorch audio loaders (see the example below).
* [nnAudio](https://github.com/KinWaiCheuk/nnAudio) [:octocat:](https://github.com/KinWaiCheuk/nnAudio) [:package:](https://pypi.org/project/nnAudio/) - Accelerated audio processing using 1D convolution networks in PyTorch.

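
A minimal sketch of loading audio and computing a mel spectrogram with TorchAudio; the file path is a placeholder:

```python
import torchaudio

# load a local file into a (channels, samples) float tensor plus its sample rate
waveform, sample_rate = torchaudio.load("example.wav")

# compute a mel spectrogram as a differentiable transform
mel = torchaudio.transforms.MelSpectrogram(sample_rate=sample_rate, n_mels=64)
spec = mel(waveform)
print(spec.shape)  # (channels, n_mels, frames)
```
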
### Symbolic Music - MIDI - Musicology

* [Music21](http://web.mit.edu/music21/) [:octocat:](https://github.com/cuthbertLab/music21) [:package:](https://pypi.python.org/pypi/music21) - Toolkit for Computer-Aided Musicology.
* [Mido](https://mido.readthedocs.io/en/latest/) [:octocat:](https://github.com/olemb/mido) [:package:](https://pypi.python.org/pypi/mido) - Realtime MIDI wrapper.
* [mingus](https://github.com/bspaans/python-mingus) [:octocat:](https://github.com/bspaans/python-mingus) [:package:](https://pypi.org/project/mingus) - Advanced music theory and notation package with MIDI file and playback support.
* [Pretty-MIDI](http://craffel.github.io/pretty-midi/) [:octocat:](https://github.com/craffel/pretty-midi) [:package:](https://pypi.python.org/pypi/pretty-midi) - Utility functions for handling MIDI data in a nice/intuitive way (see the example below).

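
A minimal sketch of writing a one-note MIDI file with Pretty-MIDI; the output path is a placeholder:

```python
import pretty_midi

# build a one-note MIDI file from scratch
pm = pretty_midi.PrettyMIDI()
piano = pretty_midi.Instrument(program=0)  # acoustic grand piano
piano.notes.append(pretty_midi.Note(velocity=100, pitch=60, start=0.0, end=1.0))
pm.instruments.append(piano)
pm.write("c4.mid")
```
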
### Realtime applications

* [Jupylet](https://github.com/nir/jupylet) [:octocat:](https://github.com/nir/jupylet) - Subtractive, additive, FM, and sample-based sound synthesis.
* [PYO](http://ajaxsoundstudio.com/software/pyo/) [:octocat:](https://github.com/belangeo/pyo) - Realtime audio DSP engine.
* [python-sounddevice](https://github.com/spatialaudio/python-sounddevice) [:octocat:](http://python-sounddevice.readthedocs.io) [:package:](https://pypi.python.org/pypi/sounddevice) - PortAudio wrapper providing realtime audio I/O with NumPy (see the example below).
* [ReTiSAR](https://github.com/AppliedAcousticsChalmers/ReTiSAR) [:octocat:](https://github.com/AppliedAcousticsChalmers/ReTiSAR) - Binaural rendering of streamed or IR-based high-order spherical microphone array signals.

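
A minimal sketch of realtime playback with python-sounddevice, generating a sine tone instead of reading a file:

```python
import numpy as np
import sounddevice as sd

# one second of a 440 Hz sine tone played on the default output device
fs = 44100
t = np.arange(fs) / fs
tone = (0.2 * np.sin(2 * np.pi * 440 * t)).astype(np.float32)

sd.play(tone, fs)
sd.wait()  # block until playback is finished
```
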
### Web Audio

* [TimeSide (Beta)](https://github.com/Parisson/TimeSide/tree/dev) [:octocat:](https://github.com/Parisson/TimeSide/tree/dev) - High level audio analysis, imaging, transcoding, streaming and labelling.

### Audio Dataset and Dataloaders

* [beets](http://beets.io/) [:octocat:](https://github.com/beetbox/beets) [:package:](https://pypi.python.org/pypi/beets) - Music library manager and [MusicBrainz](https://musicbrainz.org/) tagger.
* [musdb](http://dsdtools.readthedocs.io) [:octocat:](https://github.com/sigsep/sigsep-mus-db) [:package:](https://pypi.python.org/pypi/musdb) - Parse and process the MUSDB18 dataset (see the example below).
* [medleydb](http://medleydb.readthedocs.io) [:octocat:](https://github.com/marl/medleydb) - Parse [medleydb](http://medleydb.weebly.com/) audio + annotations.
* [Soundcloud API](https://github.com/soundcloud/soundcloud-python) [:octocat:](https://github.com/soundcloud/soundcloud-python) [:package:](https://pypi.python.org/pypi/soundcloud) - Wrapper for the [Soundcloud API](https://developers.soundcloud.com/).
* [Youtube-Downloader](http://rg3.github.io/youtube-dl/) [:octocat:](https://github.com/rg3/youtube-dl) [:package:](https://pypi.python.org/pypi/youtube_dl) - Download YouTube videos (and the audio).
* [audiomate](https://github.com/ynop/audiomate) [:octocat:](https://github.com/ynop/audiomate) [:package:](https://pypi.python.org/pypi/audiomate/) - Loading different types of audio datasets.
* [mirdata](https://mirdata.readthedocs.io/en/latest/) [:octocat:](https://github.com/mir-dataset-loaders/mirdata) [:package:](https://pypi.python.org/pypi/mirdata) - Common loaders for Music Information Retrieval (MIR) datasets.

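
A minimal sketch of iterating MUSDB18 with musdb, assuming the freely downloadable 7-second preview version of the dataset:

```python
import musdb

# download the 7-second preview version of MUSDB18 on first use
mus = musdb.DB(download=True)

track = mus.tracks[0]
mixture = track.audio                    # stereo mixture, shape (samples, 2)
vocals = track.targets["vocals"].audio   # corresponding vocal stem
print(track.name, track.rate, mixture.shape)
```
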
### Wrappers for Audio Plugins

* [VamPy Host](https://code.soundsoftware.ac.uk/projects/vampy-host) [:package:](https://pypi.python.org/pypi/vamp) - Interface to compiled Vamp plugins.

## Tutorials

* [Whirlwind Tour Of Python](https://jakevdp.github.io/WhirlwindTourOfPython/) [:octocat:](https://github.com/jakevdp/WhirlwindTourOfPython) - Fast-paced introduction to Python essentials, aimed at researchers and developers.
* [Introduction to Numpy and Scipy](http://www.scipy-lectures.org/index.html) [:octocat:](https://github.com/scipy-lectures/scipy-lecture-notes) - Highly recommended tutorial, covers large parts of the scientific Python ecosystem.
* [Numpy for MATLAB® Users](https://docs.scipy.org/doc/numpy/user/numpy-for-matlab-users.html) - Short overview of equivalent Python functions for switchers.
* [MIR Notebooks](http://musicinformationretrieval.com/) [:octocat:](https://github.com/stevetjoa/stanford-mir) - Collection of instructional iPython notebooks for music information retrieval (MIR).
* [Selected Topics in Audio Signal Processing](https://github.com/spatialaudio/selected-topics-in-audio-signal-processing-exercises) - Exercises as iPython notebooks.
* [Live-coding a music synthesizer](https://www.youtube.com/watch?v=SSyQ0kRHzis) - Live-coding video showing how to use the SoundDevice library to reproduce realistic sounds, with accompanying [code](https://github.com/cool-RR/python_synthesizer).

## Books

* [Python Data Science Handbook](https://github.com/jakevdp/PythonDataScienceHandbook) - Jake Vanderplas. Excellent book with accompanying tutorial notebooks.
* [Fundamentals of Music Processing](https://www.audiolabs-erlangen.de/fau/professor/mueller/bookFMP) - Meinard Müller. Comes with Python exercises.

## Scientific Papers

* [Python for audio signal processing](http://eprints.maynoothuniversity.ie/4115/1/40.pdf) - John C. Glover, Victor Lazzarini and Joseph Timoney, Linux Audio Conference 2011.
* [librosa: Audio and Music Signal Analysis in Python](http://conference.scipy.org/proceedings/scipy2015/pdfs/brian_mcfee.pdf), [Video](https://www.youtube.com/watch?v=MhOdbtPhbLU) - Brian McFee, Colin Raffel, Dawen Liang, Daniel P.W. Ellis, Matt McVicar, Eric Battenberg, Oriol Nieto, SciPy 2015.
* [pyannote.audio: neural building blocks for speaker diarization](https://arxiv.org/abs/1911.01255), [Video](https://www.youtube.com/watch?v=37R_R82lfwA) - Hervé Bredin, Ruiqing Yin, Juan Manuel Coria, Gregory Gelly, Pavel Korshunov, Marvin Lavechin, Diego Fustes, Hadrien Titeux, Wassim Bouaziz, Marie-Philippe Gill, ICASSP 2020.

## Other Resources

* [Coursera Course](https://www.coursera.org/learn/audio-signal-processing) - Audio Signal Processing, Python-based course from UPF Barcelona and Stanford University.
* [Digital Signal Processing Course](http://dsp-nbsphinx.readthedocs.io/en/nbsphinx-experiment/index.html) - Master's course material (University of Rostock) with many Python examples.
* [Slack Channel](https://mircommunity.slack.com) - Music Information Retrieval community.

## Related lists

There is already [PythonInMusic](https://wiki.python.org/moin/PythonInMusic), but it is not up to date and includes many special-interest packages that are mostly not relevant for scientific applications. [Awesome-Python](https://github.com/vinta/awesome-python) is a large curated list of Python packages; however, its audio section is very small.

## Contributing

Your contributions are always welcome! Please take a look at the [contribution guidelines](CONTRIBUTING.md) first.

I will keep some pull requests open if I'm not sure whether their libraries are awesome; you can vote for them by adding a 👍 reaction.

## License

[![License: CC BY 4.0](https://img.shields.io/badge/License-CC%20BY%204.0-lightgrey.svg)](https://creativecommons.org/licenses/by/4.0/)