The industry-standard MATLAB framework for comprehensive musical feature extraction and audio signal analysis.
MIRtoolbox is a specialized MATLAB-based software environment for extracting musical descriptors from audio files. Developed primarily for academic research and industrial audio analysis, the toolbox provides a modular, high-level syntax that lets users chain complex operations, such as spectral analysis, rhythmic extraction, and tonal estimation, into efficient pipelines. Its architecture is built on a hierarchical data structure that preserves temporal and spectral metadata throughout the processing chain. In the 2026 market, MIRtoolbox serves as a critical pre-processing layer for training specialized AI models in music recommendation, automated tagging, and therapeutic audio applications. By abstracting low-level signal processing into functions like mirspectrum, mirpitch, and mirkeystrength, it lets researchers focus on higher-level musicological insights rather than routine algorithm implementation. It is particularly valued for its lazy evaluation capabilities and its ability to handle large-scale batch processing of audio corpora, making it a foundational tool for the development of generative audio systems and content-based retrieval engines.
Outputs from one MIR function can be used as inputs for another, allowing for complex multi-stage analysis without manual data reshaping.
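MIRtoolbox's chaining happens in MATLAB, but the idea can be sketched in plain numpy (every function name below is illustrative, not part of MIRtoolbox or any library): each stage returns an array that the next stage consumes directly, so no manual reshaping is needed between stages.

```python
import numpy as np

def frames(x, size, hop):
    """Slice a signal into overlapping analysis frames (one per row)."""
    n = 1 + (len(x) - size) // hop
    return np.stack([x[i * hop : i * hop + size] for i in range(n)])

def spectra(frame_matrix):
    """Windowed magnitude spectrum of each frame."""
    return np.abs(np.fft.rfft(frame_matrix * np.hanning(frame_matrix.shape[1]), axis=1))

def centroids(spectrum_matrix, sr, size):
    """Spectral centroid (Hz) of each frame."""
    freqs = np.fft.rfftfreq(size, 1 / sr)
    return (spectrum_matrix * freqs).sum(axis=1) / spectrum_matrix.sum(axis=1)

sr = 8000
t = np.arange(sr) / sr
x = np.sin(2 * np.pi * 440 * t)          # 1 s of a 440 Hz tone
c = centroids(spectra(frames(x, 512, 256)), sr, 512)
print(c.shape)  # one centroid value per frame
```

The point is the plumbing, not the math: because each stage emits an array shaped the way the next stage expects, the three calls compose into a pipeline with no intermediate bookkeeping.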
The Industry-Standard Library for High-Performance Audio Analysis and Music Information Retrieval
Automated windowing and overlap for time-varying analysis (mirframe), with lazy evaluation that saves memory by computing data only when it is needed.
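The lazy, frame-by-frame behavior described above can be approximated in Python with a generator; this is a conceptual sketch, not MIRtoolbox code: frames are only materialized when a downstream consumer asks for them.

```python
import numpy as np

def lazy_frames(x, size, hop):
    """Yield overlapping frames one at a time; nothing is computed
    until a consumer actually asks for the next frame."""
    for start in range(0, len(x) - size + 1, hop):
        yield x[start : start + size]

x = np.arange(10.0)
gen = lazy_frames(x, size=4, hop=2)      # no frames built yet
first = next(gen)                        # frames materialize on demand
print(first)                             # [0. 1. 2. 3.]
```

A downstream stage that stops early (say, after detecting an onset) never pays for the frames it did not consume, which is the memory benefit the lazy approach offers.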
Calculates the rate of change across spectral, rhythmic, and tonal dimensions simultaneously.
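One common way to compute such a rate of change is the flux between successive feature frames; this numpy sketch (names illustrative, not MIRtoolbox API) works on any stacked feature matrix, so spectral, rhythmic, and tonal features can all be processed with the same code.

```python
import numpy as np

def flux(feature_frames):
    """Frame-to-frame rate of change: Euclidean distance between
    each feature frame and the previous one."""
    d = np.diff(feature_frames, axis=0)
    return np.sqrt((d ** 2).sum(axis=1))

f = np.array([[1.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
print(flux(f))  # zero while the feature is static, positive at the change
```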
Implementation of Pampalk's fluctuation strength model to quantify perceived rhythmic periodicity.
Maps audio onto a six-dimensional tonal centroid space to detect harmonic relationships and key changes.
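One published mapping of this kind is Harte et al.'s six-dimensional tonal centroid, which projects a 12-bin chroma vector onto interlocking circles of fifths, minor thirds, and major thirds. A minimal numpy sketch (the function name is illustrative, not MIRtoolbox code):

```python
import numpy as np

def tonal_centroid(chroma):
    """Project a 12-bin chroma vector into a 6-D tonal centroid space
    (sin/cos pairs on circles of fifths, minor thirds, major thirds,
    with radii 1, 1, and 0.5 as in Harte et al.)."""
    l = np.arange(12)
    phi = np.stack([
        np.sin(l * 7 * np.pi / 6),       np.cos(l * 7 * np.pi / 6),
        np.sin(l * 3 * np.pi / 2),       np.cos(l * 3 * np.pi / 2),
        0.5 * np.sin(l * 2 * np.pi / 3), 0.5 * np.cos(l * 2 * np.pi / 3),
    ])
    return phi @ chroma / np.abs(chroma).sum()

# A C major triad (pitch classes C, E, G) as a binary chroma vector:
c_major = np.zeros(12); c_major[[0, 4, 7]] = 1
print(tonal_centroid(c_major).round(3))
```

Harmonically related chords land close together in this space, which is why distances in it are useful for tracking key changes over time.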
Automated extraction of high-level statistics (mean, std, slope, periodicity) across a library of extracted features.
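Mean, standard deviation, and slope are straightforward to compute once per-frame series are available; a minimal numpy sketch, with illustrative function and key names (periodicity, e.g. via autocorrelation, is omitted for brevity):

```python
import numpy as np

def summarize(name, series):
    """Collapse a per-frame feature series into summary statistics:
    mean, standard deviation, and linear slope over time."""
    t = np.arange(len(series))
    slope = np.polyfit(t, series, 1)[0]
    return {f"{name}_mean": series.mean(),
            f"{name}_std": series.std(),
            f"{name}_slope": slope}

stats = summarize("centroid", np.array([1.0, 2.0, 3.0, 4.0]))
print(stats)  # slope of a perfectly linear series is 1.0
```

Applying a function like this across every extracted feature turns variable-length frame series into a fixed-length vector per track, which is the form most downstream classifiers expect.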
Generates self-similarity matrices (SSM) based on any combination of extracted features.
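The core of an SSM is a pairwise similarity computation over feature frames; a minimal cosine-similarity sketch in numpy (illustrative, not MIRtoolbox's implementation):

```python
import numpy as np

def ssm(features):
    """Self-similarity matrix: cosine similarity between every pair
    of feature frames (one frame per row)."""
    unit = features / np.linalg.norm(features, axis=1, keepdims=True)
    return unit @ unit.T

f = np.array([[1.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
print(ssm(f))  # repeated frames form a bright block in the matrix
```

Because any feature matrix can be passed in, SSMs over chroma, timbre, or rhythm features (or concatenations of them) all use the same computation.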
Manually extracting features for a 10,000-track dataset is computationally expensive and complex.
Registry Updated: 2/7/2026
Export to CSV for training a Random Forest classifier.
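A minimal sketch of that export step using Python's standard csv module; the feature names and values here are hypothetical placeholders, and the resulting file could be loaded as training data for, e.g., scikit-learn's RandomForestClassifier:

```python
import csv, io

# Hypothetical per-track feature dictionaries (names illustrative only)
rows = [
    {"track": "a.wav", "centroid_mean": 1234.5, "tempo": 120.0},
    {"track": "b.wav", "centroid_mean": 980.1,  "tempo": 96.5},
]

# Write to an in-memory buffer; swap in open("features.csv", "w") for a file
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=rows[0].keys())
writer.writeheader()
writer.writerows(rows)
print(buf.getvalue())
```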
Identifying chorus, verse, and bridge sections in pop music for automated editing.
Quantifying the specific acoustic properties of music that induce relaxation.