About

In applications such as audio denoising, music transcription, music remixing, and audio-based forensics, it is desirable to decompose a single- or stereo-channel recording into its respective sources. To perform such tasks, we present ISSE - an interactive source separation editor (pronounced "ice"). ISSE is an open-source, freely available, cross-platform audio editing tool that allows a user to perform source separation by painting on time-frequency visualizations of sound. The software leverages both a new user interaction paradigm and a machine learning-based separation algorithm that "learns" from human feedback (e.g., painting annotations) to perform separation.

License

The software is a free, open-source, cross-platform project licensed under the GNU General Public License, Version 3. This means you are free to use, study, share, and improve both the application and source code. Please see the license link for more information.

Manual

For brief instructions on how to use ISSE, please see our manual.

History

ISSE was developed by Nicholas J. Bryan as part of his PhD thesis, advised by Gautham J. Mysore at Adobe Research and Prof. Ge Wang of the Music, Computing, and Design Group (MCD) at the Center for Computer Research in Music and Acoustics (CCRMA), Stanford University. In July 2013, it was released as an open-source project licensed by Adobe Research and Stanford University. Since then, several others have joined as advisors, collaborators, and developers.
Authors, Advisors, and Collaborators

Authors, advisors, and collaborators include:
In addition, the software depends heavily on several fantastic third-party open-source libraries, including JUCE, Eigen, and FFTW.

Publications

Technically speaking, the core separation algorithm used within ISSE is a machine learning method based on non-negative matrix factorization (NMF) and related probabilistic methods; a toy sketch of the basic factorization appears at the end of this section. Over the past decade, these methods have proven useful for a wide variety of music- and audio-related tasks, including source separation. For a list of relevant prior work, please see this link. For the specific technical contributions of this work compared to past research, please see the list below.
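To make the NMF idea concrete, here is a minimal sketch of the plain factorization and a soft-mask separation step. This is not ISSE's actual implementation (which is written in C++ and incorporates user painting annotations into the learning process); it is a toy Python/NumPy illustration of the underlying technique, and the names nmf_kl, separate, and the component split k1 are hypothetical, chosen for this example only.

```python
import numpy as np

def nmf_kl(V, n_components, n_iter=200, eps=1e-9, seed=0):
    """Factor a nonnegative magnitude spectrogram V (freq x time) as
    V ~ W @ H using the classic KL-divergence multiplicative updates
    (Lee & Seung, 2001)."""
    rng = np.random.default_rng(seed)
    n_freq, n_time = V.shape
    W = rng.random((n_freq, n_components)) + eps   # spectral basis vectors
    H = rng.random((n_components, n_time)) + eps   # per-frame activations
    ones = np.ones_like(V)
    for _ in range(n_iter):
        # Standard multiplicative updates; each step keeps W, H nonnegative
        H *= (W.T @ (V / (W @ H + eps))) / (W.T @ ones + eps)
        W *= ((V / (W @ H + eps)) @ H.T) / (ones @ H.T + eps)
    return W, H

def separate(V, W, H, k1, eps=1e-9):
    """Split V into two sources by assigning the first k1 components to
    source 1 (in a tool like ISSE, this assignment would be guided by the
    user's painting annotations) and applying a Wiener-style soft mask."""
    V1 = W[:, :k1] @ H[:k1, :]          # source-1 reconstruction
    mask = V1 / (W @ H + eps)           # soft mask, values in [0, 1]
    return mask * V, (1.0 - mask) * V   # magnitude estimates of each source
```

In practice, the masks would be applied to the mixture's complex STFT (preserving the mixture phase) and inverted back to the time domain to obtain the separated audio signals.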