CONMOD: Controllable Neural Frame-based Modulation Effects
Gyubin Lee, Hounsu Kim, Junwon Lee, Juhan Nam
Deep learning models have seen widespread use in modelling LFO-driven audio effects, such as phaser and flanger. Although existing neural architectures exhibit high-quality emulation of individual effects, they do not possess the capability to manipulate the output via control parameters. To address this issue, we introduce Controllable Neural Frame-based Modulation Effects (CONMOD), a single black-box model which emulates various LFO-driven effects in a frame-wise manner, offering control over LFO frequency and feedback parameters. Additionally, the model is capable of learning the continuous embedding space of two distinct phaser effects, enabling us to steer between effects and achieve creative outputs. Our model outperforms previous work while possessing both controllability and universality, presenting opportunities to enhance creativity in modern LFO-driven audio effects.
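The paper does not specify its internals in this abstract, but the effect family it targets can be illustrated with a conventional DSP sketch: a phaser is typically a cascade of first-order allpass filters whose break frequency is swept by an LFO, with a feedback path. The function below is a minimal, hypothetical reference implementation (the `lfo_rate`, `stages`, and `feedback` parameters mirror the kind of controls CONMOD exposes, but the code is not the paper's model):

```python
import numpy as np

def phaser(x, sr, lfo_rate=0.5, stages=4, feedback=0.3, mix=0.5):
    """Illustrative phaser: LFO-swept first-order allpass cascade with feedback.

    This is a textbook DSP sketch, not the CONMOD architecture.
    """
    n = np.arange(len(x))
    # Sweep the allpass break frequency roughly between 200 Hz and 2 kHz.
    f_c = 200.0 * (10.0 ** (0.5 * np.sin(2 * np.pi * lfo_rate * n / sr) + 0.5))
    t = np.tan(np.pi * f_c / sr)
    a = (t - 1.0) / (t + 1.0)          # per-sample allpass coefficient
    z = np.zeros(stages)               # one state variable per allpass stage
    y = np.zeros_like(x)
    fb = 0.0                           # feedback sample from the previous step
    for i in range(len(x)):
        s = x[i] + feedback * fb
        for k in range(stages):        # run the allpass cascade
            out = a[i] * s + z[k]
            z[k] = s - a[i] * out      # transposed direct-form state update
            s = out
        fb = s
        y[i] = (1.0 - mix) * x[i] + mix * s
    return y
```

Because each allpass stage is unity-gain, the loop gain is bounded by `feedback`, so the feedback path stays stable for `|feedback| < 1`.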
Modulation Extraction for LFO-driven Audio Effects
Christopher Mitcheltree, Christian J. Steinmetz, Marco Comunità, Joshua D. Reiss
Low frequency oscillator (LFO) driven audio effects such as phaser, flanger, and chorus, modify an input signal using time-varying filters and delays, resulting in characteristic sweeping or widening effects. It has been shown that these effects can be modeled using neural networks when conditioned with the ground truth LFO signal. However, in most cases, the LFO signal is not accessible and measurement from the audio signal is nontrivial, hindering the modeling process. To address this, we propose a framework capable of extracting arbitrary LFO signals from processed audio across multiple digital audio effects, parameter settings, and instrument configurations. Since our system imposes no restrictions on the LFO signal shape, we demonstrate its ability to extract quasiperiodic, combined, and distorted modulation signals that are relevant to effect modeling. Furthermore, we show how coupling the extraction model with a simple processing network enables training of end-to-end black-box models of unseen analog or digital LFO-driven audio effects using only dry and wet audio pairs, overcoming the need to access the audio effect or internal LFO signal. We make our code available and provide the trained audio effect models in a real-time VST plugin.
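The time-varying delay the abstract describes can be made concrete with a basic flanger sketch: a short delay line whose length is modulated by a sinusoidal LFO, read with linear interpolation. This is a generic illustration of the effect class being modeled, not the extraction framework itself; parameter names (`depth_ms`, `base_ms`) are chosen here for clarity:

```python
import numpy as np

def flanger(x, sr, lfo_rate=0.5, base_ms=1.0, depth_ms=2.0, mix=0.5):
    """Illustrative flanger: delay line swept by a sinusoidal LFO."""
    n = np.arange(len(x))
    lfo = 0.5 * (1.0 + np.sin(2 * np.pi * lfo_rate * n / sr))  # in [0, 1]
    delay = (base_ms + depth_ms * lfo) * sr / 1000.0           # delay in samples
    idx = n - delay                                            # fractional read position
    lo = np.clip(np.floor(idx).astype(int), 0, len(x) - 1)
    hi = np.clip(lo + 1, 0, len(x) - 1)
    frac = idx - np.floor(idx)
    delayed = (1.0 - frac) * x[lo] + frac * x[hi]              # linear interpolation
    return (1.0 - mix) * x + mix * delayed
```

Summing the dry signal with this swept delay produces the moving comb-filter notches responsible for the characteristic "sweeping" sound; recovering the `lfo` curve from the wet audio alone is the nontrivial measurement problem the paper addresses.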