Thursday, October 22, 2009 - 11:00am - 12:00pm EDT
- Location: Dibner Building, LC 400
Five MetroTech Center, Brooklyn, New York
Speaker: Professor Rudolf Rabenstein
Faculty Host: Professor Ivan Selesnick
Computer-generated sounds started out on dedicated digital musical instruments in the 1980s and made their way via computer sound cards onto cell phones and into today's mobile multimedia devices. A number of synthesis methods exist that allow trading off memory requirements against computing power, and natural sound fidelity against parametric control of synthetic sounds.
A family of synthesis methods that allows both close-to-natural sounds and parametric control is based on physical modeling of resonating structures such as strings, membranes, plates, and air columns. The physical model is usually formulated as a partial differential equation resulting from a mechanical analysis. The resulting synthesis algorithms consist of a parallel arrangement of second-order digital filters, whose coefficients are obtained by analytic expressions directly from the parameters of the physical model. More elaborate computational models include nonlinearities and excitation mechanisms.
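To illustrate the filter-bank structure described in the abstract, here is a minimal Python sketch. It is not the speaker's implementation: the ideal-string harmonic mode series, the damping law, and all function names are illustrative assumptions. Each mode is rendered by one second-order resonator whose coefficients follow analytically from the mode's frequency and decay rate, and the resonator outputs are summed in parallel.

```python
import math

def string_modes(f0, n_modes, decay=4.0):
    """Illustrative mode series for an ideal string: harmonic partials at
    k*f0 with a decay rate that grows with partial number k."""
    return [(k * f0, decay * k) for k in range(1, n_modes + 1)]

def mode_filter_coeffs(freq, sigma, fs):
    """Second-order resonator coefficients for one damped mode.
    Impulse response: h[n] = r^n * sin(w*n), realized by
    y[n] = b1*x[n-1] + a1*y[n-1] + a2*y[n-2]."""
    r = math.exp(-sigma / fs)        # per-sample damping factor
    w = 2.0 * math.pi * freq / fs    # digital mode frequency (rad/sample)
    return r * math.sin(w), 2.0 * r * math.cos(w), -r * r  # b1, a1, a2

def synthesize(f0=220.0, n_modes=8, n_samples=44100, fs=44100):
    """Excite all resonators with a unit impulse and sum their outputs."""
    coeffs = [mode_filter_coeffs(f, s, fs) for f, s in string_modes(f0, n_modes)]
    state = [[0.0, 0.0] for _ in coeffs]  # [y[n-1], y[n-2]] per mode
    out, x_prev = [], 0.0
    for n in range(n_samples):
        x = 1.0 if n == 0 else 0.0        # impulse excitation
        sample = 0.0
        for (b1, a1, a2), st in zip(coeffs, state):
            y = b1 * x_prev + a1 * st[0] + a2 * st[1]
            st[1], st[0] = st[0], y
            sample += y / len(coeffs)     # parallel sum, scaled for headroom
        out.append(sample)
        x_prev = x
    return out
```

Changing the mode frequencies or decay rates retunes the instrument without touching the filter structure, which is the parametric control the abstract refers to; more refined models replace the impulse with a physically motivated excitation signal.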
The resulting algorithmic models are suitable for real-time implementation on modern desktop and laptop computers as well as mobile devices. Low-delay algorithms permit control from sequencer programs or haptic devices. A VST plug-in demonstrates the capabilities for real-time synthesis and parametric control.