This is the second course in the 270abc core sequence designed for the computer music graduate program. Students in other programs are also welcome. Some DSP background, such as that provided in 270a, will be very helpful.
This course was originally taught by Jerry Balzano and focused on cognitive processes in music listening and understanding, but since 2016 it has evolved to focus on computational models, particularly analysis of musical sound.
Over the course of the quarter I'll throw out three sonic challenges: soundfiles for you to analyze and resynthesize using techniques of your choice. The goal is not just to recreate the sound exactly (as an FFT analysis/resynthesis would) but to make something you can manipulate musically; exact likeness is less important than musicality. Here are patches, screen movies, and blackboards. (These are empty at first and will accrete over the quarter.)
Mar. 28-30. Short-time Fourier analysis.
The Portnoff and "Convolution Brothers" frameworks; measuring phase,
amplitude, and frequency of sinusoids; time/frequency resolution.
Jont Allen, "Short term spectral analysis, synthesis, and modification by discrete Fourier transform"
Puckette, The Theory and Technique of Electronic Music, Chapter 9
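To give a taste of what's coming, here is a minimal numpy sketch of windowed short-time Fourier analysis with overlap-add resynthesis. It's a bare-bones version of the framework the readings develop; the window (Hann) and the window/hop sizes are arbitrary choices for illustration.

```python
import numpy as np

def stft(x, winsize=1024, hop=256):
    """Short-time Fourier transform: Hann-windowed frames, hop samples apart."""
    win = np.hanning(winsize)
    nframes = 1 + (len(x) - winsize) // hop
    return np.array([np.fft.rfft(win * x[i*hop : i*hop + winsize])
                     for i in range(nframes)])

def istft(frames, winsize=1024, hop=256):
    """Overlap-add resynthesis, normalized by the summed squared window."""
    win = np.hanning(winsize)
    n = (len(frames) - 1) * hop + winsize
    y = np.zeros(n)
    norm = np.zeros(n)
    for i, F in enumerate(frames):
        y[i*hop : i*hop + winsize] += win * np.fft.irfft(F)
        norm[i*hop : i*hop + winsize] += win ** 2
    return y / np.maximum(norm, 1e-12)
```

If the frames aren't modified in between, the round trip reconstructs the signal exactly (away from the edges, where fewer windows overlap).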
Apr. 4-6. The phase vocoder and its applications.
The constrained resynthesis problem and known workarounds.
Flanagan and Golden, "Phase Vocoder"
Puckette, "Phase-locked vocoder"
Laroche and Dolson, "Phase-Vocoder: About this phasiness business"
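Here's a toy phase-vocoder time-stretcher in numpy, just to illustrate the basic phase-accumulation idea. It deliberately omits the phase locking that the Puckette and Laroche-Dolson readings propose, so some "phasiness" is to be expected; all parameter values are arbitrary.

```python
import numpy as np

def pvoc_stretch(x, rate, winsize=2048, hop=512):
    """Time-stretch x by a factor 1/rate via phase accumulation (sketch)."""
    win = np.hanning(winsize)
    # analysis positions advance by hop*rate; synthesis frames by hop
    tsteps = np.arange(0, len(x) - winsize - hop, hop * rate)
    # per-hop phase advance of each bin's center frequency
    expected = 2 * np.pi * hop * np.arange(winsize // 2 + 1) / winsize
    phase = np.angle(np.fft.rfft(win * x[:winsize]))
    out = np.zeros(len(tsteps) * hop + winsize)
    for k, t in enumerate(tsteps):
        t = int(t)
        S1 = np.fft.rfft(win * x[t : t + winsize])
        S2 = np.fft.rfft(win * x[t + hop : t + hop + winsize])
        # measured phase advance over one hop, wrapped to [-pi, pi)
        dphi = np.angle(S2) - np.angle(S1) - expected
        dphi -= 2 * np.pi * np.round(dphi / (2 * np.pi))
        phase += expected + dphi
        frame = np.fft.irfft(np.abs(S2) * np.exp(1j * phase))
        out[k*hop : k*hop + winsize] += win * frame
    return out
```

With rate = 0.5 the output is roughly twice as long as the input; the sketch doesn't normalize the overlap-add gain.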
Apr. 11-13. More on short-time Fourier analysis.
The Serra sinusoids-plus-noise model and its tweaks; multirate analysis.
Griffin and Lim, "Signal Estimation from Modified Short-Time Fourier Transform"
Xavier Serra, "Musical sound modeling with sinusoids plus noise"
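The front end of a sinusoids-plus-noise analysis is spectral peak picking; below is a small numpy sketch that refines peak frequencies by parabolic interpolation on the log-magnitude spectrum. Tracking partials across frames and subtracting them to obtain the noise residual (the rest of Serra's model) is left out; the window size here is an arbitrary choice.

```python
import numpy as np

def peak_freqs(x, sr, npeaks=5, winsize=4096):
    """Estimate sinusoidal peak frequencies in one Hann-windowed frame."""
    win = np.hanning(winsize)
    mag = np.abs(np.fft.rfft(win * x[:winsize]))
    logm = np.log(mag + 1e-12)
    peaks = []
    for k in range(1, len(mag) - 1):
        if mag[k] > mag[k-1] and mag[k] > mag[k+1]:
            # parabolic interpolation: fractional offset of the true peak
            d = 0.5 * (logm[k-1] - logm[k+1]) / (logm[k-1] - 2*logm[k] + logm[k+1])
            peaks.append((mag[k], (k + d) * sr / winsize))
    peaks.sort(reverse=True)          # strongest peaks first
    return [f for _, f in peaks[:npeaks]]
```

For a 440 Hz + 660 Hz test tone this recovers both frequencies to well under a bin's width of error.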
Apr. 18-20. Measures of loudness and timbre.
The Bark scale; critical bands; timbre spaces; NMF partitioning of spectra.
David Wessel, "Timbre Space as a Musical Control Structure"
Hiroko Terasawa, Malcolm Slaney, and Jonathan Berger, "Perceptual Distance In Timbre Space"
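For concreteness: the Bark scale (one critical band per Bark) has several closed-form approximations. Here is Traunmüller's, which is widely quoted; Zwicker's original scale is tabulated rather than analytic.

```python
def hz_to_bark(f):
    """Traunmueller's approximation to the Bark critical-band scale."""
    return 26.81 * f / (1960.0 + f) - 0.53
```

As a sanity check, 1000 Hz lands near 8.5 Bark, and the mapping is monotonic in frequency.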
Apr. 25-27. (No class May 4). Least-squares estimation techniques.
Linear prediction/estimation; principal component analysis; Kalman filters.
(Wikipedia entries on the above)
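As one small illustration of least-squares estimation, here is linear prediction set up as an ordinary least-squares problem (the covariance method, solved directly with numpy's lstsq rather than the Levinson-Durbin recursion used in practice):

```python
import numpy as np

def lpc(x, order):
    """Find coefficients a[k] minimizing ||x[n] - sum_k a[k] x[n-k-1]||^2."""
    N = len(x)
    # each column k holds the lag-(k+1) history of the signal
    A = np.column_stack([x[order - k - 1 : N - k - 1] for k in range(order)])
    b = x[order:]
    a, *_ = np.linalg.lstsq(A, b, rcond=None)
    return a
```

On a signal that exactly obeys a second-order recurrence, the fit recovers the recurrence coefficients.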
May 2. Pitch and pitch estimation.
Alain de Cheveigné and Hideki Kawahara, "YIN, a fundamental frequency estimator for speech and music"
Puckette, M., Apel, T., and Zicarelli, D., 1998. "Real-time audio analysis tools for Pd and MSP".
(see also Terhardt's algorithm on https://jjensen.org/VirtualPitch.html)
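To give the flavor of YIN, here is a compact numpy sketch of its cumulative-mean-normalized difference function and threshold search (roughly steps 1-3 of the paper; the parabolic-interpolation and best-local-estimate refinements are omitted, so accuracy is limited to one sample of lag):

```python
import numpy as np

def yin_pitch(x, sr, fmin=80.0, fmax=1000.0, thresh=0.1):
    """Rough f0 estimate via YIN's normalized difference function."""
    tmin, tmax = int(sr / fmax), int(sr / fmin)
    W = len(x) - tmax                      # comparison window length
    d = np.array([np.sum((x[:W] - x[tau : tau + W]) ** 2)
                  for tau in range(tmax + 1)])
    cmndf = np.ones_like(d)                # d'(0) = 1 by definition
    cmndf[1:] = d[1:] * np.arange(1, tmax + 1) / np.maximum(np.cumsum(d[1:]), 1e-12)
    # first lag dipping below the threshold, then walk to the local minimum
    for tau in range(tmin, tmax + 1):
        if cmndf[tau] < thresh:
            while tau + 1 <= tmax and cmndf[tau + 1] < cmndf[tau]:
                tau += 1
            return sr / tau
    return sr / (tmin + np.argmin(cmndf[tmin:]))
```

On a clean 220 Hz sinusoid this comes back within a couple of Hz; real signals need the paper's remaining steps.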
May 9-11. Scales, key, and tonality. The Helmholtz theory of
consonance and dissonance; prevalence of scale degrees; the mysterious minor mode.
Lerdahl and Krumhansl, "Modeling Tonal Tension"
Parncutt, "The Tonic as Triad: Key Profiles as Pitch Salience Profiles of Tonic Triads"
Balzano, "What are musical pitch and timbre?", Music Perception, 1986
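A classic computational take on key profiles is Krumhansl-Schmuckler key finding: correlate a 12-bin pitch-class histogram against all 24 rotations of the major and minor probe-tone profiles. The sketch below uses the commonly quoted Krumhansl-Kessler numbers (treat them as transcribed from secondary sources):

```python
import numpy as np

# Krumhansl-Kessler probe-tone profiles, C-rooted, as commonly quoted
MAJOR = np.array([6.35, 2.23, 3.48, 2.33, 4.38, 4.09,
                  2.52, 5.19, 2.39, 3.66, 2.29, 2.88])
MINOR = np.array([6.33, 2.68, 3.52, 5.38, 2.60, 3.53,
                  2.54, 4.75, 3.98, 2.69, 3.34, 3.17])

def estimate_key(chroma):
    """Return (tonic pitch class, mode) maximizing profile correlation."""
    best = None
    for mode, profile in (("major", MAJOR), ("minor", MINOR)):
        for tonic in range(12):
            r = np.corrcoef(chroma, np.roll(profile, tonic))[0, 1]
            if best is None or r > best[0]:
                best = (r, tonic, mode)
    return best[1], best[2]
```

A histogram that emphasizes the C major triad over the rest of the C major scale comes out as C major rather than A minor.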
May 16-18. Segmentation, beat, and rhythm. Onset detection; tempo
estimation; score following.
Roger Dannenberg, "An On-Line Algorithm for Real-Time Accompaniment"
Ning Hu and Roger Dannenberg, "Bootstrap Learning for Accurate Onset Detection"
Michelle Daniels, "An Ensemble Framework for Real-Time Audio Beat Tracking" (PhD thesis)
Gilbert Nouno, "Suivi de Tempo Appliqué aux Musiques Improvisées" (Tempo Tracking Applied to Improvised Music; PhD thesis)
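As a baseline for the onset-detection readings, here is a spectral-flux detector in numpy: half-wave-rectified magnitude differences between successive frames, with a crude local-max-above-mean threshold. Real systems add smoothing, adaptive whitening, and better peak picking; the constants here are arbitrary.

```python
import numpy as np

def spectral_flux_onsets(x, sr, winsize=1024, hop=512, k=1.5):
    """Return rough onset times (seconds) via rectified spectral flux."""
    win = np.hanning(winsize)
    nframes = 1 + (len(x) - winsize) // hop
    mags = np.array([np.abs(np.fft.rfft(win * x[i*hop : i*hop + winsize]))
                     for i in range(nframes)])
    # sum only the spectral-magnitude increases frame to frame
    flux = np.sum(np.maximum(mags[1:] - mags[:-1], 0), axis=1)
    onsets = []
    for i in range(1, len(flux) - 1):
        local = flux[max(0, i - 8) : i + 8]
        if flux[i] == local.max() and flux[i] > k * local.mean() + 1e-9:
            onsets.append((i + 1) * hop / sr)
    return onsets
```

A tone that switches on abruptly halfway through a one-second file is flagged near the half-second mark.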
May 23-25. Sound and space. Sound propagation, microphones and speakers; impulse response estimation; spatial perception; sound projection models.
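One common trick in this territory: estimating an impulse response by frequency-domain deconvolution of a known excitation. The sketch below regularizes the spectral division to avoid boosting empty bins; swept-sine methods are the usual practice for real rooms.

```python
import numpy as np

def estimate_ir(excitation, response, n):
    """Estimate the first n samples of an impulse response by dividing
    the response spectrum by the excitation spectrum (regularized)."""
    L = len(excitation) + len(response)
    E = np.fft.rfft(excitation, L)
    R = np.fft.rfft(response, L)
    H = R * np.conj(E) / (np.abs(E) ** 2 + 1e-8)
    return np.fft.irfft(H, L)[:n]
```

Convolving a noise excitation with a known short impulse response and deconvolving recovers that response closely.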
June 1. (no class May 30). Statistical inference and basics of experimental design.
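In the spirit of the statistics session, here is a minimal permutation test on the difference of means, a distribution-free alternative to the t-test (the function name and defaults are my own):

```python
import numpy as np

def permutation_test(a, b, nperm=2000, seed=0):
    """Two-sample permutation test: p-value is the fraction of label
    shuffles whose |difference of means| is at least the observed one."""
    rng = np.random.default_rng(seed)
    pooled = np.concatenate([a, b])
    observed = abs(np.mean(a) - np.mean(b))
    count = 0
    for _ in range(nperm):
        perm = rng.permutation(pooled)
        if abs(np.mean(perm[:len(a)]) - np.mean(perm[len(a):])) >= observed:
            count += 1
    return (count + 1) / (nperm + 1)   # add-one to avoid a zero p-value
```

Two clearly separated samples yield a tiny p-value; identical distributions yield one spread over [0, 1].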
June 6 or 7? Possible extra meeting during finals week to look at sound challenge results.