271b-1a-mar31.mp4: Meeting 1a, March 31, 2021
0:00 the electronic music repertory
1:00 part 1: early computer music
5:00 part 2: audio recordings as medium
7:00 the tape music era (1943-2000 perhaps)
8:30 part 3: algorithmic music
16:30 outliers: Wendy Carlos and Maryanne Amacher
19:30 organization of the course (send me e-mail about what interests you)
23:00 patches to use: the Pure Data Repertory Project
25:00 PDRP documentation
28:30 example: running the patch for John Chowning, Stria
31:30 about the Stria patch (not a realization, just a study patch)
32:00 FM (frequency modulation) synthesis
33:00 sinusoids and FM sound at A440
34:30 FM ratio (modulation frequency to carrier frequency)
37:00 FM spectrum
37:40 main out and mute controls. Gain units for main out
39:30 spectrum of a tone whose frequency changes very slowly
40:00 index controls range of frequency deviation. Phase modulation
40:30 if using phase modulation, index adds bandwidth
42:30 frequency resolution of ears and of FFT-measured spectra
43:30 bandwidth (spread) is the product of index and modulation frequency
44:30 faster modulation gives distinct spectral peaks
47:00 modulating frequency controls separation of peaks but not amplitude
48:00 index changes the amplitudes but not the frequencies
49:30 when index is 0.38 the center frequency drops out (it bobs in and out)
51:30 using simple fractions as FM ratios (example, 1/4: periodic tone)
54:00 golden ratio as FM ratio (non-periodic)
56:00 spacing between peaks (reflection of negative frequencies)
56:00 Helmholtz theory of consonance and dissonance for periodic tones
57:30 musical fifth (7 half tones)
59:00 beating between common partials that almost coincide
1:02:00 making the interval sour
1:03:30 filtering to hear the individual harmonics
1:04:00 caution about gain while filtering sounds
1:05:00 perfect fifth (7.02 half steps)
1:06:00 tritone (dissonant interval: pairs of partials within a critical band)
1:08:30 tempered major triad
1:10:00 near-perfect triad (-3.16 and -7.02 half-steps)
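
As a quick reference for the FM material in Meeting 1a above, here is a minimal sketch in Python/NumPy (not one of the course patches) of a phase-modulation pair; the parameter values are only illustrative.

```python
import numpy as np

def pm_pair(fc=440.0, fm=440.0, index=3.0, sr=48000, dur=1.0):
    """Phase-modulation pair: a carrier whose phase is driven by a
    modulating oscillator at fm, with 'index' (peak phase deviation,
    in radians) setting the depth.  Sidebands appear at fc + k*fm and
    fc - k*fm, and the spread grows roughly as index times fm (cf. 43:30)."""
    t = np.arange(int(sr * dur)) / sr
    mod = index * np.sin(2 * np.pi * fm * t)      # modulating oscillator
    return np.cos(2 * np.pi * fc * t + mod)        # carrier, phase-modulated

# golden ratio as the FM ratio (Meeting 1b): a non-periodic spectrum
golden = (1 + 5 ** 0.5) / 2
tone = pm_pair(fc=440.0, fm=440.0 * golden, index=2.0)
```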

271b-1b-mar31.mp4: Meeting 1b, March 31, 2021
0:00 Stria by John Chowning
1:00 first tone in Stria, imitated using the patch
2:00 beginning of the real piece
6:30 time proportions in the piece
9:00 the pre-stored sequence in the patch
10:00 more faithful reconstruction from Huddersfield
12:00 theory: using the golden ratio as an FM ratio
14:00 geometric construction of the golden ratio
17:00 r-1, 1, r, and r+1 in geometric progression (same interval 3 times)
22:00 Fibonacci sequence and golden ratio related
24:30 spectrum of FM with golden ratio of modulation to carrier frequency
25:00 stacking three golden FM spectra
26:00 musical scales with different octave ratios and octave divisions
28:00 spectrum of four tones separated by golden ratio
29:00 three golden ratios sound almost like 2 octaves
32:00 same interval is less sour if we use golden-ratio spectra
34:00 spectrum of four tones separated by golden ratio sounds consonant
37:00 1/3 of an octave (if an octave is the golden ratio)
40:00 major tetrad in 12-tone scale sounds sour if using golden-ratio spectra
41:30 (0,3,6,9) tetrad plays the role of a major tetrad in Stria
42:30 dividing the 2:1 octave into other numbers of steps, playing the (0,4,7,12) chord
43:00 quarter tones (2:1 divided into 24, or tritone divided into 12)
48:30 how many steps to divide the "octave" into
53:00 how the Stria example is sequenced and what the parameters mean
56:30 amplitudes in this patch are in decibels (old practice)
57:30 index of modulation units
59:30 beating modulator oscillators
1:01:30 carrier and modulation frequencies: more general definition
1:04:00 layout of files in the Chowning study patch (includes a library)
1:06:00 the qlist ("cue list" or "score")
1:08:00 events in the qlist
1:10:00 cross-fade in event 2

271b-2a-apr7.mp4: Meeting 2a, April 7, 2021
0:00 how to build a piece out of the null piece
1:30 the patches 0.pd and 1.pd (without or with a qlist stepper/sequencer)
2:30 the null piece can now read input soundfiles
4:00 copying the null piece to create a new piece named "aardvark"
4:40 difference between library and score directories
6:00 controls on main window, guts hidden
7:00 keep heavy drawing tasks in subpatches you can hide to save CPU time
9:00 put together a phase modulation sound
10:00 input, DSP, and output windows are connected to control execution order
11:00 "input1" send/receive and "output1" throw/catch
15:00 modulating oscillator
16:00 carrier oscillator split into phasor~ and cos~
17:00 controlling the two amplitudes
19:00 index of modulation is amplitude of modulating oscillator
20:00 testing the FM pair
23:00 controlling and naming the parameters with the "genctl" abstraction
24:00 receive, number box, and genctl working together
25:30 sending a message to set a parameter as in a qlist
27:00 importance of choosing mousable, readable scales (units) for parameters
30:00 fourth-power curve for controlling gains
31:00 the amp4~ abstraction for controlling amplitudes or gains
34:00 controlling the index with amp4~ (later changed)
35:00 first cut at fully controllable FM pair
36:00 "stop" button resets everything (the genctl abstraction does this)
37:30 using MIDI units for the two frequencies
38:00 hang a number box off a control object for debugging
39:30 oops, index should use linear units, not fourth-power
40:00 fixing index to be linearly rampable
41:30 amp4~ handles ramping
43:00 genctl makes the control display progress of ramp
43:45 pitfall: sending a pair of numbers to a receiver that didn't expect it
44:30 npack to split elements of the pair
45:20 need an audio-rate ramp (line~) to control index to avoid zipper noise
46:30 how to make a default ramp time
51:00 rampable frequencies (ramp in pitch, not frequency)
53:00 wrongness of linearly ramped frequencies
54:00 5-msec default frequency sweep time (vs. 20 for amplitudes)
57:30 pitch sweep working
1:00:00 testing finished FM pair
1:01:00 alternative definition of parameters to specify frequency ratio
1:03:00 controlling the FM pair from main patch
1:04:00 adding a reverberator (rev3~)
1:06:30 rev3~ parameters don't ramp (use an external control line if you want)
1:10:00 reverberator working
1:10:30 using "grab" feature to get parameter snapshot
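
Two of the unit choices from Meeting 2a above, the fourth-power fader curve for gains (30:00) and ramping frequencies in MIDI pitch rather than Hz (37:30, 51:00), amount to the following arithmetic; this is a sketch of the scaling only, not the amp4~ or genctl abstractions themselves.

```python
def fader_to_gain(x):
    """Fourth-power curve: a 0..100 fader maps to a 0..1 gain, giving
    finer control at low levels than a linear fader would (cf. 30:00)."""
    return (x / 100.0) ** 4

def midi_to_hz(m):
    """MIDI pitch to frequency: ramping m linearly gives an even-sounding
    glissando, unlike a linear ramp in Hz (cf. 51:00-53:00)."""
    return 440.0 * 2.0 ** ((m - 69) / 12.0)

print(fader_to_gain(50))   # 0.0625: halfway up is only 1/16 of full gain
print(midi_to_hz(69))      # 440.0
```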

271b-2b-apr7.mp4: Meeting 2b, April 7, 2021
1:00 Jonathan Harvey, Mortuos Plango, Vivos Voco. Quick look at Harvey's paper in CMJ
3:45 difficulty of wielding additive synthesis in general
5:00 generating frequencies in MPVV, partials of the Winchester Cathedral bell
7:30 orchestration possibilities of voices plus additive synthesis
9:00 generating series of MPVV
11:30 sections of piece labeled by generating series
12:30 the glissandi
14:00 stable clusters at beginning and end of glisses suggest pedal tones
16:00 Rand's technique similar but harmonic series instead of bell
17:00 piece announces its series at 1:00
19:00 section 2 announced by its characteristic pitch
19:45 glissandi in the piece (distorted)
20:30 again, better (not perfect yet)
21:30 sung chord at end
24:30 about the study patch
26:45 opening the study patch
28:00 soundfile chooser to play soundfiles through patch
30:00 the patch has a built-in bell loop
31:00 the sigmund~ object for sinusoidal analysis
32:00 printed-out peaks
33:00 oscillator bank
35:00 comparing analyzed pitches to series in CMJ paper
37:00 separate window for culling and sorting sinusoidal components
40:00 different analyses of the sound are similar but distinct
42:00 bell frequencies might not really be stable (Doppler shift)
43:00 research question: what makes some spectra sweet and others ugly?
47:00 glissandi
48:00 making message boxes containing all the components
53:00 sorting components by frequency
55:00 glissing between sorted list of components
55:30 selecting components within a pitch range
56:00 verifying vocal chord at end of piece is taken from series
1:00:00 additive synthesis patches always enforce some writing style
1:02:00 why Harvey's piece sounds better than, say, Risset's Inharmonique
1:03:00 Latin inscription on the bell suggests the ethos of the piece
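
The culling and sorting of sinusoidal components described in Meeting 2b above (37:00-55:30) could be sketched like this: keep the loudest analyzed peaks within a frequency range and sort them by frequency before handing them to an oscillator bank. The peak list and function name are made up for illustration.

```python
def cull_and_sort(peaks, keep=8, fmin=0.0, fmax=20000.0):
    """peaks: list of (frequency_hz, amplitude) pairs, e.g. as printed out
    by a sinusoidal analysis.  Keep the 'keep' loudest peaks that fall in
    [fmin, fmax], then sort them by frequency for an oscillator bank."""
    in_range = [p for p in peaks if fmin <= p[0] <= fmax]
    loudest = sorted(in_range, key=lambda p: p[1], reverse=True)[:keep]
    return sorted(loudest, key=lambda p: p[0])

# made-up bell-like partials (Hz, amplitude), not Harvey's actual series
peaks = [(347, 0.9), (693, 0.5), (1239, 0.3), (1560, 0.2), (2081, 0.1)]
print(cull_and_sort(peaks, keep=3))
```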

271b-03a-apr14.mp4: Meeting 3a, April 14, 2021
0:00 looking at recorded-sound-based techniques earlier than planned
2:00 two currents in electronic music typified by GRM and IRCAM
3:30 studio technique isn't really the opposite of real-time
4:00 study piece is James Tenney, Collage #1
7:00 instantly recognizable sound to a 1963 audience emerges in middle
8:30 study patch is tenney-collage (tenney-collage.zip on PDRP site)
9:00 no patch could do everything interesting; this one just shows some ideas
11:45 sample rate of patch might be different from that of recording
13:00 reading and writing soundfiles through PDRP patches
14:00 the sampler itself is in the DSP subpatch, using the "clone" object
14:30 messages to "samp" play one "note" using the sampler
15:30 arguments to samp: pitch, gain, duration, sample#, onset, rise, decay
16:30 can play overlapping notes
17:30 microtonal pitches
18:00 making a cluster, several "notes" at same start time
20:00 units of gain (50 for unit gain) and pitch (MIDI, 60 for original speed)
21:00 duration in milliseconds is output duration, not duration in file
22:00 stacking 4 octaves
22:30 need for automation: potentially 1000s of individual notes
24:00 example of automation: phased loops
26:30 you might not want to use this in exactly this form
27:00 period and length in the looper
30:30 timing and enveloping of sampler playback in the patch (diagram)
31:00 onset is where in recording to start
31:30 the amplitude envelope shape
33:45 output is product of envelope generator output and playback signal
34:00 rise time, "duration", and "decay" time (nomenclature is slippery)
35:30 overall length is "duration" plus "decay" time (as in a keyboard synth)
38:00 special case, duration equal to rise time
40:00 setting all three time values equal gives time-symmetric envelope
40:30 sometimes called a Hann window
42:00 why aren't rise, duration, and decay just the same parameter
43:30 you can record audio input live and sample from it
45:00 simple example of automated granular sampling
46:45 changing parameters in a loop
47:45 overlap depends on total playing time and frequency of the loop
49:00 zero rise time to find an attack in the recording
50:00 this can make clicks but we're getting away with it here
50:45 slow rise and fast decay, and vice versa
51:00 desirability of having a choice of decay shapes for more natural decays
53:00 idea: name starting locations instead of giving them numerically
55:00 digression: tanh as a saturation function, different from envelope shape
57:00 digression: using Audacity to label onsets and the "text" object to store them
1:00:00 another thing you can do with the patch: granular sampling
1:01:00 exactly periodic repetition gives a tone
1:03:00 getting rid of the periodicity by randomizing grain time intervals
1:05:00 diagram of periodic and aperiodic repetition of grains
1:07:00 getting random numbers 0 to 10 in increments of 1/100
1:09:00 fixed "samp" messages in message boxes aren't very flexible
1:09:30 the looper example demonstrates parametrized messages
1:10:00 should add time reversal to the to-do list (patch doesn't do it yet)
1:11:00 item on the to-do list: you can't track down a speaking "voice" to alter it
1:14:00 more to-do list: this patch only does stereo in and stereo out
1:15:00 more to-do list: separate streams ("tracks") of samples
1:16:45 more to-do list: multichannel output and spatialization
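
A sketch of the sampler envelope described in Meeting 3a above (30:30-40:30): rise, "duration", and "decay" times in milliseconds, with the equal-times special case giving a symmetric, Hann-like window. The raised-cosine segment shape is an assumption; the patch's actual segments may differ.

```python
import numpy as np

def grain_envelope(rise_ms, dur_ms, decay_ms, sr=48000):
    """Amplitude envelope multiplied against the playback signal: rise over
    rise_ms, hold until dur_ms has elapsed, then fall over decay_ms, so the
    total length is dur_ms + decay_ms (cf. 35:30).  With raised-cosine
    segments, rise == dur == decay gives a symmetric Hann-like window."""
    rise = int(sr * rise_ms / 1000)
    hold = max(int(sr * (dur_ms - rise_ms) / 1000), 0)
    fall = int(sr * decay_ms / 1000)
    up   = 0.5 - 0.5 * np.cos(np.pi * np.arange(rise) / max(rise, 1))
    down = 0.5 + 0.5 * np.cos(np.pi * np.arange(fall) / max(fall, 1))
    return np.concatenate([up, np.ones(hold), down])

env = grain_envelope(100, 100, 100)   # the symmetric "Hann window" case
```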

271b-03b-apr14.mp4: Meeting 3b, April 14, 2021
0:00 aleatoric processes
4:30 making duration variable
6:00 using random time both for metronome and for duration of sample
7:00 changing playback location in time (onset)
7:30 range from 500 to 1500 (1499 to be exact)
9:00 sample rate correction seems not to be working yet
11:00 more than one dollar substitution in message boxes: the "pack" object
13:00 "float" object stores first value so that it doesn't trigger "pack"
13:45 "bang" sets the pair of numbers off, and you can asynchronously set them
14:30 you can do the opposite (make "pack" output a message on either input)
17:00 random number for onset, range chosen to fall within guitar solo
17:45 using trigger to put number in non-first inlet of pack and trigger it
19:30 all these parameters probably need names. Name ranges in pairs
20:00 random-onset-start and random-onset-range variables and number boxes
23:00 receives can go into control (message) version of line
24:30 default time grain of 20 milliseconds
27:00 control names for on/off and for time duration
28:00 suggestion for messages that can't ramp: use "f" to be sure of the message
30:00 putting an event in a message box to start randomized-playback process
32:00 adding pitch variation
34:30 three elements to pack together now, ordered by trigger "b f b"
35:30 adding randomizer for pitch
37:00 grain of randomization should be much smaller than 1 (1/1000 here)
38:30 you might want to specify middle of random range instead of start
41:00 trying to imitate beginning of Tenney piece
44:00 how could you make this loop?
45:30 you can have multiple processes running at once
47:00 another automated patch: granular sampler
48:45 variables we'll want: pitch, timing, duration, onset
50:00 bad but effective way to replace a name systematically (random to grainer)
53:00 inter-onset interval (time delay) no longer same parameter as duration
55:00 combine pitch, onset, and duration into a "samp" message
59:00 granular synthesis result from vocal portion of recording
1:00:00 randomizing pitch variation
1:01:00 randomizing onset makes a more diffuse sound
1:02:00 oops, another process had been running; rerunning the tests
1:06:00 possibility: sampling live instruments, playing patch as an instrument
1:07:00 more possibilities: attach MIDI controllers and/or use presets
1:09:00 inside the cloned "stereo-sample-reader" patch
1:10:00 inlet for messages to start playback, issued by "clone" object
1:11:00 more capabilities of "clone"
1:13:00 how you could adapt this patch to your own ends
1:14:00 Polansky has a paper about Collage #1 on his website
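
The kind of automation built up in Meetings 3a-3b above, randomizing pitch and onset within named ranges and packing the results into a "samp" message, might look like this in outline. The argument order follows the list at 15:30 of Meeting 3a; the ranges and the helper name are hypothetical.

```python
import random

def random_samp(pitch_center=60, pitch_spread=0.5,
                onset_start=500, onset_range=1000,
                dur_ms=250, gain=50, sample_no=1, rise=20, decay=20):
    """One randomized 'samp' event, in the argument order listed at 15:30
    of Meeting 3a (pitch, gain, duration, sample#, onset, rise, decay).
    Pitch is randomized in steps of 1/1000 semitone, onset in milliseconds."""
    pitch = pitch_center - pitch_spread \
            + random.randrange(int(pitch_spread * 2000)) / 1000.0
    onset = onset_start + random.randrange(onset_range)
    return ["samp", pitch, gain, dur_ms, sample_no, onset, rise, decay]

# a short stream, as a metronome-driven loop would produce
for _ in range(5):
    print(random_samp())
```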

271b-04a-apr21.mp4: Meeting 4a, April 21, 2021
0:00 Hildegard Westerkamp's Beneath the Forest Floor
0:30 musique concrète and soundscape composition compared
3:00 writings and videos available about the piece
3:30 Inside Computer Music by Clarke, Dufeu, and Manning
6:00 downloadable materials on their website including apps to explore pieces
7:00 Westerkamp's videos describing the genesis of the piece
14:30 why one recording of two ravens didn't work in the piece
16:00 another raven recording that proved useful
19:00 the raven recording in Audacity
20:00 1/70-second-long ululations
21:00 TaCEM app shows the layout of the entire piece
24:00 reconstruction of piece in graphical form showing recordings and processes
25:20 slowed-down raven sound in TaCEM app
27:00 reverberance extended by slowing the recording down
29:00 trying it in Audacity
32:00 three separate calls used so that it doesn't sound like sampling
34:30 overall view of piece in Audacity
35:00 raven calls dominate first part of piece
38:00 sounds of wind, lifeforms, and water
39:00 the sound of the motorized saw answers the raven
40:30 quiet section that recalls beginning
41:00 pitched sounds emerge in second half
44:00 other bird sounds are sources of pitched sounds
44:30 the wren recording is high-pass filtered and slowed down
45:00 7 kHz salient frequency
47:30 time stretching (just an example, not the original technique)
48:00 more faithful imitation using reverberation
49:30 related sound from the piece (but, oops, a different bird)
50:00 hovering chords not unusual in traditional tape music
51:00 treatment of rushing water with delays
53:00 spatializing stereo recordings
54:00 close microphone pair gives impression of stereo field
56:00 Westerkamp: not sampling and manipulation, but recording and processing
57:30 melodic material from thrush song
1:03:00 1990-era technology
1:04:00 pitch fields in recording-based music

271b-04b-apr21.mp4: Meeting 4b, April 21, 2021
0:00 adding to Tenney patch to imitate techniques in Westerkamp
1:30 need to filter birdsong to get clean sounds
3:00 closer look at thrush sound in Audacity
5:00 start times and durations of three thrush notes
7:00 playing thrush notes in Tenney patch
10:00 wrong transposition (fixed after class)
14:00 back to filtering
18:00 Butterworth low-pass and high-pass filters
19:30 testing filters with noise input
25:00 more about filters
26:30 improving on simple band-pass filter by pairing two detuned ones
31:00 frequency response of Butterworth versus simple low/high-pass filters
32:00 Butterworth filter design
36:00 three thrush notes into sampler
37:30 reverberation using rev3~ object
44:00 testing reverberator using an impulse as input
48:00 sampler into reverberator
52:00 fix filter cutoff to match transposition
55:00 why ramp up and down over 100 msec - gain ramp as modulation
57:30 frequency response of a ramp
58:30 the faster the envelope, the higher-frequency the aliasing
1:02:00 these ramps can be slow but in other cases you want a fast one
1:02:30 to preserve the attack in a recording make rise time instantaneous
1:04:30 gating (applying a noise gate to a recording)
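
The transposition arithmetic behind the slowed-down sounds in Meetings 4a-4b above (slowing a recording lowers its pitch and lengthens it, and the filter cutoff should be rescaled to match, as at 52:00) can be summarized as follows; the helper names are made up for illustration.

```python
def speed_for_transposition(semitones):
    """Playback speed that transposes a recording by the given number of
    half steps; e.g. -12 semitones means half speed and twice the length."""
    return 2.0 ** (semitones / 12.0)

def stretched_length(length_s, semitones):
    """New duration of a recording after transposing it by resampling."""
    return length_s / speed_for_transposition(semitones)

def scaled_cutoff(cutoff_hz, semitones):
    """Scale a filter cutoff along with the transposition (cf. 52:00)."""
    return cutoff_hz * speed_for_transposition(semitones)

print(speed_for_transposition(-12))   # 0.5
print(stretched_length(3.0, -12))     # 6.0 seconds
print(scaled_cutoff(7000, -12))       # 3500 Hz
```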

271b-05a-apr28.mp4: Meeting 5a, April 28, 2021
0:00 Trevor Wishart's Imago: wavesets and phase vocoders
2:00 the acousmatic musical style
2:30 same techniques as in soundscape composition but different musical style
4:00 idea of deriving a piece out of a small sonic seed: Xenakis's Concret PH
6:30 seed sound for Wishart: two whiskey glasses clinking together
9:00 formal approach: form imposed by composer, not emerging from soundscape
10:30 characteristic acousmatic move: gathering energy and release
12:30 close look at the seed sound using phase vocoder
17:00 time-frozen decay of the sound
19:00 uses of phase vocoder in Imago
19:30 frequency shifting (might not actually be the thing found in the piece)
23:00 pitch shifting and frequency shifting contrasted theoretically
28:00 incitement: use both pitch and frequency shifting to squash a spectrum
31:30 frequency shifting (single sideband modulation) compared with ring mod
35:00 more about phase vocoder
37:30 location in source file (center of analysis window)
38:00 window size
39:00 Hann window (same envelope as in granular example from Tenney)
41:00 how the Hann window modulates the sound
43:00 spectral stamp imposed by Hann window is 4x fundamental frequency wide
44:00 analyzing a sum of sinusoids: frequency resolution
45:00 2048 points (fundamental is about 24 Hz) - 100 Hz resolution
47:00 100 Hz is about low A flat
49:00 testing phase vocoder on a combination of sinusoids
54:00 what happens when the window is too small (i.e., sinusoids too close)
56:30 fundamental tradeoff: time resolution versus frequency resolution
1:00:00 two frequencies in seed sound, 1763 and 1863
1:04:00 time stretching using the phase vocoder patch
1:06:00 why the phase vocoder sounds spacy (or phase-y)
1:07:00 phase locking. Frequency domain is separated into "bins"
1:13:00 phase vocoder on speech
1:16:00 speech stretched using very short analysis window

271b-05b-apr28.mp4: Meeting 5b, April 28, 2021
0:00 more about phase vocoder: compared with reverberation
8:00 vibrato via pitch shifters
9:00 2700-ish frequencies can be made to sound like a vocal formant
11:00 waveset manipulation (specifically waveset duplication)
18:00 Two Women, Four Voices: simpler examples of Wishart's techniques
19:00 part 3 (Ian Paisley): phase vocoder. Vibrato and also too-small windowing
20:00 part 2: waveset duplication on Diana's voice
24:00 waveset duplication tested on two sinusoids
26:00 controls: minimum number of repetitions and minimum waveset length
34:00 high frequencies can re-emerge as formants
36:00 waveset duplication tested on Diana's voice
36:30 waveset duplication compared to wavetable oscillation
39:00 other waveset operations (not ready in this patch)
40:00 waveset substitution
42:00 waveset averaging
44:30 waveset duplication on seed sound
48:00 time-stretched seed sound through waveset duplication
52:00 why waveset duplication sometimes puts out higher pitches
58:00 Wishart's emphasis on hearing the source sound rather than the technique
1:02:00 even though the form is classical, result is guided by what he finds
1:05:00 Composer's Desktop Project (CDP), well-established software
1:06:00 using phase vocoder to shape envelope
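
A waveset, in Wishart's usage, is the span between successive upward zero crossings; duplication repeats each waveset some number of times. A minimal sketch of the idea from Meeting 5b above, leaving out the patch's minimum-repetition and minimum-length controls:

```python
import numpy as np

def waveset_duplicate(x, repeats=2):
    """Split x at upward zero crossings and repeat each segment 'repeats'
    times, stretching the sound without interpolating any samples."""
    up = np.where((x[:-1] < 0) & (x[1:] >= 0))[0] + 1   # upward zero crossings
    bounds = np.concatenate([[0], up, [len(x)]])
    out = []
    for a, b in zip(bounds[:-1], bounds[1:]):
        out.extend([x[a:b]] * repeats)
    return np.concatenate(out)

sr = 48000
t = np.arange(sr) / sr
y = waveset_duplicate(np.sin(2 * np.pi * 440 * t), repeats=3)   # ~3x longer
```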

271b-06a-may5.mp4: Meeting 6a, May 5, 2021
0:00 Natasha Barrett, Little Animals and (mostly) The Lock, from Hidden Values
3:00 debts to Denis Smalley, Jonty Harrison, (early) Trevor Wishart
8:00 "wav" and "aiff" can't hold long soundfiles with many channels
10:00 classic GRM-ish sounds in Little Animals
11:30 sound projection - technique/style in which a stereo tape is spatialized live
13:30 realistic-sounding, "present" images in stereo recordings
16:00 foreground and background layers
17:00 resonant filterbanks. Another example from Noanoa by Saariaho
19:00 the generating flute multiphonic
21:30 tradition (or trope) of discursive sounds against a static pitch field
24:00 The Lock avoids many electroacoustic-music tropes and keeps others
24:30 pitch fields in The Lock made by time-stretching and filtering the voice
27:00 spatialization is central to Hidden Values
28:30 spatial paths in concert with musical gestures and timbral evolutions
31:00 Barrett shows her setup in video from TaCEM
32:30 IRCAM Spatialisateur ("spat"); 2 views of spatial paths
34:00 Nuendo DAW feeding 32(?) channels into IRCAM spat via jack
36:00 acousmatic-style spatial projection is not the same as cinematic spatialization
36:30 cinematic style hearkens back to Stockhausen in the 1950s
37:30 angle cues via panning and distance cues via filtering and reverberation
38:30 John Chowning's paper The Simulation of Moving Sound Sources
40:00 three approaches to cinematic spatialization: Ambisonics, VBAP, and WFS
42:00 VBAP (vector base amplitude panning)
43:00 VBAP simulates direction but not distance of a virtual source
45:00 cinematic spatialization as used to simulate moving sources
46:30 Gestalt "common fate" grouping to fuse different source sounds
47:00 this is described in Four Criteria of Electronic Music by Stockhausen
49:00 Barrett placing three sounds spread out in space but moving together
50:30 simulating distance on top of VBAP
53:00 Ambisonics as a way to represent a local sound field (not cinematic)
54:30 to fully represent the sound field in a room takes about a million speakers
57:00 a sound field is a superposition of plane waves in all directions
58:00 at a single point there are only 4 independent channels
58:30 but humans pick up more about the sound field than a single point captures
59:30 two-dimensional Ambisonics as a function of time for every direction
1:01:00 if you sample the possible directions you get a Nyquist-like cutoff
1:02:00 to first order: DC and first-harmonic sinusoid around a circle
1:03:00 "radiation patterns" for Ambisonics: omni and figure-eights
1:04:00 weighted sums of three basis patterns to make others such as cardioid
1:05:00 you can theoretically record a first-order sound field at a single point
1:06:00 recording and playing back first-order Ambisonic signals
1:08:00 Ambisonic channels are in (or 180 degrees out of) phase (no delays)
1:10:00 issues of polarity and diffuseness in Ambisonic reproduction
1:12:00 higher-order Ambisonics. Second-order requires 5 channels
1:13:00 equivalent to sampling the circle of directions at 5 points
1:14:00 linearity of Ambisonics allows superposition of sound fields
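
A sketch of the two-dimensional first-order Ambisonics described at the end of Meeting 6a above: an omnidirectional channel plus two figure-eight channels encode a source direction, and each speaker takes an in-phase weighted sum of the three. The normalization constants below are one common convention, not necessarily the one Barrett's system uses.

```python
import math

def encode_foa_2d(sample, theta):
    """Encode a sample arriving from angle theta (radians) into W (omni)
    plus X and Y (figure-eight) channels; weights only, no delays."""
    return (sample * math.sqrt(0.5),      # W: omnidirectional pattern
            sample * math.cos(theta),     # X: front-back figure-eight
            sample * math.sin(theta))     # Y: left-right figure-eight

def decode_to_speaker(W, X, Y, phi):
    """A speaker at angle phi plays an in-phase weighted sum of the three
    channels; the weights form a cardioid-like pickup aimed at phi."""
    return math.sqrt(0.5) * W + 0.5 * (math.cos(phi) * X + math.sin(phi) * Y)
```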

271b-06b-may5.mp4: Meeting 6b, May 5, 2021
0:00 Wave field synthesis (WFS)
2:30 imagining a faraway sound source heard through a series of open windows
3:30 special case: all sources ("windows") get the same signal
5:00 beam width depends on wavelength of sound and total width of source array
6:00 side-beams (additional directions of radiation)
8:00 spacing of speakers matters and should be a wavelength or less
9:00 aliasing in the angle of radiation depends on individual speaker spacing
10:30 projecting at an angle requires that you delay the signals in the speakers
13:00 requirement to avoid spatial aliasing
14:30 true WFS would require a speaker every 1/2 inch
16:00 WFS can theoretically simulate a sound source inside the listening area
17:00 VBAP and Ambisonics can only simulate faraway sources
18:00 one-dimensional wavefield arrays actually output cylindrical waves
19:00 OK, you could use a pizza-box-shaped listening space
21:00 problems that arise when audience is dispersed in a listening space
23:00 problems when panning to simulate virtual sources between speakers
25:00 yet another problem: the Haas (precedence) effect
29:00 in extreme case, one foot off-axis can cause Haas trouble
31:00 Ambisonics seems to get a larger listening sweet spot than VBAP
32:00 but directionality is more diffuse (less clarity of image)
34:00 highly present sounds of traditional acousmatic music benefit from "projection"
35:00 summary: advantages and disadvantages of VBAP, WFS, and Ambisonics
39:00 implications for listening to music in listening spaces in the future
40:00 binaural reductions of loudspeaker spatialization
41:00 how binaural spatialization works
43:00 head-related impulse responses and transfer functions (HRTFs)
45:00 spatial perception requires freely moving head
47:00 the ORTF mic setup
48:00 mixing an ORTF pair to a monaural signal
50:30 why worry about cinematic spatialization at all?
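
The delay-and-sum steering and spatial-aliasing points from earlier in Meeting 6b above (8:00-14:30) can be put into numbers; the speaker spacing and beam angle below are hypothetical.

```python
import math

C = 343.0   # speed of sound in m/s

def steering_delays(n_speakers, spacing_m, angle_deg):
    """Per-speaker delays (in seconds) that tilt a line array's beam by the
    given angle: each successive speaker gets a slightly longer delay."""
    step = spacing_m * math.sin(math.radians(angle_deg)) / C
    return [n * step for n in range(n_speakers)]

def aliasing_frequency(spacing_m):
    """Rough frequency above which the array's radiation pattern aliases:
    the wavelength should be no shorter than the speaker spacing (cf. 8:00)."""
    return C / spacing_m

print(steering_delays(8, 0.2, 30))   # 20 cm spacing, beam tilted 30 degrees
print(aliasing_frequency(0.2))       # ~1715 Hz
```
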
51:00 example: Janet Cardiff's 40-voice motet
52:00 octagonal loudspeaker arrays frequently used in electronic music concerts
53:30 affordance: clarity from separating "voices", moving or not
55:00 concert halls blur out details of amplified sounds
58:00 music in front of listener versus music played from behind
59:00 suggestion: put speakers and live musicians on stage together
1:02:00 more on Barrett, The Lock - the nature of the sound sources
1:02:30 TaCEM screen showing genealogies of materials
1:04:00 processing is mostly time stretching, filtering, and reverberation
1:06:00 electroacoustic-style pitch fields easy to make because of source
1:08:00 spatial separation to enhance clarity
1:09:00 apparently used all three spatialization models for IRCAM performance
1:10:00 Spatialisateur used to simulate binaural recording of loudspeakers via HRTFs
1:11:00 advantages: no room reverberation, and listener is in sweet spot
1:13:00 simple, arcing spatial paths
1:16:00 binaural spatialization better at sides than front

271b-07a-may12.mp4: Meeting 7a, May 12, 2021
0:00 Laurie Spiegel, Appalachian Grove
1:00 GROOVE system at Bell Labs - computer-controlled analog electronics
2:00 this was Mathews's first foray into real-time computer music
3:00 genesis and design of "hybrid" studios
5:00 related to sequential drum (Mathews and Moore)
6:00 what the piece sounds like and how it was generated
8:00 control inputs to analog patch - ADSR triggers, pitches, filter parameters
12:00 center-panned voice only speaks on on-eighth-notes
13:00 choice of pitches
14:30 the ADSR envelope generators
16:00 what happens when sustain level changes while ADSR is in D or S phase
18:00 can enter release phase while in A or D
20:30 study patch: signal paths first, then control
23:00 ADSR abstraction
24:00 output is line segments so usually need to apply a nonlinear function
24:30 band-limited sawtooth generator
25:30 resonant filter
26:00 panner
27:00 control panel: tempo in attacks per minute
28:00 ADSR abstraction has a peak-amplitude parameter (unlike analog ones)
30:00 preview of mechanism to build control panel
31:00 six parameters for each ADSR (one for amplitude and one for the VCF)
32:00 pitch, pan, and q are per-voice (three each eventually)
32:30 units for ADSR envelopes: times in milliseconds
34:30 "peak" is in whatever units used by what you send the ADSR to
35:30 sustain level is a percentage of peak
36:30 effect of changing duration of ADSR triggering pulse
39:30 rearranging the patch to contain three oscillators
42:00 control (message) line objects to allow ramps in ADSR parameters
43:00 using the new parameters
44:00 testing ramping the ADSR release time
45:00 comparison of LFO versus ramps to control parameters
47:00 aliasing if parameter is changing faster than ADSR is triggered
50:00 historical hardware was (sort of) using audio-rate parameter changes
52:00 pitch is fixed for duration of each individual note (not continuous)
54:00 abstraction for the oscillator so we can make 3 copies
56:00 pan controlled inside abstraction
57:00 individualize pan using a control with a per-copy name
58:00 gotcha: don't retype an abstraction if you're editing the interior
59:00 use outlets for output signals and add them outside the abstraction
1:00:00 parameters for filter: center frequency (as pitch) and q
1:02:00 need two filters, one for each output channel
1:05:00 managing triggers for ADSRs (delay equal to note duration)
1:06:00 random choice of which voice to trigger each tick
1:07:00 pitch1, etc., parameters to set pitch
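
The ADSR abstraction described earlier in Meeting 7a above (14:30-35:30) can be sketched as a list of line-segment breakpoints: times in milliseconds, a "peak" in whatever units the destination uses, and sustain as a percentage of peak. This is an illustration of the parameters, not the abstraction itself.

```python
def adsr_breakpoints(attack_ms, decay_ms, sustain_pct, release_ms,
                     note_ms, peak=1.0):
    """Breakpoints (time_ms, level) of one ADSR note lasting note_ms:
    rise to peak, decay to the sustain level, hold, then release.
    Assumes note_ms covers the attack and decay; the real abstraction
    also handles entering the release during A or D (cf. 18:00)."""
    sustain = peak * sustain_pct / 100.0
    return [(0.0, 0.0),
            (attack_ms, peak),
            (attack_ms + decay_ms, sustain),
            (note_ms, sustain),
            (note_ms + release_ms, 0.0)]

print(adsr_breakpoints(10, 50, 60, 200, note_ms=500, peak=100))
```
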
1:07:30 rebuilding control panel
1:09:00 genctl takes an argument to specify default (reset) value
1:10:00 testing the three-voice patch driven by a metronome
1:11:00 imitate detuning of analog oscillators
1:12:00 grab all the parameters and put in a message box
1:14:00 rhythmic detail in the piece: center voice is accented
1:16:00 center-panned voice only appears on on-sixteenths
1:17:00 no hardware sequencer (it was in the computer program)
1:18:00 computer also had live analog inputs

271b-07b-may12.mp4: Meeting 7b, May 12, 2021
0:00 photo of control panel for (I think) GROOVE system
1:30 center frequency, bandwidth and q of a resonant filter
8:30 Moog ladder filter compared to vcf~ object
10:00 affordances of computer-controlled analog versus digital synthesis
12:30 imitating rhythmic pattern in Appalachian Grove
22:00 will skip doing the accents correctly
23:31 control of pitches
26:00 the value object, used to read next-pitch into correct voice
27:00 why you don't need a trigger to control order of pitch versus ADSR
31:30 selecting pitches from a collection
33:00 one way to make a pitch set: 7-element array with a message to fill it
34:30 perhaps better alternative: use a text object to hold the pitch set
38:00 array for pitch probabilities
41:00 bob~ object: ladder filter, alternative to vcf~
53:00 adding reverberation
54:00 automatic control panel generator
55:30 looking at saved patch to learn how to create objects
57:00 windows in Pd can be sent messages by window name
1:00:00 make-ctls subpatch. Catching the first in a sequence of messages
1:02:00 counter to set y location of controls in new panel
1:03:00 supplying default arguments to messages using unpack
1:08:00 time stretching (sort of) using long reverberation
1:11:00 short versus long sinusoidal burst into infinite reverb
1:16:00 this is probably what was done with bird sounds in Westerkamp piece
1:16:30 other ways: phase vocoder (e.g., Wishart); resonant filters (Saariaho)

271b-08a-may19.mp4: Meeting 8a, May 19, 2021
0:00 invited talk: Kerry Hagan (Univ. of Limerick), ...of pulses and times...
4:30 generative music and electronic music genres
6:00 inspired by Nick Collins. Probabilistic analysis of beat music
7:30 algorithmic versus generative music and electronic music genres
8:30 another inspiration: Morton Feldman's metric modulations
9:30 layering tempos with different pulses
11:00 playing the piece from the patch (note: recording is only one channel)
18:00 why not use samples
19:00 top-down design of piece
20:00 score is series of messages
21:30 metric modulation subpatch
24:00 metronome objects generating different subdivisions of beat
25:00 non-uniform probabilities
27:00 10 percent nonuplets, etc.
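
A sketch of the non-uniform subdivision choice mentioned at 24:00-27:00 of Meeting 8a above: each beat, a subdivision is drawn from a weighted table. The weights below (including the 10 percent nonuplets) are illustrative guesses, not Hagan's actual values.

```python
import random

# subdivision of the beat -> probability (illustrative values only)
subdivision_weights = {1: 0.30, 2: 0.25, 3: 0.20, 4: 0.15, 9: 0.10}

def choose_subdivision():
    """Weighted random choice of how to subdivide the next beat, as a layer
    of metronomes with non-uniform probabilities would make it."""
    return random.choices(list(subdivision_weights),
                          weights=list(subdivision_weights.values()))[0]

print([choose_subdivision() for _ in range(16)])
```
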
28:00 random selection of sounds depending on beat
30:00 occasional detour into triplets (dubstep reference)
33:00 sound generator subpatches
35:30 using vline~ object to make stuttered pulses
37:30 attacks don't exactly repeat so that successive events aren't identical
39:30 FM instrument borrowed from another piece, Morphons and Bions
41:30 chirps, drones, and swoops
43:00 patch can play automatically or be performed live
45:00 automatic meter changes
47:00 Pd programming style
48:30 preparing patches to be played without composer present
50:00 control surfaces
52:00 memoryless processes and longer time durations
54:30 piece can sound different every time although the form is fixed
57:00 comparison with fixed-media pieces
1:00:00 real time is fundamentally different compositionally
1:01:00 multichannel sound and real-time spatialization
1:03:00 no filtering used (but some broad-band sounds and some narrow ones)
1:06:00 using filters in generative or algorithmic music
1:06:30 serialism and representation of all musical values as numbers
1:08:00 role of accident in stochastic music
1:11:00 narrative and musical form
1:14:30 how strictly should a performance follow the "score" (message sequence)

271b-09a-may26.mp4: Meeting 9a, May 26, 2021
0:00 more on Manoury, Pluton. Running patch using recorded piano and MIDI
6:30 "section 100" - qlist1.txt and score1.txt
7:30 score follower compares incoming MIDI from piano with score
8:00 example: C-D-E-F-G-F-E-D-C-C-C-C-C-C in score and repeated Cs from keyboard
9:00 jumps to event 19
10:30 concert setup for Pluton. MIDI interface, PianoBar designed by Buchla
11:00 audio connections
11:30 logical diagram: frequency & pitch shifters, reverb, spatialization
12:30 oscillator bank controlled by FFT analysis of piano; and live sampler
13:30 how MIDI input is used: first, velocity-controlled spatialization
14:30 second use for MIDI input: training a Markov chain
15:30 third use: score following. Barry Vercoe and Roger Dannenberg
16:30 score-following-driven sequencing (qlist)
18:30 don't need to send MIDI back to "play" piano
19:30 score following algorithm. Possibility of very tight synchrony
21:30 explanation of how the example from 8:00 worked
24:30 finding best possible path connecting score with performance
27:00 each path has a conditional probability (incorrectly called "likelihood")
30:30 most probable path implies where we want the computer to be
32:00 equivalent to Viterbi algorithm for a particular hidden Markov chain
34:00 related term: "dynamic programming"
35:30 limitation: this approach doesn't take advantage of timing of events
36:00 why people don't often use score following today
37:30 polyphonic score following
39:00 only using note onsets as input to score following is a severe limitation
44:00 research problem: score following for Pluton using audio alone
46:00 "notes.txt" file in patch shows what labels are where in score
47:00 Event 31 (section IV): FFT-controlled oscillator bank
47:30 section II introduces oscillators and Markov chain and alternates them
48:30 section III Markov alone
50:30 "gumbo" objects that control pitches of oscillators
51:00 oscillator control panel. Capital and small "o". Oto2, Oto4, etc.
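
A toy version of the dynamic-programming alignment sketched in Meeting 9a above (24:30-34:00): score the possible paths connecting performance to score and take the cheapest one to estimate the player's position. This is a simplified illustration, not the actual Pluton follower.

```python
def follow(score, performance, skip_penalty=1, wrong_penalty=2):
    """Edit-distance-style alignment of performed pitches against the score.
    dp[i][j] = cheapest way to explain the first i performed notes if the
    player has reached score position j; the cheapest final j is our guess
    at where the player is."""
    n, m = len(performance), len(score)
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for j in range(m + 1):
        dp[0][j] = j * skip_penalty                    # skipped score notes
    for i in range(1, n + 1):
        dp[i][0] = i * wrong_penalty                   # spurious played notes
        for j in range(1, m + 1):
            match = dp[i-1][j-1] + (0 if performance[i-1] == score[j-1]
                                    else wrong_penalty)
            dp[i][j] = min(match,
                           dp[i][j-1] + skip_penalty,     # skip a score note
                           dp[i-1][j] + wrong_penalty)    # extra played note
    return min(range(m + 1), key=lambda j: dp[n][j])

score = list("CDEFGFEDCCCCCC")
print(follow(score, list("CCCCC")))   # enough repeated Cs and the best path
                                      # jumps ahead to the run of Cs (cf. 8:00)
```
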
56:00 amplitudes of oscillators controlled by FFT analysis as in a vocoder
58:00 spectral shift and response speed controls
59:00 why this doesn't sound like a vocoder used in the usual way
1:01:00 using the four 12-voice banks as one larger bank (gumbo5 object)
1:02:30 why the "gumbo" object has separate "set" and "bang" messages
1:03:30 "mode 1" loads pitches into the oscillators from piano keyboard
1:08:00 aliasing effect in FFT analysis if speed set high
1:10:00 setting speed to zero freezes spectrum
1:11:00 cross-fading between spectra
1:12:00 different wavetables
1:14:00 patch to simulate piano (audio and MIDI). Section III, events 1-6
1:18:00 Pd patch adapted from original Max patch

271b-09b-may26.mp4: Meeting 9b, May 26, 2021
0:00 Markov chain object ("mark" receive name)
1:00 recording MIDI into the Markov chain, which controls a sampler
1:30 "trec" message records; "play1" plays back
3:00 record a chain C-D-E-F-G-C
3:30 oops, need to clear the chain first
4:00 ambitus control
5:00 transition probabilities in Markov chain taken from MIDI velocity
6:00 time limit between notes to make a transition
6:30 restart parameter in case chain gets stuck
8:00 common pitches between different pitch sets to make transitions
9:30 how rhythm is input from MIDI
11:00 if Markov chain has absorbing states it lasts a random amount of time
13:00 section 2: oscillators first, then Markov chain at event 7
16:00 Markov chains have faded out; oscillators take over
18:00 reemergence of Markov chain
18:30 recording arpeggios into sampler
19:00 ambitus is at zero
20:00 Markov chain disappears a second time
21:30 playing chords into Markov chain (not demonstrated)
23:30 imperfect separation between interaction and score (events 43:1-5)
26:00 three metronomes at different speeds
27:30 event 3 records from piano to sampler bank 2
30:00 pitch shifter ("harmonizer") to make celeste effect
32:30 playing through section V, part E
38:30 granular time stretching in part F
41:00 time stretching played without piano part
42:00 sample played without time stretch
43:00 playback at different speeds. The "rspeed" control
46:00 events 44:1-3 (section E part F)
49:00 ending of piece, 64-to-one time stretch
50:00 spatializer, repeated patterns with variable speed
51:00 tables (arrays) giving x and y coordinates for spatialization
55:00 velocity control of spatialization
57:30 velocity-controlled crossfades between oscillator banks
1:00:00 velocity control connects piano to electronics
1:05:00 research problem: make a cheap, robust piano MIDI detector
1:06:00 designing pickups for individual piano strings
1:08:00 a more recent algorithm (from Manoury, En Echo): "3F"
1:11:00 three generating pitches generate inharmonic spectra
1:13:00 secret sauce: linear integer combinations of generating frequencies
1:15:00 example: using incoming pitch as one of the three frequencies
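
A minimal sketch of the first-order Markov chain in Meeting 9b above: record pitch-to-pitch transitions and then walk them, restarting when the chain reaches a pitch with no recorded continuation. Here all recorded transitions are weighted equally, whereas the real object takes its probabilities from MIDI velocity (5:00).

```python
import random
from collections import defaultdict

transitions = defaultdict(list)     # pitch -> recorded next pitches

def record(pitches):
    """Record a played line, e.g. C-D-E-F-G-C, as pitch-to-pitch transitions."""
    for a, b in zip(pitches, pitches[1:]):
        transitions[a].append(b)

def play(start, length=12):
    """Walk the chain; on a pitch with no continuation (an absorbing state),
    restart from the starting pitch instead of getting stuck (cf. 6:30)."""
    out, current = [start], start
    while len(out) < length:
        current = random.choice(transitions[current]) if transitions[current] else start
        out.append(current)
    return out

record([60, 62, 64, 65, 67, 60])    # C D E F G C as MIDI pitches
print(play(60))
```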