270c-01a-jan5.mp4: Meeting 1, Jan. 5, 2021
0:00 overview of syllabus
6:40 assignment 1
10:15 solution 1: using pure data
13:00 pure data utility patch for playing and seeing musical notes: 0.common.pd
14:00 Pure Data patch library
16:15 Pd "mod" object (integer modulus)
21:50 circle of fifths as 7n mod 12
24:00 solution 2: using julia and csound
27:00 printing csound "note cards" in julia
34:30 julia solution requires a rewrite of the math to vectorize it
37:00 relationship between pitch (in MIDI units) and frequency
38:00 major and minor thirds in common and graphical notation
40:00 pitch continuum ("floating point MIDI units")
41:00 just and tempered triads as pitch
43:30 conversion between pitch and frequency (mtof and ftom objects)
51:30 overtone series in musical notation
54:00 frequency shifting (as pitches and frequencies, not notes)
57:00 odd harmonics of A55
1:01:30 inharmonic frequency shifts
1:03:30 intervals between pairs of notes with fixed frequency spacing
1:07:00 pitch related to log frequency (midi = 69 + 12*log_2(frequency/440))
1:09:00 negative frequencies. Frequency shift to generate microtonal pitch sets
1:10:00 callout to Stockhausen, Mantra and Chowning, Stria as examples
1:12:30 quantization. Rounding to nearest integer in Pd
1:20:00 next time: non-integer-spaced pitches rounded to integers
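The pitch/frequency relation at 37:00 and 1:07:00, and the circle-of-fifths construction at 21:50, can be sketched in a few lines of Python (mtof and ftom are the Pd object names; these functions just mirror them):

```python
import math

def mtof(m):
    """MIDI pitch to frequency in Hz (like Pd's [mtof])."""
    return 440.0 * 2.0 ** ((m - 69) / 12.0)

def ftom(f):
    """Frequency in Hz to floating-point MIDI pitch (like Pd's [ftom])."""
    return 69 + 12 * math.log2(f / 440.0)

# circle of fifths: pitch classes 7n mod 12 visit all 12 before repeating
fifths = [(7 * n) % 12 for n in range(12)]
```

Note that mtof(69) is exactly 440 Hz and each added semitone multiplies frequency by 2^(1/12).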
270c-01b-jan8.mp4: Meeting 2, Jan. 8, 2021
0:00 Continuing with pitch manipulation: Western scale as int(12n/7)
2:00 the "for" object in the 270c library, counting to 7
11:00 adding 1 to change western-scale mode
22:00 non-octave scales, general maximally uniform spacing
23:30 specifying time onsets
25:00 maximally-uniform time onsets
26:00 quintuplets, 5 against 4 against 3
27:30 quantized rhythm
30:00 western scale as rhythm
33:00 repeating the quantization process
38:00 western major scale as 3 triads end to end, also maximally uniform
41:00 building major scale as f-a-c-e-g-b-d
43:00 Balzano picture of 7-tone scale in 12-tone octave
43:45 more operations on pitches. Quantizing to just scale
50:00 pitch sequence to try
51:00 text object for searching through list of pitches
59:00 general tool for quantizing to a set of pitches
1:01:00 interpolating between pitch sets
1:03:00 preliminary: stretching and squeezing interval sets
1:04:00 inversion
1:05:00 sliding through different stretch factors (using trigger object)
1:06:30 back to interpolation
1:08:00 the map object in the 270c library
1:09:30 expression for weighted mean (aka linear interpolation)
1:10:00 oops, need a trigger object
1:15:00 oops, rounding negative numbers works differently
1:16:30 working interpolation
1:17:30 extrapolations
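The int(12n/7) construction from 0:00, with the add-1 mode trick from 11:00, generalizes to any maximally uniform spacing. A sketch (the function name and the way `mode` is folded in are my guesses at a general form, not the course patch):

```python
def max_uniform(n, steps=12, notes=7, mode=0):
    """n-th degree of a maximally uniform scale of `notes` notes in a
    `steps`-step octave; `mode` rotates the step pattern (how the
    lecture's +1 trick is assumed to generalize)."""
    return (steps * (n + mode)) // notes - (steps * mode) // notes

# mode=1 yields the western major scale
major = [max_uniform(n, mode=1) for n in range(8)]
```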
270c-02a-jan12.mp4: Meeting 3, Jan. 12, 2021
0:00 generalities about formalized composition
3:00 Xenakis sets a piano on fire
5:30 folding via modulus and absolute value
7:00 revisit assignment 1 using folding
11:00 interval between 2 pitch classes
17:00 mod 12, minus 6, absolute value, plus 6
19:30 interval vectors
20:00 serialize (etc) objects in library
22:30 interval vector of (0,3,6,9) chord
24:00 generating all pairs from (0,1,2,3)
30:00 map the pairs through the chord to get all intervals
33:00 music theory: structure of complete tetrachords
38:00 how to complete an interval into a complete tetrachord
39:00 example - chains of tetrachords sharing bichords
43:30 algorithmic search for chords containing subsets
44:00 nested for loops in Pd
48:00 counting with the second digit as least significant digit
50:00 choosing each possible pair of digits once
54:00 finding all the intervals in a tetrachord using "for" loops
56:00 serialize and parallelize library objects
59:00 tohisto and fromhisto library objects
1:05:00 finding library objects and documentation in 0.common.pd
1:07:30 prepare for how to find all chords containing a set: set intersections
1:11:30 apply library object
1:13:00 iterate through transpositions of a chord
1:17:00 intersection of a fixed set with transpositions of a second one
1:20:00 finding the unique E/F# bichord in a transposed (0 5 6 8) tetrachord
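The interval-class and interval-vector computations of 19:30-33:00 reduce to a few lines; this Python sketch uses min(d, 12-d) rather than the mod/minus/abs chain at 17:00, but computes the same thing:

```python
from itertools import combinations

def interval_class(a, b):
    """Interval class (0..6) between two pitch classes."""
    d = (a - b) % 12
    return min(d, 12 - d)

def interval_vector(chord):
    """Counts of interval classes 1..6 over all pairs in the chord."""
    v = [0] * 6
    for a, b in combinations(chord, 2):
        v[interval_class(a, b) - 1] += 1
    return v
```

For the (0,3,6,9) chord of 22:30, every pair is a minor third or a tritone.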
270c-02b-jan15.mp4: Meeting 4, Jan. 15, 2021
0:00 interpolation between sequences
0:30 text object to hold a sequence
4:00 reading sequence using "text get"
12:30 incorrect sequencer
15:30 playback tempo control
17:00 text editing wizardry failure
20:00 corrected sequencer
21:50 same but with expr so that tempo control isn't upside-down
22:30 get notes out of both sequences in parallel
23:45 Parker rhythm applied to Bach pitches
24:30 pitch squeezing/stretching using expr
28:00 inverting about half-integer pitches
30:00 pitch interpolation between sequences
33:30 pitch interpolation plus stretch/squeeze
35:45 interpolating the rhythms (inter-onset intervals and durations)
45:00 application: Concret PH, Xenakis
46:00 straight-line paths through a hyperbolic paraboloid
49:00 pairs of P/Hs sharing edges
52:00 same structure as time/frequency pairs
57:00 rhythmic interpolation in Jupiter by Manoury
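The pitch interpolation at 30:00 is just a pointwise weighted mean of two equal-length sequences (and rhythmic interpolation at 35:45 does the same on inter-onset intervals). A minimal sketch:

```python
def interpolate(seq_a, seq_b, t):
    """Weighted mean (linear interpolation) between two equal-length
    pitch sequences: t=0 gives seq_a, t=1 gives seq_b, and t outside
    [0,1] extrapolates."""
    return [(1 - t) * a + t * b for a, b in zip(seq_a, seq_b)]
```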
270c-03a-jan19.mp4: Meeting 5, Jan. 19, 2021
0:00 assignment 2: 12-bar blues noodler
3:00 probability distributions
4:30 the "random" object in Pd
5:00 pseudorandom number generators and seeds
7:00 Restarting Pd resets random seed. (Deterministic stochastic process)
10:30 a simple pseudorandom number generator
14:00 histograms to measure probability distribution
21:00 checking flatness of probability distribution coming from "random" object
25:00 histogram of digraphs
30:00 small number of trials gives crazy histograms
32:00 thought experiment: write a piece of music in which everything is random
37:00 Xenakis's idea of stochastic music
40:30 what the seed does internally
42:00 generating a random number with a given probability distribution
54:00 verifying the probability distribution with a histogram
1:00:00 interpolating between two distributions
1:01:00 quantiles as generalization of random choice with given distribution
1:05:00 interpolation version 1: cross-fading between distributions
1:11:00 interpolating horizontally between two distributions
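Generating a random number with a given probability distribution (42:00) amounts to an inverse-CDF lookup over the cumulative weights; a sketch with a histogram check in the spirit of 54:00, seeded so it restarts deterministically as discussed at 7:00:

```python
import random

def sample_index(weights, u):
    """Map a uniform variate u in [0,1) to an index drawn with
    probability proportional to `weights` (inverse-CDF lookup)."""
    total = sum(weights)
    acc = 0.0
    for i, w in enumerate(weights):
        acc += w
        if u * total < acc:
            return i
    return len(weights) - 1

def histogram(samples, n):
    """Count occurrences of each value 0..n-1."""
    counts = [0] * n
    for s in samples:
        counts[s] += 1
    return counts

rng = random.Random(1)  # fixed seed: a deterministic stochastic process
draws = [sample_index([1, 2, 1], rng.random()) for _ in range(4000)]
```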
270c-04a-jan26.mp4: Meeting 6, Jan. 26, 2021
0:00 more on randomness: sampling without replacement
11:00 arpeggiation without repetition (Lansky technique)
17:00 random permutations
20:00 example: voicing chords as permutation
36:00 random processes that depend on previous outcomes
39:00 stationary processes
42:00 Markov chains versus memoryless processes
47:00 memory of future outcomes?
52:00 Markov condition: dependence on past is limited to immediate past
54:00 random walks (example of a Markov chain)
1:06:00 making a Markov chain in Pd
1:07:00 higher-order Markov chains
1:08:00 limitations of Markov modeling of musical sequences
270c-04b-jan29.mp4: Meeting 7, Jan. 29, 2021
0:00 training a Markov chain from a corpus
4:00 sparseness of Markov chain matrices trained from a corpus
5:30 continuous versus discrete state spaces
8:30 alternative to building a Markov transition matrix: indexing the corpus
11:00 example: smerdyakov object, used by Manoury
16:00 encoding extra information when states are labeled by pitch
20:00 polyphonic music through smerdyakov
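Training a Markov chain from a corpus (0:00) can be done either as a transition matrix or, as with smerdyakov at 8:30, by indexing the corpus itself. This sketch takes the indexing view, storing the list of observed continuations per state:

```python
import random
from collections import defaultdict

def train(corpus):
    """Index all first-order transitions observed in a corpus
    (a list of states, e.g. MIDI pitches)."""
    table = defaultdict(list)
    for a, b in zip(corpus, corpus[1:]):
        table[a].append(b)
    return table

def generate(table, start, n, rng):
    """Walk the chain: choose uniformly among recorded continuations,
    which reproduces the trained transition probabilities."""
    out = [start]
    for _ in range(n - 1):
        out.append(rng.choice(table[out[-1]]))
    return out

melody = [60, 62, 64, 62, 60, 62, 64, 65, 64, 62, 60]
table = train(melody)
line = generate(table, 60, 20, random.Random(0))
```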
270c-05a-feb2.mp4: Meeting 8, Feb. 2, 2021
0:00 more on random walks: boundaries at ends of finite-length walks
5:00 adding stationary transitions to a random walk
6:00 boundaries on states or between states
7:30 implementation in Pd
13:00 histogram showing steady state probabilities of random walk
16:00 adding gravity (asymmetric step probabilities)
17:00 adding stationary transitions in Pd
20:00 imitating continuous time (discrete state) Markov process
21:00 diffusion rate of the process
26:00 tour of smerdyakov C code
30:00 mixtures of two Markov chains
32:00 oops, 4th order, no transitions between Donna Lee and Bach invention
33:00 weighting the mixture to crossfade between the two
37:00 don't play transitions that have long delays
39:00 back to C implementation
42:00 methods of smerdyakov object
43:30 "tick" method
44:30 how variable-order Markov chain is implemented
47:00 adding up weights of all transitions (used later to normalize)
48:00 uniformizing rhythm
54:00 writing out a random walk explicitly in smerdyakov
57:00 chord slop (playing back chords that are recorded from a keyboard)
59:00 text-based Markov chain as demonstration of adaptively variable order
1:06:00 varying the order as function of desired number of continuations
1:10:00 entropy
1:12:00 incitement: when is one ramp greater than one at a different increment?
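The random walk with boundaries and gravity (0:00 and 16:00) fits in a few lines of Python; the clipped step is one of the boundary options discussed at 6:00, and the asymmetric probability supplies the gravity:

```python
import random

def random_walk(n, lo, hi, p_up=0.5, seed=0):
    """Random walk on integers in [lo, hi], reflecting by clipping at
    the boundaries; p_up != 0.5 adds 'gravity' (drift toward one end)."""
    rng = random.Random(seed)
    x = (lo + hi) // 2
    out = []
    for _ in range(n):
        step = 1 if rng.random() < p_up else -1
        x = min(hi, max(lo, x + step))  # clip at the boundaries
        out.append(x)
    return out

walk = random_walk(1000, 48, 72, p_up=0.4)  # gravity pulls it downward
```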
270c-05b-feb5.mp4: Meeting 9, Feb. 5, 2021
0:00 deterministic processes with desired statistics (z12 process)
12:00 keeping track of each state's excess
16:00 dueling metronomes (equivalent process)
19:00 outputs a familiar sequence (01001010...)
20:00 L-system that outputs the same thing (actually its complement 10110101...)
21:30 incitement: convolving 1000 and 1618 msec metronome outputs
27:00 same thing as sweeping through an orchard (lattice points in first quadrant)
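The L-system at 20:00 can be written as a generic string-rewriting loop. With the standard Fibonacci-word rules 0→01, 1→0 (my choice of rule set; it produces the 01001010... pattern of 19:00 itself, where the lecture's rules produce its complement) it reproduces what the dueling metronomes output:

```python
def l_system(axiom, rules, n):
    """Apply string-rewriting rules n times to the axiom."""
    s = axiom
    for _ in range(n):
        s = "".join(rules[c] for c in s)
    return s

# Fibonacci word: the familiar 01001010... sequence
fib = l_system("0", {"0": "01", "1": "0"}, 8)
```

Each iterate is a prefix of the next, and the lengths follow the Fibonacci numbers.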
270c-06a-feb9.mp4: Meeting 10, Feb. 9, 2021
0:00 supervised and unsupervised machine learning
3:00 artificial neural networks (ANNs)
6:00 perceptron function: sigmoid function of a linear combination of inputs
9:30 behavior for large inputs
10:00 XOR function as something a one-layer perceptron network can't "learn"
12:00 hidden layers of neurons / perceptrons
14:45 training the ANN as fitting a spline to input data
16:30 ANN package for pure data
18:00 training ann_mlp object in Pd
20:00 training set (file train-xor.txt)
23:00 running the trained ANN
25:00 the trained neural net as a text file
34:00 (tried and failed to use julia to plot ANN function)
37:00 run-network-to-file patch runs an ANN so we can graph the behavior
44:00 (give up trying to graph and go on): how ANNs fail
45:45 Lee and Wessel paper, ANN to train additive synth from sound recording
52:00 subpatch playback.pd works on a single partial
55:00 training set from saxophone solo recording
57:00 is it better to train one ANN for all partials, or train separate ones?
1:00:00 testing the ANN we trained on the saxophone
1:08:00 listening to output of ANN imitating the sax (using same input data)
1:10:00 why this is an ill-posed problem for an ANN
1:14:00 2D should be too low a dimensionality for this to work
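The XOR point of 10:00-12:00 can be made concrete: one hidden layer of sigmoid perceptrons suffices if the weights are right. A hand-wired (not trained) sketch:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def xor_net(x, y):
    """Hand-wired 2-2-1 sigmoid network computing XOR; the weights are
    chosen by hand (not trained) to show why a hidden layer is needed."""
    h_or  = sigmoid(20 * x + 20 * y - 10)   # approximately x OR y
    h_and = sigmoid(20 * x + 20 * y - 30)   # approximately x AND y
    return sigmoid(20 * h_or - 20 * h_and - 10)  # OR and not AND
```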
270c-06b-feb12.mp4: Meeting 11, Feb. 12, 2021
0:00 sonic challenge 3: choice 1: make an algorithm to attack sax solo problem
1:00 choice 2 (discrete-event problem): make a melody given a melodic contour
4:30 theoretical discussion of back-propagation training for ANNs
6:30 training as a minimization problem
9:00 total error function to minimize
10:00 optimization assumes we have a trusted distance function on the output
14:00 training in practice using ann_mlp object in Pd to learn a parabola
15:30 graphing output of ANN
18:30 graphing XOR ANN result
20:00 training for XOR again, failure case
23:00 badly trained ANN wasted 3 hidden neurons solving the same part of problem
31:00 behavior outside of training set: sigmoids better than polynomials
37:30 ANN trained on parabola levels off outside training set
38:00 training failure example on parabola
40:45 another failure mode
43:00 surprise: adding hidden neurons doesn't make training much slower
48:00 2-dimensional paraboloid, 10 hidden neurons, graphing outside training set
50:00 successfully trained paraboloid, behavior outside training set
56:00 heatmap view
57:30 existence of python machine learning packages (not available in julia)
58:30 how to prepare inputs to ML algorithm from audio
1:00:00 is "deep learning" musically useful, either now or potentially?
270c-07a-feb16.mp4: Meeting 12, Feb. 16, 2021
0:00 Sam Pluta guest talk: training neural nets to generate synthesis parameters
1:00 training on tiny training sets
2:00 inspired by Rebecca Fiebrink's Wekinator
2:30 instead of using complex inputs, try simple inputs and complex outputs
3:30 training ANNs when there's no right answer
4:00 Pluta's instrument
5:00 two x/y pads as input
6:30 two surfaces, each with 4 synth algorithms, 8 ANN mappings
10:30 training the ANN with 6 points
12:00 testing the network on one of the 6 training points
13:00 another synth, FM7 (DX7 imitation in supercollider)
14:30 Following Frederick Olafsen's example of how to use FM7
16:30 code for parametrizing FM7
17:00 controlling range of parameters
19:00 analog systems are more forgiving of unstable networks than digital ones
20:00 in FM example, controlling oscillators that then control synth parameters
20:30 how to train the network
21:00 example of a fragile network (small usable parameter range)
22:00 adding a point and retraining
25:00 classifiers don't make much sense in this environment
25:30 very narrow space that works, most parameter combinations will be lame
28:00 4-dimensional space is already impossible to visualize
28:30 impossible for a human to design the mapping the ANNs make automatically
30:00 the ANN can exactly hit the training points; fun thing is interpolations
31:00 using FLUCOMA MLP regressor from Huddersfield
31:50 training is better in Keras but it's much slower to train or load
34:00 we're in a moment where new ideas are opening up
35:00 in this approach failures are interesting
37:00 FLUCOMA is out already
38:00 not an example of overfitting
40:00 about overfitting. Controlling by minimizing total variation
46:00 good and bad range of output of sigmoid
48:00 best if lots of useful outputs in middle of range of sigmoid
49:30 Pd ann_mlp (based on FANN package) defaults to (-1, 1) interval
50:00 implication for training partial amplitudes
53:00 how this works out for Lee/Wessel technique
59:00 if natural output range is (-1,1) quiet partials work badly
1:03:00 normalizing before presenting training set to ANN
1:06:00 badly normalized ANNs on Weeks solo sounds like Hammond organ
1:12:00 theoretically Lee/Wessel ANN should never make foldover
1:13:00 Q&A and discussion
1:19:00 quick review of how to operate Lee/Wessel patch
270c-07b-feb19.mp4: Meeting 13, Feb. 19, 2021
0:00 sound challenge 3 again: making melodies with desired contours
1:00 rule-based composition as optimization: melodizer.pd
4:00 digression: debugged version of sax Lee/Wessel technique
7:00 vocal recording through saxophone transform
10:00 graph of first partial amplitude as function of pitch and loudness
12:30 comparing ANN output with measured amplitudes
13:30 graph of measured tenth partial amplitudes
15:00 compared with ANN output
16:30 unsupervised learning algorithms (no coordinate distinguished as "output")
18:30 self-organized maps (SOMs)
20:30 example: Henrik Frisk SOM analysis in free improvisation
21:30 "learning" a circle
24:00 standard SOM algorithm described
33:30 convergence of SOM algorithm
36:00 how learning parameters affects convergence
41:00 letter "phi" shape (harder problem)
44:00 topological condition for SOMs to work
45:30 using SOM output
47:30 necessity of trusting the distance measure (metric)
49:00 possible metric for spectra
53:00 code for SOM object in Pd
56:00 k-means clustering algorithm
1:07:30 example of a bad outcome
1:08:00 outcome depends on (random) initial state
1:12:00 bad outcomes are local minima that aren't global
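The k-means algorithm of 56:00 in a minimal 1-D Python sketch; as at 1:08:00, the outcome depends on the random initial centroids, which is where the bad local minima come from:

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Plain k-means on 1-D points: assign each point to its nearest
    centroid, move each centroid to its cluster's mean, repeat."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)  # random initial state
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda j: abs(p - centroids[j]))
            clusters[i].append(p)
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return sorted(centroids)

data = [0.0, 0.1, 0.2, 5.0, 5.1, 5.2]
centers = kmeans(data, 2)
```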
270c-08a-feb23.mp4: Meeting 14, Feb. 23, 2021
0:00 will cover paper by Arjovsky et al., Wasserstein GAN, in week 10
3:45 today's subject is optimization algorithms applied to musical composition
4:30 amusement: applying Lee/Wessel to Judy Garland vocals
8:30 separate networks work better than one combined one
10:30 Garland analysis file, 10.66 msec per slice
11:15 getting signal power (add partial power to partially reject orchestra)
13:30 melodizer: optimizing melody generator
14:00 how to import C code into Pd using "cfc" object (simpler than externs)
16:00 cfc example: greatest common divisor (gcd) using Euclid's algorithm
18:00 melodizer setup: 29-note melody with quarter-note rhythm
18:30 First term in melody's "value" or "score" evaluation: transitions
22:30 overall score is weighted sum of scores by the different criteria
23:00 criteria are: transitions, repetitions, contour, harmony, and similarity
23:30 combinatorial optimization algorithm: change 1, 2, or 3 pitches per step
25:30 accept a change even if it doesn't improve the score, as long as the score doesn't go down
27:00 melody optimized according to transitions criterion alone
29:30 repetition criterion: fragments reappear. (wrong, corrected 45:00)
30:30 Markov analysis or other local methods won't capture this
32:00 first argument to cfc message selects 1 of 5 functions
33:00 tour of the cfc source code
42:45 "getscore" C function in cfc to make weighted sum of component scores
43:45 "perturb" function alters melody
44:30 functions for the individual criteria
45:00 repetitions criterion punishes us for over-repeating pairs of notes
45:30 contour criterion
46:45 tonality criterion
50:00 similarity criterion - repeated fragments
52:00 optimizing by contour alone
54:00 (fixing a mistake in contour code and trying again)
56:30 adding transitions criterion to contour
58:00 all criteria combined
58:30 optimization strategy: relaxation (temporarily drop one requirement)
1:03:00 finished product (sonic challenge 3, melodic version)
1:09:00 Stockhausen BBC lecture: "if this were an octave the piece would be over"
1:11:00 another possibility: dueling Markov chains (Bayesian networks, sort of)
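The optimization loop of 23:30-25:30 is a stochastic hill climb that accepts any non-worsening move. A sketch with a toy smoothness criterion standing in for the lecture's five (the pitch range and step count here are arbitrary choices, not the patch's):

```python
import random

def optimize(melody, score, steps=2000, seed=0):
    """Perturb 1-3 pitches at a time, keeping any change whose score
    does not go down (so plateaus can still be crossed)."""
    rng = random.Random(seed)
    best = list(melody)
    for _ in range(steps):
        trial = list(best)
        for _ in range(rng.randint(1, 3)):
            trial[rng.randrange(len(trial))] = rng.randint(48, 72)
        if score(trial) >= score(best):
            best = trial
    return best

def smoothness(m):
    """Toy criterion: prefer small melodic steps (a stand-in for the
    lecture's transitions criterion)."""
    return -sum(abs(a - b) for a, b in zip(m, m[1:]))

rng0 = random.Random(1)
start = [rng0.randint(48, 72) for _ in range(12)]
result = optimize(start, smoothness)
```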
270c-08b-feb26.mp4: Meeting 15, Feb. 26, 2021
0:00 Approach to sonic challenge 3 using conditioned Markov chains
1:00 input to smerdyakov. Example: top line of Bach invention
2:00 changing order of Markov chain
3:30 imposing a priori probabilities
4:30 probability space representation of conditioning a Markov chain
7:00 conditioning is just hacking a smaller piece out of the sample space
11:30 Bayes' rule (flipping conditional probability backward)
16:00 specifying an a priori probability distribution
17:00 model: melodic contour is considered as blurred output of the Markov chain
19:00 Bach with random accidentals
20:00 force that (stupid) output to be a melodic contour we specify
31:00 result: just multiply transition probabilities by contour weighting function
34:00 normalization is confusing but we can just assume it's constant
36:00 message to smerdyakov to impose an a priori probability distribution
38:00 testing smerdyakov with imposed contour
40:00 possible to give an impossible condition (0/0 probability)
41:00 no sanity check, but give everything an a priori prob of at least 10^-9
43:00 setting order to 0
44:30 implementation in smerdyakov source code
48:00 example of a Markov chain to generate syncopated rhythm
56:00 limitation: the three processes are ignoring each other
57:00 because of syncopation the chains don't advance at same times
58:00 for the chains to condition on each other we might need to add states
1:00:00 conditioning Markov chains on each other's (not yet computed) outputs
1:01:00 I think this is a Bayesian belief network (that's a buzz-phrase)
1:02:00 Bayes' law allows us to rearrange the arrows of causality
1:05:00 if not in real time you can re-imagine this as an optimization problem
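The conclusion of 31:00-41:00 fits in one function: multiply each transition probability by the a priori weight, floor everything at 10^-9 so an impossible condition can't produce 0/0, and renormalize. A sketch:

```python
def condition(probs, prior, floor=1e-9):
    """Multiply a row of transition probabilities by an a priori
    weighting and renormalize; the floor keeps an impossible
    condition from producing 0/0."""
    w = [max(p, floor) * max(q, floor) for p, q in zip(probs, prior)]
    total = sum(w)
    return [x / total for x in w]

# second state is forced: the prior zeroes out the first, and the
# chain itself never reaches the third
row = condition([0.5, 0.5, 0.0], [0.0, 1.0, 1.0])
```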
270c-09a-mar2.mp4: Meeting 16, March 2, 2021
0:00 typical rhythmic constraint: avoiding collisions in time
2:00 example from Charlotte Truchet paper: rhythmic canons
4:00 random walk; in the limit of smaller time steps, Brownian motion
5:30 parameter of Brownian motion: variance per unit time
6:00 doubling speed multiplies standard deviation by sqrt(2)
8:00 fractal property: re-scaling speed gives another one renormalized
9:30 another fractal: snowflake curve
10:00 length is infinite
13:00 this mirrors composers' idea of having small scale reflect large scale
15:00 Stockhausen's idea of playing Beethoven's 9th symphony at 440 Hz
16:00 back to L system to make self-similar sequence of 0s and 1s
17:00 there's also a top-down way of seeing it that goes from large to small
21:00 fractally partitioning a line segment.
22:00 this shows up in Poème électronique
25:00 Cyrille Henry's (I think) L-system examples in Gem
33:00 chaotic flows on 3-sphere
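The snowflake-curve length claim at 10:00 follows from the refinement rule: each step replaces every segment with four segments a third as long, so the total length grows without bound:

```python
def koch_length(n):
    """Total length of the Koch snowflake curve built on a unit
    segment after n refinements: each refinement replaces every
    segment with 4 segments 1/3 as long, multiplying length by 4/3."""
    return (4.0 / 3.0) ** n
```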
270c-09b-mar5.mp4: Meeting 17, March 5, 2021
(spliced; part of original is missing)
0:00 relationship between stochastic processes and chaos
0:30 iterating the function 4ax(1-x)
4:00 a=1: Values near zero repelled exponentially
5:00 smaller values of a. a>1/4: zero (unstable), and another fixed point
8:00 when slope of crossover point less than -1, other fixed point is unstable
10:00 period doubling
11:00 understanding the function as taffy-pulling
12:00 a=1: Chebyshev polynomial - doubling the angle, so equivalent to 2x mod 1
14:00 start with a random real number, and output binary digits as a sequence
15:00 so iteration (if a=1) is equivalent to a coin-tossing process
18:00 continuous-time chaotic process
19:00 flows in one or two dimensions
20:00 one-dimensional flows have simple behaviors
23:00 oscillators as two-dimensional flows
26:00 possible behaviors in neighborhood of fixed point in two dimensions
30:00 three dimensions or up: possibility of chaotic behavior
33:00 example: Lorenz attractor
36:00 output as audio signal
38:00 if you can't observe it exactly, you can base a random process on it
39:00 you could measure the entropy of the random process
39:30 how to make fractal curves out of a dynamical system
40:00 snowflake curve interior as eventual fate of an iterated function
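The a=1 case of 12:00-15:00 can be checked numerically: substituting x = sin²(πt) turns the map 4x(1−x) into angle doubling, t → 2t mod 1, which shifts out the binary digits of t one per iteration:

```python
import math

def logistic(x, a=1.0):
    """One step of the iterated map 4*a*x*(1-x) from the lecture."""
    return 4.0 * a * x * (1.0 - x)

# at a = 1 the map is conjugate to angle doubling: if x = sin^2(pi*t)
# then logistic(x) = 4 sin^2 cos^2 = sin^2(2*pi*t)
t = 0.123
x = math.sin(math.pi * t) ** 2
```

The fixed points at a=1 are 0 (unstable) and 3/4.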
270c-10a-mar9.mp4: Meeting 18, March 9, 2021
0:00 example of a (non-chaotic) flow: differences between three phasors in pairs
2:00 which of two phasors is "ahead" of the other measured against a third one
4:00 same thing as a control operation (incitement from end of Feb. 2 class)
5:00 the 0100101... L-system sequence if one is a golden-section factor faster than the other
7:30 why did subtracting 2 phasors into a comparator give 3 pitches?
24:00 state space of two oscillators can be drawn as a rectangle
27:00 flows in a two-dimensional phase space that is topologically a torus
33:00 coupled oscillator pairs
34:00 can require that all arrows in the flow point northeast to avoid stable points
35:00 flow described as a differential equation determined by a vector field
37:00 first-order equation implies no inertia
39:00 hard-syncing one oscillator to another
42:00 hard sync as a flow
44:00 soft sync
48:00 making a sawtooth oscillator from scratch in Pd (block 1 subpatch)
53:00 soft-syncing two of them using a comparator
58:00 other type of coupled oscillator network: non-physical pool table
1:00:00 simplified phase space is 4x the top of the table
1:01:00 more correctly, the phase space is four-dimensional
1:02:00 very incorrect pool table: triangle
1:03:00 assuming fixed speed can reduce phase space to 3 dimensions
1:04:30 resulting behavior
1:05:00 very strange behavior if triangle has one very small angle
270c-10b-mar12.mp4: Meeting 19, March 12, 2021
0:00 topic not covered yet: coupling Markov chains to each other
3:30 another uncovered topic: flows on 3-sphere (unit quaternions)
6:30 other topics: Gendy (Xenakis); signal-rate staircase; 3F
9:00 Gendy algorithm: waveforms made of end-to-end connected line segments
13:00 Pd patch to imitate Gendy
14:00 random-walk abstraction. Random walks with constraints and bias
21:00 S709 by Xenakis - you can check on waveforms in the original recording
23:00 Audio rate staircase: phasors and sample/hold
31:00 bell designer (3F technique)
35:00 2- or 3- dimensional lattice of frequencies
36:30 ways of choosing finite subsets of lattice
38:00 the zero-frequency line (or in general, hyperplane)
40:00 frequently occurring frequency pairs separated by f, g, or h
41:00 can be realized as oscillators or resonant filters for example
43:00 seeding the pseudo-random sieve
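A rough Python sketch of the Gendy-style generator of 9:00-14:00: breakpoint amplitudes and durations each do a clipped random walk (the ranges and step sizes here are arbitrary placeholders, not Xenakis's parameters):

```python
import random

def gendy_breakpoints(n, seed=0, amp_step=0.2, dur_step=2):
    """Generate n (duration, amplitude) breakpoints for an end-to-end
    line-segment waveform; both values do constrained random walks,
    clipped to fixed ranges as in the random-walk abstraction."""
    rng = random.Random(seed)
    amp, dur = 0.0, 10
    points = []
    for _ in range(n):
        amp = min(1.0, max(-1.0, amp + rng.uniform(-amp_step, amp_step)))
        dur = min(20, max(2, dur + rng.choice([-dur_step, 0, dur_step])))
        points.append((dur, amp))
    return points

bps = gendy_breakpoints(100)
```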
270c-11a-mar16.mp4: Meeting 20, March 16, 2021
0:00 Arjovsky, Chintala, and Bottou, Wasserstein GAN paper
2:00 Think of unsupervised learning as learning a probability distribution
3:00 ground truth is a probability distribution
4:00 we wish to map a simpler distribution to the "real" one
4:30 insight 2: the Kullback-Leibler divergence isn't an ideal measure
5:30 definition of KL divergence
7:00 think of divergence from f to g as "how well you can encode f using g"
8:00 why it's always nonnegative
9:30 problem with KL as a thing to minimize in a training procedure
12:30 maximizing likelihood of observations given the model
13:00 but what if likelihood is zero? adding noise to make log likelihood finite
15:30 asymptotically, measured KL divergence becomes log likelihood plus constant
16:30 added noise typically 10% of amplitude of signal
19:00 earth-mover distance. We saw a form of this earlier (quantile cross-fade)
21:30 probably related to the optimal transport problem
22:30 hard to estimate earth-mover distance in higher dimensions
23:30 even NP-complete problems are often tractable in practice
24:30 a minimax principle relating earth-mover distance to an integral
24:30 (Kantorovich-Rubinstein duality)
30:00 Can treat this as a supervised learning problem
31:30 maybe this doesn't measure earth-mover distance but it seems to work anyhow
34:00 general setup for auto-encoders
37:00 applying earth-mover's distance to train it
38:00 training algorithm as a nested loop (inside loop learns earth-mover distance)
40:00 this is "algorithm 1" in paper
41:00 one trick: don't have to update earth-mover function to estimate slopes
43:00 typical application: sampling from the "learned" distribution
45:30 how it's possible to affect specific choice of sample
48:00 comparison: self-organized map solves a different minimization problem
49:00 idea: make a joystick that travels through a set of desired outcomes
50:30 this might be what is called normalizing flows
51:30 IRCAM paper: train a VAE to learn how to parametrize a synthesizer
53:00 big problem area: dealing with time behavior as a learnable feature
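The KL divergence defined at 5:30, with the coding interpretation from 7:00, in a short sketch; note it is asymmetric and blows up when q assigns zero probability where p doesn't, which is the zero-likelihood problem flagged at 13:00:

```python
import math

def kl(p, q):
    """Kullback-Leibler divergence D(p||q) for discrete distributions,
    in bits: the extra cost of encoding p with a code designed for q.
    Undefined (infinite) if q is 0 anywhere p is positive."""
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

p = [0.5, 0.5]
q = [0.9, 0.1]
```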