
2. Sinusoids

For several different reasons, sinusoids pop up ubiquitously in both theoretical and practical situations having to do with sound. For one thing, sinusoids occur naturally in a variety of ways, and if one happens to couple physically with the air and is of audible frequency and amplitude, we'll hear it. Second, sinusoids behave in simple and predictable ways when the elementary operations (amplification, mixing, and delay; section 1.5) are applied to them. Third, one can add up sinusoids to make arbitrary signals or digital recordings (with some provisos having to do with convergence); this ability is extraordinarily useful for analyzing and synthesizing sounds.


2.1 Elementary Operations on Sinusoids

Here is a picture that might help visualize the mathematics of sinusoids. Imagine a point on the rim of a spinning bicycle wheel:

\includegraphics[bb = 167 86 456 706, scale=0.75]{fig/B01-sine-bicycle.ps}

The progress in space of the point has horizontal ($x$) and vertical ($y$) components. If we forget the vertical component and graph just the horizontal component over time we get a sinusoid. If the point is initially at an angle $\phi_0$ from the $x$ axis, we get the familiar formula:

\begin{displaymath}
x(t) = a \cdot \cos(2 \pi f t + \phi_0)
\end{displaymath}

where $f$, the frequency, is the number of revolutions per unit time, and the amplitude $a$ is the radius of the wheel.
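As a quick numerical check of the formula (a Python sketch; the parameter values are arbitrary), sampling the sinusoid shows that the point returns to its starting value after one full revolution, $1/f$ seconds later:

```python
import math

def sinusoid(a, f, phi0, t):
    """x(t) = a * cos(2*pi*f*t + phi0): amplitude a, frequency f (Hz),
    initial phase phi0 (radians), time t (seconds)."""
    return a * math.cos(2 * math.pi * f * t + phi0)

# After one full revolution (1/f seconds) the point is back where it began.
a, f, phi0 = 1.5, 440.0, 0.25
x_start = sinusoid(a, f, phi0, 0.0)
x_one_rev = sinusoid(a, f, phi0, 1.0 / f)
```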

Now for the three elementary operations. First, amplification, say by a linear gain $g$, replaces $x(t)$ above with

\begin{displaymath}
g \cdot x(t) = ga \cdot \cos(2 \pi f t + \phi_0)
\end{displaymath}

If the gain is specified in decibels (say, $g_{dB}$), then we convert from decibels to a linear gain by applying the definition of decibels backward:

\begin{displaymath}
g = {{10} ^ {{g_{dB} / 20}}}
\end{displaymath}
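The conversion is a one-liner in code (a Python sketch; the function name is ours, not from any particular library):

```python
def db_to_linear(g_db):
    """Convert a gain in decibels to a linear gain: g = 10^(g_db / 20)."""
    return 10.0 ** (g_db / 20.0)

# 0 dB leaves the amplitude unchanged; every +20 dB multiplies it by 10.
unity = db_to_linear(0.0)
ten = db_to_linear(20.0)
```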

Applying a delay of $\tau$ to a sinusoid (or, for a recording, shifting it forward or backward in time by the equivalent of $\tau$ seconds, positive or negative) has the effect of replacing $t$ with $t-\tau$ in the formula:

\begin{displaymath}
x(t-\tau) = a \cdot \cos(2 \pi f (t-\tau) + \phi_0)
= a \cdot \cos(2 \pi f t + (\phi_0 - 2\pi \tau f))
\end{displaymath}

This leaves the amplitude $a$ and the frequency $f$ unchanged, but subtracts an offset $2\pi \tau f$ from the initial phase.
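This equivalence can be verified numerically; the following Python sketch (with arbitrary parameter values) evaluates the delayed sinusoid both ways:

```python
import math

a, f, phi0, tau = 1.0, 100.0, 0.3, 0.0015   # arbitrary values

def x(t):
    return a * math.cos(2 * math.pi * f * t + phi0)

def x_phase_shifted(t):
    # the same sinusoid with 2*pi*tau*f subtracted from the initial phase
    return a * math.cos(2 * math.pi * f * t + (phi0 - 2 * math.pi * tau * f))

# Delaying by tau and subtracting the phase offset give the same samples.
pairs = [(x(t - tau), x_phase_shifted(t)) for t in (0.0, 0.001, 0.0042, 0.01)]
```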

The effect of mixing two sinusoids (the third elementary operation) is more complicated. We'll start by supposing the two have equal frequencies (but not necessarily the same amplitudes or initial phases). Here is a picture:

\includegraphics[bb = 90 199 521 600, scale=0.75]{fig/B02-2sines.ps}

The parallelogram represents the initial situation at time zero; the entire thing rotates about the origin as indicated by the arrows, without changing size or shape. If the initial phases of the two are $\phi_1$ and $\phi_2$, the angle between them is either plus or minus $\phi_2 - \phi_1$ and, by the law of cosines, we get

\begin{displaymath}
c^2 = a^2 + b^2 + 2ab \cdot \cos(\phi_2 - \phi_1)
\end{displaymath}

(it doesn't matter which order $\phi_1$ and $\phi_2$ appear in the formula, since the cosine of the difference is the same either way). Depending on the phase difference, $c$ may lie anywhere between $\vert b-a\vert$ (if $\phi_2 - \phi_1 = \pi$ so that the two sinusoids are exactly out of phase) and $a+b$ (if $\phi_2 - \phi_1 = 0$ so that they are perfectly in phase).

The resulting initial phase depends in a complicated way on all of $a$, $b$, $\phi_1$, and $\phi_2$--the easiest way to compute it would be to convert everything to rectangular coordinates and back, but we will put that off for another day.
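For the curious, that rectangular-coordinate computation can be sketched with complex phasors in Python (the function name is ours); it also reproduces the law-of-cosines amplitude above:

```python
import cmath
import math

def mix_same_freq(a, phi1, b, phi2):
    """Add two equal-frequency sinusoids by adding their phasors
    (rectangular coordinates); return the resultant (amplitude, phase)."""
    z = a * cmath.exp(1j * phi1) + b * cmath.exp(1j * phi2)
    return abs(z), cmath.phase(z)

a, b, phi1, phi2 = 2.0, 3.0, 0.4, 1.1       # arbitrary values
c, phi = mix_same_freq(a, phi1, b, phi2)

# The amplitude agrees with the law of cosines from the text.
c_law = math.sqrt(a * a + b * b + 2 * a * b * math.cos(phi2 - phi1))
```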

If the two frequencies are not equal--call them $f$ and $g$--we can still apply the same reasoning, at least qualitatively. At time $t=0$ we still get a parallelogram, but now the two summands are rotating about the origin at different rates, so that the difference between the two phases, initially $\phi_2 - \phi_1$, is itself increasing or decreasing at a rate equal to the difference of the two component frequencies, that is, $g-f$. As a result, exactly $\vert g-f \vert$ times every unit of time, the parallelogram runs through its entire range of shapes and the resultant amplitude runs back and forth between its minimum and maximum possible values, $\vert b-a\vert$ and $a+b$.

If $f$ and $g$ differ by less than about 30 Hz., you can hear these changes in amplitude. This effect is called beating. At greater frequency separations you are likely to hear two separate tones, unless indeed they interact as we'll describe in the next section.


2.2 Periodic and aperiodic tones

So far we have tacitly assumed that our ears can actually hear sinusoids as separate sounds, and that, presented with two or more sinusoids, we would be likely to perceive them as separate sounds. The truth is somewhat stranger: under the right conditions, our ears appear to have evolved to be able to distinguish periodic signals from each other, even if several of them with different periodicities are mixed together. (This is a good adaptation because it allows us to perceive the voices of other humans, which are approximately periodic most of the time, but rarely if ever sinusoidal.)

A signal is called periodic when, for some nonzero time duration $\tau$, we have

\begin{displaymath}
f(t) = f(t+\tau)
\end{displaymath}

for all $t$. We can apply this equation repeatedly to get:

\begin{displaymath}
\ldots = f(t-\tau) = f(t) = f(t+\tau) = f(t+2\tau) = \ldots
\end{displaymath}

In other words, the signal repeats forever. Knowing the value of the function for one period, for example from $t=0$ to $t=\tau$, determines the function for all other values of $t$.

If a function repeats after $\tau$ time units, it also repeats after $2\tau$, $3\tau$, ..., time units. The smallest value of $\tau$ at which the signal repeats is called the signal's period.

A sinusoid whose frequency is $f$ has period $1/f$. But an infinitude of other sinusoids repeat after $1/f$ time units. A sinusoid of frequency $2f$ has period $1/(2f)$, and so repeats twice in a time interval lasting $1/f$. In general a sinusoid whose frequency is any integer multiple of $f$ repeats (perhaps for the $n$th time) after an elapsed time of $1/f$. More generally, any signal obtained by amplifying and mixing sinusoids of frequencies that are all multiples of $f$ will repeat after $1/f$ units of time, and therefore have a period of $1/f$ (if not some smaller submultiple of $1/f$).

Under reasonable conditions ($f$ at least about 30 Hz.; sinusoids at lower multiples of $f$ having enough relative amplitude compared to the whole; no signal frequency other than $f$ having an amplitude greater than the sum of all the others; at least some energy in odd-numbered multiples of $f$; etc.) we would hear such a mixture as a single tone whose pitch corresponds to $f$, which is then called the fundamental frequency of the mixture. The mixture will have the general form:

\begin{displaymath}
x(t) = {a_1} \cos(2 \pi f t + \phi_1)
+ {a_2} \cos(4 \pi f t + \phi_2)
+ {a_3} \cos(6 \pi f t + \phi_3) + \cdots
\end{displaymath}

only stopping, for a digital recording, at the Nyquist frequency, and possibly continuing forever for an analog signal.

Such a sum of harmonically related sinusoids is known as a Fourier series, and although we won't prove it here, it's known that any ``reasonable'' periodic signal (having a certain continuity property in time that any real signal should have) can be expressed as a Fourier series. Its digital recording can as well. This means that, in principle at least, you can synthesize any periodic signal if you can synthesize sinusoids.

The whole mixture is sometimes called a complex periodic tone, and the individual sinusoids that make it up are called harmonics. If all goes well, the perceived pitch of a complex periodic tone is that of its first harmonic, corresponding to the frequency $f$, which you can compute as in section 1.3.
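A minimal synthesis sketch in Python (the harmonic amplitudes and phases here are made up for illustration) confirms that such a mixture repeats with period $1/f$:

```python
import math

def complex_tone(amps, phases, f, t):
    """Partial Fourier sum: the k-th harmonic has frequency k*f."""
    return sum(a * math.cos(2 * math.pi * (k + 1) * f * t + p)
               for k, (a, p) in enumerate(zip(amps, phases)))

f = 220.0                            # fundamental frequency in Hz
amps = [1.0, 0.5, 0.33, 0.25]        # illustrative harmonic amplitudes
phases = [0.0, 0.1, 0.2, 0.3]

# The mixture repeats after 1/f seconds, its period.
t = 0.00123
v0 = complex_tone(amps, phases, f, t)
v1 = complex_tone(amps, phases, f, t + 1.0 / f)
```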

It sometimes happens that a mixture of sinusoids that isn't collectively periodic is nonetheless perceived by the ear as a single entity (a tone). Such a mixture could be written as:

\begin{displaymath}
x(t) = {a_1} \cos(2 \pi {f_1} t + \phi_1)
+ {a_2} \cos(2 \pi {f_2} t + \phi_2)
+ {a_3} \cos(2 \pi {f_3} t + \phi_3) + \cdots
\end{displaymath}

and is called a complex inharmonic tone. The individual sinusoids that make it up are then called partials or components, not harmonics; that term is reserved for the periodic case described earlier.

2.3 Special case: combining two equal-amplitude sinusoids

Suppose two sinusoids have the same amplitude $a$ and frequency $f$, but different initial phases, $\phi_1$ and $\phi_2$. Our formula for the amplitude of the sum (from section 2.1) reduces to:

\begin{displaymath}
{a_\mathrm{sum}} = a \sqrt { 2 + 2 \cos(\phi_2 - \phi_1 ) }
\end{displaymath}

We can apply a standard trigonometric identity to get:

\begin{displaymath}
{a_\mathrm{sum}} = 2 a \cdot \cos({{\phi_2 - \phi_1} \over 2})
\end{displaymath}

This outcome is clear even if we don't remember that sort of identity; we can look at what the previous figure becomes when the two amplitudes are equal:

\includegraphics[bb = 92 254 525 536, scale=0.8]{fig/B03-equalamps.ps}

So the resulting amplitude is twice the original amplitude times the cosine of half the phase difference; we also see that the initial phase of the resulting sinusoid (which would have been complicated to calculate in general) is simply the average of the two initial phases $\phi_1$ and $\phi_2$.

As long as the amplitudes of the two sinusoids are the same, we can use the same picture to find the result of adding (mixing) two sinusoids of different frequencies $f$ and $g$. To reduce clutter we'll leave out the initial phases to get the following formula:

\begin{displaymath}
a \cdot \cos (2 \pi f t ) + a \cdot \cos (2 \pi g t ) =
2a \cdot \cos \left ( 2 \pi {{f-g} \over 2} t \right )
\cos \left ( 2 \pi {{f+g} \over 2} t \right )
\end{displaymath}

This formula will recur often. I call it the Fundamental Law of Electronic Music, although perhaps that's overstating things a bit.
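A numerical spot-check of the identity (a Python sketch with arbitrary values):

```python
import math

def mix(a, f, g, t):
    """Left-hand side: the two sinusoids added directly."""
    return a * math.cos(2 * math.pi * f * t) + a * math.cos(2 * math.pi * g * t)

def product_form(a, f, g, t):
    """Right-hand side: difference- and sum-frequency cosines multiplied."""
    return (2 * a * math.cos(2 * math.pi * ((f - g) / 2) * t)
                  * math.cos(2 * math.pi * ((f + g) / 2) * t))

a, f, g = 1.3, 440.0, 445.0                  # arbitrary values
errors = [abs(mix(a, f, g, t) - product_form(a, f, g, t))
          for t in (0.0, 0.01, 0.1, 0.25, 1.0)]
```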

2.4 Power

Although the nominal (peak) amplitude of a sinusoid is a perfectly good measure of its overall strength, most signals in real life aren't sinusoids, and their peak amplitudes don't necessarily give a realistic measure of their strength. Also, one might wish for a measure of strength that is additive, in the sense that, at least under good conditions, when two signals are added their measured strengths add as well. The nearest thing we have to such a measure is the average power, which we will first motivate from physical considerations, then define, then show that it (at least sometimes) works the way we would wish.

The simplest way to motivate the definition of power is by considering a real-world analog: an electrical signal. The amplitude (a function of time) is in this instance the time-varying voltage, customarily given the variable name $V$. We now suppose the signal is connected to a load of some sort, which has an electrical resistance $R$, measured in ohms. Power is voltage times current. To find the current $I$ we apply Ohm's law to get:

\begin{displaymath}
I = V/R
\end{displaymath}

and finally

\begin{displaymath}
\mathrm{power} = V^2 / R
\end{displaymath}

We conclude that power, like amplitude, is a function of time; it is proportional to the square of amplitude. It is always either zero or positive.

Although we aren't ready to discuss real sounds in the air yet (we will be able to put that off until chapter 5 or perhaps even 6), the same reasoning will apply. The amplitude is the (space-dependent) pressure. One can measure the power flowing through a specified area as follows: the pressure exerts a force on the area; as a result some air flows through the area, and the force times the velocity gives energy per second, which is the physical definition of power. The speed at which the air flows is proportional to the pressure (it's pressure divided by impedance, a concept that generalizes resistance to describe ``reluctance to move'' in whatever medium we're talking about). Power is then amplitude squared divided by impedance.

For digital recordings, we don't have a notion of physical impedance and so we just arbitrarily set it to one, giving

\begin{displaymath}
\mathrm{power}(t) = \left [x(t) \right] ^ 2
\end{displaymath}

where $x(t)$ denotes the amplitude of the recorded signal. (Note that we're abusing notation here; recordings aren't functions of time, so $t$ really stands for the time at which we mean to play the sample, or else the time at which we recorded it. The only true way to describe the variation of a recording is by talking about the memory addresses, or indices, of the sample points.)

So far we've only described instantaneous power, which is a time-varying function. The measure we're interested in is a signal's or recording's average power, which is simply the average, over some suitable period of time or range of samples, of the instantaneous power.

What is the average power of a sinusoid? Well, its square is

\begin{displaymath}
a^2 \cdot \cos ^2 (2 \pi f t )
\end{displaymath}

(we're dropping the initial phase, which won't affect our calculation). Now use my Fundamental Law of Electronic Music, applied with frequencies $2f$ and $g=0$:

\begin{displaymath}
a \cdot \cos (4 \pi f t ) + a = 2 a \cdot \cos ^2 (2 \pi f t )
\end{displaymath}

(We omitted the $\cos(2 \pi g t)$ term because $g=0$ and the cosine of zero is one.) If we multiplied the right hand side by $a/2$ we would get the desired instantaneous power, so we multiply through by $a/2$ and swap sides, giving

\begin{displaymath}
\mathrm{power} (t) = {{a^2} \over 2} \cos (4 \pi f t ) + {{a^2} \over 2}
\end{displaymath}

We want to know the average power. When we average the right-hand side of the equation, the cosine term averages out to zero, and so the average power of the original sinusoid is given by:

\begin{displaymath}
(\mathrm{average~power}) = {{a^2} \over 2}
\end{displaymath}

Here's what it looks like:

\includegraphics[bb = 88 237 541 564, scale=0.8]{fig/B04-avg-power.ps}
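The $a^2/2$ result can also be checked by numerically averaging the squared sinusoid over exactly one period (a Python sketch; parameter values are arbitrary):

```python
import math

a, f = 3.0, 100.0
n = 1000                               # samples over exactly one period 1/f
times = [k / (n * f) for k in range(n)]

# The average of cos^2 over a whole period is exactly 1/2,
# so the average power comes out to a^2 / 2.
avg_power = sum((a * math.cos(2 * math.pi * f * t)) ** 2 for t in times) / n
```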

What happens when we add two sinusoids? Well, case one, they have the same frequency, and their amplitudes are $a$ and $b$. Let $c$ denote the amplitude of the resulting sinusoid (which will also have the same frequency). As we saw above, the three are related by the law of cosines:

\begin{displaymath}
c^2 = a^2 + b^2 + 2 a b \cos( \phi_2 - \phi_1 )
\end{displaymath}

The three sinusoids have average power

\begin{displaymath}
{P_a} = {{a^2} \over 2} ~,~~ \mathrm{etc}
\end{displaymath}

so

\begin{displaymath}
{P_c} = {P_a} + {P_b} + ab \cos(\phi_2 - \phi_1 )
\end{displaymath}

About this we can at least say that, if we don't know what the relative phases of the two are, ``on average" we expect the power to be additive because the cosine term is just as likely to be negative as positive.

Once again, we can deal with sinusoids of differing frequencies $f$ and $g$ by just letting the phase difference $\phi_2 - \phi_1$ precess in time at a frequency $\vert f-g\vert$. In this case the cosine term really does average out to zero no matter what the initial phases were. The power of the sum of the two sinusoids is the sum of the powers of the two summands.

In fact, the cosine term can be considered as the two sinusoids beating. If we want to measure the power accurately we must wait at least a few beats--the closer the two sinusoids are in frequency, the longer it will take our measurement to converge on the correct answer.
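Averaging over a whole number of beats can itself be checked numerically (a Python sketch; the frequencies are chosen so that one second contains an integer number of cycles of each):

```python
import math

a, b = 2.0, 3.0
f, g = 100.0, 125.0   # both complete an integer number of cycles per second
n = 20000             # samples spread over exactly one second
mix_power = sum(
    (a * math.cos(2 * math.pi * f * k / n)
     + b * math.cos(2 * math.pi * g * k / n)) ** 2
    for k in range(n)
) / n

# Powers add when the frequencies differ: the cross (beating) term
# averages out to zero over a whole number of beats.
expected = a * a / 2 + b * b / 2
```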

To calculate the average power of uniform white noise of amplitude $a$ we have to do a quick integral (averaging $x^2$ over values uniformly distributed between $-a$ and $a$); we get

\begin{displaymath}
{P_\mathrm{noise}} = {{a^2} \over 3}
\end{displaymath}
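A quick numerical check (a Python sketch; the noise is generated with the standard library, so the measured value only approximates the expected $E[x^2] = a^2/3$ for values uniform on $[-a, a]$):

```python
import random

random.seed(0)                      # make the run reproducible
a = 2.0
n = 100000
samples = [random.uniform(-a, a) for _ in range(n)]
noise_power = sum(x * x for x in samples) / n

# Averaging x^2 over values uniform on [-a, a] gives a^2 / 3.
expected = a * a / 3
```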

Noise also has the property that it contributes power additively to a signal (as long as you don't add it to itself; see the next paragraph).

It might seem that it is almost always true that adding two signals, with average power $P_a$ and $P_b$ respectively, gives a signal of average power $P_a + P_b$; but beware the following counterexamples: if you add a signal to itself you will double all its values, and so the average power will be multiplied by 4, not 2. If you add a signal to its additive inverse (which has the same power as the original), the power of the sum will be zero. Also, if two sinusoids have the same frequency, the average power of their sum will depend on the phase difference. There is a term for the situation in which you can simply add the average powers of two signals to get the average power of the sum: such signals are said to be uncorrelated.

In general, scaling a signal (that is, multiplying all its values) by a factor of $k$ scales the average power by a factor $k^2$, whereas accumulating $k$ unrelated signals should be expected only to multiply the power by $k$ on average.

2.4.1 Expressing Power In Decibels

In the previous chapter, we developed the notion of decibels for comparing the amplitudes of sinusoids. At that point we had no precise way to describe the amplitudes of signals in general, but now we do: by measuring their average power. If two signals have average power $P_1$ and $P_2$, their level difference in decibels is:

\begin{displaymath}
L = 10 \log_{10} ({{P_1} \over {P_2}})
\end{displaymath}

We can quickly check that this is compatible with our earlier formula in terms of amplitude: two sinusoids of amplitude $a_1$ and $a_2$ have average power ${a_1^2} / 2$ and ${a_2^2} / 2$; the factors of one half cancel in the ratio, and the above formula reduces to:

\begin{displaymath}
L = 10 \log_{10} \left ( ( {{a_1} \over {a_2}})^2 \right )
\end{displaymath}


\begin{displaymath}
= 20 \log_{10} ({{a_1} \over {a_2}})
\end{displaymath}

so this new definition agrees with the earlier one in section 1.3.
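The agreement can be spot-checked in Python (the function names are ours):

```python
import math

def level_db_from_power(p1, p2):
    """Level difference in decibels from average powers."""
    return 10 * math.log10(p1 / p2)

def level_db_from_amplitude(a1, a2):
    """Level difference in decibels from sinusoid peak amplitudes."""
    return 20 * math.log10(a1 / a2)

# For sinusoids the powers are a^2/2; the 1/2 cancels in the ratio,
# so the two formulas give the same level difference.
a1, a2 = 5.0, 2.0
d_power = level_db_from_power(a1 * a1 / 2, a2 * a2 / 2)
d_amp = level_db_from_amplitude(a1, a2)
```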

Exercises and Project

1. Two sinusoids with the same frequency (440 Hz., say), and with peak amplitudes 2 and 3, are added (or mixed, in other words). What are the minimum and maximum possible peak amplitudes of the resulting sinusoid?

2. Two sinusoids with different frequencies, whose average powers are 3 and 4 respectively, are added. What is the average power of the resulting signal?

3. Two sinusoids, of period 4 and 6 milliseconds, respectively, are added. What is the period of the resulting waveform?

4. Two sinusoids are added (once again). One has a frequency of 1 kHz. The resulting signal ``beats'' 5 times per second. What are the possible frequencies of the other sinusoid?

5. A signal (any signal) is amplified, multiplying it by three. By how many decibels is the level raised?

6. What is the pitch, in octaves, of the second harmonic of a complex harmonic tone, relative to the first harmonic?

Project: comb filtering. In this project you will use the phase-dependent effect of combining two sinusoids to build the simplest type of digital filter, called a comb filter.

To start with, make a single sinusoid of frequency 100 Hz (using the sinusoid object in the course library for Pd). You can check the level of its output using the ``meter'' object; it should be about 97 dB.

Now put the sinusoid into a ``vdelay'' (variable delay) object, and connect the delay output as well as the original sinusoid output to the meter. When the delay is zero you should see something 6 decibels higher, about 103.

Now measure and graph the amplitudes you measure, changing the delay in ten steps from 0 to 0.005 seconds. (Hint: to make the graph readable, don't make the vertical axis linear in decibels; instead, perhaps make equal spaces for 0, 94, 97, 100, and 103). But if you really want a nice-looking graph and don't mind 5 extra minutes of effort, convert from decibels to power.

Now do the same thing (on the same graph with a different color or line style) with the sinusoid at 200 Hz. instead of 100 Hz. Do you see a relationship between the two?

Now put six sinusoids at 100, 200, 300, 400, 500, 600 Hz. into a ``switch'' object (that's primarily for convenience; connecting the six to the switch will add them). Connect the switch output to both the delay and directly to the output as before. As you change the delay between 0 and 10 milliseconds, what do you hear? What special thing happens when you choose a 5 millisecond delay?
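If you want to predict what the meter should read, note that adding a sinusoid at frequency $f$ to a copy of itself delayed by $\tau$ scales its amplitude by $2 \vert \cos(\pi f \tau) \vert$ (this follows from the equal-amplitude formula of section 2.3 with phase difference $2 \pi f \tau$). Here is a Python sketch of the expected comb-filter gains (not a Pd patch; the function names are ours):

```python
import math

def comb_gain(f, tau):
    """Amplitude gain at frequency f (Hz) of mixing a signal with a copy
    of itself delayed by tau seconds: 2*|cos(pi*f*tau)|."""
    return 2 * abs(math.cos(math.pi * f * tau))

def comb_gain_db(f, tau, floor_db=-100.0):
    """The same gain in decibels, clipped at floor_db for exact nulls."""
    g = comb_gain(f, tau)
    return 20 * math.log10(g) if g > 0 else floor_db

# Zero delay doubles the signal: +6 dB, matching the 97 -> 103 dB jump.
# A 5 ms delay cancels 100, 300, 500 Hz and doubles 200, 400, 600 Hz.
boost = comb_gain_db(100.0, 0.0)
cancelled = comb_gain(100.0, 0.005)
doubled = comb_gain(200.0, 0.005)
```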


msp 2014-11-24