
Voice tags

Suppose now that we're using a voice bank to play notes, as in the example above, but suppose the notes $a$, $b$, $c$, and $d$ all have the same pitch, and furthermore that all their other parameters are identical. How can we design a control stream so that, when any one note is turned off, we know which one it is?

This question doesn't come up if the control source is a clavier keyboard, because it's impossible to play more than one simultaneous note on a single key. But it could easily arise algorithmically, or simply as a result of merging two keyboard streams together. Moreover, turning notes off is only the simplest example of a more general problem: how, once we have set a task going in a voice bank, we can get back to the same voice to guide its evolution as a function of real-time inputs or any other unpredictable factor.

To deal with situations like this we can add one or more tags to the message starting a note (or, in general, a task). A tag is any collection of data by which we can later identify the task, and hence search for the voice that is allocated to it.

Taking again the example of Figure 4.10, here is one way we might write those four tasks as a control stream:

start-time end-time   pitch   ...

    1          3        60    ...
    2          8        62
    4          6        64
    5          8        65

In this representation we have no need of tags because each message (each line of text) contains all the information we need in order to specify the entire task. (Here we have assumed that the tasks $a$, ..., $d$ are in fact musical notes with pitches 60, 62, 64, and 65.) In effect we're representing each task as a single event in a control stream (Section 3.3).
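As a minimal sketch (in Python, with field layout of our own choosing, not from the text), the event-per-task representation can be held as one record per note, each record self-contained:

```python
# Event-per-task representation: each note is a single record holding
# all the information needed to specify the entire task.
notes = [
    # (start_time, end_time, pitch)
    (1, 3, 60),
    (2, 8, 62),
    (4, 6, 64),
    (5, 8, 65),
]

# Because each record is self-contained, per-note quantities such as
# duration fall out immediately:
durations = [end - start for (start, end, pitch) in notes]
```

No tags are needed here: nothing ever has to refer back to a note after its record has been read.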

On the other hand, if we suppose now that we do not know in advance the length of each note, a better representation would be this one:

time    tag  action   parameters

  1      a   start     60 ...
  2      b   start     62 ...
  3      a   end
  4      c   start     64 ...
  5      d   start     65 ...
  6      c   end
  8      b   end
  8      d   end

Here each note has been split into two separate events to start and end it. The labels $a$, ..., $d$ are used as tags; we know which start goes with which end since their tags are the same. Note that the tag is not necessarily related at all to the voice that will be used to play each note.
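The bookkeeping this implies can be sketched as follows (an illustrative Python fragment; the names and the first-free-voice allocation policy are our own assumptions, not prescribed by the text). A "start" event allocates a free voice and records the tag-to-voice mapping; the matching "end" event finds the voice by its tag and frees it:

```python
NUM_VOICES = 4

free_voices = list(range(NUM_VOICES))   # voices not currently in use
voice_of_tag = {}                       # tag -> voice number

def start(tag, pitch):
    voice = free_voices.pop(0)          # simplest policy: take the first free voice
    voice_of_tag[tag] = voice
    print(f"voice {voice}: note on, pitch {pitch}")

def end(tag):
    voice = voice_of_tag.pop(tag)       # locate the voice via the tag
    free_voices.append(voice)
    print(f"voice {voice}: note off")

# Replay the tagged control stream from the table above:
start("a", 60)   # time 1
start("b", 62)   # time 2
end("a")         # time 3
start("c", 64)   # time 4
start("d", 65)   # time 5
end("c")         # time 6
end("b")         # time 8
end("d")         # time 8
```

Note that the tag, not the voice number, is what the control stream carries; which physical voice ends up playing a note is an internal matter of the allocator.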

The MIDI standard does not supply tags; in normal use, the pitch of a note serves also as its tag (so tags are constantly being reused). If two notes having the same pitch must be addressed separately (to slide their pitches in different directions, for example), the MIDI channel may be used (in addition to the pitch) as a tag.
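This MIDI-style convention can be sketched in Python (an illustration of ours, not MIDI API code): the implicit tag is the pitch, widened to the pair (channel, pitch) when two same-pitch notes must be told apart:

```python
# MIDI-style implicit tagging: with no explicit tag, the note's pitch
# doubles as its tag; folding the channel into the key lets two
# simultaneous notes of the same pitch be addressed separately.
active = {}   # (channel, pitch) -> voice number

def note_on(channel, pitch, voice):
    active[(channel, pitch)] = voice

def note_off(channel, pitch):
    return active.pop((channel, pitch))   # find the voice by its implicit tag

# Two simultaneous notes of pitch 60, kept separate by channel:
note_on(1, 60, voice=0)
note_on(2, 60, voice=1)
assert note_off(2, 60) == 1
assert note_off(1, 60) == 0
```

With a single channel, a second note-on at pitch 60 would simply overwrite the first entry, which is why the tag reuse mentioned above is a real limitation.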

In real-time music software it is often necessary to pass back and forth between the event-per-task representation and the tagged one above, since the first representation is better suited to storage and graphical editing, while the second is often better suited to real-time operations.
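One direction of that conversion can be sketched in Python (names and tag scheme are our own assumptions): each note record splits into a "start" and an "end" event tied together by a generated tag, and the two resulting streams are merged in time order:

```python
# Convert the event-per-task representation into the tagged,
# two-events-per-note representation.
notes = [
    # (start_time, end_time, pitch)
    (1, 3, 60),
    (2, 8, 62),
    (4, 6, 64),
    (5, 8, 65),
]

events = []
for tag, (start, end, pitch) in zip("abcd", notes):
    events.append((start, tag, "start", pitch))
    events.append((end, tag, "end", None))
events.sort(key=lambda e: e[0])   # merge into one time-ordered stream

for time, tag, action, pitch in events:
    print(time, tag, action, pitch if pitch is not None else "")
```

The printed stream reproduces the tagged table above. Going the other way (pairing each "end" with its "start" by tag) rebuilds the storage-friendly one-record-per-note form.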

Miller Puckette 2006-12-30