

Voice tags

Suppose now that we're using a voice bank to play notes, as in the example above, but suppose the notes $a$, $b$, $c$, and $d$ all had the same pitch, and furthermore that all their other parameters were identical. How would we design a control stream so that, when any one note was turned off, we would know which one it was?

This question doesn't come up if the control source is a clavier keyboard, because it's impossible to play more than one simultaneous note on a single key. But it could easily arise algorithmically, or simply as a result of merging two keyboard streams together. Moreover, turning notes off is only the simplest example of a more general problem: once we have set a task off in a voice bank, how do we get back to the correct voice to guide its evolution as a function of real-time inputs or any other unpredictable factor?

To deal with situations like this we can add one or more tags to the message starting a process (such as a note). A tag is any collection of data with which we can later identify the process, and hence find the voice that has been allocated to it.

Taking again the example of Figure 4.10, here is one way we might write those four tasks as a control stream:

start-time  duration   pitch   ...

    1          2        60    ...
    2          6        62
    4          2        64
    5          3        65

In this representation we have no need of tags because each message (each line of text) contains all the information we need in order to specify the entire task. (Here we have assumed that the tasks $a$, ..., $d$ are in fact musical notes with pitches 60, 62, 64, and 65.) In effect we're representing each task as a single event (section 3.3) in a control stream.
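As a rough sketch of this event-per-task representation (written here in Python; the record type and field names are only for illustration, not anything defined in the text), each note can be held as one self-contained record:

from dataclasses import dataclass

@dataclass
class NoteEvent:
    start_time: float   # when the note begins
    duration: float     # length of the note, known in advance
    pitch: int          # pitch as a MIDI-style note number
    # ...other parameters (amplitude, timbre, etc.) would follow here

# the four tasks a, ..., d, one self-contained event apiece
score = [
    NoteEvent(1, 2, 60),
    NoteEvent(2, 6, 62),
    NoteEvent(4, 2, 64),
    NoteEvent(5, 3, 65),
]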

On the other hand, if we suppose now that we do not know in advance the length of each note, a better representation would be this one:

time    tag  action   parameters

  1      a   start     60 ...
  2      b   start     62 ...
  3      a   end
  4      c   start     64 ...
  5      d   start     65 ...
  6      c   end
  8      b   end
  8      d   end

Here each note has been split into two separate events, one to start it and one to end it. The labels $a$, ..., $d$ serve as tags; we know which end goes with which start because they share the same tag. Note that the tag is not necessarily related in any way to the voice that will be used to play each note.
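To see how a tag can lead us back to the correct voice at run time, here is a hypothetical allocator (a sketch in Python, not code from this book) that keeps a dictionary from tag to allocated voice number, so that an "end" message can find the right voice even when several identical notes are sounding:

class VoiceBank:
    """Hypothetical voice bank: maps tags to allocated voice numbers."""

    def __init__(self, nvoices):
        self.free = list(range(nvoices))   # voices not currently in use
        self.by_tag = {}                   # tag -> voice number

    def start(self, tag, pitch):
        voice = self.free.pop(0)           # take the first free voice
        self.by_tag[tag] = voice           # remember which voice got this tag
        print("voice", voice, ": start pitch", pitch)

    def end(self, tag):
        voice = self.by_tag.pop(tag)       # look the voice up by its tag
        self.free.append(voice)            # return it to the free list
        print("voice", voice, ": end")

bank = VoiceBank(4)
bank.start("a", 60)   # time 1
bank.start("b", 62)   # time 2
bank.end("a")         # time 3
bank.start("c", 64)   # time 4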

The MIDI standard does not supply tags; in normal use, the pitch of a note serves also as its tag (so tags are constantly being re-used). If two notes having the same pitch must be addressed separately (for example, to slide their pitches in different ways), the MIDI channel may be used (in addition to the note number) as a tag.
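A minimal sketch of this convention (the function names and the simple counter below are assumptions made for the example, not part of the MIDI specification) pairs note-offs with note-ons by using the channel and pitch together as an implicit tag:

import itertools

next_voice = itertools.count()   # stand-in for a real voice allocator
active = {}                      # (channel, pitch) -> voice number

def note_on(channel, pitch, velocity):
    voice = next(next_voice)
    active[(channel, pitch)] = voice   # the pitch (plus channel) acts as the tag
    return voice

def note_off(channel, pitch):
    # if two overlapping notes shared the same channel and pitch, the earlier
    # entry was overwritten -- exactly the ambiguity described above
    return active.pop((channel, pitch), None)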

In real-time music software it is often necessary to pass back and forth between the event-per-task representation and the tagged representation above, since the first is better suited to storage and graphical display, while the second is better suited to real-time operations.
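Going from the first representation to the second is a mechanical transformation. Here is a small sketch of it (the function name and tuple layout are chosen only for illustration); it reproduces the tagged stream shown above, with integer tags in place of $a$, ..., $d$:

def to_tagged_stream(notes):
    """Turn (start_time, duration, pitch) records into a time-ordered
    stream of tagged 'start' and 'end' messages."""
    stream = []
    for tag, (start, duration, pitch) in enumerate(notes):
        stream.append((start, tag, "start", pitch))
        stream.append((start + duration, tag, "end", None))
    stream.sort(key=lambda event: event[0])   # interleave starts and ends by time
    return stream

# the four notes of the example: (start time, duration, pitch)
notes = [(1, 2, 60), (2, 6, 62), (4, 2, 64), (5, 3, 65)]
for event in to_tagged_stream(notes):
    print(event)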

