[linux-audio-dev] XAP: Voice Controls and connections


Subject: [linux-audio-dev] XAP: Voice Controls and connections
From: David Olofson (david_AT_olofson.net)
Date: Wed Dec 18 2002 - 02:20:29 EET


I was just thinking about the details of Voice vs Channel Controls.
The events are identical in format and semantics, except that Voice
Controls must have a Virtual Voice ID argument.
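
Just to make that concrete, here's roughly what I have in mind (the
names are made up on the spot; nothing here is settled XAP API):

    /* Hypothetical sketch - none of these names are settled API. */
    typedef struct XAP_event
    {
        unsigned  timestamp;  /* sample frame within the block */
        unsigned  control;    /* control index on the target port */
        unsigned  vvid;       /* Virtual Voice ID; only meaningful
                                 for Voice Controls */
        float     value;      /* new control value */
    } XAP_event;

That is, the event struct is the same either way; Channel Control
receivers just never look at the 'vvid' field.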

As a result, if you have only Channel Controls, those could be
exported as Voice Controls, provided you say you have only one
Voice. (*)

Seen from the sender side, we can also conclude that you can
control a single voice synth with Channel Control events, *provided*
the synth really ignores the VVID. (Which we cannot just assume, of
course.)

Or, we could require that Channel Control events carry VVIDs as well!

We wouldn't really have to waste VVIDs on this, because when
connecting Channel Control outputs to Channel Control inputs, no
VVIDs are needed. (They will be ignored, since Channel Controls can't
have that dimension of addressing.)

When connecting a Channel Control output to a Voice Control input, a
fixed VVID will be allocated, giving you a single voice to do what
you want with - as if you were controlling a monophonic synth!
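
Something like this in the host, perhaps (hypothetical code, just to
illustrate the connection logic; all names are made up):

    /* Hypothetical host-side sketch. */
    typedef enum { CHANNEL_CONTROL, VOICE_CONTROL } control_kind;

    typedef struct control_connection
    {
        control_kind  from;
        control_kind  to;
        int           fixed_vvid;   /* -1 == not used */
    } control_connection;

    void setup_connection(control_connection *c)
    {
        if(c->from == CHANNEL_CONTROL && c->to == VOICE_CONTROL)
        {
            /* Channel -> Voice: grab one fixed VVID, so the
               sender effectively drives a single voice. */
            c->fixed_vvid = 0;  /* really allocated from the synth */
        }
        else
        {
            /* Channel -> Channel or Voice -> Channel: the receiver
               has no voice dimension, so VVIDs are simply ignored. */
            c->fixed_vvid = -1;
        }
    }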

If you're into polyphony, you'll want to be able to grab a bunch of
VVIDs, so you can control multiple Voices. Obviously, this is what
most sequencers will do. But then, what happens if you connect a
sequencer to a monophonic plugin that has only Channel Controls?

Well, sending Voice Control events with a single (possibly fake) VVID
to Channel Controls works just fine... :-)

We just have to tell the sequencer that there really is only one
Voice. It'll ask for a VVID for that, and the host will go "aha;
Voice -> Channel" and hand the sequencer a fake VVID. (0, that is.)

I'll have to think some more about this, but I think it would be
possible to handle channel->voice and voice->channel Control
connections pretty much transparently. I think this is quite
important, especially for monophonic synths (controlling them from a
standard sequencer), and for building monophonic control processing
nets that run polyphonic synths. (Each Voice will act as a mono
synth.)

(*) Which is probably something we should support, BTW. Some synths
    may have a fixed number of voices that are not independent, and
    it might be useful to be able to express that in some way.
    Documentation or naming might be sufficient, though, as it's
    really a matter of how you control the synth. A dual voice
    "interference" synth would probably best be controlled from two
    sequencer tracks, set to use only one Virtual Voice each - i.e.
    monophonic tracks, like on a traditional tracker.

//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! --------------------------------.
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---------------------------> http://olofson.net/audiality -'
   --- http://olofson.net --- http://www.reologica.se ---

