Re: [linux-audio-dev] more on XAP Virtual Voice ID system


Subject: Re: [linux-audio-dev] more on XAP Virtual Voice ID system
From: David Olofson (david_AT_olofson.net)
Date: Tue Jan 07 2003 - 00:17:07 EET


On Monday 06 January 2003 22.03, Steve Harris wrote:
> On Mon, Jan 06, 2003 at 12:04:23 -0800, robbins jacob wrote:
> > Alternately, we could require that event ordering has 2
> > criterion: -first- order on timestamps
> > -second- put voice-on ahead of all other event types.
>
> This is what I was assuming was meant originally.
>
> However, you don't have to think of them as initialisation
> parameters; voices can have instrument-wide defaults (e.g. a pitch
> of 0.0 and an amplitude of 0.0), and the parameter changes that
> arrive at the same timestamp can be thought of as immediate
> parameter changes, which they are.

Exactly.

These "instantiation parameters" are in fact just control events, and
they relate to "whatever voice is assigned to the provided VVID".

The issue here is this: Where does control data go when there is no
voice assigned to the VVID?

It's tempting to just say that whenever you get events, check the
VVID, and allocate a voice for it if it doesn't have one.
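That "allocate on first event" idea could be sketched like this. (A hypothetical illustration only; the struct and function names are made up and not part of any XAP draft.)

```c
#include <stddef.h>

#define MAX_VOICES 32

typedef struct Voice {
	int   active;
	int   vvid;       /* VVID this voice is bound to, or -1 */
	float pitch;
	float amplitude;
} Voice;

static Voice voices[MAX_VOICES];

/* Find the voice bound to 'vvid'; if there is none, grab a free one
 * on the spot - the "allocate whenever an event arrives" policy. */
static Voice *voice_for_vvid(int vvid)
{
	int i;
	for (i = 0; i < MAX_VOICES; ++i)
		if (voices[i].active && voices[i].vvid == vvid)
			return &voices[i];
	for (i = 0; i < MAX_VOICES; ++i)
		if (!voices[i].active) {
			voices[i].active = 1;
			voices[i].vvid = vvid;
			voices[i].pitch = 0.0f;     /* instrument-wide defaults */
			voices[i].amplitude = 0.0f;
			return &voices[i];
		}
	return NULL;  /* out of physical voices */
}
```

The catch, as noted below, is that this allocator commits a physical voice before it has seen a single "parameter".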

However, it's hard to implement voice allocation in a useful way if
you don't know which control will trigger the allocation. (For
example, you can't decide to discard very quiet notes, because the
voice has already been grabbed before you see the Velocity.)

As to event ordering, the logical way is to send "parameters"
*first*, and then send the event that triggers the note - be it a
specific note_on event (bad idea IMHO) or just a change of a control,
such as Velocity.
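On the sender side, that ordering could look like this. (A sketch with invented names - XapEvent, CTL_PITCH, CTL_VELOCITY and start_note() are illustrative, not a real API.)

```c
/* Hypothetical event structure; field names are illustrative only. */
typedef struct XapEvent {
	unsigned timestamp;  /* sample frames into the current block */
	int      type;       /* e.g. "control change" */
	int      vvid;       /* Virtual Voice ID the event targets */
	int      control;
	float    value;
} XapEvent;

enum { CTL_PITCH, CTL_VELOCITY };

/* Start a note the "logical" way: send the parameters first, then
 * trigger the note by changing Velocity - all at the same timestamp. */
static int start_note(XapEvent *q, unsigned t, int vvid,
                      float pitch, float velocity)
{
	int n = 0;
	q[n++] = (XapEvent){ t, 0, vvid, CTL_PITCH,    pitch };
	q[n++] = (XapEvent){ t, 0, vvid, CTL_VELOCITY, velocity };  /* trigger */
	return n;
}
```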

Synth implementations will want it the other way around, though. The
problem with that is that it breaks a fundamental rule of the event
system: "Events are processed in order."

What I'm saying is that if you send the "trigger" event first,
followed by the "parameters", you require synths to process a number
of control events *before* actually performing the trigger action.
That simply does not mix with the way events are supposed to be
handled.

What's even worse: the logical alternative (sending parameters first)
results in allocation problems. Do we have to have actual Virtual
Voice objects in synths? If so, how do we allocate them? (Or rather;
how many?) If VVIDs are global and allocated by the host, that just
won't work. Synths would at least need to be informed about the
number of VVIDs allocated for a Channel, when a connection is made.
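That notification might amount to no more than letting the synth size a VVID-to-voice table at connection time. (A hypothetical sketch; VVIDTable and vvid_table_init() are made-up names.)

```c
#include <stdlib.h>

/* Hypothetical connection-time setup: the host tells the synth how
 * many VVIDs a Channel may use, so the synth can size its
 * VVID -> physical voice mapping up front. */
typedef struct VVIDTable {
	int  nvvids;
	int *voice_of;   /* voice_of[vvid] = physical voice index, or -1 */
} VVIDTable;

static int vvid_table_init(VVIDTable *t, int nvvids)
{
	int i;
	t->voice_of = malloc(nvvids * sizeof *t->voice_of);
	if (!t->voice_of)
		return -1;
	t->nvvids = nvvids;
	for (i = 0; i < nvvids; ++i)
		t->voice_of[i] = -1;  /* no voice assigned yet */
	return 0;
}
```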

Wait. There's another alternative.

If you really want your voice allocator to be able to turn down
requests based on "parameters", how about using a single temporary
"fake voice" whenever you get a "new" VVID? Grab it, fill in the
defaults and have it receive whatever controls you're getting. When
you get the "trigger" event, check the "fake voice", and if the voice
allocator doesn't like it, just hook the VVID up to your "null" voice.
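A minimal sketch of that "fake voice" trick, assuming Velocity is the trigger control and a made-up rejection policy (near-silent notes are turned down); all names here are invented for illustration:

```c
typedef struct FVoice {
	int   vvid;
	float pitch;
	float velocity;
} FVoice;

static FVoice fake = { -1, 0.0f, 0.0f };  /* scratch voice for new VVIDs */
static FVoice null_voice;                 /* sink for rejected VVIDs */

/* Buffer controls for a VVID that has no physical voice yet. */
static void fake_control(int vvid, int is_pitch, float value)
{
	if (fake.vvid != vvid) {    /* "new" VVID: reset to defaults */
		fake.vvid = vvid;
		fake.pitch = 0.0f;
		fake.velocity = 0.0f;
	}
	if (is_pitch)
		fake.pitch = value;
	else
		fake.velocity = value;
}

/* On the trigger event, let the allocator inspect the fake voice.
 * Returns the fake voice (caller copies it into a real voice) or the
 * null voice if the allocator turns the request down. */
static FVoice *fake_trigger(void)
{
	if (fake.velocity < 0.01f)  /* made-up policy: reject silent notes */
		return &null_voice;
	return &fake;
}
```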

Obviously, this requires that "parameters" are sent with the same
timestamp as the corresponding "trigger" event - but that makes sense
if the "parameters" are really to be regarded as such.

If they aren't, we're talking about a different problem: Keeping
Voice Control history for Voices that aren't allocated or playing.

Anyway, this obviously suggests that it should be strictly specified
when you may and may not talk to a Voice and expect it to respond. I
mean, it obviously seems stupid to have a physical voice running just
to track Voice Controls while nothing is being played - but OTOH,
what do you do if you have 128 VVIDs in use on a 32 voice synth? What
do you track, and what do you send to the NULL voice? When do you
actually steal a physical voice?

Note that all this is very closely related to implementation
specifics of synth voice allocators. I think the issue is more one of
coming up with a sensible API for this, than to dictate a policy for
voice management. I doubt that a single policy can work well for all
kinds of synths.

//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! --------------------------------.
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---------------------------> http://olofson.net/audiality -'
   --- http://olofson.net --- http://www.reologica.se ---



This archive was generated by hypermail 2b28 : Tue Jan 07 2003 - 00:19:23 EET