
Subject: Re: [linux-audio-dev] XAP status : incomplete draft
From: David Olofson (david_AT_olofson.net)
Date: Mon Dec 16 2002 - 02:15:54 EET


On Sunday 15 December 2002 22.46, Sebastien Metrot wrote:
> Sorry to step into this very interesting discussion, but I have one
> question regarding this floating point note/pitch thing.

At least, *I* don't mind. :-)

> Floating
> point is great for DSP code as it makes our lives quite easy. On
> the other hand, floating point is not a very stable way of storing
> fixed values such as notes.

Who said they're "fixed"?

In Audiality, they're 16:16 fixed point, 12.0/octave - which I might
change to 1.0/octave, for various reasons, both technical and
political.

They could have been floats, but Audiality was originally designed
and implemented to run on low end hardware, where FPU performance
isn't the greatest. In the future, probably only the actual
processing will scale all the way down to integer, while all higher
level stuff will use floating point where it makes sense.
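
(For reference: turning such a pitch into Hz is just a division and
one pow() call. A quick sketch - not actual Audiality code, and the
440 Hz reference at pitch 0.0 is purely an assumption for
illustration:)

        #include <math.h>

        typedef int fixp16;             /* 16:16 fixed point */
        #define FIXP16_ONE      (1 << 16)

        /* 1.0/octave fixed point pitch to Hz; 0.0 == 440 Hz assumed */
        double pitch2hz(fixp16 pitch)
        {
                return 440.0 * pow(2.0, (double)pitch / FIXP16_ONE);
        }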

> Many applications rely a lot on the
> MIDI note number; for example, mapping a drum kit on each note of a
> keyboard is something that most of us do every day. I think most of
> you know that there are some numbers that have no representation in
> floating point format, and you must also know that comparing
> floating point numbers (as in a == b) may fail even for some really
> stupid and simple cases. I wonder if programming a beat box (for
> example) in such an environment could be made really
> overcomplicated and could create some very interesting bugs.

How about this?

        int note = (int)(note_pitch * 12.0 + 0.5); /* nearest semitone */

Or for comparing:

        if(fabs(pitch1 - pitch2) < 0.5/12.0)  /* within half a semitone */
                same_note();

Not *that* complicated, eh? :-)

Now, I would say you should *try* to also take continuous pitch into
account, but indeed, that's pretty hard if you're applying note based
theory. So, just make sure there is a pitch bend control as well, and
then:

        clean_note_pitch = note_pitch - pitch_bend;

first. Obviously, I'm assuming that most people will not want to mess
with pitch bend as a separate control, so I'm letting them just
ignore it by having pitch bend applied to note_pitch by default.

(The pitch bend would be an optional output as well, of course,
since it's actually irrelevant to continuous pitch controllers, like
guitar synths and generic audio pitch trackers.)

> I don't
> think using 1/octave is a good idea. I also fail to understand why
> you totally refuse to build on the existing foundations of MIDI,
> which all musicians understand very well and which is well
> established in the community.

What are you referring to exactly? (I hope it's just the note
numbers... You probably know what I think about the rest, and what I
think about APIs that (ab)use it like VST.)

> I'd prefer an integer giving the original MIDI note and
> a floating point value giving the deviation from this note
> (1/octave or 1/note, I don't really care about this).

I don't see the point.

With float for continuous note pitch and float for pitch bend, you
can extract what you need quite easily, and you don't have to do
anything if you just want continuous pitch. Finally, if you just want
1.0/octave, you don't have to do anything either.

That sounds pretty nice, clean and simple to me. :-)

> Also, considering the time information passing between the plugin
> and the host, I agree with David that it is a must have, as well as
> transport information. My experience with VST is that they got all
> the automation stuff very wrong, as they give the information with
> no time reference, and many hosts interpret this in very strange
> ways.

Yes, there's no argument about whether we should use timestamped
events or not, I think. My experience so far is that it's both easy
to use and efficient, compared to any alternatives. (Audio rate
control streams are next, but not exactly a close runner-up when it
comes to efficiency. VST style buffer splitting is probably as far
down the list as you can get...)
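
(To make "timestamped events" concrete, here's a sketch of what one
might look like. The names are hypothetical, not from any actual XAP
draft:)

        typedef struct XAP_event
        {
                struct XAP_event *next; /* events form a linked list */
                unsigned timestamp;     /* frames from start of block */
                unsigned cookie;        /* tells event targets apart */
                int type;               /* control change, ramp, ... */
                float value;
        } XAP_event;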

> On the other hand, they managed to give some timing
> information when they introduced VST2 and the capacity to
> receive/send MIDI events. I think there is a need to unify all
> these events: automation, notes, time changes, tempo changes, etc.
> What a plugin/instrument writer really needs is to be asked for a
> block of audio. In order to process the said block of data, the
> host passes him a struct containing all the information he needs:
> all the events that happened during the block.

Exactly.
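
Something like this, say - sticking with the hypothetical XAP_event
above. (MyPlugin, render() and handle_event() are made-up plugin
internals, and events are assumed to be sorted by timestamp:)

        void my_plugin_process(MyPlugin *p, float **bufs,
                        unsigned frames, XAP_event *ev)
        {
                unsigned frame = 0;
                while(ev)
                {
                        /* Render audio up to the event... */
                        render(p, bufs, frame, ev->timestamp);
                        /* ...then apply it and move on. */
                        handle_event(p, ev);
                        frame = ev->timestamp;
                        ev = ev->next;
                }
                render(p, bufs, frame, frames);
        }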

> So we can have
> multiple event types (note on/off, tempo change, transport event,
> automation, etc...) in a linked list (or even a list by type of
> event if you prefer that).

With the queue + cookie system, you can have the list(s) arranged any
way you like. When the host asks where to connect something, you
just hand it the queue you want the events in, and a cookie (index,
bitfield, handle, whatever). The cookie is just so you can tell
events apart, even if you're using only one queue.
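
(A sketch, with made-up names - when the host asks where to deliver
events for a control, the plugin might answer like this:)

        typedef struct
        {
                XAP_event *first, *last;
        } XAP_queue;

        void my_connect_control(MyPlugin *p, int control,
                        XAP_queue **queue, unsigned *cookie)
        {
                *queue = &p->queue;  /* one queue for everything... */
                *cookie = control;   /* ...cookies tell events apart */
        }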

> All the timing & transport events can be
> processed only once for all the plugins, and every event can be
> given in a sample accurate fashion (even automation, unlike VST).

Yes. Virtually all communication with a plugin is through Controls -
and those are both sample accurate and (where applicable) capable of
optimized ramping. (I think we can have cubic splines as well. Some
plugins might like it, and others can just ignore the extra parameter
and do linear.)
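
(A linear ramp event only has to carry a target value and a
duration, and the per sample work is a single add. A sketch, with
hypothetical field names:)

        /* On a RAMP event: go to ev->value over ev->duration frames
         * (assuming ev->duration is nonzero).
         */
        p->ctl_delta = (ev->value - p->ctl_value) / ev->duration;

        /* Then, once per sample: */
        p->ctl_value += p->ctl_delta;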

> What if a plugin doesn't care about this info? Just ignore it!

Or don't even ask for it, and you won't get the events. :-)

> What to do when the transport is stopped? Reference the events
> from the time when the transport was stopped.

Exactly. You'll have an event telling you *exactly* when the
transport stopped, so unless you're cheating and doing only block
accurate timing, it's trivial to get right. (If you *are* cheating,
you'll just have to look at the timestamp and compensate for the fact
that you're handling the event at the wrong time.)
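
(For a block accurate plugin, that compensation could look something
like this - made-up fields, with timestamps counting frames from the
start of the block:)

        case XAP_EV_STOP:
                /* We only handle events at block boundaries, but the
                 * transport actually stops ev->timestamp frames into
                 * the block - so advance musical time by that much
                 * before stopping.
                 */
                p->song_pos += ev->timestamp * p->ticks_per_frame;
                p->rolling = 0;
                break;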

Most plugins won't care at all, so they should just be allowed to
ignore musical time, transport events as well as musical timestamps.

> Of course, I can be plain wrong; do not hesitate to put me on the
> right track if this is the case...

Well, if you're plain wrong, then I can't understand how I even got
Audiality to play MIDI files...

Speaking of which, I'll be implementing the Virtual Voice ID thing
(the version with a global table of void * for synths to use) in
Audiality, and then I intend to rip that sorry excuse for MIDI->event
translation out. I'm going to make it *fully* sample accurate
throughout, and then abuse it the best I can. Expect basic beat sync
effects and stuff soon.
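
(The "global table of void *" version is about as simple as it
sounds - a sketch, with made-up names:)

        #define MAX_VVIDS       1024

        /* One slot per Virtual Voice ID; synths store whatever they
         * need here - usually a pointer to the actual voice playing
         * that VVID. NULL means "no voice attached".
         */
        void *vvid_table[MAX_VVIDS];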

*Then* we can start discussing who knows anything about events,
sequencers and sample accurate processing. I find it pointless to
argue about things I can't seem to prove.

//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! --------------------------------.
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---------------------------> http://olofson.net/audiality -'
   --- http://olofson.net --- http://www.reologica.se ---

