Re: [linux-audio-dev] Re: Plug-in API progress?

Subject: Re: [linux-audio-dev] Re: Plug-in API progress?
From: David Olofson (audiality_AT_swipnet.se)
Date: Thu Sep 23 1999 - 18:46:21 EDT


On Thu, 23 Sep 1999, Paul Barton-Davis wrote:
[...]
> Using this terminology, what I was suggesting was that the ui-thread
> {c,w,sh}ould run on the same machine as the engine, but that because
> of X, this didn't prevent the display from being on some other
> machine.

Yes, I realised that...

> BTW, I don't think we're ready to consider clusters yet. The latency,
> by which I mean the delay in sending a message onto a previously empty
> wire, is too great for audio synchronization.

...but clustering was just what I had in mind, which would mean that the
engine, as you defined it, would be distributed over multiple machines.

And I'm not suggesting that the whole cluster should be involved in _low
latency_ processing - 50-100 ms latency can be perfectly fine in many
situations, and is certainly a lot nicer than off-line processing from the end
user's POV.

Also, I'm not thinking TCP/IP here. A (Beowulf class) cluster is a pretty
specialized form of network anyway, and I'd use drivers ported to RTLinux,
emulating the shared memory style IPC used on "real" supercomputers. That's a
very big difference, and the _hardware_ isn't really the problem here. People
are successfully using standard ethernet cards for real time streaming already.

> >requires the GUI code to actually work on the same system as the DSP code -
> >while you most probably don't want to load Qt, GTK+ or whatever on every node
> >of a Beowulf. And what if you decide to build a cluster of Alpha or PPC boxes?
>
> GTK is ported to these platforms. But that's beside the point. The UI
> code *has* to run on the same host if you want low-latency interaction
> between the interface and the engine. If you don't care about that,
> then you have to devise a network IPC protocol to relay changes in the
> UI to the master. This seems silly.

Which would cause the most network load and latency: GUI<->X communication, or
GUI<->engine communication?

But yes, running the GUI over normal TCP/IP or something like that is likely to
cause latency problems no matter where the split is done. OTOH, I've played
XGalaga and other stuff over 10 MBit ethernet (with a few Winhogs boxes on the
same net), and it actually ran with the same frame rate as when run on the
local X server... What about dedicated NICs and Mingo's patches on both boxes
for starters?

[...]
> If the variable can change more rapidly than that, then certainly
> there is a potential problem with:
>
> res->a = src->a + src->b
>
> if the UI can twiddle with src->a and src->b directly. But the
> twiddling is still atomic, and besides, there is no computer-based UI
> that will allow you to alter two things simultaneously. So in reality,
> a or b always change independently.
>
> the only case i can imagine where this is not true is when a single
> on-screen control element actually affects two internal variables, but
> that's just bad design of the plugin, IMHO.

Well, I'm not really thinking about UI only here... What about automation? And
what about plug-ins that accept weird data like strings and curves? (Video
folks will like that...) It's still possible to code most "transactions" just
by accessing the data in a safe order, but it quickly starts to get messy.
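
Just to illustrate what I mean (a hypothetical sketch only - none of these
names or payload kinds are decided), an event carrying that kind of data
could look something like:

typedef enum {
        EV_SET_FLOAT,   /* ordinary control change */
        EV_SET_STRING,  /* e.g. a file name or a text overlay */
        EV_SET_CURVE    /* ramp to a target value over some duration */
} event_type_t;

typedef struct event {
        unsigned        time;   /* when it takes effect, in sample frames */
        event_type_t    type;
        int             port;   /* which control/input it applies to */
        union {
                float           f;
                const char      *string;
                struct {
                        float           target;
                        unsigned        duration;
                } curve;
        } data;
        struct event    *next;  /* events are queued as linked lists */
} event_t;

The receiver gets the whole thing at once, so a string or a curve arrives as
one atomic unit instead of a series of separate writes.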

> >And, what about timing? How do you handle sample accuracy without going to
> >single sample buffers? Events handle that in a clean and low cost way.
>
> No they don't. You can't do sample accuracy without reducing the size
> of the engine control cycle (the number of samples it generates
> between checking for a new event in some way) to 1.

It's not really that simple... (Fortunately!) Sample accuracy doesn't mean that
every plug-in has to check for a change every single sample. It just means
that you *can* change *some* properties of *some* plug-ins at any single sample
position. It can be handled something like this:

process(...**inputs, ...**outputs, ...**events, samples)
{
        int current_sample = 0;
        int current_event = 0;
        int count;
        while(current_sample < samples) {
                /* process until the next event should take effect */
                count = events[current_event]->time - current_sample;
                current_sample = events[current_event]->time;
                while(count--) {
                        /* process one sample */
                }
                /* handle events[current_event]... */
                        .
                        .
                        .
                current_event++;
        }
}

(This example code requires a terminator event timed at the end of the buffer to
work properly.)

The inner loop can be optimized, and doesn't have to support full sample
accuracy. (That is, SIMD code can be used if desired.)
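
For instance (just a sketch - run_block(), state, inputs and outputs are
made-up names for whatever the plug-in actually uses), the count/while part
above could collapse into a single block call:

                /* process the whole span up to the next event in one go -
                   the block routine is then free to use SIMD, unrolling
                   and so on */
                count = events[current_event]->time - current_sample;
                run_block(state, inputs, outputs, current_sample, count);
                current_sample = events[current_event]->time;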

I think that's a bit more efficient than running the whole engine with single
sample buffers...

> >kind of programming on the Amiga. Fun for a while, but then you start to
> >realize that a faster CPU is always more flexible. At least with an RTOS...)
>
> and with Ingo on your side, you realize that a faster non-RTOS is always more
> flexible, too :)

Well, I don't know if you can call Linux + Ingo's patch a non-RTOS... It is
certainly a good soft RT OS, and in real life, it appears to be hard RT capable.
Not µs precision, but that's not in the hard RT definition - bounded latency is
enough, and we seem to have just that.

Anyway, my RTOS point still remains - you'll break Mingo-Linux's great RT
performance if you use RT-incapable resources or programming methods - just as
you ruin RTL's timing if you make it use non-RT drivers.

The hard RT rules are unchanged (and my engine is still going to be hard RT
capable), but the user space environment is nicer to work in. :-)

[...]
> this makes no sense to me. lets say i want to change the delay time on
> a delay line. i tweak a virtual on-screen knob to do this. what does
> the ui-thread do to cause the delay time to change ? how does the
> engine know that it has happened, if indeed it has to (in Quasimodo,
> it has no idea - the value at a certain memory location just gets
> altered). this seems fundamental to me, and i simply don't understand
> from what you've written so far how the event system can do this - it
> seems based (both in terms of its name and the API) on a polling
> system. This is not going to work, or at least, it can't do the things
> I want Quasimodo to do. There is no polling in Quasimodo: the ui
> thread has direct access to the memory containing the variables used
> by the DSP thread. There is no communication between them, and the DSP
> code does not "poll"/look-at/check any memory locations to see if
> anything has been changed, ever.

Ok, to put it simply: I build a structured description of what I want the
plug-in to do, in the form of one or more events in a shared memory buffer. As
the engine does its event routing for the whole processing net each turn, the
events will get processed by the receiving plug-ins during the next buffer
period.
The only difference between "polling" once for each engine loop, and just
altering the values directly in the DSP code's variables is that you get the
same real time deadline for all plug-ins with the "polling" system. The
resolution (leaving out event time stamps) can be one buffer in both cases, but
the timing accuracy is independent of the latency with the event system.
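
To make that concrete with the delay time knob example (a sketch only -
event_alloc(), event_queue_post(), the pool and the port number are all
made-up names, reusing the hypothetical event_t layout from above), the
UI/control thread would do something like:

        event_t *ev = event_alloc(pool);    /* lock-free pool in shared memory */
        ev->time = 0;                       /* 0 = "as soon as possible" */
        ev->type = EV_SET_FLOAT;
        ev->port = DELAY_TIME_PORT;         /* hypothetical port number */
        ev->data.f = new_delay_time;
        event_queue_post(delay->queue, ev); /* single-writer FIFO, no locking */

The engine hands each plug-in its queue once per cycle, and the plug-in
applies the events from inside process(), as in the sketch above.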

> >Shouldn't be a problem. Also note that the events can be handled with
> >sample accuracy no matter what buffer size you use. If the MIDI
> >interface plug-in can extract exact timing info from the MIDI data,
> >that can be used to timestamp the events, so that plug-ins may use
> >the information if desired.
>
> i don't know what you're thinking of here. MIDI interfaces don't do
> timestamping. the MTC messages are of way too coarse a resolution for
> timing purposes, though they could be used for certain kinds of sync.

No, MIDI does not do timestamping, but you can still measure the exact time of
arrival of each message. Under Windoze, that would have been a great
improvement, but with a real OS, you can just run the whole engine at a
sufficient buffer rate...

However, if you want to keep the buffer size up for performance reasons, you
could have the MIDI input driver (or a user space process at a higher rate)
time stamp and buffer the events, so that you get a *fixed* latency of, say,
15 ms instead of 15 ms of jitter.
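
Something like this, as a sketch only (midi_in(), event_alloc(),
event_queue_post() and the target/pool names are made up; 661 frames is
roughly 15 ms at 44.1 kHz):

#define FIXED_LATENCY_FRAMES 661        /* ~15 ms at 44.1 kHz */

/* Called by the MIDI input driver (or a higher rate user space thread)
   as each message arrives; arrival_frame is the engine's running
   sample clock at that moment. */
void midi_in(const unsigned char *msg, int len, unsigned arrival_frame)
{
        event_t *ev = event_alloc(pool);
        ev->time = arrival_frame + FIXED_LATENCY_FRAMES;
        /* ...translate the MIDI bytes into the corresponding event... */
        event_queue_post(target->queue, ev);
}

Every message gets delayed by the same amount, so the arrival jitter turns
into a constant, known latency.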

//David

 ·A·U·D·I·A·L·I·T·Y·   P r o f e s s i o n a l   L i n u x   A u d i o
- - ------------------------------------------------------------- - -
    ·Rock Solid        David Olofson:
    ·Low Latency       www.angelfire.com/or/audiality     ·Audio Hacker
    ·Plug-Ins          audiality_AT_swipnet.se            ·Linux Advocate
    ·Open Source                                          ·Singer/Composer

