Re: [linux-audio-dev] Re: Plug-in API progress?

Subject: Re: [linux-audio-dev] Re: Plug-in API progress?
From: David Olofson (audiality_AT_swipnet.se)
Date: Sun Sep 26 1999 - 11:53:04 EDT


On Sun, 26 Sep 1999, Paul Barton-Davis wrote:
> >> it depends. anything which may have to mixdown a series of output
> >> buffers and/or constantly do mutex between plugins to make sure
> >> they are not touching the output buffer at the same time *does* fit
> >> into this category.
>
> >The output buffer and the plug-ins are to be on the same CPU in that
> >case. That's why it's so important to structure the processing net
> >correctly. And, as opposed to an OS running normal tasks, an audio
> >engine running plug-ins has quite a lot of control here.
>
> excellent point(s). on an SMP system, one can run a set of FX plugins,
> for example, on 1 processor and then run the mixer and the output
> plugin on a processor designated to control the hardware output.
>
> however, you may recall that the original context for this
> parallelization discussion was non-SMP systems ("clusters"). In such a
> system, shared memory is not available, so the event system you
> propose turns into a message passing system, complete with network
> latencies to contend with. trying to split the processing net up like this
> means that the plugins that don't write directly to the output buffer have
> to communicate their contribution to its intended contents over the net.

Of course. But there's a big difference between treating a cluster as if it
were a shared memory/SMP system, and treating it as what it is - a network
with message passing latencies. One or two cross-node dependencies/"latency
problems" explicitly controlled by the net builder are a lot better than
dozens inserted "randomly" by the cluster's OS...

> once again, for batch processing, a beowulf cluster doing this would
> be fantastic. my own interest (and hey, Linux is all about scratching
> one's own back!)

Yep. :-) And it saves time and work when APIs and code can be reused.

> is in live performance, and so i'm not thrilled about this kind of thing.

Ok, I see... Well, I'm more the studio kind of guy. If I can get 10 times the
CPU power without having to buy a $$,$$$ SMP box, I don't mind if most of the
processing of recorded material is done with some 100 ms latency or so.

Anyway, I'm trying hard to keep the overhead down, so my system isn't going to
be too expensive for low latency real time work. After all, that's what I
intended to use Audiality/RTLinux for in the first place! If my event system
ends up as too inefficient, I'll just have to start over.

> >> Not to mention dozens of researchers, and quite a few venture
> >> capitalists who got burned in the various ventures that tried to
> >> implement this kind of system.
> >
> >Hmm... I thought it was pretty obvious that you'll get problems if your
> >applications depend on latencies that you can't cut down.
>
> thats not the process that i think occurred. instead, the claim/belief
> was that the latencies would be small, and well understood. for the
> KSR, as far as i recall (its been 5 years or more since i even saw one
> of them), the basic latencies *were* small, but not well understood,
> and it turned out that pathological cases could be caused rather more
> easily than anyone thought.

Well, I guess all kernel hackers know how "well understood" the spin locks and
other internal constructs are in this respect... And that's just for SMP
systems! *heh...*

Fortunately, an audio engine where all "threads" (plug-ins) within a "task"
(sub net) are scheduled in a fixed order, one at a time, is quite a bit simpler
than an OS meant to run lots of threads that may choose to sync to anything
whenever they please.
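
Something like this, roughly (just a sketch with made-up names, not the
actual Audiality engine code):

/* Rough sketch of fixed-order scheduling within one "task" (sub net).
 * All names here are made up for illustration.
 */
#include <stddef.h>

typedef struct plugin plugin_t;
struct plugin {
    void (*process)(plugin_t *self, unsigned long frames);
    void *state;                /* plug-in private data */
};

typedef struct subnet {
    plugin_t **plugins;         /* sorted in dependency order */
    size_t count;
} subnet_t;

/* One engine cycle: every plug-in runs exactly once, in a fixed
 * order, one at a time - so no locking is needed around the
 * buffers shared within the sub net.
 */
static void run_cycle(subnet_t *net, unsigned long frames)
{
    size_t i;
    for (i = 0; i < net->count; i++)
        net->plugins[i]->process(net->plugins[i], frames);
}

The point is that within a sub net there's never more than one plug-in
touching the shared buffers at any given moment, so ordering replaces locking.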

> >That depends on what you want to do, I guess. (And on the quality of the
> >implementation itself, of course. Crappy code can kill any design...)
> >Basically, my system should be more efficient if you really want high accuracy
> >without decreasing the buffer size. But if ultra low latency is the main goal,
> >my system will probably only result in a little more flexibility at a rather
> >high cost in the form of overhead.
> >
> >However, I think that when used in situations where "normal" kind of latencies
> >are good enough, this flexibility will be very good for the usefulness of the
> >plug-in API. Frankly, what's more important: a few % of CPU, or the ability to
> >use a common plug-in for just about anything, and not forcing end users to
> >fiddle with multiple, incompatible systems? True, the extra transparency that
> >a buffered event system provides is mostly of interest to high end users, but
> >that's not the only point with it.
>
> I have to think about all this stuff.

Meanwhile, I'll try to sort out the extra complexity that my system adds.
Ahem... :-) (It doesn't look all that hopeless ATM.)

> >> the notion of time is not based on "timestamps", but is synced to the
> >> DAC.
> >
> >Yes, but not with sample resolution...
>
> Now we have to be careful about terminology again. No, events in
> Quasimodo don't come with timestamps that identify the sample at which
> they occurred. This is for a couple of reasons. First of all, as
> sampling rates go up, certain key input event sources (MIDI and
> keyboard and mouse and serial ports) retain the same overall
> characteristics. At 96KHz, you've got 10usec per frame. Lets posit a
> processor running at 800MHz with an average of 1.5 instructions per
> cycle. Thats 533 instructions per frame. If you're running on a UP,
> which many people will be, I think that the chances of you
> timestamping the event with sample accuracy are not good.

That depends on where your input events come from. Triggering a soft synth
with the audio input from mics gives you a source of sample-accurate events.
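
For example, something like this (made-up names, just to show the idea):
scan the input buffer for a threshold crossing, and you know the exact frame
the trigger occurred at.

/* Sketch only: deriving a sample-accurate trigger event from an
 * audio input buffer. The struct and names are invented here.
 */
typedef struct trigger_event {
    unsigned long frame;    /* exact offset into the current buffer */
    float level;
} trigger_event_t;

/* Returns 1 and fills in 'ev' when the input crosses 'threshold',
 * with the exact frame recorded - i.e. sample-accurate timing.
 */
static int detect_trigger(const float *in, unsigned long frames,
                          float threshold, trigger_event_t *ev)
{
    unsigned long i;
    for (i = 0; i < frames; i++) {
        if (in[i] >= threshold) {
            ev->frame = i;
            ev->level = in[i];
            return 1;
        }
    }
    return 0;
}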

> Its worse than that, though. When an event happens, you can only
> timestamp it with your notion of the current time, in whatever units
> that happens to be. But time must stand still during the calls to
> "process()" for each plugin currently in the net. So you can either
> stamp the event with the time at the beginning of the control cycle,
> or the time at the end, but you cannot know the offset into the
> cycle.

Not true with a driver that timestamps input data when the interrupts occur.
(I did that back in the Amiga days...) And you shouldn't even need that with
real "high accuracy" hardware.

> Its also important to note that not having time which ticks by "per
> frame" doesn't mean that you don't have "sample resolution" within the
> plugins. By making parameter updates happen asynchronously, the plugin
> can just concentrate on measuring the number of frames/samples it has
> generated so far, and doing various things accordingly. It can switch
> behaviour exactly on the 102nd sample, for example, or the 1999923'rd
> sample, or whatever.
>
> So no, its true that Quasimodo's engine can't tell you the time with
> sample resolution, but plugins know the time with sample
> resolution. Strange, eh ?

And? What good is a perfect notion of time when no one tells you exactly when
to do things?

> More broadly, can you outline your objections to a scheme in which
> non-net-reorganizing events are handled asynchronously (i.e. not
> routed to the plugin's process() call) ?

Well, the event can be passed at any time in any way, but I still want a way
to let plug-ins know *exactly* when to do something. The buffer size is too
severe a restriction in many cases. (Probably more important in other
situations than playing soft synths live.)
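
To make that concrete, here's roughly what I mean (invented structs, not my
actual event system): process() gets events with frame offsets, and the
plug-in renders up to each event before applying it, so timing isn't limited
to buffer granularity.

/* Sketch of splitting a buffer on timestamped events. Everything
 * here is invented for illustration - not the real event system.
 */
typedef struct timed_event {
    unsigned long frame;    /* offset into this buffer */
    int type;
    float value;
} timed_event_t;

typedef struct synth {
    float freq;             /* minimal "voice" state for the example */
} synth_t;

static void render(synth_t *s, float *out, unsigned long frames)
{
    unsigned long i;
    for (i = 0; i < frames; i++)
        out[i] = 0.0f;      /* placeholder; a real synth renders here */
    (void)s;
}

static void apply_event(synth_t *s, const timed_event_t *ev)
{
    if (ev->type == 0)
        s->freq = ev->value;    /* e.g. a pitch change */
}

/* Render in chunks, applying each event exactly at its frame -
 * accuracy is no longer restricted by the buffer size.
 */
static void process(synth_t *s, float *out, unsigned long frames,
                    const timed_event_t *events, unsigned long nevents)
{
    unsigned long pos = 0, e;

    for (e = 0; e < nevents; e++) {
        unsigned long when = events[e].frame;
        if (when > pos) {
            render(s, out + pos, when - pos);
            pos = when;
        }
        apply_event(s, &events[e]);
    }
    if (pos < frames)
        render(s, out + pos, frames - pos);
}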

> >> if a plugin is executing in any given cycle, then its query of
> >> the current time will always return the same value. that doesn't mean
> >> there have been no events - it means its busy generating the same
> >> output buffer (and working on the same input buffer) as everybody
> >> else who will be executed during that cycle.
> >
> >"will be" is very important here.
>
> why ?

They're not executed in parallel == not at the same time. Unless the code that
"sends" events is blocked by the engine code, events may occur in the middle of
the cycle - which means that some plug-ins will execute before that event and
others after it. So, which group does the event receiver belong to...?

(And if the engine *is* blocking the process sending events, you end up with
the exact same behaviour as my system - events can only be passed between the
cycles.)

> >> and i know that you know this, but the system i'm describing is
> >> implemented, fully functional, and about to go to release 0.1.7 this
> >> weekend :)
> >
> >Yes, but does that mean it's the perfect solution? ;-)
>
> oh, very definitely not, and i wouldn't want to give even the
> appearance of such arrogance. i mention it only because it represents
> a real working "plugin" system very much like the one under discussion
> (in terms of design goals) that embodies close to a year of evolution
> and use (plus all the experience of Csound behind it). there is no
> reason to believe its the right one, not least because it started out
> with the important goal for me of maintaining source code
> compatibility with Csound opcodes. I still think this is an excellent
> idea.

Indeed it is, and I'm not suggesting that your solution would simply be the
wrong way to go. I'm sorry if I came across as believing I have all the
Ultimate Solutions. (I don't, Benno... ;-)

I do appreciate your insightful comments, and I have had to do a lot of
thinking at times to see if I'd really got things right. :-)

//David

 ·A·U·D·I·A·L·I·T·Y·       P r o f e s s i o n a l   L i n u x   A u d i o
- - ------------------------------------------------------------- - -
    ·Rock Solid                                            David Olofson:
    ·Low Latency      www.angelfire.com/or/audiality        ·Audio Hacker
    ·Plug-Ins           audiality_AT_swipnet.se           ·Linux Advocate
    ·Open Source                                        ·Singer/Composer

