[linux-audio-dev] Re: lock-free structures ... Re: MidiShare Linux


Subject: [linux-audio-dev] Re: lock-free structures ... Re: MidiShare Linux
From: David Olofson (david_AT_gardena.net)
Date: Fri Jul 21 2000 - 20:20:54 EEST


On Fri, 21 Jul 2000, Benno Senoner wrote:
> On Fri, 21 Jul 2000, David Olofson wrote:
> > On Thu, 20 Jul 2000, Stephane Letz wrote:
> > > >That means if you want to be sure that it works everywhere
> > > >you have to ship the SMP version.
> > >
> > > I think using the "lock" instruction is costly when used on UP. It
> > > would be better to provide two versions, selected by the __SMP__
> > > flag. We will publish some benchmarks to show that.
> >
> > Hmm... This is definitely a reason to make sure that plugins don't
> > have to deal with these kinds of lock-free interfaces directly.
> >
> > (As to MuCoS, the idea is to base the plugin API on simple, open
> > structs and inline code/macros. Plugins won't run in separate threads
> > anyway, and in the rare cases where you really want to do that, the
> > hosts will deal with these matters as a part of the IPC.)
> >
> >
> > //David
>
> I think that applications running as plugins will need to use these data
> structures heavily, so we have to solve the SMP/UP issues in advance.

That is, one plugin <-> one direct application connection? Well, yes,
but why would it make sense to pass one event at a time all the way
through the IPC layers, when the application thread isn't even
running SCHED_FIFO?

Event level sync is *only* required when we have to do RT
communication between RT threads (such as MIDI <-> audio thread
communication) - in other situations, events can be transferred in
chunks of sizes related to the cycle time of the application.

(For *true* RT events, such as a GUI button click resulting in some
events, this is a non-issue, since that cannot happen very
frequently. BTW, the non/soft RT application should take the overhead
here, not the RT side.)
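A rough sketch of what chunked transfer could look like - a
single-producer/single-consumer FIFO where the non-RT side pushes a
whole cycle's worth of events with one counter update. All the names
(ev_fifo, ev_event) are made up here, and real code would need memory
barriers on architectures with weaker memory models (like SPARC):

```c
#define FIFO_SIZE 256  /* must be a power of two */

typedef struct { int timestamp; int type; int data; } ev_event;

typedef struct {
    ev_event buf[FIFO_SIZE];
    volatile unsigned read;   /* only the consumer writes this */
    volatile unsigned write;  /* only the producer writes this */
} ev_fifo;

/* Producer: push a whole chunk in one go; returns events copied.
 * The single store to f->write publishes the entire chunk. */
static unsigned ev_fifo_write(ev_fifo *f, const ev_event *ev, unsigned n)
{
    unsigned i, space = FIFO_SIZE - (f->write - f->read);
    if (n > space) n = space;
    for (i = 0; i < n; ++i)
        f->buf[(f->write + i) & (FIFO_SIZE - 1)] = ev[i];
    f->write += n;
    return n;
}

/* Consumer (RT side): drain whatever arrived since the last cycle. */
static unsigned ev_fifo_read(ev_fifo *f, ev_event *out, unsigned max)
{
    unsigned i, avail = f->write - f->read;
    if (avail > max) avail = max;
    for (i = 0; i < avail; ++i)
        out[i] = f->buf[(f->read + i) & (FIFO_SIZE - 1)];
    f->read += avail;
    return avail;
}
```

The free-running unsigned counters wrap cleanly because FIFO_SIZE is a
power of two, so no modulo is needed in the fast path - and the sync
cost is one word store per *chunk*, not per event.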

*Exactly* what kinds of connections are you concerned about?

> As for providing ringbuffer code in a library, I am a bit against it,
> because the compiler can't inline the code, making it much slower.
> (I am thinking about the case where you access these data structures
> heavily, thousands of times per second (as in disksampler).)

Is it really required that all data is fed *directly* from the disk
butler into the FIFOs, one transaction at a time...?

> So in the x86 case we could simply provide a source level API (e.g.
> ringbuffer.h) which then gets inlined by the compiler.
>
> On UP/SMP SPARCs the situation would be more complicated because there IS
> a difference in the macros.

This makes me a bit nervous... But hey, the world is far from perfect
anyway! :-)
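For reference, the UP/SMP split on x86 can be expressed the way the
Linux kernel headers do it - the "lock" prefix is only compiled in
when __SMP__ is defined, so UP builds pay nothing. This is just a
sketch in that style (the non-x86 fallback uses a GCC builtin purely
so it compiles everywhere, and stands in for whatever the target's
atomic macros would be):

```c
typedef struct { volatile int counter; } atomic_t;

#if defined(__i386__) || defined(__x86_64__)
# ifdef __SMP__
#  define LOCK_PREFIX "lock; "   /* bus lock needed on SMP */
# else
#  define LOCK_PREFIX ""         /* UP: plain incl is enough */
# endif
static __inline__ void atomic_inc(atomic_t *v)
{
    __asm__ __volatile__(LOCK_PREFIX "incl %0" : "+m" (v->counter));
}
#else
/* Non-x86 fallback so the sketch compiles; real code would use the
 * architecture's own atomic macros here. */
static __inline__ void atomic_inc(atomic_t *v)
{
    __sync_add_and_fetch(&v->counter, 1);
}
#endif
```

Since this lives in a header and gets inlined, the UP build literally
contains no trace of the SMP cost - which is exactly why shipping one
binary for both is the problematic case.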

> Alternatives ?
> (keep speed in mind: without speedy lock-free FIFOs, we will add CPU
> overhead as the access frequency increases)

Putting it a different way: what is the point of these extreme access
frequencies? With an event system, you could fill in the data for all
FIFOs, then build a list of events that describe what you have done,
and then pass that list to the audio thread using one single API
call. How's that for minimal sync overhead?
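The "one single API call" could be as cheap as one atomic pointer
exchange: build a linked list of events off-line, then publish the
whole list at once. A sketch (names invented; the __sync builtins are
modern GCC shorthand for the CAS/exchange primitives the real macros
would provide):

```c
#include <stddef.h>

typedef struct event {
    int timestamp;
    int type;
    struct event *next;
} event;

typedef struct { event * volatile pending; } event_port;

/* Producer: hand an already-built list (head..tail) to the port with
 * a single compare-and-swap, however many events it contains. */
static void port_post_list(event_port *p, event *head, event *tail)
{
    do {
        tail->next = p->pending;
    } while (!__sync_bool_compare_and_swap(&p->pending, tail->next, head));
}

/* Consumer (audio thread): grab everything posted so far in one
 * atomic exchange at the start of its cycle. */
static event *port_fetch_all(event_port *p)
{
    return __sync_lock_test_and_set(&p->pending, NULL);
    /* lists come back newest-batch-first; reverse if order matters */
}
```

So the sync cost per cycle is constant, no matter how many events were
queued up - the heavy lifting (filling in the FIFOs, building the
list) all happens outside any shared state.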

As to plugins, they simply ignore all this, and assume that their
targets run in the same thread. If this is not the case, the host
gets to figure out how and how frequently to pass events between
threads. (Something like once every cycle of the thread with the
highest cycle rate should be safe, I think... Keep in mind that you
cannot work reliably with events timestamped less than two or three
cycles in the future anyway!)

> But perhaps it would be useful to ask the linux-kernel guys to run some
> benchmarks on SMP SPARCs to get an idea of the speed differences.
>
> Fortunately atomic_read /set macros on PPC remain the same on both SMP/UP,
> so at least on the 95% (or more) boxes in circulation, we will not face this
> issue.

Good, but I still think it would be cool if application/plugin level
code would never see it *at all*, while we still get a simpler,
faster and more flexible API.

The RT audio threads will run in a cycle based fashion, and for
reasons mentioned in the lengthy Quasimodo vs. sample accurate
processing thread, you *must not* try to communicate with an audio
thread that's running! Either you simply cannot (because it's
preempting your thread), or you don't have a clue *what* it's running
when your event gets there - it may well be done with the plugin
you're sending your event to, which will result in the plugin either
ignoring your event, or dealing with it during the next cycle.

Of course, audio buffer events are no different from the rest. Well,
actually, they're as hard real time as events will ever get in this
kind of system; if they arrive late, you get drop-outs, and there's
not a thing in the world you can do about it.

(Other events, such as MIDI events, *may* in some cases be accepted
and processed with acceptable results even if they're a little late,
but this is not the case with audio buffers. If nothing else, you
simply can't risk trying to back out and partially re-run the
processing tree, as that may take more time than you have left for
the current frame.)

//David

.- M u C o S --------------------------------. .- David Olofson ------.
| A Free/Open Multimedia | | Audio Hacker |
| Plugin and Integration Standard | | Linux Advocate |
`------------> http://www.linuxdj.com/mucos -' | Open Source Advocate |
.- A u d i a l i t y ------------------------. | Singer |
| Rock Solid Low Latency Signal Processing | | Songwriter |
`---> http://www.angelfire.com/or/audiality -' `-> david_AT_linuxdj.com -'



This archive was generated by hypermail 2b28 : Fri Jul 21 2000 - 21:18:13 EEST