Subject: Re: [linux-audio-dev] LAAGA - main components
From: Jim Peters (jim_AT_aguazul.demon.co.uk)
Date: Wed May 02 2001 - 01:40:30 EEST
Paul Davis wrote:
> the simplest solution for that would seem to be to have aserver be an
> in-process plugin of the low-latency server. then everybody else can
> simply use a pcm_shm device via the standard ALSA PCM API, and presto,
> they work (though they don't get the ability to read data from each
> other).
I'd need to know more about the details of this `aserver' and
`pcm_shm' stuff, but if this looks good to you, I'm willing to go
along with it. How does `aserver' solve the problem of unblocking
waiting processes without making any system calls?
> ... aserver ... (... one process collecting and distributing data
> to/from other processes) ... won't work for low latency scenarios.
Understood.
> >If we write the device driver, then it won't block. All it's going to
> >do is wake a bunch of processes. I don't know the specifics for doing
>
> the thread that calls into the kernel could block. its not actually
> very likely, but unfortunately, all the kernel code paths that involve
> file access, including to /dev/foo, require negotiating various
> filesystem locks. its a bad design, but its what we have.
Understood, although there is always a way around this kind of thing
if we really need to find it.
> >Personally, if I'm going to write an app, I'd be writing a plugin to
> >go in the critical process, with another process outside dealing with
> >the shared memory buffers. I don't care if it has a 100ms delay so
> >long as my buffers are big enough to cope, because all the
> >time-critical stuff would be in the plugin.
>
> Let me give you a scenario where this fails pretty badly by my
> standards. Suppose your GUI displays and allows you to control the
> parameter values of some kind of FX (even just gain). Every time you
> move the fader or whatever widget it is, there's 100ms of audio
> already generated and sitting in the buffer waiting to be
> played.
Hold on - in my app the plugin is generating the audio, not the
external process. If I have real-time control stuff like that going
on, I'd be writing it straight into shared memory, so it would happen
as soon as the GUI process received the mouse-movement. The 100ms
delay only applies to getting audio into the plugin that it can't
generate itself. I'd only be trying to do that for streaming off disk
or something like that, in which case there's no getting around having
a bit of a ramp-up time as you preload the buffer.
> well, i tend to toward the direction that steinberg have taken: if you
> don't use ReWire in your apps, then their audio integration with
> cubase will be poor or non-existent. moreover, if cubase is running
> and using ASIO, nobody else can use the audio interface. to me, this
> seems just fine.
This seems harsh at first reading - but if coders can use ALSA to
efficiently and conveniently write straight into the server then this
should take care of it nicely.
> >Our main target is in-process clients. But we have to support
> >out-of-process clients for interoperability with everything else
> >that's out there.
>
> i'm not sure we have to do that. see the ReWire example above. but i
> agree that a solution that includes all apps would be better, hence my
> appreciation of the aserver-as-plugin-to-the-in-process-server
> approach (if that actually makes any sense).
If this aserver-plugin idea works, then everything is taken care of.
This supports both input and output of audio, right ? So aRts could
patch in at any point in our plugin network (with a largish fixed
latency)?
> >My suggestion is to use shared memory, because this involves no costly
> >system calls or library calls once it is all set up and running.
>
> well, i wasn't thinking about how to move audio back and forth. i
> think if the in-process client needs to do things like buffered
> disk i/o, it should create a thread to do that, not rely on data
> coming from another process. threads are our friends for these
> purposes. ardour-as-aes-plugin does exactly this, creating one thread
> associated with a session to handle all disk i/o.
I hadn't thought of that. I was seeing a clearer (logical) separation
between the apps and the critical process, but using threads amounts
to the same thing from another angle. This is like two processes with
a *lot* of shared memory. It would be much easier to code for, too -
much less fiddly than setting up shared memory segments.
There are still some situations that will require shared memory
buffers (rather than simple shared memory parameter structures). For
instance, a sequencer might have to stream a list of events to a
plugin via a shared memory segment.
You still have to solve the problem of how to have one thread wait for
the main real-time thread when internal buffers become full or empty,
without putting system calls in the main real-time thread. How did
you solve this in ardour? A short-duration sleep?
Another thought - if you are using threads, then you could have a
thread responsible for loading new plugins. This would mean that
loading a new plugin, allocating memory for it and so on could all
happen without disturbing the real-time thread in any way (which is
running live), and then the plugin just needs to be linked into the
run-list somewhere, and off it goes. This problem had been lurking in
the back of my mind - I think threads solve this, am I right?
Jim
--
Jim Peters / Aguazul
jim_AT_aguazul.demon.co.uk / www.aguazul.demon.co.uk
This archive was generated by hypermail 2b28 : Wed May 02 2001 - 02:17:17 EEST