Re: [linux-audio-dev] LAAGA - main components


Subject: Re: [linux-audio-dev] LAAGA - main components
From: Paul Davis (pbd_AT_Op.Net)
Date: Tue May 01 2001 - 23:02:33 EEST


>better recommend the right method. Also, in general external apps are
>going to want to output sound whilst the critical process is running,
>and this data will have to be relayed to that process somehow.

the simplest solution for that would seem to be to have aserver be an
in-process plugin of the low-latency server. then everybody else can
simply use a pcm_shm device via the standard ALSA PCM API, and presto,
they work (though they don't get the ability to read data from each
other).
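
for illustration, here's roughly what such a client looks like. this is
just the standard ALSA PCM API; the device name "laaga_shm" is
hypothetical, standing in for whatever name the ALSA configuration maps
to the pcm_shm plugin:

#include <alsa/asoundlib.h>
#include <string.h>

int main(void)
{
    snd_pcm_t *pcm;
    snd_pcm_hw_params_t *hw;
    unsigned int rate = 44100;
    short buf[1024 * 2];   /* 1024 frames of stereo 16-bit silence */

    /* "laaga_shm" would be defined in the ALSA config to route
       through the pcm_shm plugin to the server */
    if (snd_pcm_open(&pcm, "laaga_shm", SND_PCM_STREAM_PLAYBACK, 0) < 0)
        return 1;

    snd_pcm_hw_params_alloca(&hw);
    snd_pcm_hw_params_any(pcm, hw);
    snd_pcm_hw_params_set_access(pcm, hw, SND_PCM_ACCESS_RW_INTERLEAVED);
    snd_pcm_hw_params_set_format(pcm, hw, SND_PCM_FORMAT_S16_LE);
    snd_pcm_hw_params_set_channels(pcm, hw, 2);
    snd_pcm_hw_params_set_rate_near(pcm, hw, &rate, 0);
    snd_pcm_hw_params(pcm, hw);

    memset(buf, 0, sizeof(buf));
    for (;;)
        if (snd_pcm_writei(pcm, buf, 1024) < 0)
            snd_pcm_prepare(pcm);   /* trivial xrun recovery */
}

the point being: nothing in the client knows or cares that its audio is
going through shared memory to another process.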

i don't think, however, that aserver can be the basis of the low
latency server, since it's based on using data delivered via shm and
passing it on to the hardware. this is precisely the scheme (not shm
itself, but one process collecting and distributing data to/from other
processes) that won't work for low latency scenarios. it's the
difference between synchronous and asynchronous design, in essence.
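
to make the distinction concrete, a rough sketch of the two models
(every name here is mine, not any real API):

/* stand-ins for real synthesis and a real shm ringbuffer write */
static void generate_audio(float *buf, int nframes)
{
    while (nframes--) *buf++ = 0.0f;
}

static void push_to_shm(const float *buf, int nframes)
{
    (void)buf; (void)nframes;
}

/* asynchronous (push): the client produces audio at its own pace
   and queues it into shm; a collector process drains the queue to
   the hardware. latency = whatever has piled up in the queue. */
void async_client(void)
{
    float buf[256];
    for (;;) {
        generate_audio(buf, 256);
        push_to_shm(buf, 256);      /* gets played... eventually */
    }
}

/* synchronous (pull): the server calls the client back exactly when
   the interface needs nframes more frames; nothing exists "ahead". */
int process_callback(float *out, int nframes)
{
    generate_audio(out, nframes);   /* runs when the hardware asks */
    return 0;
}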

>If we write the device driver, then it won't block. All it's going to
>do is wake a bunch of processes. I don't know the specifics for doing

the thread that calls into the kernel could block. it's not actually
very likely, but unfortunately, all the kernel code paths that involve
file access, including access to /dev/foo, require negotiating various
filesystem locks. it's a bad design, but it's what we have.

>Personally, if I'm going to write an app, I'd be writing a plugin to
>go in the critical process, with another process outside dealing with
>the shared memory buffers. I don't care if it has a 100ms delay so
>long as my buffers are big enough to cope, because all the
>time-critical stuff would be in the plugin.

Let me give you a scenario where this fails pretty badly by my
standards. Suppose your GUI displays and allows you to control the
parameter values of some kind of FX (even just gain). Every time you
move the fader or whatever widget it is, there's 100ms of audio
already generated and sitting in the buffer waiting to be
played. *That's* the central problem, and that's why, to do real-time
stuff, you have to "generate" all the audio when the audio interface
says it needs it, and not before. "generate" can mean anything, but in
this specific case, it presumably means getting it from some existing
source, processing it in various ways, and delivering it to the
server/audio interface.
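
concretely: at 44.1kHz, 100ms of buffered audio is about 4410 frames
that were already rendered with the *old* gain. in the synchronous
model, the gain is sampled at the moment the interface asks for data,
so a fader move is heard within one hardware buffer. a sketch (the
plugin API here is made up for illustration):

struct control_block {
    volatile float gain;            /* written by the GUI */
};

/* hypothetical process callback: reads the control *now*, at the
   moment the interface demands audio, not 100ms earlier */
int process(struct control_block *ctl, const float *in, float *out,
            int nframes)
{
    float g = ctl->gain;
    for (int i = 0; i < nframes; i++)
        out[i] = in[i] * g;
    return 0;
}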

>Perhaps it is not necessary to bother with this. Still, I feel that
>if this really is going to be a killer low-latency plugin server,
>these kinds of issues need to be dealt with, not least to minimise the
>load on that server due to communication with other processes.

well, i tend toward the direction that steinberg have taken: if you
don't use ReWire in your apps, then their audio integration with
cubase will be poor or non-existent. moreover, if cubase is running
and using ASIO, nobody else can use the audio interface. to me, this
seems just fine.

>> Abramo has pointed out that the pcm_shm and aserver components to
>> alsa-lib *already* do this. Nothing new needs to be written to use
>> this, applications that already use alsa-lib don't need modifying, and
>> thanks to his work on libaoss, even OSS API-based applications can use
>it too, with a little help from LD_PRELOAD.
>
>I've rechecked the LAD archives, concerned that I might have missed
>some critical post from Abramo explaining pcm_shm and aserver, but I
>couldn't find it.

you're not missing it. i'm just familiar with the source, and i have
private contacts with abramo from time to time where we fight about
stuff like this :) right, abramo?

>> So there isn't any work needed to support this kind of thing. The
>> question remains, as Kai showed very clearly a couple of days back, do
>> we support only in-process clients, only out-of-process clients, or both?
>
>Our main target is in-process clients. But we have to support
>out-of-process clients for interoperability with everything else
>that's out there.

i'm not sure we have to do that. see the ReWire example above. but i
agree that a solution that includes all apps would be better, hence my
appreciation of the aserver-as-plugin-to-the-in-process-server
approach (if that actually makes any sense).

>> With in-process clients, what do we do about GUI's? If I understand
>> your suggestions and Kai's correctly, that would be down to the client
>> in question: how it and its GUI communicated would be a private
>> affair. Did I understand this correctly?
>
>My personal opinion is that how the plugin and GUI communicate is a
>private affair, but we should recommend particular methods and provide
>example code at the very least. This is due to the detrimental effect
>of certain means of communication on the critical process - you
>yourself provided most of this information, Paul, from your experience
>with low-latency.

well, it's been discussed quite a bit in the context of LADSPA GUIs,
and the best approach is still far from clear to me.

>My suggestion is to use shared memory, because this involves no costly
>system calls or library calls once it is all set up and running. The
>only issue then is how to handle an external process that wishes to
>wait on a shared memory buffer. I have made three suggestions, none
>of which have met with any favourable response so far:

well, i wasn't thinking about how to move audio back and forth. i
think if the in-process client needs to do things like buffered
disk i/o, it should create a thread to do that, not rely on data
coming from another process. threads are our friends for these
purposes. ardour-as-aes-plugin does exactly this, creating one thread
associated with a session to handle all disk i/o.
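
a sketch of that thread-per-session idea (names are illustrative, not
lifted from ardour): a pthread fills a single-reader/single-writer ring
buffer from disk, and the realtime side only ever copies out of the
ring, never touching the filesystem:

#include <pthread.h>
#include <stdio.h>

#define RING_FRAMES 65536
#define CHUNK       4096

static float ring[RING_FRAMES];
static volatile unsigned wr, rd;    /* one writer, one reader */

/* runs in its own thread; free to block on the disk */
static void *disk_thread(void *arg)
{
    FILE *f = (FILE *)arg;
    for (;;) {
        if (RING_FRAMES - (wr - rd) >= CHUNK) {
            fread(&ring[wr % RING_FRAMES], sizeof(float), CHUNK, f);
            wr += CHUNK;
        }
        /* else: sleep briefly or wait on a condition variable */
    }
    return NULL;
}

/* the realtime side: no syscalls, just a copy out of the ring */
void process(float *out, unsigned nframes)
{
    for (unsigned i = 0; i < nframes && rd != wr; i++, rd++)
        out[i] = ring[rd % RING_FRAMES];
}

/* started once per session, e.g.:
     pthread_t t;
     pthread_create(&t, NULL, disk_thread, fopen("session.raw", "rb")); */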

but for relaying control information, yes, i think that shm would be
ideal. the plugin would set up a control block, and the GUI would
read/write values there, with the plugin responding to the new
values as it sees fit. it would be very good to recode ardour to work
in this way. it can clearly be done, since the GUI code is totally
distinct from the engine.
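
sketched out, that might look like this (SysV shm for concreteness; the
key and structure are hypothetical). after setup there are no system
calls on either side, just loads and stores:

#include <sys/ipc.h>
#include <sys/shm.h>

struct control_block {
    volatile float gain;
    volatile float pan;
};

/* plugin side: create the segment and map it */
struct control_block *ctl_create(key_t key)
{
    int id = shmget(key, sizeof(struct control_block), IPC_CREAT | 0600);
    void *p = (id < 0) ? (void *)-1 : shmat(id, NULL, 0);
    return (p == (void *)-1) ? NULL : (struct control_block *)p;
}

/* GUI side: attach to the existing segment */
struct control_block *ctl_attach(key_t key)
{
    int id = shmget(key, sizeof(struct control_block), 0600);
    void *p = (id < 0) ? (void *)-1 : shmat(id, NULL, 0);
    return (p == (void *)-1) ? NULL : (struct control_block *)p;
}

/* GUI:    ctl->gain = new_value;        (a store, nothing more)
   plugin: g = ctl->gain; in process()   (a load, when it chooses) */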

>- We write a special device driver (a bit like a named pipe), and any
> external process waiting for a shared memory buffer sleeps on this
> device. The critical process writes a byte to this device (or does
> an ioctl or something) at the end of the main loop, indicating to
> the device driver to wake up all the processes sleeping on it. This
> costs a system call in the critical process.

There isn't any difference between this and a named pipe, in
fact. Except for the possible wake-one semantics recently introduced
into the kernel (which can presumably be turned off).
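
and the whole named-pipe version of the mechanism fits in a handful of
lines (path and names invented here):

#include <fcntl.h>
#include <sys/stat.h>
#include <unistd.h>

int open_wakeup_fifo(const char *path, int writer)
{
    mkfifo(path, 0600);             /* harmless if it already exists */
    return open(path, writer ? O_WRONLY : O_RDONLY);
}

/* critical process, end of main loop: one write() syscall */
void cycle_done(int fd)
{
    char c = 0;
    write(fd, &c, 1);
}

/* external process: sleeps in read() until the byte arrives */
void wait_for_cycle(int fd)
{
    char c;
    read(fd, &c, 1);
}

one caveat worth noting: a byte written to a FIFO is consumed by
exactly one reader, so waking N waiters means writing N bytes or using
one FIFO per waiter.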

--p

