[linux-audio-dev] Re: a new application underway ... timing issues in a client/server environment


Subject: [linux-audio-dev] Re: a new application underway ... timing issues in a client/server environment
From: Benno Senoner (sbenno_AT_gardena.net)
Date: Sat Oct 09 1999 - 13:55:04 EDT


On Sat, 09 Oct 1999, Paul Barton-Davis wrote:
>
>
> the scenario in which the sequencer helps is simple: suppose that
> you're a program that is *not* currently generating samples and *not*
> using the RTC, but basically asleep, doing a select(2) or poll(2) on a
> whole set of various file descriptors. Our automation recording wants
> us to wake up at a certain point and do something.
>
> Currently, you can only get 20ms or so accuracy on this from user
> space in the scenario I describe. If the ALSA sequencer can do
> flexible time scheduling, then you can ensure that you will be woken
> up at exactly the right moment.
>
> What kind of program does this ? Well, certainly most audio applications
> don't need this, because they have the DAC to provide a time sync
> source. But pure MIDI apps don't have this, and so the sequencer is
> very useful for them. Programs that typically don't continuously
> output samples would also benefit from it.

Agreed,
but I think that the Linux audio framework
(now that it seems official that low latency will get into kernel 2.4),
assuming that the Audiality project (or whatever you want to call it) succeeds,
will converge to an audio client/server model, where you will even be able
to run the audio server as a userspace app, so that all Linux users who
don't run RTLinux will still get high audio performance.

That means legacy apps will communicate with the audio server through a
/dev/dsp, /dev/midi etc. emulation (like esddsp, for example),
and new apps will use our API.
Since the PCM device is under the server's control,
there is no problem providing a timer for the clients:
just let the clients wait on a FIFO, message queue or semaphore,
and let the server wake the client up at the right moment.
Using the PCM interface with 32-sample fragments, we can get a timer
with a precision down to about 700 usec (32 samples at 44.1 kHz is
roughly 725 usec), and if we need more we can let the server use the
RTC device.
But as usual, when system load increases, scheduling jitter could
add another 500-1000 usec.

Of course one could choose to run the audio server in kernel space
to provide even better timing (though actually there is little proof that we
would really get better timing under high system load),
but I'm strongly against this, since there will be many buggy plugins
around, any of which could take our Linux box down
(see DirectX on windoze).

Are there any valid reasons NOT to adopt my proposed audio client/server
model?
By using shared memory and efficient IPC, we could even run many
low-latency apps simultaneously with only a little overhead.

I think kernel space belongs mainly to the PCM/MIDI drivers,
and maybe an event router, which would have to be well tested and crash-proof.

comments ?

regards,
Benno.



This archive was generated by hypermail 2b28 : Fri Mar 10 2000 - 07:27:13 EST