Re: [linux-audio-dev] Re: timing issues in a client/server environment


Subject: Re: [linux-audio-dev] Re: timing issues in a client/server environment
From: Benno Senoner (sbenno_AT_gardena.net)
Date: Sat Oct 09 1999 - 21:27:39 EDT


On Sun, 10 Oct 1999, Paul Barton-Davis wrote:

>
> >An OS that provides only PCM and MIDI drivers is not suitable
> >to provide a powerful and flexible DAW enviroment.
>
> yes it is. in fact, its precisely why Paris, ProTools and the rest all
> work under Windows :) oh, ok, these are not flexible enough. i
> agree. but this has to do with inter-app communication, not
> *necessarily* access to the device drivers.

Actually, people are used to an all-in-one environment provided by
a specific hw/sw platform.
For some people that all-in-one hw/sw can provide enough
flexibility to perform the needed tasks.
But providing all the needed features for all possible users in
a single monolithic application is just impossible.
This is why I am for a modular design.
In fact, no OS currently provides this modular DAW environment
out of the box (I don't know if BeOS has some of these features).

>
> >Quasimodo is a nice app with many functions, but it's still a monolithic app,
> >and my foobar wave editor can't use your 24db lowpass filter with a simple
> >API call.
>
> thats right. it should use my 24db lowpass filter *plugin*, which
> would happen to be the same one that Quasimodo would use, in some
> future version where Csound compatibility is no longer the crux of the matter.
>
> >Paul the keyword is "integration", and I think many people are not very
> >satisfied with windozes integration between audio apps,
>
> Thats true for people who want lots of apps that all talk to the h/w
> at the same time.

Yes, but people are realizing that with the increase in CPU/disk power,
the hardware IS actually capable of running more apps simultaneously.

Just as you can run 10 instances of GIMP, you should be able to run
10 instances of Cubase, as long as you do not overload the CPU,
since in the latter case there are realtime requirements.

>
> Thats not true, as far as I can tell, for people who use well-designed
> audio environments.

Agreed; for example Pulsar/Scope could be thought of as entirely self-contained
(except that you need an additional MIDI sequencer app).
It's efficient and, as long as you stay within the Pulsar/Scope environment,
very flexible too.

>
> >(see for example Seer Reality monopolizes the DirectX PCM audio,
> >that means if you want to play your mp3 on your 2nd soundcard, you can't)
>
> part of that problem is just braindead design. Seer are not alone in
> this - even the wonderful Bill Schottstaedt wrote his sndlib() code to
> open /dev/dsp when the library initializes, even though it will likely
> not use it!

Yes, braindead design is one issue, but often the limiting factor is the
host's API, which makes no allowance for concurrent execution
of several apps.

>
> >and if we are able to come up with a good standard,
> >I'm sure that many will have valid reasons to switch.
>
> i agree entirely. i just don't want the model of Linux audio to push
> the existence of "direct access" to the PCM and MIDI ports out of the
> picture, because for some, this is the right model.

Agreed, and with my proposed method it's very easy to do:
just tell the audioserver to release the device resources and let
the app do its direct access.
I see no problem here.
I don't want to force audio apps to use our API;
I'm only saying that IF an app uses our API then there are LOTS of benefits
(routing, syncing, plugins etc.).
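
To make that escape hatch concrete, here is a rough sketch of the app's side
once the server has been asked to let go of the device (the "release" call
itself is whatever our API ends up defining; /dev/dsp (OSS) and the fragment
size are just assumptions of this sketch). It's nothing but the ordinary
Unix file interface:

/* hypothetical sketch: direct, unmediated access after the audioserver
 * has released the PCM device; error handling kept minimal on purpose */
#include <fcntl.h>      /* open(), O_WRONLY */
#include <unistd.h>     /* write(), close() */

int main(void)
{
    char frag[4096] = { 0 };              /* one fragment of silence          */

    int fd = open("/dev/dsp", O_WRONLY);  /* the app now owns the device node */
    if (fd < 0)
        return 1;
    write(fd, frag, sizeof frag);         /* plain blocking write to the DAC  */
    close(fd);                            /* hand the device back afterwards  */
    return 0;
}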

>
> two context switches per delivery of data is just silly, for some things.

actually the number is much lower:
assume there are 5 apps (which use our API via shmem) doing PCM output,
and the audio server mixing the audio streams together and writing
the result to the DAC.

assume the server is currently playing a fragment and blocks on the write().
- context switch 1
  client1 does its DSP computations, then writes to shmem, and then
  waits for an ACK from the server (via message queue or semaphore).
- context switch 2
  client2 gets the CPU, same behaviour as in the client1 case
..
..
- context switch 5
  client5 gets the CPU
- context switch 6
  the server wakes up because the write() of the audio fragment is done;
  now the server wakes up all the clients by sending a message (or changing
  the state of the semaphore) to each client,
  mixes together the PCM data from the shared memory buffers, and
  issues the write() to the PCM device.
now we are in the initial case again.

As you see:
5 clients running + 1 audioserver = 6 context switches per cycle; seems pretty ideal, eh?
:-)
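
To make the cycle above concrete, here is a rough sketch of one client's loop.
I'm using POSIX shared memory and a named semaphore for the ACK purely for
brevity (the same thing works with SysV msgsnd()/msgrcv()); the names
"/audiosrv_ack" and "/audiosrv_pcm", FRAG_BYTES and render_fragment() are
placeholders, not a real API:

#include <fcntl.h>      /* O_RDWR                 */
#include <semaphore.h>  /* sem_open(), sem_wait() */
#include <string.h>     /* memset()               */
#include <sys/mman.h>   /* shm_open(), mmap()     */

#define FRAG_BYTES 4096                  /* one fragment; size is a placeholder */

static void render_fragment(short *buf)
{
    /* the client's DSP work goes here; silence keeps the sketch short */
    memset(buf, 0, FRAG_BYTES);
}

int main(void)
{
    /* ACK semaphore: the server posts it to every client once per DAC write() */
    sem_t *ack = sem_open("/audiosrv_ack", 0);

    /* shared fragment buffer that the server mixes from (error checks omitted) */
    int fd = shm_open("/audiosrv_pcm", O_RDWR, 0);
    short *pcm = mmap(NULL, FRAG_BYTES, PROT_READ | PROT_WRITE,
                      MAP_SHARED, fd, 0);

    for (;;) {
        render_fragment(pcm);  /* do the DSP and write the result to shmem    */
        sem_wait(ack);         /* then sleep until the server has consumed it */
    }
}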

I even thought about the fullduplex case:
in one of my previous postings I proposed a solution where
there is only 1 sync point for the entire client/server cycle;
that means clients and server exchange input AND output buffers simultaneously.
So if you run 5 fullduplex clients + 1 audio server,
you still get only 6 context switches (see the sketch below).
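
A correspondingly rough sketch of the fullduplex client (again with placeholder
names, layout and sizes): the shared region holds the last captured input
fragment and the client's next output fragment side by side, so both buffers
are exchanged at the single per-cycle sync point:

#include <fcntl.h>      /* O_RDWR                 */
#include <semaphore.h>  /* sem_open(), sem_wait() */
#include <sys/mman.h>   /* shm_open(), mmap()     */

#define FRAG_SAMPLES 1024                 /* samples per fragment; placeholder */

struct duplex_frag {
    short in [FRAG_SAMPLES];   /* filled by the server from the ADC          */
    short out[FRAG_SAMPLES];   /* filled by the client, mixed by the server  */
};

static void process(const short *in, short *out)
{
    for (int i = 0; i < FRAG_SAMPLES; i++)  /* trivial DSP: monitor the input */
        out[i] = in[i];
}

int main(void)
{
    sem_t *ack = sem_open("/audiosrv_ack", 0);
    int fd = shm_open("/audiosrv_duplex", O_RDWR, 0);
    struct duplex_frag *f = mmap(NULL, sizeof *f, PROT_READ | PROT_WRITE,
                                 MAP_SHARED, fd, 0);

    for (;;) {
        process(f->in, f->out);  /* consume the input AND produce the output...  */
        sem_wait(ack);           /* ...then block once: one sync point per cycle */
    }
}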

The only drawback is that the minimum latency is 3 fragments, as opposed to 2
fragments in a monolithic, non client/server app like:
while (1) {
    read(pcm_fd, inbuf, frag_size);    /* blocking read from the ADC  */
    do_processing(inbuf, outbuf);
    write(pcm_fd, outbuf, frag_size);  /* blocking write to the DAC   */
}

But even in this case you have to take into account the scheduling jitter plus
a do_processing() step that consumes almost 100% of the CPU,
which means the minimum reliable latency is 3 fragments (input-to-output) anyway.

Still not convinced?
Do you remember when I benchmarked msgsnd()/msgrcv()?
About 20-25 usecs (on a PII-400) round-trip time between 2 threads
(2 context switches (since msgsnd() is non-blocking) + 2 msgsnd() + 2 msgrcv()).

Assume we run 5 soft-synths + the audioserver with 1ms fragment granularity:
we need 6 context switches per cycle (at most 60-75 usec),
which is about 5-7% CPU overhead.
If you increase the fragment size to 3ms, the same 60-75 usec per cycle comes to roughly 2% overhead.
David's sample-accurate event system would still provide exact timing,
and at 3ms the latency would still be much better than on other OSes / hw boxes.

Just as David said: if you need ultra-low latency, the CPU overhead is
unavoidable.

And I think that running N fullduplex apps with only
N+1 context switches per DSP cycle isn't that bad
(actually it's the theoretical minimum).
:-)

>
> i would rather have a defined method by which any application can talk
> to any other application, and either of them might actually own the
> device nodes. you could start up a DAW, and *it* would be the engine,
> accepting data from other sources, or you could start up a replacement
> for esd, and *it* would do the same thing, etc. i don't want to force
> programs into using something other than the Unix file interface if
> they believe they want to use it.

With my model there is no such enforcement; see above.

>
> --p

Hmm, I partly disagree, because then the DAW app has to provide all the
audio server functionality itself.
There are 2 solutions:
1) the app implements its own "server", which is rather silly since you are
reinventing the wheel.

2) the app calls our audio server, but then we are back in my proposed case.
:-)

regards,
Benno.


