[linux-audio-dev] Realtime restrictions when running multiple audio apps .. Was: Re: disksampler ...


Subject: [linux-audio-dev] Realtime restrictions when running multiple audio apps .. Was: Re: disksampler ...
From: Benno Senoner (sbenno_AT_gardena.net)
Date: Fri Jul 14 2000 - 11:45:56 EEST


[ I am CCing Victor Yodaiken and it would be nice if he could comment on the
RTOS-related topics in this mail ]

On Fri, 14 Jul 2000, Kai Vehmanen wrote:
> On Thu, 13 Jul 2000, Benno Senoner wrote:
>
> > regarding the soundserver war I am still standing to my opinion:
> > general purpose soundserver will not cover the needs of high-end audio
> > applications, and it doesn't make sense to wait forever until the
> > "holy grail" soundserver is found , which fits all possible scenarios.
> [...]
> > So let KDE use arts and GNOME esound, but
> > high-end applications eg: a low-latency softsampler/synth which communicates
>
> It sounds like you haven't even tried aRts...? You aren't really being
> fair comparing esound and aRts. Esound was made for generic purposes as
> far as I know, but aRts (Analog RealTime Synthesizer!) has a different
> background. It is going to be KDE's soundserver, but it was not designed
> for mixing occasional pling-plong sounds when you are opening your
> windows. Give Stefan's efforts a little credit, please.

I admit that I haven't tested aRts yet, but I've read much of the docs on the
website. I am in no way bashing Stefan's efforts.
Sorry for comparing aRts to esd; I know that it does much more than esd, like
synthesizing sounds etc., but I was comparing only the sound-mixing client/server
model, which is similar since both use some form of IPC to communicate between
application and soundserver.

What I want is to keep things _VERY_ simple, and aRts is in my opinion
too complex for this task.

>
> I admit that there might (and probably will be) problems in using any
> existing soundserver. But you can always fix and tune things, as long as
> the basic design is good enough.

Basic design is the key here .... my model requires _cooperation_ from the
audio apps (loading them as separate plugins and imposing restrictions
on the operations each module can do and so on ...), while existing
soundservers work out of the box using approaches like artsdsp or esddsp.

>
> This reminds me of LADSPA development. We managed to keep it
> simple-and-stupid, and now we have a plugin API. Many said, that
> development will continue, and we will see LADSPA-v2. Ok, this will
> probably happen, but as we all know, nothing has happened yet. I think
> this is the reason why we should try to develop the current designs rather
> than start from scrath. Developing a non-trivial project always takes
> time. The sampler-sw project is a different thing, as we don't have any
> existing (free) implementations at our use.

Yes, it takes time, but for now we can always pipe, let's say, the sound output
of the sw-sampler into aRts, using aRts as the soundserver, so what's the problem?

Adding the "manually-scheduled audioclients" model to aRts can be done,
but I prefer to encapsulate the stuff in a small and _lean_ app which is
easily maintainable and where the chances are better that we can produce
an efficient audio scheduler.

>
> And as for the high-end <-> low-end separation, I just don't see much
> sense in it. Developers can do what they want, but I as a user,
> want all my apps to work without clitches. And surpring or not, many "low-end" toy
> programs are much more suitable for creative use than these "professional
> apps". It's a damn shame if a program has latency problems, but if it
> produces nice sounds, I'll still use it - one way or another.

I agree, and this is why I proposed writing a sort of rtsoundserverdsp module
(similar to artsdsp or esddsp) which can fetch the audio from an existing
soundserver.

This hookup will consist of two parts: a plugin which gets loaded into the
rtsoundserver and a userspace client which talks to the existing application
or soundserver.
This way, should a normal application drop out, the other "high-end"
apps will not be affected by it (e.g. you are recording stuff from ADAT to disk
and do not want any interruptions).
The only risk of overall dropouts arises when a "low-end" application
runs SCHED_FIFO and blocks the CPU for several msecs.
The solution to the problem is: do not run "untrusted" apps (e.g. apps where you
don't know whether they meet the RT programming restrictions etc.) as root.
This way, even if a low-end app has a flawed programming model, it will
not be able to get SCHED_FIFO permissions, thus allowing the OS to preempt
it when the "high-end" ones need the CPU.
I think many audio developers are missing the concepts of realtime programming,
which impose some restrictions on the programming model.
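
To make the permission point above concrete, here is a minimal sketch (plain C,
nothing aRts- or rtsoundserver-specific) of how a trusted audio thread would
request SCHED_FIFO at startup; run by an ordinary user, the same call simply
fails with EPERM, which is exactly the safety net I described:

#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <sched.h>

/* Try to obtain SCHED_FIFO for the calling process/thread.
 * Without root privileges sched_setscheduler() fails with EPERM,
 * so an "untrusted" app started as a normal user can never starve
 * the trusted SCHED_FIFO audio threads. */
static int request_fifo(int priority)
{
    struct sched_param param;

    memset(&param, 0, sizeof(param));
    param.sched_priority = priority;          /* 1..99 under Linux */

    if (sched_setscheduler(0, SCHED_FIFO, &param) == -1) {
        fprintf(stderr, "SCHED_FIFO refused: %s\n", strerror(errno));
        return -1;                            /* stay at SCHED_OTHER */
    }
    return 0;
}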

Just look at RTLinux (ask Victor Yodaiken, he can tell you a looong story):
you can't do certain things in certain modules because it would screw up the
timing. RTLinux forces you to respect these restrictions, userspace Linux does
not, and that is why flawed audio software (from an RT POV) exists.

I know it is much easier to implement a simple MP3 player single-threaded,
like this:

while(1) {
  read() MP3 frame from disk
  decode_MP3_frame();
  write() audio fragment to the audio device
}

rather than doing it with two separate threads, one for audio and one
for disk I/O (audio having a higher priority than disk).

As long as you do not push latencies too low, the single-threaded model
works fine, since disks are quite fast these days, and reading a mere
128 kbit/sec stream from disk (16 KB/sec) is not a big issue for a modern disk.
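
Just to show what I mean by doing it "right", here is a rough sketch of the
two-thread structure (the helper read_and_decode_frame() and the buffer sizes
are placeholders, not code from any real player): both threads would be started
with pthread_create(), and the audio thread would additionally request
SCHED_FIFO as in the sketch above, so a slow disk seek can never delay the
write() to the audio device.

#include <unistd.h>

#define RING_FRAMES 64                 /* placeholder: buffered decoded frames   */
#define FRAME_BYTES 4608               /* placeholder: size of one decoded frame */

static char         ring[RING_FRAMES][FRAME_BYTES];
static volatile int head, tail;        /* single producer / single consumer      */

extern void read_and_decode_frame(char *dst);  /* placeholder: read() + decode   */

/* Disk thread: normal priority, allowed to block on read() and seeks. */
static void *disk_thread(void *arg)
{
    for (;;) {
        while ((head + 1) % RING_FRAMES == tail)
            usleep(1000);                        /* ring full, audio is ahead    */
        read_and_decode_frame(ring[head]);
        head = (head + 1) % RING_FRAMES;
    }
    return NULL;
}

/* Audio thread: SCHED_FIFO, touches only memory and the audio fd. */
static void *audio_thread(void *arg)
{
    int audio_fd = *(int *)arg;

    for (;;) {
        while (tail == head)
            usleep(100);                         /* underrun: a real player
                                                    would write silence here    */
        write(audio_fd, ring[tail], FRAME_BYTES);
        tail = (tail + 1) % RING_FRAMES;
    }
    return NULL;
}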
 
I can affirm right now that manually scheduling multiple audio apps, in the
form of plugins within a single soundserver, will deliver these rock-solid
2.1 msec latencies even when running 4-5 apps simultaneously (as long as you do
not overbook the CPU).
(There will be only one audio thread running, which schedules all
plugin callbacks in a linear fashion.)
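
To make the "manually-scheduled audioclients" idea concrete, here is a
hypothetical sketch (the plugin struct and the callback name are invented for
illustration, this is not an existing API): the single SCHED_FIFO audio thread
walks the plugin list once per fragment and mixes the results, so adding another
client costs a subroutine call instead of a context switch.

#define MAX_PLUGINS   16
#define FRAGMENT_SIZE 32          /* frames per cycle; roughly 0.7 msec at
                                     44.1 kHz (my assumption for the example) */

/* hypothetical plugin interface: each audio client is loaded as a plugin
 * and only has to provide a process() callback */
struct audio_plugin {
    void (*process)(float *out, int nframes, void *instance);
    void *instance;
};

static struct audio_plugin plugins[MAX_PLUGINS];
static int                 n_plugins;

/* one cycle of the single SCHED_FIFO audio thread: call every plugin
 * in a linear fashion, mix, then hand the result to the soundcard */
static void run_cycle(float *mixbuf, float *scratch)
{
    int i, j;

    for (j = 0; j < FRAGMENT_SIZE; j++)
        mixbuf[j] = 0.0f;

    for (i = 0; i < n_plugins; i++) {
        plugins[i].process(scratch, FRAGMENT_SIZE, plugins[i].instance);
        for (j = 0; j < FRAGMENT_SIZE; j++)
            mixbuf[j] += scratch[j];      /* a plain subroutine call + mix */
    }
    /* write mixbuf to the audio device here (device-specific) */
}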

All other approaches, like running 5 separate audio apps communicating via
pipe/socket with a soundserver and hoping that EACH task (clients plus
soundserver) will never miss the 700 usec processing cycle, are not a sure thing.
Jun Sun from MontaVista has measured 1 msec scheduling latencies under high load,
and I fear that the client/server model is not up to the task of ensuring these
700 usec processing cycles.
Assume that you want to design an audio app the "right" way, using a
multithreaded approach: assume each application has 3 threads
(audio, MIDI, disk). Multiply this by 5 concurrent applications and you get
15 SCHED_FIFO threads all fighting for the CPU, and the 5 audio threads plus
5 MIDI threads will ALL have to achieve <1 msec scheduling latencies.
If one of these 10 threads fails, then there will be some glitch
(either an audio dropout or a MIDI timing variation).

With my model the number of threads will always be 3
(or perhaps the disk threads could be duplicated, since they do not
have such tight timing constraints),
and scheduling the various plugins (with a simple subroutine call) has
virtually no cost compared to full context switches between dozens of processes.

>
> So, what I suggest now:
>
> - specify our requirements for the soundserver (*)

In order to design/tune an efficient soundserver model, we need
real-world apps which interact with it.
I will use the disksampler as a testbed.
(Who stops you from running 5 disksamplers with 3 msec latencies? :-)
Believe it or not, it _IS_ possible with the above model.)

> - check the current status of aRts, esd, X-audio, etc

You are trying to chase the "one size fits all" approach, but I am convinced
that this is not the right way to go.
I prefer to interface the existing soundservers to the rtsoundserver
rather than including this functionality in every soundserver (which would then
get maintained by different people, which can lead to inconsistencies and so on).

Adding such a hookup will be really trivial (and easily maintainable on
aRts, esd, X-audio etc.) and it will allow you to mix together the window
manager's "ding dong" sounds, aRts synth output and the sound produced by
rtsoundserver clients (like the disksampler).

And you can run all involved tasks (aRts and its clients) SCHED_FIFO;
it may work for you, but there will be no guarantee that the rtsoundserver
clients will not drop out when using 2.1 msec latencies.

> - list things that have to be fixed

The only fix you can do is to run all involved threads SCHED_FIFO and
tell audio developers to design their apps following RTOS principles.
(Seems that we need a "Designing Realtime Audio Applications under Linux HOWTO".)

A client/server model like aRts, provided that the applications are well
designed, will work for all apps not requiring ultra-low latencies (2-3 msec),
and I think you can run several (5-6) applications as aRts clients without
glitches at latencies around 10 msec.

But as said, a softsynth with latencies bigger than 3-4 msec is not acceptable
to the professional performer. (I heard drummers are the most latency-sensitive
people.)
And achieving 3 msec latencies in a heavily multithreaded environment is for
now only a big hope, and no one has come up with proof that it can be done.

> - make a decision whether to use an existing soundserver or
> start a new project

I hope that I have given enough points to answer this question.

I neither want to spread FUD, nor am I a "wheel-reinventer" who insists the
software has to be written by me because I do not trust other people's work.

I am only saying that by keeping things simple and modular, we have the best
chance of achieving the best realtime performance with the smallest maintenance
overhead.
 
>
> PS Just to make sure, I don't use KDE or GNOME, and I've just started to
> use aRts. So no strings attached.. ;)

I _use_ KDE and like it very much :-)

So do you know if the KDE / GNOME folks are planning to use the same
soundserver? (If yes, then I guess it will be aRts :-) )
What about X-audio?

One of Linux's disadvantages is that there is no central authority like on
Windows. This has the advantage of flexibility but can lead to interoperability
problems too.
The soundservers are a classic example of this, and I am still convinced that
we should not follow the "one size fits all" concept.

I hope that I have made the point clear that in order to achieve high performance
we have to accept some compromises; there is almost no way around it.
And developers / users of high-end software can live with these restrictions.

It would be nice to hear comments from other list members
 (David Olofson, Paul, Stefan etc.)

PS: do not expect to do 2 msec audio over the network when running X11 clients;
that is asking too much, since all involved processes on both machines would need
to run SCHED_FIFO, plus you would have to ensure that the network doesn't get
congested.

Benno.


