Re: [linux-audio-dev] mucos client/server implementation issues, I need opinions


Subject: Re: [linux-audio-dev] mucos client/server implementation issues, I need opinions
From: Benno Senoner (sbenno_AT_gardena.net)
Date: Sun Dec 19 1999 - 05:58:21 EST


On Sun, 19 Dec 1999, Roger Larsson wrote:
> > The API should be fullduplex, and allow a tree-like client/server
> > structure. That means every client can be the "server" of other clients.
> >
> ok,
> if by fullduplex you mean to/from audio board(s)/HDs...

By full duplex I mean that every process can talk to every other process
in both directions: every client can read data from and write data to its
"server". That leads to a flexible environment, since you can pre-process
and post-process data across a tree of interdependent clients.
Not always needed, but sometimes very useful.
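
To make that concrete, here is a rough C sketch of what such a
full-duplex, tree-capable client API could look like. Every name below
is hypothetical, invented only for illustration; it is not the actual
Mucos API:

/* hypothetical sketch of a full-duplex, tree-capable client API;
 * none of these names exist in Mucos, they only illustrate the idea */

typedef struct mucos_port mucos_port;  /* opaque handle to a "server" */

/* attach to a parent node in the client/server tree */
mucos_port *mucos_connect(const char *server_name);

/* full duplex: a client can both read from and write to its server */
long mucos_read (mucos_port *p, float *buf, long frames);
long mucos_write(mucos_port *p, const float *buf, long frames);

/* a node can accept children of its own, so every client can in
 * turn be the "server" of other clients */
mucos_port *mucos_accept_client(mucos_port *self);

The point is only that read and write exist on the same handle, and
that a node can accept clients of its own.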

> > Another issue is whether to use a pipelined approach or not,
> > that means introducing additional latencies by adding buffers
> > (but not introducing additional CPU load, because there is no
> > memory ping-pong copying).
> > The pipelined approach has the advantage that you can parallelize (run on
> > multiple CPUs) sequential datapaths since the audio is a data stream.
> > (parallel datapaths can be parallelized anyway without pipelining)
> > It is very simple to implement in the case of one
> > single server and many clients at the same level.
>
> by pipelined do you mean - using a pipe?
> I have been playing with that idea for communication in the large.
> (Not plugin -> plugin, but engines to engines)

No, by the pipelined approach I don't mean using pipes (Unix FIFOs etc.),
but using one or more intermediate buffers in order to parallelize
operations. That means that during a simple client/server exchange the
client doesn't wait for the server to produce the current audio fragment,
but uses the previously buffered fragment (in the startup case the
intermediate buffer is zero-filled).
This increases latency by one or more fragments, and it also makes the
buffer interdependency in a tree-like structure quite complex.
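
A tiny self-contained C sketch of that double-buffering scheme (all
names and sizes are illustrative; the two calls inside the loop would
really run in parallel in separate processes, they are sequential here
only for clarity):

#include <stdio.h>
#include <string.h>

#define FRAGMENT_FRAMES 4         /* absurdly small, just for the demo */

static float fragment[2][FRAGMENT_FRAMES]; /* two intermediate buffers */

/* stand-in for the server producing audio */
static void server_render(float *out, int n)
{
    for (int i = 0; i < n; i++)
        out[i] = 1.0f;            /* pretend this is real audio */
}

/* stand-in for the client consuming audio */
static void client_process(const float *in, int n)
{
    printf("client got:");
    for (int i = 0; i < n; i++)
        printf(" %.1f", in[i]);
    printf("\n");
}

int main(void)
{
    int cur = 0;

    /* startup case: the intermediate buffers are zero-filled, so the
     * client's first cycle sees silence - the added fragment of latency */
    memset(fragment, 0, sizeof fragment);

    for (int cycle = 0; cycle < 3; cycle++) {
        client_process(fragment[cur ^ 1], FRAGMENT_FRAMES); /* old data */
        server_render(fragment[cur], FRAGMENT_FRAMES);      /* new data */
        cur ^= 1;                                           /* swap     */
    }
    return 0;
}

The first cycle prints zeros, every later cycle prints the fragment
rendered one cycle earlier: that is exactly the one fragment of latency
the pipelining trades for parallelism.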

But, as stated before, since our DAW environment is composed of many
parallel paths, even with my proposed (non-pipelined) approach there
are enough parallel paths to parallelize easily.

>
> > for example:
> >
> > client1 <---->-+
> >                +---<----> server
> > client2 <---->-+
> >
> > But IMHO this flat design is not flexible enough for us.
> >
> > (David wants to feed his softsynth output into Quasimodo and then
> > send the result to the mucos server, where at the same time an
> > external mp3 player is sending its output too.)
>
> Ahh, each client/plugin has only two pipes... That would be a problem...
> But if the engine assigns pipes?

Clients and server do not talk to each other through pipes but only
through shared memory. Shared memory = zero copy overhead, and with
several clients running you can save quite a bit of memory bandwidth.
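
For illustration, a minimal C sketch of the zero-copy idea using a
System V shared memory segment (the key value and fragment size are
assumptions; synchronization is left out):

#include <stdio.h>
#include <sys/ipc.h>
#include <sys/shm.h>

#define FRAGMENT_BYTES 4096
#define MUCOS_SHM_KEY  0x4d43   /* hypothetical well-known key */

int main(void)
{
    /* the server creates the segment, clients just attach to it */
    int id = shmget(MUCOS_SHM_KEY, FRAGMENT_BYTES, IPC_CREAT | 0600);
    if (id < 0) { perror("shmget"); return 1; }

    float *fragment = shmat(id, NULL, 0);
    if (fragment == (void *) -1) { perror("shmat"); return 1; }

    /* both sides now read/write the fragment in place: no copies,
     * no pipe bandwidth, only synchronization is needed on top */
    fragment[0] = 0.5f;

    shmdt(fragment);
    return 0;
}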

> > In the above example we have about 5-6 context switches per processing cycle:
> >
> > server -> client2 -> client1 -> client2 -> client3 -> etc. ....
>
> But shouldn't we be able to do a lot better than that?

You missed the point... keep reading below :-)

>
> Suppose you have a server/engine that loads plugins as shared libraries
> (DLLs) and makes a table of the script/drawn scheme/... of how the
> plugins are connected; then it can call each plugin JIT: no context
> switches! No need to do 'wait_for_...', no need to know that the memory
> is shared - it can be local, you only get the pointer.
>
> Or it could be a simple engine that only handles one plugin, which is
> loaded by specifying a command line option - then you get one process
> per plugin.
>
> Or it could be a multi-instance multithreaded engine, a mix between the
> two - you may start as many engines as you like: in your user process,
> in a daemon, as a kernel thread, as an RT thread.

The goal of the client/server implementation is to provide
inter-application communication, not inter-plugin communication.

Of course we can run an app which hosts 20 plugins in one single
thread (or, in order to take advantage of SMP, in 2-4 threads).

But my goal is to let separate audio apps communicate with each other
in real time, with as little latency as possible.

Assume you have a softsynth which doesn't give you the possibility
of running as a plugin of another app, and assume you want to record
the softsynth's output in your HD recording app.

With my proposed client/server model, the softsynth sees a virtual
audio device it can write its data to, and the HD recorder records
from this audio device with as low a latency as possible.
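
One way such a virtual device could be laid out in shared memory is as
a small ring of fragments; the struct and function names below are my
own illustration, and the volatile counters are only a placeholder for
real synchronization:

#define RING_FRAGMENTS  4
#define FRAGMENT_FRAMES 256

typedef struct {
    volatile unsigned write_pos;  /* advanced by the softsynth   */
    volatile unsigned read_pos;   /* advanced by the HD recorder */
    float frag[RING_FRAGMENTS][FRAGMENT_FRAMES];
} vdev;                           /* lives in a shared segment   */

/* softsynth side: returns 0 if the ring is full (recorder too slow) */
int vdev_write(vdev *d, const float *in)
{
    if (d->write_pos - d->read_pos >= RING_FRAGMENTS)
        return 0;                 /* overrun: caller must wait or drop */
    float *dst = d->frag[d->write_pos % RING_FRAGMENTS];
    for (int i = 0; i < FRAGMENT_FRAMES; i++)
        dst[i] = in[i];
    d->write_pos++;               /* publish the fragment */
    return 1;
}

/* recorder side: returns 0 if no new fragment is available yet */
int vdev_read(vdev *d, float *out)
{
    if (d->read_pos == d->write_pos)
        return 0;                 /* nothing buffered yet */
    const float *src = d->frag[d->read_pos % RING_FRAGMENTS];
    for (int i = 0; i < FRAGMENT_FRAMES; i++)
        out[i] = src[i];
    d->read_pos++;
    return 1;
}

A fully zero-copy variant would hand the synth a pointer into
d->frag[] to render into directly instead of copying, but the ring
protocol stays the same.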

>
> The problems I have with your code are:
> * that it requires one thread per plugin-instance 'while(1) {...}'

That is because you misinterpreted the code: the client/server does
not run every plugin in a separate thread.

No, just as Paul pointed out, it is a mix of both approaches:

Assume there are a few monolithic apps running: a softsynth, an HD
recorder, an FX rack. Each app runs its set of plugins in its own
thread, but the three apps communicate through the client/server API
in "real time".

This approach even lets you integrate old binary-only apps into Mucos
(through an LD_PRELOAD wrapper).
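
To sketch the LD_PRELOAD idea (my simplified assumption of how such a
wrapper could look, not actual Mucos code - a real wrapper must also
intercept ioctl(), close(), mmap() and friends):

#define _GNU_SOURCE
#include <dlfcn.h>
#include <fcntl.h>
#include <stdarg.h>
#include <string.h>
#include <sys/types.h>
#include <unistd.h>

static int dsp_fd = -1;          /* the fd the app thinks is /dev/dsp */

int open(const char *path, int flags, ...)
{
    static int (*real_open)(const char *, int, ...);
    int mode = 0;

    if (!real_open)
        real_open = (int (*)(const char *, int, ...))
                    dlsym(RTLD_NEXT, "open");

    if (flags & O_CREAT) {       /* mode is only passed with O_CREAT */
        va_list ap;
        va_start(ap, flags);
        mode = va_arg(ap, int);
        va_end(ap);
    }

    if (strcmp(path, "/dev/dsp") == 0) {
        /* hand the app a harmless fd and remember it */
        dsp_fd = real_open("/dev/null", O_WRONLY);
        return dsp_fd;
    }
    return real_open(path, flags, mode);
}

ssize_t write(int fd, const void *buf, size_t count)
{
    static ssize_t (*real_write)(int, const void *, size_t);

    if (!real_write)
        real_write = (ssize_t (*)(int, const void *, size_t))
                     dlsym(RTLD_NEXT, "write");

    if (fd == dsp_fd) {
        /* a real wrapper would push buf into the server's shared
         * memory here; this stub just claims the write succeeded */
        (void)buf;
        return (ssize_t)count;
    }
    return real_write(fd, buf, count);
}

Compiled with gcc -shared -fPIC wrapper.c -o wrapper.so -ldl and run
via LD_PRELOAD=./wrapper.so ./oldapp, the old binary then writes to
the wrapper instead of the kernel's OSS driver.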

Of course the recommendation is to use as few threads as possible.
That means that if an app can act as a plugin, the "master" should run
the "plugin" in the master's own thread in order to minimize scheduling
overhead. But sometimes this isn't possible or desired, and that is
where the client/server approach comes in handy.

I hope that the concepts are a bit clearer now.
(Sorry for my superficial explanations :-) )

Does anyone disagree with my ideas?

> (Note that Ingo has recently patched a latency bug in this area...)

Do you have the BH (bottom half) patch? Maybe this will cut down
latency even further, because it might eliminate the latency peaks of
fragment-size length.

Benno.


