Re: [linux-audio-dev] laaga implementation news bite


Subject: Re: [linux-audio-dev] laaga implementation news bite
From: Richard Guenther (rguenth_AT_tat.physik.uni-tuebingen.de)
Date: Mon Jun 11 2001 - 15:19:17 EEST


On Mon, 11 Jun 2001, Paul Davis wrote:

> >I'd like to see how you are using kill (i.e. how do you do callback
> >operation between a manager and a process at all!??).
>
> client (within a library call):
>
> sigwait (&signals, &sig);
> switch (shm_control_block->event) {
> case ProcessEvent:
> shm_control_block->process (shm_control_block->nframes);
> break;
> ...
> }
> kill (shm_control_block->next_pid, sig);
>
> engine:
> kill (client->pid, relevant_signal);

OK, that's code I understand. How is this different from the GLAME
approach, which does the following (same scope as your example):

  client (within a library, wrapping up a callback like operation)
    read(inputportfd, &control, 4);
    switch (control->event) {
         case ProcessEvent:
             registered_process_func (control->buffer ...);
             break;
         ...
    }
    write(outputportfd, &control, 4);

  (obviously no engine is needed - operation is asynchronous, but if you
   want synchronous operation (i.e. no pipelining) you need an extra
   client controlling all buffer-generating processes (how do you do
   synchronous operation with multiple buffer generators?))
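To make the pipe-based client loop above concrete, here is a minimal, self-contained sketch. It is not GLAME's actual API: `struct control`, `client_step` and `process_func` are hypothetical names, and instead of passing a 4-byte pointer token through the pipe (as the excerpt does) this sketch passes the whole control record, so it works without shared memory:

```c
#include <unistd.h>

/* Hypothetical control record travelling through the pipes.
 * (GLAME itself passes a small token; the real buffer lives elsewhere.) */
enum event_type { ProcessEvent, QuitEvent };

struct control {
    enum event_type event;
    int nframes;
};

static int frames_done;  /* visible effect of the registered callback */

static void process_func(int nframes)
{
    frames_done += nframes;
}

/* One iteration of the client loop: read a control record from the
 * input pipe, dispatch on its event, then forward it downstream.
 * Returns 1 to keep running, 0 on QuitEvent or a short read. */
int client_step(int infd, int outfd)
{
    struct control ctl;

    if (read(infd, &ctl, sizeof ctl) != (ssize_t)sizeof ctl)
        return 0;

    switch (ctl.event) {
    case ProcessEvent:
        process_func(ctl.nframes);
        break;
    case QuitEvent:
        return 0;
    }

    /* pass the token to the next client in the chain */
    write(outfd, &ctl, sizeof ctl);
    return 1;
}
```

Wiring several such clients together is then just connecting each one's `outfd` to the next one's `infd`; no client needs to know anything beyond its two fds.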
 
> >Richard, who still thinks your approach is flawed.
>
> do you have an alternative? if you drive multiprocess clients
> with pipes, then reconfiguring the graph is a lot of work since
> you have to alter every client's idea of what the "next" client
> to execute is. that's my reason for using kill(2).

It doesn't matter whether you use kill(2) or blocking read/write - but
using read/write on fds allows you to use select/poll on those fds
together with other fds (like an open /dev/dsp), which kill(2) does not
allow.
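That point is easy to demonstrate: with fds a client can block on its control pipe and its audio device in a single poll(2) call, which the signal/kill scheme cannot express. A minimal sketch (the function name is made up; in a test, ordinary pipes stand in for the control pipe and the audio fd):

```c
#include <poll.h>

/* Wait for activity on a control-pipe fd and an audio fd at once.
 * Returns a bitmask: bit 0 set if the control fd is readable,
 * bit 1 set if the audio fd is readable; 0 on timeout or error. */
int wait_control_or_audio(int controlfd, int audiofd, int timeout_ms)
{
    struct pollfd fds[2] = {
        { .fd = controlfd, .events = POLLIN },
        { .fd = audiofd,   .events = POLLIN },
    };

    if (poll(fds, 2, timeout_ms) <= 0)
        return 0;

    return (fds[0].revents & POLLIN ? 1 : 0) |
           (fds[1].revents & POLLIN ? 2 : 0);
}
```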

Also, neither synchronous nor asynchronous operation requires any work
to reconfigure the graph - it happens automatically. To show how the
manager client would ensure synchronous operation, look at the
following graph:

     buffer gen 1 \
     buffer gen 2 -- manager -- worker -- worker -- consumer
         ...      /     \----------- < ----------------/

so the manager "passes" one buffer at a time (to every output it has)
and passes more buffers only after those buffers have re-arrived. [Note
that usually, if you are e.g. audio-in driven, this happens
automatically, since the downstream chains are required to take less
CPU time than the latency to the next audio-in buffer anyway (or you're
screwed) - this way GLAME is low-latency and synchronous, too, without
needing a manager node.] This approach also gives you synchronous
operation and automatic distribution across an SMP machine (which you
would otherwise have to code explicitly), and feedback is allowed quite
naturally (though you have to watch out for possible deadlocks - whose
detection is as easy (read: hard) as the detection of failing clients).
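The manager's token-passing cycle described above can be sketched in a few lines. This is a hypothetical illustration, not GLAME code: `manager_cycle` is an invented name, a `long` stands in for a real buffer handle, and all returning buffers are assumed to arrive on one return fd:

```c
#include <unistd.h>

/* One synchronous cycle of a manager node: hand exactly one buffer
 * token to every output, then block until all of them have travelled
 * the graph and re-arrived on the return fd.  Because no new tokens
 * are issued while any are still in flight, the graph runs without
 * pipelining.  Returns the number of tokens collected, or -1 on error. */
int manager_cycle(int nouts, const int outfds[], int retfd)
{
    long token = 1;  /* stands in for a real buffer handle */

    for (int i = 0; i < nouts; i++)
        if (write(outfds[i], &token, sizeof token) != (ssize_t)sizeof token)
            return -1;

    /* block until every buffer has come back around the feedback edge */
    for (int got = 0; got < nouts; got++) {
        long back;
        if (read(retfd, &back, sizeof back) != (ssize_t)sizeof back)
            return -1;
    }
    return nouts;
}
```

Calling `manager_cycle` in a loop yields exactly the one-buffer-in-flight behaviour the diagram shows; dropping the wait loop would give the pipelined, asynchronous mode instead.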

> but its also not particularly relevant to LAAGA itself, since its
> just an implementation detail. if its the wrong approach, i'd
> welcome a better method.

I don't know if it's "better", but it's surely more flexible and I
don't see any downside w.r.t. latency.

Richard.

--
Richard Guenther <richard.guenther_AT_uni-tuebingen.de>
WWW: http://www.tat.physik.uni-tuebingen.de/~rguenth/
The GLAME Project: http://www.glame.de/



This archive was generated by hypermail 2b28 : Mon Jun 11 2001 - 16:51:01 EEST