Re: [linux-audio-dev] LAAGA API Proposal


Subject: Re: [linux-audio-dev] LAAGA API Proposal
From: Richard Guenther (rguenth_AT_tat.physik.uni-tuebingen.de)
Date: Thu Jun 14 2001 - 13:11:24 EEST


On Wed, 13 Jun 2001, Paul Davis wrote:

> >Ok, we seem to know what both approaches do and what the advantages and
> >disadvantages are (but we don't agree on them). So I think we either
> >need input from some other guy or we can stop the discussion.
>
> More or less true, I agree. But not quite :)

So I'll just continue answering your questions.

> >Perhaps I'll have time to do an implementation of my API, probably calling
> >it something different from LAAGA - ASCA (Application Stream Communication
> >API). Competition is always good.
>
> I don't think it's good here. In fact, I think it's terrible. We want as
> many applications as possible that can benefit from sample-synced
> interaction to do so. Providing 2 or more methods of doing this is not
> going to help that happen; in fact it will make it less likely to happen.

Too bad - but I don't really care. You can't force people to use your API
(maybe they don't like it), so I'm presenting an alternative. In the end
the better one will win - and again I don't care whether it's mine or yours.

> >> How does a 3rd party behave (both visibly and code-wise) when a user
> >> tries to make a second connection using an object that is already
> >> connected and doesn't do mixing?
> >
> >The connection is refused. So the user has to insert a mixing plugin
> >in between (if the desired effect was mixing - which is intuitive, but
> >obviously not the only possible effect).
>
> Actually, it's not intuitive at all. When I route signals from a mixer
> strip on either a digital or analog mixer onto a bus, there is no
> additional "node" between the strip (which has gain control etc.) and
> the bus.

Busses don't scale; you use crossbars. Think of a mix node as a
crossbar.

> Your model requires that either the bus be able to handle mixing
> (which is OK), or that the person uses an additional mixing node
> (which I don't think is OK).

Well, the discussion of having implicit mixing or not seems to be just
political - I keep saying: implicit mixing is not necessary, and without
it the engine is simpler. Don't we also want automatic panning by some
other weird builtin param (like the gain control you want)? Or a builtin
noise gate? Or ...

From a UI point of view I agree with not exposing the extra mix node
to the user. But at the scope of the backend there _is_ such an extra
node (whether it's builtin or not).

> >Ah, ok - this simplifies (a little bit) buffer handling and processing,
> >but makes feedback with fifo size != fragment size impossible (you said
> >that already, if I understood right).
>
> Correct. Thats the consequence of any sample-synchronous system with a
> block size of more than 1 frame.

Ok. So you don't support feedback.

> If LAAGA is not sample-synchronous, then it doesn't really accomplish
> the goal it sets out for.

You seem to imply that async == not sample-synchronous??? Sample-
synchronous for me means that if I process a stereo stream, both streams
are "aligned" with sample precision. I don't get your point here - again
political, "I-never-did-it-this-way-so-it's-bad".

> >Well, queueBuffer() does what its name says - it just queues the buffer
> >onto the recipient's buffer list. Any recipient blocking on its (empty)
> >buffer list gets woken up (easy to implement with unix pipes or the
> >like: writes for queue(), blocking reads for get() (or nonblocking
> >reads for poll/select-like operation)).
>
> except that you have to do one poll/select or read for every get. this
> is not so good. if i wake up from poll because one buffer has been
> queued, i don't know whether the other buffers are ready yet. so i have
> to go back into poll again until they are all ready. this won't scale,
                                                          ^^^^^^^^^^^^^^^
I don't buy that.

> i think, which is why i prefer a model in which we don't wake up the
> client until we know (because of graph execution order) that all of
> its buffers are ready.

I can't do this, as independent (async) processing of incoming buffers
(on different ports) has to be supported.

> >> this is what bothers me about your "async" model. it sounds as if other
> >> components could potentially use queueBuffer() to signal a component
> >> at any time, making it very hard to write the code so that it works in
> >> a low latency situation. in my system, whenever a component is
> >> running, it already knows that all of its buffers are ready and that
> >> it will not be interrupted or resignalled or whatever during its
> >> execution.
> >
> >Humm - so you can't handle "independent" graphs without being forced
> >to sync them? I.e. no SMP benefit at all?
>
> If there are subgraphs within the graph that are not connected to the
> rest of the graph except for an input, then yes, we could get SMP
> benefit. This just requires a cleverer algorithm (one that GLAME and
> many other systems have implemented already) to partition the graph.

GLAME doesn't partition. GLAME is async; the kernel schedules.

> However, i don't imagine that in many cases such partitioning will be
> possible because of the kinds of connections a user will create. If
> you're mixing incoming audio data with a stream from a softsynth,
> processing the result with an fx unit, sending the result to an HDR
> and also back to an interface for monitoring, there is no way to
> partition the system at all except on a per-channel basis, and that
> will probably be impossible since the units are not mono.

See above - you don't support independent processing of connections
from within one app. Also imagine the following:

  sample app 1 ----- stuff --\
  sample app 2 ----- stuff ----- mixing, audio out
  hd input --------- stuff --/

all three "stuff" stages are independent and can be processed in parallel.

> >> yes, i know about that. i've talked to the pthread author about this,
> >> and he considers that version to be broken. kernel 2.2 and above
> >> support the "RT" signal set, so I don't consider this much of a
> >> problem.
> >
> >Err - so you're Linux-only and don't support BSD or other weird
> >architectures.
>
> No. I just know that the reason linuxthreads with SIGUSR1/SIGUSR2 is
> "broken" is that it violates POSIX by using these two signals to provide
> POSIX functionality. any pthreads implementation that does this
> violates the POSIX requirement that these signals be available for
> applications to use.

May I suggest using an RT signal instead? Also, on BSD the threads
package does not cope with using SysV IPC in parallel - we got bitten
by this with GLAME and therefore require an installed linuxthreads
package on BSD.

> >This is another reason I don't like signals - signals
> >and threads don't mix portably; with read()/write()/select() you have
> >maximum portability (even NT might support this style of operation).
> >But of course using signals is implementation dependent and no
> >requirement of the API.
>
> Yes, you're right, they are not very portable. But read/write can't work
> without horrendous complexity unless you have fd passing, which is
> also not portable. But you're also right that it's not part of the API,
> so we don't have to worry too much.

You don't need fd passing if you don't like it - just use unix sockets
or named pipes. But we don't need to worry as long as we don't want to
expose fds for use with poll/select (which is unnecessary with sync
operation anyway).

> >> you don't need async for this to work. if there is a feedback loop,
> >> there is no correct order for the graph execution, so you merely
> >> need a guarantee of a particular order for as long as the loop exists.
> >
> >There is a correct order - for the echo example, the first plugin
> >to execute is the delay plugin, which needs to put out a set of zeros
> >to be able to start processing in the other nodes. So for sync
> >operation you somehow magically need to detect that the delay can
> >produce output without having input first.
>
> once again, this is a free running system. a unit with an output port
> generates output at all times, whether it has connections or
> not.

Sure.

> the delay line produces silence before there is anything
> connected to it. ergo, there is no correct order. if you run the input

So you have implicit (unsynced, for async operation) zero input all
the time? I don't like this.

> to the delay first, you get one effect. if you run the delay first,
> you get another. all that matters is that you never change the order.
>
> --p

Richard.

--
Richard Guenther <richard.guenther_AT_uni-tuebingen.de>
WWW: http://www.tat.physik.uni-tuebingen.de/~rguenth/
The GLAME Project: http://www.glame.de/



This archive was generated by hypermail 2b28 : Thu Jun 14 2001 - 15:15:27 EEST