Re: [linux-audio-dev] LAAGA API Proposal


Subject: Re: [linux-audio-dev] LAAGA API Proposal
From: Paul Davis (pbd@Op.Net)
Date: Thu Jun 14 2001 - 16:02:01 EEST


>Busses don't scale. You use crossbars. Think of a mix node as a
>crossbar.

How does a bus not scale, and what's a crossbar?

>From a UI point of view I agree with not exposing the extra mix node
>to the user. But at the scope of the backend there _is_ such an extra
>node (whether it's built in or not).

Ah. Do you agree with it not being present in the client-side API? Or
is "not exposing it to the user" just a matter of it not showing up in
the UI? The only thing I care about here is whether or not *clients*
have to deal with mixing themselves ...
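To make concrete what I want to avoid, here is a rough sketch (all
names hypothetical, not any proposed API) of what every client would
have to do on every cycle if mixing were left to the clients:

     /* hypothetical: each client sums its own input connections
        because the graph does not mix for it */
     #include <stddef.h>

     typedef float sample_t;

     void mix_own_inputs(sample_t **conns, int nconns,
                         sample_t *out, size_t nframes)
     {
             for (size_t i = 0; i < nframes; i++) {
                     sample_t acc = 0.0f;
                     for (int c = 0; c < nconns; c++)
                             acc += conns[c][i];
                     out[i] = acc;
             }
     }

I'd much rather this happened once, behind the port abstraction,
than be duplicated in every client.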

>> If LAAGA is not sample-synchronous, then it doesn't really accomplish
>> the goal it sets out to achieve.
>
>You seem to imply that async == not sample-synchronous??? Sample-
>synchronous for me means that if I process a stereo stream, both
>streams are "aligned" with sample precision. I don't get your point
>here - again political, "I-never-did-it-this-way-so-it's-bad".

I resent that comment. None of my discussion here is "political", and
I don't have a "Not Invented Here" mentality. I have strong
convictions about the presence of real problems with GLAME's approach
when used in a low-latency/realtime situation. I view this discussion
as a way of trying to establish whether I just don't understand
GLAME's model, whether there are problems in my proposed model, how
important any particular set of problems is, and so forth. I have
said several times that I find GLAME's model quite elegant, and I mean
this as a compliment. However, I continue to see difficulties with the
way it works when it's used in a situation where the "audioio"
component (as you termed it) is faced with the task:

     "there are 64 frames of data and space available on this
      h/w interface. please grab the data, drive the graph, fill the
      space, and get back to me. please do all this within 666usecs"

GLAME (like GStreamer) just doesn't seem to come from this kind of
model, and at the moment, I still have a hard time seeing how it
can ensure that it can satisfy such a task in a reasonably
deterministic fashion. I'm not trying to insist that it can't, just
trying to understand how a system fundamentally designed to support
async processing of audio can be guaranteed to work when "forced" to
operate synchronously.
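To be concrete about the contract I mean, here is a minimal sketch of
a client in the synchronous, pull-driven model. All the names here are
hypothetical; this is not LAAGA, GLAME, or any existing API:

     #include <stddef.h>

     typedef float sample_t;
     typedef struct port port_t;

     /* provided by the engine: returns the buffer backing a port for
        the current cycle, and is guaranteed never to block */
     extern sample_t *engine_port_buffer(port_t *, size_t nframes);

     extern port_t *in_port, *out_port;

     /* the engine calls this once per hardware period; every node in
        the graph must finish inside one period (666 usec for 64
        frames at 96 kHz) */
     int process(size_t nframes)
     {
             sample_t *in  = engine_port_buffer(in_port, nframes);
             sample_t *out = engine_port_buffer(out_port, nframes);

             for (size_t i = 0; i < nframes; i++)
                     out[i] = 0.5f * in[i];  /* trivial gain stage */

             return 0;  /* no blocking, no malloc, no locks */
     }

The point is that the deadline belongs to the hardware, and every
node's work has to fit inside it on every single cycle.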

>See above - you don't support independent processing of connections
>from within one app.

I don't understand what you mean by this. Can you explain?

> Also imagine the following:
>
> sample app 1 ----- stuff --\
> sample app 2 ----- stuff ----- mixing, audio out
> hd input --------- stuff --/
>
>all three "stuff" are independent and can be processed in parallel.

I agree with you that being able to do this is a desirable goal, but I
consider this to be a somewhat unsolved problem when applied to the
low-latency realm. The costs of synchronizing multiple threads as they
seek to interact with common data structures will sometimes exceed the
execution time of the operations they need to carry out. When you work
with 64 frames at a time (and perhaps fewer in the future, if a new bus
architecture lets us), divide-and-conquer is not always a good strategy.
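To put rough numbers on it (illustrative, not measured):

     cycle budget : 64 frames / 48000 Hz            ~= 1333 usec
     two handoffs : wake worker + rejoin, say 2 x 50 = 100 usec
     overhead     : 100 / 1333                       ~= 7.5%

so unless each "stuff" branch carries substantially more DSP work than
the handoff costs, running the branches inline in one thread wins.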

>> Once again, this is a free-running system. A unit with an output port
>> generates output at all times, whether it has connections or
>> not.
>
>Sure.
>
>> The delay line produces silence before there is anything
>> connected to it. Ergo, there is no correct order. If you run the input
>
>So you have implicit (unsynced, for async operation) zero input all
>the time? I don't like this.

Why not? It's precisely what is happening in the physical world when I
have a bunch of gear connected up ... (well, I wish it was; that
analog gear isn't close enough to zero for my taste).

If there's no input, and the components are driven by input, how can
the graph run? If the audioio component in your model is running, its
capture side is feeding buffers to its connections. Every part of the
graph (except leaf nodes) has to execute in order for the playback
side of the audioio component to execute correctly and on time, right?

Therefore, no part of the graph can be skipped just because one of its
ports is not connected to anything.

My understanding of your model is that if there is no connection, then
presumably getBuffer() will block, and that's the end of the RT
characteristics of the graph. If getBuffer() doesn't block, then you're
back to my model, in which every component can only be executed when
all its input is ready (otherwise you will get audio glitches caused
by missing certain buffers on each pass through the graph).
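Here is the distinction in code. getBuffer() is your name for it; the
rest is my hypothetical sketch of the non-blocking variant:

     typedef float sample_t;

     #define NFRAMES 64

     static sample_t silence[NFRAMES];     /* shared, always zero */

     typedef struct {
             int       connected;
             sample_t *data;               /* filled by upstream node */
     } port_t;

     /* never blocks: an unconnected input reads as silence, so the
        graph keeps running deterministically */
     sample_t *getBuffer(port_t *p)
     {
             return p->connected ? p->data : silence;
     }

In the blocking variant, by contrast, the caller's deadline is at the
mercy of whoever was supposed to fill p->data.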

Did I miss something here?

--respectfully,
--p


