Re: [linux-audio-dev] LAAGA API Proposal


Subject: Re: [linux-audio-dev] LAAGA API Proposal
From: Paul Davis (pbd_AT_Op.Net)
Date: Tue Jun 12 2001 - 20:14:02 EEST


>Before answering the following questions I'd like to describe my view
>of the typical usage pattern from the UI point of view. Let's suppose

 [ ... description elided ... ]

Yes, that matches my notion of things closely enough that we are
clearly aiming at the same general idea.

>I don't understand this sentence - so first, for the bracketed stuff:
>We don't have "automatic" mixing (I think the concept of automatic
>mangling in any form is broken - the only useful thing is buffer
>replication (by reference or by copying), but without change - this
>allows us to abstract the datatype from the LAAGA backend (remember
>the guys speaking about midi...)).

Automatic mixing means that clients don't have to deal with the
concepts of "connections" or "buffers" or "mixing". If the user
connects 6 outputs to a single input, the client doesn't have to
know that it has to loop over them. In addition, we can support
ideas like universal per-connection gain controls without the clients
knowing about it.
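
Just to make that concrete, here is a rough sketch (in C, with
invented names) of what the engine would do on the client's behalf
before the client's process() callback ever runs; the client only
ever sees a single buffer:

    /* Hypothetical engine-side code: sum every output connected to one
     * of a client's input ports into a single buffer, applying an
     * optional per-connection gain.  The client never sees the
     * individual connections. */
    void
    engine_mix_input (float *dst, float *const *srcs, const float *gains,
                      unsigned long nsrcs, unsigned long nframes)
    {
            unsigned long n, s;

            for (n = 0; n < nframes; n++) {
                    float sum = 0.0f;
                    for (s = 0; s < nsrcs; s++)
                            sum += gains[s] * srcs[s][n];
                    dst[n] = sum;
            }
    }

The per-connection gain factor is where the universal connection gain
controls would slot in.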

>loop-until-no-more-buffers... well - you usually (for mono operation,
>i.e. independent processing of connections) would do this inside
>a thread (one per connection). For processing multiple connections

Yes, but in your model, this thread is also executing the core
processing code for the client. In GLAME, you fetch the buffer(s), do
any mixing necessary, work on the buffer, and then wait for the next
buffer. You're doing this work in the client, right at the point where
the client would like (I think) to be operating like a LADSPA plugin.

Still, I understand that you could wrap this. It's really just a matter
of which side of the IPC barrier you do this on, which I think is your
point, right?
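
In case it helps to pin down what "wrapping" means here, a sketch of
the client side (all the helper names are invented): the library owns
the fetch/mix/wait loop, and the client only supplies a LADSPA-style
run function over plain memory. Which side of the IPC barrier that
loop lives on is exactly the open question:

    /* Hypothetical wrapper around a GLAME-style pull loop. */
    typedef void (*run_callback) (float *in, float *out,
                                  unsigned long nframes);

    /* Invented helpers: fetch (and mix) the next buffers, returning 0
     * when there are no more; hand them back when done. */
    extern unsigned long wait_for_buffers (float **in, float **out);
    extern void release_buffers (float *in, float *out);

    void
    client_loop (run_callback run)
    {
            float *in, *out;
            unsigned long nframes;

            while ((nframes = wait_for_buffers (&in, &out)) > 0) {
                    run (in, out, nframes);    /* the "plugin" part */
                    release_buffers (in, out);
            }
    }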

>at once (e.g. mixing) you need to be able to handle different-sized
>buffers from connections.

Different-sized buffers? How so?

>Now a general comment: You absolutely can wrap an API like yours
>(with callbacks) around the above concept - but not the other way
>around.

Let me think about this.

> And for the simple cases (with your API, will an app with two
>input ports (stereo) receive buffers on those in sync? I.e.
>inside one callback? I don't see how you handle this at all)

Of course they are in sync. The engine has a sorted execution list; a
client expecting to receive data is only executed after its data
sources have already executed. It gets called once via its process()
callback, and can get the memory areas associated with both buffers at
once to use within that callback.
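
Schematically (types invented), one engine cycle looks like this; the
ordering is what guarantees that a stereo client finds both of its
input buffers ready inside its single process() call:

    /* A sketch of one cycle under the sorted-execution-list model. */
    typedef struct client client_t;
    struct client {
            client_t *next;  /* list is already topologically sorted */
            void (*process) (client_t *self, unsigned long nframes);
    };

    void
    engine_run_cycle (client_t *sorted_list, unsigned long nframes)
    {
            client_t *c;

            /* each client runs exactly once, after all of its sources */
            for (c = sorted_list; c != NULL; c = c->next)
                    c->process (c, nframes);
    }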

>Have I mentioned that in my above example we don't have an engine at all?
>(you could call the audioio app the engine, though)

Ok, so in your model, there is no single central point where the
"signal" to initiate the processing the graph originates. Instead,
that "signal" could (theoretically) originate from anywhere. In the
real world, it will occur once some component has a buffer ready to be
delivered, such as the audioio app, which will drive all those
connected to it. These in turn will drive their buffers through to the
final destinations.

Did I get this right?

I can see some potential problems with this approach, but first I want
to make sure I understand it correctly.
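
Here is the model as I currently picture it, as a sketch (all names
invented); if I have it wrong, this should at least make it easy to
point at where:

    /* Engine-less, data-driven graph: a node fires when a buffer
     * arrives, and delivering its output is what drives the rest of
     * the graph.  The audioio app is just the node that fires first. */
    #define MAX_FRAMES 4096

    typedef struct node node_t;
    struct node {
            node_t **outputs;         /* downstream nodes */
            unsigned int noutputs;
            void (*work) (node_t *self, const float *in, float *out,
                          unsigned long nframes);
    };

    void
    deliver (node_t *n, const float *in, unsigned long nframes)
    {
            float out[MAX_FRAMES];    /* assumes nframes <= MAX_FRAMES */
            unsigned int i;

            n->work (n, in, out, nframes); /* produce this node's output */
            for (i = 0; i < n->noutputs; i++)
                    /* in GLAME terms the buffer would be replicated
                     * here, by reference or by copy */
                    deliver (n->outputs[i], out, nframes);
    }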

>> my preference for a model where once you get the address of the single
>> memory region associated with the port, it's just a chunk of memory as
>> it would be in a very simple plugin system (e.g. LADSPA). that is, it's
>
>Hey, wait - we don't want to do LADSPA2, do we? Interprocess communication
>is nothing like connecting LADSPA plugins (though it's possible, but you
>don't want that).

Well, actually, I do want that :) I just want to support typed ports
and out-of-process clients as well.
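
For concreteness, registration in what I have in mind might look
something like this (every name here is invented for illustration);
usage is much like instantiating a LADSPA plugin, except that the
"plugin" lives in its own process and the ports carry a type:

    typedef enum { PORT_AUDIO, PORT_MIDI } port_type_t;

    /* Hypothetical API: */
    extern void *client_new (const char *name);  /* attach to the engine */
    extern void *port_register (void *client, const char *name,
                                port_type_t type, int is_input);
    extern void  client_set_process (void *client,
                                     void (*process) (unsigned long nframes,
                                                      void *arg),
                                     void *arg);

    /* e.g.:
     *   void *c    = client_new ("mixer");
     *   void *in_l = port_register (c, "in_l", PORT_AUDIO, 1);
     *   void *in_r = port_register (c, "in_r", PORT_AUDIO, 1);
     *   client_set_process (c, my_process, NULL);
     */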

>I don't think explicit buffer handling is complex at all. It just makes
>things like echo/delay/feedback possible without handcrafting
>ringbuffers (and as such avoids unnecessary copies).

I may be dumb, but my understanding is that:

   1) you can't (correctly) implement any feedback-requiring DSP with a
      buffer size larger than 1 frame in *any* system (spelled out
      below), and
   2) the model you've suggested doesn't seem to me to solve this any
      better than the one I have offered.
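
To spell out (1): a one-pole feedback loop is supposed to compute

    y[n] = x[n] + g * y[n-1]

but if the feedback path runs over a connection with block size B, the
loop can only see output that is at least one whole block old, so what
actually gets computed is

    y[n] = x[n] + g * y[n-B]

and these coincide only for B == 1, no matter how the buffers are
handed around - explicit handling included.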

> If the graph
>is asynchronously driven - but by audio I/O - then async and sync
>operation are the _same_

OK, point taken.

--p


