Re: [alsa-devel] Re: [linux-audio-dev] laaga, round 2


Subject: Re: [alsa-devel] Re: [linux-audio-dev] laaga, round 2
From: Karl MacMillan (karlmac_AT_peabody.jhu.edu)
Date: Thu May 10 2001 - 09:02:29 EEST


On Thu, 10 May 2001, Paul Davis wrote:

>
> A more useful example is application 1 writes to bus channel 1 and
> application 2 reads from bus channel 1. We know that app 1 must run
> before app 2. if app 1 also reads from bus channel 2 and app 2 writes
> there, then we have a feedback loop at the application level, and its
> not solvable in a general way: any execution order will have
> problems.
>

Ahhh . . . . I see the whole problem. Even though I knew that you were
using a physical mixer as a model, it didn't occur to me that the channels
would really be this limited in functionality. This is a good example of
why an arbitrary signal-flow graph is a good thing - in your model it is
going to be a pain to connect a lot of apps in series (just as it is
with a traditional mixer). And adding this to the model does not mean
that you have to give up copying straight to DMA buffers for the simple
cases (see below).
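Your bus-ordering example above is really a graph-scheduling problem. As a
sketch (all names here are mine, not part of any proposed API): apps are
nodes, "app A writes a bus that app B reads" is an edge A -> B, and Kahn's
topological sort either produces a valid execution order or reports the
feedback case you describe, where no order works:

```c
#include <string.h>

#define MAX_APPS 32

static int n_apps;
static int edge[MAX_APPS][MAX_APPS];   /* edge[a][b]: a must run before b */

/* Returns the number of apps placed in order[], or -1 if the graph has a
 * cycle (an application-level feedback loop). */
int schedule(int order[])
{
    int indeg[MAX_APPS] = {0};
    int placed = 0;

    for (int a = 0; a < n_apps; a++)
        for (int b = 0; b < n_apps; b++)
            if (edge[a][b])
                indeg[b]++;

    int queue[MAX_APPS], head = 0, tail = 0;
    for (int a = 0; a < n_apps; a++)
        if (indeg[a] == 0)
            queue[tail++] = a;

    while (head < tail) {
        int a = queue[head++];
        order[placed++] = a;
        for (int b = 0; b < n_apps; b++)
            if (edge[a][b] && --indeg[b] == 0)
                queue[tail++] = b;
    }
    return placed == n_apps ? placed : -1;   /* -1: feedback loop */
}
```

With only "app 1 feeds app 2" this yields the obvious order; add the second
edge back from app 2 to app 1 and it correctly reports that no order exists.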

> >The point of this is that there is no reason to present data to the
> >plugins that they cannot use - if there is no operational difference
> >between channels and buses the plugins shouldn't care. The result of
> >removing it is that servers are given more flexibility for their
> >implementation.
>
> There *is* a functional difference between types of channels, just no
> difference in the API to use them. Just as in an engine that
> offered network channels (which AES may well do soon): these are
> functionally different from channels associated with an audio
> interface, but you use them in exactly the same way.
>

What a lot of terms we are using without having clear definitions for
them! Let me try again. If there is no difference in how plugins use the
channels (that is, if the API is the same regardless of the underlying
channel type), why should the plugin be given the information at all? You
are already coming up with additional channel types (network), and I can
think of a few more (disk, proprietary FireWire protocol, video stream).
Allowing plugins to deal with these issues adds a lot of complexity. Why
not let the server take care of all of it?
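To make that concrete, here is a sketch of what I mean (the function names
{read_from,write_to}_channel come from this thread; the enum, struct layout,
and signatures are my invention, not a real API). The plugin holds an opaque
handle and never learns whether the data comes from hardware, a bus, the
network, or disk - the server dispatches internally:

```c
#include <stddef.h>

typedef enum { CHAN_HW, CHAN_BUS, CHAN_NET, CHAN_DISK } channel_type;

typedef struct channel {
    channel_type type;        /* known only to the server */
    float       *server_buf;  /* server-owned storage */
} channel;

/* One entry point regardless of channel type.  A real server would pull
 * from DMA, a socket, or disk here depending on c->type; the
 * plugin-visible signature is identical in every case. */
void read_from_channel(channel *c, float *dst, size_t nframes)
{
    for (size_t i = 0; i < nframes; i++)
        dst[i] = c->server_buf[i];
}

void write_to_channel(channel *c, const float *src, size_t nframes)
{
    for (size_t i = 0; i < nframes; i++)
        c->server_buf[i] = src[i];
}
```

Adding a new channel type then touches only the server's dispatch, never the
plugins.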

> >Well, this is the case with the run method but not the run_adding method.
> >I was forgetting that the default is run instead of run_adding. As far as
> >the buffering goes it is really just a simple directed graph sorting
> >problem - when there is no need for the extra buffering it could just go
> >away.
>
> I'm not talking about that kind of buffering. In the LAAGA model, its
> possible (given a server implementation) to have data moving from the
> plugin's local buffers directly into the h/w DMA buffer. But you
> cannot give the plugins access to the DMA buffer of the hardware to write
> using store instructions. They don't know anything about the physical
> format of that memory in terms of sample format, interleaving
> etc. This is completely hidden inside
> {read_from,write_to}_channel(). So you have to inevitably add at least
> one extra layer of buffering quite apart from the "crude" solution to
> run/run_adding that would add another.
>

This is not really true - it is simply a matter of where the buffer lives.
In your model the buffer is created and owned by the application. With
LADSPA ports, or any other API where the buffers are handed to the plugin,
the server owns the buffer. Either way there is a buffer of floats
somewhere. All the LAAGA model does is provide a method to copy from this
buffer to, potentially, the DMA buffer. The same could be done with
LADSPA. The LADSPA model actually offers _more_ opportunities to reduce
the number of buffers in complex systems. Additionally, it is a less
complicated API for the plugins to use - they simply write to their ports
during the run/run_adding method and then the server calls the appropriate
read_from/write_to method.
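A sketch of that division of labour (all names are mine, and the hardware
format is assumed to be interleaved signed 16-bit for illustration): the
plugin scribbles floats into a server-owned port buffer during run(), and
afterwards the server alone performs the one conversion into the physical
format, so the plugin never needs to know about sample format or
interleaving:

```c
#include <stdint.h>
#include <stddef.h>

/* Plugin side: gain applied in place on a server-owned float buffer. */
void plugin_run(float *port, size_t nframes, float gain)
{
    for (size_t i = 0; i < nframes; i++)
        port[i] *= gain;
}

/* Server side: the one extra copy, float port -> interleaved DMA buffer. */
void server_write_to_hw(int16_t *dma, const float *port,
                        size_t nframes, int channel, int n_channels)
{
    for (size_t i = 0; i < nframes; i++) {
        float s = port[i];
        if (s > 1.0f)  s = 1.0f;      /* clip to the int16 range */
        if (s < -1.0f) s = -1.0f;
        dma[i * n_channels + channel] = (int16_t)(s * 32767.0f);
    }
}
```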

Here is a quick example:

2 apps that read and write 26 channels are connected in series

input -> app1 -> app2 -> output

In your model there would be 26 channels, 26 buses, and 52-104 float
buffers (26 per app for each of input and output - input and output could
possibly share a buffer). In the LADSPA case (with run_adding and in-place
buffers) there would be 26 channels and 26 float buffers. Not to mention
the reduced copying for the 'buses'! Also, the LADSPA case needs a fixed
number of buffers regardless of how many apps are connected in series,
while the LAAGA case adds at least 26-52 additional float buffers per
app. Even without in-place processing in the LADSPA model, it is possible
to use only 52 buffers regardless of the number of apps.
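The counts above, written out as arithmetic (26 channels, n apps in series;
the per-app LAAGA figures are my estimates from this thread, not measured
values):

```c
enum { CHANNELS = 26 };

/* LAAGA: per-app buffers, one or two sets of 26 depending on whether
 * input and output can share. */
int laaga_buffers_min(int n_apps) { return CHANNELS * n_apps; }
int laaga_buffers_max(int n_apps) { return 2 * CHANNELS * n_apps; }

/* LADSPA: fixed cost, independent of how many apps are chained. */
int ladspa_buffers_inplace(int n_apps)    { (void)n_apps; return CHANNELS; }
int ladspa_buffers_no_inplace(int n_apps) { (void)n_apps; return 2 * CHANNELS; }
```

For two apps that gives 52-104 buffers under LAAGA versus 26 (or 52 without
in-place processing) under LADSPA, and the gap widens linearly with each app
added to the chain.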

> >When there is buffering you are left with a) more memory needed -
> >this is really not that large of a requirement and b) potentially more
> >L1/L2 cache misses - and I have not seen hard numbers that this would be a
> >big performance hit.
>
> Indeed, it might be OK. I'll probably try it with AES soon. That's one
> of the benefits of having a real, working model for all this :)
>

And someone with the time to work on it :)

> [ LAAGA and LADSPA ]
>
> >I would argue that these distinctions are present more in the discussion
> >and documentation than the apis. They are both about presenting a _very_
> >limited api for the exchange of audio data.
>
> Totally agreed.
> >The only real differences, in
> >my mind, are 1) the run thing and 2) control data. And as far as control
> >data goes, it seems like including that would allow the sharing of midi
> >like data between apps . . .
>
> Except it would be MIDI data as floats :) yikes.
>

Yikes !?!?!? Floats are a great idea for a lot of the MIDI information.
Only in the world of MIDI does true 12-notes-per-octave equal temperament
exist! Pitch as integers is what's yikes. Here is what Perry Cook has to
say about this:

http://www.cs.princeton.edu/~prc/SKINI09.txt.html

Just round it at the end and send it out a midi port.

> >> However, my feelings on this are evolving. There might some good
> >> reasons to force the AES engine to use intermediate buffers, in which
> >> case {read_to,write_from}_channel would be redundant, and we could use
> >> the LADSPA audio port concept, and have the plugin just scribble on
> >> memory. OTOH, this brings up the run/run_adding dichotomy that I
> >> worked so hard to get away from ...
> >>
> >
> >Well, I really like not having to care about the run/run_adding thing.
> >In my own software I write C++ classes that operate on single samples
> >(tick() in STK terms) and then make run and run_adding methods from that.
> >With function inlining everything is fast. I also use iterators to
>
> this is never going to work in the multichannel case Karl. right now,
> in ardour, a 1.3ms period at 48kHz takes about 0.8msec to compute with
> 24 active channels of disk-based data and no plugins. I think I can
> reduce that (its without any optimization for a start), but a lot of
> that overhead comes from function call chains that will never be
> inlineable. To execute those same call chains one sample at a time
> across 24+ channels will be impossible on today's processors.
>

Depends on when you do the inlining :) There is no reason that you can't
create the run/run_adding methods from the tick method at compile time and
connect the blocks (i.e. the run methods) at run time. I am well aware of
the problems of running a lot of code one sample at a time (hence my
solution to avoid it), but I am still amazed at what STK can produce doing
just that.
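Here is the shape of what I mean, in plain C rather than C++ (names mine,
not STK's): write the DSP once as a per-sample tick(), mark it inline, and
derive the block-level run and run_adding from it. The compiler inlines
tick() into both loops, so the per-sample abstraction costs nothing at run
time; only the block functions are connected at run time:

```c
/* The single-sample unit generator. */
static inline float tick_gain(float sample, float gain)
{
    return sample * gain;
}

/* Block-level wrappers generated from tick_gain at compile time. */
void run_gain(const float *in, float *out, unsigned long nframes, float gain)
{
    for (unsigned long i = 0; i < nframes; i++)
        out[i] = tick_gain(in[i], gain);          /* replace */
}

void run_adding_gain(const float *in, float *out, unsigned long nframes,
                     float gain)
{
    for (unsigned long i = 0; i < nframes; i++)
        out[i] += tick_gain(in[i], gain);         /* accumulate */
}
```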

> >Anyway, I think it is worth thinking about just using LADSPA for this.
> >As Abramo says - one api is better than many :)
>
> I agree with the first thought, but not the second :)
>

I think that should be: the smallest number of well-thought-out APIs
possible is better than lots and lots :)

> --p

_____________________________________________________
| Karl W. MacMillan |
| Computer Music Department |
| Peabody Institute of the Johns Hopkins University |
| karlmac_AT_peabody.jhu.edu |
| mambo.peabody.jhu.edu/~karlmac |
-----------------------------------------------------



This archive was generated by hypermail 2b28 : Thu May 24 2001 - 03:54:48 EEST