Re: [linux-audio-dev] LAAGA - how are we doing ?


Subject: Re: [linux-audio-dev] LAAGA - how are we doing ?
From: Jim Peters (jim_AT_aguazul.demon.co.uk)
Date: Fri May 04 2001 - 10:22:22 EEST


Paul Davis wrote:
> in a correct implementation, there are no function calls other than
> the required one to actually copy the data to a single channel. when
> compiled with optimization, a system based on aes inlines everything
> all the way down. we have to do just one call-by-pointer indirection.
> the function called is handwritten to move the data from
> non-interleaved float format to/from whatever the channel actually is
> (e.g. interleaved 16 bit, noninterleaved 24-in-32bit, etc.)

I see a problem here - if we've got 4 plugins writing to one bus that
happens to be a 16-bit bus, then we're converting float->short 4
times, and mixing in 16-bit, which is not a very good idea. If more
than one plugin is writing to a bus, then the bus needs to be a
`float' bus, converting to 16-bit afterwards.
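To make that concrete, here's a rough sketch of what I mean (all the
names here are made up by me, not taken from Paul's API) -- accumulate
every writer in float, and convert to 16-bit exactly once at the end:

    #include <stdint.h>
    #include <stddef.h>

    /* Made-up sketch: mix all writers in float, convert once.  Doing
     * float->short per writer and summing in 16-bit would round four
     * times and clip with no headroom at all. */
    static void mix_to_16bit_bus(const float *src[], int nsrc,
                                 int16_t *bus, size_t nframes)
    {
        for (size_t i = 0; i < nframes; i++) {
            float sum = 0.0f;
            for (int s = 0; s < nsrc; s++)
                sum += src[s][i];          /* mix in float */
            if (sum > 1.0f)  sum = 1.0f;   /* clip once, at conversion */
            if (sum < -1.0f) sum = -1.0f;
            bus[i] = (int16_t)(sum * 32767.0f);
        }
    }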

Anyway, I thought this was going to be float all the way through. Are
these alternative-format busses some kind of optimisation for
efficiency ?

> There are times when I have my doubts, and think
> of switching to an entirely "run" model, but I can't stand the thought
> of all the extra data copying.

I like this feature in that it automatically switches between run and
run_adding as appropriate, saving the time required to zero the bus
buffers. It also gives some abstraction from the actual format of the
bus, as you describe.

However, in the case of working to/from a `float' bus, it actually
forces additional copying because it stops the plugin from reading
directly from an input bus or adding directly to an output bus.
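Just to check my understanding, here's how I imagine write_to_channel()
might behave internally on a float bus -- pure guesswork on my part,
not Paul's actual code:

    #include <string.h>
    #include <stddef.h>

    /* Guessed sketch: the first writer in each cycle overwrites the
     * bus (so the buffer never needs zeroing), later writers add. */
    struct bus {
        float  *buf;
        size_t  nframes;
        int     written;   /* cleared at the start of each cycle */
    };

    static void write_to_bus(struct bus *b, const float *src)
    {
        if (!b->written) {
            memcpy(b->buf, src, b->nframes * sizeof(float)); /* "run" */
            b->written = 1;
        } else {
            for (size_t i = 0; i < b->nframes; i++)
                b->buf[i] += src[i];                 /* "run_adding" */
        }
    }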

> the whole point of request_channel() and release_channel() is that
> they set up some internals in the server so that calls to
> read_from_channel() and much more importantly write_to_channel() do
> the right thing.

I would guess that these are also used to decide the order in which
plugins are processed. We're not permitting feedback loops, are we ?
My impression from this talk of busses and stuff is that a plugin will
process a chunk of data from one bus at time `t' and write it to
another bus also at time `t' -- not at time `t+chunk_size'. This means
that everything that writes to a bus must be processed before anything
that reads from it, and hence that some combinations of plugging
things together are illegal. Is this the idea ? Am I on the right
track here ?
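If that's right, then the server presumably has to sort the plugins
topologically each time the patching changes. Something along these
lines, perhaps (my own sketch using Kahn's algorithm; nothing here is
from the proposed API):

    /* Order plugins so every writer to a bus runs before any reader
     * of it; if not every plugin can be scheduled, the patch contains
     * a feedback loop and should be rejected as illegal. */
    #define MAX_PLUGINS 64

    int schedule(int nplugins, int edges[][2], int nedges, int order[])
    {
        /* edges[e][0] must run before edges[e][1] */
        int indeg[MAX_PLUGINS] = {0};
        int done = 0;

        for (int e = 0; e < nedges; e++)
            indeg[edges[e][1]]++;

        while (done < nplugins) {
            int found = -1;
            for (int p = 0; p < nplugins; p++)
                if (indeg[p] == 0) { found = p; break; }
            if (found < 0)
                return -1;              /* cycle: feedback loop */
            order[done++] = found;
            indeg[found] = -1;          /* mark as scheduled */
            for (int e = 0; e < nedges; e++)
                if (edges[e][0] == found)
                    indeg[edges[e][1]]--;
        }
        return 0;
    }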

> if the summed amplitudes on a channel exceed 1, then its clipped. this
> is digital audio, remember ? :)) this is just an attempt to model what
> would happen in hardware.

I agree with this approach. Having done half a course in Sound
Engineering, I can understand how engineers are used to having some
headroom to play with when mixing. Working in 16-bit, everyone
normalises to the full range, so if you add two 16-bit signals
together in 16-bit, clipping is almost guaranteed. Working with
floats, however, there is almost endless headroom, and adding 100
normalised signals together causes no internal problems; it just
requires a gain adjustment at the end. I think that when Kai or Paul
specified -1..1 as the range, this was not a rule that necessarily
applies when working internally. Rather, it is the range you should
be aiming at if you want your app to sound about the same volume as
everyone else's. On output, however, we're hitting physical limits,
and there it has to be enforced.
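In code terms, the mixdown might look something like this (again my
own sketch, not anyone's spec) -- the intermediate sum can go far
beyond 1.0 without harm, and only the final stage needs to respect
the range:

    #include <stddef.h>

    /* Sum many normalised (-1..1) signals in float: no internal
     * clipping is possible, we just scale back before output. */
    void mix_down(const float *src[], int nsrc, float *out,
                  size_t nframes)
    {
        const float gain = 1.0f / nsrc;    /* e.g. 1/100 */
        for (size_t i = 0; i < nframes; i++) {
            float sum = 0.0f;
            for (int s = 0; s < nsrc; s++)
                sum += src[s][i];          /* may exceed 1.0: fine */
            out[i] = sum * gain;           /* back in nominal range */
        }
    }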

> >The use of the bus metaphor seems
> >limiting, however. Why not have a complete signal flow graph similar to
> >the Max family of languages?
>
> we do. its only the terminology that's hanging you up. i have been
> drawing some diagrams (on paper, alas) to try to make this clear. here
> is rough ascii rendition:

I'm struggling a little here to understand - I was also imagining a
signal flow graph kind of thing, and I'm trying to find a model that
helps me understand how you're thinking about this. I'm wondering if
it corresponds to a studio patchbay. I've only used one patchbay, but
I'm guessing that they are fairly standard, so I'm going to describe
how it works for those who've never seen one, and then see how it
corresponds to what we're discussing.

The patchbay is a big box full of 1/8" jack sockets. Each socket
carries a mono signal, either an input or an output of some piece of
equipment; all the equipment is connected to the patchbay by permanent
leads at the back of the unit. You connect pieces of equipment
together using short patch leads between these sockets on the front.
The sockets are numbered and labelled. For instance, there may be 24
sockets corresponding to the 24 inputs of the mixing desk. There will
also be individual outputs coming from particular pieces of equipment,
such as the CD player's L and R outputs, and inputs and outputs for
various effects units, such as a compressor's IN and OUT. All of these
are labelled on the patchbay so that you know what you're plugging
into what.

The patchbay that I know also has the additional feature that the
sockets are arranged in vertical pairs, and the socket above is by
default connected to the one below if nothing is plugged into either
of them. This means that the CD player's L and R can come in by
default on mixer inputs 13 and 14, for example, saving the use of
patch leads for common configurations.
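For those who prefer code to prose, the default-connection rule is
roughly this (my own toy model of a patchbay, nothing more):

    /* Toy model: the top socket of a vertical pair feeds the bottom
     * one unless a patch lead is plugged into either of them. */
    struct socket {
        int plugged;      /* non-zero if a patch lead is inserted */
        int partner;      /* index of the vertically-paired socket */
        int patched_to;   /* destination socket when plugged */
    };

    /* Where does the signal leaving socket `out' actually go ? */
    int route(const struct socket *s, int out)
    {
        if (s[out].plugged)
            return s[out].patched_to;  /* explicit patch lead wins */
        if (!s[s[out].partner].plugged)
            return s[out].partner;     /* default: falls through */
        return -1;                     /* partner in use: no default */
    }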

Now, I'm wondering if these AES busses are like the sockets on the
patchbay. Originally I was imagining the real-time server as a daemon
without a UI, but Paul mentioned a UI, so perhaps he is thinking of
having a UI to allow patching things together. Am I close ?

If so, then maybe things are starting to make a little sense. I'm
going to make up a scenario based on this to see how this looks.
Let's say we have a `patchbay' application, our UI for the server.
When an application asks the server to load up its plugin, it gives a
name to each of its inputs and outputs, and these appear immediately
on the patchbay display. Perhaps the application can also request
that it be patched through to some other application if that is
already loaded. The user can then rearrange the patch connections as
necessary, connecting and disconnecting our virtual pieces of
`equipment'.
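As pseudo-C, I imagine the client side of that scenario looking
vaguely like this -- and I must stress that every name below is
invented by me; none of it is Paul's or Kai's actual API:

    typedef struct server server_t;
    typedef struct port   port_t;
    typedef enum { PORT_INPUT, PORT_OUTPUT } port_dir_t;

    /* Invented declarations, for illustration only. */
    server_t *server_connect(const char *client_name);
    port_t   *register_port(server_t *, const char *name, port_dir_t d);
    int       request_patch(server_t *, const char *from, const char *to);

    int main(void)
    {
        server_t *srv = server_connect("my_synth");

        /* Each named output appears on the patchbay display at once. */
        register_port(srv, "out_L", PORT_OUTPUT);
        register_port(srv, "out_R", PORT_OUTPUT);

        /* Ask to be patched through to the mixer if it is already
         * loaded; the user can re-patch these in the UI later. */
        request_patch(srv, "my_synth:out_L", "mixer:in_13");
        request_patch(srv, "my_synth:out_R", "mixer:in_14");
        return 0;
    }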

This gives a partial view, but it doesn't cover everything that
appears to be possible with the suggested API. For instance, it is
possible to connect several applications to a single bus using Paul's
API, which is not possible with a patchbay because only one plug will
fit in each socket.

Please understand that I'm not trying to force ideas on anyone here,
I'm just trying to find a model to understand how this is going to
operate.

Is anything I've described related to what you have in mind, Paul ?
Or am I way off track ?

Jim

-- 
 Jim Peters         /             __   |  \              Aguazul
                   /   /| /| )| /| / )||   \
 jim_AT_aguazul.      \  (_|(_|(_|(_| )(_|I   /        www.aguazul.
  demon.co.uk       \    ._)     _/       /          demon.co.uk


