Re: [linux-audio-dev] LAAGA - how are we doing ?


Subject: Re: [linux-audio-dev] LAAGA - how are we doing ?
From: Paul Davis (pbd_AT_op.net)
Date: Fri May 04 2001 - 02:07:58 EEST


>> the model is 100% non-interleaved audio. you never have more than one
>> channel to process at once, and there is zero speed benefit from
>> passing several in, since they all need to be processed one by one
>> anyway (again, since the model is 100% non-interleaved audio).
>>
>
>I was assuming non-interleaved and was not suggesting this for efficiency
>(though there is the small speed saving of function call overhead). This
>is purely an ease of use suggestion (though of dubious quality).

in a correct implementation, there are no function calls other than
the required one to actually copy the data to a single channel. when
compiled with optimization, a system based on aes inlines everything
all the way down. we have to do just one call-by-pointer indirection.
the function called is handwritten to move the data from
non-interleaved float format to/from whatever the channel actually is
(e.g. interleaved 16 bit, non-interleaved 24-in-32 bit, etc.).
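
to make that concrete, here is a minimal sketch of the single
indirection. the struct layout and names below are hypothetical, made
up for illustration; only the idea (a per-channel, handwritten format
mover called through one pointer) is what i described above:

#include <stdint.h>
#include <stddef.h>

typedef struct channel {
    /* the one call-by-pointer indirection */
    void (*write) (struct channel *, const float *, size_t nframes);
    void *hw_buffer;     /* the channel's native buffer */
    unsigned stride;     /* samples between successive frames */
    unsigned offset;     /* this channel's slot within a frame */
} channel_t;

/* handwritten mover: non-interleaved float -> interleaved 16 bit */
static void write_float_to_interleaved_s16 (channel_t *ch,
                                            const float *src,
                                            size_t nframes)
{
    int16_t *dst = (int16_t *) ch->hw_buffer + ch->offset;
    for (size_t i = 0; i < nframes; i++) {
        float s = src[i];
        if (s > 1.0f) s = 1.0f; else if (s < -1.0f) s = -1.0f;
        dst[i * ch->stride] = (int16_t) (s * 32767.0f);
    }
}

a plugin never sees any of this; it just calls
ch->write (ch, buffer, nframes) and the right mover runs.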

>> it's a toss-up. size_t is defined as the maximum size of an OS object,
>> so it's an undefined 2^(architecture-bits). guint32, by contrast, has a
>> defined range. it's not clear which one is better. i opted for the
>> defined range.
>>
>
>Right, but there is no real advantage to the defined range assuming that
>the lower bound is guint32.

except for the fact that on 98% of all the systems it will run on,
that *is* the lower bound :)

but you're right. i should use something more like ALSA's
snd_pcm_uframes_t everywhere there is a frame count, and simply
make that a configure-time typedef.
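
something like this, say (the macro and type names are invented for
illustration; snd_pcm_uframes_t itself is real ALSA, the rest is not):

#include <stdint.h>

#ifdef AES_FRAMES_ARE_64BIT           /* decided by the configure script */
typedef uint64_t aes_nframes_t;       /* like snd_pcm_uframes_t on 64 bit */
#else
typedef uint32_t aes_nframes_t;       /* the guint32 case: a defined range */
#endif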

>> the whole point of request_channel() and release_channel() is that they
>> set up some internals in the server so that calls to
>> read_from_channel() and, much more importantly, write_to_channel() do
>> the right thing. that is, the server takes care of the
>> "run"/"run_adding" semantics for us. the plugins have no idea what
>> they're doing except sending data to a channel. this is in marked
>> contrast to VST and LADSPA.
>>
>
>This is a nice feature.

Thanks. I think so. There are times when I have my doubts, and think
of switching to an entirely "run" model, but I can't stand the thought
of all the extra data copying.
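
for the curious, the decision inside the server could be as simple as
the sketch below. write_to_channel() is the name from the API, but the
signature, the struct and the per-cycle counter are all my invention
here (and the real call would presumably take a channel ID, not the
server's channel struct directly):

#include <stddef.h>

typedef struct {
    float *buffer;
    int    writers_this_cycle;   /* reset to zero at the top of each cycle */
} server_channel_t;

void write_to_channel (server_channel_t *ch, const float *src, size_t nframes)
{
    if (ch->writers_this_cycle++ == 0) {
        /* first writer: plain "run" semantics, overwrite */
        for (size_t i = 0; i < nframes; i++)
            ch->buffer[i] = src[i];
    } else {
        /* every later writer: "run_adding" semantics, mix in */
        for (size_t i = 0; i < nframes; i++)
            ch->buffer[i] += src[i];
    }
}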

>> if you want a different effect, you send the output of all the plugins
>> to an internal bus, and then have a plugin that reads from the bus, scales
>> the amplitude by a user-adjustable (or automatic?) factor, and then
>> sends it to the physical channels. the internal bus is float
>> throughout, so you'll "never" cause clipping on that level.
>>
>
>This seems like the right model to me in terms of making normalization
>something done explicitly by the user. The use of the bus metaphor seems
>limiting, however. Why not have a complete signal flow graph similar to
>the Max family of languages?

we do. it's only the terminology that's hanging you up. i have been
drawing some diagrams (on paper, alas) to try to make this clear. here
is a rough ascii rendition:

 +----------+ ---------> DiskStream <--> storage medium
 |AES       |                |
 |          |                |
 |channels  |                V
 +----------+ ---------> Route
                         |||||  |
                         ||||+------------------ pre-dsp-send
                         ||||   |
                         ||||   dsp-chain (inserts/plugins, ladspa)
                         ||||   |
                         |||+------------------- post-dsp-send
                         |||    |
                         |||    gain stage
                         |||    |
                         |||    panning
                         |||    |
                         ||+-------------------- post-fade-send
                         ||     |
                         |+--------------------- outputs
                         |      .
                         |      .  (optional)
                         |      .
                         +---------------------- control outputs

the key thing to keep in mind is that a "bus" is just a different kind
of channel; the thing writing to or reading from it has no clue that it's
different unless it calls channel_type on the channel ID. so when i
talk about sending to a bus, it's just a matter of making one or more
of the outputs of a Route use a channel identified by an appropriate
channel ID, nothing more.

i hope that the above diagram makes clear the flexibility that i'm
aiming for.
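
in code terms, "sending to a bus" might look like the fragment below.
request_channel(), read_from_channel(), write_to_channel() and
channel_type are the names i've been using; the signatures, the
constants and the gain plugin itself are invented for the example:

typedef int channel_id_t;

extern channel_id_t request_channel (const char *name);
extern int  channel_type (channel_id_t id);
extern void read_from_channel (channel_id_t id, float *buf,
                               unsigned long nframes);
extern void write_to_channel (channel_id_t id, const float *buf,
                              unsigned long nframes);

#define MAX_FRAMES 4096

/* one or more Route outputs have been pointed at `bus'. this plugin
   neither knows nor cares that it is a bus; channel_type (bus) could
   tell it, but nothing here needs to know. assumes nframes <= MAX_FRAMES. */
void gain_to_physical (channel_id_t bus, channel_id_t out,
                       float gain, unsigned long nframes)
{
    float scratch[MAX_FRAMES];

    read_from_channel (bus, scratch, nframes);

    for (unsigned long i = 0; i < nframes; i++)
        scratch[i] *= gain;    /* user-adjustable scaling, float throughout */

    write_to_channel (out, scratch, nframes);
}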

there are people who might want each section of the Route to be
distinct. i don't. i believe that this architecture is flexible enough
to do some pretty wacky things while also optimizing for the common
case: signals taken from channels or a diskstream, routed through a
dsp processing network, modified by gain and pan operations, and sent
to channels.

the only obstacle so far is that the dsp-chain is mono. i don't like
this, but i don't have a good alternative right now. it's mostly a GUI
issue, since it would be fairly easy to set up the outputs of the
plugins to point to channels, but creating a GUI to do that kind of
thing ("send the left output of Freeverb to bus 29, and split the
right output to channel 8 and back into the dsp chain") is non-trivial.
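
engine-side, the wacky example is not the hard part. assuming some
connect-style call (entirely hypothetical, there is no such function
yet), it is just:

typedef int channel_id_t;
typedef struct plugin plugin_t;

/* hypothetical: may be called repeatedly to fan one port
   out to several channels */
extern void connect_plugin_output (plugin_t *p, int port, channel_id_t ch);

void route_freeverb (plugin_t *freeverb, channel_id_t bus29,
                     channel_id_t channel8, channel_id_t chain_in)
{
    /* "send the left output of Freeverb to bus 29" */
    connect_plugin_output (freeverb, 0, bus29);

    /* "split the right output to channel 8 and back into the dsp
       chain": two destinations, and the server's run/run_adding
       handling keeps the fan-out safe */
    connect_plugin_output (freeverb, 1, channel8);
    connect_plugin_output (freeverb, 1, chain_in);
}

the GUI for expressing that is the real work.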

--p


