Re: [alsa-devel] Re: [linux-audio-dev] laaga, round 2


Subject: Re: [alsa-devel] Re: [linux-audio-dev] laaga, round 2
From: Paul Davis (pbd_AT_Op.Net)
Date: Wed May 09 2001 - 05:28:04 EEST


>I'm not sure how 'anonymous' channel allocation works if more than one app
>(aka plug-in) is trying to co-operate.

Well, apps don't cooperate by themselves. A user has specific desires
about cooperation, and so they set up application X to share data with
application Y by choosing to have X and Y use the same set of
(internal bus) channels.

>> does that make it any clearer?
>
>almost... could you do another summary slanted as below?
>
>- in this case a real-world illustration of using the LAD elements to
>replace VST/ASIO/ReWire with a "lad-virtual-studio".
>
>... let's say two 24 channel cards, an HDR, an FX rack program, a FoH
>console & a fold-back console... sounds like a reasonable set-up to record a
>live gig no?

in what follows i'm assuming that the consoles are physical consoles,
not software. it doesn't make much difference either way.

there's one engine application. ALSA provides the abstraction that
links the two cards into a single PCM stream (pcm_multi in
alsa-lib-speak). the engine controls the ALSA PCM device. it loads the
plugins for the HDR and the FX rack "applications". both plugins
present a (G)UI for the user to set up their input and outputs. the
user configures the HDR plugin and FX plugin to route their outputs to
the appropriate channels. some of the channels correspond to internal
busses, some to physical channels. data being sent to the consoles is
delivered to physical channels; data being shared between them goes
first to an internal bus channel and then to a physical channel (or
possibly something even more complex). both plugins are driven by the
engine in a synchronous fashion.

>I don't yet understand how the app (plug-in) writers & users will see the
>abstraction of the H/W & busses.

as mentioned several times, they are just mono 32 bit floating point
streams accessed with one of 4 function calls:

   int request_channel (channel_id_t, direction_t);
   int release_channel (channel_id_t, direction_t);
   int read_from_channel (channel_id_t, sample_t *buf, frame_count_t nsamples,
                          frame_count_t offset);
   int write_to_channel (channel_id_t, sample_t *buf, frame_count_t nsamples,
                         frame_count_t offset, gain_t gain);

other than by calling channel_type (channel_id_t), there is no way to
distinguish one type of channel from any other.

>Physical connectivity of external (read H/W) interfaces and patching of
>'soft' busses has to be presented to the user somehow.

yes, but it's nothing to do with the server/engine. it's up to the
plugins to do that in ways that match their particular (G)UI model.

>P.S. I remain of the opinion that more than one server instance *must* be
>needed where there is more than one _unsynchronised_ H/W interface. This is
>a trivial statement - providing the implementation (or design) does not
>preclude it ;-)

this is taken care of by ALSA, not the LAAGA server. if ALSA or some
other low-level API for device access/control doesn't take care of it,
then LAAGA cannot.

--p



This archive was generated by hypermail 2b28 : Wed May 09 2001 - 06:00:48 EEST