Re: [alsa-devel] Re: [linux-audio-dev] laaga, round 2


Subject: Re: [alsa-devel] Re: [linux-audio-dev] laaga, round 2
From: Adam Olsen (adamolsen_AT_technologist.com)
Date: Wed May 09 2001 - 04:34:33 EEST


On Tue, May 08, 2001 at 07:00:28PM -0400, Paul Davis wrote:
> >And while I'm at it, why don't I give my own ideas on how things
> >should be organized? I would split the components into two types -
> >symmetric and asymmetric. Symmetric components take a given amount of
> >input from the engine, and return an identical amount of data to it
> >(probably using the same buffer). Asymmetric components (not
> >surprisingly) do the opposite: they take a given amount of input from
> >the engine (possibly none) and return a different amount of data.
> >This could come from changing the sample rate, reading from a
> >microphone, or outputting to the soundcard.
>
> this is impossible. if the engine says "process (64)", then you must
> generate and/or process 64 frames. you can generate those frames from
> less or more than 64 frames of "source data" (e.g. by interpolation or
> data reduction), but you still have to come up with 64 frames or
> you'll be responsible for audio artifacts.
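Something like this, I take it -- a process() callback that always
delivers exactly the requested frame count, interpolating when the
source runs at a different rate and padding with silence when it runs
out. (The names and signatures below are made up for illustration;
none of this is actual LAAGA code.)

    #include <stddef.h>

    /* hypothetical per-plugin state */
    static const float *src;     /* source material (any length)   */
    static size_t       src_len; /* frames available in src        */
    static double       pos;     /* fractional read position       */
    static double       ratio;   /* source frames per output frame */

    void process(float *out, size_t nframes)
    {
        for (size_t i = 0; i < nframes; i++) {
            size_t j = (size_t)pos;
            if (j + 1 < src_len) {
                /* linear interpolation: consume source at `ratio`,
                   but always emit exactly one output frame per pass */
                double frac = pos - (double)j;
                out[i] = (float)((1.0 - frac) * src[j] + frac * src[j + 1]);
                pos += ratio;
            } else {
                /* out of source data: pad with silence rather than
                   return short, or we cause the artifacts Paul means */
                out[i] = 0.0f;
            }
        }
    }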

I don't think I was thinking about the same thing as you were :)

It would run basically with "while (ready) run_cycle();". ready would
be controlled by either the mic input or the soundcard output, making
sure data kept being produced until the realtime requirements were
met. If you had both a mic input and a soundcard output, each creating
realtime requirements, you'd need an asymmetric component to handle
any difference between the input rate and the output rate (e.g. fill
with silence).
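In driver-loop terms, something like this (frames_playable(),
frames_capturable(), wait_for_hardware() and run_cycle() are
hypothetical stand-ins for whatever the engine and hardware layer
actually provide):

    #include <stddef.h>

    #define CYCLE_FRAMES 64

    extern size_t frames_playable(void);     /* space left in soundcard buffer */
    extern size_t frames_capturable(void);   /* mic frames waiting to be read  */
    extern void   wait_for_hardware(void);   /* e.g. poll() on the device fds  */
    extern void   run_cycle(size_t nframes); /* one pass over all components   */

    void engine_thread(void)
    {
        for (;;) {
            /* "ready" means the output side can absorb, and the input
               side can supply, one more cycle's worth of frames */
            while (frames_playable()   >= CYCLE_FRAMES &&
                   frames_capturable() >= CYCLE_FRAMES)
                run_cycle(CYCLE_FRAMES);

            wait_for_hardware();
        }
    }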

>
> >Another distinction that could be made is internal vs external. The
> >game may be able to do all its internal mixing and sound effects in a
> >single thread, but once it outputs it (to the soundcard or wherever),
> >another process would have to handle it. Unless of course it's
> >outputting directly to the card, then there's just the kernel-level
> >hardware api.
>
> As mentioned above, in the LAAGA proposal, there is no external mixing
> built into the API. Of course, you could choose to send all your
> output to one or more channels corresponding to a set of internal
> busses, and then use a second client/plugin that reads/writes from
> those busses, provides gain control, and then forwards the result to a
> physical channel. But this is merely enabled by the API; it's not part
> of it.

*nod* I think that's what I was thinking of, only I assumed it'd be
the normal case.

>
> the client/plugin (unless it specifically investigates) has no clue
> what a given channel is connected to. whether a channel is an internal
> bus or is connected to a h/w interface is invisible when using the
> functions designed to deliver data to the engine. no client/plugin has
> any (visible) access to the audio h/w. all it can do is call
> read_from_channel() and write_to_channel().
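That second client would then look something like this -- only the two
channel calls touch audio data, and nothing in the code can tell
whether a channel id names an internal bus or hardware. (The
signatures are my guesses; only the function names come from the
proposal.)

    #include <stddef.h>

    #define MAX_CYCLE 4096  /* assume the engine never asks for more */

    extern void read_from_channel(int channel, float *buf, size_t nframes);
    extern void write_to_channel(int channel, const float *buf, size_t nframes);

    static int   bus_in;       /* internal bus the app mixed into     */
    static int   hw_out;       /* physical channel -- looks identical */
    static float gain = 0.5f;  /* the "gain control" stage            */

    void process(size_t nframes)
    {
        float buf[MAX_CYCLE];

        read_from_channel(bus_in, buf, nframes);
        for (size_t i = 0; i < nframes; i++)
            buf[i] *= gain;
        write_to_channel(hw_out, buf, nframes);
    }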
>
> to reiterate, all LAAGA offers is:
>
> * abstraction of the audio h/w into a series of mono, 32 bit float
> channels along with functions to write/read data to/from them.
> * totally synchronous execution of all clients/plugins; each plugin
> has its "process()" function called by the engine at the right
> time.
> * data sharing via internal busses accessed via the exact same
> mechanism as physical channels.
>
> nothing (or almost nothing) more.
>
> does that make it any clearer?

I think so. So basically you're just dealing with the symmetric,
internal stuff, leaving the asymmetric and external stuff to something
else entirely. Right?
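To make my reading concrete, the engine's whole cycle would reduce to
something like the toy loop below -- every plugin's process() runs
once per cycle, in order, and anything asymmetric or external stays
outside it. (Again, none of this is actual LAAGA code.)

    #include <stddef.h>

    typedef void (*process_fn)(size_t nframes);

    static process_fn plugins[16]; /* registered in execution order */
    static size_t     n_plugins;

    void run_cycle(size_t nframes)
    {
        /* totally synchronous: each client/plugin runs exactly once
           per cycle, exchanging data only via channels and busses */
        for (size_t i = 0; i < n_plugins; i++)
            plugins[i](nframes);
    }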

-- 
Adam Olsen, aka Rhamphoryncus

