Re: [linux-audio-dev] One API for everything (first draft)


Subject: Re: [linux-audio-dev] One API for everything (first draft)
From: Abramo Bagnara (abramo_AT_alsa-project.org)
Date: Sun May 20 2001 - 17:26:54 EEST


Paul Davis wrote:
>
> >You're a professional audio bigot ;-) soundboxes are also for hundreds
> >of application that have always used interleaved buffers.
> >
> >Do you want *really* to have two APIs, one for professional level and
> >one for consumer one?
>
> If it's possible, I'd like a single API that works for both. That's why
> I want non-interleaved buffers, since they work for both. We've hashed
> this out before: the cost to stereo apps of using non-interleaved is
> not zero, but it's small. The cost to heavily multichannel apps of
> using interleaved buffers is large. Ergo, the buffer format should be
> non-interleaved.

Then a simple .wav player first needs to convert its data to
non-interleaved, and then to convert it back to interleaved.

An application like normalize would see its throughput reduced by, say,
20%...

Smart idea, indeed ;-)

> I hear you say "why restrict it? why not allow either?". Because as
> Karl and I have both said, standardizing on the buffer format is
> efficient, it makes coding simpler, it works.

- efficiency is the same
- the soundbox user/client only needs to set the property (or not even
  that, if we want to have a default)
- the soundbox implementation needs to have one more property

One line in the client and one line in the soundbox implementation.

Both of us sound like a broken record.

You're saying "let's break with (or punish) the past and its bogus file
formats/applications" and I'm saying "not so fast, boy".

> My mindset starts with something somewhat like LADSPA or VST, in which
> a plugin has a series of (audio) input and output ports. These ports
> can be connected to other ports (with polarity requirements, of
> course; you can't connect two inputs together). Associated with a port
> is the address of a buffer where audio data may be read/written. This
> address is valid only for the duration of the equivalent of LADSPA's
> "run/run_adding" callback, and VST's "process/process_replacing"
> callback, executed by the engine on the plugin (i.e. it may change
> from callback to callback).

Here we differ: in my model the buffer does not change. What's the
rationale behind this choice?

> But if we connect, for example, two output ports to a single input
> port, then either:
>
> 1) the output ports have to have their own buffer, and some
> "invisible hand" mixes them together and stores the result
> in the input port buffer,
> OR
>
> 2) the plugin(s) with the output ports have to use
> "+=" when writing data to the buffer.

Perfect.

> So, the method used to send data to an output port is a function of
> what it's connected to, and not a property of the port.

Wrong. In my model it is the soundbox client/user that asks for a
specific behaviour (which can be refused by the soundbox, if not
implemented).

Suppose you have an application (or a soundbox) that connects two
soundboxes. Then it knows what needs to be done: it asks for shared
buffers if appropriate, it asks for adding behaviour if needed, etc.

-- 
Abramo Bagnara                       mailto:abramo_AT_alsa-project.org

Opera Unica
Phone: +39.546.656023
Via Emilia Interna, 140
48014 Castel Bolognese (RA) - Italy

ALSA project http://www.alsa-project.org It sounds good!


