Re: [LAU] Daemons, daemons...kill those daemons.

From: Fons Adriaensen <fons@email-addr-hidden>
Date: Tue Jun 16 2009 - 00:40:14 EEST

On Mon, Jun 15, 2009 at 05:14:46AM +0000, Fernando Lopez-Lezcano wrote:

> > where things maybe went wrong was the idea of defining a lot of
> > different *kinds* of virtual devices via a config file, including ones
> > that did multi-app multiplexing, device sharing, even i/o to
> > non-hardware devices like the ALSA JACK plugin.
>
> Oh yeah. I never could really find out or understand how to deal with
> that. Maybe I did not try hard enough. Right now thinking about it I am
> reminded of unraveling rewriting rules in sendmail.cf (oh well, no, it
> was not that bad :-)

First thing I do when installing Fedora is remove sendmail
and install postfix - for the sole reason that I'm capable
of configuring postfix in less than 48 hours :-)

Seriously now, talking about ALSA, let's just consider resampling.
Assume your hardware is running at 48 kHz and you want to use
44.1 kHz.

Typical audio hardware will interrupt whenever it has N
samples available for reading and writing, where in many
cases N will be a power of 2. For example, at 48 kHz with
N = 1024 you get an interrupt every 1024 / 48000 seconds,
about 21.3 ms.

Now if you resample that, you have basically two choices:

1. Keep the 'period time' generated by the HW, i.e. provide
an interface that will trigger its client at the same rate.
The consequence is that you don't have a fixed number of
samples per period (in this example, if the HW period is
1024, you would have either 940 or 941 samples in a period -
see the example after this list). This would be unacceptable
to some clients, e.g. Jack.

2. Try to create an interface that will offer a fixed 2^n
period size. This requires extra buffering, in other words
increased latency. And the timing of such periods will be
irregular. In some cases a HW capture buffer will not be
sufficient to generate a resampled buffer: you have to wait
an extra period, and you had better have some samples buffered
in case the client wants them. In a similar way, a single
playback buffer could complete two HW buffers, forcing you to
store one of them. If you want the period times to be more or
less regular, this again requires more latency.
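
To make both cases concrete, here is a minimal sketch - a plain
C simulation, not ALSA code, and the 147/160 integer bookkeeping
is just one way to do the arithmetic exactly. It prints the
940/941 per-period counts of choice 1, and the buffering and
irregular delivery pattern of choice 2, with HW and client
periods both set to 1024 frames:

  #include <stdio.h>

  int main (void)
  {
      const int num = 147, den = 160;  /* 44100/48000 reduced          */
      const int hwper = 1024;          /* HW period, 48 kHz frames     */
      const int clper = 1024;          /* fixed client period, 44.1 k  */
      int rem = 0, avail = 0, ndel = 0;

      for (int irq = 1; ndel < 13; irq++)
      {
          /* Choice 1: resampled frames produced by this HW period.
             The exact count alternates between 940 and 941.          */
          int n = (hwper * num + rem) / den;
          rem = (hwper * num + rem) % den;
          printf ("HW irq %2d: %3d resampled frames\n", irq, n);

          /* Choice 2: deliver a fixed-size client period once enough
             frames have accumulated. Note the frames left waiting in
             the buffer, and that some HW interrupts (e.g. the 13th)
             deliver no client period at all.                         */
          avail += n;
          if (avail >= clper)
          {
              avail -= clper;
              printf ("  -> client period %2d ready, %4d frames buffered\n",
                      ++ndel, avail);
          }
      }
      return 0;
  }
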

A client would quite legitimately expect the two resampled
streams (capture and playback) to have the same period timing,
as they would have for direct HW access. But in ALSA dmix and
share are independent plugins, so anything goes. And even if
they were coupled, synchronising the two streams could again
imply more latency (it's possible but not simple to avoid
this).

The conclusion is that resampling at the sound card API, if
this is in any way 'period based', breaks things, at least in
the sense that it introduces more latency than the resampling
operation (which implies filtering and hence delay) does on its
own.

And there is no good reason for ever doing that.
Applications like soft synths should be capable of generating
their output at the sample rate imposed by the hardware. The
*only* case where resampling is really needed is when dealing
with *stored* data (either being read or written). But in that
case there will be buffering between the storage and the RT
audio code, and *that* is the right place to apply resampling:
it costs nothing (in terms of timing) if done there.
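
For example (a minimal sketch with hypothetical helpers -
read_from_file, resample, ringbuffer_write and ringbuffer_read
are not real library calls, just stand-ins for whatever file
reader, resampler and lock-free buffer you actually use):

  extern volatile int running;
  extern int  read_from_file (float *buf, int nframes);
  extern int  resample (const float *in, int nin, float *out, double ratio);
  extern void ringbuffer_write (const float *buf, int nframes);
  extern void ringbuffer_read (float *buf, int nframes);

  /* Disk thread: no hard timing constraints here, so the varying
     output size of the resampler does not matter at all.          */
  void disk_thread (void)
  {
      float in [4096], out [8192];

      while (running)
      {
          int n = read_from_file (in, 4096);  /* 44.1 kHz frames from disk */
          int m = resample (in, n, out, 48000.0 / 44100.0);
          ringbuffer_write (out, m);          /* 48 kHz frames into buffer */
      }
  }

  /* RT callback: a plain copy at the HW rate and a fixed period
     size - no resampling, hence no extra latency in the RT path. */
  void process (float *hwbuf, int nframes)
  {
      ringbuffer_read (hwbuf, nframes);
  }
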

Ciao,

-- 
FA
I always say it: Italy is too narrow and long.