
Subject: Re: [linux-audio-dev] ALSA vs OSS/free
From: Paul Davis (pbd_AT_Op.Net)
Date: Mon Mar 11 2002 - 03:58:00 EET


>All streaming interfaces that I know of are interleaved. S/PDIF / AES/EBU,
>ADAT. And just about all file- and network stream formats.

The RME Hammerfall is the best multichannel card there is. It's not
interleaved. It's the only available option for people who want
seriously multichannel (i.e. > 10 channel) setups. There is at least
one other ALSA-supported hardware device that is not interleaved.
S/PDIF, ADAT and AES/EBU are interleaved formats on the wire, but
nothing delivers raw S/PDIF, ADAT or AES/EBU data to the host, only
PCM sample streams. That's how the Hammerfall is able to present 26
channels of data non-interleaved, even when the underlying protocol
is not. As explained in my other email about interleaving, this
allows the host-side DAW application to be much more efficient with
non-linear editing. Yes, something is {de,re}interleaving, but we
allow the apps to be efficient, and throw dedicated hardware on the
audio interface at the problem. It's really nice.
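
For the curious, a minimal sketch of what the non-interleaved path
looks like through the ALSA PCM API, assuming a placeholder device
name ("hw:0") and illustrative sizes, with error handling elided. The
only real difference from the interleaved case is the access type and
the array of per-channel pointers handed to snd_pcm_writen():

#include <alsa/asoundlib.h>
#include <stdint.h>
#include <stdlib.h>

#define NCHAN  26
#define FRAMES 1024

int main(void)
{
    snd_pcm_t *pcm;
    snd_pcm_hw_params_t *hw;
    int32_t *bufs[NCHAN];
    int c;

    snd_pcm_open(&pcm, "hw:0", SND_PCM_STREAM_PLAYBACK, 0);
    snd_pcm_hw_params_alloca(&hw);
    snd_pcm_hw_params_any(pcm, hw);
    /* one buffer per channel instead of one interleaved buffer */
    snd_pcm_hw_params_set_access(pcm, hw, SND_PCM_ACCESS_RW_NONINTERLEAVED);
    snd_pcm_hw_params_set_format(pcm, hw, SND_PCM_FORMAT_S32_LE);
    snd_pcm_hw_params_set_channels(pcm, hw, NCHAN);
    snd_pcm_hw_params_set_rate(pcm, hw, 96000, 0);
    snd_pcm_hw_params(pcm, hw);

    for (c = 0; c < NCHAN; c++)
        bufs[c] = calloc(FRAMES, sizeof(int32_t));

    /* writen() takes an array of per-channel pointers */
    snd_pcm_writen(pcm, (void **)bufs, FRAMES);
    snd_pcm_close(pcm);
    return 0;
}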

>> hardware, and they can both move back and forth between the hardware
>> types with the inevitable extra cycles, but no change in the
>> application code.
>
>CPU load of interleaving/deinterleaving streams is so small that I can't
>notice it without running CPU cycle counter. My app even uses network
>byteorder in IPC (TCP link) and doing that conversion back and forth doesn't
>show up in total CPU load either (all of it could end up at ~0.1%).

Try it on 26 channels full duplex at 32 bits/96kHz. It's not a lot,
but it registers, and when you've only got 600usec to process the
data, it counts. Besides, as mentioned elsewhere, the real case for
non-interleaved lies elsewhere, in the user-space code.
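
To make the cost concrete, this is roughly the loop being discussed,
under assumed sizes (26 channels, 64-frame periods, which is about
667usec at 96kHz):

#include <stdint.h>

#define NCHAN  26
#define FRAMES 64   /* one period: 64 frames ~ 667usec at 96kHz */

/* deinterleave one period: NCHAN * FRAMES loads and stores, per
 * direction, every period; cheap in isolation, but it lands inside
 * that ~600usec budget */
static void deinterleave(const int32_t *in, int32_t *out[NCHAN])
{
    unsigned f, c;

    for (f = 0; f < FRAMES; f++)
        for (c = 0; c < NCHAN; c++)
            out[c][f] = in[f * NCHAN + c];
}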

>> no device that i know of supports 24 bit mode. S32_LE or S32_BE are
>> the correct formats. jack/ardour/ecasound and others have supported 24
>> bit I/O via ALSA for a long time.
>
>With 24-bit format I mean 24-bit MSB aligned data in 32-bit words. As I
>haven't seen any 32-bit A/D converters yet... ;)
>
>Here's part of my code, both of these succeed as calls but neither of these
>works:
>
> case 24:
> if (snd_pcm_hw_params_set_format(sndpcmPcm, sndpcmHwParams,
> SND_PCM_FORMAT_S24) != 0)
                 ^^^^^^^^^^^^^^^^^^
                 SND_PCM_FORMAT_S32_LE or SND_PCM_FORMAT_S32_BE

There are no h/w devices that offer SND_PCM_FORMAT_S24. I think it's
likely you found a bug in libasound's plug layer; not entirely
surprising, since nobody I know of has ever tried the S24 format. S24
literally means 24 bits, in case some crazy h/w maker decides to
actually offer this design.
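
In other words, something like the following, sketched with
illustrative names against the alsa-lib API (use S32_BE on big-endian
machines):

#include <alsa/asoundlib.h>
#include <errno.h>

/* S32 carries the 24 significant bits MSB-aligned in 32-bit words,
 * which is exactly what the converters deliver */
static int set_format_for_bits(snd_pcm_t *pcm, snd_pcm_hw_params_t *hw,
                               int bits)
{
    switch (bits) {
    case 24:
    case 32:
        return snd_pcm_hw_params_set_format(pcm, hw, SND_PCM_FORMAT_S32_LE);
    case 16:
        return snd_pcm_hw_params_set_format(pcm, hw, SND_PCM_FORMAT_S16_LE);
    default:
        return -EINVAL;
    }
}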

>> Only if you want the same value for *both* streams. If you want to run
>> with a large blocksize for capture, but a small blocksize for
>> playback, that doesn't work. ALSA separates out hardware parameters
>> from software parameters, and the point at which you wake a sleeping
>> application because of available data is a software, not hardware
>And that doesn't have impact on latency?
>It will end up in nasty effects if output buffer size is not multiple or
>integer fraction of input buffer size or multiple of hardware buffer size.

1) Some cards don't permit integer ratios. Some or all of the ymf
   series from Yamaha are like this: fixed interrupt intervals, but
   variable buffer sizes. You can have weird ratios like 1 interrupt
   every 0.8 buffers. OSS can't support such cards reliably because it
   assumes that the ratio is always integral.

2) I didn't say anything about the ratios. I said that you might want
   the hardware to interrupt, say, 4 times per buffer, but only get
   woken when waiting on the playback stream 2 times per buffer. This
   can make for very nice playback latency yet reduce system load
   during capture because of the reduced "interrupt frequency" in
   user space (where the action really happens). Think about what
   happens on a device where the capture and playback streams operate
   from the same clock and trigger ... (see the sketch below).
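
A sketch of how ALSA expresses that split, with illustrative sizes:
the period size is a hardware parameter (the interrupt interval),
while avail_min is a software parameter (the wakeup threshold), so
the two need not match:

#include <alsa/asoundlib.h>

static int configure_wakeups(snd_pcm_t *pcm)
{
    snd_pcm_hw_params_t *hw;
    snd_pcm_sw_params_t *sw;

    snd_pcm_hw_params_alloca(&hw);
    snd_pcm_hw_params_any(pcm, hw);
    /* hardware: 4 interrupts per 1024-frame buffer */
    snd_pcm_hw_params_set_buffer_size(pcm, hw, 1024);
    snd_pcm_hw_params_set_period_size(pcm, hw, 256, 0);
    snd_pcm_hw_params(pcm, hw);

    snd_pcm_sw_params_alloca(&sw);
    snd_pcm_sw_params_current(pcm, sw);
    /* software: don't wake the app until 2 periods are available */
    snd_pcm_sw_params_set_avail_min(pcm, sw, 512);
    return snd_pcm_sw_params(pcm, sw);
}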

>Btw. ASIO2 doesn't support that either.

neither does jack :)

>> The application can't handle it if the problem is caused by scheduling
>> delays. The point of having the driver do it is that barring stupid
>> calls to sti/cli, it knows there is a problem immediately that it
>> happens. This makes it possible to avoid silly buzzing noises with
>> ALSA, since the driver itself can silence the audio when an xrun takes
>> place.
>
>You can check the over/underrun counters before reading data. When writing,

That's too late. The "buzz" has already happened ... :)
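
The knob being alluded to lives in the software parameters; a hedged
sketch with the conventional values (threshold 0, size equal to the
boundary) for automatic silencing:

#include <alsa/asoundlib.h>

/* threshold 0 + size == boundary asks for everything not yet
 * (re)written by the app to be kept silent, so an xrun plays
 * silence instead of stale data */
static int enable_xrun_silencing(snd_pcm_t *pcm)
{
    snd_pcm_sw_params_t *sw;
    snd_pcm_uframes_t boundary;

    snd_pcm_sw_params_alloca(&sw);
    snd_pcm_sw_params_current(pcm, sw);
    snd_pcm_sw_params_get_boundary(sw, &boundary);
    snd_pcm_sw_params_set_silence_threshold(pcm, sw, 0);
    snd_pcm_sw_params_set_silence_size(pcm, sw, boundary);
    return snd_pcm_sw_params(pcm, sw);
}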

>it's up to driver to decide in OSS, true...

>> jack's alsa "driver" works on a per-channel basis. i don't know what
>> you mean by channel scan sequences or channel filter parameters. if
>> you explain what you mean in more detail, i can probably tell you how
>> to do it.
>
>Most DAQ cards have single high quality, high speed A/D converter and input

   [ ... description elided ... ]

Well, what you've just described is more or less exactly how the
internals of several multichannel audio interfaces work. However, none
of them expose this interface to the host CPU. As a result, no ALSA
driver at this time has support for the operations you describe. It
would be trivial to add a control switch to an ALSA driver for such a
device that allowed you to set the scan sequence, scan interval and so
forth. Most ALSA devices have several such switches on them to control
h/w-specific issues. Where there is a common theme (e.g. selecting the
sample clock source), we have standardized on the switch name. For
other things, they are h/w-specific and have to be set using an API
that is separate from the PCM API (but conceptually identical). They
are not part of the PCM API because we've seen what a mess it makes to
try to put it there in OSS (where there are all these device-specific
ioctls for "odd" hardware).
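
For illustration only, this is roughly what such a switch would look
like through the ALSA control API; the element name "Scan Interval"
is hypothetical, since no current driver exposes one:

#include <alsa/asoundlib.h>

/* the point is only that such switches live in the control API,
 * beside (not inside) the PCM API */
static int set_scan_interval(const char *card, long interval)
{
    snd_ctl_t *ctl;
    snd_ctl_elem_value_t *val;
    int err;

    if ((err = snd_ctl_open(&ctl, card, 0)) < 0)
        return err;

    snd_ctl_elem_value_alloca(&val);
    snd_ctl_elem_value_set_interface(val, SND_CTL_ELEM_IFACE_MIXER);
    snd_ctl_elem_value_set_name(val, "Scan Interval");  /* hypothetical */
    snd_ctl_elem_value_set_integer(val, 0, interval);

    err = snd_ctl_elem_write(ctl, val);
    snd_ctl_close(ctl);
    return err;
}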

>Only programmers who wrote the code know exactly how it works. They should

I didn't write the code. I know (more or less) exactly how it
works. Proof by refutation?

>write documentation at time of writing the software. That way it's also
>easier to verify that the software does what it should.

Documentation was not written because "what it should" was not
defined. ALSA has been the subject of an incremental design process,
shaped by a lot of feedback from application development. I strongly
suspect that if Kai and I (to cite two examples) had not been working
on multichannel audio applications, ALSA would look very different
than it does today.

--p


