Re: [linux-audio-dev] ALSA vs OSS/free


Subject: Re: [linux-audio-dev] ALSA vs OSS/free
From: Paul Davis (pbd_AT_Op.Net)
Date: Sun Mar 10 2002 - 19:34:39 EET


>> OSS does not do s/w mixing for any devices that i know of. it supports
>> multi-open, just like ALSA, for all h/w that supports it, where the
>> means of doing so has been described by the h/w maker.
>
>It does, here's output from /dev/sndstat of my laptop (devices 2-9 do
>samplerate/format conversion & software mixing):

Apologies for the error. I forgot about SoftOSS because when I tried
to use it 2 years ago, it routinely panicked my system, had appalling
latency characteristics, and generally seemed like a half-baked
solution to a genuine problem. The design is the same as the win32
kernel mixer, which even MS now admits was a bad idea.

>> OSS has ZERO support for non-interleaved cards. there's no way to even
>> express the idea of it in OSS. is this some kind of competition?
>
>Driver can always interleave the data from non-interleaved format. That's
>piece of cake. One for-loop.

and a lot of extra CPU cycles, given that high end applications use
non-interleaved data precisely because it makes editing easy.

the fact that OSS dictates interleaved format was a major error on
Hannu's part. i don't blame him - it mirrored the h/w available at the
time. but it was still wrong, and it continues to be wrong. the amount
of work to interleave *two* streams is pretty trivial. this would
suggest that if you are going to force a particular format,
non-interleaved should be it. that way, high end apps that are
streaming 48 non-interleaved data streams to a device that is
non-interleaved don't waste cycles converting then re-converting the
data.
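
for reference, the "one for-loop" in question looks something like
this (a sketch for float samples; the names are illustrative):

    /* copy N separate channel buffers into one frame-interleaved
     * buffer: every single sample gets touched an extra time */
    void interleave (float *dst, float **src, int nchannels, int nframes)
    {
        int frame, ch;
        for (frame = 0; frame < nframes; frame++)
            for (ch = 0; ch < nchannels; ch++)
                dst[frame * nchannels + ch] = src[ch][frame];
    }

with 48 channels at 48kHz, that "one for-loop" is over two million
extra sample copies per second, possibly followed by the reverse copy
on the other side.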

however, i prefer ALSA's approach, where high end apps can use
non-interleaved format with no penalty on high end hardware, consumer
apps can use interleaved format with no penalty on interleaved
hardware, and they can both move back and forth between the hardware
types with the inevitable extra cycles, but no change in the
application code.
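
for the curious, the application-side difference in ALSA is just the
access type plus the matching transfer call (a sketch, assuming the
pcm handle and hw_params are already open; error checking omitted):

    /* interleaved: one buffer, frames laid out as [ch0 ch1 ch0 ch1 ...] */
    snd_pcm_hw_params_set_access (pcm, hwparams, SND_PCM_ACCESS_RW_INTERLEAVED);
    snd_pcm_writei (pcm, buffer, nframes);

    /* non-interleaved: one buffer per channel */
    snd_pcm_hw_params_set_access (pcm, hwparams, SND_PCM_ACCESS_RW_NONINTERLEAVED);
    snd_pcm_writen (pcm, channel_buffers, nframes);

the application picks whichever layout suits it; a conversion only
happens when the hardware disagrees.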

>Problem is that I don't know how to get the 24-bit data out of ALSA. Only
>16-bits works for me. Asking for 24-bit format works, but crashes libasound.

no device that i know of supports 24 bit mode. S32_LE or S32_BE are
the correct formats. jack/ardour/ecasound and others have supported 24
bit I/O via ALSA for a long time.
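
a sketch of how this looks with alsa-lib (pcm and hwparams assumed
open; error checking omitted). the card's 24 significant bits arrive
inside each 32-bit sample:

    if (snd_pcm_hw_params_set_format (pcm, hwparams, SND_PCM_FORMAT_S32_LE) < 0) {
        /* no 32-bit container available: fall back to 16 bit or give up */
        snd_pcm_hw_params_set_format (pcm, hwparams, SND_PCM_FORMAT_S16_LE);
    }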

>> of course, if you want to find out how many channels the device has,
>> OSS can't help you (except by returning -EINVAL to an attempt to set
>> it).
>
>Just set it to something like 8 and see value returned in channel count.

And if the correct answer is 26? or 52, if I've merged two Hammerfalls
together? I'm not even sure OSS even has enough bits available for
really high channel counts.
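
compare the ALSA way, where you can just ask the hardware
configuration space directly (a sketch; the exact function signatures
have shifted a little between alsa-lib releases):

    unsigned int max_channels;

    snd_pcm_hw_params_any (pcm, hwparams);
    snd_pcm_hw_params_get_channels_max (hwparams, &max_channels);

no guessing, no magic starting value.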

>> if you want to change when the kernel will wake up your process
>> thats sleeping on poll or select, OSS can't help you. if you want
>
>I don't use select(), but setting the fragment size should change the
>threshold, as select() should return when there is full fragment available
>for reading or writing.

Only if you want the same value for *both* streams. If you want to run
with a large blocksize for capture, but a small blocksize for
playback, that doesn't work. ALSA separates out hardware parameters
from software parameters, and the point at which you wake a sleeping
application because of available data is a software, not hardware decision.
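
in code terms (a sketch; handles assumed open, error checking
omitted), the fragment/period size is a hw param but the wakeup point
is a sw param, settable per stream:

    /* hardware side: period (a.k.a. fragment) size */
    snd_pcm_hw_params_set_period_size (pcm, hwparams, period_frames, 0);

    /* software side: how many frames must be available before
     * poll()/snd_pcm_wait() wakes the process */
    snd_pcm_sw_params_current (pcm, swparams);
    snd_pcm_sw_params_set_avail_min (pcm, swparams, wakeup_frames);
    snd_pcm_sw_params (pcm, swparams);

set avail_min differently on the capture and playback handles and you
get exactly the split behaviour described above.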

>> control what the driver does if there is an xrun, OSS can't help
>> you. and if you want to control standard digital audio controls, like
>
>Application should do that. Just check the over/underrun values in
>audio_errinfo (SNDCTL_DSP_GETERROR).

The application can't handle it if the problem is caused by scheduling
delays. The point of having the driver do it is that, barring stupid
calls to sti/cli, it knows there is a problem the moment it
happens. This makes it possible to avoid silly buzzing noises with
ALSA, since the driver itself can silence the audio when an xrun takes
place.
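
in ALSA these are sw params as well. a sketch of asking the driver to
silence the ring buffer when an xrun occurs (handles assumed open;
error checking omitted):

    snd_pcm_uframes_t boundary;

    snd_pcm_sw_params_get_boundary (swparams, &boundary);
    snd_pcm_sw_params_set_silence_threshold (pcm, swparams, 0);
    snd_pcm_sw_params_set_silence_size (pcm, swparams, boundary);
    snd_pcm_sw_params (pcm, swparams);

whatever stale data was sitting in the buffer never reaches the
converters.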

>> clock source or IEC958 configuration, OSS can't help with that either
>> unless you want to write code specific for each different h/w device.
>
>What is IEC958?

it's the formal name for the digital audio standards commonly known
as S/PDIF and AES/EBU.

>That's not bad. ALSA could also make comedi obsolete if designed correctly
>and that would be great as I find it annoying that I have to support N+1
>different APIs for different hardware. Now I have support for five different
>APIs (OSS, ALSA05, ALSA09, comedi, ASIO2 (win32)), not to even count the
>vendor specific DAQ APIs.

ALSA0.5 is obsolete and should not, in my opinion, be supported by
anyone or anything at this time. i have never heard of comedi. If you
want to support just one API, use PortAudio. It works on MacOS, win32,
OSS, and will soon work with JACK (perhaps ALSA directly as well). It's
callback-based, which is good for application design.

>To obsolete comedi, is there way in ALSA to do digital I/O (reading and
>writing bits in digital interfaces using values & masks)?
>Is there way to configure gains per channel, channel scan sequences, per
>channel filter parameters and such?

jack's alsa "driver" works on a per-channel basis. i don't know what
you mean by channel scan sequences or channel filter parameters. if
you explain what you mean in more detail, i can probably tell you how
to do it.
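
to make "per-channel" concrete, here's the shape of a minimal JACK
output client (a sketch; the client and port names are made up). each
port carries one non-interleaved mono stream, and the process callback
just asks for each port's buffer:

    #include <string.h>
    #include <jack/jack.h>

    jack_port_t *out[2];                      /* one port per channel */

    static int process (jack_nframes_t nframes, void *arg)
    {
        int ch;
        for (ch = 0; ch < 2; ch++) {
            jack_default_audio_sample_t *buf =
                jack_port_get_buffer (out[ch], nframes);
            memset (buf, 0, nframes * sizeof (*buf));   /* silence */
        }
        return 0;
    }

    /* in main(): */
    jack_client_t *client = jack_client_new ("sketch");
    out[0] = jack_port_register (client, "out_1", JACK_DEFAULT_AUDIO_TYPE,
                                 JackPortIsOutput, 0);
    out[1] = jack_port_register (client, "out_2", JACK_DEFAULT_AUDIO_TYPE,
                                 JackPortIsOutput, 0);
    jack_set_process_callback (client, process, NULL);
    jack_activate (client);

this is also the callback style i was recommending above: the
application never calls read() or write(), it just fills buffers when
asked.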

>My point here is that if we have complex API, then it should be able to do
>all the complex things. I'm not going to call ALSA simple.

if ALSA can't do it, it's because nobody ever mentioned the idea.

>> Not much of ALSA is documented at this time. Feel free to volunteer to
>> help with the effort.
>
>I'm fulltime working with other things (like writing applications that use
>ALSA and OSS) so I don't have time to do it. It's also pretty hard to write
>documentation when there is no documentation as I didn't write the code...
>:)

And you wonder why there is no documentation?

>> multiple process access to the same device? What about support for
>> implementing "subset" devices (e.g. 12 stereo devices mapped to a 24
>> channel card)? Should they be in the kernel?
>
>Yes, what is the problem here? It's just matter of copying data to correct
>place. One for-loop again.

Absolutely wrong. It doesn't work that way at all, unless you add
loads of buffering, thus destroying low-latency opportunities.

>> OK, so you want to rewrite Linux. That has nothing to do with ALSA.
>
>It doesn't need any rewrite because all can be done using kernel modules.

The description you offered was not of Linux, with or without modules.

>> as erik mentioned, user-space device drivers appear to be coming to
>> linux bit by bit. we don't know enough about them yet to really
>> understand how good of an idea they are at this point.
>
>Linux is becoming more and more microkernel'ish and I believe it's good
>trend.

Linux is not microkernel-ish in any way, shape, or form. Microkernels
do not generally run dynamically loadable modules with full kernel
privilege, and they use message passing of some form between
modules. In Linux, a loaded kernel module is indistinguishable from
statically linked code, and has full access via memory reference and
direct function call to 100% of the kernel (bar the new stuff about
GPL'ed and non-GPL'ed drivers). You can redefine "microkernel" if you
want, but I was involved in MK research in the early 1990s, and Linux
doesn't represent that line of thought in any sense.

>> I spend too much of my time working on Linux device drivers to be
>> wasting any more on drivers for an operating system that almost nobody
>> will use. Sorry.
>
>I don't think RT-Linux and RT-AI as different operating systems. Only as
>realtime extensions to Linux kernel that could be included in standard
>kernel.

They need new drivers. In my mind, that's as good/bad as a new OS.

>> Abramo and I once sketched out how to write ASIO on top of alsa-lib.
>> However, JACK is even simpler than ASIO, and it already exists (insert
>> ob. rant about CVS access here).
>
>I would like to see bufferswitch callback directly called from the interrupt
>handler.

The process is woken from the interrupt handler, and with SCHED_FIFO
or SCHED_RR on the sleeping process, it will be set running when the
handler finishes. You don't want to call it "from" the interrupt
handler - that leads to the kind of nonsense you have on Mac OS <= 9,
where the audio code literally runs in interrupt context. The scheme
we have now provides an extra delay of less than 5usec, with massively
improved flexibility for what the bufferswitch callback actually does.
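
for the curious, all the audio thread has to do is request the
realtime scheduling class once (a sketch; the priority value is
illustrative, and this needs root or an equivalent capability):

    #include <stdio.h>
    #include <sched.h>

    struct sched_param param;

    param.sched_priority = 50;                /* 1..99 for SCHED_FIFO */
    if (sched_setscheduler (0, SCHED_FIFO, &param) < 0)
        perror ("sched_setscheduler");

after that, the wakeup triggered by the interrupt handler preempts
anything running under the normal scheduling class.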

--p

