Re: [linux-audio-dev] ALSA vs OSS/free

Subject: Re: [linux-audio-dev] ALSA vs OSS/free
From: Paul Davis (pbd_AT_Op.Net)
Date: Sat Mar 09 2002 - 04:27:51 EET


>Some of my negative points for ALSA (compared to commercial OSS):
>
> - Linux only

QNX too. :)))

> - Too difficult to install and configure for end users

Fair point, up until Linux 2.5, where it's present by
default. Configuration is still an issue, but that's because we offer
much more flexibility for configuration, and with that comes a cost
that has to be dealt with through good user config tools that have not
yet been written or finished.

> - More difficult to program

actually, it's easier. it just isn't documented in a way that makes
that apparent. the major benefits are that no device-specific hacks are
needed and format conversion is handled by alsa-lib. there are others.
i can't imagine what JACK's code would look like under OSS. it would
have a bunch of code to do things that ALSA either handles or makes
irrelevant.
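
to make that concrete, here's a minimal playback sketch against
alsa-lib. it's a sketch only, not code from JACK or anything else: the
"default" device name and the parameters are illustrative choices, and
snd_pcm_set_params() is a later convenience wrapper around the
hw_params/sw_params calls, so it may postdate the alsa-lib of this
era. the point is that nothing here cares what the card actually
supports; opening "default" goes through the plug layer, so if the
hardware can't do S16/44100/stereo natively the conversion happens in
the library, not in the app.

/* minimal alsa-lib playback sketch (illustrative only) */
#include <stdio.h>
#include <string.h>
#include <alsa/asoundlib.h>

int main(void)
{
    snd_pcm_t *pcm;
    short buf[2 * 1024];   /* 1024 frames of interleaved stereo silence */
    int err;

    if ((err = snd_pcm_open(&pcm, "default",
                            SND_PCM_STREAM_PLAYBACK, 0)) < 0) {
        fprintf(stderr, "open: %s\n", snd_strerror(err));
        return 1;
    }

    /* ask for 16-bit interleaved stereo at 44.1 kHz, ~0.5 s latency;
     * the library sorts out whatever the hardware can really do */
    if ((err = snd_pcm_set_params(pcm, SND_PCM_FORMAT_S16_LE,
                                  SND_PCM_ACCESS_RW_INTERLEAVED,
                                  2, 44100, 1, 500000)) < 0) {
        fprintf(stderr, "set_params: %s\n", snd_strerror(err));
        return 1;
    }

    memset(buf, 0, sizeof(buf));
    snd_pcm_writei(pcm, buf, 1024);
    snd_pcm_close(pcm);
    return 0;
}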

> - Seems to require dynamically linked libasound (?)

You can link statically.
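
Something along these lines works; the archive path and the trailing
-l flags are guesses that depend on your distribution and on how
libasound was built:

gcc -o app app.c /usr/lib/libasound.a -lm -ldl -lpthread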

> - use libasound doesn't follow unix ideology of "everything is a file",
> users should only know about /dev/<something>

you are welcome to program using /dev/snd/whatever. the API is the
standard Unix API, featuring open/read/write/close/ioctl/mmap. nothing
stops you from doing this except that it is tedious and you'll end up
back in the OSS-world of device-specific code, because the
/dev/snd/whatever interfaces strictly represent the hardware
capabilities of the underlying devices.
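
if you've forgotten what that world looks like, here's the familiar
/dev/dsp idiom (just a sketch): open a device node and ioctl it. every
one of those ioctls may hand back a different value than the one you
asked for, and a careful app has to check each of them and do its own
conversion when the driver can't oblige.

/* the OSS /dev/dsp idiom, for contrast (sketch only) */
#include <sys/soundcard.h>
#include <sys/ioctl.h>
#include <fcntl.h>
#include <unistd.h>
#include <string.h>

int main(void)
{
    int fd = open("/dev/dsp", O_WRONLY);
    int fmt = AFMT_S16_LE, channels = 2, rate = 44100;
    short buf[2 * 1024];

    if (fd < 0)
        return 1;

    /* each request is "best effort": the driver rewrites the argument
     * to whatever the hardware actually supports */
    ioctl(fd, SNDCTL_DSP_SETFMT, &fmt);
    ioctl(fd, SNDCTL_DSP_CHANNELS, &channels);
    ioctl(fd, SNDCTL_DSP_SPEED, &rate);

    memset(buf, 0, sizeof(buf));
    write(fd, buf, sizeof(buf));
    close(fd);
    return 0;
}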

haven't you understood the stupidities that OSS has forced on people
because of this assumption that a program should access /dev/foo? this
works when the kernel side of /dev/foo embodies all that /dev/foo is
supposed to do. this is true for filesystems, disk drives, tape
devices and more.

but in the case of audio, linus, alan and others have made it clear
(and most of us agree with them) that they do not accept the idea that
format conversion, channel mapping, {de,re}interleaving, device
sharing (*not* h/w multiplexing) etc. should live in the kernel if at
all possible. therefore, there are really useful aspects of an audio
API that under Linux *cannot* live in the kernel. where do *you* think
they should be?
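
as a trivial illustration of the kind of code in question (not
alsa-lib source, just a sketch): splitting interleaved stereo S16
frames into per-channel buffers. nobody wants this in the kernel, but
somebody has to do it, and with ALSA it can be alsa-lib rather than
every application.

#include <stddef.h>

/* split interleaved stereo S16 frames into two per-channel buffers */
void deinterleave_s16_stereo(const short *in, short *left, short *right,
                             size_t frames)
{
    size_t i;
    for (i = 0; i < frames; i++) {
        left[i]  = in[2 * i];        /* even samples: left channel */
        right[i] = in[2 * i + 1];    /* odd samples: right channel */
    }
}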

the convention of accessing an inode that directly talks to a driver
is what stops OSS apps from being used flexibly, since nothing can
interpose between the app and the device without using LD_PRELOAD (gack!).
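
for anyone who hasn't had the pleasure: the preload trick means
building a shared object that shadows libc entry points, roughly like
this (a sketch; a real wrapper such as esddsp or artsdsp also has to
catch ioctl(), mmap(), open64() and friends):

/* sketch of the LD_PRELOAD hack: shadow open() so that an OSS app's
 * open("/dev/dsp") can be redirected somewhere else entirely */
#define _GNU_SOURCE
#include <dlfcn.h>
#include <fcntl.h>
#include <stdarg.h>
#include <string.h>
#include <sys/types.h>

int open(const char *path, int flags, ...)
{
    static int (*real_open)(const char *, int, ...);
    mode_t mode = 0;
    va_list ap;

    if (!real_open)
        real_open = (int (*)(const char *, int, ...))
            dlsym(RTLD_NEXT, "open");

    if (flags & O_CREAT) {      /* mode argument only exists with O_CREAT */
        va_start(ap, flags);
        mode = va_arg(ap, mode_t);
        va_end(ap);
    }

    if (strcmp(path, "/dev/dsp") == 0)
        path = "/dev/null";     /* stand-in for "divert to a daemon/library" */

    return real_open(path, flags, mode);
}

built with something like "gcc -shared -fPIC -o shim.so shim.c -ldl"
and run as "LD_PRELOAD=./shim.so some-oss-app". fragile, and only
necessary because the app insists on talking to an inode instead of a
library.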

btw, IRIX doesn't follow the ideology you mention either. not that
this says much?

--p

