[linux-audio-dev] desktop audio resumed

From: Maarten de Boer <mdeboer@email-addr-hidden>
Date: Fri Jul 01 2005 - 13:45:39 EEST

Hello,

I just did some rereading of the parts of the "What Parts of Linux
Audio Simply Work Great?" thread that talk about the problems with
soundcards that do not support multiple streams, and thought it would be
good if we could actually come up with some advice for the desktop
developers (Gnome and KDE mainly), distribution developers, and audio
application developers in general.

This document should contain a detailed description of the current
situation, of how we got there (i.e. how the desktop "sound daemons"
actually created a bigger problem than they solved, and why alsa does
not do mixing in software by default), of how different user
requirements lead to different solutions that are not always compatible
(i.e. the "professional audio" vs "normal" users), and of all the
different solutions currently available (and interfering with each
other).

I believe such an overview is essential. I think most people on this
mailing list have a pretty good idea about this, but do others? For
example, I get the impression that there is a lot of misunderstanding or
ignorance about alsa and dmix.

Then it should propose an ad-hoc solution, and some guidelines for how
to work towards a future in which everybody (including jwz ;-) ) is
happy with linux audio that "just works". (I found jwz's rant
unjustified and unpleasant, but we can use it to our advantage if we
give the right response, which, with a bit of luck (?), will get the
same attention from the slashdot hordes as jwz's blog.)

The ad-hoc solution, I believe, is something that should work "right
now", or at least as well as possible, with as few changes as possible
to existing applications. For one, this would mean: making sure that
dmix is used where necessary, that no application uses the hw: devices
explicitly but rather the "default" device, and that OSS applications
use libaoss (if I understand correctly, libaoss can be told to use the
dmix plugin, while the alsa OSS emulation will always use the hw device,
or am I wrong here?). This is mainly a matter for the distros.
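
To make the hw: vs "default" point concrete, here is a minimal sketch
(mine, not something from the thread) of an alsa-lib playback client
that opens the "default" PCM instead of hw:0,0, so it picks up whatever
plugin chain (dmix, plug, ...) the distro has configured behind
"default". The format and rate are just example values:

/* sketch: open the "default" PCM so a distro-configured dmix is used */
#include <alsa/asoundlib.h>

int main(void)
{
    snd_pcm_t *pcm;
    snd_pcm_hw_params_t *hw;
    unsigned int rate = 44100;
    int err;

    /* "default" instead of "hw:0,0": hw: grabs the card exclusively,
     * "default" goes through whatever the alsa configuration provides
     * (dmix, plug, ...). */
    if ((err = snd_pcm_open(&pcm, "default",
                            SND_PCM_STREAM_PLAYBACK, 0)) < 0) {
        fprintf(stderr, "open failed: %s\n", snd_strerror(err));
        return 1;
    }

    snd_pcm_hw_params_alloca(&hw);
    snd_pcm_hw_params_any(pcm, hw);
    snd_pcm_hw_params_set_access(pcm, hw, SND_PCM_ACCESS_RW_INTERLEAVED);
    snd_pcm_hw_params_set_format(pcm, hw, SND_PCM_FORMAT_S16_LE);
    snd_pcm_hw_params_set_channels(pcm, hw, 2);
    snd_pcm_hw_params_set_rate_near(pcm, hw, &rate, 0);
    if ((err = snd_pcm_hw_params(pcm, hw)) < 0) {
        fprintf(stderr, "hw_params failed: %s\n", snd_strerror(err));
        return 1;
    }

    /* ... snd_pcm_writei() the audio here ... */

    snd_pcm_close(pcm);
    return 0;
}

(Build with "gcc -o play play.c -lasound".) If the distro routes
"default" through dmix, several clients like this can play at the same
time on a card with only one hardware stream, while an application that
insists on opening "hw:0,0" blocks everybody else.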

The remaining problem here is what to do with jackd. When the
"professional" user runs jackd and it complains that it is not using the
hw: device directly, the solution should be obvious to him, and the
non-jack apps should continue to work as before (though they would have
to be restarted, I suppose). Could anyone comment on this? The
"occasional" jackd user can just run jack through the dmix plugin,
which, if I understand correctly, means higher latency, but we are not
talking about the "professional" user here.

Proposing a roadmap for the future is much harder, but I think we can
talk about that later.

For now, I would like your opinion on the issue. Do you agree that such
a document would be feasible and useful, and that the proposed
structure/contents make sense? I am not sure that the mailing list is
the best way to write this document; maybe we could use a Wiki. I guess
the first step would be to look for the relevant messages in the "What
Parts of Linux Audio Simply Work Great?" thread and write a short
overview of those.

maarten