Subject: Re: [alsa-devel] Re: [linux-audio-dev] Re: Toward a modularization of audio components
From: Kai Vehmanen (kaiv_AT_wakkanet.fi)
Date: Sat May 05 2001 - 01:03:19 EEST
On Fri, 4 May 2001, Abramo Bagnara wrote:
>> - we don't have a standard API for accessing sound hardware
> And here ALSA PCM API may already play a very good role. One API,
> different backends. The backend is written once.
Yes, definitely!
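To make the "one API, different backends" idea concrete, here's a rough C sketch of how such a split could look. All names here are invented for illustration; a real ALSA backend would call the actual alsa-lib functions (snd_pcm_writei() and friends), and an OSS backend would write() to /dev/dsp.

```c
#include <stdio.h>
#include <stddef.h>

/* One generic PCM interface; the concrete backend is chosen once
 * at open time. Hypothetical names, for illustration only. */
typedef struct pcm_backend {
    const char *name;
    size_t (*write)(const short *buf, size_t frames);
} pcm_backend;

static size_t alsa_write(const short *buf, size_t frames) {
    (void)buf;
    /* a real backend would call snd_pcm_writei() here */
    return frames;
}

static size_t oss_write(const short *buf, size_t frames) {
    (void)buf;
    /* a real backend would write() raw samples to /dev/dsp here */
    return frames;
}

static const pcm_backend backends[] = {
    { "alsa", alsa_write },
    { "oss",  oss_write  },
};

/* Application code only ever sees the generic interface. */
size_t play(const pcm_backend *be, const short *buf, size_t frames) {
    return be->write(buf, frames);
}
```

The point is simply that the application is written once against the generic write call, and swapping the table entry swaps the whole backend.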
[combining apps with LAAGA]
> I have to say that my aim is more ambitious: reusable audio components.
[...]
> - audio file format encoder/decoders
> - audio file dropout free reader/writer
> - FX
> - visual scopes
> - etc.
And you don't seem to be the only one. I must say I'm starting to suffer
from audio solution overload. :) For instance, we have ALSA, AES/ardour,
aRts, CSL and now also gStreamer. In the background Glame, Gdam and
who knows what other projects are just waiting to join the party. :)
Seriously guys, where's the catch? Writing audio apps can't be this
complex! :D Ok, I guess it's time to get back to basics: what
requirements do I - the average Linux musician - have for audio apps:
1. I want to run normal audio apps without extra hassle.
+ no problem; ALSA 0.9.x drivers handle both OSS and ALSA native apps
- it's true that there are already some possibly interesting apps
I can't use directly (aRts/esd/etc only apps)
2. I want to reliably play, mix and record files.
+ with the proper choice of apps and some kernel tuning this
can be achieved
3. I have a good beat running on my sw drum machine app XXX, and
I'd like to record it to a file. Unfortunately XXX doesn't
yet implement streaming to disk. And even if it did, as I want
to tweak the controls as I play, XXX would need to be able to
stream both to a file and to the soundcard. So how about other
options?
- ok, starts to get difficult; I know at least ALSA's aserver
can do this, but I have no idea about how to set this up
- aRts, gstreamer, esd, no idea whether possible or how to
set up ...
- btw; a few months ago, I _was_ able to do this by using
ALSA 0.5.x's loopback device; it was simple to understand,
and thus easy to use (!)
4. I want to start my audio multitrack and MIDI-sequencer software
at the same time (in sync, that is), so that I can record new
stuff while the MIDI track is playing in the background
(external MIDI hw is generating the audio).
- ok, lost at sea again; hmm, I guess I could hack the multitrack
app to send its MIDI start to a named pipe, and patch the
MIDI-sequencer so that it'll read from that pipe; hmm,
might work ;)
- no ideas about other possible solutions
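The named-pipe hack from point 4 is actually easy to sketch: the multitrack app writes a MIDI real-time Start byte (0xFA) into a FIFO, and the sequencer blocks on the FIFO until it arrives. The FIFO path and function names below are made up for illustration.

```c
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/stat.h>

/* Hypothetical path; both apps would have to agree on it. */
#define SYNC_FIFO "/tmp/midi-start-sync"

/* Called by the multitrack app when it starts rolling. */
int send_start(const char *path) {
    unsigned char start = 0xFA;            /* MIDI real-time Start */
    int fd = open(path, O_WRONLY);         /* blocks until a reader opens */
    if (fd < 0) return -1;
    ssize_t n = write(fd, &start, 1);
    close(fd);
    return n == 1 ? 0 : -1;
}

/* Called by the sequencer; returns 0 once Start has been received. */
int wait_for_start(const char *path) {
    unsigned char byte = 0;
    int fd = open(path, O_RDONLY);         /* blocks until a writer opens */
    if (fd < 0) return -1;
    ssize_t n = read(fd, &byte, 1);
    close(fd);
    return (n == 1 && byte == 0xFA) ? 0 : -1;
}
```

Crude, and the two open() calls only give start-of-song sync (no tempo, no song position), but it shows how little plumbing the hack really needs.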
5. Same as (4), but now I want to replace the MIDI-sequencer with
a virtual softsynth. Hmm, maybe I'll use EVO as a softsampler.
In addition I'd like to send the softsynth's output to one
track of my multitrack sw.
- d'oh! - this is even more desperate!
6. and the list goes on ...
---
Hmm, these use cases sound quite familiar, don't they? No complex
dsp-networks, no sub-1ms requirements, just a few simple things I do
all the time with my analog gear. And, maybe a bit surprisingly,
what's holding me back _IS NOT_: a) lack of applications, or b) poor
performance
... no, I just can't do these things. So as a result, I - the average
Linux musician - will wait for Paul to integrate Quasimodo, gstreamer
plugins, EVO and Midi Mountain into Ardour so I can finally do what I
want! >;D
-- http://www.eca.cx Audio software for Linux!
This archive was generated by hypermail 2b28 : Sat May 05 2001 - 00:43:06 EEST