Re: [linux-audio-dev] audio routing (was: Re: LADSPA GUI Issues)


Subject: Re: [linux-audio-dev] audio routing (was: Re: LADSPA GUI Issues)
From: Kai Vehmanen (kaiv_AT_wakkanet.fi)
Date: Sun Mar 12 2000 - 17:52:27 EST


On Sun, 12 Mar 2000, Paul Barton-Davis wrote:

>>1) A new audio I/O API that is similar to ALSA's sequencer API.
> This is the much-debated "replacement for esd". Some heat and not much
> light was generated over on alsa-devel last week about this.

Well, not exactly. ESD is mainly about mixing multiple output streams
into a single audio device (network-transparently). That isn't
necessarily needed for the audio routing API. We could start with
something simple like this:

1. app-A connects to the output port '1' of the ard (audio router daemon)
        - A can now write to the port
        - A can ask the daemon whether any input is connected to 1
        - A sets the audio parameters
2. app-B connects to the input port '1'
        - B can, for instance, be an ALSA/OSS output plugin
                => we could have default ports for ALSA, OSS,
                   file streaming, null, etc.
        - B is forced to use the audio format specified by A
                => the first one to connect gets to decide
3. app-C tries to connect to input/output port '1'
        - gets EBUSY (software mixing not implemented :))

Ok, you get the picture. Implementing the above system shouldn't be too
difficult (no mixing, no format conversions), but it would still have
many uses. Buffering and latency considerations are the only hard parts.
But we could follow the LADSPA design (==> use the same principles for
audio format, buffering and related issues).

Of course, for this to be useful, it would have to be widely accepted
(like LADSPA and MuCoS).

>>3) Standard UNIX IPC-mechanisms (pipes, sockets, etc). These of course
>> work, but are too clumsy for everyday use.
> Actually, the two IPC mechanisms you list *don't* work. A Unix pipe is
> 5K long. For decent sampling rates, it doesn't hold enough data to
> handle a couple of context switches between the processes at each

Yes, this is true. I was talking about both realtime and non-realtime
uses... Even non-realtime routing would be better than the current
situation.

>> do it all on one machine (live recording, software synths,
>> samplers and sequencers --> multitrack-recording software --> master).
> if any more than one of the above has to run SCHED_FIFO, i expect that
> concurrent execution of them will not work unless you are operating
> with long latencies (large buffers). you need about 100msec of

Well, I don't need to run them all at the same time...

        - record a drum beat from Greenbox to ecasound (non-realtime)
        - record a synth-line live with Quasimodo (Quasimodo -> ecasound,
          ecasound -> output)
        - same with a bunch of other software synths
        - etc...

> buffering to record+playback 24 tracks of audio from the hammerfall
> without SCHED_FIFO; if you go smaller than that, a regular thread will
> create clicks and pops (and thats on a dual PII-450!)
[...]
> roll on the 4 way athlons :)

I still don't worry too much about performance. When I started working
on ecasound I had a 486/33MHz/8MB. Today I have a dual Celeron 466MHz/128MB
machine dedicated to recording. But as hardware gets better, programs
keep on getting bigger and buggier. Because of this, reuse, ease of
maintenance and correctness are much more interesting than squeezing
out an extra few percent of speed.

-- 
Kai Vehmanen <kaiv_AT_wakkanet.fi> -------- CS, University of Turku, Finland
 . http://www.wakkanet.fi/ecasound/ - linux multitrack audio processing
 . http://www.wakkanet.fi/sculpscape/ - ambient-idm-rock-... mp3/ra/wav



This archive was generated by hypermail 2b28 : Mon Mar 13 2000 - 00:44:04 EST