Re: [linux-audio-dev] audio application mixing/routing arch

Subject: Re: [linux-audio-dev] audio application mixing/routing arch
From: Paul Barton-Davis (pbd_AT_Op.Net)
Date: Thu Mar 30 2000 - 06:38:05 EEST


>> first requirement is: a thread doing real time audio synthesis never
>> blocks for anything except the audio hardware. Thats already a rather
>> complex design requirement for many people.
>
>Threads, although they do have their difficulties, are very pervasive
>in modern programming. They are required (or are the simplest and
>most common of several solutions) for proper interactive UI design,
>are the popular method of choice for writing servers, etc. I don't
>think that they'll scare off too many people.

It wasn't threads per se that I was referring to, but the notion that
you had to write a thread that never, ever blocked for anything except
the audio h/w. This is not hard, but not trivial either, especially
given that you're using threads ...
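To make "never blocks except on the audio h/w" concrete, here is a hedged sketch (all names illustrative, not from any real API) of the usual pattern: the control/GUI task publishes a parameter with a C11 atomic store, and the real-time audio task only ever does an atomic load, so the audio path contains no locks at all:

```c
#include <stdatomic.h>

/* Illustrative only: a GUI thread publishes a new gain value, the
   audio thread picks it up without ever blocking. */

static _Atomic unsigned int gain_bits;  /* float stored as raw bits */

/* called from the GUI/control task; may run at any priority */
static inline void set_gain(float g) {
    union { float f; unsigned int u; } c = { .f = g };
    atomic_store_explicit(&gain_bits, c.u, memory_order_release);
}

/* called from the real-time audio task; never blocks */
static inline float current_gain(void) {
    union { float f; unsigned int u; } c;
    c.u = atomic_load_explicit(&gain_bits, memory_order_acquire);
    return c.f;
}

/* the audio callback just reads whatever value is current */
static void process(float *buf, int n) {
    float g = current_gain();
    for (int i = 0; i < n; i++)
        buf[i] *= g;
}
```

The point is exactly the one above: this is not hard, but you have to design for it from the start, because one stray mutex or malloc() in process() breaks the guarantee.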

>I wasn't trying to invent an impractical case; certainly it's not
>hard to dream up things that would make a Cray sweat. I was only
>referring to the two-process version, which I hope is not
>unreasonable.

2 processes or 2 threads? In Linux, a process is a task, and a thread
is a task too. So I expect that a 2-task model is not feasible for
anything terribly interesting, since two tasks is already the bare
minimum required just for low latency real time audio I/O.
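For the two tasks themselves, the standard way to connect them without ever blocking the audio task is a lock-free single-producer/single-consumer ring buffer. A minimal sketch, assuming one producer task (disk/GUI) and one consumer task (audio) and a power-of-two ring size — illustrative code, not any particular library:

```c
#include <stdatomic.h>

#define RING_SIZE 256  /* must be a power of two */

typedef struct {
    float buf[RING_SIZE];
    _Atomic unsigned int head;  /* advanced only by the producer */
    _Atomic unsigned int tail;  /* advanced only by the consumer */
} ring_t;

/* producer side: returns 0 if the ring is full; never blocks */
static int ring_push(ring_t *r, float v) {
    unsigned int h = atomic_load_explicit(&r->head, memory_order_relaxed);
    unsigned int t = atomic_load_explicit(&r->tail, memory_order_acquire);
    if (h - t == RING_SIZE)
        return 0;
    r->buf[h & (RING_SIZE - 1)] = v;
    atomic_store_explicit(&r->head, h + 1, memory_order_release);
    return 1;
}

/* consumer side (the audio task): returns 0 if empty; never blocks */
static int ring_pop(ring_t *r, float *v) {
    unsigned int t = atomic_load_explicit(&r->tail, memory_order_relaxed);
    unsigned int h = atomic_load_explicit(&r->head, memory_order_acquire);
    if (h == t)
        return 0;
    *v = r->buf[t & (RING_SIZE - 1)];
    atomic_store_explicit(&r->tail, t + 1, memory_order_release);
    return 1;
}
```

Because each index is written by exactly one task, neither side ever takes a lock, which is what lets the audio task meet the no-blocking requirement.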

>I started writing a rather bitter response to this along the lines
>of saying: "Cubase is so self-contained that it is a complete
>operating system. As such, it has several strengths in the latency
>department, but I'm not willing to give up all the benefits of a
>general purpose operating system like Linux."

Even though you rescinded this condemnation (:) in the next paragraph,
I would point out that a lot of this has to do with the underlying OS
that Cubase is running on. It's not really possible for Cubase to rely
on MacOS or Windows for services, because they don't provide them, or
don't provide them fast enough to be usable. Linux, by contrast, can
provide them, and I don't expect that a Linux implementation of Cubase
would give you that feeling for very long. Well, I would *hope* not,
anyway.

>Along these lines, would it not make more sense to talk about a
>LADSPA daemon with no interface being the host? I'm confused when
>people say "my HDR will be a host", "my soft-synth will be a host",
>etc. Shouldn't they all be clients? At the very least, they're
>conceptually peers, so one can't be hosted by the other.

Clients/servers: we've adopted the terminology that a client does not
run in the same address space as the server, so a client program is
faced with all the same kinds of problems that I was alluding to in
previous messages. It implies IPC between distinct tasks, plus all the
context switching between them. By contrast, the plugin/host model is
low-overhead: the host calls the plugin directly, in-process.

But notice, there's no technical reason why one host cannot be a
plugin to another host. If the first host provides the necessary
interface, the second host can load it just like any other plugin. The
fact that a call to its "run()" function will involve more computation
than in a single plugin is a "side effect".
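A minimal sketch of that idea (hypothetical interface, deliberately simpler than the real LADSPA API): if every processor, host or leaf, exposes the same run() entry point, then a host is loadable by another host exactly like any plugin, and its run() merely does more work:

```c
#define MAX_CHILDREN 8

/* Hypothetical common interface: everything that processes audio,
   including a host, exposes the same run() entry point. */
typedef struct plugin {
    void (*run)(struct plugin *self, float *buf, int n);
    /* host-only state: the chain of plugins it drives */
    struct plugin *child[MAX_CHILDREN];
    int n_children;
} plugin_t;

/* a leaf plugin: a fixed gain of 2 */
static void gain_run(plugin_t *self, float *buf, int n) {
    (void)self;
    for (int i = 0; i < n; i++)
        buf[i] *= 2.0f;
}

/* a "host" whose run() simply runs its children in order; that the
   call costs more than a single plugin is the "side effect" above */
static void host_run(plugin_t *self, float *buf, int n) {
    for (int i = 0; i < self->n_children; i++)
        self->child[i]->run(self->child[i], buf, n);
}
```

Nothing in the outer host needs to know whether the thing it loaded is a leaf plugin or another whole host.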

But yes, we want more than one host. First of all, it avoids the
"Cubase is my OS" syndrome. Secondly, and more importantly, it will
allow, for example, an HDR program and a softsynth to share plugins,
even when they are not run at the same time (i.e. they don't share any
common routing). Imagine some EQ plugin, for example, that you would
love to be able to use both with the HDR system during playback and
with the softsynth during live performance.

--p


This archive was generated by hypermail 2b28 : Thu Mar 30 2000 - 13:28:35 EEST