Re: [linux-audio-dev] audio application mixing/routing arch


Subject: Re: [linux-audio-dev] audio application mixing/routing arch
From: Paul Barton-Davis (pbd_AT_Op.Net)
Date: Wed Mar 29 2000 - 04:44:26 EEST


>That's not a good sign. At least for me, a soft-synth is utterly
>useless if it cannot be recorded while being played from a MIDI
>controller in realtime. Does this mean that every soft-synth and
>soft-sampler must double as a single-track HD recorder?
>
> "realtime" ,--> audio output
> MIDI ------> soft-synth ---<
> input `--> disc

If the soft-synth is not multithreaded, then disk I/O will lead to an
inability to meet the latency demands of real-time performance. So the
first requirement is: a thread doing real-time audio synthesis never
blocks for anything except the audio hardware. That's already a rather
complex design requirement for many people.

> "realtime"
> MIDI ----> soft-synth --. mix to
> input \ multitrack ,--> audio output
> >---> audio ---<
> existing / editor `--> new track
> audio ----' to disc
> tracks
>
>I find it very hard to believe that this kind of setup would
>necessarily result in unusable amounts of latency. It's such
>a fundamental thing that without it, there wouldn't be any
>concept of using computers in a music studio.

Huh? There are all kinds of uses for them that don't involve anything
like the above setup. Non-linear real time editing, HDR, realtime
synthesis, non-realtime synthesis, mixdown, FX processing ... these
are all useful functions for computers in a studio, and none of them
have to involve the setup you describe above.

In the chain above, there are at least 2 processes (the synth and the
editor), possibly three, four, or five depending on how one decomposes
the tasks. In the worst case (5), we have:

          soft synth
          disk playback
          audio editor
          mixer
          disk recorder

That's 5 context switches just for one single execution of the whole
chain. Even if this were implemented as a single process with multiple
threads, that's about 5 x 20 usec for Linux threads (not a particularly
efficient implementation, but it's what we have right now), or 100
usec, or 0.1 msec. If you're aiming for low latency, you might be
buffering about 5 ms of audio, so you've just chewed up 2% of the total
cycle time with context switches alone. Move this to whole-process
switches, and the switch cost is even higher, plus all the inter-process
synchronization primitives cost a lot more than the thread equivalents
because they don't live in the same address space anymore.

In short, I just don't see this kind of chaining as feasible when
built out of separate process components.

>Thus I really hope that it is possible to run a nontrivial
>flowgraph in separate processes; otherwise we're not ever
>going to get anywhere.

Sure we are. VST and TDM and the rest don't require separate
processes; I don't think that you could argue that ProTools or Cubase
"hasn't got anywhere". They use a plugin model instead, which is
vastly more efficient than threads, let alone processes.

(of course, the host applications for plugins may well be
 multithreaded, and do the same kinds of chaining internally as
 you were drawing. but that's very different from doing this out of
 whole processes that are connected via some kind of IPC only.)

--p



This archive was generated by hypermail 2b28 : Wed Mar 29 2000 - 05:26:17 EEST