Re: [linux-audio-dev] Re: timing issues in a client/server environment


Subject: Re: [linux-audio-dev] Re: timing issues in a client/server environment
From: Paul Barton-Davis (pbd_AT_Op.Net)
Date: Sat Oct 09 1999 - 19:11:27 EDT


>I agree that apps should be able to do their own plugin hosting,
>but sooner or later we will need some engine which allows routing and
>integration of simultaneously running audio apps,
>or audio software vendors will not be very motivated to port
>their apps to linux, and audio software users will not switch to linux
>for serious audio work.
>What will you say when users begin asking:
>"why am I unable to record the gigasampler pcm output in my cubase,
>or drive the reaktor synth from my preferred sequencer and record
>the result on my HD recorder?"
>sooner or later we will need "the linux soundsystem".
>An app can choose not to use our api by sticking to legacy /dev/dsp and /dev/midi,
>but it will still be fed through our engine.
>If you don't run the engine, legacy apps would still function.

that's fine.
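
(aside: one way to pull off that "legacy /dev/dsp gets fed through the
engine" trick is LD_PRELOAD interposition. a minimal sketch, with the
engine's socket path invented for illustration, and the OSS ioctl()
emulation -- the genuinely hard part -- waved away:)

/* dsp_shim.c -- illustrative only: interpose open(2) so a legacy
 * app asking for /dev/dsp is handed a connection to a hypothetical
 * engine socket instead. a real shim would also have to fake the
 * OSS ioctl()s, which is where the real pain lives.
 *
 * build: gcc -shared -fPIC dsp_shim.c -o dsp_shim.so -ldl
 * run:   LD_PRELOAD=./dsp_shim.so some_legacy_app
 */
#define _GNU_SOURCE
#include <dlfcn.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

int open(const char *path, int flags, ...)
{
    static int (*real_open)(const char *, int, ...);
    if (!real_open)
        real_open = (int (*)(const char *, int, ...))
                        dlsym(RTLD_NEXT, "open");

    if (strcmp(path, "/dev/dsp") == 0) {
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        struct sockaddr_un sa = { 0 };
        sa.sun_family = AF_UNIX;
        strcpy(sa.sun_path, "/tmp/audio-engine");   /* invented path */
        if (fd >= 0 && connect(fd, (struct sockaddr *)&sa, sizeof(sa)) == 0)
            return fd;     /* the app now write()s PCM to the engine */
        if (fd >= 0)
            close(fd);
        /* no engine running: fall through to the real device */
    }
    return real_open(path, flags);   /* ignores the optional mode arg */
}

i believe esd's esddsp wrapper plays roughly this game already.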

>An OS that provides only PCM and MIDI drivers is not suitable
>to provide a powerful and flexible DAW environment.

yes it is. in fact, it's precisely why Paris, ProTools and the rest all
work under Windows :) oh, ok, these are not flexible enough. i
agree. but this has to do with inter-app communication, not
*necessarily* access to the device drivers.

>Quasimodo is a nice app with many functions, but it's still a monolithic app,
>and my foobar wave editor can't use your 24dB lowpass filter with a simple
>API call.

that's right. it should use my 24dB lowpass filter *plugin*, which
would happen to be the same one that Quasimodo would use, in some
future version where Csound compatibility is no longer the crux of the matter.

>Paul, the keyword is "integration", and I think many people are not very
>satisfied with windoze's integration between audio apps,

That's true for people who want lots of apps that all talk to the h/w
at the same time.

That's not true, as far as I can tell, for people who use well-designed
audio environments.

>(see for example how Seer Reality monopolizes the DirectX PCM audio,
>which means if you want to play your mp3 on your 2nd soundcard, you can't)

part of that problem is just braindead design. Seer are not alone in
this - even the wonderful Bill Schottstaedt wrote his sndlib code to
open /dev/dsp when the library initializes, even though it will likely
not use it!

>and if we are able to come up with a good standard,
>I'm sure that many will have valid reasons to switch.

i agree entirely. i just don't want the model of Linux audio to push
the existence of "direct access" to the PCM and MIDI ports out of the
picture, because for some, this is the right model.

two context switches per delivery of data is just silly, for some things.
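
(back of the envelope: at 44.1kHz with 256-frame blocks, that's ~172
buffer deliveries a second; app -> server -> device instead of
app -> device doubles the wakeups before the server has done any
useful work at all.)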

i would rather have a defined method by which any application can talk
to any other application, and either of them might actually own the
device nodes. you could start up a DAW, and *it* would be the engine,
accepting data from other sources, or you could start up a replacement
for esd, and *it* would do the same thing, etc. i don't want to force
programs into using something other than the Unix file interface if
they believe they want to use it.
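
concretely, a client could do no more than this (the rendezvous path
/tmp/audio-engine is invented):

/* audio_connect() -- talk to whoever owns the audio: an engine if
 * one is running, the device node otherwise. */
#include <fcntl.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

int audio_connect(void)
{
    int fd = socket(AF_UNIX, SOCK_STREAM, 0);
    struct sockaddr_un sa = { 0 };
    sa.sun_family = AF_UNIX;
    strcpy(sa.sun_path, "/tmp/audio-engine");

    if (fd >= 0 && connect(fd, (struct sockaddr *)&sa, sizeof(sa)) == 0)
        return fd;               /* a DAW or esd-alike owns the h/w */
    if (fd >= 0)
        close(fd);
    return open("/dev/dsp", O_WRONLY);   /* nobody home: own it ourselves */
}

either way the caller just write()s PCM to whatever fd it got back, so
the Unix file interface survives whether an engine is running or not.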

--p



This archive was generated by hypermail 2b28 : Fri Mar 10 2000 - 07:27:13 EST