Re: [linux-audio-dev] mucos client/server implementation issues, I need opinions


Subject: Re: [linux-audio-dev] mucos client/server implementation issues, I need opinions
From: Paul Barton-Davis (pbd_AT_Op.Net)
Date: Sat Dec 18 1999 - 23:39:51 EST


>ok,
>if by fullduplex you mean to/from audio board(s)/HDs...

i think benno said that he means fullduplex in the sense that clients
can talk to each other - any client can be a server, and vice versa.

>by pipelined do you mean - using a pipe?
>I have been playing with that idea for communication in the large.
>(Not plugin -> plugin, but engines to engines)

you can't use pipes for high quality audio. they don't hold enough
data (5K) to be useful across more than 1 context switch, and
sometimes not even that (depending on the sample rate and sample
size).
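To make the arithmetic concrete: here is a back-of-the-envelope sketch of how little audio a pipe buffer actually holds. The ~5K figure is taken from above; the 44.1 kHz stereo 16-bit format is an assumption for illustration.

```c
#include <stdio.h>

/* Milliseconds of audio that `bytes` of buffer can hold at the given
 * sample rate, channel count and bytes per sample. */
static double buffer_ms(int bytes, int rate, int channels, int sample_bytes)
{
    return 1000.0 * bytes / ((double)rate * channels * sample_bytes);
}

/* A ~5K pipe at 44.1 kHz stereo, 16-bit samples (assumed format)
 * buffers roughly 29 ms of audio - barely one scheduling quantum:
 *
 *     buffer_ms(5 * 1024, 44100, 2, 2)  ->  about 29.0
 */
```

so if the reading task is not rescheduled within a couple of timer ticks, the pipe either overflows or runs dry.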

>But shouldn't we be able to do a lot better than that?
>
>Suppose you have a server/engine that loads plugins as shared libraries
>(DLL)
>and makes a table of the script/drawn scheme/... of how the plugins are
>connected, then it can call each plugin JIT, no context switches! no
>need
>to do 'wait_for_...', no need to know that the memory is shared - it can
>be local, you only get the pointer.

this was discussed during the mammoth API flood between David and
myself. We want a system that supports both plugins and inter-task
communication, and easily allows code used for one to be reused for
the other. In some cases (many) the plugin system makes more
sense. But for some things, it's nicer to have things instantiated as
separate tasks (processes). the MuCOS API is supposed to support both.

>Instead I am looking at using different sample frequencies to handle
>stuff like that.
>offset could be a signal with a sample frequency of:
>* A constant value / parameter has zero sample frequency.
>* A UI knob is sampled at least with 2 Hz, maybe 10 Hz (100 ms)
>* A signal from another audio plugin is probably at least 8000 Hz.
>But note that the source of all these signal types
>can be generated by another plugin (or rather several other types of).

this is basically what Csound and most subsequent systems have
done. The Nord modular samples control signals at 24KHz, but uses
48KHz for audio data. Csound and SAOL (and Quasimodo) let you specify
the sample rates for both data types independently.
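The control-rate/audio-rate split can be sketched as follows: a control (k-rate) value is updated once per control period, and each control period covers arate/krate audio samples. The rates and function names here are illustrative assumptions, not anything from the MuCOS API.

```c
#include <stddef.h>

#define ARATE 48000             /* audio sample rate, Hz (assumed)   */
#define KRATE 4800              /* control sample rate, Hz (assumed) */
#define KSMPS (ARATE / KRATE)   /* audio samples per control sample  */

/* Apply a control-rate gain signal to an audio-rate buffer: one gain
 * value is held constant across the KSMPS audio samples of its
 * control period, Csound-style. audio_a has nk * KSMPS samples. */
static void apply_gain(const float *gain_k, size_t nk, float *audio_a)
{
    for (size_t k = 0; k < nk; k++)          /* one pass per control tick */
        for (size_t i = 0; i < KSMPS; i++)   /* KSMPS audio samples each  */
            audio_a[k * KSMPS + i] *= gain_k[k];
}
```

the win is that the gain is read (and any expensive control computation done) only krate times per second instead of arate times.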

also, from my own experience, you should not even begin to pay
attention to the existence of a UI. the UI system (X, Windows,
whatever) will provide whatever "sampling" rate it can (based on mouse
motion interrupts, for example) - the internals of the system should
not try to second guess this.

don't get too caught up in the rates stuff. i recently wrote a long
memo to the SAOL guys about rate stuff - it's a definite win to have it
as a concept in a system, but I think that both Csound and even more
so SAOL have confused a semantic entity with something that *must* be
mirrored in the implementation. I feel confident based on my work with
Quasimodo and the Csound opcodes in saying that this is not
true. SAOL, for example, will absolutely not allow the following kind
of expression:

   if (control_signal < audio_signal)
         audio_signal = value;
      
the complaint will be that the value of the conditional takes the
lowest rate of its elements, and the body cannot have a rate greater
than that of the conditional. Ack. C and C++ let you do this just
fine, and so should a sound synthesis/processing language.
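One way an implementation could support the conditional that SAOL rejects is to upsample the control signal by sample-and-hold and evaluate the test per audio sample - which shows the restriction is an implementation choice, not a semantic necessity. The names and the rate ratio below are assumptions for illustration.

```c
#include <stddef.h>

#define KSMPS 8   /* assumed audio samples per control sample */

/* Per audio sample n:  if (control_signal < audio_signal)
 *                          audio_signal = value;
 * The control signal is held at its last value between control
 * ticks (sample-and-hold upsampling). audio_a has nk * KSMPS
 * samples. */
static void clamp_below_control(const float *control_k, size_t nk,
                                float *audio_a, float value)
{
    for (size_t k = 0; k < nk; k++) {
        float c = control_k[k];             /* hold this control tick */
        for (size_t i = 0; i < KSMPS; i++) {
            size_t n = k * KSMPS + i;
            if (c < audio_a[n])
                audio_a[n] = value;
        }
    }
}
```

nothing about mixed rates forces the language to forbid the expression; the compiler just has to pick an upsampling rule for the slower operand.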

--p



This archive was generated by hypermail 2b28 : Fri Mar 10 2000 - 07:23:26 EST