Re: [linux-audio-dev] Re: Plug-in API progress?


Subject: Re: [linux-audio-dev] Re: Plug-in API progress?
From: Paul Barton-Davis (pbd_AT_Op.Net)
Date: Sun Sep 26 1999 - 23:55:30 EDT


>An RTL driver can easily catch events and time them within 5 usecs or so on an
>average Celeron box. The hardware can be more troublesome...

Regular Linux can do the same thing - remember, RTL doesn't improve
the latency of an ISR (sti/cli excepted).
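
Just to be concrete, timestamping in the ISR of a regular Linux driver
amounts to a do_gettimeofday() call before the event gets queued for
user space. A sketch (the two helper functions are invented,
driver-internal things; only do_gettimeofday() is a real kernel call):

#include <linux/time.h>         /* do_gettimeofday(), struct timeval */
#include <linux/ptrace.h>       /* struct pt_regs */

/* hypothetical driver-internal helpers */
extern int  read_event_from_hw (void *dev_id);
extern void event_queue_put (struct timeval *stamp, int data);

static void
my_interrupt_handler (int irq, void *dev_id, struct pt_regs *regs)
{
        struct timeval stamp;

        do_gettimeofday (&stamp);       /* microsecond-resolution stamp */
        event_queue_put (&stamp, read_event_from_hw (dev_id));
}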

>> thats still a lot
>> of samples that may or may not have been generated so far by any given
>> plugin. The timestamp will be accurate, but there is no way to sync it
>> to the sample generation process.
>
>If you know the offset between the time base of the event and the time base of
>the output device, there's no problem.

Sure, but there is no fixed offset if it's measured with sample
resolution. You simply don't know the difference at any particular
point in time.
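
To make that concrete: the conversion we'd like to do looks like the
sketch below, and it is the stream_start_usecs term - the wall-clock
time at which sample 0 left the DAC - that we never know to better
than a buffer's worth of accuracy. (The names are mine, purely for
illustration.)

unsigned long
event_frame (unsigned long event_usecs,
             unsigned long stream_start_usecs,
             unsigned long sample_rate)
{
        /* the unknown quantity is stream_start_usecs, not event_usecs */
        double elapsed = (double) (event_usecs - stream_start_usecs) / 1e6;

        return (unsigned long) (elapsed * (double) sample_rate);
}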

>> The best you can do is sync it to
>> the state of the DAC, as understood by the engine, which is well known
>> immediately after a write to the DAC has returned, but not known at
>> any other time.
>
>True. With OSS and ALSA (currently, at least). What about an IOCTL that gives
>you the exact time of the next sample you'll get if you do a read()?

ALSA provides an ioctl that does something fairly close to this, so it
wouldn't be hard to fix, I think. But this still won't give you sample
accuracy unless you read the data from the driver one sample at a time.
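
Something like this is what I have in mind - note that
AUDIO_GET_NEXT_FRAME is an invented request, not an existing OSS or
ALSA ioctl:

#include <stdio.h>
#include <sys/ioctl.h>

#define AUDIO_GET_NEXT_FRAME  0xbeef    /* hypothetical request code */

/* ask the driver which frame the next read()/write() will refer to */
int
next_frame (int audio_fd, unsigned long *frame)
{
        if (ioctl (audio_fd, AUDIO_GET_NEXT_FRAME, frame) < 0) {
                perror ("AUDIO_GET_NEXT_FRAME");
                return -1;
        }
        return 0;
}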

Consider the following scenario:

         * engine has just finished a control cycle, and has
           returned from sending data to the output interface (DAC,
           ADAT, whatever). We assume that it blocks until space
           is available there, assuring us that we are synced with
           the output stream.

         * engine grabs a control-cycle's worth of input samples
           to make them available for the next cycle of plugin
           execution.

         * engine starts plugin cycle.

         * interrupt occurs from some external h/w source (not audio). it's
           timestamped, preferably by the driver, but possibly in user
           space, and the event it represents becomes available for
           use by the engine.

         * the engine finishes the plugin cycle, and sends the data
           to the output interface.

         * the engine gets the cycle's worth of input data.

         * it starts the next plugin cycle, queueing the event in the
           plugins' event buffers.
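
In rough C, the loop in that scenario is this (every function here is
a placeholder for whatever the engine really does; the point is only
to show where the event lands relative to the audio being computed):

#define CYCLE_FRAMES 64         /* control_cycle_frames, say */

extern void write_output (float *, unsigned int);   /* blocks until space */
extern void read_input (float *, unsigned int);
extern void queue_pending_events (void);
extern void run_plugins (float *, float *, unsigned int);

void
engine_loop (void)
{
        static float inbuf[CYCLE_FRAMES], outbuf[CYCLE_FRAMES];

        for (;;) {
                /* returns only when the data is queued: this is the
                   one place where we are synced with the output stream */
                write_output (outbuf, CYCLE_FRAMES);

                /* a control-cycle's worth of input for this cycle */
                read_input (inbuf, CYCLE_FRAMES);

                /* events that arrived during the *previous* cycle get
                   queued into the plugins' event buffers here */
                queue_pending_events ();

                /* an external interrupt that fires in here is
                   timestamped, but not seen until the next pass */
                run_plugins (inbuf, outbuf, CYCLE_FRAMES);
        }
}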

OK, so how is that event supposed to have a sample-accurate effect? All
we know at the point the event occurs is that a control cycle has
started. We have no idea how far along we are, because we could be in
the middle of any given plugin, or at the beginning or the end, or in
between plugins. We could estimate how long the cycle will take, based
on previous cycles with the same net, and judge from that, but this is
not going to provide sample accuracy.

The best I can see is to use the ioctl call to determine the current
time as far as the *output* interface is concerned. You can't use the
*input* interface, because it will give a time based on the samples
you've already read, rounding its accuracy to the buffer size.

But this implies that timestamping happens in user space, because
otherwise it's a cross-driver call, which may not be a very good idea
from an ISR context. If it's in user space, I don't think that the
timing will be better than 0.5ms, and even that is with the user
thread running SCHED_FIFO, and presumably not the engine thread - bad
news on a UP box ...
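
For completeness, the user-space version of the mapping would be
something like this - out_frame_time() stands in for the hypothetical
ioctl above, and the rest of the names are made up too:

#include <sys/time.h>

/* hypothetical wrapper around the "time of the next output frame" ioctl */
extern int out_frame_time (int audio_fd,
                           unsigned long *frame,     /* next frame number */
                           struct timeval *when);    /* when it will play */

unsigned long
stamp_to_frame (int audio_fd, const struct timeval *event, long sample_rate)
{
        unsigned long frame;
        struct timeval when;
        double delta;

        out_frame_time (audio_fd, &frame, &when);

        /* seconds between the event and the next output frame */
        delta = (double) (when.tv_sec - event->tv_sec)
              + (double) (when.tv_usec - event->tv_usec) / 1e6;

        /* the event belongs delta * sample_rate frames before the
           next frame we will write */
        return frame - (unsigned long) (delta * sample_rate);
}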

>> What *are* the events that might tell a plugin to "do something" ?
>>
>> The class of events that I imagine to have important time properties
>> are those that reconfigure the processing net. Turning on a
>> plugin. Turning it off. Not much else.
>>
>> When someone tweaks a MIDI CC, I expect the effect to take place right
>> *now*. When someone twiddles an on-screen control element, I expect
>> the effect to take place right *now*. I don't see where a notion of
>> "when to do things" enters into the area of parameter updates.
>
>Automation. Live systems are obviously *very* different from studio systems...
>:-)

Nope. The automation I've seen in ProTools, Logic, the Pulsar and
others just says *when* to change a parameter. It doesn't tell a
plugin to *do* anything.
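
Put another way, an automation event is just a (time, parameter,
value) triple that the *host* applies; the plugin never hears about
it. Roughly (all the names here are mine):

typedef struct {
        unsigned long frame;    /* when the change takes effect */
        int           param;    /* which parameter */
        float         value;    /* the new value */
} AutomationEvent;

/* called by the engine at the top of each control cycle; the plugin
   just sees its parameter holding a different value */
void
apply_automation (float **params, AutomationEvent *ev, int nev,
                  unsigned long cycle_start, unsigned long cycle_frames)
{
        int i;

        for (i = 0; i < nev; i++) {
                if (ev[i].frame >= cycle_start &&
                    ev[i].frame < cycle_start + cycle_frames) {
                        *params[ev[i].param] = ev[i].value;
                }
        }
}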

>> I consider a plugin to be a little piece of code that generates
>> samples according to an algorithm and the state of its parameters. It
>> doesn't change what it does unless you (1) change the algorithm, which
>> is nonsensical - the plugin *is* the algorithm (+) or (2) change the
>> parameters. The parameters can be updated in real time. So I can't see
>> what kind of effects you might be talking about.

>The audio clip player of a hard disk recording engine. It's actually
>*required* to work with at least sample accuracy.

That's a different issue. We're talking about events that alter the
behaviour of the plugin. Perhaps I'm missing something here.

>Plug-ins that are modulated from envelope generators in a soft synth
>or sampleplayer. These need to stay in sync with the audio data they
>process,

Are you talking about envelope generators that are part of the same
system? Once again, these just do parameter updates - they do not
change the code that a plugin executes.
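
I.e. something like this, where the EG is just another module whose
output lands in a parameter location that the filter (or whatever)
reads every cycle (the names and the trivial decay shape are invented):

typedef struct {
        float level;            /* current envelope level            */
        float decay_per_cycle;  /* subtracted once per control cycle */
        float *target_param;    /* e.g. a filter's cutoff parameter  */
} EnvGen;

/* run once per control cycle, before the plugin it modulates */
void
envgen_run (EnvGen *eg)
{
        eg->level -= eg->decay_per_cycle;
        if (eg->level < 0.0f)
                eg->level = 0.0f;

        /* the "event" is nothing more than this store */
        *eg->target_param = eg->level;
}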

>2) On any machine (UP or SMP), this plug-in may execute within a
> fraction of the time it takes to play the buffer it's changing.
> That is, the time at which the plug-in receives the event, and
> acts upon it, has a very nonlinear relation to the buffer time.

Right, this is a point you've made before. I accept it, and I'm not
trying to get around it. But my point is that the example shows how
async parameter updates remove any need for events or timestamps for
the most common class of events. Whether or not they actually occur
asynchronously with plugin execution is a side effect, and mostly
irrelevant.
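
By "async parameter updates" I mean nothing fancier than this: some
control thread (GUI, MIDI, whatever) stores into a location, and the
plugin reads it at the top of its next run. A sketch (the names are
invented; a real version would worry about the atomicity of the store
and load):

typedef struct {
        volatile float value;   /* written by GUI/MIDI, read by the plugin */
} Port;

/* GUI or MIDI thread */
void
set_parameter (Port *p, float v)
{
        p->value = v;           /* takes effect on the plugin's next cycle */
}

/* plugin run function, called once per control cycle */
void
plugin_run (Port *gain, float *buf, unsigned int nframes)
{
        float g = gain->value;  /* sampled once per cycle */

        while (nframes--)
                *buf++ *= g;
}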

 [ ... code criticism elided ... ]

OK, OK. I didn't mean this as a piece of production code, just
pseudo-code for pedagogical purposes. If I had to write it reasonably
efficiently, it would look something like this:

void
process (MyClosureType *closure)
{
        unsigned int nframes = control_cycle_frames;

        switch (*closure->switch_state) {
        case 1:
                while (nframes--) {     /* nframes--, not --nframes,
                                           or we'd drop one frame */
                        ... foo ...
                }
                break;
        case 2:
                while (nframes--) {
                        ... bar ...
                }
                break;
        /* ... more cases ... */
        }
}
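
(The point of writing it this way is that the switch happens once per
control cycle rather than once per sample: when the parameter behind
*closure->switch_state changes, the plugin just picks a different
inner loop at the start of its next cycle, and nothing costs anything
extra per sample.)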

>> But there aren't very many events that do that ... Most
>> of them are just parameter updates. These don't change what the plugin
>> "does" in the sense that it needs to be told about it. They may well
>> change the output of the plugin, but it doesn't have to care about
>> that :)

>MIDI events? I'd say they change things pretty much in soft
>synth... (Note: I'm not thinking about soft synths built from lots of
>small plug-ins here. This is raw, optimized code for generating N
>voices.)

Ah, that's what I mean by having a different idea of what a plugin
should be.

I don't want to use such a piece of code - it just doesn't do what I
want, which is to be modular, flexible, reusable and extensible. OK,
so you gain a few percent in speed by coding it that way, but by next
year we'll be back to square one on that count, as long as N stays the
same (which I'd argue it basically does).

Every time I want to rip out the oscillator algorithm, it means
recompiling the plugin. Every time I decide I want to alter the filter
design, it means more code, and possible slowdowns if I want to be
able to select different filters at runtime, etc. For me, the whole
attraction of "plugins" is that they avoid this kind of thing.

So, yes, in the case you describe, MIDI events would be significant,
since they may fundamentally alter what the plugin does. But in a
system where all a MIDI note does is insert a new copy of the plugin
into the processing net, there is no reason to communicate such events
to plugins at all.
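
That is, in such a system the MIDI handling looks roughly like the
sketch below - every function in it is a placeholder for whatever the
engine provides - and the voice plugin itself never sees the MIDI
event at all:

typedef struct Plugin Plugin;

/* placeholders for whatever the engine really provides */
extern Plugin *new_voice (int note, int velocity);  /* instantiate a copy */
extern void    add_to_net (Plugin *);               /* splice into the net */
extern void    remove_from_net (Plugin *);
extern Plugin *find_voice (int note);

void
handle_midi (unsigned char status, unsigned char note, unsigned char vel)
{
        if ((status & 0xf0) == 0x90 && vel > 0) {            /* note on  */
                add_to_net (new_voice (note, vel));
        } else if ((status & 0xf0) == 0x80 ||
                   ((status & 0xf0) == 0x90 && vel == 0)) {  /* note off */
                Plugin *v = find_voice (note);
                if (v)
                        remove_from_net (v);
        }
}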

[ just a note on something related to this: polyphonic plugins are one
  area of Quasimodo where things don't feel quite right. Because there
  is only one copy of the output buffer for the plugin, each "voice" (an
  instance of the plugin within the current DSP program/net) has to
  cooperate a little to avoid trashing the current contents of the
  buffer. This is easily accomplished: a special DSP instruction resets
  the output buffer(s) once per control cycle, regardless of the number
  of voices, and each voice uses "+=" instead of "=" when assigning
  values to the output buffer, so that we end up with the summed output
  of all the voices, not just the output of the last voice to run.

  Even so, it bothers me.
]
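
For what it's worth, the scheme looks roughly like this in C (a
sketch, not actual Quasimodo code; the names are mine):

#include <string.h>

/* run once per control cycle, before any voice, no matter how many
   voices exist -- the "special DSP instruction" mentioned above */
void
clear_output (float *outbuf, unsigned int nframes)
{
        memset (outbuf, 0, nframes * sizeof (float));
}

/* each voice accumulates with "+=", so the buffer ends up holding the
   sum of all the voices, not just the last one to run */
void
voice_run (float *outbuf, unsigned int nframes, float (*next_sample) (void))
{
        while (nframes--)
                *outbuf++ += next_sample ();
}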

I don't think that our aims for Audiality and Quasimodo really differ
at all. My purpose in discussing all this has been to try to see
whether I am missing something that would prevent Quasimodo from
evolving into exactly the kind of system we are both talking about. I
mean, as far as offline processing goes, Quasimodo already handles
Csound scores, which in many cases can never be handled in real time.
You just turn on Quasimodo's "tape" interface, shut down the DAC, and
let it run. By morning, your 512-piece orchestra, with individual
reverb for each instrument, should have finished playing "Happy
Birthday" or its equivalent in Sweden :)

--p


