Re: [linux-audio-dev] Re: Plug-in API progress?


Subject: Re: [linux-audio-dev] Re: Plug-in API progress?
From: David Olofson (audiality_AT_swipnet.se)
Date: Sun Sep 26 1999 - 17:05:12 EDT


On Sun, 26 Sep 1999, Paul Barton-Davis wrote:
> >> cycle. That's 533 instructions per frame. If you're running on a UP,
> >> which many people will be, I think that the chances of you
> >> timestamping the event with sample accuracy are not good.
> >
> >Depends on where your input events come from. Triggering a soft synth with the
> >audio input from mics will give you a source of sample accurate events.
>
> i don't get this. are you suggesting that you are reading from an ADC,
> and then using the time implied by the sample count ? this will work
> for some cards, and not for others (some don't use the same clock for
> input as for output).

In that case you're not using hardware that can handle your
requirements...

> It certainly won't provide sample accurate
> timestamping if you use one card for input and one for output: clock
> drift.

True, it doesn't with current OSS or ALSA drivers. But with drivers that have
sample accurate sync support, it's not a problem. No hardware support is needed
as long as the IRQ timing is precise enough, which it is on most mainstream
cards. (That is, on the ones that really need it, as they have no sync
interfaces.)

I intend to add this capability to the drivers I port to RTL, which means that
most cards will provide better than sample accurate timing. Not sure, but I
think some kind of sync system has been discussed on the ALSA list as well.

> OK, so if you use a Hammerfall and something else with word
> clock syncing, it's not a problem, but I didn't think that you wanted
> to depend on this kind of hardware detail.

Decent hardware is nice, but as I said, it can be done in the driver as well.
With a kernel with low latency IRQ handling, you even get about the same
precision as with "real" cards.

> I know that I don't. And
> sure, you can play games to resync the cards every so often, but this
> is not the making of a reliable system ...

Resync? If the cards drift, you simply have to drop or insert samples or, if
the difference is big, resample. Or if you don't care about an exact
sample:sample relation between the trigger inputs (as in my example), just ask
the drivers about the exact time, and convert the event time stamps from the
trigger sensor plug-ins. (That would actually be handled automatically with a
sub net system as I think of it, as these plug-ins would end up in their own
sub net that's sync'ed with the input card.)
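
For the drop/insert case, it's something like this (just a sketch; the
function and its calling convention are made up for illustration):

/* Copy 'frames' samples from src to dst while correcting one frame
 * of drift in the middle of the block: correction > 0 drops one
 * source sample, correction < 0 inserts (repeats) one. Returns the
 * number of source samples consumed. */
static int copy_with_drift(float *dst, const float *src,
                           int frames, int correction)
{
        int out, in = 0;

        for (out = 0; out < frames; out++) {
                dst[out] = src[in];
                if (out == frames / 2) {
                        if (correction > 0)
                                in += 2;   /* drop one source sample */
                        else if (correction == 0)
                                in++;
                        /* correction < 0: stay; repeat this sample */
                } else
                        in++;
        }
        return in;
}

For bigger differences you'd resample instead, as mentioned above.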

> >Not true with a driver that time stamps input data when the
> >interrupts occur. (I did that back in the Amiga days...) And you
> >shouldn't even need that with real "high accuracy" hardware.
>
> not sample accurate though, just real-world accurate. If you're
> running with +/- 0.5ms scheduling jitter (woo-hoo!),

An RTL driver can easily catch events and time them within 5 µs or so on an
average Celeron box. The hardware can be more troublesome...

> thats still a lot
> of samples that may or may not have been generated so far by any given
> plugin. The timestamp will be accurate, but there is no way to sync it
> to the sample generation process.

If you know the offset between the time base of the event and the time base of
the output device, there's no problem.
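
That is, once you have the offset, the conversion is trivial (sketch; the
types and names are made up):

typedef long long frames_t;     /* time in sample frames */

/* Translate an event time stamp from the input card's time base to
 * the output card's, and then to an offset within the buffer that
 * is currently being rendered. */
static int event_buffer_offset(frames_t event_time_in,
                               frames_t in_to_out_offset,
                               frames_t buffer_start_out)
{
        frames_t t_out = event_time_in + in_to_out_offset;
        return (int)(t_out - buffer_start_out);
}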

> The best you can do is sync it to
> the state of the DAC, as understood by the engine, which is well known
> immediately after a write to the DAC has returned, but not known at
> any other time.

True. With OSS and ALSA (currently, at least). What about an IOCTL that gives
you the exact time of the next sample you'll get if you do a read()?
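
Something like this, perhaps. (Note that SNDCTL_DSP_GETNEXTTIME is pure
fiction - no such ioctl exists in OSS or ALSA. It's just what the interface
could look like.)

#include <sys/ioctl.h>

struct dsp_next_time {
        long long frame;        /* device frame count of the next
                                   sample a read() would return */
};

#define SNDCTL_DSP_GETNEXTTIME  0x5471  /* fictional request number */

static int next_sample_time(int fd, long long *frame)
{
        struct dsp_next_time t;

        if (ioctl(fd, SNDCTL_DSP_GETNEXTTIME, &t) < 0)
                return -1;
        *frame = t.frame;
        return 0;
}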

> in short, you can't get sample accurate timestamping unless you reduce
> the control cycle to 1, which we're agreed is a bad idea.

Still don't agree. You can't get single sample latency without a single sample
cycle, but you *can* timestamp events as precisely as the hardware and OS
allow, and you *can* use that information to handle the events with a fixed,
sample accurate delay.

You can't do it without driver support (which may require a hard RT kernel if
there's no hardware support), but there's not really a technical problem.
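
In a block based engine, this boils down to splitting the process loop at
the event time stamps. Rough sketch (the event queue and types are made up;
the fixed delay is assumed to have been added to the time stamps by the
sender):

typedef struct event {
        long long time;         /* in output frames */
        float value;
        struct event *next;
} event_t;

static void process(float *out, int frames, long long block_start,
                    event_t **queue, float *param)
{
        int i = 0;

        while (i < frames) {
                int end = frames;

                /* apply all events due at or before sample i */
                while (*queue && (*queue)->time - block_start <= i) {
                        *param = (*queue)->value;
                        *queue = (*queue)->next;
                }

                /* render up to the next event, or the end of block */
                if (*queue) {
                        long long off = (*queue)->time - block_start;
                        if (off < end)
                                end = (int)off;
                }
                for ( ; i < end; i++)
                        out[i] = *param;        /* dummy "synthesis" */
        }
}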

> >> So no, its true that Quasimodo's engine can't tell you the time with
> >> sample resolution, but plugins know the time with sample
> >> resolution. Strange, eh ?
> >
> >And? What to do with your perfect notion of time, when no one tells you
> >exactly when to do things?
>
> Ah, now we begin to get down to the meat (although I don't eat the
> stuff :)
>
> What *are* the events that might tell a plugin to "do something" ?
>
> The class of events that I imagine to have important time properties
> are those that reconfigure the processing net. Turning on a
> plugin. Turning it off. Not much else.
>
> When someone tweaks a MIDI CC, I expect the effect to take place right
> *now*. When someone twiddles an on-screen control element, I expect
> the effect to take place right *now*. I don't see where a notion of
> "when to do things" enters into the area of parameter updates.

Automation. Live systems are obviously *very* different from studio systems...
:-)

> I consider a plugin to be a little piece of code that generates
> samples according to an algorithm and the state of its parameters. It
> doesn't change what it does unless you (1) change the algorithm, which
> is nonsensical - the plugin *is* the algorithm (+) or (2) change the
> parameters. The parameters can be updated in real time. So I can't see
> what kind of effects you might be talking about.

The audio clip player of a hard disk recording engine. It's actually *required*
to work with at least sample accuracy.

Plug-ins that are modulated from envelope generators in a soft synth or sample
player. These need to stay in sync with the audio data they process, or you'll
have "analog feel" whether you like it or not. I certainly don't want that kind
of behaviour from an all-digital system. It's a flaw that should have gone away
with the DSP + MCU synth design (although I believe some are still doing it
that way...), as it forces you to use samplers and static waveforms in cases
where you don't want to.
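
That is, the envelope has to advance in lockstep with the samples it
modulates - per sample, not per block. Trivial example (a one-pole ramp;
the names are made up):

/* Apply a gain envelope that moves toward 'target' at a per sample
 * 'rate', so the modulation stays phase locked to the audio. */
static void process_gain(float *buf, int frames,
                         float *env, float target, float rate)
{
        float e = *env;
        int i;

        for (i = 0; i < frames; i++) {
                e += (target - e) * rate;       /* advance per sample */
                buf[i] *= e;
        }
        *env = e;       /* carry state into the next block */
}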

> Perhaps we have a different notion of what a plugin is ? It doesn't
> seem that way, however.

Perhaps a different notion of what a plug-in should be able to do, and with
what accuracy?

> (+) NOTE: this doesn't mean that the plugin can't contain conditional
> code. But this is just handled with a parameter update. Consider a
> plugin with a switch that toggles between two different choices. Right
> now, the parameter that models the switch state is set to 1. Someone
> clicks the GUI representation, and we decide to change it to a 2. No
> problem: no timestamping needed - we just alter the parameter
> asynchronously, and the plugin keeps running.
>
> process (MyClosureType *closure)
> {
>         while (samples_to_generate) {
>                 if (*closure->switch_state == 1) {
>                         ... foo ...
>                 } else {
>                         ... bar ...
>                 }
>         }
> }
>
> There - we just told the plugin to do something, and it did it. No
> events, no timestamps. What am I missing ?

1) The engine will most likely block the GUI code on a single CPU
   machine, so you'll never receive events in the middle of the
   process loop.

2) On any machine (UP or SMP), this plug-in may execute within a
   fraction of the time it takes to play the buffer it's changing.
   That is, the time at which the plug-in receives the event and
   acts upon it has a very nonlinear relation to the buffer time.

Besides, having the conditional code inside the inner loop isn't very nice from
a code optimisation POV. Even if the branch predictor will be right most of the
time on a modern CPU, it will fail two or three times at the start of a change,
and that may be a significant amount of pipeline flushes with a small buffer
size. (A pipeline flush on a Pentium MMX, P-II/III or Celeron costs around 3
instruction cycles IIRC...)

And if you have more switches, you get more trouble... For such situations, you
would be better off using a function pointer that you change when you receive an
event, or a switch() with an ordered list of cases, as that gets optimized into
a jump table. (And if the CPU and/or compiler is smart, there are no pipeline
flushes on jump tables, as other instructions can be executed while the jump
address is being calculated.)
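
The function pointer version of the example above would look something like
this (sketch; the struct and names are made up):

typedef void (*render_fn)(float *out, int frames);

static void render_foo(float *out, int frames) { /* ... foo ... */ }
static void render_bar(float *out, int frames) { /* ... bar ... */ }

struct plugin {
        render_fn render;       /* current algorithm variant */
};

/* called once, when the switch event is received */
static void on_switch_event(struct plugin *p, int state)
{
        p->render = (state == 1) ? render_foo : render_bar;
}

/* the inner loop itself stays branch free */
static void process(struct plugin *p, float *out, int frames)
{
        p->render(out, frames);
}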

> >> >> if a plugin is executing in any given cycle, then its query of
> >> >> the current time will always return the same value. that doesn't mean
> >> >> there have been no events - it means it's busy generating the same
> >> >> output buffer (and working on the same input buffer) as everybody
> >> >> else who will be executed during that cycle.
> >> >
> >> >"will be" is very important here.
> >>
> >> why ?
>
> >They're not executed in parallel == not at the same time. Unless the
> >code that "sends" events is blocked by the engine code, events may
> >occur in the middle of the cycle - which means that some plug-ins
> >will execute before that event and others after it. So, which group
> >does the event receiver belong to...?
>
> Once again, what kind of events are we talking about here ? Things
> that reconfigure the net don't get handled until the end of the
> control cycle.

Plug-ins don't receive that kind of event. (They do, however, receive events
telling them how to act after the net change, if necessary, but such events are
always first in the buffer. [If I decide to send them to the process() function
at all, that is. I have to think more about that.])

> But there aren't very many events that do that ... Most
> of them are just parameter updates. These don't change what the plugin
> "does" in the sense that it needs to be told about it. They may well
> change the output of the plugin, but it doesn't have to care about
> that :)

MIDI events? I'd say they change things quite a bit in a soft synth... (Note:
I'm not thinking about soft synths built from lots of small plug-ins here. This
is raw, optimized code for generating N voices.)

> >I do appreciate your insightful comments, and I have had to do a lot of
> >thinking at times to see if I'd really got things right. :-)
>
> Likewise. I am enjoying the challenge of figuring out if Quasimodo's
> system really does make as much sense as I think it does, or if I
> missed something large, or if I missed something small. It's all good
> stuff.

I think Quasimodo's system makes a lot of sense, especially for pure low
latency real time applications. There's no way my system will beat it WRT
performance in that kind of situation.

My goal is to design and implement a system that can cover the whole range
from ultra low latency real time to off-line processing - without ending up as
something that doesn't do anything too well... (Simple, eh? ;-) The low overhead
system of Quasimodo is very inspiring when trying to turn an inherently complex
design into something nice, efficient and useful. It remains to be seen what
all this results in...

//David

 ·A·U·D·I·A·L·I·T·Y·   P r o f e s s i o n a l   L i n u x   A u d i o
- - ------------------------------------------------------------- - -
    ·Rock Solid      David Olofson:
    ·Low Latency     www.angelfire.com/or/audiality   ·Audio Hacker
    ·Plug-Ins        audiality_AT_swipnet.se          ·Linux Advocate
    ·Open Source                                      ·Singer/Composer


