
Subject: Re: [linux-audio-dev] Random thought on HDR latency compensation
From: Paul Barton-Davis (pbd_AT_Op.Net)
Date: Tue Apr 25 2000 - 04:20:01 EEST


>3 & 4) [Advanced apologies if I've missed the point and am being
>particularly patronising.] This is where the elaborate debates on
>multithreading begin - while the current frame [T,T+D) is being played by
>the DSP on your audio card, another thread will be running the plugins
>required to produce the next frame [T+D,T+2D) - and one (or more) further
>thread(s) will be asking the kernel to load the frame of data [T+2D,T+3D)
>after this! [In practice D may vary and more than three frames may be
>"pipelined" at once.]

Actually, depending on what you mean by "load", this might be unlikely
in a real-time system. If all you mean is "retrieve existing data
corresponding to frame [T+ND,T+(N+1)D)", then sure. In ardour, for
example, we are typically fetching up to *250* frames ahead from the
disk.
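
To make "fetching ahead" concrete, here is a rough sketch of that
kind of read-ahead arrangement, assuming a simple pthreads ring
buffer. This is not ardour's actual code; the names and sizes are
made up for illustration:

    #include <pthread.h>
    #include <stdio.h>
    #include <string.h>

    #define BLOCK 512    /* samples per block ("D") */
    #define AHEAD 250    /* target read-ahead, in blocks */

    static float ring[AHEAD][BLOCK];
    static int head, tail, fill;   /* all guarded by the mutex */
    static pthread_mutex_t lock  = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  space = PTHREAD_COND_INITIALIZER;
    static pthread_cond_t  avail = PTHREAD_COND_INITIALIZER;

    /* disk thread: keep the ring topped up to AHEAD blocks */
    void *disk_thread (void *arg)
    {
        FILE *f = (FILE *) arg;
        float block[BLOCK];

        while (fread (block, sizeof (float), BLOCK, f) == BLOCK) {
            pthread_mutex_lock (&lock);
            while (fill == AHEAD)          /* ring full: wait */
                pthread_cond_wait (&space, &lock);
            memcpy (ring[head], block, sizeof (block));
            head = (head + 1) % AHEAD;
            fill++;
            pthread_cond_signal (&avail);
            pthread_mutex_unlock (&lock);
        }
        return NULL;
    }

    /* audio thread: consume exactly one block per h/w period.
       (A real RT thread would use a lock-free ring rather than
       block on a mutex, but the structure is the same.) */
    void get_block (float *out)
    {
        pthread_mutex_lock (&lock);
        while (fill == 0)                  /* underrun: wait */
            pthread_cond_wait (&avail, &lock);
        memcpy (out, ring[tail], BLOCK * sizeof (float));
        tail = (tail + 1) % AHEAD;
        fill--;
        pthread_cond_signal (&space);
        pthread_mutex_unlock (&lock);
    }

The point is that only already-recorded data sits in that buffer; the
audio thread still takes just one block per hardware period.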

However, if you are describing the actual generation/processing of
that data ahead of time, then no, you wouldn't do this. You want to
minimize the time between when a "frame" (I tend to use the word
"block") is computed and when it will be played. Since most audio
algorithms don't vary the time they take to process D samples (i.e. if
it took 30usec for one block of D samples, the next one will also
take 30usec), there is little gain in "queueing" any frames/blocks
beyond the one after the currently playing one. If you do queue up
more, you increase the event latency (the time between some external
event, such as a MIDI note on, and the audible result of that event).
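
To put a number on it (sample rate and block size picked purely for
illustration): at 44.1 kHz with D = 256 samples, one block lasts

    256 / 44100 ~= 5.8 msec

so every extra block sitting in the queue adds ~5.8 msec between a
MIDI note on and the moment you hear its result; ten queued blocks
and you're waiting ~58 msec.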

Note that I made this mistake early on with Quasimodo when I first
started working on it. I decided to try to do an end-run around the
"throttling" caused by waiting for one fragment of audio to be
played/recorded by the audio h/w, and just keep on processing the next
frame/block of audio data. Result: event latencies go to hell, since
when a MIDI note on arrives, I've got 10-20 frames/blocks queued up
waiting to go out the door. In theory, one could perform some
sophisticated de-queueing, but I decided to adopt the same basic
design as Csound, VST and many other programs: generate/process one
frame/block while the previous one is playing (or, when recording,
while the next one is being captured).
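
In code, that design is about as simple as it sounds. A rough sketch,
assuming an OSS-style device fd configured with two fragments of
BLOCK frames each; process_block() is a stand-in for the whole plugin
chain, not any real API:

    #include <unistd.h>

    #define BLOCK 256                  /* frames per block ("D") */

    extern void process_block (short *buf, int nframes);

    void rt_loop (int audio_fd)
    {
        short buf[BLOCK * 2];          /* interleaved stereo */

        for (;;) {
            process_block (buf, BLOCK);   /* compute block N+1 */
            /* write() blocks until the device has room for a
               fragment, i.e. while block N is still playing.
               That blocking *is* the throttling: it keeps the
               queue at one block and the event latency low. */
            write (audio_fd, buf, sizeof (buf));
        }
    }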

Of course, if you're not running in real-time mode, then there are no
samples going to/from the audio h/w, and you can just go as fast as
you possibly can.
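
Concretely, that is the same loop with the blocking device write
replaced by a plain file write (again just a sketch, reusing the
hypothetical BLOCK and process_block() from above):

    #include <stdio.h>

    void offline_loop (FILE *out, long nblocks)
    {
        short buf[BLOCK * 2];

        while (nblocks--) {
            process_block (buf, BLOCK);  /* no device to wait for... */
            fwrite (buf, sizeof (short), BLOCK * 2, out);
        }   /* ...so this runs as fast as the CPU allows */
    }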

Other than that, I think that Richard's answers are as clear as any
that you'll get from me.

--p


