Re: [linux-audio-dev] Random thought on HDR latency compensation


Subject: Re: [linux-audio-dev] Random thought on HDR latency compensation
From: Tom Pincince (stillone_AT_snowcrest.net)
Date: Sat Apr 22 2000 - 05:24:52 EEST


I am beginning to think that the term “latency” is being used to mean
more than one thing. I am certain that the term “real time” is often
used to mean more than one thing. I see four basic areas where latency
is an issue.

1) Latency in the recording/monitoring path
2) Latency in the playback path
3) Latency in a real-time stream (real-time?)
4) Latency with buffers

Most of the issues in each of these areas are identical, but there are
subtle differences.

1) Latency in the recording/monitoring path

If the performer is monitoring himself post-A/D during recording,
latency is highly relevant and not arbitrary. The issue here is human
perception. Delays of more than 20msec are heard as a separate event
and can interfere with the performer’s sense of rhythm, so 20msec is
the absolute maximum acceptable latency. If the performer is singing
or playing an instrument with audible acoustic output, delays of
2-20msec can produce audible phase interference. If this upsets him,
1msec is the maximum acceptable latency.
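
As a rough illustration, here is how buffer sizes translate into the
msec figures above, assuming the monitoring latency is dominated by a
single buffer (real converters add a few samples of their own):

    /* Sketch: buffer size -> monitoring latency in msec, assuming
       the latency is dominated by one buffer of 'frames' samples.
       The 20msec figure is the perception threshold above. */
    #include <stdio.h>

    int main(void)
    {
        const double rate = 44100.0;            /* sample rate, Hz */
        const int sizes[] = { 32, 64, 256, 1024 };
        int i;

        for (i = 0; i < 4; i++) {
            double ms = 1000.0 * sizes[i] / rate;
            printf("%4d frames -> %5.2f msec%s\n", sizes[i], ms,
                   ms > 20.0 ? "  (heard as a separate event)" : "");
        }
        return 0;
    }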

2) Latency in the playback path

If recording deals with maximum allowable latency, playback deals with
minimum allowable latency. Even a single drop-out is unacceptable, so
the playback buffers must be large enough to guarantee smooth playback.
It is still desirable to keep buffer size as small as is practical, but
not at the expense of risking even one drop-out.

So how much latency is acceptable in the playback path? In recording,
playback latency is only an issue during overdubs. The performer will
not play anything until he hears a track being played back so, if the
recording software is capable of offsetting the record start time to
account for playback buffer size, then the issue here is “how long can
you keep a musician waiting?” I have found five seconds to be about the
maximum time before the musician starts losing his feel for the music
and starts displaying visible signs of frustration. Five seconds of
playback buffer can make up for very slow hardware and/or a lot of
demanding plugins. The other issue here is the responsiveness of
plugins to real-time parameter adjustments, which is very important
during mixdown. In this case the recording engineer is like a
musician and the
recording console is his instrument, so we are back to a 20msec maximum
latency.
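
Here is a minimal sketch of that record-offset idea, assuming the
host knows its playback latency in frames (the function name is mine,
not any existing API):

    /* Sketch: sample-accurate overdub alignment. The performer hears
       playback 'playback_latency' frames late, so everything he
       records lands that many frames late on the timeline. Stamping
       the new track earlier by the same amount restores alignment.
       (Capture latency could be folded in the same way.) */
    long aligned_track_start(long transport_frame, long playback_latency)
    {
        return transport_frame - playback_latency;
    }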

3) Latency in a real-time stream

I have a Sony R7 digital reverb. This device gives me the option of
using a noise gate before the reverb. The noise gate has a pre-delay of
zero to a few hundred samples. The gate monitors the signal before the
pre-delay, and opens when the signal goes above the programmed
threshold. The gate takes time to go from fully closed to fully open.
The pre-delay allows the signal to arrive at the gate the instant that
the gate is fully open. In this way none of the signal is lost. The
latency of this effect is equal to the number of pre-delay samples
divided by the sampling frequency (220 samples at 44.1kHz, for
example, is about 5msec), since the input signal is live and
there is no playback buffer. There is no way to improve latency and
there is no risk of drop-outs. I think this is the kind of latency that
Jarno and Jorn are referring to.
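
Here is the same trick sketched as a toy gate (not the R7’s actual
algorithm): the detector watches the live input while the audible
path runs PREDELAY samples behind, so the gain has already ramped
open by the time the attack reaches the output:

    /* Toy look-ahead gate with a fixed latency of PREDELAY / rate
       (220 samples at 44.1kHz is about 5msec). */
    #include <math.h>

    #define PREDELAY 220

    static float dline[PREDELAY];  /* delay line for the audio path */
    static int   widx;             /* write index into the line     */
    static float gain;             /* current gate gain, 0..1       */

    float gate_sample(float in, float threshold, float ramp_coef)
    {
        float delayed;

        /* The detector runs on the undelayed, live signal. */
        float target = (fabsf(in) > threshold) ? 1.0f : 0.0f;
        gain += ramp_coef * (target - gain);   /* one-pole ramp */

        /* The audio path reads what was written PREDELAY samples
           ago, so the gain above has had time to ramp open. */
        delayed = dline[widx];
        dline[widx] = in;
        widx = (widx + 1) % PREDELAY;

        return delayed * gain;
    }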

4) Latency with buffers

Chunks of audio are read from the hard disk to a playback buffer
before the audio is actually played back. If a plugin can process the
contents of the buffer before the next read from the disk, there are
no drop-outs. If the hardware is too slow or the plugin is too
demanding, there are drop-outs. Plugins don’t increase latency because
the size of the playback buffer and the rate of disk reads are the
same regardless of whether there is a plugin or not. I think this is
what Paul is talking about.
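
A sketch of one playback cycle, with stand-ins for the real disk and
soundcard I/O (none of these names come from an actual driver API):

    #include <stddef.h>
    #include <string.h>

    #define FRAMES 1024

    /* Stand-ins for the real disk and soundcard I/O. */
    static void disk_read(float *buf, size_t n)
    {
        memset(buf, 0, n * sizeof *buf);
    }
    static void run_plugins(float *buf, size_t n)
    {
        size_t i;
        for (i = 0; i < n; i++)
            buf[i] *= 0.5f;        /* any in-place DSP would do */
    }
    static void card_write(const float *buf, size_t n)
    {
        (void)buf; (void)n;
    }

    /* One cycle: the buffer size and the read rate are identical
       with or without run_plugins(). A plugin burns CPU inside the
       cycle but adds no samples of delay to the path; all it can do
       is miss the deadline for the next read. */
    void playback_cycle(void)
    {
        static float buf[FRAMES];

        disk_read(buf, FRAMES);
        run_plugins(buf, FRAMES);  /* must finish before the card drains */
        card_write(buf, FRAMES);
    }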

Paul says that plugins have no effect on latency. True; however,
latency, as a function of buffer size, does have an effect on plugins.
Bigger buffers allow more plugins on slower hardware with greater
latency and no risk of drop-outs. Smaller buffers offer smooth
real-time responsiveness but require fast hardware and conservative use
of plugins or risk drop-outs.
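
To put rough numbers on that trade-off, assume the per-cycle deadline
is simply frames divided by sample rate, and a hypothetical plugin
whose worst-case cost spikes to 5msec in one cycle:

    #include <stdio.h>

    int main(void)
    {
        const double rate = 44100.0;
        const double spike_ms = 5.0;   /* hypothetical worst-case cost */
        const int sizes[] = { 64, 256, 1024, 4096 };
        int i;

        for (i = 0; i < 4; i++) {
            double deadline = 1000.0 * sizes[i] / rate;
            printf("%5d frames: %6.1f msec per cycle -> %s\n",
                   sizes[i], deadline,
                   spike_ms > deadline ? "drop-out risk" : "safe");
        }
        return 0;
    }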

In recording, latency is a problem; in playback, it is a tool.

Soooo, now I can clarify my original comment regarding plugins and
latency.

1) Can a host application be designed to determine the computational
ability of the hardware upon which it resides?
2) Can a plugin architecture be designed that can communicate its
computational requirements to the host application?
3) If the answers to 1 and 2 are yes, is it possible/desirable for the
host application to dynamically resize playback buffers to guarantee no
drop-outs regardless of hardware capability or plugin load, while
simultaneously offsetting the start time of recording tracks to
preserve sample-accurate overdubs? This would let the user choose
between
features and speed.
4) In a multi-track environment, would there be any efficiency gains by
increasing buffer size to accommodate only those tracks with a heavy
plugin load while maintaining alignment of all tracks by simply delaying
the start time of the other tracks?
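
To illustrate question 4, here is a sketch of the alignment, assuming
each track’s latency is just its own buffer size in frames (the name
is mine): each track is started late by the difference between its
buffer and the largest one, so they all line up.

    #include <stddef.h>

    /* Delay, in frames, to apply to track 'which' so that every
       track shares the latency of the largest buffer. */
    size_t track_start_delay(const size_t *buf_frames, size_t ntracks,
                             size_t which)
    {
        size_t max = 0, i;

        for (i = 0; i < ntracks; i++)
            if (buf_frames[i] > max)
                max = buf_frames[i];
        return max - buf_frames[which];
    }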

Tom


