RE: [linux-audio-dev] Random thought on HDR latency compensation


Subject: RE: [linux-audio-dev] Random thought on HDR latency compensation
From: Richard W.E. Furse (richard_AT_muse.demon.co.uk)
Date: Tue Apr 25 2000 - 02:59:23 EEST


I don't think there's an issue here - I suspect it's just different use of
terminology. Systematic latency can certainly be thought of as
"side-effect" delay. I personally think this is latency in a useful sense;
however, it has absolutely no effect on the successful running of a
real-time application (which I expect is Paul's primary concern at
present). This is quite unlike process latency, which can cause an entire
application to glitch.

Impressed by the 15 years of thought on this - I can only claim 10. ;-)
I'll attempt to address the questions (if I'm interpreting what you're
after correctly!).

1) Simple block processing seems to be in vogue. Nyquist uses an altogether
different approach, as may (I'm not sure) Glame. Ignoring systematic
latency, a plugin receives a block of audio relating to a period of
"logical time" [T,T+D) (the "frame") and sends out a block of audio
relating to the same period. This happens in both offline and real-time
use. The algorithm also has a process latency, given by the real-world time
it takes to produce this output from the input - we are much more
interested in this for real-time applications. If we reintroduce systematic
latency, the logical time periods that the input and output buffers relate
to may be shifted, although their lengths remain the same.
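
To make that concrete, here's a rough C sketch of a series chain of
plugins (all names invented for illustration - this isn't LADSPA or any
real plugin API). Each plugin consumes a whole block for [T,T+D) and emits
a whole block for the same logical period:

    #include <stddef.h>

    /* Sketch only: invented names, not LADSPA or any real plugin API. */
    typedef struct {
        /* process nframes samples, a whole block at a time */
        void (*run)(const float *in, float *out, size_t nframes);
    } plugin_t;

    /* Run a series chain over one frame of audio for [T,T+D).
     * Each plugin sees the previous plugin's *complete* output
     * for the same logical period - no samples trickle through. */
    static const float *process_frame(const plugin_t *chain, size_t nplugins,
                                      const float *input,
                                      float *buf_a, float *buf_b,
                                      size_t nframes)
    {
        const float *in = input;
        float *bufs[2] = { buf_a, buf_b };
        for (size_t i = 0; i < nplugins; ++i) {
            float *out = bufs[i % 2];       /* ping-pong scratch buffers */
            chain[i].run(in, out, nframes);
            in = out;                       /* next plugin reads this block */
        }
        return in;   /* the fully processed frame, still for [T,T+D) */
    }

The point is that plugin 2 only ever sees plugin 1's complete output for
the frame, so (systematic latency aside) nothing is shifted in logical
time.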

2) Varies by architecture. Most systems use a single block size for all
algorithms in each frame. HDR applications such as Ardour (and the kernel
itself, for that matter) have special behaviour designed to separate disk
access, to some extent, from the main operation of the system. This allows
more efficient buffer sizes to be used when performing the disk access
itself and avoids wasting time waiting for disk access where possible.
Related issues apply when accessing audio hardware.
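
In sketch form, that separation looks something like this (sizes and names
are assumptions for illustration, not what Ardour actually does): a disk
thread refills a ring buffer in large, disk-friendly chunks, while the
audio side drains one small processing frame at a time.

    #include <stdio.h>

    /* Sketch only: invented sizes and names, not Ardour's real code.
     * A real version needs proper locking or atomics between threads;
     * the plain indices here only show the shape of the decoupling. */
    #define RING_SIZE  (256 * 1024)  /* samples of decoupling buffer */
    #define DISK_CHUNK (16 * 1024)   /* big reads: fewer, cheaper seeks */
    #define FRAME_SIZE 1024          /* small blocks for processing */

    static float ring[RING_SIZE];
    static size_t wpos, rpos;  /* free-running producer/consumer counts */

    static size_t ring_fill(void) { return wpos - rpos; }

    /* Disk side: refill in one big chunk whenever there is room. */
    static void disk_refill(FILE *f)
    {
        static float chunk[DISK_CHUNK];
        if (RING_SIZE - ring_fill() < DISK_CHUNK)
            return;                            /* plenty buffered already */
        size_t n = fread(chunk, sizeof(float), DISK_CHUNK, f);
        for (size_t i = 0; i < n; ++i)
            ring[(wpos + i) % RING_SIZE] = chunk[i];
        wpos += n;
    }

    /* Audio side: take exactly one processing frame, or report trouble. */
    static int take_frame(float *out)
    {
        if (ring_fill() < FRAME_SIZE)
            return -1;                         /* the disk fell behind */
        for (size_t i = 0; i < FRAME_SIZE; ++i)
            out[i] = ring[(rpos + i) % RING_SIZE];
        rpos += FRAME_SIZE;
        return 0;
    }

The two block sizes are independent, which is the whole point: the disk is
read at its own efficient granularity regardless of the processing frame
size.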

3 & 4) [Advanced apologies if I've missed the point and am being
particularly patronising.] This is where the elaborate debates on
multithreading begin - while the current frame [T,T+D) is being played by
the DSP on your audio card, another thread will be running the plugins
required to produce the next frame [T+D,T+2D) - and one (or more) further
thread(s) will be asking the kernel to load the frame of data [T+2D,T+3D)
after this! [In practice D may vary and more than three frames may be
"pipelined" at once.] The kernel will see to it that while the disk-access
thread is waiting for the disk to sort itself out, the kernel will "block"
it and pass the CPU over to a thread that can use it (etc). The wonders of
UNIX...
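
Flattened into a single loop, the pipeline looks something like this
sketch (stub functions and invented names; the real thing runs each stage
in its own thread - the loop just shows which buffer each stage touches in
each period):

    #include <stdio.h>

    /* Sketch only: one serialised loop standing in for three threads.
     * While frame k plays, frame k+1 is being processed and frame k+2
     * is being fetched from disk. Stub stages; names are invented. */
    #define NBUF 3     /* three frames "pipelined" at once */
    #define D 1024     /* frame length in samples */

    static float frame[NBUF][D];

    static void disk_read(float *buf)   { (void)buf; /* load [T+2D,T+3D) */ }
    static void run_plugins(float *buf) { (void)buf; /* render [T+D,T+2D) */ }
    static void play(float *buf)        { (void)buf; /* card plays [T,T+D) */ }

    int main(void)
    {
        for (int k = 0; k < 8; ++k) {   /* one iteration per period D */
            disk_read(frame[(k + 2) % NBUF]);
            run_plugins(frame[(k + 1) % NBUF]);
            play(frame[k % NBUF]);
            printf("period %d: play buf %d, dsp buf %d, disk buf %d\n",
                   k, k % NBUF, (k + 1) % NBUF, (k + 2) % NBUF);
        }
        return 0;
    }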

I hope this makes some sense,

-- Richard

PS - There's some mixed use of the term "frame" too. I use it to mean the
period of logical time [T,T+D) above; however, some people use it to
indicate a logical duration of one sample. This is usually obvious from
context.

-----Original Message-----
From: Tom Pincince [SMTP:stillone_AT_snowcrest.net]
Sent: Monday, April 24, 2000 8:39 AM
To: linux-audio-dev_AT_ginette.musique.umontreal.ca; pbd_AT_op.net; richard_AT_muse.demon.co.uk
Subject: RE: [linux-audio-dev] Random thought on HDR latency compensation

Based on Richard's definitions, I am contemplating both process and
systematic latency. I am trying to understand how Paul's statement that
plugins don't contribute to latency is true. For example, let's imagine 2
soundfiles with the same start time, each playing back through a separate
channel. Channel 1 has no plugins and channel 2 contains 2 plugins
connected in series. As I try to imagine the first sample in soundfiles 1
and 2 reaching their respective d/a's simultaneously, some simple
questions come to mind. I realize that the nature of these questions
reveals my relative level of ignorance, but my interest in this topic is
so strong that I must dare myself to ask them. I feel confident that after
I gain just a little more knowledge I will be able to make meaningful
contributions to this cause: I have actually been contemplating these
issues deeply for 15 years, but I never felt inclined to interact with
developers working on proprietary platforms and have only recently become
aware of the Linux way.

1) Does plugin 1 process an entire frame before outputting to plugin 2,
with the output taking the form of a single, completely processed frame,
or are individual samples sent from plugin 1 to plugin 2 as they are
processed? For this process to produce no latency I imagine that the
former must be true.

2) Does the audio in channel 1 travel from the hd to the d/a in frames
equal in size to the frames being processed by plugin 1?

3) Is the total amount of time available for a frame to be processed by
both plugins equal to the time between hd reads?

4) If 3 is true, is a partially processed frame simply sent on its way
and a new frame loaded into plugin 1 at each hd read?

Tom


