[linux-audio-dev] timebases and sync in LAAGA


Subject: [linux-audio-dev] timebases and sync in LAAGA
From: Benno Senoner (sbenno_AT_gardena.net)
Date: Sun Jun 17 2001 - 02:50:41 EEST


Hi,

I'm by no means a "synchronization expert", but I'd like to know whether LAAGA
would be able to run a setup like the one that follows below:

assume we have a video input source that sends SMPTE data, and we want to
synchronize an audio stream of ours to it.

eg:

SMPTE---------------+--------+
                    |        |
internal HDR track--+ LAAGA  |---- audio out
                    |        |
softsynth ----------+--------+

SMPTE acts as the master clock, the HDR and the softsynth track are
internally generated.

Our goal is simply to mix the HDR and softsynth tracks together, keeping them
perfectly in sync with the SMPTE signal.

Now assume that we have a cheap soundcard which drifts a few % from the
nominal 44.1kHz output frequency.

The HDR and softsynth tracks only know the concept of audio frames (e.g. each
one produces, say, 64 frames each time its process() function is
called).

To sync the audio to the SMPTE data we would need to adapt the resulting
sampling rate in order to compensate for the soundcard's drift (assume the
drift is constant, to simplify things).

With techniques like interpolation, or by replicating/dropping samples to
correct the samplerate drift, we can keep things in sync.
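A naive sketch of the replicate/drop idea (the function name and interface are
made up for this example; a real implementation would interpolate rather than
drop or repeat whole samples, to avoid clicks):

```c
#include <stddef.h>

/* Crude drift compensation by dropping or duplicating single samples.
   'ratio' is actual_rate/nominal_rate; a running phase accumulator
   decides when an input sample gets skipped (ratio > 1) or repeated
   (ratio < 1).  Returns the number of output samples produced. */
size_t drift_correct(const float *in, size_t n_in,
                     float *out, size_t out_cap,
                     double ratio, double *phase)
{
    size_t o = 0;
    double pos = *phase;
    while ((size_t)pos < n_in && o < out_cap) {
        out[o++] = in[(size_t)pos];
        pos += ratio;
    }
    *phase = pos - (double)n_in;     /* carry fractional position over */
    return o;
}
```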

But what I'd like to see is support from LAAGA to perform these actions.

One problem I see is that the softsynth and HDR streams think in terms of
"frames since start".

With the above samplerate-compensation routines in place, even though the
soundcard's samplerate has some built-in drift, the process() callbacks will
still see an output samplerate of exactly 44.1kHz, and this is ok.

But in other cases, where there is no trusted sync source, or the sync
source is faulty (and has some drift in it too), the "current frame" value
seen by the softsynth and HDR streams does not represent the time
t=(current_frame/44100).

For example, if the softsynth needs to generate a note at time t=10secs,
which corresponds to 441000 frames after start, then in the presence of a
sampling rate which differs from the nominal 44.1kHz, the timing will be
wrong.
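The numbers are easy to check: at the nominal 44.1kHz, frame 441000 plays at
exactly t=10s, but if the card actually runs 1% fast (about 44541Hz, a rate I
picked just for this example) the same frame plays roughly 99ms early:

```c
/* The wall-clock time at which frame f actually leaves the card
   depends on the card's true rate, not the nominal one. */
double wall_time(unsigned long frame, double true_rate)
{
    return (double)frame / true_rate;
}
```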

I do realize that correcting the starting time of the note is not enough,
since the duration of the note will be wrong too, etc., and that in the case
where external sync sources are not needed, the only side effect will be that
the resulting synth track plays a bit too slow or a bit too fast.

But I foresee problems if we begin to feed all sorts of hw inputs, all with
differing timings (due to hardware imperfections), into our LAAGA server.
The situation gets more complicated when the need for a "master clock
source" arises.

I think we should think about these problems in advance, otherwise they will
severely limit the ability of LAAGA to work with different audio I/O
interfaces at the same time.

eg a user having a Trident and an SB AWE64 installed in his system will
certainly want to use them simultaneously within LAAGA without experiencing
dropouts and other weird things after a few mins due to samplerate drift.
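A back-of-the-envelope calculation (the 0.1% mismatch and 1024-frame buffer
are numbers I made up to get a feel for the scale) shows how fast this bites
when two uncorrected cards feed the same server:

```c
#include <math.h>

/* How long until a FIFO between two cards that disagree on the
   sample rate over- or underruns: the buffer absorbs the frame
   surplus/deficit, which grows by |rate_a - rate_b| frames/sec. */
double seconds_until_overrun(double rate_a, double rate_b,
                             double buffer_frames)
{
    return buffer_frames / fabs(rate_a - rate_b);
}
```

With cards only 0.1% apart at 44.1kHz (44.1 frames/sec of drift), a
1024-frame buffer lasts about 23 seconds, so "weird things after a few mins"
is, if anything, optimistic.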

I'm not familiar with Windows apps like Cubase or Logic, so I was wondering
what kind of sync functionality these apps offer
(eg whether they can handle the SMPTE-sync case mentioned above, the
drifting soundcards, etc.).

Thoughts ?

Benno.



This archive was generated by hypermail 2b28 : Sun Jun 17 2001 - 02:48:16 EEST