Re: [linux-audio-dev] timebases and sync in LAAGA


Subject: Re: [linux-audio-dev] timebases and sync in LAAGA
From: Benno Senoner (sbenno_AT_gardena.net)
Date: Mon Jun 18 2001 - 17:05:28 EEST


On Sunday 17 June 2001 15:07, you wrote:
> >assume we have a video input source that sends SMPTE data and we want to
> >synchronize an audio stream we have to it.
> >
> >eg:
> >
> >SMPTE---------------+--------+
> >
> >internal HDR track--+ LAAGA |---- audio out
> >
> >softsynth ----------+--------+
>
> No problem at all.
>
> Note that it's likely that the most common way this would be done would
> be to use an audio i/o client as the driver for the engine, and have
> that audio i/o client provide the master timebase based on SMPTE that
> it also is able to receive.

Is it right to assume that, in the case you described, the audio i/o client
would both provide the timebase (current frame) and adapt the playback speed
of the audio out interface to the SMPTE input (through interpolation /
sample insertion / skipping)?
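Just to show what I mean, here is a rough sketch in C; all the names are
invented and nothing like this exists in LAAGA or ALSA. The i/o client would
estimate how far the card has drifted from the SMPTE-derived frame count and
resample its output by that ratio, which is the interpolation / sample
insertion / skipping I mean above:

/*
 * Hypothetical sketch only: estimate how far the soundcard has drifted
 * from the SMPTE-derived frame count and resample the output by that
 * ratio.  None of these names come from LAAGA or ALSA.
 */
#include <stddef.h>

/* ratio > 1.0 means the card's clock is slow relative to SMPTE, so the
   source material must be consumed slightly faster, and vice versa. */
static double drift_ratio(long frames_card_consumed, long frames_smpte_expected)
{
    if (frames_card_consumed <= 0)
        return 1.0;
    return (double) frames_smpte_expected / (double) frames_card_consumed;
}

/* Resample 'nin' mono frames from 'in' into 'out' by linear interpolation,
   advancing through the input by 'ratio' frames per output frame
   (fractional steps effectively insert or skip samples).
   Returns the number of output frames written. */
static size_t resample_linear(const float *in, size_t nin,
                              float *out, size_t max_out, double ratio)
{
    double pos = 0.0;
    size_t n = 0;

    if (nin < 2)
        return 0;

    while (n < max_out && pos < (double) (nin - 1)) {
        size_t i = (size_t) pos;
        double frac = pos - (double) i;
        out[n++] = (float) ((1.0 - frac) * in[i] + frac * in[i + 1]);
        pos += ratio;
    }
    return n;
}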

> But there's nothing to prevent the SMPTE
> handling being done by another client, or the HDR client, or whatever.

ok

>
> >Now assume that we have a cheap soundcard which drifts a few % from the
> >nominal 44.1kHz output frequency.
>
> In fact *all* soundcards fail to match the frequency exactly; some are
> just very close.
>
> >One problem I see is that the softsynth and HDR stream do think in terms
> > of "frames since start".
>
> No, if they are LAAGA clients that care about sync (and not all do;
> the softsynth probably doesn't care at all; a MIDI sequencer driving
> the softsynth would be more likely to) they pay attention to the LAAGA
> sync timebase every time they execute their process() callback.
>
> >With the above samplerate compensation routines, even if the soundcard's
> >samplerate will have some builtin drift, the process() callbacks will
> > still see an output samplerate of exactly 44.1kHz and this is ok.
>
> that's up to the audio I/O client/driver. LAAGA doesn't know anything
> about it.

ok.

>
> >But in other cases where there is no trusted sync source, or a faulty sync
> >source (that has some drift in it too), then the "current frame" value
> > seen by the softsynth and HDR streams does not represent the time
> >t=(current_frame/44100).
> >
> >For example if the softsynth needs to generate a note at time t=10secs
> >which corresponds to 441000 frames after start, then in the presence of a
> >samplingrate which differs from the nominal 44.1kHz, the timing would be
> >wrong.
>
> Two rules:
>
> 1) if you care about sync time, you have to pay attention to the
> LAAGA timebase.
> 2) if you care about wallclock time, you need a different non-LAAGA
> timebase.
>
> the number of elapsed frames is *never* an indication of the "time"
> for two reasons:
>
> 1) sample rate drift
> 2) the timebase client may have a transport control and the user
> just "rewound".
>
> So, the timing would not be wrong if the timebase client is counting
> time correctly. it would only be wrong if the client that cares about
> time tries to tell the time itself by counting frames. you can't do
> that.
>
> So, the softsynth generates a note when the timebase says it's the
> right time. This may or may not be when the "elapsed frame count"
> indicates the same time.

Ok, but I guess syncing only to wallclock time is not good for a sample
playback engine, which not only needs to get the timing of its events right
but also has to stay sample-accurate with respect to the other audio streams.
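The way I picture rule 1, a client that cares about sync would do something
roughly like this in its process() callback; the names (fake_timebase_t,
fake_get_timebase()) are made up, since nothing of this is part of any agreed
LAAGA API yet:

/* Invented names, just to illustrate rule 1 (not an actual LAAGA API). */
#include <string.h>

typedef struct {
    long frame;      /* transport position at the start of this cycle */
    int  rolling;    /* nonzero while the transport is running        */
} fake_timebase_t;

extern void fake_get_timebase(fake_timebase_t *pos);  /* from the timebase client */

static const long note_on_frame = 441000;   /* "play a note at t = 10 s" */

int my_process(float *out, long nframes)
{
    fake_timebase_t pos;

    fake_get_timebase(&pos);
    memset(out, 0, nframes * sizeof(float));

    if (!pos.rolling)
        return 0;                  /* transport stopped: leave silence */

    /* Does the note start fall inside this cycle?  We never consult an
       "elapsed frames since start" counter of our own. */
    if (note_on_frame >= pos.frame && note_on_frame < pos.frame + nframes) {
        long offset = note_on_frame - pos.frame;
        /* ...start synthesizing the note at out[offset]... */
        (void) offset;
    }
    return 0;
}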

>
> ***BUT****
>
> You're thinking about a problem that is not well defined. Since the
> audio will be output when the audio interface has delivered 44100 * 10
> samples to its outputs, you could say that the correct time to
> generate the sound actually *is* when the audio i/o says this time has
> come around.
>
> So the real problem is that you haven't defined the problem well
> enough. Do you want to produce sound at the time referenced by the
> timebase, or at the time corresponding to the "wallclock time" as
> measured by someone listening to the output from the audio interface?

Yes, I agree that I did not define the problem very well, but as I said I
have no experience with synchronization in the audio world; that's why I
asked what the possible scenarios would look like.

>
> These are two different things, and they typically represent different
> kinds of applications. In the sync-to-video example, the sound needs
> to be produced when the SMPTE client, acting as the timebase, says it's
> time. However many samples have been processed to that point is
> completely irrelevant. In the case of an alarm bell that must ring in
> 10 seconds, the client doesn't care about the system timebase, and in
> fact may not even care about the frame count.

Ok for the alarm bell.
But what if the softsynth wants to play a sampled loop which must stay
perfectly in sync with the background music that is already recorded on
the video tape?

Assume the loop starts at 0:15 (min:sec), i.e. at frame 661500, and ends at
0:25 (duration = 10 secs, 441000 frames), at a sample rate of 44.1kHz.

What's the best way to do this from the perspective of the softsynth (or of
an HDR playback engine)?

Personally I would put the softsynth in slave mode, where it gets the current
song position (in frames) from the audio i/o client (which adapts the playback
speed of the soundcard to stay in sync with the video timebase).
That way, even if the reference timebase varies, the loop generated by the
softsynth/HDR engine would still stay perfectly in sync, without the softsynth
having to care about timing at all.

Does this make sense?
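Something like the following is what I have in mind; again all names are
invented and this is only a sketch, not real LAAGA code. The softsynth maps
the song position reported by the timebase into a position inside the loop on
every cycle, so it stays locked to the video even if the reference timebase
speeds up, slows down or is relocated:

/* Hypothetical slave-mode loop player; none of these names are real APIs. */
#include <string.h>

#define RATE        44100L
#define LOOP_START  (15L * RATE)     /* 0:15 -> frame 661500  */
#define LOOP_LEN    (10L * RATE)     /* 10 s -> 441000 frames */

extern const float loop_samples[LOOP_LEN];   /* the pre-rendered loop           */
extern long current_song_position(void);     /* reported by the timebase client */

void loop_process(float *out, long nframes)
{
    long song_pos = current_song_position(); /* frame at the start of this cycle */
    long i;

    memset(out, 0, nframes * sizeof(float));

    for (i = 0; i < nframes; i++) {
        long p = song_pos + i;
        if (p >= LOOP_START && p < LOOP_START + LOOP_LEN)
            out[i] = loop_samples[p - LOOP_START];
    }
}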

>
> >But I foresee problems if we begin to feed all sorts of hw inputs
> >into our LAAGA server which have all differing timings. (due to
> >hardware imperfections). The situation gets more complicated when
> >the need to have a "master clock source" arises.
>
> a LAAGA engine is driven by a single driver. its timebase is provided
> by a single client.

Yes, this was clear from the beginning.
My doubts were mainly about how to keep everything in sync.

>
> if you wish to aggregate multiple audio h/w into a single
> driver/client, use ALSA's PCM "multi" device.

Interesting.
I have not been able to follow ALSA development lately; could you (or any
other ALSA developer (Abramo, Jaroslav?)) briefly explain how this works and
what kind of capabilities the "multi" PCM device offers?

>
> if there are additional clients that get data from other h/w (maybe
> non-audio, for example), that's fine, but they don't replace or control
> the existing LAAGA driver, and only one client can serve as the timebase.

Understood.
I assume the LAAGA API will provide a way to specify which client produces
the reference timebase, right?

>
> >eg the user having a Trident and SB AWE64 installed in his system will
> >certainly want to use them simultaneously within LAAGA without
> > experiencing dropouts and other weird things after a few mins due to
> > samplerate drifting.
>
> I've explained many times before that:
>
> 1) such functionality has NOTHING to do with LAAGA. ALSA PCM
> "multi" is responsible for handling that (or some other
> audio i/o driver).

ok

> 2) IMHO, people who want to do this are wasting their time
> and the programming community's time.

In what sense?
That for serious recording you should only use pro-audio hardware with
hardware-based sync support, so that the problem does not arise?

If that is what you meant then I agree, but there are lots of hobbyists doing
recording sessions who would perhaps like to use their two el-cheapo
soundcards simultaneously.

>
> >I'm not familiar with Windows apps like Cubase or Logic so I was wondering
> >what kind of sync functionality these apps offer.
> >(e.g. if it's possible to handle the SMPTE-sync case mentioned above, the
> >drifting soundcards, etc.)
>
> They don't support multiple soundcards unless ASIO/MME/DirectX do, and
> even then it's up to those layers to handle the drift, not the
> application.

Ok, understood.

Paul, thanks for shedding some light on the sync issues.

cheers,
Benno.

