Re: [linux-audio-dev] MIDI sync issues; mmc, mtc, ...

New Message Reply About this list Date view Thread view Subject view Author view Other groups

Subject: Re: [linux-audio-dev] MIDI sync issues; mmc, mtc, ...
From: Jack O'Quin (joq_AT_io.com)
Date: Thu Dec 21 2000 - 18:02:54 EET


Kai Vehmanen <kaiv_AT_wakkanet.fi> writes:

> On Tue, 19 Dec 2000, Paul Barton-Davis wrote:
>
> > ps. i believe that most of the sync issues i mentioned in those
> > mid-year emails were all solved and/or were red-herrings. the one
> > remaining hard part for ardour is that i don't believe its safe to
> > send the MTC data from the audio thread, and i don't yet have a scheme
> > that i trust for getting it sent on time via a different thread.
>
> Just the thing I meant bring up next. Now even if its doable to implement
> a steady, outgoing MTC stream from a userspace app, it still seems
> difficult to combine it to an existing audio app. If we generate the
> MTC-bytes in the audio loop, MTC accuracy depends on the used buffersize
> (1024 samples frames at 44100kHz -> 23ms per iteration). Also, writing
> directly to midi-devices from the audio thread, we risk blocking the audio
> operations. So it would look like we need to do the midi i/o from
> another thread - either put a ringbuffer between the audio and midi
> threads or have the midi thread generate the MTC-bytes itself.
>
> Another thing (probably ecasound-specific problem) is what time to
> actually use when generating MTC. For a wall-clock sync, it's reasonable
> to use the "engine-time", ie. how many samples have passed the main engine
> loop. But this won't do if we want to use MTC for syncing audio. Then we
> need to use soundcard's current pointer to calculate the current time...

This may be a somewhat naive suggestion. But, I've been reading up on
the ALSA sequencer implementation. It appears well thought out, and
extremely flexible from what I can see.

I have only recently come to understand just how crude the timimg of
user-space realtime threads is. I'm amazed you folks have been able
to accomplish so much within those limitations. What modest realtime
experience I've had was with much lower-level and lower-latency system
environments. So, I'm reluctant to trust my intuition in these
matters. But, here goes, anyway... :-)

For something with a stable, standardized interface like MTC, I tend
to think in terms of a kernel-mode implementation, perhaps using an
interrupt handler driven by one of the various realtime clocks
provided by various sound cards or other system hardware. It seems
possible to achieve one or two orders of magnitude less jitter that
way.

The ALSA sequencer design fully supports user-space and kernel-mode
clients attaching to various ports to send and recieve MIDI messages
using multi-client queues. I'm imagining a relatively simple driver
that just "keeps the beat", with user-mode clients telling it about
tempo changes and stuff like that. Maybe someone has already written
one.

Of course, this approach only works on systems running ALSA. But,
MIDI support under OSS looks like it's probably too crude for such a
sophisticated application, anyway. Am I wrong about that?

Just my $0.02...

-- 
  Jack O'Quin
  Austin, Texas, USA


New Message Reply About this list Date view Thread view Subject view Author view Other groups

This archive was generated by hypermail 2b28 : Thu Dec 21 2000 - 19:41:12 EET