Re: [linux-audio-dev] introduction & ideas


Subject: Re: [linux-audio-dev] introduction & ideas
From: Martijn Sipkema (msipkema_AT_sipkema-digital.com)
Date: Mon Feb 25 2002 - 12:18:50 EET


> >The idea of a single 'system clock', (POSIX CLOCK_MONOTONIC would do)
> >to synchronise different media streams and midi (which is not in
> >OpenML) is the correct way IMHO.
>
> in existing digital systems, the way you synchronize independent
> streams is to provide a common *driving* clock signal like word clock.
>
> the UST proposed by dmSDK/OpenML doesn't act as a clock in the sense
> of say, word clock. it's just a way of finding out the current time. as
> such, it cannot play the role that word clock plays in
> synchronization. it's much more like the role that SMPTE plays in audio
> systems - it doesn't really provide proper audio sync at all, it just
> provides a low resolution check on "where are we now", and leaves the
> implementation of a PLL that can correct for drift to something else.

You are correct in saying that UST/MSC can't provide sample accurate
audio sync, and I don't think it is meant for that. This has to be done at
the hardware level. Syncing multiple soundcards is only possible when
the hardware supports it, at least if sample accuracy is wanted. But when
syncing audio playback to SMPTE, for example from a video tape, there is
no common sample clock. You could still provide fairly accurate
synchronisation if you have a soundcard with 'pitch' control (or do sample
rate conversion yourself), so there this synchronisation would be of help.
But then again, I think it is great for syncing MIDI, the GUI and
video/display with audio streams that are themselves sample synced in
hardware.
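To make that concrete, here is a minimal sketch in C (names invented for
illustration, not taken from dmSDK/OpenML or any driver API) of how a
single CLOCK_MONOTONIC-style timestamp could relate a MIDI or GUI event
to a position in a hardware-clocked audio stream:

/* Sketch: using CLOCK_MONOTONIC as a common timestamp source for
 * relating a MIDI event (or GUI/video action) to an audio stream.
 * The (ref_ust, ref_frame) pair and sample_rate are hypothetical
 * values that would come from the audio API in a real program. */
#include <time.h>
#include <stdint.h>

static uint64_t ust_now_ns(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (uint64_t)ts.tv_sec * 1000000000ull + ts.tv_nsec;
}

/* Given a known (UST, frame count) pair reported for the audio stream,
 * estimate which audio frame corresponds to an arbitrary UST. */
static uint64_t frame_at_ust(uint64_t ust, uint64_t ref_ust,
                             uint64_t ref_frame, double sample_rate)
{
    double dt_sec = (double)(ust - ref_ust) / 1e9;
    return ref_frame + (uint64_t)(dt_sec * sample_rate + 0.5);
}

The drift between the system clock and the sample clock is exactly what
the PLL mentioned above would still have to correct for.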

> >A synchronous execution model also isn't really needed for a
> >non-realtime application that might not be able to run with
> >SCHED_FIFO and could as well just have a large buffer that it fills
> >from time to time.
>
> on SCHED_FIFO: most people who use media players like the fact that
> when you adjust the volume, the volume changes "in real time". you
> don't need to go down to 1.3msec interrupt intervals for this, but you
> definitely need to do better than 187msec.

Then maybe we do all need SCHED_FIFO for these applications. It does
complicate security though. Is there a way for an application to use
SCHED_FIFO for its audio part without being SUID root?
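For reference, the call itself is simple; the problem is purely one of
privilege. A sketch (minimal error handling) of promoting only the audio
thread, which fails with EPERM unless the process has root privileges,
hence the SUID question:

/* Sketch: requesting SCHED_FIFO for the audio thread only. */
#include <pthread.h>
#include <sched.h>
#include <stdio.h>
#include <string.h>

static void request_fifo(pthread_t thread, int priority)
{
    struct sched_param param;
    memset(&param, 0, sizeof(param));
    param.sched_priority = priority;   /* below the kernel's IRQ threads */
    int err = pthread_setschedparam(thread, SCHED_FIFO, &param);
    if (err != 0)
        fprintf(stderr, "SCHED_FIFO refused: %s\n", strerror(err));
}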

> on synchronous execution: this isn't about the operation of a single
> program. if you've only got one program, then there is no "synchrony"
> to be concerned about. the SE model used by CoreAudio, JACK and others
> exists to ensure that the streams are always in sync with *each
> other*. you don't need low latency to implement SE - they are
> orthogonal characteristics. that said, the dmSDK model doesn't scale
> well to low latency as far as i can see, and it certainly isn't very
> friendly toward SE.

That's what I thought. Too bad. We could do with a standard media API.
It would be nice if JACK could use the UST/MSC. Does it have something
similar?
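For comparison, JACK's synchronous execution model looks roughly like
this from the client side: every client registers a process callback and
the server runs them all once per cycle, which is what keeps the streams
in sync with each other. A rough sketch (error handling omitted):

/* Sketch of JACK's synchronous-execution style: each client's process()
 * runs once per graph cycle, so all streams advance together. */
#include <jack/jack.h>
#include <string.h>

static jack_port_t *out_port;

static int process(jack_nframes_t nframes, void *arg)
{
    jack_default_audio_sample_t *out =
        jack_port_get_buffer(out_port, nframes);
    memset(out, 0, nframes * sizeof(*out)); /* silence; a real client renders here */
    return 0;
}

int run_client(void)
{
    jack_client_t *client = jack_client_new("example"); /* early JACK entry point */
    if (!client)
        return -1;
    out_port = jack_port_register(client, "out",
                                  JACK_DEFAULT_AUDIO_TYPE, JackPortIsOutput, 0);
    jack_set_process_callback(client, process, NULL);
    return jack_activate(client);
}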

> >It is not clear to me if or how well dmSDK supports inter-application
> >audio streaming.
>
> it works. you just open a new "jack" (heh), and deliver/receive
> messages to/from it. the jacks can cross application boundaries, just
> as they cross kernel/user space ones.

A jack isn't symmetric though; I mean the receiving end of an output jack
does not behave like an input jack. So there is a difference between a
'device' and an 'application'. Does JACK have this distinction? I don't
think it is necessary, although it might be because of the UST/MSC
synchronisation. In my MIDI API I do have a distinction between a logical
device that has an output and an application that receives MIDI data.
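To illustrate the asymmetry I mean (names purely hypothetical, not taken
from my API or anyone else's): a device owns an output that applications
attach readers to, rather than there being two interchangeable port types.

/* Hypothetical sketch of the device/application asymmetry:
 * a device owns an output stream; applications attach readers to it. */
struct midi_reader {
    /* called for each incoming event */
    int (*deliver)(const unsigned char *event, unsigned len, void *ctx);
    void *ctx;
    struct midi_reader *next;
};

struct midi_device_output {
    const char *name;            /* e.g. "card0/port0" */
    struct midi_reader *readers; /* applications currently attached */
};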

> >> you could probably reimplement a synchronous execution system like
> >> JACK on top of OpenML, but then what would be the point of that?
>
> >You'd have a seperate low level library, and a utility library for
> >the synchronous execution. I'm programming a MIDI API right now, and
> >for receiving MIDI I can choose between a 'callback' or a 'read'
> >interface. The read interface is the more fundamental one; A
> >'callback' interface can easily be implemented on top of it, so that
> >is what I intend to do. For a MIDI API it would be nice to timestamp
> >the events with a single clock, that is also used by the audio/video
> >API. OpenML would provide this. At least that part I think is ok.
>
> Well. The problem is: to interoperate "usefully" (perhaps i should say
> "properly"), the apps need to be using the SE API, not the underlying
> one. This is precisely the case right now, where JACK uses ALSA as the
> underlying API for audio. Apps can clearly use ALSA directly if they
> wish, but if they do, they can't participate with JACK, at least not
> in a way that preserves all of JACK's semantics. This doesn't say that
> ALSA or JACK are dumb, and it clearly doesn't negate JACK's dependence
> on ALSA. But the JACK API conceptually "supersedes" ALSA. I don't mean
> that it's superior or anything like that. I mean that it's designed
> around a different and "more abstract" concept. If you build an SE
> model on top of dmSDK/OpenML, the dmSDK/OpenML disappears just like
> ALSA disappears within JACK.

There is a difference in that ALSA is used only as an endpoint, whereas
OpenML could possibly be used for application interconnection as well.

> Put more concisely: where an API provides communication methods as
> well as mere data access and transformation methods, everyone who
> wants to communicate must use the same protocol that the API fronts
> for. dmSDK/OpenML clearly *wants* to be that API.

You are right. I was just hoping for a standard media API. For video it
might still be ok. And OpenGL extensions for synchronisation would be
nice, as well as the display sync. I think the UST approach is good for
that.

> BTW, it's not at all clear to me that a read interface is the more
> fundamental one. I think that Unix-heads like myself tend to feel that
> way about it because we've grown up with read and write this way. If
> we'd grown up with a completely memory-mapped system that simply
> interrupted a thread when data was ready, and you then used direct
> pointer access to "read" it (as is possible in several interesting 64
> bit operating systems with a single address space), i think you might
> see the poll/interrupt API as the fundamental one. I mean, anyone can
> write read() based on memcpy(), right ? :)

Exactly! So read() is the more fundamental one. :)
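More seriously, this is why I intend to build the callback interface on
top of read: it only takes a dispatch thread. A sketch, where midi_read()
and the callback type are invented names standing in for whatever the
underlying API provides:

/* Sketch: layering a callback interface on a blocking read() style
 * MIDI interface. Run with pthread_create(&tid, NULL, dispatch_thread, &args). */
#include <pthread.h>

typedef void (*midi_event_cb)(const unsigned char *event, unsigned len, void *ctx);

struct dispatch_args {
    int fd;              /* descriptor the read interface exposes */
    midi_event_cb cb;
    void *ctx;
};

extern int midi_read(int fd, unsigned char *buf, unsigned maxlen); /* blocking read */

static void *dispatch_thread(void *p)
{
    struct dispatch_args *a = p;
    unsigned char buf[256];
    for (;;) {
        int len = midi_read(a->fd, buf, sizeof(buf));
        if (len <= 0)
            break;       /* device closed or error */
        a->cb(buf, (unsigned)len, a->ctx);
    }
    return NULL;
}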

--martijn


