Re: [linux-audio-dev] introduction & ideas


Subject: Re: [linux-audio-dev] introduction & ideas
From: Paul Davis (pbd_AT_Op.Net)
Date: Mon Feb 25 2002 - 03:31:12 EET


>The idea of a single 'system clock', (POSIX CLOCK_MONOTONIC would do)
>to synchronise different media streams and midi (which is not in
>OpenML) is the correct way IMHO.

in existing digital systems, the way you synchronize independent
streams is to provide a common *driving* clock signal like word clock.

the UST proposed by dmSDK/OpenML doesn't act as a clock in the sense
of, say, word clock. it's just a way of finding out the current time. as
such, it cannot play the role that word clock plays in
synchronization. it's much more like the role that SMPTE plays in audio
systems - it doesn't really provide proper audio sync at all, it just
provides a low-resolution check on "where are we now", and leaves the
implementation of a PLL that can correct for drift to something else.
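
for illustration, here's a minimal sketch of what "finding out the
current time" amounts to on linux, using the POSIX CLOCK_MONOTONIC
clock from the quoted suggestion. it gives you a monotonically
increasing nanosecond count you can stamp events with - but nothing
here drives a sample clock:

    #include <stdint.h>
    #include <stdio.h>
    #include <time.h>

    /* read a UST-style timestamp: useful for stamping events,
       useless for disciplining an audio interface's word clock */
    static int64_t get_ust(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return (int64_t)ts.tv_sec * 1000000000LL + ts.tv_nsec;
    }

    int main(void)
    {
        printf("now: %lld ns\n", (long long)get_ust());
        return 0;
    }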

>A synchronous execution model also isn't really needed for a
>non-realtime application that might not be able to run with
>SCHED_FIFO and could as well just have a large buffer that it fills
>from time to time.

on SCHED_FIFO: most people who use media players like the fact that
when you adjust the volume, the volume changes "in real time". you
don't need to go down to 1.3msec interrupt intervals for this, but you
definitely need to do better than 187msec.
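
a minimal sketch of what asking for SCHED_FIFO looks like (it needs
root privileges on a stock kernel, and the priority value here is
arbitrary):

    #include <sched.h>
    #include <stdio.h>

    /* put the calling process into the SCHED_FIFO class so that
       things like a volume change get serviced promptly */
    static int go_realtime(int priority)
    {
        struct sched_param p;
        p.sched_priority = priority;
        if (sched_setscheduler(0, SCHED_FIFO, &p) != 0) {
            perror("sched_setscheduler");
            return -1;
        }
        return 0;
    }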

on synchronous execution: this isn't about the operation of a single
program. if you've only got one program, then there is no "synchrony"
to be concerned about. the SE model used by CoreAudio, JACK and others
exists to ensure that the streams are always in sync with *each
other*. you don't need low latency to implement SE - they are
orthogonal characteristics. that said, the dmSDK model doesn't scale
well to low latency as far as i can see, and it certainly isn't very
friendly toward SE.
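
for concreteness, a minimal sketch of the SE callback style using the
JACK C API (client and port names are made up; the point is that the
server invokes every client's process() in lockstep, once per period):

    #include <jack/jack.h>
    #include <string.h>
    #include <unistd.h>

    jack_port_t *in_port, *out_port;

    /* called by the server once per period, in step with every
       other client - this is what keeps streams in sync with
       *each other* */
    static int process(jack_nframes_t nframes, void *arg)
    {
        jack_default_audio_sample_t *in, *out;
        in  = jack_port_get_buffer(in_port, nframes);
        out = jack_port_get_buffer(out_port, nframes);
        memcpy(out, in, nframes * sizeof(*in));  /* pass-through */
        return 0;
    }

    int main(void)
    {
        jack_client_t *client = jack_client_open("se_demo", JackNullOption, NULL);
        if (!client)
            return 1;
        in_port  = jack_port_register(client, "in", JACK_DEFAULT_AUDIO_TYPE,
                                      JackPortIsInput, 0);
        out_port = jack_port_register(client, "out", JACK_DEFAULT_AUDIO_TYPE,
                                      JackPortIsOutput, 0);
        jack_set_process_callback(client, process, NULL);
        jack_activate(client);
        sleep(30);
        jack_client_close(client);
        return 0;
    }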

>It is not clear to me if or how well dmSDK supports inter application audio
>streaming.

it works. you just open a new "jack" (heh), and deliver/receive
messages to/from it. the jacks can cross application boundaries, just
as they cross kernel/user space ones.
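
for comparison, the same cross-application hookup sketched with JACK's
own API (the port names are hypothetical, and the two ends can live in
two different processes):

    #include <jack/jack.h>

    int main(void)
    {
        jack_client_t *c = jack_client_open("patchbay", JackNullOption, NULL);
        if (!c)
            return 1;
        /* hypothetical port names, owned by two separate clients */
        jack_connect(c, "sampler:out_l", "mixer:in_1");
        jack_client_close(c);
        return 0;
    }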

>> you could probably reimplement a synchronous execution system like
>> JACK on top of OpenML, but then what would be the point of that?

>You'd have a separate low-level library, and a utility library for
>the synchronous execution. I'm programming a MIDI API right now, and
>for receiving MIDI I can choose between a 'callback' or a 'read'
>interface. The read interface is the more fundamental one; a
>'callback' interface can easily be implemented on top of it, so that
>is what I intend to do. For a MIDI API it would be nice to timestamp
>the events with a single clock that is also used by the audio/video
>API. OpenML would provide this. At least that part I think is ok.
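
a minimal sketch of that callback-on-top-of-read idea, assuming a
hypothetical blocking midi_read() and a UST-stamped event structure
(neither is a real API here):

    #include <pthread.h>
    #include <stdint.h>
    #include <stdio.h>

    /* hypothetical event type and blocking read call; only the
       wrapping pattern is the point */
    typedef struct { int64_t ust; unsigned char bytes[3]; } midi_event_t;
    extern int midi_read(midi_event_t *ev);   /* blocks until an event */

    typedef void (*midi_callback_t)(const midi_event_t *ev, void *arg);
    struct cb_ctx { midi_callback_t cb; void *arg; };

    /* the callback interface: one thread blocks in midi_read()
       and hands each event to the user's function */
    static void *midi_thread(void *p)
    {
        struct cb_ctx *ctx = p;
        midi_event_t ev;
        while (midi_read(&ev) == 0)
            ctx->cb(&ev, ctx->arg);
        return NULL;
    }

used like: fill in a cb_ctx and pthread_create(&t, NULL, midi_thread,
&ctx).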

Well. The problem is: to interoperate "usefully" (perhaps i should say
"properly"), the apps need to be using the SE API, not the underlying
one. This is precisely the case right now, where JACK uses ALSA as the
underlying API for audio. Apps can clearly use ALSA directly if they
wish, but if they do, they can't participate with JACK, at least not
in a way that preserves all of JACK's semantics. This doesn't say that
ALSA or JACK are dumb, and it clearly doesn't negate JACK's dependence
on ALSA. But the JACK API conceptually "supersedes" ALSA. I don't mean
that it's superior or anything like that. I mean that it's designed
around a different and "more abstract" concept. If you build an SE
model on top of dmSDK/OpenML, the dmSDK/OpenML disappears just like
ALSA disappears within JACK.

Put more concisely: where an API provides communication methods as
well as mere data access and transformation methods, everyone who
wants to communicate must use the same protocol that the API fronts
for. dmSDK/OpenML clearly *wants* to be that API.

BTW, it's not at all clear to me that a read interface is the more
fundamental one. I think that Unix-heads like myself tend to feel that
way about it because we've grown up with read and write this way. If
we'd grown up with a completely memory-mapped system that simply
interrupted a thread when data was ready, and you then used direct
pointer access to "read" it (as is possible in several interesting 64
bit operating systems with a single address space), i think you might
see the poll/interrupt API as the fundamental one. I mean, anyone can
write read() based on memcpy(), right? :)
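
a minimal sketch of exactly that: a read()-style call built over a
hypothetical memory-mapped ring that a driver fills behind our back
(the indices grow without wrapping and are reduced modulo the size on
access):

    #include <stdint.h>
    #include <string.h>

    /* hypothetical mapped region; the producer bumps 'head' as
       data arrives */
    struct mapped_stream {
        const uint8_t   *buf;
        size_t           size;
        volatile size_t  head;   /* written by the producer */
        size_t           tail;   /* our read position */
    };

    /* read() built on nothing but pointer access and memcpy():
       copy out whatever has arrived since the last call */
    static size_t stream_read(struct mapped_stream *s, void *dst, size_t n)
    {
        size_t avail = s->head - s->tail;
        size_t off, first;

        if (n > avail)
            n = avail;
        off = s->tail % s->size;
        first = s->size - off;
        if (first > n)
            first = n;
        memcpy(dst, s->buf + off, first);
        memcpy((uint8_t *)dst + first, s->buf, n - first);
        s->tail += n;
        return n;
    }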

--p

