Re: [linux-audio-dev] introduction & ideas


Subject: Re: [linux-audio-dev] introduction & ideas
From: Martijn Sipkema (msipkema_AT_sipkema-digital.com)
Date: Mon Feb 25 2002 - 02:51:27 EET


> "establish synergy to multi-purpose and re-purpose content for a
> variety of distribution mediums ...."
>
> gack. who writes this dreck?

ok, you have a point here :)

> >OpenML aims to be a cross platform media API. I wonder what the people
> >on this list think about it. Is it suitable for professional low
> >latency audio?
>
> from an audio perspective, it's based around SGI's dmSDK. it doesn't
> provide a synchronous execution model, just messages and
> buffering. end of story from my perspective.

But for a low-level API that doesn't impose policy, I don't think that is a
bad thing per se, and, as you already mentioned, you can easily implement a
synchronous execution model on top of dmSDK. But are there any drawbacks to
the dmSDK API when used this way? It seems to me quite an elegant low-level
API, with the UST/MSC time information it returns.
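
To make that concrete, the arithmetic an application would do with the
returned UST/MSC pairs is roughly this (a minimal sketch; the struct and
function names are mine, not dmSDK's):

#include <stdint.h>

/* One UST/MSC pair as reported by the API: the Unadjusted System Time
 * (nanoseconds) at which buffer number 'msc' passed through the jack. */
struct ustmsc {
    int64_t ust;
    int64_t msc;
};

/* Extrapolate from two measured pairs to the UST at which some other
 * buffer number will pass (or passed) through the jack. */
static int64_t ust_at_msc(struct ustmsc a, struct ustmsc b, int64_t msc)
{
    int64_t ns_per_buffer = (b.ust - a.ust) / (b.msc - a.msc);
    return b.ust + (msc - b.msc) * ns_per_buffer;
}

/* If buffers a and b of two streams were meant to hit their jacks at
 * the same moment, the drift is just the difference of their USTs;
 * positive means the first stream is running late. */
static int64_t drift_ns(struct ustmsc a, struct ustmsc b)
{
    return a.ust - b.ust;
}

No callbacks needed: the application polls the timestamps and corrects
however it likes.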

The idea of a single 'system clock' (POSIX CLOCK_MONOTONIC would do) to
synchronise the different media streams and MIDI (which is not in OpenML) is
the correct way, IMHO.
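
On Linux that clock could simply be read like this (a sketch; whether an
OpenML implementation actually backs UST with CLOCK_MONOTONIC is up to the
implementation, the point is only that everything shares one clock):

#include <time.h>
#include <stdint.h>

/* The shared timebase, as nanoseconds. CLOCK_MONOTONIC is never
 * stepped by settimeofday()/NTP time jumps, which is what makes it
 * usable as a common reference for audio, video and MIDI timestamps. */
static int64_t now_ns(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (int64_t)ts.tv_sec * 1000000000LL + ts.tv_nsec;
}

A MIDI event stamped with now_ns() on receipt can then be compared
directly against the UST values the audio/video side reports.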

A synchronous execution model also isn't really needed for a non-realtime
application that might not be able to run with SCHED_FIFO; such an
application could just as well use a large buffer that it fills from time to
time, something like the sketch below.
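
Something like this, say (the extern functions are placeholders for
whatever the real buffered API provides, not actual OpenML calls):

#include <unistd.h>

#define RATE       48000
#define HIGH_WATER (4 * RATE)   /* keep ~4 seconds of audio queued */

/* placeholders for the underlying message/buffer API */
extern long queued_frames(void);                    /* frames not yet played */
extern void queue_frames(const float *buf, long n); /* append to the queue   */
extern long render(float *buf, long max);           /* produce more audio    */

void play_non_realtime(void)
{
    static float buf[RATE];   /* refill in ~1 second chunks */

    for (;;) {
        while (queued_frames() < HIGH_WATER) {
            long n = render(buf, RATE);
            queue_frames(buf, n);
        }
        /* a late wakeup just eats into the seconds of queued audio */
        usleep(250 * 1000);
    }
}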

It is not clear to me if, or how well, dmSDK supports inter-application
audio streaming.

> the spec says:
>
> Normal operating system methods of synchronization fail when multiple
> streams of media must stay "in sync" with each other. Each stream, as
> has been described earlier in this chapter, is broken into a set of
> buffers and put into a queue to avoid the large (and unpredictable)
> processing delays that frequently occur on non-realtime operating
> systems. However, a new problem is introduced by now having multiple
> independent queues of buffers that need to be synchronized.
>
> To solve this problem, ML provides feedback to the application about
> when each buffer actually started passing through the jack. By
> looking at the returned timestamps, the application can see how much
> two streams of buffers are drifting from each other, relative to how
> far apart they should be. It can then make corrections, for
> example, skipping a video frame, to reduce the drift.
>
> pardon my arrogance, but: bwahahahahahaha!!
>
> IMHO, this is a system designed for stuff like DVD playback, animation
> creation/viewing, "consumer"-type stuff. it's not well designed for
> building black boxes that happen to run a general purpose OS and are
> used as dedicated audio components.
>
> you could probably reimplement a synchronous execution system like
> JACK on top of OpenML, but then what would be the point of that?

You'd have a separate low-level library, and a utility library for the
synchronous execution. I'm programming a MIDI API right now, and for
receiving MIDI I can choose between a 'callback' or a 'read' interface. The
read interface is the more fundamental one; a 'callback' interface can
easily be implemented on top of it, so that is what I intend to do (a
sketch follows). For a MIDI API it would be nice to timestamp the events
with a single clock that is also used by the audio/video API. OpenML
would provide this. At least that part I think is ok.
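
The layering I mean is no more than this (a sketch; midi_read() and
struct midi_event are stand-ins, not the actual API I'm writing):

#include <pthread.h>
#include <stddef.h>
#include <stdint.h>

struct midi_event {
    int64_t time_ns;   /* stamped with the shared monotonic clock */
    uint8_t data[3];
    size_t  len;
};

/* the fundamental primitive: block until the next event arrives */
extern int midi_read(struct midi_event *ev);

typedef void (*midi_callback)(const struct midi_event *ev, void *arg);

struct cb_ctx { midi_callback cb; void *arg; };

/* the 'callback' interface is just a thread looping on the 'read' one */
static void *dispatch_thread(void *p)
{
    struct cb_ctx *ctx = p;
    struct midi_event ev;
    while (midi_read(&ev) == 0)
        ctx->cb(&ev, ctx->arg);
    return NULL;
}

int midi_set_callback(midi_callback cb, void *arg, pthread_t *tid)
{
    static struct cb_ctx ctx;
    ctx.cb = cb;
    ctx.arg = arg;
    return pthread_create(tid, NULL, dispatch_thread, &ctx);
}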

--martijn

