Subject: Re: [linux-audio-dev] introduction & ideas
From: Paul Davis (pbd_AT_Op.Net)
Date: Sun Feb 24 2002 - 15:42:51 EET
"establish synergy to multi-purpose and re-purpose content for a
variety of distribution mediums ...."
gack. who writes this dreck?
>OpenML aims to be a cross platform media API. I wonder what the people
>on this list think about it. Is it suitable for professional low
>latency audio?
from an audio perspective, it's based around SGI's dmSDK. it doesn't
provide a synchronous execution model, just messages and
buffering. end of story from my perspective.
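to illustrate the distinction (this is a toy sketch, not either real
API — all class and method names here are made up): in a
message/buffering model the application enqueues data and only finds
out afterwards when it actually ran, whereas in a synchronous model
like JACK's the engine calls into the application at the exact moment
data is needed.

```python
# Toy contrast between a buffered/queued model and a synchronous
# callback model. Nothing here is the OpenML or JACK API; the names
# are purely illustrative.

from collections import deque


class BufferedOutput:
    """Buffered model: send() returns immediately; the device drains
    the queue on its own schedule and reports timestamps after the fact."""

    def __init__(self):
        self.queue = deque()
        self.played = []  # (buffer, timestamp) feedback, after the fact

    def send(self, buf):
        self.queue.append(buf)  # no timing guarantee at this point

    def tick(self, now_ns):
        # driven by the device, not the application
        if self.queue:
            self.played.append((self.queue.popleft(), now_ns))


class SynchronousEngine:
    """Synchronous model: the engine calls *into* the application once
    per period, so audio is computed exactly when it is needed."""

    def __init__(self, process_callback):
        self.process = process_callback

    def tick(self, nframes):
        return self.process(nframes)
```

the point of the contrast: in the first model the application can only
react to timing after it happens; in the second, timing is imposed on
the application by construction.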
the spec says:
Normal operating system methods of synchronization fail when multiple
streams of media must stay "in sync" with each other. Each stream, as
has been described earlier in this chapter, is broken into a set of
buffers and put into a queue to avoid the large (and unpredictable)
processing delays that frequently occur on non-realtime operating
systems. However, a new problem is introduced by now having multiple
independent queues of buffers that need to be synchronized.
To solve this problem, ML provides feedback to the application about
when each buffer actually started passing through the jack. By
looking at the returned timestamps, the application can see how much
two streams of buffers are drifting from each other, relative to how
far apart they should be. It can then make corrections, for
example skipping a video frame, to reduce the drift.
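the correction loop the spec describes amounts to something like the
following sketch (the function names, the drift convention, and the
constants are all my assumptions for illustration — this is not OpenML
code):

```python
# Hedged sketch of timestamp-based drift correction between two
# streams, per the quoted spec: compare the returned per-buffer
# timestamps and skip a video frame once the streams drift too far.

FRAME_PERIOD_NS = 33_366_667        # ~29.97 fps frame period (example)
TOLERANCE_NS = FRAME_PERIOD_NS // 2


def drift_ns(audio_start_ns, video_start_ns, nominal_offset_ns=0):
    # positive drift => the video buffer started later than it should
    return (video_start_ns - audio_start_ns) - nominal_offset_ns


def frames_to_keep(audio_ts, video_ts):
    """Return indices of video frames to present; drop one whenever
    the measured drift exceeds the tolerance, pulling the video
    stream back toward the audio stream."""
    kept = []
    correction = 0  # time regained so far by skipped frames
    for i, (a, v) in enumerate(zip(audio_ts, video_ts)):
        if drift_ns(a, v - correction) > TOLERANCE_NS:
            correction += FRAME_PERIOD_NS  # skip this frame
            continue
        kept.append(i)
    return kept
```

note that this is purely reactive: the application only learns about
timing errors after the buffers have already passed through the jack,
which is exactly the property being criticized below.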
pardon my arrogance, but: bwahahahahahaha!!
IMHO, this is a system designed for stuff like DVD playback, animation
creation/viewing, "consumer"-type stuff. it's not well designed for
building black boxes that happen to run a general purpose OS and are
used as dedicated audio components.
you could probably reimplement a synchronous execution system like
JACK on top of OpenML, but then what would be the point of that?
--p
This archive was generated by hypermail 2b28 : Sun Feb 24 2002 - 15:35:02 EET