Subject: Re: [linux-audio-dev] Re: sched_setscheduler question ...
From: John Lazzaro (lazzaro_AT_CS.Berkeley.EDU)
Date: Sun Jun 11 2000 - 21:12:15 EEST
> I hope that my point of view might be useful.
Yes, it has -- this whole discussion with you,
Benno, and Kai has been very helpful, and I think
I understand the issues well enough now to
re-architect -timesync mode in a way that will
let blocking write()'s do most of the SCHED_OTHER
synchronizing work, with explicit yielding by
the SCHED_FIFO thread reserved for the sort of
situations Kai commented on in his list reply.
Thanks to everyone for participating -- I've been
lurking for a few months in hopes of picking up
all the information I needed for the rewrite
passively, but in the end there's no substitute
for interactivity.
Here's an answer to the question in your
post about sfront, just to wrap things up:
> Who invented -timesync mode ? Is it part of the
> MP4-SA specification ?
Our first specification of timesync mode was an
attempt to handle all of these issues at once:
---
[1] Structured Audio has a normative declaration of the "execution decoder cycle" -- what happens when. See:
http://www.cs.berkeley.edu/~lazzaro/sa/book/control/saolc/index.html#order
on the right-hand panel for a simplified version, or 5.7.3.3.6 in the spec for the normative version, p. 25 in the document linked as Appendix H on this page:
http://www.cs.berkeley.edu/~lazzaro/sa/book/append/index.html
For this discussion, the main thing to note about this spec is that new control input gets processed once per control cycle, at the very start. The length of the control cycle is set by SAOL code, and can be arbitrary (but is, thankfully, fixed).
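As a rough sketch of that structure -- all names and rates below are mine, invented for illustration; this is not sfront code or spec text:

/* Hypothetical sketch of the once-per-control-cycle structure
 * described above; names and rates are invented for illustration. */

#define SRATE  44100                   /* audio sample rate            */
#define KRATE  420                     /* control rate, set by SAOL    */
#define KCYCLE (SRATE / KRATE)         /* audio samples per k-cycle    */

extern void  process_control_input(void); /* new MIDI, start of cycle */
extern void  run_kpass(void);             /* control-rate SAOL code   */
extern float run_apass(void);             /* one audio-rate sample    */
extern void  output_sample(float s);

void decoder_cycle(void)
{
    /* Control input is consumed exactly once, at the very start. */
    process_control_input();
    run_kpass();

    /* Then the fixed (but arbitrary-length) run of audio samples. */
    for (int i = 0; i < KCYCLE; i++)
        output_sample(run_apass());
}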
Although Annex 5.F in the standard offered an easy out, we wanted real-time MIDI control input to conform to 5.7.3.3.6 as closely as possible. More specifically, we wanted a narrow aperture around the start of a control cycle to capture MIDI control data -- all complete MIDI commands arriving to the left of the aperture window would be guaranteed to make it into the next execution decoder cycle, and all complete MIDI control data arriving to the right of the aperture window would be guaranteed not to make it into that cycle.
The motivation for this was to minimize the variance (jitter) in the control loop, the importance of which our musician friends at CNMAT
http://www.cnmat.berkeley.edu/
down the street always emphasize. Being alive and spinning, rather than blocked, near the aperture window seemed to be the way to minimize the variance -- at least until this SCHED_FIFO discussion made the positive aspects of blocking in -timesync mode clear.
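A minimal sketch of that spinning approach, assuming a gettimeofday()-based clock and a nonblocking MIDI poll (the helper names are mine, not sfront's):

/* Hypothetical spin loop: stay runnable across the aperture window so
 * a MIDI command landing just before the cycle boundary is captured.
 * Not sfront code; the helpers are assumed to exist elsewhere. */

#include <sys/time.h>

extern long long next_cycle_usec(void);   /* wall-clock time of next cycle */
extern void poll_midi_nonblocking(void);  /* drain any pending MIDI bytes  */

static long long now_usec(void)
{
    struct timeval tv;
    gettimeofday(&tv, 0);
    return (long long)tv.tv_sec * 1000000 + tv.tv_usec;
}

void spin_to_aperture(void)
{
    long long deadline = next_cycle_usec();

    /* Busy-wait instead of blocking, so the process is awake --
     * not waiting on a scheduler wakeup -- when the window opens. */
    while (now_usec() < deadline)
        poll_midi_nonblocking();
}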
A secondary motivation for the low jitter (and for not using Annex 5.F techniques) was that if this real-time MIDI input data was also being captured for later playback under Structured Audio semantics, we wanted the audio to be normatively identical in the real-time and stored cases -- no "the replay feels different from what I played" issues.
[2] We wanted sfront to have a general audio and control API that would work for both files and real-time I/O, so that people could contribute drivers to the distribution without having to understand sfront internals. See:
http://www.cs.berkeley.edu/~lazzaro/sa/sfman/devel/cdriver/intro/index.html
to get a sense of where the API is currently.
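Just to give a flavor of what such a pluggable driver shape can look like -- this is an invented sketch, not the actual sfront interface, which is documented at the URL above:

/* Invented sketch of a pluggable audio/control driver; the real
 * sfront API differs in its details. A file driver and a real-time
 * driver would each fill in the same small set of entry points. */

struct audio_driver {
    int  (*open_io)(int srate, int channels);
    int  (*read_samples)(float *buf, int nsamples);        /* input, if any */
    int  (*write_samples)(const float *buf, int nsamples); /* output        */
    void (*close_io)(void);
};

struct control_driver {
    int  (*open_control)(void);
    int  (*poll_events)(unsigned char *midi, int maxbytes); /* nonblocking  */
    void (*close_control)(void);
};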
[3] The control cycle length is set by the SAOL program, and when multiplied out into a number of audio sample bytes, it will be arbitrary. This, coupled with the fact that many devices really want their "buffers" or "fragments" to be a power of two, meant that the control cycle of the SAOL decoder and the audio I/O process were not going to happen in a tightly synchronized way (i.e. every control cycle fits into exactly N fragments), but were going to float against each other in time. The APIs mentioned in [2] reflect this. This floating would seem to imply that every so often, you'll be blocked at precisely the wrong time -- right around the aperture window -- if you let your writes block at all. Thus, the spinning approach.
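To make that floating concrete (the numbers here are arbitrary examples of mine, not sfront defaults):

/* Where do control-cycle boundaries land inside power-of-two
 * fragments? Example numbers only; not sfront defaults. */

#include <stdio.h>

int main(void)
{
    int srate = 44100, krate = 420;            /* set by the SAOL program  */
    int cycle_bytes = (srate / krate) * 2 * 2; /* 16-bit stereo: 420 bytes */
    int frag_bytes  = 256;                     /* power-of-two fragment    */

    long pos = 0;
    for (int k = 0; k < 5; k++) {
        pos += cycle_bytes;
        printf("cycle %d ends %ld bytes into a fragment\n",
               k, pos % frag_bytes);
    }
    return 0;   /* offsets 164, 72, 236, 144, 52 -- no fixed alignment */
}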
---
So in short, we wanted the odds of being blocked during the aperture window to be really low, to minimize the variance, and this is what led to using a large number of fragments. I think this isn't dissimilar to the SMPTE issues you were discussing on the list a few months ago -- any synchronization to an unyielding time window would seem to want the odds of being blocked during that window to be as low as possible.
However, what's different in this case: given the granularity of MIDI, the plausible range of control rate values SAOL users are likely to pick, real-world soundcard and PCI-bus issues, etc., is variance truly a major issue? Intuitively, it may not actually be a problem in practice -- we'll see when I re-architect sfront's -timesync mode to use a small number of fragments.
--jl
-------------------------------------------------------------------------
John Lazzaro -- Research Specialist -- CS Division -- EECS -- UC Berkeley
lazzaro [at] cs [dot] berkeley [dot] edu    www.cs.berkeley.edu/~lazzaro
-------------------------------------------------------------------------