Re: [linux-audio-dev] introduction & ideas


Subject: Re: [linux-audio-dev] introduction & ideas
From: David Olofson (david_AT_gardena.net)
Date: Sun Mar 10 2002 - 02:20:29 EET


On Saturday 09 March 2002 17.37, Martijn Sipkema wrote:
[...]
> I think accurate MIDI timing eventually comes down to how well the
> operating system performs.

To put it simply: I think that line of thinking eventually leads to
heavy abuse of the system. You are *not* supposed to have a general
purpose CPU manage low level timing, if you can help it.

I mean, you aren't using one IRQ per audio sample, are you? ;-)

MIDI doesn't strike me as being very different - it basically just
has a notion of "no data here", whereas audio interfaces do not.

I don't even view 3D accelerators as very different, for that matter!
You pass commands to it through a FIFO buffer, and when you've given
it all the commands to render one frame, you push a command that waits
until the vertical retrace (*) and then swaps the new buffer into
display. This way, there's no need to "hardsync" your application
with the retrace to have sustained, full frame rate animation. Just
ensure that the command buffer never runs empty.

(*) Most Linux drivers don't make use of this where present, and
   don't even care to handle retrace sync'ed flips any other way -
   which results in Windoze still being the only platform where you
   can have *really* smooth animation. :-/
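
In (pseudo) C, the idea is something like this - the command names and
the FIFO layout are made up, not any real driver API:

/* Hypothetical sketch of the command FIFO idea - the command names
 * and the FIFO layout are made up, not any real driver API. */
#include <stddef.h>

enum gpu_cmd { CMD_RENDER, CMD_SWAP_ON_VBLANK };

#define FIFO_SIZE 256

struct cmd_fifo {
    enum gpu_cmd buf[FIFO_SIZE];
    size_t head, tail;          /* head = next write, tail = next read */
};

static int fifo_push(struct cmd_fifo *f, enum gpu_cmd c)
{
    size_t next = (f->head + 1) % FIFO_SIZE;
    if (next == f->tail)
        return -1;              /* full; the app is far enough ahead */
    f->buf[f->head] = c;
    f->head = next;
    return 0;
}

/* Queue one frame's worth of rendering, then tell the hardware to
 * flip at the next retrace. As long as this FIFO never runs empty,
 * you get sustained full frame rate with no CPU side retrace sync. */
static void queue_frame(struct cmd_fifo *f)
{
    fifo_push(f, CMD_RENDER);
    fifo_push(f, CMD_SWAP_ON_VBLANK);
}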

> If Linux had a better performing
> nanosleep(), i.e. if it would reprogram the clock chip to generate
> one shot interrupts at the exact time the first ready thread will
> need to be woken up, then the MIDI timing would be close to what
> the hardware is able to obtain. I don't think a new clock would
> make much sense. You could just run the kernel with a higher clock
> resolution I think.

Yeah, but
        1) The system timer should not be abused for things that
           the MIDI interface + driver should handle, and...

        2) Timing below the ms level should be done by the MIDI
           interface in the first place, to avoid scheduling
           overhead for every single MIDI message.
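
Something like this is what I have in mind; a hypothetical sketch, not
an existing driver interface:

/* Hypothetical sketch, not an existing driver interface: the point
 * is that the *interface* (or its driver) consumes timestamped
 * events, so the application never needs to be scheduled with
 * sub-ms accuracy for every single MIDI message. */
#include <stdint.h>

struct midi_event {
    uint64_t when_ns;   /* absolute transmit time (CLOCK_MONOTONIC) */
    uint8_t  size;      /* 1..3 bytes for channel messages */
    uint8_t  data[3];   /* e.g. 0x90, note, velocity */
};

/* The application queues events a little ahead of time; the
 * interface firmware (or a tiny driver ISR) fires them off at
 * 'when_ns' with hardware level accuracy. */
int midi_queue_event(int port_fd, const struct midi_event *ev);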

Now, good design is one thing. Available hardware is another.

Nearly all MIDI interfaces and most video cards in existence are
utterly poor designs! :-(

(Yeah, I'm in bashing mode today, partly as a result of getting into
a discussion that ended with me being criticized for supporting
features that are only available under Win32 DirectDraw or
DirectGraphics. What should I do? Sit silently and accept that
Windoze beats the crap out of Linux when it comes to serious gaming?
Guess I'll just have to figure out a way of implementing retrace
sync'ed triple buffering without those retrace IRQs, and without
busy-wait polling, as there's no other Linus Compliant (TM) way of
fixing this...)

[...]
> For multiport interfaces there are internal FIFOs besides the
> serial port FIFO. Also, this latency is inherent to MIDI and there
> is nothing that can be done about it. Knowing all about the FIFO
> would perhaps enable feedback to the application about when the
> MIDI command was actually transmitted, but that information isn't
> of any practical use. The application
> could have known it was flooding the MIDI interface by looking at
> the number of events it is sending.

Yeah, I think I actually suggested keeping track of "MIDI bytes sent
since last buffer empty state", in order to estimate the current
latency for a MIDI byte sent to the driver...
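
Roughly like this (assuming the driver could report how many bytes are
still queued since the FIFO was last empty - which it currently can't):

/* Sketch: estimate the wire latency of the next byte from the number
 * of bytes queued since the FIFO was last empty. The per-byte cost
 * is just 10 bits (start + 8 data + stop) at 31250 baud = 320 us. */
#include <stdint.h>

#define MIDI_BAUD        31250
#define BITS_PER_BYTE    10
#define NS_PER_MIDI_BYTE (1000000000ULL * BITS_PER_BYTE / MIDI_BAUD)

static uint64_t estimated_tx_latency_ns(unsigned bytes_in_flight)
{
    /* e.g. a 3-byte note-on behind 10 queued bytes starts roughly
     * 3.2 ms late - easily audible on a sharp attack. */
    return (uint64_t)bytes_in_flight * NS_PER_MIDI_BYTE;
}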

[...]
> No interface I know of supports a buffered stream, except maybe the
> new Midiman interfaces, and they thus would have some delay. The
> Steinberg unit has an internal clock. The Emagic units use buffered
> commands that can be triggered using a short message. They won't
> give details of course.

Hmm. Honestly, these all seem like cheap hardware hacks to me - not
proper solutions.

Even a h/w FIFO on the parallel port, with the half-full output
hooked to an IRQ generating pin, the output driven by a 31250 Hz
oscillator, and each output bit driving a MIDI out port looks better
to me - you could even use DMA transfers, and you'd have total
control in the driver, down to the MIDI bit resolution! ;-)

> > Not that a good Linux/lowlatency build causes all that much
> > jitter, but it's still more than you'd get from any dedicated,
> > professional MIDI device, such as a hardware sequencer. *That's*
> > what this is all about, as I understand it; achieving dedicated
> > h/w performance on workstation based hardware.
> >
> > We've already proved that this is possible with audio. Now, it
> > turns out that MIDI is actually *harder*, at least if we want to
> > achieve *true* dedicated h/w accuracy... *heh*
>
> I hope (and think) that Linux will actually be able to provide good
> enough scheduling for this sometime in the near future.

So do I - but just as with these bl**dy video cards I'm whining
about, I still consider this kind of solution a kludge. (Damn! Am I
starting to sound like Linus now!? ;-)

> > > IMHO a MIDI driver should be a (real time) user space process
> > > able to accurately transmit MIDI events on the hardware from
> > > CLOCK_MONOTONIC or something similar.
> >
> > Well, that's fine if
> > 1) there's a h/w timer to wake the thread up at the
> > right times, and
>
> I disagree. Just use POSIX realtime clocks on a good kernel.

Sure - but you *are* aware that reprogramming the timer on virtually
any PC mainboard stalls your CPU for hundreds of cycles, as it has
to be done through the dreaded ISA-derived "port" logic, right?

RTL and RTAI schedulers do this all the time (*), and people are
whining about the overhead on a regular basis.

(*) except on SMP systems, where you can use the much better timers
   of the standard SMP "glue" logic - which, unfortunately, is
   disabled on virtually all UP (single CPU) mainboards.
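
For the curious, a one-shot reload of the legacy i8254 looks roughly
like this; each outb() is an ISA speed bus cycle of about a
microsecond, which is where those cycles go:

/* Sketch of a one-shot reprogramming of the i8254 PIT. Three port
 * writes, each an ISA-era bus cycle of roughly 1 us - hundreds of
 * CPU cycles on a modern machine, every time you rearm the timer.
 * (One shot is limited to 65535 counts, i.e. about 55 ms.) */
#include <stdint.h>
#include <sys/io.h>     /* outb(); user space needs ioperm(), a
                         * driver would use <asm/io.h> instead */

#define PIT_CH0   0x40          /* channel 0 data port */
#define PIT_CTRL  0x43          /* mode/command register */
#define PIT_HZ    1193182ULL

static void pit_one_shot(uint64_t ns)
{
    uint16_t count = (uint16_t)(PIT_HZ * ns / 1000000000ULL);

    outb(0x30, PIT_CTRL);               /* ch 0, lo/hi byte, mode 0 */
    outb(count & 0xff, PIT_CH0);        /* reload value, low byte */
    outb((count >> 8) & 0xff, PIT_CH0); /* reload value, high byte */
}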

> You
> just happen to need a realtime kernel for MIDI.

No. You need a real time kernel to output MIDI with accurate timing,
unless you have a properly designed MIDI interface.
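
To be concrete, the RT user space sender Martijn describes would look
something like this. (A minimal sketch; midi_send() and the event
source are placeholders.)

/* Minimal sketch of a user space RT MIDI sender: SCHED_FIFO plus an
 * absolute clock_nanosleep() on CLOCK_MONOTONIC. The output jitter
 * is exactly the kernel's scheduling jitter - hence the need for a
 * real time kernel. midi_send() is a placeholder, not a real API. */
#define _POSIX_C_SOURCE 200112L
#include <sched.h>
#include <time.h>
#include <stdint.h>

static void wait_until_ns(uint64_t t)
{
    struct timespec ts = {
        .tv_sec  = t / 1000000000ULL,
        .tv_nsec = t % 1000000000ULL
    };
    clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &ts, NULL);
}

int main(void)
{
    struct sched_param sp = { .sched_priority = 80 };
    sched_setscheduler(0, SCHED_FIFO, &sp);    /* needs privileges */

    /* per event: wait_until_ns(ev->when_ns);
     *            midi_send(fd, ev->data, ev->size); */
    return 0;
}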

> And then there will
> still be jitter in a dense
> MIDI stream, since a message takes about 1ms to transmit.

Yes - but having total control of where you are, you can potentially
improve the situation a little by having the application sort events
according to priority (i.e. "how sharp is the attack of this sound"),
so that the most important events are played as close to the exact
time as possible, while less important events are placed before and
after, according to their timing relation to the higher priority
events.
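
A sketch of that ordering; the 'priority' field is an assumption - the
point is just that, within one tick, the sharpest attacks get the
slots closest to the nominal time:

/* Sketch: within one tick, transmit the sharpest-attack events as
 * close to the nominal time as possible and let the softer ones
 * absorb the ~320 us per byte serialization delay. 'priority' is an
 * assumed per-event field, not something current sequencers have. */
#include <stdlib.h>
#include <stdint.h>

struct ev {
    uint64_t tick;      /* nominal time */
    int      priority;  /* higher = sharper attack = more critical */
};

static int cmp_ev(const void *a, const void *b)
{
    const struct ev *x = a, *y = b;
    if (x->tick != y->tick)
        return (x->tick < y->tick) ? -1 : 1;
    return y->priority - x->priority;  /* same tick: critical first */
}

/* A smarter version would also move some low priority events *ahead*
 * of the tick, so the critical ones land dead on it. */
static void order_events(struct ev *e, size_t n)
{
    qsort(e, n, sizeof *e, cmp_ev);
}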

I've noticed that explicitly ordering events manually in the
sequencer, using offsets below the timing resolution, can improve
tightness a lot with a fast synth. (Like the Roland JV-1080, which -
unlike the older models - doesn't have a dog slow MCU for MIDI
decoding.)

I would say the benefits of better utilization of the MIDI bandwidth
are very real. This is *not* just theory, but a real possibility -
that unfortunately requires better hardware to be fully explored.

> > 2) it's OK that the scheduling jitter is visible on
> > the MIDI outputs.
>
> This will not be significant with a good scheduler.

No chain is stronger than its weakest link, and Linux is not a "µs
class" RTOS - and I don't think it ever will be. Professional users
will demand that *all* events are on time, every time. Heck, *I* do,
and I'm not really a professional...

(I didn't ask for, or get, any money for that old intro they used on
a house/techno tour. So I'm still not a pro, right? ;-)

> > Without "buffering" MIDI interfaces, a workstation is not going
> > to deliver the same timing accuracy as say, a h/w sequencer - not
> > without a hard real time kernel like RTL or RTAI.
>
> That's a problem with the kernel then.

You don't expect the kernel guys to sacrifice overall throughput for
near RTL/RTAI class scheduling accuracy, do you? :-)

> BeOS could do this.

From what I've heard (STILL no real figures!), it's not all that
much, if at all, better than Linux/lowlatency... Most importantly, we
have no proof whatsoever that BeOS can continuously deliver that kind
of worst case latency during heavy system stress, the way
Linux/lowlatency can.

(Hard is HARD, ok? I'm still representing the RTLF, and I'm still a
hard RT programmer at heart. "It ain't hard until proven so." In
fact, Linux/lowlatency shouldn't be considered hard RT for critical
applications either, due to its complexity, but it's reliable enough
that I've never seen a missed deadline on a proven working setup. :-)

> Linux
> could probably too without too much modification.

If "without too much modification" includes making it even more
preemptible than the latest 2.5 kernels, sure...

Note that I'm not saying that it'll never happen! Just look at how
the issues with scaling to high end SMP systems more or less
invalidated fundamental design rules.

> The scheduling
> latency is good enough with the latency/preemptive kernel patches.
> It is just the clock resolution that is a little low.

Well, then I guess you just don't need to push MIDI to the very
limits for your application.

Using too few synths with too few MIDI inputs is a different matter,
though. (Perhaps I should just buy a rack of JV-2080's and E-mu
samplers and stop whining? Naah. I'm planning to ditch MIDI for synth
control altogether instead. :-)

> When buffering, MIDI through performance will suffer.

Yes and no: Latency is one thing - jitter is another. Most people
will find jitter to be *much* more harmful.

"Buffering" doesn't mean that you have to buffer several ms. As MIDI
doesn't react as violently to missed deadlines as does audio, you can
cheat and cut latencies below that of audio by using less buffering,
and accepting the occasional, tiny peak. (Of course, that requires
that the driver and h/w provide means of resyncing with the "MIDI
clock" whenever you get buffer xruns!)

//David

.- M A I A -------------------------------------------------.
| Multimedia Application Integration Architecture |
| A Free/Open Source Plugin API for Professional Multimedia |
`----------------------> http://www.linuxaudiodev.com/maia -'
.- David Olofson -------------------------------------------.
| Audio Hacker - Open Source Advocate - Singer - Songwriter |
`-------------------------------------> http://olofson.net -'

