Re: [linux-audio-dev] introduction & ideas


Subject: Re: [linux-audio-dev] introduction & ideas
From: David Olofson (david_AT_gardena.net)
Date: Wed Mar 13 2002 - 07:47:47 EET


On Sunday 10 March 2002 23.47, Martijn Sipkema wrote:
> > > I think accurate MIDI timing eventually comes down to how well
> > > the operating system performs.
> >
> > To put it simply: I think that line of thinking eventually leads
> > to heavy abuse of the system. You are *not* supposed to have a
> > general purpose CPU manage low level timing, if you can help it.
>
> Most current MIDI interfaces are just serial ports; they have no
> clock or timing facilities. If they did, you could use that for
> accurate scheduling. So, I can't help it.

Unless you want to build your own interface, of course. ;-)

This reminds me of a linux-kernel thread on LinModems that would
require RTLinux to work reliably, and how including RTLinux in
mainstream kernels would inspire even worse designs - which would
eventually result in RTLinux going soft real time, due to too many
drivers using it... *heh*

My point is that the same applies to "lowlatency" class real time in
user space, to some extent.

> > I mean, you aren't using one IRQ per audio sample, are you? ;-)
>
> Well, eh... No. You? :)

I have been known to, actually. (Just had to see if RTLinux was up to
it - and it was! Not much power left for useful work, though... :-)

[...]
> > Yeah, I think I actually suggested keeping track of "MIDI bytes
> > sent since last buffer empty state", in order to estimate the
> > current latency for a MIDI byte sent to the driver...
>
> If you're late you're late.

Yes indeed - but I don't see what that has to do with this.

If you're referring to "buffer empty state", this has nothing to do
with being late. It's just what happens occasionally whenever you're
not constantly utilizing the full MIDI bandwidth. The MIDI output
thread should see this happening in its estimates, and use it to
reset its prediction variables, to avoid drifting out of sync with
the MIDI interface.

(Of course, none of this would be needed if you could just read back
the current state of the MIDI interface, or perhaps even better, just
drive the MIDI thread from an IRQ that is synchronized with the MIDI
UART in a very well defined way, in terms of "MIDI bit/byte clock".)
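The estimation idea could be sketched roughly like this (the function name and interface are invented for illustration). MIDI runs at 31250 bits/s with 10 bits per byte on the wire, so each pending byte adds 320 us:

```c
#include <stdint.h>

/* MIDI is 31250 bits/s; each byte is 10 bits on the wire
   (1 start + 8 data + 1 stop), i.e. 320 us per byte. */
#define MIDI_US_PER_BYTE 320

/* Hypothetical estimator: 'pending' is the number of bytes handed
   to the driver since the last observed buffer-empty state. A real
   implementation would reset 'pending' to 0 whenever the output
   buffer is seen empty, to keep the estimate from drifting. */
static uint32_t midi_latency_us(uint32_t pending)
{
	return pending * MIDI_US_PER_BYTE;
}
```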

> [...]
>
> > Sure - but you *are* aware that reprogramming the timer on
> > virtually any PC main board stalls your CPU for hundreds of
> > cycles, as it has to be done through the dreaded ISA derived
> > "port" logic, right?
> >
> > RTL and RTAI schedulers do this all the time (*), and people are
> > whining about the overhead on a regular basis.
> >
> > (*) except on SMP systems where you can use the much better
> > timers of the standard SMP "glue" logic, that unfortunately is
> > disabled on virtually all single UP mainboards.
>
> I did not know that. There is nothing that can be done about this?

Nothing short of using an SMP mainboard, one of the few UP mainboards
that enable SMP features, or throwing in a PCI timer card. (If you
can even find such a thing. Most industrial I/O cards have various
timers on them, but the money those cost will usually buy you a good
SMP mainboard - and a CPU for the extra socket, if you're lucky!)

> Well, 1ms jitter would still be quite good enough for MIDI I think.

Yes. (If people even consider using Windows for serious MIDI work, it
has to be "acceptable", at least...)

But then you'd have a ~1 kHz timer firing off IRQs all the time
anyway - so why not just sleep on the RTC?

BTW, is it possible to read N bytes from the RTC device, to say
"Please skip N cycles before waking me up"...?
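As far as I can tell from the driver, read() doesn't interpret the byte count that way: each read blocks until the next interrupt and returns a single unsigned long, whose low byte is the interrupt type and whose remaining bytes count interrupts since the last read. So you can't ask for N cycles up front, but you can loop (and the count even tells you about missed ticks). A rough sketch, with invented function names:

```c
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/rtc.h>

/* Decode a value read from /dev/rtc: low byte = interrupt type,
   remaining bytes = number of interrupts since the last read. */
static unsigned long rtc_irq_count(unsigned long data)
{
	return data >> 8;
}

/* Sketch: sleep for 'n' periodic RTC ticks at 'hz' (must be a
   power of two, 2..8192). Returns 0 on success, -1 on error. */
static int rtc_sleep_ticks(unsigned n, unsigned long hz)
{
	unsigned long data;
	unsigned seen = 0;
	int fd = open("/dev/rtc", O_RDONLY);

	if (fd < 0)
		return -1;
	if (ioctl(fd, RTC_IRQP_SET, hz) < 0 ||
	    ioctl(fd, RTC_PIE_ON, 0) < 0) {
		close(fd);
		return -1;
	}
	while (seen < n) {
		if (read(fd, &data, sizeof(data)) != sizeof(data))
			break;
		seen += rtc_irq_count(data); /* counts missed ticks too */
	}
	ioctl(fd, RTC_PIE_OFF, 0);
	close(fd);
	return seen >= n ? 0 : -1;
}
```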

Another idea: How about sharing the RTC through multi-open? I would
assume this has been considered and rejected because only *some*
features can be shared. However, the point is that with multimedia
applications requiring higher frequency "clocks" than HZ, simply
being able to get a "heartbeat" from the RTC device would be
sufficient. The RTC would run at the highest rate requested by anyone
using the device, and the driver would keep track of when to wake up
which sleeper.
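The bookkeeping for that would be pretty trivial, I think. Since RTC rates are powers of two, every client's rate divides the device rate evenly. A sketch of the idea (all names invented, and a real driver would of course keep this per-file-descriptor):

```c
#define MAX_CLIENTS 8

/* The device runs at the highest rate any client requested;
   client i should be woken every (device_hz / client_hz[i])
   heartbeat ticks. */
struct rtc_share {
	unsigned long device_hz;
	unsigned long client_hz[MAX_CLIENTS];
	unsigned long tick;		/* heartbeat counter */
	int nclients;
};

static unsigned long rtc_share_rate(struct rtc_share *s)
{
	unsigned long max = 0;
	for (int i = 0; i < s->nclients; i++)
		if (s->client_hz[i] > max)
			max = s->client_hz[i];
	return s->device_hz = max;
}

/* Called once per heartbeat IRQ; nonzero if client i is due. */
static int rtc_share_due(const struct rtc_share *s, int i)
{
	unsigned long div = s->device_hz / s->client_hz[i];
	return (s->tick % div) == 0;
}
```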

Of course, I could just read the code (more carefully than last time)
and see if I can figure something out - I just want to know if anyone
knows if there's any point in even considering it.

> The standard Linux/x86 10ms certainly is not though.

Right.

> > > You
> > > just happen to need a realtime kernel for MIDI.
> >
> > No. You need a real time kernel to output MIDI with accurate
> > timing, unless you have a properly designed MIDI interface.
>
> But with most MIDI hardware you will need a realtime kernel for
> accurate timing.

Yeah, that seems to be the brutal (and rather alarming) truth...

> And even with properly designed hardware it is
> nice for a sequencer application to be able to do better
> than 10ms sleep accuracy.

It is indeed, but that doesn't warrant running with sub 1 ms output
buffering all the time. As it is now, even the most primitive MIDI
player will have to do that... :-/

> Besides, I don't think the overhead of having to reschedule for
> every MIDI event would be that large.

Well, several non-x86 archs *do* have HZ == 1024... Then again, that
doesn't imply that they actually *switch* once per jiffy - let alone
twice or more.

Anyway, we're already using these kinds of rates for low latency
audio, and it doesn't result in much overhead there.

> > > And then there will
> > > still be jitter in a dense
> > > MIDI stream, since a message takes about 1ms to transmit.
> >
> > Yes - but having total control of where you are, you can
> > potentially improve the situation a little by having the
> > application sort events according to priority (ie "how sharp is
> > the attack of this sound"), so that the most important events are
> > played as close to the exact time as possible, while less
> > important events are placed before and after, according to their
> > timing relation to the higher priority events.
>
> A sequencer could support track priority, most seem to do this
> depending on the track number in some way.

It would actually be rather surprising if a sequencer *doesn't*
gather events in some simple, well defined order. :-)
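The ordering I'm describing could look something like this (struct and names are made up for the example). Sort the events that share a nominal tick by priority, then fan them out so the most important one goes exactly on time:

```c
#include <stdlib.h>

/* Hypothetical event: 'prio' is higher for sounds with sharper
   attacks; 'slot' is the assigned transmission position relative
   to the nominal time, in wire-byte units (0 = exactly on time). */
struct midi_event {
	int prio;
	int slot;
};

static int by_prio_desc(const void *a, const void *b)
{
	return ((const struct midi_event *)b)->prio -
	       ((const struct midi_event *)a)->prio;
}

/* Assign slots 0, -1, +1, -2, +2, ... in descending priority, so
   the most important event lands on the nominal time and less
   important ones are placed before and after it. */
static void order_events(struct midi_event *ev, int n)
{
	qsort(ev, n, sizeof(*ev), by_prio_desc);
	for (int i = 0; i < n; i++) {
		int k = (i + 1) / 2;
		ev[i].slot = (i % 2) ? -k : k;
	}
}
```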

> > I've noticed that explicitly ordering events manually in the
> > sequencer, using offsets below the timing resolution, can improve
> > tightness a lot with a fast synth. (Like the Roland JV-1080,
> > which - unlike the older models - doesn't have a dog slow MCU for
> > MIDI decoding.)
> >
> > I would say the benefits of better utilization of the MIDI
> > bandwidth are very real. This is *not* just theory, but a real
> > possibility - that unfortunately requires better hardware to be
> > fully explored.
>
> Just don't send too much data over a single MIDI wire.

Of course - but why make it worse than it is by not utilizing the
full MIDI "resolution"?

Besides, it's easy to say "don't send too much data" if you have
enough synths that you can afford to use only 30% of the polyphony of
each one...

[...]
> > You don't expect the kernel guys to sacrifice overall throughput
> > for near RTL/RTAI class scheduling accuracy, do you? :-)
>
> Hmm.. Then they could maybe make it optional, at run time (or
> compile time).

It would have to be compile time - but it probably won't happen, as
it would require that practically *everything* is fully preemptible.
Such an environment is *not* a fun place to hack drivers in. And of
course, it's very different from the current environment.

> Does Linux allow the clock resolution to be set at
> run time like QNX?

No.

[...]
> > Note that I'm not saying that it'll never happen! Just look at
> > how the issues with scaling to high end SMP systems more or less
> > invalidated fundamental design rules.
>
> Yes, the kernel will probably have to become fully preemptible
> anyway and driver writers will have to stop, in the case of a fully
> preemptible kernel, spinlocking for longer than a couple of (40?)
> microseconds.

Yes, but there still seems to be *lots* of work to do before we can
even have 2.2.10-lowlatency class real time on a mainstream kernel.

> > > When buffering, MIDI through performance will suffer.
> >
> > Yes and no: Latency is one thing - jitter is another. Most people
> > will find jitter to be *much* more harmful.
> >
> > "Buffering" doesn't mean that you have to buffer several ms. As
> > MIDI doesn't react as violently to missed deadlines as audio
> > you can cheat and cut latencies below that of audio by using less
> > buffering, and accepting the occasional, tiny peak. (Of course,
> > that requires that the driver and h/w provide means of resyncing
> > with the "MIDI clock" whenever you get buffer xruns!)
>
> I guess that an extra latency of about 2-5 ms would still be
> acceptable.

Yes... We just need to design a proper MIDI interface for that to
actually have the desired effect. *heh*

//David

.- M A I A -------------------------------------------------.
| Multimedia Application Integration Architecture |
| A Free/Open Source Plugin API for Professional Multimedia |
`----------------------> http://www.linuxaudiodev.com/maia -'
.- David Olofson -------------------------------------------.
| Audio Hacker - Open Source Advocate - Singer - Songwriter |
`-------------------------------------> http://olofson.net -'



This archive was generated by hypermail 2b28 : Thu Mar 14 2002 - 01:07:11 EET