Re: [linux-audio-dev] Audio/Midi system - RT prios..

From: Paul Davis <paul@email-addr-hidden>
Date: Sat Dec 31 2005 - 15:03:06 EET

On Sat, 2005-12-31 at 00:04 +0100, fons adriaensen wrote:
> On Fri, Dec 30, 2005 at 05:10:44PM -0500, Paul Davis wrote:
> > On Fri, 2005-12-30 at 22:27 +0100, Pedro Lopez-Cabanillas wrote:
> > > On Friday 30 December 2005 17:37, Werner Schweer wrote:
> > >
> > > > The ALSA seq api is from an ancient time when no realtime threads
> > > > were available in linux. Only a kernel driver could provide usable
> > > > midi timing. But with the introduction of RT threads the
> > > > ALSA seq api is obsolete IMHO.
> > >
> > > I don't agree with this statement. IMHO, a design based on raw MIDI ports used
> > > like simple Unix file descriptors, with every user application implementing
> > > its own event scheduling mechanism, is the ancient and traditional way, and it
> > > should be considered obsolete now in Linux since we have the advanced
> > > queueing capabilities provided by the ALSA sequencer.
> >
> > low latency apps don't want queuing, they just want routing. this is why
> > the ALSA sequencer is obsolete for such apps. frank (v.d.p) had the
> > right idea back when he started this, but i agree with werner's
> > perspective that the queuing facilities are no longer relevant, at least
> > not for "music" or "pro-audio" applications.
>
> I'd agree with Pedro on this.
>
> 1. If things have to be timed accurately, it seems logical to concentrate
> this activity at one point. At least then the timing will be consistent,
> you can impose priority rules in case of conflict, etc.

in a low latency *live* system, "timing" doesn't really exist outside of
the current period. there is no concept of "when" that exists beyond the
end of the current period.

> 2. Translating from data having an implicit or explicit timestamp
> associated with it, to a physical signal having a real physical time is
> something that belongs at the system or even hardware level, just as it
> does for audio.
> When you are dealing with midi in software, it should just be timetagged
> data, just as audio samples are. The only place where the timing matters
> is when midi is output on a real electrical midi port.
> Trying to deliver e.g. note-on events from a software sequencer to a soft
> synth exactly 'on time' is a waste of effort - what the synth needs to know
> is not 'when' on some physical time scale the note starts, but at which
> sample it should start. In other words, the note-on event needs a timestamp
> that can be converted easily to a frame time.

well, clearly, yes. but the point of the ALSA sequencer's queuing
abilities (as distinct from its routing abilities) is really to schedule
stuff "far off" in the future. my claim is that live applications never
need scheduling beyond the end of the current period. as a
result, for this class of applications, most of the ALSA sequencer's
capabilities are redundant, which is compounded by the fact that it
currently has no way of providing sufficiently accurate scheduling (to
be fair, at the moment neither does user space).
Received on Sat Dec 31 16:15:07 2005

This archive was generated by hypermail 2.1.8 : Sat Dec 31 2005 - 16:15:07 EET