Subject: Re: [linux-audio-dev] App intercomunication issues, some views.
From: Paul Davis (pbd_AT_op.net)
Date: Wed Jul 24 2002 - 17:32:10 EEST
>If I use an absolute sleep there is basically no difference. The drift
>will be the same, but instead of scheduling events from 'now' I can
>specify the exact time. So a callback would then be like:
>
>- get the UST and MSC for the first frame of the current buffer for input
MSC implies timestamped buffers. as i've indicated, i think this is a
bad design. Defining the semantics of MSC in a processing graph is
hard (for some of the same reasons that jack_port_get_total_latency()
is hard to implement).
in a system supporting low latency well, the buffer you are working on
should be considered to be related to "now" as closely as
possible. the only gap between "now" and when it was actually
collected or will be delivered to the connectors on the interface is
defined by the latency of the input or output path. either way, the
application should consider itself to be working against the "now"
deadline.
but anyway, this is irrelevant, because MSC is not the timebase to use
for this - you need to use transport time.
>- get the MSC for the first frame of the current buffer for output and
> estimate the UST for that frame.
>- calculate the UST values for the MIDI events that are to occur during the
> output buffer.
>- schedule the MIDI events (the API uses UST)
i see no particular difference between what you've outlined and what i
described, with the exception that the current "MSC" is a global
property, and doesn't belong to buffers. it's the transport time of the
system.
>This has two advantages:
>
>- since you get UST for the input buffer you have a better estimation of
> when the output buffer will be performed.
you're assuming that the output path from the node matches the input
path. this isn't true in a general system: the output latency can be
totally different from the input latency. imagine an FX processor
taking input from an ALSA PCM source but delivering it to another FX
processor running a delay line or similar effect before it goes back
to an ALSA PCM sink.
>- the MIDI messages will be queued and thus will need an absolute timestamp.
they don't need an absolute timestamp to be applied in user-space:
they just need a non-adjustable tag that indicates when they should be
delivered. obviously, at some point, this has to be converted to an
absolute time, but that doesn't need to be part of the API. "deliver
this in 1msec" versus "deliver this at time T" - the latter requires
UST, the former just requires something with the semantics of nanosleep.
>> of course, the above simple example doesn't take audio latency into
>> account, but that's left as an exercise for the reader :)
>
>Using MSC for every buffer, the latency is the difference between output
>MSC and input MSC.
as indicated above, this isn't generally true.
>> now, the truth is that you can do this either way: you can use an
>> absolute current time, and schedule based on that plus the delta, or
>> you can just schedule based on the delta.
>
>But how can this be done in another thread at a later time?
that's an implementation issue, mostly for something like the ALSA midi
layer. i've said before that i'd like to see snd_rawmidi_write_with_delay()
or something equivalent. it would be down to the driver to figure out
how to ensure that the data delivery happens on time, and how it would
work would probably vary between different hardware.
--p
This archive was generated by hypermail 2b28 : Wed Jul 24 2002 - 19:12:34 EEST