Re: [linux-audio-dev] App intercomunication issues, some views.


Subject: Re: [linux-audio-dev] App intercomunication issues, some views.
From: Paul Davis (pbd_AT_op.net)
Date: Wed Jul 24 2002 - 22:22:49 EEST


>> as i've indicated, i think this is a
>> bad design. Defining the semantics of MSC in a processing graph is
>> hard (for some of the same reasons that jack_port_get_total_latency()
>> is hard to implement).
>
>Why? On an audio card interrupt, buffers traverse the entire graph, right? i.e.
>for every 'node' its process() function is called for accepting audio data
>from that interrupt and producing audio data for the buffer available for
>writing on that interrupt, right?

consider:
                      node B
                     /      \
    ALSA PCM -> node A      node D -> ALSA PCM
                     \      /
                      node C

what is the latency for output of data from node A? it depends on
what happens at node B, node C and node D. if node B and node C differ
in their effect on latency, there is no single correct answer to the
question.

>> but anyway, this is irrelevant, because MSC is not the timebase to use
>> for this - you need to use transport time.
>
>Transport time is in frames, right? And there is a transport time available
>for input buffers and output buffers?

No. It's computable by using jack_port_get_total_latency(). buffers
don't come "with" timestamps. the transport time indicates the frame
time at the start of the cycle. it may move forward, backwards, etc.

i just realized that we need to add direction to the transport info
structure, otherwise you can't scrub ... (i've just been working on
adding scrubbing to ardour, a distinctly non-trivial task).

>> >- get the MSC for the first frame of the current buffer for output and
>> > estimate the UST for that frame.
>> >- calculate the UST values for the MIDI events that are to occur during the
>> > output buffer.
>> >- schedule the MIDI events (the API uses UST)
>>
>> i see no particular difference between what you've outlined and what i
>> described, with the exception that the current "MSC" is a global
>> property, and doesn't belong to buffers. its the transport time of the
>> system.
>
>The current MSC isn't global. The MSC is different for input and output
>buffers.

I know. I meant that the equivalent of MSC in JACK *is* global.

>> >- since you get UST for the input buffer you have a better estimation of
>> > when the output buffer will be performed.
>>
>> you\'re making assumptions that the output path from
>> the node matches the input path. this isn\'t true in a general
>> system. the output latency can be totally different from the input
>> latency.
>
>I did not make that assumption I think.
>
>> imagine an FX processor taking input from an ALSA PCM source
>> but delivering it another FX processor running a delay line or similar
>> effect before it goes back to an ALSA PCM sink.
>
>That latency is intended in the effect and has nothing to do with the MSC.

OK, for a delay line, that's true. but for other things .. try saying
that to authors and users of VST plugins, where setInitialDelay() is
critical for not messing up the results of applying various effects to
different tracks. Adding a compressor, for example, typically shifts
the output by a few msecs, which the user does not want. The output
latency is clearly not equivalent to the input latency in the general
case.

>So, instead of using UST throughout you use a relative time in the API,
>which then has to be immediately converted to some absolute time to still
>make any sense later. Also using UST is more accurate.
>
>const struct timespec PERIOD;
>
>for (;;) {
> nanosleep(PERIOD);
>}
>
>is less accurate (will drift) than
>
>struct timespec t;
>clock_gettime(CLOCK_MONOTONIC, &t);
>
>for (;;) {
> t += PERIOD; // i know you can't actually do this with struct timespec...
> clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &t, NULL);
>}

i agree with you on this. the problem is that i don't think that any
real application works like this - instead, the MIDI clock needs to
sync to the current audio transport frame, in which case we have:

----------------------------------------------------------------------
 // compute the delta until the MIDI data should be delivered by
 // checking the current transport time and then:

struct timespec t;
clock_gettime (CLOCK_WHATEVER, &t);
t += delta; /* pseudocode: really add delta to tv_sec/tv_nsec and normalize */
clock_nanosleep (CLOCK_WHATEVER, TIMER_ABSTIME, &t, NULL);
----------------------------------------------------------------------

we could do that with nanosleep() and the effect would be
indistinguishable. we are constantly re-syncing to the transport time
every time we iterate, thus removing the drift issue from
consideration.

>A common 'wall clock' is needed to compare the times of events from
>different media. UST provides this.

i don't entirely agree on this. you need a common clock, yes. but by
making the "wall clock" be something other than one of the media
clocks, you now create roughly twice as much work in handling drift.

however, in practice, since the audio clock is low resolution compared
to, say, MIDI requirements, we have to fall back on a different clock
anyway. Ardour's Session::audible_frame() function, for example, has
to use the cycle counter to estimate the elapsed time since the start
of the last JACK cycle (ie. the last audio interface interrupt).

>> >Using MSC for every buffer, the latency is the difference between output
>> >MSC and input MSC.
>>
>> as indicated above, this isn't generally true.
>
>I do not understand this. JACK should be able to know the latency.

it does. but not by computing the difference you describe.

>> >> now, the truth is that you can do this either way: you can use an
>> >> absolute current time, and schedule based on that plus the delta, or
>> >> you can just schedule based on the delta.
>> >
>> >But how can this be done in another thread at a later time?
>>
>> thats an implementation issue, mostly for something like the ALSA midi
>> layer.
>
>If I tag a MIDI message with an absolute time then the MIDI implementation
>can at a later time still determine when the message is to be performed.
>How can this be done without an absolute stamp?

it can't. the question is whether the user-space API should be using
absolute or relative stamps.

>> i've said before that i'd like to see snd_rawmidi_write_with_delay()
>> or something equivalent. it would be down to the driver to figure out
>> how to ensure that the data delivery happens on time, and how it would
>> work would probably vary between different hardware.
>
>I really don't think it is as easy as adding a write_with_delay(). Should
>the driver accept only time-ordered messages? How about MIDI through for
>a sequencer application?

i didn't suggest getting rid of snd_rawmidi_write(). i meant adding
the "..with_delay()" function. snd_rawmidi_write() would still be
available for immediate delivery. MIDI thru should not really be done
in software, but if it has to be, that's already possible with the
existing ALSA rawmidi API.

--p



This archive was generated by hypermail 2b28 : Wed Jul 24 2002 - 22:33:17 EEST