Re: [linux-audio-dev] App intercomunication issues, some views.

Subject: Re: [linux-audio-dev] App intercomunication issues, some views.
From: Martijn Sipkema (msipkema_AT_sipkema-digital.com)
Date: Thu Jul 25 2002 - 00:18:22 EEST


[...]
>
> consider:
>
>                        node B
>                       /      \
>   ALSA PCM -> node A          node D -> ALSA PCM
>                       \      /
>                        node C
>
> what is the latency for output of data from node A ? it depends on
> what happens at node B, node C and node D. if node B and node C differ
> in their effect on latency, there is no single correct answer to the
> question.

Handling this kind of latency is a different story. Perhaps it shouldn't
even be in JACK since, as you pointed out, there is no right way of handling
it. It could perhaps be handled at a higher level, if at all.

JACK basically still is:

input buffer -> JACK graph -> output buffer

on every callback. If a node does internal buffering, that should not affect
the MSCs.
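
To illustrate what I mean, a minimal pass-through client (the names are
mine and I've left out error handling; the DSP would replace the copy):

#include <string.h>
#include <unistd.h>
#include <jack/jack.h>

jack_port_t *in_port, *out_port;

/* called once per cycle: read the input buffer, write the output buffer */
int process (jack_nframes_t nframes, void *arg)
{
        jack_default_audio_sample_t *in =
                jack_port_get_buffer (in_port, nframes);
        jack_default_audio_sample_t *out =
                jack_port_get_buffer (out_port, nframes);

        /* the node's processing would go here; a plain copy for the sketch */
        memcpy (out, in, nframes * sizeof (jack_default_audio_sample_t));
        return 0;
}

int main (void)
{
        jack_client_t *client = jack_client_new ("sketch");

        in_port  = jack_port_register (client, "in",
                        JACK_DEFAULT_AUDIO_TYPE, JackPortIsInput, 0);
        out_port = jack_port_register (client, "out",
                        JACK_DEFAULT_AUDIO_TYPE, JackPortIsOutput, 0);

        jack_set_process_callback (client, process, NULL);
        jack_activate (client);

        for (;;)
                sleep (1);
}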

[...]
> >Transport time is in frames, right? And there is a transport time available
> >for input buffers and output buffers?
>
> No. It's computable by using jack_port_get_total_latency(). buffers
> don't come "with" timestamps. the transport time indicates the frame
> time at the start of the cycle. it may move forward, backwards, etc.

cycle? You got me lost...
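
As far as I can tell, the number I actually need would then be computed
like this (cycle_start_frame stands for whatever the transport reports at
the start of the cycle; how that is obtained is exactly what I'm unsure
about):

#include <jack/jack.h>

/* given the frame time at the start of the current cycle, compute the
   frame at which a sample written this cycle reaches the converter */
jack_nframes_t
dac_frame (jack_client_t *client, jack_port_t *out_port,
           jack_nframes_t cycle_start_frame)
{
        return cycle_start_frame +
                jack_port_get_total_latency (client, out_port);
}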

[...]
> >The current MSC isn't global. The MSC is different for input and output
> >buffers.
>
> I know. I meant that the equivalent of MSC in JACK *is* global.

Well, then it isn't really equivalent, is it? :)

[...]
> OK, for a delay line, that's true. but for other things .. try saying
> that to authors and users of VST plugins, where setInitialDelay() is
> critical for not messing up the results of applying various effects to
> different tracks. Adding a compressor, for example, typically shifts
> the output by a few msecs, which the user does not want. The output
> latency is clearly not equivalent to the input latency in the general
> case.

If you use a hardware compressor with a high delay, there is also no way
to compensate. It would be nice to have a way to compensate, but JACK is
not the right place for this, I think. Anyway, if it is in JACK, it would
still have to be at a higher level than the basic input/output latency,
i.e. without taking extra node latency into account.
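
If it were done at that higher level, the arithmetic itself is trivial;
something along these lines (my names, latencies in frames as reported by
the nodes):

#include <jack/jack.h>

/* delay each track so that they all line up with the slowest path;
   track_latency[] would come from the nodes' reported latencies */
void compute_compensation (const jack_nframes_t *track_latency,
                           jack_nframes_t *extra_delay, int ntracks)
{
        jack_nframes_t max = 0;
        int i;

        for (i = 0; i < ntracks; i++)
                if (track_latency[i] > max)
                        max = track_latency[i];

        /* pad every track up to the worst case */
        for (i = 0; i < ntracks; i++)
                extra_delay[i] = max - track_latency[i];
}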

> >So, instead of using UST throughout, you use a relative time in the API,
> >which then has to be immediately converted to some absolute time to still
> >make any sense later. Also using UST is more accurate.
> >
> >const struct timespec PERIOD;
> >
> >for (;;) {
> >    nanosleep(&PERIOD, NULL);
> >}
> >
> >is less accurate (will drift) than
> >
> >struct timespec t;
> >clock_gettime(CLOCK_MONOTONIC, &t);
> >
> >for (;;) {
> >    t += PERIOD; // i know you can't actually do this with struct timespec...
> >    clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &t, NULL);
> >}
>
> i agree with you on this. the problem is that i don't think that any
> real application works like this - instead, the MIDI clock needs to
> sync to the current audio transport frame, in which case we have:
>
> ----------------------------------------------------------------------
> // compute the delta until the MIDI data should be delivered by
> // checking the current transport time and then:
>
> struct timespec t;
> clock_gettime (CLOCK_WHATEVER, &t);
> t += delta;
> clock_nanosleep (CLOCK_WHATEVER, TIMER_ABSTIME, &t, NULL);
> ----------------------------------------------------------------------
>
> we could do that with nanosleep() and the effect would be
> indistinguishable. we are constantly re-syncing to the transport time
> every time we iterate, thus removing the drift issue from
> consideration.

There is still the problem of getting an accurate estimate of the system
time of a given audio frame. This is needed to calculate at what time
to output a MIDI message. I think this approach is better than just
counting on being scheduled right after the audio hardware interrupt,
which certainly cannot be assumed with somewhat large audio buffers and
multiple nodes.
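
For what it's worth, the loop can be written correctly with a small
helper for the timespec arithmetic; estimate_delta_ns() stands for
exactly that frame-to-system-time estimate (it is not a real API, just
a placeholder):

#include <time.h>

/* placeholder: maps the current transport frame to a nanosecond delta */
extern long long estimate_delta_ns (void);

/* add a number of nanoseconds to a struct timespec, normalizing */
static void timespec_add_ns (struct timespec *t, long long ns)
{
        t->tv_sec  += ns / 1000000000LL;
        t->tv_nsec += ns % 1000000000LL;
        if (t->tv_nsec >= 1000000000L) {
                t->tv_nsec -= 1000000000L;
                t->tv_sec  += 1;
        }
}

void midi_clock_loop (void)
{
        struct timespec t;

        for (;;) {
                /* re-sync against the transport every iteration,
                   so no drift accumulates */
                long long delta_ns = estimate_delta_ns ();

                clock_gettime (CLOCK_MONOTONIC, &t);
                timespec_add_ns (&t, delta_ns);
                clock_nanosleep (CLOCK_MONOTONIC, TIMER_ABSTIME, &t, NULL);

                /* deliver the MIDI clock byte here */
        }
}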

> >If I tag a MIDI message with an absolute time then the MIDI implementation
> >can at a later time still determine when the message is to be performed.
> >How can this be done without an absolute stamp?
>
> it can't. the question is whether the user-space API should be using
> absolute or relative stamps.

I think it should use absolute stamps. What would be a good reason not to?
I can think of several good reasons why I would want to use absolute time.
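
For example (the struct is purely my illustration, not an existing ALSA
type), an absolutely-stamped message is self-contained: delivery can
happen arbitrarily late without any extra reference point:

#include <time.h>

/* a MIDI message tagged with an absolute time; illustrative only */
struct stamped_midi_msg {
        struct timespec when;           /* when to perform it */
        unsigned char   data[3];
        int             len;
};

/* delivery needs no context: just sleep until the stamp */
void deliver (const struct stamped_midi_msg *m)
{
        clock_nanosleep (CLOCK_MONOTONIC, TIMER_ABSTIME, &m->when, NULL);
        /* write m->data to the device here */
}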

[...]
> i didn't suggest getting rid of snd_rawmidi_write(). i meant adding
> the "..with_delay()" function. snd_rawmidi_write() would still be
> available for immediate delivery. MIDI thru should not really be done
> in software, but if it has to be, that's already possible with the
> existing ALSA rawmidi API.

I don't think there are any alternatives to doing MIDI thru in software.
And having both a snd_rawmidi_write() and a snd_rawmidi_write_with_delay()
really isn't that trivial. How can a correct MIDI stream be guaranteed?
rawmidi doesn't operate on MIDI messages, and thus it is hard to interleave
the two streams. I won't even start on system exclusive...
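
To make the difficulty concrete: merging is only safe at message
boundaries, which already means parsing status bytes. A sketch of just
the length logic (my code; it shows where sysex breaks the scheme):

/* expected total length of a MIDI message given its status byte;
   returns -1 for sysex and friends (variable length, ends with 0xF7) */
static int midi_msg_length (unsigned char status)
{
        if (status >= 0xF8)             /* realtime: may appear anywhere */
                return 1;

        switch (status & 0xF0) {
        case 0x80: case 0x90: case 0xA0:
        case 0xB0: case 0xE0:
                return 3;               /* note, poly pressure, CC, bend */
        case 0xC0: case 0xD0:
                return 2;               /* program change, chan pressure */
        case 0xF0:
                switch (status) {
                case 0xF1: case 0xF3:   return 2;
                case 0xF2:              return 3;
                case 0xF6:              return 1;
                default:                return -1;  /* sysex etc. */
                }
        }
        return -1;                      /* data byte, not a status */
}

A merger would additionally have to track running status on both streams
and hold one back until the other's current message is complete.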

--martijn
