Re: [linux-audio-dev] App intercommunication issues, some views.


Subject: Re: [linux-audio-dev] App intercommunication issues, some views.
From: Paul Davis (pbd_AT_op.net)
Date: Thu Jul 25 2002 - 05:06:05 EEST


>> consider:             node B
>>                      /      \
>> ALSA PCM -> node A            node D -> ALSA PCM
>>                      \      /
>>                       node C
>>
>> what is the latency for output of data from node A ? it depends on
>> what happens at node B, node C and node D. if node B and node C differ
>> in their effect on latency, there is no single correct answer to the
>> question.
>
>Handling this kind of latency is a different story. Perhaps this shouldn't
>even be in JACK since as you pointed out there is no right way of handling
>it. This could perhaps be handled on a higher level if at all.

well, since there isn't a wrong way either, we are happy to leave it
in place :)

>JACK basically still is:
>
>input buffer -> JACK graph -> output buffer

this isn't necessarily correct. JACK can run without anything going in
or out. the internal clients don't know this, but they still function
as if there were true "external" end points for the data. it's more
accurate to just say:

         JACK graph

and nothing else.

>on every callback. If a node does internal buffering that should not affect
>the MSCs.

right, because there isn't really an MSC anywhere. as you noted, the
global jack transport time isn't really equivalent. nothing in the
main JACK API says anything about doing anything except handling a
buffer corresponding to "now" (+/- latency). the transport API couples
"now" with a notion of a significant frame position.

applications that want to work on data for times other than "now (+/-
latency)" can do so, but its not part of the JACK API and they need to
do it using a different mechanism and in another thread.
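
to illustrate the pattern (a sketch only -- the lock-free ringbuffer
helper in jack/ringbuffer.h is one way to do it; the rb and worker
names here are made up):

#include <jack/jack.h>
#include <jack/ringbuffer.h>

static jack_ringbuffer_t *rb;   /* lock-free FIFO, created at startup
                                   with jack_ringbuffer_create() */
static jack_port_t *in_port;

/* RT thread: handle only the buffer for "now", push it to the FIFO */
int process (jack_nframes_t nframes, void *arg)
{
        jack_default_audio_sample_t *in =
                jack_port_get_buffer (in_port, nframes);
        jack_ringbuffer_write (rb, (const char *) in,
                               nframes * sizeof (*in));
        return 0;
}

/* separate, non-RT thread: free to block, seek, hit the disk, and
   generally work on data for times other than "now" */
void *worker (void *arg)
{
        char buf[4096];
        for (;;) {
                size_t n = jack_ringbuffer_read_space (rb);
                if (n > sizeof (buf))
                        n = sizeof (buf);
                jack_ringbuffer_read (rb, buf, n);
                /* ... (a real worker would wait on a condition
                   variable instead of spinning) ... */
        }
        return NULL;
}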

>> >Transport time is in frames, right? And there is a transport time available
>> >for input buffers and output buffers?
>>
>> No. It's computable by using jack_port_get_total_latency(). buffers
>> don't come "with" timestamps. the transport time indicates the frame
>> time at the start of the cycle. it may move forward, backwards, etc.
>
>cycle? You got me lost...

sorry, it's the name for one iteration through the graph after being
woken by the driver. so the transport time is the frame position for
the first sample of the buffers being processed by each process() call.
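
in code, roughly (this sketch uses the transport query call from
current JACK headers; the exact transport API has shifted around over
time, so treat the names as illustrative):

#include <jack/jack.h>
#include <jack/transport.h>

int process (jack_nframes_t nframes, void *arg)
{
        jack_client_t *client = (jack_client_t *) arg;
        jack_position_t pos;
        jack_transport_state_t state = jack_transport_query (client, &pos);

        /* pos.frame is the frame position of the first sample of the
           buffers handled in this callback; it can jump forward or
           backward when the transport is relocated */
        if (state == JackTransportRolling) {
                /* frames [pos.frame, pos.frame + nframes) are "now" */
        }
        return 0;
}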

>If you use a hardware compressor with a high delay there is also no way
>to compensate. It would be nice to have a way to compensate, but JACK is

VST and most DAWs compensate. It's not up to JACK to do it, but a
client that wants to do this needs the right information.

>not the right place for this I think. Anyway, if it is in JACK, it still
>would have to be at a higher level than the basic input/output latency,
>i.e. without taking extra node latency into account.

there isn't any difference in JACK. each port has its own latency
figure associated with it (zero by default). it doesn't matter whether
the port represents a physical input/output connector or
not. jack_port_get_total_latency() traverses the connection graph from
a given port to a terminal point, collecting latency information along
the way.
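
for example, a client can ask for the accumulated figure like this (a
sketch: jack_client_open is from later JACK versions than this thread,
and the port name is just an example):

#include <stdio.h>
#include <jack/jack.h>

int main (void)
{
        jack_client_t *client =
                jack_client_open ("latency-probe", JackNullOption, NULL);
        if (client == NULL)
                return 1;

        jack_port_t *port = jack_port_by_name (client, "system:playback_1");
        if (port) {
                /* walk the graph from this port to a terminal point,
                   summing each port's latency figure (zero by default) */
                jack_nframes_t total =
                        jack_port_get_total_latency (client, port);
                printf ("total latency: %u frames\n", total);
        }

        jack_client_close (client);
        return 0;
}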

>There is still the problem of getting an accurate estimate of system
>time of some audio frame. This is needed to calculate at what time
>to output a MIDI message.

I still don't see why you need this. If I queue an event by saying
"play this in 0.56msecs", how whatever i queued the event with goes
about doing delivering it on time is an internal implementation
detail. there are several mechanisms available, some better than others.

If I say "play this at time T", then yes, some kind of UST is
needed. This is much harder to do than the relative method I described.

> I think this approach is better than just
>counting on being scheduled just after the audio hardware interrupt.
>This is certainly not the case with somewhat large audio buffers and
>multiple nodes.

sure, I don't think the audio clock is ever going to be suitable for
MIDI scheduling. you need the firm timers or KURT patches or a Gravis
Ultrasound interface.

then you just do all scheduling with a relative offset from now. the
largest times will be on the order of a few hundred msecs, and the common
case will be more like 1-5msecs.

     delta_till_emit = event.time - transport_position +
                              jack_port_get_total_latency (relevant_port);
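
fleshed out a bit, the scheduler thread might look like this (a sketch:
event_time, sample_rate and the port are assumed to come from the
surrounding client code, and clock_nanosleep stands in for whichever
timer mechanism is available):

#define _GNU_SOURCE
#include <time.h>
#include <jack/jack.h>

void wait_then_emit (jack_client_t *client, jack_port_t *relevant_port,
                     jack_nframes_t event_time,
                     jack_nframes_t transport_position,
                     jack_nframes_t sample_rate)
{
        /* assumes the event lies in the future */
        jack_nframes_t delta = event_time - transport_position
                + jack_port_get_total_latency (client, relevant_port);

        /* frames -> relative timespec */
        long long ns = (long long) delta * 1000000000LL / sample_rate;
        struct timespec ts = { ns / 1000000000LL, ns % 1000000000LL };

        /* relative wait; the achievable resolution is exactly what the
           firm timers / KURT patches are about */
        clock_nanosleep (CLOCK_MONOTONIC, 0, &ts, NULL);

        /* ... snd_rawmidi_write() the event here ... */
}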

>> I didn't suggest getting rid of snd_rawmidi_write(). I meant adding
>> the "..with_delay()" function. snd_rawmidi_write() would still be
>> available for immediate delivery. MIDI thru should not really be done
>> in software, but if it has to be, that's already possible with the
>> existing ALSA rawmidi API.
>
>I don't think there are any alternatives for doing MIDI thru in software.

"not doing it at all" :)

>And having both a snd_rawmidi_write() and a snd_rawmidi_write_with_delay()
>really isn't that trivial. How can a correct MIDI stream be guaranteed?
>rawmidi doesn't operate on MIDI messages and thus it is hard to interleave
>the two streams. I won't even start on system exclusive...

I didn't say it would be easy. but there are MIDI mergers that have
been around for years and years - it's certainly not impossible to
do. the "huge sysex delays delivery of scheduled data" case is clearly
not handleable: it's an overcommit of available resources (in this
case, MIDI bandwidth), and can't be fixed.

you just put a MIDI parser in the rawmidi stream, and it becomes
fairly easy. MIDI parsing requires very little code and very little state.
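
for instance, a sketch of that parser -- the whole state is the running
status byte plus a count of data bytes still owed; it returns 1 whenever
the stream sits at a message boundary, i.e. a point where a scheduled
message could safely be interleaved (error handling omitted):

#include <stdint.h>

struct midi_parser {
        uint8_t status;    /* running status byte */
        int remaining;     /* data bytes left in the current message */
};

static int data_bytes_for (uint8_t status)
{
        switch (status & 0xF0) {
        case 0xC0: case 0xD0:              /* prog change, chan press. */
                return 1;
        case 0x80: case 0x90: case 0xA0:   /* notes, poly pressure */
        case 0xB0: case 0xE0:              /* control, pitch bend */
                return 2;
        default:                           /* 0xF0: system common */
                return (status == 0xF1 || status == 0xF3) ? 1 :
                       (status == 0xF2) ? 2 : 0;
        }
}

int midi_parser_feed (struct midi_parser *p, uint8_t byte)
{
        if (byte >= 0xF8)          /* realtime: one byte, legal anywhere,
                                      doesn't touch the parser state */
                return p->remaining == 0 && p->status != 0xF0;

        if (byte & 0x80) {         /* status byte starts a new message */
                p->status = byte;
                p->remaining = data_bytes_for (byte);
                return p->remaining == 0 && byte != 0xF0;
        }

        if (p->status == 0)        /* stray data before any status */
                return 0;

        if (p->status == 0xF0)     /* sysex payload: runs until EOX */
                return 0;

        if (p->remaining == 0)     /* running status: re-arm the count */
                p->remaining = data_bytes_for (p->status);

        return --p->remaining == 0;
}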

--p


