Re: [LAD] "enhanced event port" LV2 extension proposal

From: David Olofson <david@email-addr-hidden>
Date: Fri Nov 30 2007 - 12:23:52 EET

On Friday 30 November 2007, Krzysztof Foltman wrote:
[...several points that I totally agree with...]
> If you use integers, perhaps the timestamps should be stored as
> delta values.

That would seem to add complexity with little gain, though I haven't
really thought hard about that...

It seems more straightforward to just use sample frame offsets when
sending; you just grab the loop counter/sample index. However, in the
specific case of my "instant dispatch" architecture, you'd need to
look at the last event in the queue to calculate the delta - but then
again, you need to touch that event anyway, to set the 'next'
field... (Linked lists.) No showstopper issues either way, I think.
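To illustrate, a minimal sketch (hypothetical types and names, not
from any real LV2 header) of appending to such a linked-list queue:
since you touch the tail event anyway to set its 'next' field,
converting the absolute frame offset to a delta in the same pass is
essentially free:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical event/queue types: absolute frame offsets are
 * converted to deltas at enqueue time. */
typedef struct Event {
    struct Event *next;   /* singly linked list */
    uint32_t      delta;  /* frames since the previous event */
} Event;

typedef struct {
    Event   *head, *tail;
    uint32_t last_frame;  /* absolute frame of the last queued event */
} EventQueue;

/* Append 'ev', scheduled at absolute 'frame'. We already have to
 * touch the tail event to set its 'next' field, so computing the
 * delta here adds essentially nothing. */
static void queue_push(EventQueue *q, Event *ev, uint32_t frame)
{
    ev->next  = NULL;
    ev->delta = frame - q->last_frame;
    if (q->tail)
        q->tail->next = ev;
    else
        q->head = ev;
    q->tail       = ev;
    q->last_frame = frame;
}
```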

When receiving, OTOH, deltas would be brilliant! You'd just process
events until you get one with a non-zero delta - and then you process
the number of sample frames indicated by that delta. (Obviously,
end-of-buffer stop condition must be dealt with somewhere. Adding a
dummy "stop" event scheduled for right after the buffer would
eliminate the per-audio-fragment check for "fragment_frames >
remaining_buffer_frames".)

> Perhaps fractional parts could be just stored in events that demand
> fractional timing (ie. grain start event), removing that part from
> generic protocol.

That's another idea I might steal! ;-)

I'm not sure, but it seems that you'd normally not want to drive a
sub-sample timestamped input from an integer timestamped output or
vice versa. An output intended for generating grain timing would be
concerned about generating events at the exact right times, whereas a
normal control output would be value oriented.
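As a concrete sketch of Krzysztof's suggestion (hypothetical struct
layout, not any real extension header): the generic event header stays
integer-only, and only event types that need sub-sample timing, like a
grain start, carry the fractional offset in their own payload:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical layout: the generic protocol keeps integer sample
 * frame timestamps only. */
typedef struct {
    uint32_t frame;  /* integer sample frame offset */
    uint32_t type;
} EventHeader;

/* Only events that demand fractional timing carry the extra field. */
typedef struct {
    EventHeader header;
    uint16_t    frac;  /* fractional offset, in 1/65536-frame units */
    /* ...grain parameters would follow here... */
} GrainStartEvent;

/* Effective start time in (fractional) frames. */
static double grain_start_time(const GrainStartEvent *ev)
{
    return ev->header.frame + ev->frac / 65536.0;
}
```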

This may not seem to matter much at first, but it makes all the
difference in the world if you consider event processors. With pure
values, you might want to add extra events or even regenerate the
signal completely, but this would break down when controlling
something that relies on event timing. Might be worth considering
even in non-modular synth environments, as you might want to edit
these events in a sequencer. This is starting to sound like highly
experimental stuff, though. :-)

> Perhaps we're still overlooking something.

I'd want to try actually implementing some different, sensible plugins
using this before I really decide what makes sense and what doesn't.
Granular synthesis is about the only application I can think of right
now that *really* needs sub-sample accurate timing, so that's the
scenario I'm considering, obviously - along with all the normal code
that doesn't need or want to mess with anything below sample frames.

//David Olofson - Programmer, Composer, Open Source Advocate

.------- http://olofson.net - Games, SDL examples -------.
| http://zeespace.net - 2.5D rendering engine |
| http://audiality.org - Music/audio engine |
| http://eel.olofson.net - Real time scripting |
'-- http://www.reologica.se - Rheology instrumentation --'
_______________________________________________
Linux-audio-dev mailing list
Linux-audio-dev@email-addr-hidden
http://lists.linuxaudio.org/mailman/listinfo/linux-audio-dev
Received on Fri Nov 30 16:15:02 2007