Re: [LAD] LV2 format stuff

From: David Olofson <david@email-addr-hidden>
Date: Wed Nov 14 2007 - 17:26:45 EET

On Wednesday 14 November 2007, Krzysztof Foltman wrote:
> David Olofson wrote:
> > I would think that users might want some way of "upgrading" their
> > project files to use new plugin versions without just manually
> > ripping out and replacing plugins, but even without some API help,
> > I'd rather not see hosts trying to do this automatically...
> Well, a common solution is to store plugin version identifier (it
> could even be a sequence number assigned by plugin author) in the
> song. Then, the plugin is able to convert at least the current
> parameter values (but not, say, automation tracks) on song load.
>
> It doesn't solve *all* the compatibility problems, but can solve the
> most immediate one, I think.

Provided plugins are identified by URIs, and the same URI implies 100%
compatibility, how do you actually find the new version of a
plugin?

Then again, in that case, we're really talking about a brand new
plugin, but it seems to me that there is a useful gray zone here:
plugins that are mostly compatible with their predecessors. New major
versions, if you like.

Provide a version history in the form of an array of URIs, so hosts
can find and deal with this if desired?

Just brainstorming a little here...
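
For example (completely hypothetical; this is not actual LV2 API or
metadata, just an illustration of the idea):

/* Hypothetical sketch: a plugin publishes the URIs of its previous
 * major versions, newest first, so a host that finds an unknown URI
 * in a project can look for a successor and suggest an upgrade. */
typedef struct {
    const char  *uri;            /* URI of this version */
    const char **predecessors;   /* NULL terminated, newest first */
} PluginLineage;

static const char *foo_predecessors[] = {
    "http://example.org/plugins/foo/2",
    "http://example.org/plugins/foo/1",
    NULL
};

static const PluginLineage foo_lineage = {
    "http://example.org/plugins/foo/3",
    foo_predecessors
};

The host would scan the lineages of installed plugins for the URI
stored in the project, and ask the user before swapping anything in.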

> > (suggesting a newer, partially compatible version of the plugin if
> > there is one), but again, no silent automatic upgrades, please.
> > Too much risk of the new version not working as expected.
> >
> Automatic conversion worked with VST and Buzz.

Well, considering we're still talking about 100% compatibility (the
new version has the same unique ID), it *should* work - but in
reality, it all comes down to the quality of the plugins, or rather,
how well they actually maintain this claimed compatibility.

> But, warning the user about possible incompatibility because of a
> newer version is a good idea.

Yes... Just putting a warning message in some log or something could
be a very useful "debugging" tool. If it doesn't sound right, you
start by having a look at that log.

> Maybe a plugin should be able to override it if it's absolutely
> certain that no compatibility problems may arise, but that may cause
> problems :)

Right; everyone *thinks* their bug-fixed versions are 101% compatible
with the old versions - so next thing, hosts start overriding the
override feature. :-D

[...]
> I love the idea of fixed point 16:16 timestamp (assuming the time
> would be relative to current buffer start, not some absolute time).

Yep, that's what I had in mind. (Absolute time definitely belongs in
some optional dedicated timeline interface.)

> Most plugins would just shift timestamps by 16 bits and compare them
> to the loop iterator :) Sounds practical.

Exactly. And besides, even when you do use the fraction, you'll
normally *only* be interested in the fractional part. Assuming you
implement sample accurate timing first (why bother with sub-sample
precision otherwise?), you're already at the very sample the event
should "land" in, so all you need to know is how much to nudge that
initial oscillator phase, or whatever else you need to do.
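
Roughly like this, assuming an oscillator phase in [0, 1) advanced by
phase_inc per sample frame (all names made up, of course):

#include <stdint.h>

typedef struct {
    uint32_t time;      /* 16:16 frames.fraction, from buffer start */
    /* ...type, value etc... */
} Event;

typedef struct {
    double phase;       /* oscillator phase in [0, 1) */
    double phase_inc;   /* phase increment per sample frame */
} Voice;

/* Start a voice with sub-sample accuracy: render the first sample at
 * the first frame at or after the event time, with the phase advanced
 * by however much of a frame has already passed by then. Returns the
 * first frame to render. */
uint32_t start_voice(Voice *v, const Event *ev)
{
    uint32_t frame = ev->time >> 16;
    double   frac  = (ev->time & 0xFFFF) / 65536.0;

    if (frac > 0.0) {
        v->phase = (1.0 - frac) * v->phase_inc;
        return frame + 1;
    }
    v->phase = 0.0;
    return frame;
}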

[...]
> I bet most plugins wouldn't support fractional part of timestamps,
> and those that would, could report it as a separate feature, for use
> in granular synthesis-aware hosts :) Yes, I'm reaching too far ahead
> here, but you kind of asked for it :)

So, a different interface for control events that just happen to have
fractional timestamps? Well, it does the job as far as dedicated
granular synth "plugin packs" go, but then you can't mix these ports
with other control ports. I was kind of thinking truly modular
synthesis here... :-)

> > Other than that, I'm not sure it has much value outside of
> > marketing... Any other real uses, anyone?
> >
> Can't think of any. Events for true oscillator "hard sync", perhaps
> (phase reset with subsample precision).

Yeah, that actually sounds like an interesting application.

Just a moment ago, I realized it can be used for things
like "multilooping", skipping into samples and the like, similar to
the "sampleoffset" command found in some old trackers. That sounds
like modular synth stuff again, though. (Implementing advanced
looping effects as separate plugins, instead of building all
features you can think of into the sampler, only to still forget half
of the ones you actually want.) That is, it's probably going into
Audiality 2, but it may not make sense in LV2.

[...timeline/transport...]
> A separate port type (which would probably be implicitly
> auto-connected by most hosts) would perhaps be nice for that, just
> so that things aren't scattered too much. Although plain float (or
> other) ports for bpm and position could do, too. What do you think?

I prefer the latter, actually. If it's just two values, it doesn't
really need a dedicated interface, I think.

That way, you could even throw in standard event processor plugins to
mess with these. Add some suitably colored noise and you've just
humanized the arpeggiator. :-)

The bad news is that doing this without the right calculations means
tempo and position start to disagree, potentially confusing
plugins that try to track them.

OTOH, you can't really assume that integrating tempo gives you
position and vice versa anyway... Consider a sequencer that's
changing the tempo using linear ramping. Should it send one tempo
event per sample frame? If not, should tempo values be instantaneous
tempo corresponding to the respective timestamps, or should it be
based on integrated position?

My vote: Relax the relation and assume that sequencers will generally
deal in instantaneous values. Tracking this data means you look at
tempo for relative timing, and position for absolute timing, assuming
no exact relation between them. In fact, this results in a useful
bonus feature: A sequencer could *deliberately* send nominal tempo
while advancing position at a different speed, to implement
half/double speed without having arpeggiators and whatnot going
totally insane.

Either way, it's just some values physically, so as far as normal
timeline functionality is concerned, the plugins just need to get
that information one way or another.
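
To make the "instantaneous values" idea concrete (hypothetical event
handlers; none of this is real API):

/* Track tempo and position as independent instantaneous values; use
 * tempo for relative timing and position for absolute timing, and
 * never assume that integrating one gives you the other. */
typedef struct {
    double tempo;      /* latest tempo, BPM */
    double position;   /* latest song position, beats */
    double step;       /* arpeggiator step length, sample frames */
} Transport;

void on_tempo(Transport *t, double bpm, double sample_rate)
{
    t->tempo = bpm;
    t->step  = sample_rate * 60.0 / (bpm * 4.0);  /* 16th note steps */
}

void on_position(Transport *t, double beats)
{
    t->position = beats;   /* jump; don't integrate tempo to get here */
}

A sequencer doing half speed would keep sending the nominal tempo
while advancing position at half rate - and an arpeggiator like the
above, which keys off tempo only, never even notices.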

> > Makes sense to me. An "icon" like this is basically just a small
> > GUI that doesn't take any user input. (Well, it *could*, but
> > shouldn't rely on it, as a host really using it as an icon probably
> > wouldn't care to let it have any input events...)
> >
> It could.

Sure. Maybe the plugin should somehow be told what the host expects?
(Icon, metering, status, master control(s) etc.) Kind of silly if you
render a master volume slider that can't be operated in some
hosts. :-)
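
Maybe just a hint at GUI instantiation time; something like this
(completely made up):

/* Hypothetical: tell the GUI what the host intends to do with it, so
 * it can skip interactive widgets that would never receive input. */
typedef enum {
    GUIROLE_ICON,     /* small, non-interactive rendering */
    GUIROLE_METER,    /* live metering; no input delivered */
    GUIROLE_STATUS,   /* status/overview display */
    GUIROLE_EDITOR    /* full editor; input events will be delivered */
} GUIRole;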

[...very sensible examples elided...]

> The ideas come from BEAST user perspective, when there are certain
> things that require opening too many windows :) Plus, if done well,
> it could be quite an eye-candy for modular environments.

Absolutely - and I think it can be of use outside modular synths as
well. Why not render a column of "icon GUIs" instead of just plugin
names in the insert effect box? (The host might want to reserve the
right to use at least one mouse button for calling up the full GUI
there - but then again, you'd probably want that in a modular synth
as well...)

> > Somewhere around here is where I'd suggest using a "notification"
> > style control interface instead - ie function calls, VST 1 style,
> > or events of some sort. ;-)
> >
> Well, the parameter group bitmask is easy for host and easy for
> plugin, and is completely optional for both (if the host doesn't
> want to bother with setting "parameters changed" bitmask, it can
> just set all 1's - and when the plugin doesn't want to get the
> information about what parameters have changed, it just ignores the
> bitmask and assumes that all parameters changed).

Indeed; it's just that it brings the interface complexity closer to
timestamped events without actually adding more than a small part of
the functionality. Sure, it's optional, but I have this funny idea
that features like that are actually meant to be used. ;-)

> In other words, it's a decent optimization if both host and plugin
> support it, and it's harmless for those which don't support it.

Yep.
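
Spelled out, the plugin side is just something like this (group names
and helpers are made up):

#include <stdint.h>

extern void recalc_filter(void);
extern void recalc_envelopes(void);

/* One bit per parameter group. A host that doesn't care passes ~0u
 * ("everything may have changed"); a plugin that doesn't care ignores
 * the mask and recalculates everything. */
#define PG_FILTER    (1u << 0)   /* cutoff, resonance */
#define PG_ENVELOPE  (1u << 1)   /* ADSR times */

void parameters_changed(uint32_t mask)
{
    if (mask & PG_FILTER)
        recalc_filter();         /* expensive; only when needed */
    if (mask & PG_ENVELOPE)
        recalc_envelopes();
}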

> What's more, supporting it is really easy - for host it's "just look
> up which bits you need to set when changing certain parameters", for
> plugin it's even simpler - check the bits and do certain
> calculations.

Well, in the context of Audiality 2, it's actually not the *host*
doing any of this, as connections and protocols are mostly opaque.
(The host just tells the plugins to make a connection, and that's
that. Depending on the protocol, there could be a single float value,
an audio buffer, an event queue or whatever in between, with each
port having a pointer to it.)

This has some implications as to how protocols are implemented, as you
can't just shift complexity in any specific direction (that is,
towards the host), as there is normally plugin code on both sides,
with just some shared data in between.
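
In sketch form (types are made up, and this is not a finished
design):

/* The host's view: just pair ports up. What the shared pointer means
 * is defined by the protocol, and understood only by the plugin code
 * on either side. */
typedef enum {
    PROTO_FLOAT,    /* buffer is a float *: one shared control value */
    PROTO_AUDIO,    /* buffer is a float *: an audio buffer */
    PROTO_EVENTS    /* buffer is an event queue of some sort */
} Protocol;

typedef struct {
    Protocol proto;
    void    *buffer;    /* shared data; meaning depends on proto */
} Port;

/* Essentially all the host does: */
void connect_ports(Port *out, Port *in, void *shared)
{
    out->buffer = shared;
    in->buffer  = shared;
}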

> VST1-style notification (function call on every parameter change)
> would work too, but it's pretty inefficient, especially when
> changing several parameters at once.

Yeah, that's why I'd never consider that approach for audio. It has
more overhead than timestamped events, can't handle sample accurate
timing (*) and doesn't scale well at all.

(*) Well, the calls can obviously pass timestamps, but that
    does exactly nothing to help plugins implement it.
    Instead, each plugin has to implement its own internal
    event system or similar to be able to make use of the
    timestamps.
       Meanwhile, with the DSSI/XAP/Audiality approach, you
    just loop reading events, checking timestamps and
    processing audio. This interface model is a perfect fit
    for the implementation in most cases. And, it's still
    easy to quantize event processing as needed if your
    inner loop has some granularity > 1 sample frame.
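
To illustrate the point, the whole event loop amounts to something
like this (types and helpers are made up):

#include <stdint.h>

typedef struct Event {
    uint32_t      time;     /* 16:16, relative to buffer start */
    struct Event *next;
} Event;

extern void render_audio(void *plugin, uint32_t from, uint32_t to);
extern void apply_event(void *plugin, const Event *ev);

void process(void *plugin, const Event *ev, uint32_t frames)
{
    uint32_t now = 0;
    while (now < frames) {
        /* Render up to the next event, or the end of the buffer. */
        uint32_t next = frames;
        if (ev && (ev->time >> 16) < next)
            next = ev->time >> 16;
        render_audio(plugin, now, next);
        now = next;

        /* Apply everything that's due at this point. */
        while (ev && (ev->time >> 16) <= now) {
            apply_event(plugin, ev);
            ev = ev->next;
        }
    }
}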

> Float-valued events would also work, although they'd
> push a bit of human work on plugin side, which may be undesirable,
> because there will be more plugins than hosts.

Now you lost me... In what way does the value type affect this?

[...]
> > Just realized that relationship too, but I'm not totally sure
> > about the details yet. I'm probably going to try a 2D addressing
> > approach; some ports may have multiple connections wired to
> > different abstract instances of things (mixer voices, synth
> > voices...) in the plugin.
> >
> My usual suggestion - keep it very simple.

Indeed, I'm trying hard, but keeping it *too* simple just offloads a
heap of issues to the implementations. Polyphonic synth control is a
requirement in the case of my application, so I'm trying to figure
out some nice solution that, if possible, can also be used for other
stuff.

Right now, plain 2D indexing seems to be it; one fixed dimension and
one dynamic, corresponding to "what ports I provide" and "how many of
them", respectively.

For example, your average polyphonic synth would have a number of
global controls with only one instance of each, and a number of voice
controls of which there is one per allocated (potentially virtual)
voice. It could also have a number of output mixer controls, where
you use the second dimension for addressing busses - that is, a
separate 2D matrix of controls, independent of the one for voices.

This should cover most interesting cases, right?
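
In its simplest form it's little more than this (sizes and names made
up):

#define MAX_CONTROLS   8    /* first dimension: which control */
#define MAX_INSTANCES 64    /* second dimension: voice, bus, ... */

/* value[c][0] for global controls; value[c][i] for instance i of a
 * voice or mixer control. The separate matrices mentioned above
 * would just be separate ranges of 'c'. */
typedef struct {
    float value[MAX_CONTROLS][MAX_INSTANCES];
} Controls;

void set_control(Controls *c, int control, int instance, float v)
{
    c->value[control][instance] = v;
}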

> > Is that (not being able to allocate hundreds or thousands of
> > voices at any time, real time safe) an actual restriction to
> > anyone...?
> >
> Not to me. Hundreds/thousands of individually controlled voices is
> an uncommon, extreme case.

Yes, that's what I'm thinking - and unless you're dealing in that kind
of numbers, you could either be smart and only allocate exactly as
many voices as you'll need, or you just grab a sufficient number of
them.

Real time MIDI input might actually be the worst realistic case here,
and if you want to be *totally* safe, you just grab 128 voices per
channel and index them directly using NoteOn/Off pitch.
Realistically, you'll probably do just fine with some 16-32 voices
for all practical purposes. Make it a parameter of the MIDI->event
mapper, in case some users have lots of fingers. ;-)

(Note: The sustain pedal is just a control, and how it's handled is
entirely up to the synth implementation, regardless of how voice
addressing is done. I'm really talking about virtual voices here; not
direct addressing of physical voices - although that's really just a
synth implementation detail, just as it is with MIDI.)
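
That mapping really is as dumb as it sounds; something like this
(send_voice_event() and friends are made up):

#define MIDI_NOTES 128

enum { EV_START, EV_STOP };
extern void send_voice_event(int voice, int type, float value);

/* One pre-allocated virtual voice per MIDI note number, so real time
 * input never triggers allocation. */
typedef struct {
    int voice[MIDI_NOTES];   /* filled in when the channel is set up */
} ChannelMap;

void note_on(ChannelMap *m, int pitch, float velocity)
{
    send_voice_event(m->voice[pitch], EV_START, velocity);
}

void note_off(ChannelMap *m, int pitch)
{
    send_voice_event(m->voice[pitch], EV_STOP, 0.0f);
}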

> Of course, voice management still belongs to a plugin, in my
> opinion, because different plugins can implement it in very
> different ways (monosynth, normal polysynth, polysynth using extra
> voices for unison, sf2 player using voice structures for layers).

I totally agree.

> It's just that the host should be able to tell the plugin to treat
> certain notes in a certain way (individual pitch bend for selected
> notes etc).

Exactly.

What I call "voice addressing" is really just what MIDI is (ab)using
the pitch field for: Telling the synth which *note* I'm talking
about, regardless of whatever it might be wired to inside the synth
ATM - if it's even wired at all.

> Is that acceptable? I think Fruityloops plugin standard had an
> individual control over each note (per-note pitchbends on piano
> roll, etc), and it worked pretty well; too bad I don't really
> remember how they implemented it.

Well, MIDI has this too - although it's very limited: All you've got
is NoteOn, NoteOff and Poly Pressure. :-) (IIRC, you can do more than
that with some extensions, but this doesn't seem to be widely
implemented.)

I think what we want is for synths to be able to have any number of
voice controls (corresponding to MIDI Poly Pressure), just as they
can have any number of plugin wide controls (corresponding to MIDI
CCs).
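
That is, the only real difference between the two kinds is an extra
address field; roughly:

#include <stdint.h>

/* Hypothetical event layout: a plugin wide control (think MIDI CC)
 * and a voice control (think Poly Pressure) differ only in whether
 * 'voice' addresses a particular note. */
typedef struct {
    uint32_t time;      /* 16:16 timestamp, as discussed above */
    uint16_t control;   /* which control */
    int16_t  voice;     /* -1: plugin wide; otherwise a voice/note id */
    float    value;
} ControlEvent;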

[...]
> I think only certain notes would be "tagged" for individual control,

Actually, no, you need this to be able to stop the notes as well
(another voice control event, just as for MIDI) - so it really does
come down to how many notes you want to have playing at once.

> so a limit of 16 "note tags" doesn't seem to be very limiting
> (assuming we use a (channel, note tag) addressing).

I don't care much for channels in the context of plugins (a plugin
instance is a "channel" in my view), but that's another
discussion. :-)

> If that's what you mean, of course. On the other hand, maybe someone
> has use for more than 16 tags per channel?

Well, in the case of Audiality, this is a non-issue. You just hook up
controls for 1024 voices, if you want that many.

The only "problem" here is that you may not be able to do this on the
fly, in real time, since the plugin most probably has to allocate
more internal state memory to handle new connections.

(Then again, that's not a critical issue in Audiality either, as it
allows custom memory managers. Throw in a TLSF allocator with
a "sufficiently large" pool, and you're fine, as long as plugins are
reasonably quick at initializing new voices.)
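
For illustration, such a memory manager hook can be as simple as this
(hypothetical interface; not what Audiality actually ships):

/* The host hands the plugin an allocator to use instead of malloc(),
 * so an RT safe allocator (TLSF or similar) with a preallocated pool
 * can be swapped in. */
typedef struct {
    void *(*alloc)(void *pool, unsigned size);
    void  (*free)(void *pool, void *ptr);
    void  *pool;
} MemManager;

void *new_voice_state(MemManager *mm, unsigned size)
{
    return mm->alloc(mm->pool, size);
}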

> Unfortunately, my experience is limited here - for average
> synth and average musician it's fine, but maybe for things like MIDI
> control of stage lights etc it's not enough?

Wouldn't know about stage lights, but MIDI works pretty well for most
people doing reasonably "normal" music. What we're talking about here
is mostly beyond MIDI, and some people (users and developers alike)
would probably consider the very concept of voice control beyond
NoteOn/NoteOff overdesign. Meanwhile, some people are using one MIDI
channel per voice to work around the limitations of MIDI...

Either way, once you have voice control *at all* - and this is
required to implement NoteOff - with a proper design, I think generic
voice controls come more or less for free.

It would probably be good manners for synths to provide a plugin wide
pitch control to simulate "good old" MIDI Pitch Bend (you could use
it for tuning too, I guess...), so voice control opponents can stay
away from using voice controls entirely. ;-)

//David Olofson - Programmer, Composer, Open Source Advocate

.------- http://olofson.net - Games, SDL examples -------.
| http://zeespace.net - 2.5D rendering engine            |
| http://audiality.org - Music/audio engine              |
| http://eel.olofson.net - Real time scripting           |
'-- http://www.reologica.se - Rheology instrumentation --'