[linux-audio-dev] [MuCoS] More thoughts on events and routing

New Message Reply About this list Date view Thread view Subject view Author view Other groups

Subject: [linux-audio-dev] [MuCoS] More thoughts on events and routing
From: David Olofson (david_AT_gardena.net)
Date: Thu Feb 17 2000 - 20:53:46 EST


Hi.

Below the .sig are thoughts I've written down during the last 2
or 3 days.

(There's some more, but I think these are the interesting sections -
and the ones that might affect the prototype code I'm hacking a
great deal.)

Comments? Better ideas?

Can anyone figure out what this actually means, or should I try to
get some overview doc together ASAP? (That's what that "Overview"
section on the site was meant for, anyway.)

I think I'll write a little doc (with figures) on the event routing,
ports and channel mapping - if nothing else to get a better idea of
what it actually looks like... :-)

Bed time.

(I really have to sleep more than 3 hrs/day sometimes! ;-)

//David

PS. Note that this "Array Events" thing is actually just a performance
hack - it doesn't really add any functionality. Besides, it means
either extra overhead for single events, or that plugins have to
accept single and array events as different kinds of events. Maybe the
performance can be greatly improved in many situations with this -
maybe it's just useless. I have to hack some code and think more about
the implementation details to tell...

.- M u C o S --------------------------------. .- David Olofson ------.
| A Free/Open Multimedia                     | | Audio Hacker         |
| Plugin and Integration Standard            | | Linux Advocate       |
`------------> http://www.linuxdj.com/mucos -' | Open Source Advocate |
.- A u d i a l i t y ------------------------. | Singer               |
| Rock Solid Low Latency Signal Processing   | | Songwriter           |
`---> http://www.angelfire.com/or/audiality -' `-> david_AT_linuxdj.com -'

--8<-------------------------------------------------------------------

Can all events be of a fixed size?
----------------------------------
Well, yes, but that reduces the chances of sticking with the same,
binary compatible transport layer for very long. OTOH, the heap
allocator cannot handle large allocations (restricted by the heap
buffer size), so it can't serve as a generic memory management system
anyway. A higher level interface for Huge Data Blocks is already
planned, and partially designed. (That's what carries audio data and
similar things.) It'll probably also be used for off-line/soft real
time transfers within the MuCoS real time transport layer, such as
uploading samples to sample players.

Then again, thinking of the future of multimedia, with 3D positional
audio and the like, is a fixed event size really appropriate? 32 bytes
seems reasonable, but as the event header takes 16 bytes, there isn't
room for more than four 32-bit floats or two 64-bit floats. Is this really
enough? Should more complex events be sent as multiple, simpler
events? (For example, 3 events for the X, Y and Z components of a 3D
position or velocity event.) Could a "Linked Event" extension be used
instead of dynamic event allocation?

This is where the details of the event protocol come in. Basically,
the idea is to allow as much freedom as possible when connecting
plugins and clients, without the need for active event translation in
the engine. While bundling lots of related parameters together in a
single event can reduce event handling overhead, it complicates the
event specification. It also forces the engine to do merging and
splitting of events, as there will be a bigger set of event types passed
between plugins/clients. For example, connecting three 1-dimensional
outputs from different plugins to the 3D (X, Y, Z) inputs on a
channel of another plugin requires intermediate processing in the
engine, or by an adapter plugin, unless the X, Y and Z components
have separate events. That is, you get a problem somewhere either way.

Array Events
------------
Possibly, this could be addressed by supporting array events in the
API spec, rather than specifying lots of application specific,
multidimensional event types. That is, defining an event as:

struct event_t {
        struct event_t *next; /* next event */
        int time; /* timestamp */
        short kind; /* event kind */
        short channel; /* target channel */
        short subchannel; /* target subchannel */
        short entries; /* # of write entries */
};

followed by a table of property write entries in the form:

struct propwrite_t {
        short property; /* what to write */
        union {
                short di16;
                long di32;
                long long di64;
                float df32;
                double df64;
                void *dp;
        } data;
};

Note: The size of a propwrite entry isn't constant! It'll most
likely be aligned for best performance - the constants and/or macros
in the MuCoS API headers should be used to get this right.

Where to put channel numbers?
-----------------------------
Plugins have output contexts with fixed indexes from 0 and up. Each
output context can be set to transmit on a channel or subchannel of
an event output port. Putting the target channel/subchannel info
inside the event struct means that the event probably has to be
copied when there are multiple recipients that want to receive it on
different input channels.

Then again, a way around this would be to remove the 1:1 event
target channel:input channel relation. That can be done using virtual
input channels, pretty much like for the output mapping. (A table
with a port + channel entry for each actual output channel.) The host
would provide a target channel to plugin input channel translation
table for each plugin.

To reiterate, plugins have input channels and output channels. A
plugin has a table with an entry containing a port address and a
virtual channel number for each output channel. The host sets up this
table appropriately, so that the plugin can use it to route each
output channel to the right port, filling in the right virtual
channel number. We now have events with channel fields that can be
selected as the host pleases.

To map these to real plugin input channels, another table is used.
This is just a simple table of channel numbers, where the plugin
looks up what actual input channel should receive events on each
virtual channel.

This might seem like a lot of overhead, but keep in mind that it
actually eliminates all "active" runtime routing from the engine! The
transmit port/channel table cannot be avoided, unless every single
channel gets its own port - which is insane for MIDI sequencer style
plugins/clients (lots of outputs) and the like, and also means that
the engine would always have to sort events by timestamp, even when
they all come from the same source... (Events from a port are always
in timestamp order.)

And the alternative to the virtual-to-input channel translation
table is either copying and modifying events, or passing the target
channel numbers outside the event structs, and then copying/modifying
that data.

Note: The subchannel field is meant to be used for lower level
addressing. It's meant for situations where the engine isn't expected
to do any mapping - subchannels are always mapped 1:1 when one plugin
is connected to another. Simply put, the subchannel field is just a
part of the event data, and the engine will only pass it on. (Of
course, some engines may support subchannel remapping, or it can be
done with intermediate plugins - but if that's needed, it's probably
because the plugin shouldn't have used subchannels in the first
place!)

----------------------------------------------------------------->8----



This archive was generated by hypermail 2b28 : Fri Mar 10 2000 - 07:23:27 EST