[linux-audio-dev] Re: [l Re: Plug-in API progress?

New Message Reply About this list Date view Thread view Subject view Author view Other groups

Subject: [linux-audio-dev] Re: [l Re: Plug-in API progress?
From: David Olofson (audiality_AT_swipnet.se)
Date: Thu Sep 23 1999 - 15:23:53 EDT


On Thu, 23 Sep 1999, Benno Senoner wrote:
[...]
> > With an event based system, you get automation virtually for free. Many
> > plug-ins won't even need to use custom events for their processing <-> GUI
> > communications, which means they can just send their "parameter change" events
> > and let the engine record the communication if desired. Custom events could be
> > split into two groups - "automation enabled" and "private".
>
> Agreed, I like this idea,
> for example the volume-slider (the GUI part) of a gain-control plugin just
> installs an event-sending port and sends the events to the plugin DSP code.
> Plus the DSP code installs an event-sending port and connects to the
> event-receiver port of the volume-slider.
> That means if someone (a sequencer or arbitrary plugin) changes the gain value,
> the fader would automagically move to the correct position.
> Or is this suboptimal?

Well, the event system I'm working on is based on Receive Ports only. That
is, you don't write all your events with destination addresses to a single
output buffer for dispatching by the engine, as in the design I might have
mentioned earlier. (Perhaps I'll move back to that kind of design, but I'd
really like some realistic processing net models with traffic statistics first.)

What you get is an input buffer of events from multiple sources. Each event
has a reference to the port that the sender would like to have any responses
posted to.

That is, there's no way to broadcast, or even send a single event to more than
one port. You have to create one instance of the event, and write the pointer
to it to the buffer of each recipient. Something like that will have to be
done somewhere anyway - I'm just not sure there's a point in demanding that
functionality from engine implementations.

There's one trick that *could* be done at virtually no cost though -
sending the same buffer to multiple recipients. What I'm getting at is this: If
the DSP <-> GUI communication is based on the two sides sending events directly
to each other's event ports, the automation system could tap data from those
communication paths by scanning the GUI -> DSP buffers for input, and sending
its output to both the GUI and the DSP code.

> Do you plan network transparency for high-bandwidth streams too, or only
> for the event system?
> Networked events make sense to me, like sending MIDI events over a network.
> But sending 60 tracks over the net is a bit harder.

Not a hardware problem, but what I have in mind isn't really networking - it's
about building clusters using 100 Mbit or gigabit Ethernet with RTLinux RT
communication drivers. More like connecting the machines into something like a
huge SMP box than building a network...

Anyway, I still think there are uses for higher latency and lower bandwidth
streaming over networks, perhaps with Rether or a similar protocol. (Rether is
like "RTLinux for ethernet" - it provides a real time protocol with guaranteed
maximum latency and bandwidth allocation, and runs TCP/IP on the unallocated
time slots.) "Office multimedia streaming" isn't exactly my biggest interest,
but I don't think we should rule out the possibility of supporting that kind
of thing.

> We should take an approach similar to X:
> if we are forced to use the network, use it;
> if there are better alternatives like shmem, use the latter.

Yes, optimize for the situation where the speed is really needed, and just keep
in mind that it should be possible to extend it to support other environments.
Trying to support *everything* right in the basic protocols is bound to fail.
"Sub nets & gateways" philosophy rules here, I think...

[...GUI...]
> Agreed, but keep a door open for multiple toolkits (see my proposed multiple
> GUI handlers), since I wouldn't code in GTK after falling in love with Qt.
> :-)

If you just create a window and hand over the X connection (as Paul
Barton-Davis suggested), you shouldn't really need much toolkit support from
the GUI engine, I think... It's much like running Qt, GTK+, Tk, <whatever>
applications under a window manager.

> If you need a writer for the Qt-GUI handler .. I'm here ...
> :-)

<legal stuff>
Speaking of Qt: I ran into an interesting licensing problem a while ago.

Let's say we define a GUI markup language for the plug-in API, and developers
start developing proprietary, closed source plug-ins for it. And then someone
hacks a host that uses Qt to render GUIs for plug-ins using our API... Who pays
the developer's license for the closed source plug-ins?

Reading the QPL, I get the impression that developing closed source code that
uses QPL'd code in *any* way requires the $$$ license. (The IANAL disclaimer
applies.)
But the plug-in developers didn't intend their plug-ins to run under a Qt host,
or perhaps didn't even know that something like that could be written. Does
that mean that the end user - by loading a closed source plug-in from a
developer without a Qt developer's license on the Qt host - is guilty of
breaking the QPL?
</legal stuff>

//David

 ·A·U·D·I·A·L·I·T·Y·   P r o f e s s i o n a l   L i n u x   A u d i o
- - ------------------------------------------------------------- - -
    ·Rock Solid                                   David Olofson:
    ·Low Latency   www.angelfire.com/or/audiality   ·Audio Hacker
    ·Plug-Ins      audiality_AT_swipnet.se          ·Linux Advocate
    ·Open Source                                    ·Singer/Composer



This archive was generated by hypermail 2b28 : Fri Mar 10 2000 - 07:27:12 EST