[linux-audio-dev] Jack's IPC overhead

From: Michael Ost <most@email-addr-hidden>
Date: Thu Jun 15 2006 - 20:49:59 EEST

We are considering re-architecting our VST host in Receptor to run each
plugin in a separate process and connect the processes with Jack. How
much extra overhead can we expect?

Receptor is basically a PC running our VST host, and that's the only
audio app running on it. Currently all the VSTs run in our host app's
process, so no context switch or system call is needed to get them to
process audio.

This helps keep processing overhead low. But the downsides are that (1)
all plugins share the same 2GB VM space and (2) we can't make use of a
machine with > 2GB RAM. (And (3) one crashing VST can take down our
whole mixer, but that's another story...)

As it turns out, customers are interested in using Receptor to run sample
players, which are hungry for both VM and RAM. Giving each VST its own
process would give it its own VM space with lots of elbow room, and
would let Linux give all of those plugin apps access to the additional
RAM even though they are still 32-bit apps.

From looking over the Jack docs, Jack seems like a natural fit for
supporting this kind of architecture. We would break our VST support out
into a separate app and connect those plugin processes to our host app
via Jack. This seems to be how FST is implemented and how Jack is
intended to be used.
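
To make concrete what we have in mind, here is a rough sketch of what
each broken-out plugin wrapper process might look like, using the
standard Jack C client API from <jack/jack.h>. The client name
"vst_wrapper" and run_vst() are just placeholders, not code from our
host:

    /* Rough sketch of a per-plugin wrapper process using the Jack C API.
     * run_vst() is a hypothetical placeholder for the real VST call. */
    #include <jack/jack.h>
    #include <stdio.h>
    #include <unistd.h>

    static jack_port_t *in_port, *out_port;

    /* Called by jackd once per period, in the realtime thread. */
    static int plugin_process(jack_nframes_t nframes, void *arg)
    {
        float *in  = jack_port_get_buffer(in_port,  nframes);
        float *out = jack_port_get_buffer(out_port, nframes);
        /* run_vst(in, out, nframes); */
        for (jack_nframes_t i = 0; i < nframes; i++)
            out[i] = in[i];              /* pass-through placeholder */
        return 0;
    }

    int main(void)
    {
        jack_client_t *client =
            jack_client_open("vst_wrapper", JackNullOption, NULL);
        if (!client) {
            fprintf(stderr, "could not connect to jackd\n");
            return 1;
        }
        in_port  = jack_port_register(client, "in",  JACK_DEFAULT_AUDIO_TYPE,
                                      JackPortIsInput,  0);
        out_port = jack_port_register(client, "out", JACK_DEFAULT_AUDIO_TYPE,
                                      JackPortIsOutput, 0);
        jack_set_process_callback(client, plugin_process, NULL);
        jack_activate(client);
        /* The host would then connect its ports to vst_wrapper:in/out. */
        for (;;)
            sleep(1);
        return 0;
    }

Each plugin would then show up as its own Jack client, and our host app
would wire the 57 of them into its graph.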

So does anyone have a sense of how much overhead is introduced by the
per-process() IPC that Jack uses? Our worst case would be 57 VST plugins
running with a 32x2 sample buffer (0.725 ms per period). How much extra
overhead would those 57 Jack calls to process() add to the overall
processing time? Any other gotchas?
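
For a feel of the budget we're working with (assuming our 44.1 kHz
sample rate and, for simplicity, all 57 plugins running in series within
one period), the arithmetic looks roughly like this:

    /* Back-of-the-envelope period budget; 44.1 kHz and an even split
     * across a serial chain of 57 clients are assumptions. */
    #include <stdio.h>

    int main(void)
    {
        const double sample_rate = 44100.0;  /* assumed */
        const int    frames      = 32;       /* worst-case period size */
        const int    plugins     = 57;

        double period_ms     = 1000.0 * frames / sample_rate;  /* ~0.725 ms */
        double per_plugin_us = 1000.0 * period_ms / plugins;   /* ~12.7 us  */

        printf("period: %.3f ms, per-plugin slice: %.1f us\n",
               period_ms, per_plugin_us);
        return 0;
    }

Even a couple of microseconds of per-client wakeup cost, times 57, would
be a noticeable fraction of that 0.725 ms, which is why we're asking.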

Thanks for any help... mo