Re: [linux-audio-dev] MidiShare Linux


Subject: Re: [linux-audio-dev] MidiShare Linux
From: David Olofson (david_AT_gardena.net)
Date: Thu Jul 20 2000 - 01:51:33 EEST


On Wed, 19 Jul 2000, Benno Senoner wrote:
[...]
> I am thinking about using a simple shared buffers model
> (within the manual audio scheduler model).
> The input audio data is read from the audio device by the scheduler and put in
> a shared buffer which can be read by every "plugin".
> For the output, every plugin receives an output buffer where to put the
> data (or add it up). Just like the VST2 synth API. But with separate audio,
> MIDI and disk threads.
> I think MidiShare could be suitable to manage the MIDI data because it is
> relatively low-bandwidth, but speaking of audio you have to reduce the API
> to mere shared buffers, trying to avoid any memory ping pong and event
> handling.

Some MuCoS ideas suggesting that there is more to event systems than
the "one event per data buffer" concept:

/*----------------------------------------------------------
  Buffer Operations
----------------------------------------------------------*/
#define MCS_OP_TRANSFER 6
/* Transfer ownership of an array of data to another Plugin.
 event.timestamp = <when to deal with this>
      .target = <recipient channel & subchannel>
      .data.array = & + sizeof<data to change owner>
*/

/*
NOTE: The access semantics for the data referenced by the
      following two operations are not defined on this level.
      How a Plugin finds out when another Plugin is done with
      a shared or "borrowed" buffer is defined by the protocol
      specified for the Channel and Subchannel. The Host
      negotiates protocols and sets up two-way/feedback
      connections as required.
*/
#define MCS_OP_SHARE 7
/* Tell a Plugin about some memory we want to share. A Host
   would typically send this to chains of Plugins in order
   to reuse the same audio buffers, avoiding copying. (Which
   requires that the Plugins in question add to the buffers
   rather than overwriting them, of course.)
 event.timestamp = <when to take this into account>
      .target = <channel & subchannel>
      .data.array = & + sizeof<data to share>
*/
#define MCS_OP_NOTIFY 8
/* Notify a Plugin that the specified data has changed. Can
   be used for handshaking between Plugins that share
   buffers via MCS_OP_SHARE.

==== Interesting part ========================================
   NOTE: Some shared buffer protocols don't use this
         operation, but simply assume that the data changes
         every cycle. (That is, the MCS_OP_CYCLE event works
         like a global trigger, replacing lots of
         simultaneous MCS_OP_NOTIFY events.) This is how
         normal, fixed rate audio streams are handled.
==== /Interesting part =======================================

 event.timestamp = <n/a - who cares?>
      .target = <channel & subchannel>
*/

Basically, when streaming audio, a full buffer event is only
transmitted when the buffer address changes, i.e. normally only when
starting up the engine. Thanks to the cycle based model, we can
assume that all channels set up according to the normal protocol
will have new data in their buffers every time a plugin is executed,
rather than starting every cycle by telling the plugin about every
single buffer that has been updated.
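
To make that concrete, here's a rough host-side sketch in C. The
event struct layout, the mcs_send() helper and the MCS_OP_CYCLE value
are placeholders made up for illustration; only MCS_OP_SHARE comes
from the definitions above:

/*----------------------------------------------------------
  Sketch: share buffers once, then just trigger cycles
----------------------------------------------------------*/
#include <stddef.h>

#define MCS_OP_CYCLE 1  /* assumed value - global "new cycle" trigger */
#define MCS_OP_SHARE 7

typedef struct mcs_event {
        int      op;        /* MCS_OP_* */
        unsigned timestamp; /* when to act on this */
        unsigned target;    /* recipient channel & subchannel */
        struct {
                void   *ptr;  /* & of the data */
                size_t  size; /* sizeof the data */
        } data;               /* i.e. .data.array = & + sizeof<...> */
} mcs_event;

/* Stub, so the sketch stands alone; a real host would queue the
   event on the plugin's event port. */
static void mcs_send(unsigned plugin_id, const mcs_event *ev)
{
        (void)plugin_id;
        (void)ev;
}

/* Done once, at engine startup: after this, no per-buffer events
   are needed as long as the buffer address stays the same. */
static void host_share_buffer(unsigned *chain, int nplugins,
                              float *buffer, unsigned frames)
{
        int i;
        for (i = 0; i < nplugins; ++i) {
                mcs_event ev;
                ev.op        = MCS_OP_SHARE;
                ev.timestamp = 0;        /* take effect now */
                ev.target    = chain[i];
                ev.data.ptr  = buffer;
                ev.data.size = frames * sizeof(float);
                mcs_send(chain[i], &ev);
        }
}

/* Done every cycle: one trigger per plugin replaces a flood of
   MCS_OP_NOTIFY events - fresh data in the buffers is implied. */
static void host_run_cycle(unsigned *chain, int nplugins, unsigned now)
{
        int i;
        for (i = 0; i < nplugins; ++i) {
                mcs_event ev;
                ev.op        = MCS_OP_CYCLE;
                ev.timestamp = now;
                ev.target    = chain[i];
                ev.data.ptr  = 0;        /* no payload for the trigger */
                ev.data.size = 0;
                mcs_send(chain[i], &ev);
        }
}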

> I partly disagree about the timestamped audio fragments concept, because
> if you can run realtime and if each audio "plugin" gets the timing information
> (the current time expressed in samples, like Cubase does), then it can produce
> the audio fragment just in time, without precalculating future data which has
> to be scheduled later.

Actually, you *can't* do anything that requires future data, as it
simply hasn't been recorded yet with the kind of latencies we're
dealing with. And as to generating data ahead of the real output,
yes, this is a nice way to do buffering without an extra buffering
layer, but it simply doesn't make sense in a cycle based callback
plugin engine, where everyone is hardsynced to the buffer size
anyway.
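
For reference, the kind of cycle based callback engine I mean boils
down to something like this (all names are made up, not the MuCoS
API):

/*----------------------------------------------------------
  Sketch: cycle based callback engine
----------------------------------------------------------*/
typedef struct plugin plugin;

struct plugin {
        /* Called once per engine cycle; 'in' and 'out' are the
           shared buffers set up earlier, 'frames' is the engine
           buffer size - the same for everyone. */
        void (*process)(plugin *self, const float *in, float *out,
                        unsigned frames);
        void *state;
};

/* The engine thread just walks the chain, hardsynced to the buffer
   size. There is nothing to schedule ahead of time, and a plugin
   that misses its deadline stalls the whole chain. */
static void run_chain(plugin **chain, int nplugins,
                      const float *in, float *out, unsigned frames)
{
        int i;
        for (i = 0; i < nplugins; ++i)
                chain[i]->process(chain[i], in, out, frames);
}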

Finally, if you put timestamps on every audio packet, you have a
system that can deal with drop-outs a bit better. That's also
irrelevant inside a cycle based engine, as the whole thread drops
out if a plugin takes too long to execute.

It does make sense for soft RT streaming between threads, though.
However, if hard RT is required, drop-outs *cannot* be fixed once
they have occurred, so shuffling timestamps around for that purpose
is just a waste of CPU power - if you get a drop-out somewhere in
the chain, you're screwed anyway.

> So I do not see big advantages of scheduling audio fragments in the future,
> because it buys us nothing in terms of latency and rendering precision,
> but adds a non-negligible overhead.

Useful (essential, actually!) when streaming between threads using
different scheduling rates and buffer sizes, but it should probably
be a part of *that* variant of the API, and not supported by the
plugin API used inside the RT engine threads.
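
A rough sketch of how that cross-thread variant could work -
timestamped fragments that the receiving engine lines up against its
own cycle, so the two sides never have to agree on a buffer size.
The fragment size and all names here are arbitrary, just for
illustration:

/*----------------------------------------------------------
  Sketch: timestamped fragments between threads
----------------------------------------------------------*/
#define FRAG_FRAMES 64

typedef struct audio_frag {
        unsigned timestamp;          /* first frame, in sample time */
        float    data[FRAG_FRAMES];
} audio_frag;

/* Mix one fragment into a fixed size cycle buffer that starts at
   'cycle_start' and is 'cycle_frames' long; anything outside the
   cycle is clipped. The receiving engine thread would call this
   for each fragment pulled off the (lock-free) queue. */
static void mix_frag(float *out, unsigned cycle_start,
                     unsigned cycle_frames, const audio_frag *f)
{
        long offset = (long)f->timestamp - (long)cycle_start;
        unsigned src = 0, n = FRAG_FRAMES, i;

        if (offset < 0) {            /* started before this cycle */
                src = (unsigned)(-offset);
                if (src >= FRAG_FRAMES)
                        return;      /* entirely in the past */
                n -= src;
                offset = 0;
        }
        if ((unsigned)offset >= cycle_frames)
                return;              /* starts after this cycle */
        if ((unsigned)offset + n > cycle_frames)
                n = cycle_frames - (unsigned)offset;

        for (i = 0; i < n; ++i)
                out[offset + i] += f->data[src + i];
}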

//David

.- M u C o S --------------------------------. .- David Olofson ------.
| A Free/Open Multimedia | | Audio Hacker |
| Plugin and Integration Standard | | Linux Advocate |
`------------> http://www.linuxdj.com/mucos -' | Open Source Advocate |
.- A u d i a l i t y ------------------------. | Singer |
| Rock Solid Low Latency Signal Processing | | Songwriter |
`---> http://www.angelfire.com/or/audiality -' `-> david_AT_linuxdj.com -'


