Re: [linux-audio-dev] MidiShare Linux


Subject: Re: [linux-audio-dev] MidiShare Linux
From: Benno Senoner (sbenno_AT_gardena.net)
Date: Wed Jul 19 2000 - 20:02:02 EEST


On Wed, 19 Jul 2000, Paul Barton-Davis wrote:

> >
> >Why just especially me ?
> >Did I generate too much fuzz with the sampler issues ?
> >:-)
>
> You're one of the few people here that are using lock free data structures.

Unfortunately ...
more people should use them, but since they are a bit trickier than plain
mutexes people tend to avoid them (we are all too lazy), and many aren't even
aware that lock-free FIFOs exist. :-)
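
For anyone who hasn't seen one: the basic idea is a single-reader /
single-writer ring of slots where each thread only ever advances its own
index. A minimal sketch in C (names invented for illustration, not the
actual code I am using):

/* Minimal single-producer / single-consumer lock-free FIFO sketch.
 * The writer thread only ever advances write_idx, the reader thread
 * only ever advances read_idx, so no mutex is needed as long as the
 * index loads/stores are atomic and become visible in order (which is
 * exactly the SMP question discussed below). */

#define FIFO_SIZE 1024                        /* must be a power of two */

typedef struct {
    volatile unsigned int read_idx;           /* advanced by the reader */
    volatile unsigned int write_idx;          /* advanced by the writer */
    void *items[FIFO_SIZE];
} lf_fifo_t;

/* writer thread only */
static int fifo_put(lf_fifo_t *f, void *item)
{
    unsigned int w = f->write_idx;
    if (((w + 1) & (FIFO_SIZE - 1)) == f->read_idx)
        return -1;                            /* full */
    f->items[w] = item;                       /* store the data first...   */
    f->write_idx = (w + 1) & (FIFO_SIZE - 1); /* ...then publish the index */
    return 0;
}

/* reader thread only */
static int fifo_get(lf_fifo_t *f, void **item)
{
    unsigned int r = f->read_idx;
    if (r == f->write_idx)
        return -1;                            /* empty */
    *item = f->items[r];
    f->read_idx = (r + 1) & (FIFO_SIZE - 1);
    return 0;
}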

>
> >(yea, no one single mutex only thread shared mem and lock free fifos,
> >works like a charm (but Stephane is still reporting crashes when sending
> >fast subsequent note events (using MIDIshare :-) ) , hopefully we can
> >fix the problem soon because I am unable to reproduce the crashes here.
>
> Note: the lock free FIFO that we use is not SMP safe, unless you've
> started using atomic_t's, and in addition, the design requires a
> bounded FIFO size. MidiShare doesn't use limits like this.

Huh?
Not SMP-safe?
I thought aligned 32-bit accesses are atomic on x86 boxes (both UP and SMP)?

If you are speaking about SPARC SMP boxes then you are right
(I followed the thread on l-k-m-l).
And as for the bounded sizes, do you mean the 24-bit limitation of
atomic_t on SPARC?
If yes, it's not a big problem: 24 bits = 16M samples = ringbuffer sizes
of at most 32MB, more than enough (I am currently using 256KB buffers,
so go figure :-) )

Paul, please clarify the issues above, because I am getting a bit confused
about the SMP-unsafeness of the lock-free structures.

BTW: should I definitely use atomic_t for the read and write pointers in the
ringbuffer structure?
At least then SMP SPARC users will be happy. :-)
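
Concretely, I am thinking of something like the following (same FIFO as the
sketch above, with the indices switched to the kernel's atomic_t, assuming
<asm/atomic.h> can be pulled into user space; whether atomic_read() /
atomic_set() alone also give the needed ordering on SMP is part of what I am
asking):

/* Same FIFO as above, but with atomic_t indices from <asm/atomic.h>.
 * On SPARC32 atomic_t only holds 24 bits, hence the size remark above. */
#include <asm/atomic.h>

#define FIFO_SIZE 1024                        /* power of two, well below 2^24 */

typedef struct {
    atomic_t read_idx;                        /* advanced by the reader */
    atomic_t write_idx;                       /* advanced by the writer */
    void *items[FIFO_SIZE];
} lf_fifo_t;

/* writer thread only */
static int fifo_put(lf_fifo_t *f, void *item)
{
    int w = atomic_read(&f->write_idx);
    if (((w + 1) & (FIFO_SIZE - 1)) == atomic_read(&f->read_idx))
        return -1;                            /* full */
    f->items[w] = item;
    atomic_set(&f->write_idx, (w + 1) & (FIFO_SIZE - 1));
    return 0;
}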

>
> MidiShare currently supports audio events as well as MIDI. Stephane
> demonstrated it sharing audio input (and output) across several
> different applications.

But for ultra-low latency, I believe that sending audio events would be too
heavy because of the relatively high data rate (a Hammerfall at 52 channels
uses QUITE some bandwidth ...)

I am thinking about using a simple shared-buffers model
(within the manual audio scheduler model).
The input audio data is read from the audio device by the scheduler and put
into a shared buffer which can be read by every "plugin".
For the output, every plugin receives an output buffer into which it writes
(or adds) its data. Just like the VST2 synth API, but with separate audio,
MIDI and disk threads.
I think MidiShare could be suitable for managing the MIDI data because it is
relatively low-bandwidth, but for the audio you have to reduce the API
to mere shared buffers, trying to avoid any memory ping-pong and event
handling.
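
To make the "mere shared buffers" idea a bit more concrete, here is a rough
sketch of the kind of per-cycle interface I have in mind (all names are
invented for illustration, this is not an existing API):

/* Per-cycle context handed to each plugin by the audio scheduler.
 * The scheduler owns all buffers; plugins read and write them in place,
 * no events, no copying back and forth. */
typedef struct {
    unsigned long  current_frame;  /* running time in samples            */
    unsigned int   nframes;        /* samples to produce this cycle      */
    unsigned int   nchannels;
    const float  **input;          /* shared input buffers (read-only)   */
    float        **output;         /* this plugin's output buffers       */
} audio_context_t;

/* Each plugin exports a callback like this; the scheduler calls it once
 * per fragment from the real-time audio thread.  MIDI and disk I/O run
 * in their own threads and feed the plugin through lock-free FIFOs. */
typedef void (*plugin_process_t)(void *instance, const audio_context_t *ctx);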

I partly disagree with the timestamped audio fragments concept, because
if you can run in real time and each audio "plugin" gets the timing
information (the current time expressed in samples, like Cubase does),
then it can produce the audio fragment just in time, without precalculating
future data that has to be scheduled later.
So I do not see big advantages in scheduling audio fragments in the future:
it buys us nothing in terms of latency and rendering precision,
but adds a non-negligible overhead.
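
In other words, the scheduler's inner loop would look roughly like this
(again only a sketch built on the hypothetical types above;
read_audio_input() and mix_and_write_output() are made-up host helpers):

/* Hypothetical host helpers, declared only for illustration. */
extern void read_audio_input(audio_context_t *ctx);      /* fills ctx->input */
extern void mix_and_write_output(audio_context_t *ctx);  /* sums, plays out  */

/* One scheduler cycle: each plugin renders exactly the fragment that is
 * about to be played.  Nothing is rendered ahead of time, so there is no
 * queue of future fragments to timestamp and manage. */
static void scheduler_cycle(audio_context_t *ctx,
                            plugin_process_t *plugins,
                            void **instances, int nplugins)
{
    int i;

    read_audio_input(ctx);

    for (i = 0; i < nplugins; i++)
        plugins[i](instances[i], ctx);        /* render just in time */

    mix_and_write_output(ctx);

    ctx->current_frame += ctx->nframes;       /* advance the sample clock */
}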

thoughts ?

Benno.


