Re: [linux-audio-dev] discussion about development overlap


Subject: Re: [linux-audio-dev] discussion about development overlap
From: Benno Senoner (sbenno_AT_gardena.net)
Date: Thu Sep 28 2000 - 03:51:08 EEST


On Thu, 28 Sep 2000, Paul Barton-Davis wrote:
>
> >Let's take another example: I've recorded a few tracks with ardour, and
> >now I'd like record a synth track using EVO. How to do this? One solution
> >would be to use MIDI-MTC to syncronise the apps, and add disk-output
> >routines to EVO. I haven't tested this, but doesn't sound optimal to me.
> >With a sound server, you'd just need soundserver-output routines in EVO,
> >and that's all. You start the playback in ardour, and you can use EVO as
> >usual!
>
> right, this is the "audio ALSA sequencer" thing, AKA "audio routing in
> MidiShare". Please note that MidiShare can do all this stuff, at least
> the development version, though I don't know what the latency issues
> are.

I hope that the MidiShare code is flexible enough to be turned into a pure
userspace MidiShare server for my proposed rtsoundserver model
(which covers both audio and MIDI).
Audio routing would be useful too, so that several "clients" (aka plugins)
could get/send audio from/to various sources, which can be real
inputs/outputs or other clients.

As for disk output in EVO, this is very easy to do (basically all the streaming
code is already there, just write() instead of read()),
but I don't like this "offline" method.
I'd like to play back some tracks using Ardour while I play a synth track
on EVO, which is output to the soundcard and gets recorded into an Ardour
track at the same time.
I like What You Hear Is What You Get.

And that's what rtsoundserver aims at.
But as said, it requires cooperation from the app, which has to be written as
a rtsoundserver plugin.
I hope that someday "pro" apps like Ardour can be converted to this
model.
(The audio thread has to be basically completely non-blocking and can't wait on
mutexes etc. I do not know how hard this would be to do in Ardour:
basically you have to transform mutexes into lock-free message passing,
which is sometimes more complicated than simply locking a variable.)

I think Ardour and EVO are two good candidates to play a significant role
in helping to define the rtsoundserver model.

A general recommendation to audio application developers who want to
achieve very low latencies in the future, and want their apps to be easily
converted to the rtsoundserver model: design your apps with
a non-blocking audio thread in mind even if you do not need these low latencies
right now.
(E.g. with 50-500 msec audio buffers you can use mutexes which wait on the
GUI or disk threads, but when you move into the single-digit msec domain,
this begins to become an issue.)

cheers,
Benno.



This archive was generated by hypermail 2b28 : Thu Sep 28 2000 - 02:37:09 EEST