Re: [linux-audio-dev] Re: LAAGA - main components (fwd)


Subject: Re: [linux-audio-dev] Re: LAAGA - main components (fwd)
From: Kai Vehmanen (kaiv_AT_wakkanet.fi)
Date: Mon Apr 30 2001 - 12:19:32 EEST


On Mon, 30 Apr 2001, Stefan Westerfeld wrote:

> I think indeed the whole LAAGA discussion sometimes looks like "let's
> recreate artsd". I mean, we're talking about a few things here.

Yup, that's why I specifically wanted your input on this discussion. ;)

> 2. You need audio routing between applications
> Agreed. The aRts way to do this is that the to-be-routed audio I/O of the
> applications takes place inside the sound server. There, you have a flow
> system which means you can route any application to any other (due to the

At least this is clearly outside LAAGA's scope. Whatever routing is done
is up to the server implementation.

> 3. You need to support out-of-process clients, in-process-clients, networked
> clients and so on.
[...]
> aRts supports all possible combinations, where
[...]
> It's also possible to activate most components everywhere. So for instance,
> if you write a wave editor, you can use the same effect that is normally
> running inside the server to do a reverb in your client.

This is a nice feature indeed. But the transparency also comes at a
price. I've spent some time browsing through the aRts codebase, and it's
mostly very clean and easy to understand, especially if you only have to
deal with the MCOP interfaces. But when you want to take a look under
the hood, things get much more complex. The code itself is quite
straightforward, but it's really hard to keep track of what is executing
what, and where we are going next. For instance, it took me a while before
I found the StdIoManager class in arts/mcop/iomanager.cc ... which has the
inner loops for the MCOP system (or so I hope :)).

So I guess from a marketing point of view, MCOP-free explanations of how
things work in aRts will probably have a better hit rate here on LAD. ;D

It's not that we can't understand complicated code; it's that comparing
different designs, and seeing whether they overlap, is much easier if we
are speaking the same language. In the end we are always just calling
functions, spawning threads, using sockets/files or using SysV
IPC. That's about the whole set. (Note: and nope, let's not get down
to the "in the end it's just ones and zeros" level :))

> So I think LAAGA and aRts share most if not all basic assumptions, where
> LAAGA appears to be a subset of aRts. So let me try to view the differences:
[...]
> A. C has been brought up as preferred language for LAAGA

Yup, although most of us use C++, I think C should still be
preferred. It's possible to make a C wrapper for a C++ API, but going
the other way, from C to C++, is pretty much always easier; see the
sketch below.
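To make that concrete, here's a minimal sketch of the usual extern "C"
wrapping pattern (the Synth class and the synth_* names are made up for
illustration, this is not actual aRts code):

  /* synth.h -- the C-visible header: an opaque handle plus functions */
  #ifdef __cplusplus
  extern "C" {
  #endif
  typedef struct synth synth;
  synth *synth_create(void);
  void   synth_process(synth *s, float *buf, unsigned long frames);
  void   synth_destroy(synth *s);
  #ifdef __cplusplus
  }
  #endif

  /* synth_wrap.cc -- the C++ side, compiled into the library */
  #include "synth.h"
  class Synth {                 /* stand-in for the real C++ class */
  public:
      void process(float *buf, unsigned long n) { /* ... */ }
  };
  extern "C" synth *synth_create(void) { return (synth *)new Synth; }
  extern "C" void synth_process(synth *s, float *buf, unsigned long n) {
      ((Synth *)s)->process(buf, n);
  }
  extern "C" void synth_destroy(synth *s) { delete (Synth *)s; }

Every class needs this kind of boilerplate, whereas plain C headers can
be included from C++ as they are.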

> B. LAAGA does focus a lot on being /really realtime/
> This includes things like: you may not malloc, you may not open files, you
> may not use sockets, you may not hold locks anywhere in non-rt threads, you

True, at least for the server side. Still, it's probably wise to allow
not-so-realtime-capable clients, even if that means possible performance
problems. But for properly implemented client code, the above holds.
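As a sketch of what "properly implemented" means in practice: the RT
callback only touches pre-allocated memory, and talks to the non-RT
parts through a single-reader/single-writer ring buffer. (Illustration
only; a real implementation also needs proper memory barriers, which
plain volatile does not give you.)

  #define RING_SIZE 8192              /* power of two, allocated up front */
  static float ring[RING_SIZE];
  static volatile unsigned read_pos, write_pos;

  /* Called from the RT thread: no malloc, no files, no sockets, no locks. */
  void process(float *out, unsigned long frames)
  {
      unsigned long i;
      for (i = 0; i < frames; i++) {
          if (read_pos != write_pos) {
              out[i] = ring[read_pos];
              read_pos = (read_pos + 1) & (RING_SIZE - 1);
          } else {
              out[i] = 0.0f;          /* underrun: emit silence, never block */
          }
      }
  }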

> C. LAAGA maybe tries to solve the minimal subset
> I.e. it would be a nice idea if LAAGA would just be a header file like LADSPA,
> and would just cover in-process co-activation of audio applications, not more
> and not less. But well, I am not sure where the debate is going here.

I think this is becoming more and more important as a target for the
first phase of LAAGA.

> * CSL: should be the API to go for "just playing a stream". If every
> application were ported to it, then using aRts for doing sound output,
> or using OSS for sound output, or using another server architecture for
> sound output should be just a matter of having a CSL driver for that.
[...]
> I think establishing another "play a stream" API in LAAGA (as in A2) would
> be a really really BAD idea. We have a lot of them already, really ;).

This is true. But some kind of API will be needed as the direct
callback API for LAAGA clients. It will be very different from CSL,
though, as it has only a couple of functions and nothing else; see the
sketch below.
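To give an idea of the size, something like this would be enough (the
laaga_* names are purely hypothetical, no such header exists yet):

  /* laaga.h -- hypothetical client-side callback API */
  typedef int (*laaga_process_t)(float **in, float **out,
                                 unsigned long frames, void *arg);

  void *laaga_attach(const char *client_name);  /* connect to the server  */
  int   laaga_set_process(void *client,
                          laaga_process_t cb, void *arg);
  int   laaga_activate(void *client);           /* start the RT callbacks */
  int   laaga_detach(void *client);

Everything else - stream playback, routing and so on - would live
elsewhere (CSL, the server), not in this header.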

Btw, did you consider using ALSA's PCM API for CSL...? CSL is a clean API,
but it lacks some of the features offered by ALSA's PCM interface. As the
ALSA interface is based on a shared library (libasound), it would have been
easy to write an alternative implementation (ie. an aRts/OSS backend). Oh
well, it's also true that ALSA is not yet stable...
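For reference, a stream-playback call sequence against libasound looks
roughly like this (error handling omitted; since the PCM API is still
moving, treat the exact calls as a snapshot rather than gospel):

  #include <alsa/asoundlib.h>

  int play(const short *buf, unsigned long frames)
  {
      snd_pcm_t *pcm;
      if (snd_pcm_open(&pcm, "default", SND_PCM_STREAM_PLAYBACK, 0) < 0)
          return -1;
      /* one convenience call: format, access, channels, rate,
         software resampling on/off, target latency in microseconds */
      snd_pcm_set_params(pcm, SND_PCM_FORMAT_S16_LE,
                         SND_PCM_ACCESS_RW_INTERLEAVED,
                         2, 44100, 1, 500000);
      snd_pcm_writei(pcm, buf, frames);   /* blocks until written */
      snd_pcm_drain(pcm);
      snd_pcm_close(pcm);
      return 0;
  }

The hw/sw params machinery hiding underneath this is exactly the kind of
feature set that CSL currently lacks.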

> * C bindings (as in A1, A3): well, basically if you want a client side
> communication API and a server side component implementation API for aRts
> that bases on C, the task is doing another language binding for aRts. That
> kind of thing should be feasible, and not /too/ hard to do, and might be
> worth doing anyway (to make aRts more GNOME programmer friendly).

Not a piece of cake either. ;) But yes, definitely worth doing.

[low-low-low-low latency]
> So basically, one way to go for LAAGA would be setting up the relation between
> LAAGA and aRts as the relation between rtlinux and linux: aRts would be one
> lower priority (but still realtime) thread/process in the LAAGA framework,
> whereas LAAGA would be written with provable deterministic response.

This is reasonable. It should actually be quite easy to swap the roles.
We might even end up running Ardour, aRts and ecasound as LAAGA clients
to the XMMS LAAGA server. :)

And of course it's clear that the LAAGA design is aimed at very specific
use cases. One classical example used to promote ReWire is connecting
ReBirth to Cubase. It's clear that when you have multiple files streamed
from disk, and ReBirth running at the same time, you need performance. On
the Linux side I used Ardour and TerminatorX as examples. On the other
hand, I think aRts is serving rather well as the KDE2.x soundserver. And
rewriting things that work, only to gain complexity, is never a wise
move...

> Even if you want to go that way, it would IMHO be important to design these
> issues with keeping an eye on both, LAAGA and aRts, to make sure that if you
> are done, things will interoperate. I.e. it would be stupid if you could
> route audio inside aRts between noatun and an aRts instrument, but not between
> ardour and the aRts instrument, because internal aRts streams couldn't be
> exported sanely to LAAGA.

Yes, and also adding ALSA to the list. In the optimal case no LAAGA is
needed at all, but we'll just have to see what we end up with...

> As a small example: you talk about SHM a lot. MCOP currently doesn't use SHM
> for transporting audio streams, but adding this should be quite easy, so I
> think a patch to aRts is much better than recreating a transport layer again.

Yes, this is probably something worth trying. But as you said earlier,
this shouldn't be done without thorough tests. It would be interesting to
hear people's opinions on using UNIX sockets to move large amounts of
data...?
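For reference, setting up the shared segment itself is the easy part
(POSIX flavour shown below; SysV shmget() would do equally well) - the
hard part is the synchronisation around it, which is where the thorough
testing is needed:

  #include <sys/mman.h>
  #include <sys/stat.h>
  #include <fcntl.h>
  #include <unistd.h>

  /* Server creates a shared audio buffer; clients shm_open() and
     mmap() the same name. Who reads/writes when is the real problem. */
  float *create_shm_buffer(const char *name, size_t frames)
  {
      int fd = shm_open(name, O_CREAT | O_RDWR, 0600);
      size_t bytes = frames * sizeof(float);
      if (fd < 0)
          return 0;
      ftruncate(fd, bytes);
      void *p = mmap(0, bytes, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
      close(fd);                      /* the mapping stays valid */
      return p == MAP_FAILED ? 0 : (float *)p;
  }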

> It is really really really important not to split up the development of linux
> audio architecture in lots of different subsets. aRts provides a good audio
> server, and much more already. It is flexible, widely used and widely tested,
> portable and so on. I hope I have shown the ways how and where things can be
> combined together to satisfy everybody.

Couldn't agree more. This is exactly what I want to do - uhm, not to split
things up, I mean. ;) The three driving forces now seem to be the three
big A's - ALSA, Ardour and aRts. I have the benefit of not taking part in
any of these projects (nor in KDE or GNOME), so I'm at least a somewhat
neutral party... :) But yes, it would be great to create more
co-operation. Even if we end up as fierce competitors, it's IMHO best to
have regular death matches here on LAD. :D

-- 
 http://www.eca.cx
 Audio software for Linux!

