[linux-audio-dev] Re: LAAGA - main components (fwd)


Subject: [linux-audio-dev] Re: LAAGA - main components (fwd)
From: Stefan Westerfeld (stefan_AT_space.twc.de)
Date: Mon Apr 30 2001 - 02:59:48 EEST


   Hi!

On Mon, Apr 30, 2001 at 12:05:50AM +0300, Kai Vehmanen wrote:
> ---------- Forwarded message ----------
> Date: Mon, 30 Apr 2001 00:03:15 +0300 (EEST)
> From: Kai Vehmanen <kaiv_AT_wakkanet.fi>
> To: linux-audio-dev_AT_ginette.musique.umontreal.ca
> Subject: Re: [linux-audio-dev] LAAGA - main components
>
> On Fri, 27 Apr 2001, Paul Davis wrote:
>
> >>--cut--
> >>process():
> >> read_from_file() // can block
> >> apply_eq()
> >> server->write_to_laaga_channel()
> >>--cut--
> > i prefer this model over your second one, but i understand the problem
> > with read_from_file(). one possibility might be to identify the things
> > that might reasonably be expected to be done in process(), and provide
> > non-blocking methods of doing all of them. however, given that
> > read_from_file(), which may look simple, really is not simple at all
> > for some applications (e.g. ardour), i'm not sure that this is a
> > promising approach.
>
> Hmm, it might be better to first concentrate on the well-behaving LAAGA
> clients, ie. those that already have solved the read_from_file() issue. So
> LAAGA clients would provide:
> - server callbacks
> - one function for spawning the GUI-process
>
> So here, it's the client's responsibility to implement the IPC, and take care
> of blocking system calls in process(). If (or when :)) we get this
> working, then the next problem is converting existing audio apps (... and
> how to make connections between running apps...).
>
> In this second phase we probably need a stub LAAGA client plugin, and a
> set of client-side library functions for communicating with the server
> process. This means that there will be at least three distinct APIs:
> 1) LAAGA server-side API
> 2) LAAGA direct client API
> 3) LAAGA stub client API
>
> With (3), it should be quite easy to provide an ALSA/OSS style API for the
> client apps.
>
> >>Now I think the two big questions are:
> >>- who performs the process-to-process communication
> > let me add:
> > - *is* there any process-to-process communication?
>
> Now using the above numbering, LAAGA APIs (1) and (2) don't require
> any kind of IPC. But with (3), we need IPC. Otherwise connecting apps
> using different GUI-toolkits will not be possible.
>
> >>Now it's not clear what we should aim at. Maybe we should limit LAAGA to
> >>only support 1.3. Ie. client application provides a set of callback
> >>functions (audio server context), and one function that is forked() as the
> >>main() of a new process. At the moment, this approach sounds rather good
> >>to me. Comments?
> > that would work. however, it does fail to address the question of how
> > the other process (the "worker/gui process") communicates with the
> > code and data being used within the audio thread.
>
> Ok, I guess what I describe above is the (1)+(2) LAAGA API
> combination. To handle the IPC, we have API (3).
>
> > much careful thought about all these issues is needed. thanks for
> > getting the ball rolling.
>
> Indeed. I just hope that we can get others to join in. For instance, Stefan
> and Abramo, how do these plans look from your perspective? Do you think
> we're recreating aRts or aserver, or are we onto something? Can you
> picture aRts/aserver as an implementation of one of the above?

I think indeed the whole LAAGA discussion sometimes looks like "let's recreate
artsd". I mean, we're talking about a few things here.

1. To get LOW latency, you don't really want to have multiple processes

Agreed. This is why aRts suggests putting everything that requires LOW latency
into the sound server process, whereas everything that doesn't can live either
in the server process, or in the client process. This is implemented and works.

2. You need audio routing between applications

Agreed. The aRts way to do this is that the to-be-routed audio I/O of the
applications takes place inside the sound server. There you have a flow
system, which means you can route any application to any other (and, due to
the synchronous nature, without adding latency).

You can plug things together, put effects in the middle, or master mixers,
and so on.
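
To make this concrete, this is roughly what such routing looks like from the
outside. This is an interface sketch only, with invented C names; the real
aRts flow system is built from MCOP objects:

--cut--
/* hypothetical server-side routing calls, for illustration only */
typedef struct module module_t;

module_t *server_create_module(const char *type);
int       server_connect(module_t *src, const char *out_port,
                         module_t *dst, const char *in_port);

void route_through_reverb(module_t *app_out, module_t *master)
{
    /* app -> reverb -> master mixer, all inside the server process,
     * so the synchronous flow system adds no latency between hops */
    module_t *reverb = server_create_module("reverb");

    server_connect(app_out, "out", reverb, "in");
    server_connect(reverb,  "out", master, "in_1");
}
--cut--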

3. You need to support out-of-process clients, in-process clients, networked
   clients and so on.

Basically, you want to run some clients in-process, some clients out-of-process
and some clients mixed. The usual separation will be

 GUI/Frontend (client process)

----||---------------------------------- communication

Server: Signal processing part (server process)

So basically, all clients will have some parts outside the server, and some
parts inside the server. As an alternative, you might also want to have
everything inside the server, and use separate threads.

aRts supports all possible combinations, where

- if you have in-process-only clients, they probably should be threaded, so
  that they do not block the server; then it's the client's responsibility
  to implement communication

- if you have out-of-process clients, aRts will provide network transparent
  method invocations and streams and such - basically, you won't see whether
  you are talking to a remote or local component

It's also possible to activate most components everywhere. So for instance,
if you write a wave editor, you can use the same effect that is normally
running inside the server to do a reverb in your client.
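
To make the client-side split concrete: the "one function that is forked() as
the main() of a new process" idea from Kai's mail could look like the sketch
below. laaga_register_client() and laaga_run() are invented names for whatever
the server-side API ends up being:

--cut--
#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>

/* runs in the forked client process: GUI toolkit main loop,
 * file I/O, anything that may block */
static int gui_main(void)
{
    return 0;
}

/* runs in the server's audio context: must never block */
static void process(unsigned long nframes)
{
    (void)nframes;
}

int main(void)
{
    pid_t pid = fork();

    if (pid < 0) {
        perror("fork");
        return 1;
    }
    if (pid == 0)
        return gui_main();        /* child: GUI/frontend process */

    /* parent: hand process() over to the server-side API, e.g.
     *   laaga_register_client(process);
     *   laaga_run();
     */
    return 0;
}
--cut--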

 - o -

So I think LAAGA and aRts share most if not all basic assumptions, with
LAAGA appearing to be a subset of aRts. So let me go through the differences:

A. C has been brought up as the preferred language for LAAGA

This can be divided into three items:

A1 - LAAGA server-side API
A2 - LAAGA direct client API
A3 - LAAGA stub client API

B. LAAGA focuses a lot on being /really realtime/

This includes things like: you may not malloc, you may not open files, you
may not use sockets, you may not take locks that non-rt threads may also
hold, you may not ...
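
The standard way to live within these constraints is to pre-allocate
everything and to talk to the realtime thread only through a lock-free
single-reader/single-writer ring buffer. A minimal sketch (not a LAAGA API,
and volatile stands in for proper memory barriers here):

--cut--
#include <stddef.h>

#define RB_SIZE 4096    /* power of two, allocated before RT starts */

typedef struct {
    char            buf[RB_SIZE];
    volatile size_t read_pos;     /* advanced only by the reader */
    volatile size_t write_pos;    /* advanced only by the writer */
} ringbuf_t;

/* called from the non-rt thread; never blocks, fails when full */
static int rb_write(ringbuf_t *rb, const char *data, size_t len)
{
    size_t w = rb->write_pos, r = rb->read_pos, i;

    if (RB_SIZE - (w - r) < len)
        return -1;                /* full: caller retries later */
    for (i = 0; i < len; i++)
        rb->buf[(w + i) & (RB_SIZE - 1)] = data[i];
    rb->write_pos = w + len;      /* publish after the data is in */
    return 0;
}

/* called from process(); never blocks, never allocates */
static size_t rb_read(ringbuf_t *rb, char *data, size_t maxlen)
{
    size_t r = rb->read_pos, w = rb->write_pos, avail = w - r, i;

    if (avail > maxlen)
        avail = maxlen;
    for (i = 0; i < avail; i++)
        data[i] = rb->buf[(r + i) & (RB_SIZE - 1)];
    rb->read_pos = r + avail;
    return avail;
}
--cut--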

C. LAAGA perhaps tries to solve only the minimal subset

I.e. it would be a nice idea if LAAGA were just a header file like LADSPA,
covering in-process co-activation of audio applications, no more and no less.
But well, I am not sure where the debate is going here.
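
For concreteness, such a header-file-only LAAGA might look roughly like this;
all names are invented, modeled loosely on ladspa.h:

--cut--
/* laaga.h -- deliberately minimal sketch, invented names */
#ifndef LAAGA_H
#define LAAGA_H

typedef float laaga_sample_t;

typedef struct {
    const char *label;    /* unique client identifier */
    const char *name;     /* human readable name */

    /* called once, outside the audio context; may allocate and
     * open files */
    void *(*instantiate)(unsigned long sample_rate);
    void  (*cleanup)(void *instance);

    /* called from the server's audio context; must not block */
    void (*process)(void *instance, unsigned long nframes,
                    laaga_sample_t **inputs, laaga_sample_t **outputs);

    /* optional: forked as the main() of the GUI/frontend process */
    int (*gui_main)(void *instance);
} laaga_descriptor_t;

/* the one entry point, analogous to ladspa_descriptor() */
const laaga_descriptor_t *laaga_descriptor(unsigned long index);

#endif /* LAAGA_H */
--cut--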

So how to address these?

A.

As for all the C APIs, I think there are two things worth mentioning here
about aRts.

* CSL: should be the API to go for "just playing a stream". If every
  application were ported to it, then doing sound output via aRts, via OSS,
  or via another server architecture would just be a matter of having a CSL
  driver for it (a sketch of the pattern follows after these two items).

  I think establishing another "play a stream" API in LAAGA (as in A2) would
  be a really really BAD idea. We have a lot of them already, really ;).

* C bindings (as in A1, A3): well, basically if you want a C-based client
  side communication API and a server side component implementation API for
  aRts, the task is doing another language binding for aRts. That kind of
  thing should be feasible, not /too/ hard to do, and might be worth doing
  anyway (to make aRts more GNOME programmer friendly).

  I've been discussing the technical side of this already with Tim Janik
  (he's a main Gtk/Glib developer, so he probably knows how to do that
  kind of thing). Basically, once you have adapted the code generator in
  mcopidl, things shouldn't be that hard. The idea would be to base this
  on Glib, as you probably still want an object system, signals, inheritance,
  data structures and so on.

  That would create generic client and server bindings to all aRts objects
  in one step, so it's probably a nice idea to do it (see the second sketch
  below).
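
As promised, here is a sketch of the "play a stream" pattern for the CSL item.
The names are invented for illustration; consult the real CSL headers for the
actual functions. Only the shape of the API is the point:

--cut--
#include <math.h>
#include <stddef.h>

typedef struct csl_stream csl_stream_t;   /* opaque, hypothetical */

csl_stream_t *csl_open_output(const char *app, int rate, int channels);
int           csl_write(csl_stream_t *s, const short *buf, size_t n);
void          csl_close(csl_stream_t *s);

int play_beep(void)
{
    static short buf[44100];
    csl_stream_t *s = csl_open_output("beep", 44100, 1);
    int i;

    if (!s)
        return -1;
    for (i = 0; i < 44100; i++)           /* one second of 440 Hz */
        buf[i] = (short)(32000.0 *
                         sin(2.0 * 3.14159265 * 440.0 * i / 44100.0));
    csl_write(s, buf, 44100);
    csl_close(s);
    return 0;
}
--cut--

And for the C bindings item, a guess at the shape of what mcopidl-generated
C stubs could look like; again, all names here are hypothetical:

--cut--
/* for an IDL interface like "StereoEffect", the generator could emit
 * client-side stubs that marshal each method call through MCOP,
 * transparently local or remote */
typedef struct arts_object arts_object_t;    /* generic proxy */

arts_object_t *arts_stereo_effect_create(const char *impl_name);
void arts_stereo_effect_set(arts_object_t *obj,
                            const char *attribute, float value);
void arts_object_unref(arts_object_t *obj);
--cut--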

Still, I can imagine all those developers who want a plain-and-simple,
understood-in-ten-minutes in-process component API, as in the minimal
subset idea. So if LAAGA specifies something like that, well, go for
it. But then, PLEASE do look at aRts first, and specify an API that is
implementable there ;). Otherwise it makes no sense.

B.

Ok, I see that you have a point to make against aRts if you want to guarantee
low-low-low-low latency. Currently you may not even pass a filename to aRts,
as this does a few mallocs; nor a message, nor ... anything.

So basically, one way to go for LAAGA would be setting up the relation between
LAAGA and aRts like the relation between rtlinux and linux: aRts would be one
lower-priority (but still realtime) thread/process in the LAAGA framework,
whereas LAAGA would be written with provably deterministic response.
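
In scheduler terms this layering can be expressed with plain POSIX facilities;
a minimal sketch (SCHED_FIFO needs root):

--cut--
#include <pthread.h>
#include <sched.h>

/* give a thread a fixed realtime priority under SCHED_FIFO */
static int set_rt_priority(pthread_t thread, int prio)
{
    struct sched_param sp;

    sp.sched_priority = prio;
    return pthread_setschedparam(thread, SCHED_FIFO, &sp);
}

/* e.g.: the LAAGA engine above the aRts thread, both realtime,
 * so aRts can never starve the engine:
 *   set_rt_priority(laaga_thread, 80);
 *   set_rt_priority(arts_thread,  50);
 */
--cut--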

Even if you want to go that way, it would IMHO be important to design these
issues while keeping an eye on both LAAGA and aRts, to make sure that when you
are done, things will interoperate. I.e. it would be stupid if you could
route audio inside aRts between noatun and an aRts instrument, but not between
ardour and the aRts instrument, because internal aRts streams couldn't be
exported sanely to LAAGA.

All in all, I'd like to add that keeping aRts as it is, doing latency
benchmarks FIRST, and THEN deciding whether we need to change something inside,
or whether a new provably low-latency thing is necessary, might be better than
blindly assuming you need that in any case.

As a small example: you talk about SHM a lot. MCOP currently doesn't use SHM
for transporting audio streams, but adding this should be quite easy, so I
think a patch to aRts is much better than recreating a transport layer again.
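
For illustration, this is roughly what an SHM stream transport could look like
with POSIX shared memory; the name and layout are invented, and it links with
-lrt:

--cut--
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

#define SHM_NAME "/mcop_stream_0"
#define FRAMES   1024
#define CHANNELS 2

typedef struct {
    unsigned long cycle;                  /* incremented per period */
    float         data[FRAMES * CHANNELS];
} shm_stream_t;

/* the writer calls this with create = 1, the reader with create = 0;
 * both end up with the same buffer mapped into their address space */
static shm_stream_t *map_stream(int create)
{
    void *p;
    int fd = shm_open(SHM_NAME, create ? (O_CREAT | O_RDWR) : O_RDWR, 0600);

    if (fd < 0)
        return NULL;
    if (create && ftruncate(fd, sizeof(shm_stream_t)) < 0) {
        close(fd);
        return NULL;
    }
    p = mmap(NULL, sizeof(shm_stream_t),
             PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    close(fd);                            /* the mapping stays valid */
    return p == MAP_FAILED ? NULL : (shm_stream_t *)p;
}
--cut--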

C.

Well, as I said, maybe a minimal in-process activation API, like the one
Paul suggested with AES, or your LAAGA, or anything, should be supported by
aRts, and maybe that would already solve everything we're talking about
here.

 - o -

Conclusion

I think that LAAGA has a few interesting points that are discussed here, but
overall, I think there are really two important issues:

 * look what you can do with aRts already (it can do a lot of what you want)
 * if you can't do something with aRts, solve it
   - with the minimal effort
   - in a way that both can interoperate later

It is really really really important not to split up the development of the
Linux audio architecture into lots of different subsets. aRts provides a good
audio server, and much more already. It is flexible, widely used and widely
tested, portable and so on. I hope I have shown how and where things can be
combined to satisfy everybody.

I'd like to see the free software community building cars, not reinventing
wheels in the audio sector ;).

   Cu... Stefan

-- 
  -* Stefan Westerfeld, stefan_AT_space.twc.de (PGP!), Hamburg/Germany
     KDE Developer, project infos at http://space.twc.de/~stefan/kde *-         


