[linux-audio-dev] LAAGA - key issues


Subject: [linux-audio-dev] LAAGA - key issues
From: Kai Vehmanen (kaiv_AT_wakkanet.fi)
Date: Fri Apr 27 2001 - 23:04:50 EEST


It looks like I will have some time to spend on this API issue over the
next few months. As a first step, I'd like to move back from the
implementation level for a moment, and hear your comments on the
following issues.

But before that, one thing I want to emphasize is that I have no desire to
start a competing audio server framework. If the LAAGA discussion starts
to resemble an already existing design (aRts, CSL, esound, aserver, etc.),
then the implementation work will focus on improving that specific
project. So with this in mind, I welcome everybody to join the
discussion...

(btw; if you have missed the last few posts, the LAAGA working name stands
for Linux Audio Application Glue API).

1. Requirements, part I (the easy ones :))

* LAAGA should allow streaming of low-latency, high-bandwidth audio
  between independent applications.
* LAAGA should provide full synchronization (all apps are always
  in perfect sync, plus means for issuing start/stop/etc. commands).
* Applications connected using LAAGA may have their own graphical
  interfaces. More specifically, LAAGA apps can use different
  GUI toolkits.
* To represent audio data, LAAGA should use 32-bit floats that are
  normalized to [-1,1].
* Individual streams of audio should be noninterleaved, i.e. mono
  streams.

2. Requirements, part II (need discussion)

* Should it be possible to make LAAGA connections between
  already running apps? The other alternative is that
  clients are compiled as plugins and instantiated
  by the server.
* Should it be possible to add and remove clients when the
  server is running?

3. Known difficult issues

* The audio server is driven by the audio hardware. The server
  should never perform any blocking system calls, or
  otherwise wait for external events, as this might result
  in missed audio interrupts.
-> Still, in many audio applications, system calls are
   needed to actually produce the audio!
-> Lock-free double buffering will help, but it adds
   latency.
* Allowing multiple GUI toolkits in practice means
  multiple processes (i.e. using multiple threads is
  not enough).
-> Possibly useful IPC techniques are SysV shared memory
   (for audio) and message queues (for control messages).
-> Sockets and pipes are not usable in the audio server
   context.

-- 
 http://www.eca.cx
 Audio software for Linux!



This archive was generated by hypermail 2b28 : Thu May 24 2001 - 00:00:52 EEST