Subject: [linux-audio-dev] It's been more than a year... Have things improved?
From: Juan Linietsky (coding_AT_reduz.com.ar)
Date: Sun Jul 06 2003 - 23:35:41 EEST


Hi everyone... I guess it's been more than a year since the last time we
discussed these issues here. I am sure that everyone here, myself included,
works very hard to maintain and improve their respective apps. Because of
that, the intention of this post is to inform myself, as well as possibly
other developers, about the status of several things that affect the way
we develop our apps under Linux.

As many of you might remember, there were many fundamental issues regarding
the APIs and toolkits we use. I will try to enumerate and explain each of
them as best as I can. I ask everyone who is more up to date on the status
of each to answer, comment, or even add more items.

1- GUI programming, and interface/audio synchronization. As well as I can
remember, a great problem for many developers is how to synchronize the
interface with the audio thread. Usually, we have the interface running at
normal priority and the audio thread running at high priority (SCHED_FIFO)
to ensure that it won't get preempted while mixing, especially when working
with low latency. For many operations we do (if not most) we can resort to
shared memory to make changes, as long as they are not destructive. But
when we want to lock, we will almost certainly run into a priority
inversion scenario. Although POSIX specifies functionality to avoid such
scenarios (priority ceiling/inheritance), there are no plans to include
support for it in Linux anytime soon (at least for 2.6, from what Andrew
Morton told me). Although some projects exist, it is not likely to become
mainstream for a couple of years (well, the low latency patches are not
mainstream either, with good reason).
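
For reference, this is the POSIX mechanism in question; a sketch only,
since as said above mainline Linux does not honour the protocol yet:

  #include <pthread.h>

  /* Sketch: a priority-inheritance mutex as specified by POSIX.
     With PTHREAD_PRIO_INHERIT, a low-priority thread holding the
     lock gets boosted to the priority of the highest-priority
     waiter, which prevents the inversion described above. */
  static pthread_mutex_t audio_lock;

  static int init_audio_lock(void)
  {
      pthread_mutexattr_t attr;
      int ret;

      pthread_mutexattr_init(&attr);
      ret = pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT);
      if (ret == 0)
          ret = pthread_mutex_init(&audio_lock, &attr);
      pthread_mutexattr_destroy(&attr);
      return ret;
  }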

I came to find out that the preferred method is to transfer data through a
FIFO (due to its lock-free nature in userspace), although that can be very
annoying for very complex interfaces.
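
Roughly, the idea is a single-reader/single-writer ring buffer: the GUI
thread pushes small commands, and the audio thread drains them at the
start of each process cycle. A minimal sketch (names are mine; it assumes
exactly one writer and one reader, and on SMP machines the index updates
would additionally need memory barriers):

  #define FIFO_SIZE 256                 /* must be a power of two */

  typedef struct {
      int   param;                      /* which control changed */
      float value;                      /* its new value */
  } command_t;

  typedef struct {
      command_t buf[FIFO_SIZE];
      volatile unsigned read_idx;       /* advanced only by audio thread */
      volatile unsigned write_idx;      /* advanced only by GUI thread */
  } command_fifo_t;

  /* GUI thread side: returns 0 if the FIFO is full. */
  static int fifo_push(command_fifo_t *f, command_t c)
  {
      unsigned w = f->write_idx;
      if (((w + 1) & (FIFO_SIZE - 1)) == f->read_idx)
          return 0;                     /* full: drop or retry later */
      f->buf[w] = c;
      f->write_idx = (w + 1) & (FIFO_SIZE - 1);
      return 1;
  }

  /* Audio thread side: returns 0 if the FIFO is empty. */
  static int fifo_pop(command_fifo_t *f, command_t *c)
  {
      unsigned r = f->read_idx;
      if (r == f->write_idx)
          return 0;                     /* empty */
      *c = f->buf[r];
      f->read_idx = (r + 1) & (FIFO_SIZE - 1);
      return 1;
  }

Neither side ever blocks, so the audio thread cannot be priority-inverted;
the price is that every state change must be expressed as a small command,
which is exactly what gets annoying for complex interfaces.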

What are your experiences on this subject? Is it accepted to lock in
cases where a destructive operation is being performed? (Granted, if you
are doing a mixdown you are not supposed to be doing that.)
From my own perspective, I've seen even commercial HARDWARE lose the
audio, or even kill voices, when you perform a destructive operation, but
I don't know what users are supposed to expect. One thing I also have to
say about this is JACKit's (and the apps written for it) low tolerance for
xruns. I found that many apps (or even JACKit itself) would crash or exit
when one happens. I understand xruns are bad, but I don't see how they can
be a problem if you are "editing" (NOT recording/performing) and some
destructive operation needs to block the audio thread for a relatively
long time.

2- The role of low latency / audio and MIDI timing. As much as we love
working with low latency (and I personally like controlling softsynths
from my Roland keyboard), in many cases, if not most, it is not really
necessary, and it can be counterproductive, since working in such a mode
eats a lot of CPU out of the programs. Low latency is ideal when
performing LIVE input and you want to hear a processed output. Examples of
this are input from a MIDI controller and output from a softsynth, or
input through a line (guitar for example) and processed output (effects).
But imagine that you don't really need to do that: you could simply
increase the audio buffer size to get latencies up to 20/25 milliseconds,
saving CPU and preventing xruns, while the latency is still perfectly
acceptable for working in a sequencer, for example, or for mixing
pre-recorded audio tracks. Doing things this way should also ease the pain
of softsynth writers, as they wouldn't be FORCED to support low latency
for their app to work properly. And despite the point of view of many
people, many audio users and programmers don't care about low latency
and/or don't need it. But such a scenario, at least a year ago, was (is?)
not possible under Linux, as softsynths (using ALSA and/or JACKit) have no
way to synchronize audio and MIDI, unless they run in low latency mode,
where it no longer matters (the audio update interval is so small that it
works as a relatively high resolution timer). Last time I checked, ALSA
could not deliver useful timestamping information for this, and JACKit
would also not deliver info on when the audio callback happened. I know
back then there were ideas floating around about integrating MIDI
capabilities into JACKit to overcome this problem and provide a more
standardized framework. I also don't see how MIDI sync/clocks would help
in this case, since they are basically meant for wires or "low latency"
frameworks.
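
To make the timestamping point concrete: to schedule MIDI sample-
accurately inside a large buffer, a softsynth only needs two numbers, the
time an event arrived and the time the current audio buffer will start
playing. This hypothetical helper shows the arithmetic; obtaining both
timestamps reliably from ALSA/JACKit is exactly the missing piece:

  /* Sketch only: map a timestamped MIDI event to a frame offset
     inside the audio buffer currently being rendered. All names
     are hypothetical; both timestamps are assumed to be in
     microseconds on a common clock. */
  static unsigned event_frame_offset(long long event_us,
                                     long long buffer_start_us,
                                     unsigned sample_rate,
                                     unsigned buffer_frames)
  {
      long long delta = event_us - buffer_start_us;
      long long frame = delta * sample_rate / 1000000;

      if (frame < 0)
          frame = 0;                    /* late event: play now */
      if (frame >= buffer_frames)
          frame = buffer_frames - 1;    /* clamp to this cycle */
      return (unsigned)frame;
  }

With a 25 ms buffer at 44100 Hz that is about 1102 frames, so note timing
stays at the resolution of the timestamps instead of the buffer size.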

3- Host instruments. I remember some discussion about XAP a while ago, but
having visited the page recently, I saw no progress at all. Is there still
really a need for this (besides the previous point), or is it that
ALSA/JACKit do the job better, besides providing interface abstraction?
Also, I was never very clear on what the limitation is regarding an
implementation of the VST API under Linux, given that so many opensource
plugins exist. Is it because the API is proprietary, or for similar legal
reasons?

4- Interface abstraction for plugins. We all know how our lovely X11 does
not allow for a sane way of sharing the event loop between toolkits (might
this be a good idea for a proposal?), so it is basically impossible to
have more than one toolkit in a single process. Because of this, I guess
it's impossible and unfair to settle on one toolkit for configuring LADSPA
plugins from a GUI. I remember Steve Harris proposed the use of (RDF, was
it?) metadata, and plugins may also provide hints, but I still think that
may not be enough if you want to do advanced features such as an envelope
editor, or visualizations of things like filter responses, cycles of an
oscillator, etc. Has anything happened in recent months regarding this
issue?
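
For context, the range hints in ladspa.h are about all a generic host has
to go on today. A sketch of how a host might pick a widget for one control
port (the hint macros are the real LADSPA API; the widget mapping is my
own illustration):

  #include <ladspa.h>
  #include <stdio.h>

  /* Sketch: choose a widget for a control port using only the
     standard range hints from ladspa.h. */
  static void describe_port(const LADSPA_Descriptor *d,
                            unsigned long port)
  {
      LADSPA_PortRangeHintDescriptor h =
          d->PortRangeHints[port].HintDescriptor;

      if (LADSPA_IS_HINT_TOGGLED(h))
          printf("%s: checkbox\n", d->PortNames[port]);
      else if (LADSPA_IS_HINT_INTEGER(h))
          printf("%s: spinbox\n", d->PortNames[port]);
      else if (LADSPA_IS_HINT_BOUNDED_BELOW(h) &&
               LADSPA_IS_HINT_BOUNDED_ABOVE(h))
          printf("%s: slider %f..%f%s\n", d->PortNames[port],
                 d->PortRangeHints[port].LowerBound,
                 d->PortRangeHints[port].UpperBound,
                 LADSPA_IS_HINT_LOGARITHMIC(h) ? " (log)" : "");
      else
          printf("%s: numeric entry\n", d->PortNames[port]);
  }

Sliders and checkboxes fall out of this naturally; an envelope editor or a
filter response plot does not, which is exactly the limitation above.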

5- Project framework/session management. After much discussion and a
proposal, Bob Ham started implementing LADCCA. I think this is a vital
component, and it will grow even more important given the complexity that
an audio setup can reach. Imagine you are running a sequencer, many
softsynths, effect processors and then a multitrack recorder, all
interconnected via ALSA or JACKit; saving the setup to work on later can
be a lifesaver. What is its state nowadays? And how many applications
provide support for it?

Well, those are basically my main concerns on the subject. I hope I
haven't sounded like a moron, since that is not my intention at all. I am
very happy with the progress of the apps, and it's great to see apps like
Swami, Rosegarden or Ardour mature over time.
Well, cheers to everyone and let's keep working hard!

Juan Linietsky

