Re: [linux-audio-dev] Re: AudioServer Standard?


Subject: Re: [linux-audio-dev] Re: AudioServer Standard?
From: David Olofson (audiality_AT_swipnet.se)
Date: Thu Sep 23 1999 - 20:31:23 EDT


On Fri, 24 Sep 1999, Stefan Westerfeld wrote:
[...]
> Not really. aRts requires really tight timing for tasks like full duplex
> audio processing, hard disk recording, realtime midi synthesis (I don't
> want to wait more than a few ms when pressing a key on my MIDI keyboard),
> etc.

Unavoidable problem, of course...

> For that reason, hooking aRts to another audio server engine (like for
> instance esd as well), is probably impossible. You could build a "link-
> me-in" /dev/dsp realtime virtualization server, which then loads both
> Audiality and aRts as shared libs. On the other hand, you could probably
> build Audiality as an aRts plugin, or aRts as an Audiality plugin.

Yes, that would be a lot more efficient. (That is, it could even be usable, as
opposed to streaming between the two tasks. :-)
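
Just to make the "link-me-in" idea concrete: a very rough sketch, in C, of what
such a host could look like. The library names and the engine_init/
engine_process entry points are invented for the example; neither engine
actually exports anything like this today.

/* Hypothetical sketch: one host process dlopen()s both engines as
 * shared libraries, so a single realtime loop drives them both and
 * no audio has to be streamed between two separate server tasks.
 * Library and symbol names are made up for illustration only. */

#include <dlfcn.h>
#include <stdio.h>
#include <stdlib.h>

typedef int  (*engine_init_fn)(int sample_rate, int block_size);
typedef void (*engine_process_fn)(float *buffer, int frames);

static void load_engine(const char *lib, engine_init_fn *init,
                        engine_process_fn *process)
{
        void *handle = dlopen(lib, RTLD_NOW);
        if (!handle) {
                fprintf(stderr, "dlopen %s: %s\n", lib, dlerror());
                exit(1);
        }
        /* Both engines are assumed to export these two symbols. */
        *init    = (engine_init_fn)dlsym(handle, "engine_init");
        *process = (engine_process_fn)dlsym(handle, "engine_process");
        if (!*init || !*process) {
                fprintf(stderr, "%s: missing engine entry points\n", lib);
                exit(1);
        }
}

int main(void)
{
        engine_init_fn    a_init, b_init;
        engine_process_fn a_process, b_process;
        float block[64] = { 0.0f };
        int i;

        load_engine("libaudiality.so", &a_init, &a_process);
        load_engine("libarts.so", &b_init, &b_process);

        a_init(44100, 64);
        b_init(44100, 64);

        /* One loop, one buffer: no context switches or sockets in the
         * audio path between the two engines. */
        for (i = 0; i < 1000; i++) {
                a_process(block, 64);
                b_process(block, 64);
                /* ...write block to /dev/dsp here... */
        }
        return 0;
}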

> I think all those ideas sound strange and only show that it makes no sense
> to build two projects with exactly the same goal, purpose and technology.
> What is the sense of having two fully featured realtime multimedia
> processing engines running? Normally, people tend to run one linux kernel
> and one X server for instance.

Yes, I agree. The only point of those ideas was to bridge the transition
period, if we were to use aRts until we manage to get Audiality to a
usable level. (If Audiality actually becomes usable before aRts has evolved
into an optimized client-server style engine, that is.)

Anyway, it's probably a good idea to coordinate the development of our projects,
and of other Open Source audio projects as well (trying to use the same plug-in
API, for example), as that improves the chances of the long-term result being The
Audio Subsystem. :-)

[...]
> > Also, I start to prefer C to C++, at least for this kind of stuff... Perhaps
> > I've read too much Linux kernel code? ;-) (And, I was an asm die hard for a few
> > years, when I hacked on the Amiga...)
>
> The aRts plugins are plain C++ - CORBA would be much too slow for things like
> that. It's just that things like session management, distribution, flow
> graphs, etc. are handled over a CORBA interface. That way you can for
> instance build flow graphs with a visual editor, while the synthesis
> server isn't linked to Qt, X11 and similar.

OK, I was thinking of the plug-in API and the extra overhead that C++ generates
if you're not careful. Although that overhead is usually insignificant, it
might become noticeable when dealing with very small buffers, like less than 32
samples. (Those are used in feedback loops, and for ultra-low latency under RTLinux.)
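
To put some rough numbers on that: the fixed cost of dispatching into a plugin
(virtual call, parameter setup, whatever the API needs) is paid once per block,
so shrinking the block shrinks what that cost is amortized over. A minimal
sketch in plain C; the plugin struct and gain_run are made up for the example
and have nothing to do with aRts' actual plugin API:

#include <stddef.h>

/* Invented example: a plugin is just an object with a per-block
 * entry point.  "run" stands in for whatever dispatch a real API
 * would do (virtual call, function pointer, marshalling, ...). */
typedef struct plugin {
        void (*run)(struct plugin *self, float *buf, size_t frames);
        float gain;
} plugin;

static void gain_run(plugin *self, float *buf, size_t frames)
{
        size_t i;
        for (i = 0; i < frames; i++)
                buf[i] *= self->gain;
}

/* At 44100 Hz:
 *     512-frame blocks  ->   ~86 dispatches/s per plugin
 *      32-frame blocks  -> ~1378 dispatches/s per plugin
 * The per-dispatch cost is constant, but with 16 times fewer samples
 * per block it stops being negligible - which is where careless C++
 * (extra indirections, temporaries, exception machinery) can hurt. */
static void process_chain(plugin **chain, size_t n,
                          float *buf, size_t frames)
{
        size_t i;
        for (i = 0; i < n; i++)
                chain[i]->run(chain[i], buf, frames);
}

int main(void)
{
        float buf[32] = { 1.0f };
        plugin g = { gain_run, 0.5f };
        plugin *chain[1] = { &g };

        process_chain(chain, 1, buf, 32);
        return 0;
}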

> Of course I don't know what exactly you want to do in Audiality and how,
> but from the discussion and from the webpage it seems to me like the
> goals you have are very close to the goals of aRts. On the other hand, aRts
> has been under development for about two years, and already has many of the
> things you say you want to achieve. And more.

Yes, I think we're trying to do about the same thing. Unless there's some
very fundamental difference between our projects, there are basically two ways
to go, I think:

1) Keep developing Audiality, as a new engine built from scratch, learning from
the experience with aRts and other systems, and supporting the latest concepts
from the ground up.

2) Drop the Audiality engine development, and instead concentrate on the new
plug-in API, porting aRts to RTLinux to cover the sub 3 ms latency range, and
other similar efforts.

Question: What would result in the best solution in the long run?

> For instance it will integrate really nicely into the next version of
> kooBase (which will be called Brahms), has really decent flow graph
> management, audio server functionality (which is why this topic came up),
> etc.
>
> Of course, it isn't perfect, and there are quite a few things that could
> be done a lot better. But it is a program you can install, play with,
> change a few lines, recompile, and immediately see the effect. Of course,
> if you want to have a new plugin API and a new signal flow scheduler,
> you change a bit more. But still you can keep the whole framework, GUI
> stuff, flow graph stuff, etc., which won't care about that kind of
> thing.

That's kind of what I had in mind... Perhaps I could use aRts as a starting
point for the implementation of Audiality? It could save time for me, and might
also result in some useful new code that can be ported back to aRts, at least
until Audiality is mature enough to use.

> So why don't you just consider working on aRts? It's open source, and your
> ideas/code are always welcome. I think Linux should try to solve problems
> once, right, and together, and then build on what is there. Finally, that
> also seems to have happened for desktops, where we now have GNOME and KDE,
> and most people just joined one of the two projects, so we have two very
> nice solutions there.
>
> That's what should happen for audio, too.

Yes, I agree, and we certainly need to get rid of this chaos of different
plug-in APIs and incompatible engines. What I'll do depends on how different
aRts is from my Audiality plans... I want the plug-in and client APIs to be
flexible, efficient and powerful enough for most Linux audio developers to use,
or there won't be much point. I'm aiming for something more like an OS audio
infrastructure than a specialized music/audio engine.
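
Just to illustrate the kind of shared plug-in API I have in mind, here is a
minimal C-level sketch. Every name in it is invented for the example; it is
not aRts' interface, and not something Audiality defines yet either:

/* Hypothetical sketch of a minimal, engine-neutral plug-in ABI in
 * plain C.  All names are invented for illustration. */

#include <stddef.h>

typedef struct la_plugin la_plugin;    /* opaque per-instance handle */

typedef struct la_descriptor {
        const char *label;             /* unique identifier          */
        const char *name;              /* human-readable name        */
        unsigned    in_ports;          /* number of audio inputs     */
        unsigned    out_ports;         /* number of audio outputs    */

        /* Create an instance at a given sample rate. */
        la_plugin *(*instantiate)(const struct la_descriptor *d,
                                  unsigned long sample_rate);

        /* Hard realtime safe: no malloc, no locks, no syscalls. */
        void (*run)(la_plugin *p,
                    const float *const *inputs,
                    float *const *outputs,
                    size_t frames);

        void (*cleanup)(la_plugin *p);
} la_descriptor;

/* Each plug-in shared object would export one well-known symbol, so
 * any host - a GUI flow graph editor, a network audio server, or an
 * RTLinux engine - can enumerate and load plug-ins the same way. */
const la_descriptor *la_descriptor_at(unsigned long index);

The point is that the data path stays plain C (or C++ behind it), while session
management, graph editing and so on can live in whatever higher-level framework
each engine prefers.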

Regards,

//David

 ·A·U·D·I·A·L·I·T·Y·   P r o f e s s i o n a l   L i n u x   A u d i o
- - ------------------------------------------------------------- - -
    ·Rock Solid                   David Olofson:
    ·Low Latency       www.angelfire.com/or/audiality     ·Audio Hacker
    ·Plug-Ins             audiality_AT_swipnet.se         ·Linux Advocate
    ·Open Source                                          ·Singer/Composer


