[linux-audio-dev] Modular audio

New Message Reply About this list Date view Thread view Subject view Author view Other groups

Subject: [linux-audio-dev] Modular audio
From: Jarno Seppanen (jams_AT_cs.tut.fi)
Date: Tue Jan 26 1999 - 08:14:41 EST


        Hello! As I'm about to spend two weeks abroad, I'll post some
thoughts and answers to previous topics (on the Audiotechque list).

        I personally have been speculating about the possibility of having
"streaming" or "flowing" functionality in the operating system, kind of like
what the BeOS Media Kit has. At this level, audio inputs, outputs, and
software would be depicted as blocks and wires.

        For example, every line-out channel of a soundcard would be a consumer
block, and e.g. the microphone input would be a producer block. The software
running on the computer would plug into these blocks as needed. Some filter
application could be seen as just another block on the "audio desktop" and
could be wired in between the microphone and the line out. A software synth
would be a producer block and could be wired either directly to the audio
outputs or via an arbitrary number of filter blocks.
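The producer/filter/consumer wiring described above can be sketched in a few lines. This is a minimal toy model, not any real API; all class names (Producer, Filter, Consumer) and the ramp test signal are my own illustrative assumptions.

```python
class Producer:
    """Source block, e.g. a microphone input (here: a dummy ramp signal)."""
    def pull(self, n):
        return list(range(n))               # n samples of a test signal

class Filter:
    """Processing block wired in between a source and a sink."""
    def __init__(self, source, gain):
        self.source, self.gain = source, gain
    def pull(self, n):
        # Pull from upstream, process, pass downstream.
        return [self.gain * x for x in self.source.pull(n)]

class Consumer:
    """Sink block, e.g. a line-out channel (here: it just returns samples)."""
    def __init__(self, source):
        self.source = source
    def render(self, n):
        return self.source.pull(n)

# Wire up: microphone -> filter -> line out.
mic = Producer()
eq = Filter(mic, gain=0.5)
line_out = Consumer(eq)
print(line_out.render(4))                   # -> [0.0, 0.5, 1.0, 1.5]
```

Rewiring the chain is then just constructing the blocks in a different order, which is the whole point of the "audio desktop" picture.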

        EsounD (and SGI libaudio) implement real-time mixing of outgoing audio
streams. In this model that would be a mere adder block, placed just before the
"line out" block. How does this sound to you? Comments? This is exactly what
the BeOS currently has; what if Linux had it too?
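To make the "mere adder block" point concrete, here is a sketch of mixing as just another block that sums its inputs sample by sample. The names (adder, Const) are illustrative only, not taken from EsounD or any real library.

```python
class Const:
    """Trivial producer that outputs a constant signal, for demonstration."""
    def __init__(self, value):
        self.value = value
    def pull(self, n):
        return [self.value] * n

def adder(inputs, n):
    """Pull n samples from each input block and sum them per sample."""
    frames = [src.pull(n) for src in inputs]
    return [sum(samples) for samples in zip(*frames)]

# Two streams mixed just before the (imagined) line-out block.
mix = adder([Const(1.0), Const(0.25)], 4)
print(mix)                                  # -> [1.25, 1.25, 1.25, 1.25]
```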

         ------------------------
           An Audio Application - an audio editor, patchbay, etc.
         ------------------------
 - - - - - - - - - - - - - - - - - - - - - - "above driver-level" interface
       ---------------------------
         Flowing/streaming layer - e.g. BeOS, Sonic Flow, Sig++, etc.
       ---------------------------
       ----------------------------
         Portable sound I/O layer - PSL, EsounD, what?
       ----------------------------
 - - - - - - - - - - - - - - - - - - - - - - kernel interface
 -------------- ------------- ------------ ---------
   Linux/ALSA     Linux/OSS     SGI IRIX     SunOS
 -------------- ------------- ------------ ---------
 analog, digital, multichannel etc. physical audio interfaces

Stefan Westerfeld <stefan_AT_space.twc.de> writes:

> As to the flow system aRts currently uses, it doesn't do blockwise
> calculations. (Single sample - which is of course slow, but correct).

        Correct calculations can be done using frames as well; samplewise
calculations are just frames with a length of 1. :-)
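The claim is easy to demonstrate: a framewise filter applied with frame length 1 gives exactly the samplewise result, because the state is carried between calls. This is a toy one-pole lowpass of my own invention, just to show the equivalence.

```python
def lowpass(frame, state, a=0.5):
    """One-pole lowpass over a frame of any length; returns (output, state)."""
    out = []
    for x in frame:
        state = a * state + (1 - a) * x
        out.append(state)
    return out, state

signal = [1.0, 0.0, 1.0, 0.0, 1.0, 0.0]

# Samplewise: six calls with frames of length 1, carrying the state along.
s, per_sample = 0.0, []
for x in signal:
    y, s = lowpass([x], s)
    per_sample += y

# Framewise: one call on the whole six-sample frame.
framewise, _ = lowpass(signal, 0.0)

print(per_sample == framewise)              # -> True
```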

> I would consider it a restriction if you have fixed size blocks, because
> they don't allow you to do real short feedback loops. I'd rather have
> something like flexible size blocks, which autoadapt on the fly to have
> the right size.

        But this would add a lot of complexity and buffering in order to split
and join different-length frames, right? (BTW, I prefer to use the term
"frame" for a short piece of a signal and "block" for a signal processing
algorithm.) If there is a need for short feedback loops, the global frame
duration should be shortened.
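Why the frame duration bounds the feedback delay can be seen in a small simulation: with frame-at-a-time scheduling, a feedback wire can only deliver the *previous* frame, so the loop delay is at least one frame, and halving the frame length halves that minimum delay. The function name and setup here are purely illustrative.

```python
def run_feedback(frame_len, total, gain=0.5):
    """Send an impulse through a feedback loop scheduled frame by frame."""
    impulse = [1.0] + [0.0] * (total - 1)
    fed_back = [0.0] * frame_len            # previous frame on the feedback wire
    out = []
    for i in range(0, total, frame_len):
        frame = [impulse[i + j] + gain * fed_back[j] for j in range(frame_len)]
        out += frame
        fed_back = frame                    # becomes next iteration's feedback
    return out

# The first echo of the impulse appears exactly one frame later in each case:
print(run_feedback(4, 8))                   # -> [1.0, 0.0, 0.0, 0.0, 0.5, 0.0, 0.0, 0.0]
print(run_feedback(2, 8))                   # -> [1.0, 0.0, 0.5, 0.0, 0.25, 0.0, 0.125, 0.0]
```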

> Besides that, it might be a great idea to fit Sonic Flow as flow system
> into aRts (inventing new flexible blocksize scheduling), and perhaps to
> use PSL as sound library. This would really avoid duplicated (and in-
> compatible) work, since aRts has the Flow GUI builder already.

        Yes, I have been imagining having one core dataflow library used
by multiple applications. I would like to co-operate with you on aRts, within
the very small time window I have. I also think it is important to
separate the GUI from the workings of the dataflow system with a well-defined
API.

Andrew Clausen <clausen_AT_alphalink.com.au> writes:

> Yep. That's what was going through my mind. I was originally thinking Esound, but
> everyone told me it wasn't a good idea, because its slow (its meant for TCP/IP isn't
> it?). So I wrote PSL. I only have access to ALSA and OSS. When I went back to

        But EsounD is currently being included in GNOME and is being actively
(?) developed, and this makes it a better idea? I think it would be optimal if
we could join all these various small projects into a unified framework; see
e.g. the diagram at the top of this mail.

> So this is a problem. Esound doesn't seem to be much better - it supports other
> systems, but they're untested. Actually, I think the esd should depend on PSL or even
> Sonic Flow for portable I/O. It doesn't seem right to have the server directly
> talking to the kernel.

        In order to get things right, there should be a well-defined API
between the different layers. We want higher-level functionality (right?),
but we don't want the higher-level functions to flip bits directly in the
soundcard (right?); they should go via device drivers etc.
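The layering argument can be sketched as well: the flow layer talks only to an abstract sound-I/O interface, and only backends talk to a specific driver. Everything here (SoundIO, OSSBackend, flow_layer_render) is a made-up toy, not the API of PSL, EsounD, or Sonic Flow.

```python
class SoundIO:
    """Portable sound I/O layer: the only interface upper layers may use."""
    def write(self, frame):
        raise NotImplementedError

class OSSBackend(SoundIO):
    """Driver-specific backend; in reality this would write to /dev/dsp."""
    def __init__(self):
        self.written = []
    def write(self, frame):
        self.written.extend(frame)          # stand-in for a kernel write()

def flow_layer_render(device, frames):
    # The flow layer never knows (or cares) which driver it is talking to.
    for frame in frames:
        device.write(frame)

dev = OSSBackend()
flow_layer_render(dev, [[0.1, 0.2], [0.3]])
print(dev.written)                          # -> [0.1, 0.2, 0.3]
```

Swapping OSS for ALSA, IRIX, or SunOS would then be a matter of providing another subclass, with the layers above left untouched.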

        Joel Dinolt: have you made any progress with the C language API for
Sonic Flow? Are you still planning to investigate the matter? How about
the GUILE support?

        Speaking for myself, I remember saying that I would have the support
for hierarchical networks ready in a few days. Well, now the time estimate
has been multiplied by pi and rounded up to the nearest multiple of ten, and
there are still no results. After two days of coding I noticed there was a
flaw in the design and had to start all over. I'll get back to the subject
after two weeks.

-- 
-Jarno



This archive was generated by hypermail 2b28 : Mon Mar 13 2000 - 11:59:35 EST