[linux-audio-dev] Audio engine stuff


Subject: [linux-audio-dev] Audio engine stuff
From: Juhana Sadeharju (kouhia_AT_nic.funet.fi)
Date: Wed Feb 16 2000 - 10:51:01 EST


Hello. I wrote what is below quite quickly. All these patent problems
are frustrating, so I wanted to put a couple of tricks out as soon as
possible. I have no idea whether they are new or not. You can find them
below too.

It is a coincidence that Anders Torger wrote about specially compiled
filter code at about the same time. How do you insert the compiled code
into your system?

If anything below raises further ideas, please post them to the list.
My strategy of keeping these away from the list for half a year
may just not have been the right way.

As for the "for a quick start, keep the beginnings of samples in
memory" patent: friends of a friend wrote such an application, which
ended up with two customers (but the code is not available anymore, I
guess), and another reader mentioned a similar program (but I don't
know exactly what the status of its code is). Sad! We could look
through the Amiga, Atari and Mac software archives for programs which
use the feature. Anyone? But this is starting to look hopeless, because
I cannot even open lha archive files...

The point is (as was said on a patent mailing list) that we need to be
able to prove that the feature was available earlier. If you wrote a
private program with such a feature, you have to be able to prove that
you wrote the program earlier --- and how do you do that if you didn't
distribute the software, or if nobody remembers your software anymore?
Etc.; figure it out!

Yours,

Juhana

Feb 16, 2000
Juhana Sadeharju
kouhia at nic.funet.fi

This material relates to an audio engine, but the same methods may be
useful in any application needing similar solutions. I'm publishing
these so that we can avoid future patent problems (this has happened
before, because "making public" is not the same as "publishing" --
copy this article to your archive).

The audio engine can run as part of an application or as a standalone
audio engine server. It has the following parts: audio engine butler
process(es) (thread(s)), i/o processes (threads), dsp processes
(threads), and disk processes (threads).

To get maximum performance, all of these processes should (but need
not) be run as soft-RT or hard-RT processes. The i/o and disk
processes should (but need not) run with the highest priorities.
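
A minimal sketch of how such priorities could be requested under POSIX threads; the function name, the SCHED_FIFO policy choice and the fallback behaviour are my assumptions, not part of the engine description. Real-time priority normally needs the right privileges, so the sketch falls back to an ordinary thread when the request is refused:

```c
#include <pthread.h>
#include <sched.h>

/* Try to create a thread with SCHED_FIFO (soft-RT) priority;
   fall back to a default thread if we lack the privilege. */
static int create_rt_thread(pthread_t *t, int prio,
                            void *(*fn)(void *), void *arg)
{
    pthread_attr_t attr;
    struct sched_param sp;

    pthread_attr_init(&attr);
    pthread_attr_setinheritsched(&attr, PTHREAD_EXPLICIT_SCHED);
    pthread_attr_setschedpolicy(&attr, SCHED_FIFO);
    sp.sched_priority = prio;
    pthread_attr_setschedparam(&attr, &sp);

    if (pthread_create(t, &attr, fn, arg) == 0) {
        pthread_attr_destroy(&attr);
        return 0;                  /* got an RT thread */
    }
    pthread_attr_destroy(&attr);
    /* no permission for RT: run as an ordinary thread instead */
    return pthread_create(t, NULL, fn, arg);
}

/* trivial worker used in the usage example */
static void *worker(void *arg) { (void)arg; return NULL; }
```

The i/o and disk threads would be created with high `prio` values, the butler with a low one (or with no RT policy at all).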

The i/o processes wait for A/D, D/A, digital i/o, MIDI, network etc.
inputs to the engine. Some of these processes could run with a lower
priority; for example, if it is desired that the engine stay reliable
even when the network is breaking down, then the network process should
run with a lower priority.
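
Such an i/o process is essentially a loop blocked on its inputs. A sketch using poll(); the function name and the dispatch structure are my assumptions:

```c
#include <poll.h>

/* One i/o process: sleep until any input (audio, MIDI, network...)
   becomes readable, then hand the data over to the engine. */
static int wait_for_input(struct pollfd *fds, int nfds, int timeout_ms)
{
    int n = poll(fds, nfds, timeout_ms);
    if (n <= 0)
        return n;                /* timeout or error */
    for (int i = 0; i < nfds; i++)
        if (fds[i].revents & POLLIN) {
            /* read fds[i].fd here and forward the data
               to the dsp/disk side of the engine */
        }
    return n;
}
```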

A dsp process (thread) executes an audio flow network (similar to what
is used in Csound and other flow-based audio software) or a
special-purpose dsp system (a flow network is more general but a bit
slower).
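
The core of such a flow network can be sketched as a list of unit generators processed in order. The names, and the simplification to a single in-place buffer, are my assumptions; a real system like Csound schedules a full graph:

```c
#include <stddef.h>

/* A minimal audio flow network: unit generators processed in a fixed
   order, each transforming a block of samples in place. */
typedef void (*ugen_fn)(float *buf, size_t n, void *state);

typedef struct {
    ugen_fn fn;
    void   *state;
} ugen;

static void run_network(ugen *nodes, size_t nnodes,
                        float *buf, size_t nframes)
{
    for (size_t i = 0; i < nnodes; i++)
        nodes[i].fn(buf, nframes, nodes[i].state);
}

/* example unit generator: apply a gain */
static void gain_ugen(float *buf, size_t n, void *state)
{
    float g = *(float *)state;
    for (size_t i = 0; i < n; i++)
        buf[i] *= g;
}
```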

The disk processes read and write the disks. One process per disk is
desirable. One process per stream has been found useful too.

The engine butler runs with the lowest priority (normally) and serves
the main engine.

 -*-

Details for the i/o processes can be found in existing GNU software.

 -*-

A disk process may get its data buffers as floating buffers (taken from
a buffer pool), via ring buffers, or by any known buffering method.
Buffers can be read/written as a whole or split up, or the read/write
can be delayed according to the time scheduling of the data. The disk
process makes sure that we get the data on time according to the
schedule. Any free time can then be spent writing to disk or loading
non-time-critical data.
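
One of the known buffering methods mentioned above, sketched: a single-reader/single-writer ring buffer between the disk process and the dsp process. The struct and function names, and the byte-oriented interface, are my assumptions:

```c
#include <stddef.h>

typedef struct {
    char  *data;
    size_t size;      /* capacity in bytes */
    size_t head;      /* write index (disk process) */
    size_t tail;      /* read index (dsp process)   */
} ringbuf;

static size_t rb_used(const ringbuf *rb)
{
    return (rb->head + rb->size - rb->tail) % rb->size;
}

/* Returns how many bytes were actually written/read; one slot is
   kept empty to distinguish "full" from "empty". */
static size_t rb_write(ringbuf *rb, const char *src, size_t n)
{
    size_t room = rb->size - 1 - rb_used(rb);
    if (n > room) n = room;
    for (size_t i = 0; i < n; i++) {
        rb->data[rb->head] = src[i];
        rb->head = (rb->head + 1) % rb->size;
    }
    return n;
}

static size_t rb_read(ringbuf *rb, char *dst, size_t n)
{
    size_t used = rb_used(rb);
    if (n > used) n = used;
    for (size_t i = 0; i < n; i++) {
        dst[i] = rb->data[rb->tail];
        rb->tail = (rb->tail + 1) % rb->size;
    }
    return n;
}
```

With a single reader and a single writer, the two sides only ever advance their own index, which is what makes this shape attractive between two threads.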

Exact methods for reading/writing data to disk using read()/write() or
mmap() are described elsewhere (for example, on linux-audio-dev).

Audio files (or any large files) are handled through the engine's own
virtual memory system, using the known methods from OS virtual memory
systems. [ I made this public in 1991-1994 in a few e-mails, one of
which was posted to the octave-bug mailing list of the GNU Octave
software. ]
Both random and sequential accesses are provided. The virtual memory
system includes a page pool (buffer pool). Buffers can be taken from
the pool and locked, for example so that three buffers form a sub-pool
for some particular stream, or so that one buffer is used as a ring
buffer. Basically, the most used data is kept in the buffers. The
sub-pools ensure that no large audio stream can wipe the small data out
of the vm system.
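
A sketch of that sub-pool idea: each stream may hold only a fixed quota of pages, so a large audio stream recycles its own pages instead of evicting everyone else's. All names, sizes and the quota scheme are my assumptions:

```c
enum { NPAGES = 16, PAGE_BYTES = 4096 };

typedef struct {
    char pages[NPAGES][PAGE_BYTES];
    int  owner[NPAGES];           /* -1 = free, else stream id */
} page_pool;

/* Acquire a page index for `stream`, but never hold more than
   `quota` pages: beyond that, recycle one of the stream's own
   pages rather than taking a free one. */
static int pool_acquire(page_pool *pp, int stream, int quota)
{
    int held = 0, mine = -1;
    for (int i = 0; i < NPAGES; i++)
        if (pp->owner[i] == stream) { held++; mine = i; }
    if (held >= quota)
        return mine;              /* reuse one of our own pages */
    for (int i = 0; i < NPAGES; i++)
        if (pp->owner[i] == -1) { pp->owner[i] = stream; return i; }
    return -1;                    /* pool exhausted */
}
```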

More reliable disk access is provided by requiring that all disk
accesses (by any process on the computer) go through the engine.
For example, a wave editor application would ask for waveform graph
data through the engine's disk service.

Not all disk reads/writes need the same buffer size. For example,
low-sample-rate audio or graphing data needs shorter buffers than
high-sample-rate audio; otherwise the disk load will not be uniform.
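
In other words, the buffer can be sized to cover the same amount of time for every stream; a low-rate stream then gets a proportionally shorter buffer. The 100 ms figure in the example is only an assumed value:

```c
#include <stddef.h>

/* Frames needed so that every stream's buffer covers the same
   duration, keeping the disk load uniform across sample rates. */
static size_t buffer_frames(unsigned sample_rate, unsigned millisecs)
{
    return (size_t)sample_rate * millisecs / 1000;
}
```

So at 100 ms per buffer, a 48 kHz stream gets 4800-frame buffers while an 8 kHz stream gets 800-frame ones.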

To optimize disk seek times: the disk could contain one large file,
inside which the engine builds its own file system. Because we can now
measure, or could otherwise know, where on the physical disk our data
is located, we can minimize seek times by reading the data in an order
which minimizes them. For example, if we create the file on a newly
formatted disk, then we could read data in increasing location order.
Or, if we have several audio files, we could measure the seek times to
their beginnings (say) to get their locations.
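
The ordering step itself is just a sort of the pending requests by physical location. A sketch; the structure and names are my assumptions:

```c
#include <stdlib.h>

typedef struct {
    long location;    /* e.g. byte offset inside the one large file,
                         measured or known as described above */
    /* ... stream id, buffer pointer, length ... */
} disk_request;

static int cmp_location(const void *a, const void *b)
{
    long la = ((const disk_request *)a)->location;
    long lb = ((const disk_request *)b)->location;
    return (la > lb) - (la < lb);
}

/* Serve pending requests in increasing location order, so the head
   sweeps across the disk instead of jumping back and forth. */
static void order_requests(disk_request *reqs, size_t n)
{
    qsort(reqs, n, sizeof *reqs, cmp_location);
}
```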

 -*-

The dsp process executes the audio flow system of the engine. Such
audio flow systems are well known; see for example Csound. The dsp
process could run a special-purpose program instead of a general flow
system if needed.

The (flow) program running in the dsp process could emulate a
synthesizer, be a frequency analyser, an audio effect device, a hard
disk recorder, or whatever.

The data flowing in the dsp process can be audio data, MIDI data,
or any data.

The flow network changes over time. New applications can join the
engine. New instruments can be added according to the musical score.

The dsp process itself cannot maintain these large changes in the
network. That task is handled by the engine butler, which builds the
(sub)network and then switches it on.

The flow network has sub-networks called instruments. (See Csound for
details.) A musical score could add and remove instruments hundreds
of times during the playing of a composition. To speed up the changes,
the engine butler keeps up an instrument pool (or pools). When the
engine needs a new instrument, the butler already has the structure and
memory for it, all initialized. When the engine releases the
instrument, the butler re-initializes it and puts it back into the
pool.
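
Such a pool can be sketched as a free list of pre-initialized instruments, so adding one to the network is a pointer operation rather than allocation plus setup. The names and the instrument contents are my assumptions:

```c
#include <stddef.h>

typedef struct instrument {
    struct instrument *next;   /* free-list link */
    float state[64];           /* whatever the instrument needs */
} instrument;

static instrument *free_list = NULL;

/* Butler side: re-initialize the instrument, then put it back. */
static void pool_release(instrument *ins)
{
    for (int i = 0; i < 64; i++) ins->state[i] = 0.0f;
    ins->next = free_list;
    free_list = ins;
}

/* Engine side: grab a ready-made instrument from the pool. */
static instrument *pool_get(void)
{
    instrument *ins = free_list;
    if (ins) free_list = ins->next;
    return ins;    /* NULL means the butler must refill the pool */
}
```

Because the butler does the (re-)initialization at its low priority, the dsp process only ever pays for the pointer operations.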

Flow schedulers are documented elsewhere and available in GNU software.

 -*-

Problem: if the only existing audio engine, serving all applications,
was started earlier, then how can an application insert a dsp plug-in
into the dsp process for execution?

Solution 1: the engine butler loads the plug-in with dlopen() without
affecting the performance of the dsp process; the butler then forks a
new dsp process; now, because all the dsp info is kept in shared
memory, we can switch over to the new dsp process on the fly without
affecting performance.

Solution 2: the engine butler or the application loads the plug-in
program as a byte array into shared memory, from where the dsp engine
executes it; alternatively, the butler and the application could use
dlopen() and then copy the program into shared memory.

Any other dynamic library method could be used as well.

The dsp process could be built by loading plug-ins and hardwiring them
together, handling them as arrays of data. (This kind of hardwiring was
done in Lucasfilm's audio processor in the early '80s.)

Plug-ins could also be made by hardwiring language elements together
(with some visual language, if not with a script language) directly
into the arrays of code.

 ==end==



This archive was generated by hypermail 2b28 : Fri Mar 10 2000 - 07:23:27 EST