Re: [linux-audio-dev] back to the API


Subject: Re: [linux-audio-dev] back to the API
From: David Olofson (audiality_AT_swipnet.se)
Date: Sun Oct 17 1999 - 13:23:34 EDT


On Sun, 17 Oct 1999, Paul Barton-Davis wrote:
> [ ... missing reply ... ]
>
> >What's the date/time of that post, in case I'm not thinking about
> >the right one?
>
> not sure precisely, since i don't keep records/copies generally, but
> it was my reply to your reply to my post of Tues Oct 12 1999, 12:18:13
> EDT (subject "big picture, for a moment").

Weird... I found the 12:18:13 post, my reply, but no further posts
in that thread. Lost or accidentally deleted, I guess... Could
someone forward those posts to me?

[...]
> what would it mean to say that the "audio" has changed ? thats hardly
> a difference that makes a difference! audio is really a data stream,
> not a series of events.

That depends a bit on whether you want to view audio as abstract
streams, or as what it actually is: buffers of samples. True, it
doesn't really make sense to say "Hey! A new buffer. :-)" every
freaking cycle, as it's *required* that there is one anyway.

However, it changes a bit when dealing with some compressed data
formats and other situations when variable data rates are needed.
Also, allowing plug-ins to handle muted inputs by themselves (as
opposed to leaving it to the engine to entirely disable the plug-in
when there is no input, and the specified tail has died out) requires
that the engine can say which buffers are empty/muted.

The information that a plug-in gets:
        (implicit information inside [],
        all seen from the plug-in's perspective,
        I: = input,
        O: = output)

-----------------------------------------------------
I: [cycle starts now!]          //process() call
I: [local time starts at 0]     //Event timing rel. buffer start
I: N samples to process         //In the closure...
                                //(Set at init time - could
                                //be changed with an event.)
I: [the data is in the
    current buffers]
        .
        .                       //...Normal events...
        .
O: [N samples ready]            //Must be done before return
O: [cycle ends now!]            //process() return
-----------------------------------------------------
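To make the diagram above concrete, here is a minimal sketch in C of
what a process() cycle could look like from the plug-in's side. All
names (plugin_t, the field layout, the gain operation) are
hypothetical illustrations, not an actual API:

```c
#include <stddef.h>

typedef struct plugin {
	size_t frames;     /* N, set at init time (could change via event) */
	const float *in;   /* current input buffer */
	float *out;        /* current output buffer */
} plugin_t;

/* The cycle starts when the host calls this; local time is 0 at
 * entry, and all N output samples must be ready before it returns. */
static void process(plugin_t *p)
{
	size_t i;
	for (i = 0; i < p->frames; ++i)
		p->out[i] = p->in[i] * 0.5f;   /* trivial gain, as a placeholder */
}
```

Everything in brackets in the diagram is implicit here: the call itself
marks the cycle start, the return marks the end, and the buffers are
simply assumed valid for the duration of the call.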

IMO, a logical distinction between two kinds of events can be made.
The kind of events that clearly belong in the event system are those
that are directly related to the processing. The other kind is events
that actually occur in real life, directly and unconditionally
affecting the plug-in, i.e. process() being called, or the audio
streams being prepared so that the plug-in can access the data.
Obviously, there's no need to send an event telling the plug-in that
it has been called! :-) But after that, the clear distinction starts
to fade... Do we tell the plug-in that the buffer size has changed?
Or even that there really *is* a new buffer (that is, the audio port
isn't muted)...?

> you've already demarcated them linguistically,
> and suggested that a plugin would get audio via at least one audio
> port "which is not the same as its event port" (my wording, your
> intent, i hope). this strongly suggests to me that its a mistake to
> consider these to be the same thing. look at two of the most salient
> differences:
>
> 1) data size: audio (N samples per call to process (); likely
> to be N >= 32 ... 128 bytes for RT; non-RT

How does a sample player interface with a streaming/caching "daemon"?
(A special case that shouldn't be supported?)

> processing might go way higher ... 4KB ?)

(IIRC, VST had a 32 kB default buffer size before they realized you
don't want to wait for ages to hear the effect of an automation
edit...)

> event (with my Bateson inspired idea, about 24-32 bytes)
>
> 2) data rate: audio (MB-GB/sec, continuous)
> event (often zero, typically bytes/sec, very intermittent,
> typically bursty)
>
> i am therefore not convinced that these two very different entities
> should be considered the same thing at all.

What I consider most important here is how good one kind of port is
at doing the other one's job. Clearly, streams of fixed size raw data
buffers are rather useless for events, and the event system can't
carry high bandwidth data in-band.

But how different is sending an event from passing an extra argument
to process()?

The "huge block events" would not change the current event system
design - only define events that can reference "out-of-band" data,
pretty much like what you describe below. The most important part of
that extension is defining the semantics, so that it can be handled
correctly by the engine.

The audio streams could be handled using an extension based on, or
similar to, the "huge block events". Whether that's logical, efficient,
flexible enough and overall a good idea remains to be seen. There might
be better ways, and/or ways that do the same thing with a better
balance between efficiency in the average case and special settings.
I'd like to have high efficiency in all situations, of course! :-)

> what about jaroslav's example of an instrument request ? well, the
> wording is interesting. "request" ... this is not a notice that
> anything *has* changed, its a notice that someone would like something
> *to* change. if you had to cast it into an event, it would look more
> like "XX has placed a request with YY". that is: the state of some
> "request" parameter has just been changed. there should be, as you get
> close to concluding, some other way to move whatever data is
> associated with the "request" parameter from the one place to another.

From Jaroslav's post:
> A little dream:
>
> Event - picture 768x576 from grabbed from the BTTV driver 8-bits per
> color
> Event - Piano instrument 96kHz, 24-bit resolution

The request/reply thing was my solution for actions that cannot be
expected to be carried out in real time, like sending a full 96 kHz
piano over to a synth. This data needs to be streamed out-of-band, at
the highest possible speed. That kind of operation doesn't fit well
with the plug-in concept anyway - only clients should do it,
probably. (But a sampler *plug-in*...? Hmmm...)

The picture, OTOH, fits into the streaming system. For this to make
sense in a real time system, there would be a stream of pictures. One
picture corresponds to one audio sample; both could be called a
frame. (Why? A picture is an array of pixels, covering one frame
period. A sample is a DC level, held for one sample period.)

As for events and out-of-band, non real time transfer;
1) Set up your data buffer.
2) Send the event "Yo! There's a buffer for ya' at &buffer".
3) Stay off the buffer...
4) ...until you get the event "Ok, got it. (&buffer)"
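The four steps above can be sketched as two tiny event helpers. The
event type names, the struct layout and the function names are my own
hypothetical inventions; the only real content is the ownership rule:
the sender must not touch the buffer between steps 2 and 4.

```c
#include <stddef.h>

enum { EV_BUFFER_READY, EV_BUFFER_DONE };

typedef struct event {
	int type;
	void *data;    /* out-of-band payload: just a pointer */
	size_t size;
} event_t;

/* Steps 1-2: set up the buffer, then announce it to the receiver. */
static event_t announce(void *buffer, size_t size)
{
	event_t ev = { EV_BUFFER_READY, buffer, size };
	return ev;
}

/* Step 4: only when this reply arrives may the sender reuse or
 * free the memory (step 3 is simply "don't touch it until then"). */
static int transfer_complete(const event_t *reply, const void *buffer)
{
	return reply->type == EV_BUFFER_DONE && reply->data == buffer;
}
```

The handshake makes the transfer safe without copying: ownership of the
memory travels with the events rather than with the data itself.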

Even though the event system is buffered and timestamped, I don't see
it as very different from function calls, RPC or other ways of
getting things done. Real time vs. non real time is probably what
makes the most obvious difference here. The timestamping + buffering
has the side effect that true real time events ("now!" as opposed to
"exactly at t = 0 + x"), and thus out-of-band "ASAP events", don't
fit in very well.

> this hints at one problem with my Bateson inspiration: strings and
> other things where "state" is not a scalar. i am trying to think about
> this. i really don't want a silly limit on string/array size on the one
> hand, and on the other, a ridiculous consumption of unnecessary memory
> if an implementation uses pool-based memory allocation (which mine
> will/does).

This is a problem that probably doesn't have any perfect solution -
perhaps not even a nice one. (I'm not giving up for a while yet,
though!)

The reason why I decided on the qm allocator with heaps for the
events is that it allows quick allocation of lots of small blocks
without fragmentation. It also replaces deallocation with garbage
collection in a very low cost way. (When a qm_heap_t replaces a
buffer, the reference to the old one goes where the data will be
used, and is flushed from there.)
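My guess at the core of that scheme, based on the description above
(the struct and function names are mine, not the actual qm API):
allocation is a pointer bump, and "deallocation" is a wholesale reset
of the heap once nothing references it any more.

```c
#include <stddef.h>

typedef struct qm_heap {
	char *base;    /* backing memory */
	size_t size;   /* total capacity */
	size_t used;   /* bump pointer */
} qm_heap_t;

/* O(1) allocation, no fragmentation: just advance the bump pointer. */
static void *qm_alloc(qm_heap_t *h, size_t n)
{
	void *p;
	if (h->used + n > h->size)
		return NULL;   /* heap exhausted for this cycle */
	p = h->base + h->used;
	h->used += n;
	return p;
}

/* Garbage collection: once the last consumer has flushed its
 * reference to the old heap, the whole thing is recycled at once. */
static void qm_reset(qm_heap_t *h)
{
	h->used = 0;
}
```

There is no per-block free at all, which is exactly what makes it
cheap and fragmentation-free - at the cost of only being able to
reclaim everything at once.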

As soon as the global heap of buffers is turned into a freelist, we
get search and split overhead, risk of complicated memory leak bugs,
deallocation overhead, and most importantly; _memory fragmentation_.
This is a *very* serious problem, and was the reason why you had to
reboot old Amigas and Macs on a regular basis when using applications
that were nasty to the memory manager. Unless we can take even more
overhead and remap memory pages (which requires a quite non-trivial
kernel hack, as we still need shared memory...), there's no way
around it.

There is an intermediate half-solution. (One implementation can be
found in the kernel.) Set up a number of global buffer heaps, where
the buffers have different sizes. Power-of-2 sizes (like 32, 64,
128,...) seem to be a good distribution for the average case. During
initialization, the heaps are filled with sufficient numbers of
buffers. When allocating, a buffer is grabbed from the first
non-empty heap that has buffers of sufficient size. To make the
system more adaptive, increase the number of big buffers, and add
migration/transformation between the heaps.

This costs a lot more than the qm way, but can be implemented more
efficiently than a classic "search & split" memory manager. It
eliminates fragmentation, but it restricts allocation size to the
largest block size, and relies on real life not being too different
from the statistics used to balance the heaps...
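The half-solution can be sketched like this. Everything here is an
assumption of mine (class count, the intrusive freelist links, the
init policy); a real implementation would also remember which class a
block was actually taken from instead of trusting the caller's size:

```c
#include <stddef.h>

#define NCLASSES 4   /* power-of-2 sizes: 32, 64, 128, 256 */

typedef struct block { struct block *next; } block_t;

static block_t *freelist[NCLASSES];

static size_t class_size(int c) { return (size_t)32 << c; }

/* Grab from the first non-empty class with big enough buffers. */
static void *heap_alloc(size_t n)
{
	int c;
	for (c = 0; c < NCLASSES; ++c) {
		if (class_size(c) >= n && freelist[c]) {
			block_t *b = freelist[c];
			freelist[c] = b->next;
			return b;
		}
	}
	return NULL;   /* larger than the biggest class */
}

/* Return a block to the smallest class it can serve. (Safe but
 * wasteful if it was actually taken from a bigger class.) */
static void heap_free(void *p, size_t n)
{
	int c;
	for (c = 0; c < NCLASSES; ++c) {
		if (class_size(c) >= n) {
			block_t *b = p;
			b->next = freelist[c];
			freelist[c] = b;
			return;
		}
	}
}

/* Fill each class with a couple of blocks from a static arena. */
static union { block_t b; char pad[256]; } arena[2 * NCLASSES];

static void heap_init(void)
{
	int c, i = 0;
	for (c = 0; c < NCLASSES; ++c) {
		heap_free(&arena[i++], class_size(c));
		heap_free(&arena[i++], class_size(c));
	}
}
```

Allocation and deallocation are both O(NCLASSES) with no searching or
splitting inside a class, which is where the win over a classic
"search & split" manager comes from.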

> one possibility is that things needing to work with non-scalar data
> request a piece of global memory (regular memory for threads, shmem
> for processes), and then work with that memory as the storage for the
> non-scalar. the event would just contain a void * then, pointing to
> the global memory.

That's about what the underlying implementation of "huge blocks"
would be.

> this has its problems, not least of which is
> atomicity of the contents of the memory. if the plugin has been told
> that, say, a string value has changed, but then the string changes
> again before/while its looking at it, this is, uhm, not good :)

If you tell someone that you put a ladder in the right place, you
don't move it away just when he's about to step on it, do you? :-)

A standard problem in multitasking/multithreaded environments, which
can't be solved transparently with any magic other than copying...
(How do you copy a ladder...!?)

//David

 ·A·U·D·I·A·L·I·T·Y·   P r o f e s s i o n a l   L i n u x   A u d i o
- - ------------------------------------------------------------- - -
    ·Rock Solid    ·Low Latency    ·Plug-Ins    ·Open Source
    David Olofson  ·Audio Hacker   ·Linux Advocate  ·Singer/Composer
    www.angelfire.com/or/audiality      audiality_AT_swipnet.se



This archive was generated by hypermail 2b28 : Fri Mar 10 2000 - 07:27:59 EST