Re: [linux-audio-dev] back to the API


Subject: Re: [linux-audio-dev] back to the API
From: Paul Barton-Davis (pbd_AT_Op.Net)
Date: Sun Oct 17 1999 - 22:53:30 EDT


>> 1) data size: audio (N samples per call to process (); likely
>> to be N >= 32 ... 128 bytes for RT; non-RT
>
>How does a sampleplayer interface with a streaming/caching "daemon"?
>(A special case that shouldn't be supported?)

can you elaborate on the problem? it did occur to me, BTW, that in
your scenario of the engine outsourcing to a client the job of sending
output to an actual audio output device, there is *no* way to reliably
determine the fixed latency that is needed to compute sample-accurate
timestamps. you have no idea what the client is doing with the data. i
suppose that our API could require the client to report that latency
as a number and post events if it ever changes.
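
as an illustration only (none of these names are part of any agreed-on
API), that could be as simple as the client declaring a latency figure
and posting an event whenever it changes:

        /* illustrative only: the client reports a fixed output latency in
           frames, and posts a LATENCY_CHANGED event whenever the figure
           changes, so the engine can keep its timestamps sample-accurate */
        typedef struct {
                int          type;            /* e.g. EVENT_LATENCY_CHANGED */
                unsigned int latency_frames;  /* client -> audio output     */
        } latency_event_t;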

>> processing might go way higher ... 4KB ?)
>
>(IIRC, VST had a 32 kB default buffer size before they realized you
>don't want to wait for ages to hear the effect of an automation
>edit...)

that's reasonable, but if you want to use a really great reverb (say,
convolution based) fed through a fancy plugin and you know it will
take hours to generate, large buffers *might* be a good idea.

>But how different is sending an event from passing an extra argument
>to process()?

in one of your descriptions, you talked about a plugin having "one
input audio port and one output audio port". i think that having just
one of each is a bad idea. but anyway, i don't think that any of this
stuff should be passed as an argument to process(). instead, we
want something like this:

        struct plugin {
                int (*process) (struct plugin *);  /* engine calls this */
                .
                .
                .
                audio_port_t *audio_input;
                audio_port_t *audio_output;
                event_list_t  event_list;
        };

the engine can then manipulate

    plugin->audio_input[0]

etc. before calling process(), passing it a pointer to "itself". think
"closures" ...

>As for events and out-of-band, non real time transfer;
>1) Set up your data buffer.
>2) Send the event "Yo! There's a buffer for ya' at &buffer".
>3) Stay off the buffer...
>4) ...until you get the event "Ok, got it. (&buffer)"

sounds like what i had in mind, with the addition of the explicit "OK,
got it (&buffer)". i think this is the way to handle things like this.

>The reason why I decided on the qm allocator with heaps for the
>events is that it allows quick allocation of lots of small blocks
>without fragmentation. It also replaces deallocation with garbage
>collection in a very low cost way. (When a qm_heap_t replaces a
>buffer, the reference to the old one goes where the data will be
>used, and is flushed from there.)

at the moment, i am preferring an implementation that uses fixed event
sizes, since events are just "differences that make a difference". you
don't need a heap or a real allocator at all - you use a pool system
identical to the incredibly efficient one used in the kernel. it took
me a while to figure out how it worked, and it probably dates back to
the 60's, but it's really cool, and really fast. it looks like a
freelist, but it isn't, really.

assumption: every object to be allocated has a pointer to another
object of the same kind that is usable when the object is
"deallocated".

Pool setup:

        struct object_pool {
                object_t *objs;
                object_t *next_free;
        } object_pool;               /* a single engine-owned instance */

        1) allocate a pool of the objects:

           object_pool.objs = (object_t *) malloc (sizeof (object_t) * POOL_SIZE);

        2) connect each "next" pointer to make a free list:

           for (i = 0; i < POOL_SIZE - 1; i++) {
               object_pool.objs[i].next = &object_pool.objs[i+1];
           }
           object_pool.objs[POOL_SIZE - 1].next = NULL;

        3) mark the top of the freelist:

           object_pool.next_free = &object_pool.objs[0];

object allocation:

        object_t *
        alloc_object ()
        {
                object_t *obj;

                obj = object_pool.next_free;
                object_pool.next_free = obj->next;
                /* for safety, do: obj->next = 0; */
                return obj;
        }

object deallocation:

        void
        dealloc_object (object_t *obj)
        {
                /* push the object back onto the freelist */
                obj->next = object_pool.next_free;
                object_pool.next_free = obj;
        }
        
this is not all correct, and it does not handle OOM conditions. however,
you'll find this basic structure all over the kernel. it's beautiful,
really beautiful.
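
a quick usage sketch (illustrative only, and pool exhaustion still
isn't handled, as noted above):

        object_t *ev;

        ev = alloc_object ();
        /* ... fill in the event, queue it on some plugin's event_list ... */
        dealloc_object (ev);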

>As soon as the global heap of buffers is turned into a freelist, we
>get search and split overhead, risk of complicated memory leak bugs,
>deallocation overhead, and most importantly; _memory fragmentation_.

nope. not if the objects are all the same size.

i am now waiting for your obvious explanation of why events cannot all
be the same size, and then why all buffers cannot be the same size,
since we have already talked about a scheme in which the engine
decides the optimum buffer size based on plugin requirements.

furthermore, since changing buffer size is a pretty drastic thing to
do to the system (many small details to handle if this ever happens),
it seems not unreasonable to just set up a new buffer pool, and maybe
discard the old one, though this seems harder.

so i see us having 2 pools of (differently) fixed-size objects,
events and buffers, using this kernel-style allocation mechanism.

and of course, note that since only the engine allocates and
deallocates from both pools, no locks are necessary.
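
in code (illustrative only; a real version would make alloc_object()
and dealloc_object() take the pool as an argument rather than using a
single global):

        static struct object_pool event_pool;    /* fixed-size events  */
        static struct object_pool buffer_pool;   /* fixed-size buffers */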

>> this has its problems, not least of which is
>> atomicity of the contents of the memory. if the plugin has been told
>> that, say, a string value has changed, but then the string changes
>again before/while it's looking at it, this is, uhm, not good :)
>
>If you tell someone that you put a ladder in the right place, you
>don't move it away just when he's about to step on it, do you? :-)

sure. the solution is actually pretty obvious:

      void *get_string_pointer (const char *str)
      {
                ... lookup existing strings, perhaps via hash ...
                if found, return pointer to string
                if not found, alloc "global" memory, store string there,
                   and return pointer.
      }

then, when you want to say "input file just changed to XXX", you
actually say "input file just changed (value at <address>)", and now
we know that the address will always hold that value. if necessary, we
can do reference counting too, to ensure that we can periodically
sweep away stale values.
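
a minimal sketch of what that interning table might look like, assuming
a fixed-size table with linear search instead of a hash, and ignoring
locking entirely (all names here are illustrative, not a proposal):

        #include <stdlib.h>
        #include <string.h>

        #define MAX_INTERNED 256

        struct interned {
                char *str;
                int   refs;
        };

        static struct interned intern_table[MAX_INTERNED];

        const char *
        get_string_pointer (const char *str)
        {
                int i;

                /* return the existing copy if we already have this string */
                for (i = 0; i < MAX_INTERNED; i++) {
                        if (intern_table[i].str != NULL &&
                            strcmp (intern_table[i].str, str) == 0) {
                                intern_table[i].refs++;
                                return intern_table[i].str;
                        }
                }

                /* otherwise store a new "global" copy and return it */
                for (i = 0; i < MAX_INTERNED; i++) {
                        if (intern_table[i].str == NULL) {
                                intern_table[i].str = strdup (str);
                                intern_table[i].refs = 1;
                                return intern_table[i].str;
                        }
                }

                return NULL; /* table full; a real version would grow it */
        }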

--p

