Re: [linux-audio-dev] API design again [MORE code]


Subject: Re: [linux-audio-dev] API design again [MORE code]
From: David Olofson (audiality_AT_swipnet.se)
Date: Sun Oct 10 1999 - 11:18:49 EDT


On Sun, 10 Oct 1999, Paul Barton-Davis wrote:
> >That's effectively what happens when a client communicates with a
> >server; it needs to sleep at some point, or it will hang the
> >system... :-)
>
> actually, it doesn't have to sleep. this is a problem i've dealt with
> many times in the past. you basically divide the engine state into two
> parts, one that is completely private to the engine thread, and one
> that can be manipulated (with locks if necessary) by any other
> thread. the engine thread periodically reassesses its public state. no
> sleeping, unless two non-engine threads both want to mess with the
> same part of the engine's public state at the same time. if the state
> elements are simple and non-correlated, you can even use atomic_t and
> avoid locks.

So where do you get the CPU time for other tasks, if the engine never
sleeps...? Or am I missing what you're actually saying here?
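Paul's split between engine-private and publicly writable state could look roughly like this minimal C11 sketch (all names here are illustrative, not from any real API under discussion); simple, non-correlated values can indeed use atomics and skip locks entirely:

```c
/* Hypothetical sketch: engine state split into a public part that any
 * thread may write atomically, and a private part only the engine
 * thread touches. The engine "reassesses" the public state once per
 * cycle instead of sleeping on a lock. */
#include <stdatomic.h>
#include <assert.h>

typedef struct {
    atomic_int gain_milli;  /* public: gain in 1/1000 units, any writer */
    int        current_gain; /* private: engine thread only */
} engine_state_t;

/* Called from any control thread, at any time. */
static void set_gain(engine_state_t *s, int millis)
{
    atomic_store(&s->gain_milli, millis);
}

/* Called by the engine thread once per cycle. */
static void engine_reassess(engine_state_t *s)
{
    s->current_gain = atomic_load(&s->gain_milli);
}
```

The point of the periodic reassess is that the engine only ever sees a consistent snapshot taken at a cycle boundary, so a half-finished update from a control thread can never be observed mid-cycle.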

> >> But wait - how is the engine thread going to safely copy or otherwise
> >> use this data when the MIDI input thread (or any other h/w-driven
> >> thread) could modify the port's event list at any time ?
> >
> >You don't do that. Once an event is "posted", it is considered
> >property of the destination context.
>
> there is no destination context. one thread *cannot* write to another
> thread's event_port_t. all that a sensor thread can do is to post a new
> event to its port, and hope that the right people find it there. in
> this case, the right people is the engine thread.

("Context" is the word I chose to use for sub nets, no matter if
they're running on the same thread or not. This is because it's
significant when dealing with cycle times, event buffer life times
and related things. It may not be appropriate for client/server
relations, although the same rules still apply...)

Anyway... FIFO. Any other behavior would be erroneous and fatal in
this case. If your events are late for the cycle, you're missing a
deadline, which is a Very Bad Thing(tm) in a real time system.

> > And the event data is going
> >nowhere until the buffer is flushed. You allocate memory, fill in the
> >event_t, and add it to the list.
>
> Nope. I didn't say modify the events within the list, I said modify
> the list. When the engine takes a look at the list, it needs to know
> where it starts and where it finishes. we have to be very careful to
> ensure that this can be done without locks.

It's rather safe if your new list is entirely on your side of a
FIFO... :-)

> >Actually, there's one more level; the transfer of the finished list
> >of events (for the cycle) to the destination port.
>
> this can't happen without locking. we don't want to do this.

A FIFO doesn't need locking in the single reader - single writer
case, and it also allows queueing of multiple event buffers. That
means we get cycle period matching almost for free. The engine only
needs to grab the buffers one by one and do the compulsory time stamp
conversion.
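For reference, a single-reader/single-writer FIFO of event buffer pointers can be done lock-free with nothing but two indices, each advanced by exactly one thread. This is just a minimal sketch under those assumptions (the names and the fixed power-of-two size are mine, not from the API being designed):

```c
/* Hypothetical lock-free SPSC ring of event buffer pointers.
 * Safe with exactly one reader (the engine) and one writer (the
 * sensor thread): each index has a single writer, so no locks. */
#include <stdatomic.h>
#include <stddef.h>
#include <assert.h>

#define FIFO_SIZE 16            /* must be a power of two */

typedef struct {
    void *slot[FIFO_SIZE];
    atomic_uint read;           /* advanced only by the reader */
    atomic_uint write;          /* advanced only by the writer */
} event_fifo_t;

/* Writer side: returns 0 if full (caller must NOT block/sleep). */
static int fifo_push(event_fifo_t *f, void *buf)
{
    unsigned w = atomic_load_explicit(&f->write, memory_order_relaxed);
    unsigned r = atomic_load_explicit(&f->read, memory_order_acquire);
    if (w - r >= FIFO_SIZE)
        return 0;               /* full */
    f->slot[w & (FIFO_SIZE - 1)] = buf;
    atomic_store_explicit(&f->write, w + 1, memory_order_release);
    return 1;
}

/* Reader side: returns NULL if empty. */
static void *fifo_pop(event_fifo_t *f)
{
    unsigned r = atomic_load_explicit(&f->read, memory_order_relaxed);
    unsigned w = atomic_load_explicit(&f->write, memory_order_acquire);
    if (r == w)
        return NULL;            /* empty */
    void *buf = f->slot[r & (FIFO_SIZE - 1)];
    atomic_store_explicit(&f->read, r + 1, memory_order_release);
    return buf;
}
```

Because whole buffers (not single events) go through the ring, the engine draining it once per cycle gives the cycle-period matching mentioned above essentially for free.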

> You write of a "finish cycle operation"
>
> >...or you can add all events to the list of the shadow event port, and
> >then just finish_cycle(my_event_port) to send the list off to the
> >engine. The engine will never see the shadow port at all - only the
> >code that gathers new events from the queues is aware of the fact
> >that the events actually come from a different thread.
>
> sorry, this won't work. once again, a sensor thread is not going to
> wait for a command to "finish the cycle". its working asynchronously
> with respect to the engine.

So, your "cycle time" will have to be based on the only wake-up
source you have: the device you're sensing. However, that does not
mean that you have to pass one event at a time. Just send 'em all off
before you go back to sleep.
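In other words, the sensor thread's loop batches: drain the device, post everything it got, then flush once. A toy sketch (event_port_t, post_event() and finish_cycle() are stand-in names here, mocked so the shape is self-contained):

```c
/* Toy stand-ins for the API under discussion (all names hypothetical). */
#include <stddef.h>
#include <assert.h>

typedef struct event { int timestamp; struct event *next; } event_t;
typedef struct { event_t *head, *tail; int cycles_finished; } event_port_t;

/* Append one event to the port's (shadow) list. */
static void post_event(event_port_t *p, event_t *e)
{
    e->next = NULL;
    if (p->tail) p->tail->next = e; else p->head = e;
    p->tail = e;
}

/* One call per wake-up, NOT one per event. */
static void finish_cycle(event_port_t *p)
{
    p->cycles_finished++;   /* real code would hand the list to a FIFO */
    p->head = p->tail = NULL;
}

/* Sensor thread body on wake-up: post all pending events, flush once,
 * then go back to sleep on the device. */
static void sensor_wakeup(event_port_t *p, event_t *pending, int n)
{
    for (int i = 0; i < n; i++)
        post_event(p, &pending[i]);
    finish_cycle(p);        /* send 'em all off, then back to sleep */
}
```

The single flush per wake-up is what keeps the per-event cost down without requiring the sensor thread to know anything about the engine's cycle.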

> then you provide two hints about non-shared-memory implementations:
>
> >What I proposed earlier was just to wake up the MIDI thread an extra
> >time to perform this operation once per engine cycle, rather than
> >doing finish_cycle() for every event, or every timestamp. With shared
> >memory and lock free queues it doesn't make much difference, but if
> >would in other settings...
>
> >> struct event_port_t {
> >> qm_heap_t *heap; /* how to allocate */
> >> event_port_t *engine_listen_next; /* link to engine listen list */
> >> event_port_t *subscriber_head; /* subscriber list */
> >> event_port_t *subscriber_tail; /* subscriber list */
> >> event_t *event_head;
> >> event_t *event_tail;
> >> }
> >
> >Hmmm... Maybe at least some of these should be hidden in the private
> >parts of the implementation. (They will probably look different for
> >non-shared memory connections and other weird things...)
>
> 1) in general, we should not be supporting non-shared memory
> directly. for example:

Agreed. However, we don't have to make it inherently difficult to
implement...

> 2) sensor threads make no sense for other settings. if you want to
> collect information across a network, for example, thats OK, but you
> still have a sensor thread that is responsible for collecting it in
> the same way as if it were MIDI data or whatever.

Ok, just set up a communication thread with a sensible cycle rate, in
case the event rate is much higher than one event per fixed latency
period. ("Fixed latency" being the jitter elimination delay that
makes time stamps useful.) What I mean is that it's insane to send
events 20 times during a single fixed latency period.

> we can use this principle for any kind of connection that can't use
> shared memory.
>
> i don't care what set of sensor threads you might want to run, but I
> don't want the core engine messed up by non-shared memory junk which
> will slow it down and make it much more complex.

No, that's not the idea. I only want to clean up the API, and make it
possible to change the implementation without breaking the API.

API != implementation.

> >It fits my idea of a client... The most significant difference from a
> >plug-in is that it has its own thread, and runs whenever it wants.
>
> aha! you admit it: "runs whenever it wants". this is why the
> "finish_cycle" operation idea won't work.

finish_cycle() *can* be called whenever you want. (Plug-ins don't
need it, as returning from process() has the same meaning.) It's just
that *recommending* that it's not called more frequently than
necessary will make optimization of the implementation a bit easier.

> "client" is a word that suggests someone who gets something from a
> "server". the sensor threads don't do that - in fact, they get almost
> nothing from the server, and spend their life entirely in its
> service. so "slave" might be a better term.

That's true... (However, I think "slave" suggests that the sensor
thread is controlled by the system somehow - which is not the case.)

The client/server/plug-in terminology doesn't fit too well. "Plug-in"
is still ok, and is different from what we have called clients and
servers so far. But clients and servers are too similar to make a
distinction... They both connect through the same API, and the only
difference is the balance between send and receive. They're just
nodes in a network... They may be plug-in hosts (engines), they can
publish some services, or use some services provided by other nodes.
Drivers are no different...

And we don't even have a name for the *API* yet! *hehe*

//David

 ·A·U·D·I·A·L·I·T·Y·   P r o f e s s i o n a l   L i n u x   A u d i o
- - ------------------------------------------------------------- - -
    David Olofson: www.angelfire.com/or/audiality (audiality_AT_swipnet.se)
    ·Rock Solid ·Low Latency ·Plug-Ins ·Open Source
    ·Audio Hacker ·Linux Advocate ·Singer/Composer



This archive was generated by hypermail 2b28 : Fri Mar 10 2000 - 07:27:13 EST