Re: [LAD] a *simple* ring buffer, comments pls?

From: Fons Adriaensen <fons@email-addr-hidden>
Date: Sat Jul 09 2011 - 17:38:15 EEST

On Sat, Jul 09, 2011 at 04:25:22PM +0300, Dan Muresan wrote:

> >> The apps already need to do some type of synchronization internally.
> >> For example a player's disk thread, when its ringbuffer is full, needs
> >> to wait for the process thread to consume some data and thus free up
> >
> > Depends. If both ends are periodic processes no other synchronisation
> > is required. And e.g. Jack callback is such a process, and likely to
> > be one end.
>
> How about the other "end" (i.e. the "disk thread"?) Would that
> normally be periodic?

It could be, and that would be perfectly ok in some cases.
But I'm not arguing against synchronisation, and my own implementation
of this ringbuffer optionally provides it in either direction (buffer
becoming non-empty / non-full).
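
For the record, that option could look more or less like this. Just
a sketch with invented names, not my actual code: the buffer itself
stays lock-free, and the wait is an optional extra in the write ->
read direction (the other direction is symmetric).

#include <semaphore.h>
#include <stdint.h>

typedef struct
{
    uint32_t           size;   /* power of two                       */
    volatile uint32_t  wind;   /* write index, owned by writer       */
    volatile uint32_t  rind;   /* read index, owned by reader        */
    float             *data;
    int                sync;   /* 0 = pure lock-free operation       */
    sem_t              sem;    /* posted when data becomes available */
} ringbuf_t;

/* Reader blocks here until the writer has made data available.
 * With sync disabled the buffer works exactly the same, you
 * simply never call this.
 */
void rb_wait_read (ringbuf_t *rb)
{
    while (rb->wind == rb->rind) sem_wait (&rb->sem);
}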
 
> OK, even if your disk thread is periodic for some reason, how does
> that argue for library-level synchronization, *instead of* app-level
> synchronization? In this case the cost would be the same -- no loss.

I don't see the point.
 
> > You may be right about the (HW as opposed to compiler) re-ordering of
> > data w.r.t. pointers on some architectures. But AFAIK, at least on Intel
> > and AMD writes are not re-ordered w.r.t. other writes from the same CPU,
>
> "From the same CPU"? Are we regressing to non-SMP-only schemes? And

No, I'm talking about SMP systems. Writing the data and updating
the write pointer is done by the same thread, and hence by the same
CPU, so these actions won't be re-ordered.
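
In terms of the sketch above, the write side would be something
like this:

/* Data is stored first, then the index is updated, both by the
 * same thread.  On Intel/AMD stores become visible in program
 * order, so a reader that sees the new 'wind' also sees the data.
 * The index is volatile so the compiler won't cache or delay the
 * store; a compiler barrier before it would make this stricter.
 */
void rb_write (ringbuf_t *rb, const float *src, uint32_t n)
{
    uint32_t i, w = rb->wind;

    for (i = 0; i < n; i++)
        rb->data [(w + i) & (rb->size - 1)] = src [i];   /* data first    */
    rb->wind = w + n;                                    /* then publish  */
    if (rb->sync) sem_post (&rb->sem);                   /* optional wake */
}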

> "Intel and AMD" only?

There is no legal obligation for code to be portable. Nor is there
any moral obligation. If I choose to support only Intel and AMD PCs
and not embedded systems or mobile devices (and for the kind of SW
I write, that does make sense), then that is my choice, period.

I usually get sick when computer scientists or language buffs start
waving their fingers about programming style etc. There is room for
some pragmatism in everything.

> > Regarding the volatile declarations, at least on my version (which
> > is slightly different from Jack's) there is no performance penalty.
>
> Under which access patterns, with what compiler / optimization flags
> etc? I would not make such generalizations... Volatile frustrates the
> optimizer's ability to choose the optimal access patterns.

In the example I provided, the essential point is that there
is *one* *correct* access pattern, which is to read the value once
for each call to f(), to ensure that the same value is used
everywhere in that function. Declaring the value volatile
and taking a local copy does exactly the right thing.
The alternative would be to protect it with a mutex for as long
as f() runs, for no good reason, as I don't mind it being
overwritten while f() runs. Would that be more 'optimal'?
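
The read side of the same sketch shows what I mean by the single
read and the local copy:

/* The volatile write index is read once, into 'w', and only the
 * local copy is used after that.  The writer may advance rb->wind
 * while this runs; that is harmless, we simply pick up those
 * frames on the next call.
 */
uint32_t rb_read (ringbuf_t *rb, float *out, uint32_t nmax)
{
    uint32_t i, n, w = rb->wind;     /* one read of the volatile   */
    uint32_t r = rb->rind;

    n = w - r;                       /* frames available           */
    if (n > nmax) n = nmax;
    for (i = 0; i < n; i++)
        out [i] = rb->data [(r + i) & (rb->size - 1)];
    rb->rind = r + n;                /* publish what was consumed  */
    return n;
}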

Ciao,

-- 
FA