Re: [LAD] [LAU] cancelling I/O with libsndfile

From: Fons Adriaensen <fons@email-addr-hidden>
Date: Tue Jun 14 2011 - 15:12:25 EEST

On Tue, Jun 14, 2011 at 02:42:18PM +0300, Dan Muresan wrote:

> For sure, two user-space caches add a useless extra layer of copying.

Not only copying (which is cheap anyway), but also logic.
Your application layer cache knows what you want to achieve,
will have some ad-hoc strategies and try to do the right thing.
The intermediate one doesn't have that and may well work against
you. The same problem exists with the system level buffering,
but you can't do much about that.
 
> > One way to organise the buffer is to divide it in fragments
> > of say 1/4 second, and have the start offset of each one
>
> Yes, that's exactly how my implementation works. My ringbuffer is
> divided in fragments.
>
> > quantised to the same value. The syscall overhead becomes
> > trivial for such a length. When you need new fragments, you
>
> Yes, that's why I argued for long requests over short ones.
>
> > the actual read() call). The one you can't cancel is no big
> > deal, just let it happen. It will take little time compared
>
> And this is why I gave the NFS example earlier in the discussion...
>
> What if the data comes from the network, e.g. a streaming server
> that has just 110% the bandwidth of the actual real-time playback? As
> I said, in this case you almost double the latency if you can't cancel
> the request.

I don't know what your app is doing, so I'll assume it's some sort
of player. Now if you relocate, you send the commands to read the
data at the new position to your reader thread. Assume your buffer
is 2 seconds, so that's 8 commands, each reading 1/4 second. You
can't safely start playback again unless you have at least a second
or so buffered. Now assume a new relocate arrives before that time.
Again you send the commands to read 8 blocks of 1/4 second. There
is some logic in the app that makes these cancel the previous ones
that have not yet started. So you end up with 1 read you can't
cancel against 8 that have to be done anyway. That's not a big
loss. One of my players works this way. From the user's point of
view a relocate happens almost instantly.

If the read bandwidth is just above what is required for continuous
streaming, then very probably you can't support this sort of thing
without extra delays. But even in this case they don't need to be
very big. In the example above one extra fragment is read (the one
you can't cancel) compared to the four or so you need to have done
anyway before you can resume playing safely. So it takes at most
25% more time.

If the reads happen on an NFS volume, then even if the cancel
worked on the client side, that doesn't imply that the data won't
be transmitted by the server anyway. So nothing would be gained
by cancelling.
 
Ciao,

-- 
FA
_______________________________________________
Linux-audio-dev mailing list
Linux-audio-dev@email-addr-hidden
http://lists.linuxaudio.org/listinfo/linux-audio-dev
Received on Tue Jun 14 16:15:06 2011

This archive was generated by hypermail 2.1.8 : Tue Jun 14 2011 - 16:15:06 EEST