Re: [linux-audio-dev] HD-recording frustration: linux makes it almost impossible :-(


Subject: Re: [linux-audio-dev] HD-recording frustration: linux makes it almost impossible :-(
From: Benno Senoner (sbenno_AT_gardena.net)
Date: Mon Apr 10 2000 - 14:16:31 EEST


On Mon, 10 Apr 2000, Paul Barton-Davis wrote:
> >The 2nd CPU isn't needed, since SCHED_FIFO processes not calling disk
> >I/O routines will run just fine, therefore the audio thread doesn't
> >suffer much during heavy disk I/O (about 50-150ms on normal
> >kernels, about 1-3ms on a lowlatency kernel)
>
> As I pointed out, I think it's important for the butler thread to be
> SCHED_FIFO as well. If it's not, and it's issuing disk i/o requests
> serially, then it can effectively reduce the disk throughput because
> of the delay between a read request completing and the butler
> thread running again. However, it obviously has to run at lower
> priority than the audio thread.

I fully agree here, but unfortunately the delay between two read() or
write() requests is the minor issue.
It is the read() and write() calls themselves which sometimes block for
6 secs, even if you try to read/write only a small amount of data.
(Even a df or an ls in the console blocks for several secs during
the critical phases (kernel buffer flushing).)
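To illustrate Paul's point above, the two-priority setup might be sketched like this (a minimal sketch; the priority numbers are made up, real code would apply this per thread rather than per process, and SCHED_FIFO needs root):

```python
import os

# Hypothetical priorities: audio thread above the butler thread,
# both SCHED_FIFO, so neither is starved by ordinary processes.
AUDIO_PRIO = 80
BUTLER_PRIO = 70

def set_fifo(prio):
    """Try to switch the caller to SCHED_FIFO at the given priority.
    Returns False (staying SCHED_OTHER) if we lack root privileges."""
    try:
        os.sched_setscheduler(0, os.SCHED_FIFO, os.sched_param(prio))
        return True
    except PermissionError:
        return False
```

The only invariant that matters here is AUDIO_PRIO > BUTLER_PRIO: the butler gets real-time scheduling so it reissues disk requests promptly, but it can never preempt the audio thread.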
>
> >The problem is not having the buffer refilled up to 2k samples or up to 10k
> >samples, the problem is when your disk thread gets blocked by the
> >damn buffer flushing.
>
> Well, the problems on my system came from using non-contiguous files,
> too small chunk size for i/o and other things that all led to not
> having the buffers filled in time. Once I got that solved, it's been
> easy since then (bar the read/write ordering issue, and I think that's
> easily solved by throwing more memory at the problem).

Yes, with 4MB per track it would even work without inode preallocation,
but 50 x 4 = 200MB is quite a bit of memory, not worth wasting
on mere track buffers (the disk latencies aren't 6 secs long :-) )

>
> But as I said in a previous message, I think the problem may be that
> you are allocating fs blocks as you go, which is a performance killer
> for more than one reason. If you preallocate the files, there is no
> (or almost no) metadata to be written.
I will try this; I have a spare partition to experiment with.

[ .. varispeed issues ]

> >Why will it not work ?
> >Do you have any alternatives ?
> >Anyway the algorithm adapts itself to the actual conditions.
>
> If you fill an extremely empty track, it requires more data than a
> "regular" track. This will take more time to fetch. This delays the
> subsequent tracks being filled. If you then refill the next one, it will
> also take longer than you "expect", and so on. If this occurs to too
> great an extent, you will fail to refill the "last" tracks at all
> before the audio thread catches up with you.

No problem Paul, I thought about this issue while designing my algorithm:
it is really simple, just limit the maximum read/write size:
if the extremely empty track has 2MB of free space in it, I don't read the
whole 2MB, but limit the upper size to 256kb for example; that ensures a fast
service time and a low first-to-last-track refill time. (But again, if you set
the max I/O size too low, e.g. 64kb, then throughput will suffer and you will
be able to do fewer tracks than with larger I/O sizes, but that's obvious.)
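The capped-refill rule is trivial to state in code (256kb is just the example value from above):

```python
MAX_IO = 256 * 1024  # assumed per-track, per-pass cap, as in the example

def refill_size(free_space, max_io=MAX_IO):
    """How much to read for one track in one butler pass: never more
    than the cap, so one very empty track cannot starve the others."""
    return min(free_space, max_io)
```

With the cap, a track with 2MB of free space gets the same 256kb of service time as any other track on this pass; the rest is topped up on later passes.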

>
> The way to avoid this is to always read the same size chunk of data
> for each track, BUT then, if that didn't fill all the tracks, continue
> around the loop again to fill them up. That way, slowed down or
> regular speed tracks get filled in time, and you use what would
> otherwise be slack time in the butler thread (the time between
> finishing a refill and the next "signal" from the audio thread/wakeup
> from usleep) to "top up" (this sounds very english - does it make
> sense?) the speeded up tracks.

Yes, I am doing this: try to fill all tracks up to free_space, but limited by
the max I/O size; that means if there is an extremely empty track,
during the next run it gets refilled further, maxIO samples at a time.
The condition for the butler thread to sleep is that all tracks have
less than a minimum amount of free space left in their buffers (8192 samples
in my case). (But even if you set this threshold to zero it doesn't change
much, because a 20ms usleep() is nothing compared to the 6 sec stalls in the
read() / write() syscalls.)
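The loop can be simulated without any disk at all; this toy version (the units are samples, and the cap and threshold are just the example numbers from above) shows that the per-pass cap bounds the first-to-last-track service time while every track still ends up topped up:

```python
def butler_pass(free, max_io):
    """One pass over all tracks: refill each by at most max_io,
    returning the remaining free space per track."""
    return [f - min(f, max_io) for f in free]

def butler(free, max_io=256 * 1024, threshold=8192):
    """Keep making passes until every track has less than `threshold`
    free space left, i.e. the point where the real butler would sleep."""
    passes = 0
    while any(f >= threshold for f in free):
        free = butler_pass(free, max_io)
        passes += 1
    return free, passes
```

An extremely empty track (2MB free) needs eight 256kb passes, but the nearly full tracks are each serviced in a single bounded read on the first pass.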

>
> >Ok you can use a clean disk and prealloc the files etc, but I want it
> >to work on the average disk too)
>
> I can almost guarantee you from my own mistakes that this will not
> work.
Unless you use HUGE buffer sizes, but again, that's a waste of expensive
memory (200-300MB of RAM still costs quite a lot).
>
>
> I would always get an underrun within a couple of seconds. Then at
> some point, I was running a test program on the same files, and I
> noticed that the seek+read performance really picked up in the last
> 30secs or so of the file. It was then I remembered that this was a
> dirty filesystem, because it occurred to me that this was a section of the
> files that had ended up being fairly contiguous. I cleaned the fs out,
> remade the filesystem, and recreated ardour's directory and track
> files.
>
> Bingo! Almost perfectly predictable performance.

Nice, but I am wondering if Windows users are required to do this too,
or if they can simply run their HD recorder on a non-fresh disk without
substantial performance loss.
But that could be the RAWIO issue, since RAWIO isn't that sensitive
in this regard.

>
> So, it's also possible that your problems are not caused by the buffer
> cache at all, but by using fragmented files.

I don't think so: during playback only, I can read many, many tracks which
are almost certainly all fragmented, since I am testing this on my root
filesystem, and there is almost no problem.
The charts I've plotted clearly indicate that when you do a find /, for
example, the metadata updating generates big peaks in the
buffer-size curves. (I will post the test code and graphs today or tomorrow.)

Benno.



This archive was generated by hypermail 2b28 : Mon Apr 10 2000 - 17:17:30 EEST