Re: [linux-audio-dev] HD-recording frustration: linux makes it almost impossible :-(


Subject: Re: [linux-audio-dev] HD-recording frustration: linux makes it almost impossible :-(
From: Paul Barton-Davis (pbd_AT_Op.Net)
Date: Mon Apr 10 2000 - 00:52:06 EEST


Silly title.

>Notice that in the following text I am referring to audio tracks in
>32-bit format (signed long) in order to simulate Paul's 24-bit
>recording. (32-bit 44.1kHz mono)

I am doing 32-bit/48kHz right now.

>The problem is that when there is a buffer cache flush (kernel disk write
>buffers), things get really ugly. Even with the disk thread
>using SCHED_FIFO, it doesn't get rescheduled for 4-8 secs
>SOMETIMES (since I am calling read() and write()). ARRGH!
>Hopefully a SCSI disk will make this better, but on Windoze you can
>record painlessly on EIDE disks, therefore HD recording has to work
>on Linux on IDE disks as well, or I am forced to call it an UNUSABLE
>OS. :-)

No, it's not unusable. You're trying to get the best of both worlds:
using the kernel's disk buffering, but wanting it to cost nothing. If
you want Windows-like performance, you'll need to use a disk access
method that, like Windows, doesn't go through a kernel buffer
cache. It's called rawio, and it's designed for exactly this
purpose. The problem, of course, is that nothing else can understand
the data you've written unless you "describe" it.
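
Purely as an illustration of what "describing" the data could mean
(this is not Ardour code, and every field name here is invented), the
raw partition could start with a small header that a helper library
knows how to parse before it touches any audio:

    /* Hypothetical layout header at byte 0 of the raw partition.  */
    /* A small library would read this before touching any audio.  */
    struct raw_layout {
        unsigned int  magic;            /* identifies our format          */
        unsigned int  version;
        unsigned int  sample_rate;      /* e.g. 48000                     */
        unsigned int  sample_bytes;     /* e.g. 4 for 32-bit samples      */
        unsigned int  n_tracks;
        unsigned long track_offset[32]; /* start sector of each track     */
        unsigned long track_length[32]; /* length of each track, sectors  */
    };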

Linux 2.3 has rawio built in (I think there are 2.2.x patches), so if
you want that kind of operation, you should use it. Assuming that you
don't have lots of other disk I/O going on that will still require
buffer cache flushing, you'll get Windows-like performance from it. It
would be nice if there were a filesystem or something layered
above rawio, but unfortunately, such a concept doesn't exist in Linux:
a filesystem by definition uses the buffer cache. The best you could
do is to provide a library that understands the data layout on your
raw partition. This is probably the area where BeOS wins the biggest
prizes for streaming media applications - several people pointed out
on linux-kernel recently that for big data transfers, BeOS will do DMA
straight into/from user space. It's not really faster that way, but
there's no buffer cache to complicate matters, and you still get the
data going to/from a "real filesystem". Sigh. I'm sure someone will
implement something like this for Linux at some point.
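
For reference, here is roughly what using the raw interface looks
like. This is only a sketch: it assumes /dev/raw/raw1 has already
been bound to the audio partition with the raw(8) tool, and the chunk
size is arbitrary. The important point is that the buffer, offset and
transfer length all have to be sector aligned, because the transfer
bypasses the buffer cache and goes straight to the device:

    #include <fcntl.h>
    #include <unistd.h>
    #include <stdlib.h>
    #include <malloc.h>

    #define BLOCK (256 * 1024)          /* one refill chunk, multiple of 512 */

    int write_chunk(void)
    {
        int fd;
        char *buf;
        ssize_t n;

        /* /dev/raw/raw1 must already be bound to the audio partition */
        fd = open("/dev/raw/raw1", O_WRONLY);
        if (fd < 0)
            return -1;

        /* raw I/O requires an aligned buffer; memalign() provides one */
        buf = memalign(512, BLOCK);
        if (buf == 0) {
            close(fd);
            return -1;
        }

        /* ... fill buf with BLOCK bytes of audio ... */

        /* offset and length are already 512-byte multiples */
        n = write(fd, buf, BLOCK);

        free(buf);
        close(fd);
        return (n == BLOCK) ? 0 : -1;
    }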

Having said all that, I suspect that there are also problems with the
IDE drivers that exacerbate this issue. Think about it: the data rate
to/from the disk is well known. The buffer cache doesn't change it,
but it does make it bursty. This isn't a problem with the SCSI drivers,
but it seems fairly clear from reports from several people
coming at it from different angles that the IDE drivers block interrupts
for relatively long periods of time, and this can prevent even a
SCHED_FIFO thread from getting proper access to the processor. So
perhaps it's fair to say that Linux has unusable device drivers for
streaming media on IDE devices.
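
For completeness, since SCHED_FIFO keeps coming up: putting the disk
(butler) thread into the FIFO class is just a sched_setscheduler()
call (not from Ardour; the priority value here is arbitrary, and you
need root to do it). The point above is that even this doesn't help
if a driver sits with interrupts blocked for a long time:

    #include <sched.h>
    #include <stdio.h>

    /* Put the calling thread/process into the SCHED_FIFO class.      */
    /* Needs root; priority 50 is just an arbitrary mid-range choice. */
    int make_fifo(void)
    {
        struct sched_param parm;

        parm.sched_priority = 50;
        if (sched_setscheduler(0, SCHED_FIFO, &parm) != 0) {
            perror("sched_setscheduler");
            return -1;
        }
        return 0;
    }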

If you want to continue to use the buffer cache, you'll need to play
games either with the kernel or with your hardware configuration to
deal with its existence. For example, if you don't want to use rawio,
use a dual-CPU machine and SCSI with more than one disk. Using
two CPUs gives you a much bigger comfort zone when doing
this kind of real-time stuff, and using SCSI with more than one disk
will more or less guarantee throughput.

> Even with a 2MB buffer per track we miss a deadline sometimes.
>And that is independent of the algorithm; I think Ardour will
>encounter the same problems.

Given that I just recorded 45 minutes of 4-channel 32/48 audio this
morning in the studio, without any problems, and with clear
improvements over the Alesis M20 ADATs we have there as well, I think
not :) And to be perfectly honest, I don't care if it doesn't work on
uniprocessor IDE systems. I'm writing it because I want a professional
hard disk recorder at my friend's studio, and in my estimation, a
uniprocessor with IDE disks and a preemptive multitasking OS are not
the right combination. If you can get it to work that way, good for
you.

>I will release my code within the next 48 hours, so you ULTRA SCSI folks
>can benchmark your disks with my test program.

Note that SCSI's only advantage over IDE is the ability to queue
requests to multiple targets. Also, some SCSI devices allow deeper
request queues. The actual per-disk throughput is rarely higher than
IDE, and sometimes lower, as I discovered in my recent quest to get to
the bottom of my own disks' performance (which turns out to actually be
pretty good given that they are two years old).

>Paul, your statement about ringbuffers == bad is wrong, since
>the boundary condition occurs very seldom (a small % of cases),
>plus I am doing page alignment of read()s and write()s, therefore
>it should be pretty optimized.

That's not true in the general case. If in fact I/O to your ringbuffer
always takes place with a single I/O operation, then there is little
benefit and some overhead in using a ring buffer as opposed to a ring of
buffers.

In my experience, if I refill a ringbuffer with the space that's
actually available when I come to refill it, as opposed to how much
was there when the butler thread woke up, I hit boundary conditions
virtually all the time. If I just assume that when the butler wakes
up, there is a static amount to refill, then sure, in general refills
can be made to miss the boundaries, but then why use a ringbuffer at
all? There are virtually no cache benefits, because of the amount of
data we're talking about.
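
To spell out the boundary condition: if the butler refills with
whatever space is free at that moment, the free region regularly
straddles the end of the ring, and one logical refill becomes two
read() calls, the second of which is usually small and poorly
aligned. A ring of fixed-size buffers never hits this, because each
refill is exactly one buffer. A rough sketch of the wrap case (the
names are mine, not from any real code):

    #include <unistd.h>

    /* Refill 'want' bytes into a ring buffer of 'size' bytes, starting */
    /* at write index 'widx'.  When the free region wraps past the end  */
    /* of the ring, one logical refill costs two read() calls.          */
    ssize_t refill(int fd, char *ring, size_t size, size_t widx, size_t want)
    {
        size_t to_end = size - widx;
        ssize_t a, b;

        if (want <= to_end)                  /* common case: one read() */
            return read(fd, ring + widx, want);

        /* boundary case: split the refill in two */
        a = read(fd, ring + widx, to_end);
        if (a != (ssize_t) to_end)
            return a;
        b = read(fd, ring, want - to_end);
        return (b < 0) ? b : a + b;
    }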

>PS2: I will demonstrate to Paul that varispeed is PAINLESS (no or
>little performance degradation) with my code, (first demo will not contain
>varispeed).

I didn't say that varispeed was painful. I said that doing it the way
you sketched out (determine space per track; sort tracks; refill each
track) will not work, or more precisely, it will not work if you are at
all close to the limits of the disk throughput. If you are operating
comfortably within the disk throughput limit, then sure, it's just
fine.

--p
 

