Re: [linux-audio-dev] 32/96 x 24 playback improvements, I think


Subject: Re: [linux-audio-dev] 32/96 x 24 playback improvements, I think
From: Paul Barton-Davis (pbd_AT_Op.Net)
Date: Thu Feb 03 2000 - 10:18:07 EST


>> Anyway, we don't need to push the bandwidth limits of hard disks -
>> they can easily do what we need them to do. Even with random seeking
>> and small block sizes (4K), I can get 40MB/sec out of my Ultra-2 SCSI
>> drives. Thats way more than we need for 24/24/96 (9MB/sec).
                                              ^^
                                              should be 32 (bits)

>Hi folks, I watched the multitrack playback discussion, and
>as far as I can tell,
>using multiple threads to read the tracks may not be the best idea.
>In my multitrack playback experiments, I got the best results using:
>
>- mmap() / munmap() instead of read()
> (read() seems to read ahead too much, therefore wasting some of the disk
> bandwidth)

Well, read() invokes a read-ahead heuristic that assumes sequential
access to a file. Assuming that this is what's going to happen, this
can be a good thing, although it effectively means that you're
actually buffering even more than you think (it's just that some of it
remains in kernel space before being copied up to user space on the
next read(2)).

But I may return to mmap(), if only because it reduces copying.
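
For what it's worth, the mmap()-based read I have in mind looks roughly
like the sketch below. This is only an illustration - the struct and
function names are invented, and real code has to handle EOF and short
windows - but it shows why the copy goes away: the playback code reads
the mapped pages directly instead of having the kernel copy them into a
separate user-space buffer on every read().

    /* Sketch: map one "window" of a track file and hand the mapped
     * pages straight to the playback code.  mmap() requires the file
     * offset to be page-aligned, so we round down and remember the
     * adjustment.
     */
    #include <sys/types.h>
    #include <sys/mman.h>
    #include <unistd.h>

    struct track_window {
            void   *map;      /* start of the mapping            */
            size_t  map_len;  /* length of the mapping           */
            void   *data;     /* first byte of the audio we want */
    };

    int map_track_window (int fd, off_t offset, size_t len,
                          struct track_window *w)
    {
            long  pagesize  = sysconf (_SC_PAGESIZE);
            off_t map_start = offset - (offset % pagesize);

            w->map_len = len + (size_t) (offset - map_start);
            w->map = mmap (NULL, w->map_len, PROT_READ, MAP_SHARED,
                           fd, map_start);

            if (w->map == MAP_FAILED) {
                    return -1;
            }

            w->data = (char *) w->map + (offset - map_start);
            return 0;
    }

    void release_track_window (struct track_window *w)
    {
            munmap (w->map, w->map_len);
    }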

> - using one single thread to read all tracks in sequence

Done. I am still a little surprised that this is better, but I do
accept that it is, simply because my experiments say so.

>- using a single thread you schedule readings by "most empty" playback buffer,

There is no such thing in a double-buffered system. Every read is the
same size: one side of the double buffer. We queue it as soon as the
read/write pointer crosses into the other "side" of the double buffer.
A buffer "side" is either full, or waiting to be filled. That's all.
This also means that you can size the buffers for the optimal read
size, whereas if you do variable-sized reads, things can get a little
slower in terms of throughput.
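
To make that concrete, the refill logic amounts to something like this
sketch (the names and the side size are invented for illustration; the
real thing also has to deal with short reads, seeks and recording):

    /* Sketch: one half ("side") of a double buffer per track.  Every
     * refill is the same fixed size - exactly one side - and a refill
     * is queued as soon as the playback pointer crosses into the
     * other side and marks this one empty.
     */
    #include <unistd.h>

    #define SIDE_BYTES (256 * 1024)  /* one side of the double buffer */

    struct track_buffer {
            char sides[2][SIDE_BYTES];
            int  playing_side;   /* side the audio thread reads from */
            int  side_full[2];   /* full, or waiting to be filled    */
            int  fd;             /* track file                       */
    };

    /* Called from the single disk thread, for each track in turn. */
    int refill_if_needed (struct track_buffer *t)
    {
            int idle_side = !t->playing_side;

            if (t->side_full[idle_side]) {
                    return 0;    /* nothing to do yet */
            }

            /* always a fixed-size, optimally-sized read */
            if (read (t->fd, t->sides[idle_side], SIDE_BYTES) != SIDE_BYTES) {
                    return -1;   /* short read: EOF or error */
            }

            t->side_full[idle_side] = 1;
            return 0;
    }

The audio thread just flips playing_side and clears side_full[] when it
crosses the boundary; the disk thread walks all the tracks calling
refill_if_needed() in sequence.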

> which allows you to easily implement varispeed.
> ( varispeed is easy to implement in a multithreaded context too, but the
>problem is that in that case streams playing at higher pitch (=playback
>frequency) have to be "prioritized" in order to improve reliability.

Varispeed is a problem that I have yet to consider. Definitely a good
idea, but I am not sure if I can do it without actually resampling the
data, which I don't think I can do for 24 tracks at the same time as
recording.
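
If I ever do tackle it, the crude approach would be linear
interpolation at a fractional read increment, something like the sketch
below (this is not code that exists anywhere; a real implementation
needs a better interpolator and has to carry the fractional position
across calls and stay inside the source buffer):

    /* Sketch: crude varispeed via linear interpolation.  speed > 1.0
     * plays faster (higher pitch), speed < 1.0 slower (lower pitch).
     * src must hold at least (int)(nframes * speed) + 2 samples.
     */
    void varispeed_mix (const float *src, float *dst, int nframes,
                        double speed)
    {
            double pos = 0.0;
            int    i;

            for (i = 0; i < nframes; i++) {
                    int    idx  = (int) pos;
                    double frac = pos - idx;

                    dst[i] = (float) ((1.0 - frac) * src[idx]
                                      + frac * src[idx + 1]);
                    pos += speed;
            }
    }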

>I am sceptical about the fact that the disk driver always serializes
>disk access in the best possible manner. There are simply too many
>idiotic cases where the logic of the disk driver doesn't figure out
>the access patterns of certain applications well.

You're right. However, that's a function of the buffer cache block
replacement policy and the disk read-ahead heuristic, both of which
assume sequential file access. It just so happens that for audio
recording and playback, both of these are almost exactly what we
want. I think.

In general, though, yes, the disk driver doesn't do the optimal
thing. Oracle, for example, gets hit by this a lot, because it very
definitely does *not* do sequential accesses on its data files.

>> The problem is seek time, and thats not affected by the device driver
>> apart from its disk scheduling policy. Since in all cases the goals of
>> a disk driver are basically the same - reduce seeking, reduce seeking,
>> reduce seeking - its not clear to me that a dedicated driver would be
>
>yes, reduce seeking, but the driver has to give up some seeking
>optimization in exchange for a lower disk access latency.

I don't understand this. Seeking *is* the cause of disk access
latency. How can you lower one by raising the other?

>> If nothing else, use fdatasync, rather than fsync, so that the head
>> doesn't have to seek back/forward to update the inode metadata as well
>> as the actual disk data.
>
>I agree, during writes you should call fdatasync() every 0.3sec - 2sec,
>in order to avoid a large disk flush (on bigmem boxes) which could stall
>other operations (disk reads for the playback) and cause dropouts.

Personally, I would prefer to avoid it completely and use a
dual-processor machine, which solves this particular problem :)
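
For anyone who does want the periodic flush, it is not much code -
something along these lines in the disk-writer thread (the interval and
the names are arbitrary; pick whatever keeps the dirty-page backlog
small on your box):

    /* Sketch: flush file *data* (not inode metadata) about once a
     * second while capturing, so the kernel never accumulates a huge
     * backlog that it flushes all at once and stalls our reads.
     */
    #include <unistd.h>
    #include <time.h>

    void maybe_flush (int capture_fd, time_t *last_flush)
    {
            time_t now = time (NULL);

            if (now - *last_flush >= 1) {
                    fdatasync (capture_fd);  /* data only, no inode seek */
                    *last_flush = now;
            }
    }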

>Paul, are you requiring a separate disk for the audio tracks ?
>I guess yes, because I tested multitrack playback on a single disk box,
>and if you fire up a large app like netscape, or do some disk I/O,
>after a few secs the HDR application is unable to keep the buffers filled,
>which results in a buffer overrun. (=dropout).

Requiring? No. Preferring, yes.

The studio system has 3 18GB Ultra-2 drives on their own Ultra-2 cable
(sadly not their own channel - that would be nice, but I haven't found
a motherboard for PII/III with a dual-channel SCSI controller, and I
don't want to install a 2nd SCSI controller just for this. Not yet,
anyway). Nothing but recording/playback will happen on these disks,
and I am also hoping to find ways to ensure contiguous block
allocation on these filesystems. It's vaguely possible, believe it or
not, that it might make more sense to put a VFAT filesystem on them,
but right now it's just ext2 with a 4K blocksize.

My testing, however, is happening on my own machine, on disks shared
with the rest of the system (a configuration I *must* alter at some
point).

>Anyway a disk doesn't cost a fortune anymore, and if you want you can
>easily use multiple disks and spread the tracks across disks giving you
>a bigger bandwidth and seek performance.

I have been thinking about this multi-disk approach, but it seems to
me that the complexity of it (not a lot, but definitely some) makes it
undesirable when a single disk can definitely meet the demand. Since
the disks I have in the studio are mounted in removable chassis, I
prefer to think of them as self-contained tapes. Remember - we need
9MB/sec for 24t/32b/96kHz. Even EIDE drives can do this :)
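
For the record, the arithmetic behind that figure, assuming the 24-bit
samples are stored in 32-bit (4-byte) words:

    24 tracks x 4 bytes/sample x 96000 samples/sec = 9,216,000 bytes/sec
                                                   ~ 9.2MB/sec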

--p


