Re: [linux-audio-dev] info point on linux hdr


Subject: Re: [linux-audio-dev] info point on linux hdr
From: Benno Senoner (sbenno_AT_gardena.net)
Date: Sat Apr 22 2000 - 15:48:23 EEST


On Sat, 22 Apr 2000, Paul Barton-Davis wrote:
> >As Andreas pointed out, there is no portable/filesystem independent way to
> >discover the blocksize used by your FS.
> >The thing you are doing in ardour doesn't allocate ALL blocks since
> >you are using 4096 increments (which is the preferred IO size = intel page
> >size),
>
> Oh yeah ?
>
> ardour/track.cc:162:
>
> for (i = statbuf.st_size; i < file_bytes; i += statbuf.st_blksize) {
>
> > and is 4 times as big as the 1024 bytes used by ext2 by default.
>
> Wrong. ext2 picks the fs blocksize mostly according to the total size of
> the partition, plus the number of inodes to be supported, etc.

Not always true:
for example, my machine has a 16GB disk which initially had Red Hat 5.2
installed, and the chosen blocksize was 1024, even though the partition was
fairly big.
(Note that at the time I used all the default settings of the RH 5.2 installer.)
The machine was later upgraded to 6.0 and then 6.1, but as you know
the upgrade process leaves the old ext2 fs in place.
Maybe newer distros pick 4096 as the default, but you cannot assume this
to be always true.
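
For reference, the only two "blocksize" values userspace can easily query are
st_blksize from stat() and f_bsize from statfs(), and neither is guaranteed to
equal the real allocation unit of the fs. A quick sketch (mine, just for
illustration):

/* print what stat() and statfs() report for a given path;
 * on ext2 f_bsize happens to be the fs block size, but the man page
 * only promises an "optimal transfer block size" */
#include <stdio.h>
#include <sys/stat.h>
#include <sys/vfs.h>

int main (int argc, char **argv)
{
        struct stat st;
        struct statfs fs;

        if (argc < 2 || stat (argv[1], &st) < 0 || statfs (argv[1], &fs) < 0)
                return 1;

        printf ("st_blksize (preferred I/O size): %ld\n", (long) st.st_blksize);
        printf ("f_bsize (per statfs):            %ld\n", (long) fs.f_bsize);
        return 0;
}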

>
> pbd[242]>sudo dumpe2fs /dev/sda3 | grep -i size
> dumpe2fs 1.18, 11-Nov-1999 for EXT2 FS 0.5b, 95/08/09
> Block size: 4096
> Fragment size: 4096
> Inode size: 128
>
> This fs was created without specifying the blocksize ... satisfied ?
>
> Yes, in theory, there could be a mismatch between the fs blocksize and
> st_blksize, but I happen to know that ext2 will "always" use 4K blocks
> on a large partition unless it's given other parameters that
> suggest a better size.

Think about all these boxes like mine, upgraded from old distros,
still using 1024-byte blocks.

Anyway, your preallocation loop makes no sense to me, since:
- on filesystems using 4096 bytes/block (like yours),
your preallocation algorithm touches ALL blocks, therefore giving
performance equal to (or a bit worse than) plain write()s.

- on filesystems using 1024 bytes/block, your code will only preallocate 25%
   of all blocks, making the preallocation almost useless (see the arithmetic
   below).
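
To spell out the arithmetic: if st_blksize reports 4096 (the preferred I/O
size) while the fs allocates in 1024-byte blocks, stepping i += 4096 touches
byte offsets 0, 4096, 8192, ..., i.e. blocks 0, 4, 8, ... -- one block in four
(1024/4096 = 25%); blocks 1-3, 5-7, ... are never written and stay unallocated
holes until the real audio data arrives.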

Paul, the _ONLY_ and _BEST_ way to go is to preallocate via write():
it will work on any filesystem, with any blocksize, at maximum speed.
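
To make it concrete, here is a minimal sketch of the kind of write()-based
preallocation I mean (the function name, chunk size and error handling are
just mine for illustration, not ardour code):

/* preallocate 'bytes' bytes by writing zeros in big sequential chunks,
 * so every fs block of the file really gets allocated, whatever the
 * blocksize of the underlying filesystem is */
#include <sys/types.h>
#include <unistd.h>
#include <string.h>

int preallocate (int fd, off_t bytes)
{
        static char buf[65536];         /* chunk size is arbitrary */

        memset (buf, 0, sizeof (buf));

        while (bytes > 0) {
                size_t chunk = bytes > (off_t) sizeof (buf)
                               ? sizeof (buf) : (size_t) bytes;
                ssize_t n = write (fd, buf, chunk);
                if (n <= 0)
                        return -1;      /* disk full, I/O error, ... */
                bytes -= n;
        }
        return 0;
}

Whatever the blocksize is, every block ends up allocated, and the kernel gets
nice big sequential writes instead of one byte per block.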

Hopefully you agree with me now (sorry, but I am right :-) ); if not, please
explain why, and come up with proof that your method has an edge over
plain write()s.

> Moreover, in the eventual documentation for
> ardour, it will state that the fs used for recording *should* be
> created with an explicit argument to ensure the blocksize is set to a
> known value. OK ?

I disagree here:
the documentation should say that the blocksize should be big (4096), not for
preallocation purposes, but for maximum speed.
Fewer blocks = less overhead in the block handling lists inside the kernel,
plus 4096 exactly matches the intel pagesize, which is another bonus.
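
(In practice that means creating the recording fs with an explicit blocksize,
something like

        mke2fs -b 4096 /dev/<your-recording-partition>

where the device name is of course just a placeholder.)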

It might even be possible that ringbuffer usage goes down quite a bit using
4096-byte blocks, due to the lower overhead in the block handling routines.

Next week I will perform some tests with hdrbench on a spare partition, using
1024- and 4096-byte blocksizes.

Benno.


