Re: [linux-audio-dev] [Fwd: [patch]2.4.0-test6 "spinlock" preemption patch]


Subject: Re: [linux-audio-dev] [Fwd: [patch]2.4.0-test6 "spinlock" preemption patch]
From: Aki M Laukkanen (amlaukka_AT_cc.helsinki.fi)
Date: Fri Sep 08 2000 - 16:20:17 EEST


On Thu, 7 Sep 2000, Benno Senoner wrote:
> Ok write may be a special case because it can be queued up so that
> adjacent blocks can get written in one rush, but for reading this will not
> work so well IMHO.
> (how long does the kernel wait before reading ?)

The read latency default is 50000, while the write latency default is 100000.
These values can be tuned with elvtune or powertweak (recent CVS versions).
Take a look at this article for more information:

http://www.strasbourg.linuxfr.org/jl3/features-2.3-1.html

Those values are for an older version of the elevator code and are not
directly comparable. The latency value does not represent a real time unit;
rather, it is used as a per-request counter, decremented every time an
incoming request is compared with a request already in the queue. Do remember
that the elevator defaults are tuned for typical server configurations,
where aggressive request merging is beneficial.
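A rough sketch of that counter-based scheme (a toy model of the idea, not the
actual ll_rw_blk.c code; the names and the exact insertion rule here are my
own simplification): each queued request carries a countdown starting at the
tunable latency value, every comparison against a newly arrived request
decrements it, and a newcomer may only be sorted in ahead of requests whose
countdown has not yet expired.

```python
# Toy model of the 2.4 elevator's counter-based "latency" (a sketch,
# not the real kernel code): each queued request gets a countdown that
# starts at the tunable latency value.  Every time a new request is
# compared against a queued one, that countdown is decremented; the
# newcomer may move ahead of a queued request only while its countdown
# has not expired.  This bounds starvation without using real time.

def insert_request(queue, sector, latency):
    """Insert a request for `sector`, elevator-style."""
    pos = len(queue)  # default: append at the tail
    # Scan from the tail toward the head.
    for i in range(len(queue) - 1, -1, -1):
        req = queue[i]
        req["count"] -= 1                    # one more comparison
        if req["count"] <= 0 or req["sector"] < sector:
            break                            # expired or sorted: stop here
        pos = i
    queue.insert(pos, {"sector": sector, "count": latency})

queue = []
for s in [100, 50, 200, 10]:
    insert_request(queue, s, latency=3)
# Sector 10 would sort first, but the expired request for sector 100
# blocks it from moving all the way to the head.
print([r["sector"] for r in queue])  # → [50, 100, 10, 200]
```

With a large latency value the queue stays strictly sector-sorted for longer,
which is exactly the aggressive-merging behaviour that helps servers but can
hurt a latency-sensitive audio application.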

> I agree that the seeks should be minimized, but IMHO the best way to achieve
> this is to let the userspace app read and write large blocks, and at some point
> increasing the read/write blocksize does not increase the throughput anymore.
> And I think this value lies around 256-512KB (ad PBD will agree on this).

Except it doesn't work that way. Each block is allocated separately in the
kernel; see mm/filemap.c:generic_file_write(). If you see improved
performance from bigger user-space writes, it is because of other constant
overhead, such as the system call, checking for file locking, etc. The ext2
block allocation strategy tries to allocate files in sequential order, i.e.
it has a "goal block" which it tries to allocate before it tries any other
blocks. This doesn't change the fact that in some cases (albeit pathological)
each block of a file could be far apart from the others.
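The goal-block idea can be sketched like this (a toy model of the strategy,
not the real fs/ext2/balloc.c code; the bitmap layout and fallback search are
my own simplification):

```python
# Toy model of ext2's "goal block" allocation: when allocating a new
# block for a file, first try the goal -- the block right after the
# file's last allocated block -- and fall back to scanning the
# free-block bitmap only when the goal is already taken.

def alloc_block(bitmap, goal):
    """bitmap[i] is True if block i is free; returns the block allocated."""
    if 0 <= goal < len(bitmap) and bitmap[goal]:
        bitmap[goal] = False
        return goal
    # Fallback: first free block anywhere.  This is the pathological
    # case where consecutive file blocks end up far apart on disk.
    for i, free in enumerate(bitmap):
        if free:
            bitmap[i] = False
            return i
    raise OSError("no free blocks")

bitmap = [True] * 16
bitmap[5] = False                # some other file owns block 5
file_blocks = []
goal = 3
for _ in range(4):
    blk = alloc_block(bitmap, goal)
    file_blocks.append(blk)
    goal = blk + 1               # next goal: keep the file sequential
print(file_blocks)  # → [3, 4, 0, 1]
```

The file stays sequential until the goal is occupied, at which point the
allocator jumps away, just as described above.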

> Plus think about the fact that a good HDR app has to provide varispeed too
> (eg one track playing slower or faster than the other), thus all the
> interleaving at FS layer are worthless in that case (and perhaps it can
> even lead to lower performance compared to a plain FS)

It could very well be so, although in the case of streamfs, making each
fragment bigger (the default is 4 kB * 32 = 128 kB) could possibly
alleviate this.

> I am very sceptical that a big amount of requests get merged,
> especially in the case of big read/write sizes.
> (eg. reading 256KB x 40 tracks = 10MB , that means that all the requests,
> in order to optimize the reading path, would need to get delayed by 1-2secs
> before reading, which I HARDLY believe.

Not all, but a large portion. Remember that the block device layer only sees
a bunch of 4 kB requests.
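To make that concrete, here is a toy model of request merging at the block
layer (my own simplification, not the real kernel merge code): a single
256 kB user-space write arrives in the queue as 64 separate 4 kB requests,
and contiguous ones get merged back into large requests.

```python
# Toy model of block-layer request merging: a 256 kB write from user
# space reaches the queue as 64 separate 4 kB requests (one per block),
# and the elevator merges requests that are contiguous on disk.

BLOCK = 4096

def submit(queue, start):
    """Add a 4 kB request, merging with an adjacent queued request."""
    for req in queue:
        if req["start"] + req["len"] == start:      # back-merge
            req["len"] += BLOCK
            return
        if start + BLOCK == req["start"]:           # front-merge
            req["start"] = start
            req["len"] += BLOCK
            return
    queue.append({"start": start, "len": BLOCK})

queue = []
# One 256 kB user-space write = 64 sequential 4 kB blocks.
for i in range(64):
    submit(queue, i * BLOCK)
# One block from an unrelated file, far away on disk.
submit(queue, 10_000 * BLOCK)

print(len(queue))                # → 2 requests after merging
print(queue[0]["len"] // 1024)   # → 256 (kB)
```

So the 64 blocks of the sequential write collapse into one large request,
while the distant block stays separate; the big writes discussed above help
only to the extent that their blocks really are contiguous.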

-- 
D.



This archive was generated by hypermail 2b28 : Fri Sep 08 2000 - 17:14:55 EEST