Re: [Fwd: [linux-audio-dev] info point on linux hdr]

Subject: Re: [Fwd: [linux-audio-dev] info point on linux hdr]
From: Stephen C. Tweedie (sct_AT_redhat.com)
Date: Mon Apr 17 2000 - 21:04:55 EEST


Hi,

On Mon, Apr 17, 2000 at 07:21:31PM +0200, Benno Senoner wrote:

> > The only way you can get much better is to do non-writeback IO
> > asynchronously. Use O_SYNC for writes, and submit the IOs from multiple
> > threads, to let the kernel schedule the multiple IOs. Use large block
> > sizes for each IO to prevent massive amounts of disk seeking. O_DIRECT
> > in this case is not an instant win: it is completely orthogonal to the
> > IO scheduling issue.
>
> Are you suggesting firing up multiple threads, where each writes a couple of
> files (in 256kb chunks) with O_SYNC?

That sort of thing, yes.
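
A minimal sketch of that scheme (the file names, thread count and chunk
counts below are made-up demo values, nothing benchmarked): one writer
thread per track file, each opening its file with O_SYNC and issuing
256kb write()s in a loop, so several synchronous writes are in flight
at once.

/* Sketch only: one O_SYNC writer thread per track file, 256kb chunks.
   File names, thread count and chunk count are made-up demo values. */
#include <fcntl.h>
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define CHUNK   (256 * 1024)   /* one write() per 256kb block    */
#define NCHUNKS 64             /* 16MB per file, demo value only */

static void *writer(void *arg)
{
    const char *path = arg;
    char *buf = malloc(CHUNK);
    int fd, i;

    if (!buf)
        return NULL;
    memset(buf, 0, CHUNK);     /* stands in for real audio data */

    /* O_SYNC: each write() returns only once the data is on disk, so
       every running writer thread is one more IO the kernel can queue. */
    fd = open(path, O_WRONLY | O_CREAT | O_SYNC, 0644);
    if (fd < 0) {
        perror(path);
        free(buf);
        return NULL;
    }

    for (i = 0; i < NCHUNKS; i++)
        if (write(fd, buf, CHUNK) != CHUNK) {
            perror("write");
            break;
        }

    close(fd);
    free(buf);
    return NULL;
}

int main(void)
{
    /* one thread per track file; the names are invented */
    char *files[] = { "track0.raw", "track1.raw", "track2.raw", "track3.raw" };
    pthread_t tid[4];
    int i;

    for (i = 0; i < 4; i++)
        pthread_create(&tid[i], NULL, writer, files[i]);
    for (i = 0; i < 4; i++)
        pthread_join(tid[i], NULL);
    return 0;
}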

> How many threads, in your opinion?

Good question --- somebody really needs to benchmark it. At least one per
file, though.
 
> The reading thread: should that still be only one, in order to prevent seeking?

Maybe. There are competing pressures. You don't want the mapping
information in the files to cause extra seeks, so there is some compromise
involved. I'd guess at least two threads per file, but you _really_
need to get it profiled. What is really needed here is a more efficient
way of encoding the files on disk using fewer indirection blocks: it's
likely to be indirection seeks as much as data seeks which cost here.
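
To put rough numbers on the indirection cost (assuming ext2 here, which
is my assumption): with 1kb blocks an indirect block holds 256 block
pointers, i.e. maps only 256kb of data, so a stream written in 256kb
chunks touches roughly one indirect block per data chunk; with 4kb
blocks a single indirect block maps 1024 * 4kb = 4MB, sixteen times
fewer indirection reads.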

> Anyway I don't understand particularly well why multiple writers should buy us
> anything, since the writing thread basically does nothing but issue write()s
> to disk. That's manual scheduling of write()s in user space (in 256k blocks).

Because if you have got enough IO requests sent to the kernel that you
can present all your outstanding writes at once, then the kernel can
sort out an efficient way of writing them one at a time with minimal seeks.

> Since the thread runs SCHED_FIFO (or nice(-20)), it should always be
> ready to issue new write() requests as soon as the disk has finished the
> previous one.
>
> Or am I missing something ?

Pipelining. With O_SYNC (or O_DIRECT), you have a latency between the
completion of one command and the application sending the next command.
With the data all sent into the kernel, we can actually queue multiple
commands to the disk at once. It's a lot more efficient.
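
A sketch of what pipelining a single file could look like (the thread
count, file name and sizes below are guesses): a handful of threads
pwrite() consecutive 256kb chunks of the same O_SYNC file, so while one
thread's write is being serviced the others already have the following
chunks queued in the kernel.

/* Pipelining sketch: a few threads pwrite() consecutive 256kb chunks of
   the same O_SYNC file, so the next request is already queued while the
   previous one is still being written.  Thread count, file name and
   sizes are guesses, not measured values. */
#define _XOPEN_SOURCE 500      /* for pwrite() */
#include <fcntl.h>
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define CHUNK    (256 * 1024)
#define NCHUNKS  256           /* 64MB total, demo value  */
#define NWRITERS 3             /* pipeline depth per file */

static int fd;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static int next_chunk;         /* next chunk index to claim */

static void *writer(void *arg)
{
    char *buf = malloc(CHUNK);

    (void)arg;
    if (!buf)
        return NULL;
    memset(buf, 0, CHUNK);     /* stands in for real audio data */

    for (;;) {
        int i;

        pthread_mutex_lock(&lock);
        i = next_chunk++;
        pthread_mutex_unlock(&lock);
        if (i >= NCHUNKS)
            break;

        /* Each pwrite() blocks until the data is on disk (O_SYNC), but
           the other writer threads keep further chunks in flight. */
        if (pwrite(fd, buf, CHUNK, (off_t)i * CHUNK) != CHUNK) {
            perror("pwrite");
            break;
        }
    }
    free(buf);
    return NULL;
}

int main(void)
{
    pthread_t tid[NWRITERS];
    int i;

    fd = open("track.raw", O_WRONLY | O_CREAT | O_SYNC, 0644);
    if (fd < 0) {
        perror("open");
        return 1;
    }
    for (i = 0; i < NWRITERS; i++)
        pthread_create(&tid[i], NULL, writer, NULL);
    for (i = 0; i < NWRITERS; i++)
        pthread_join(tid[i], NULL);
    close(fd);
    return 0;
}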

> Anyway I think that O_SYNC in a single threaded model is very slow

Yes, it will be!

> performance-wise. Hopefully this is not true for O_DIRECT too, or it will be
> unusable. (With O_SYNC I get a mere 50-60% of the # of tracks compared to
> running without O_SYNC.)

Of course it will be the same for O_DIRECT. How can it be any different?
It's the same IOs being scheduled.

--Stephen

