Re: [linux-audio-dev] audio-disk thread interaction Was: Re: discussion about development overlap


Subject: Re: [linux-audio-dev] audio-disk thread interaction Was: Re: discussion about development overlap
From: Benno Senoner (sbenno_AT_gardena.net)
Date: Fri Sep 29 2000 - 14:09:25 EEST


Scott,
I thought the same a few months ago, but after running benchmarks,
I discovered that a single thread with reasonable read/write sizes
(256KB) is on par with, or faster than, a multithreaded approach when
reading/writing files. This is because reading relatively big chunks
at a time minimizes disk seeks, and not even the elevator can improve
things in this situation, especially since the 256KB blocks could be
located anywhere on the hard disk.
That means that as long as you write 256KB "fragments", a fragmented
disk does not matter, since you will always read 256KB blocks at a time.
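
(As a minimal sketch of the idea in C: one disk thread round-robins over
all open streams, reading one 256KB block per stream per pass. The
deliver() hand-off and the names are illustrative assumptions, not
actual EVO/ardour code.)

#include <stdio.h>
#include <stdlib.h>

#define BLOCK_SIZE (256 * 1024)  /* big blocks: at most one long seek per 256KB */

/* Placeholder for handing a finished block to that stream's ring buffer. */
static void deliver(int stream, const char *buf, size_t n)
{
    (void)stream; (void)buf; (void)n;
}

/* One disk thread services every stream in turn, one 256KB block each,
   so no stream starves and seeks stay rare relative to the data moved. */
static void disk_loop(FILE **streams, int nstreams)
{
    char *buf = malloc(BLOCK_SIZE);
    int active = 1;

    if (!buf)
        return;
    while (active) {
        active = 0;
        for (int i = 0; i < nstreams; i++) {
            size_t n = fread(buf, 1, BLOCK_SIZE, streams[i]);
            if (n > 0) {
                deliver(i, buf, n);
                active = 1;
            }
        }
    }
    free(buf);
}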

Sometimes the elevator may help a bit, but other times it can degrade
performance too, because of too fine a granularity.
(e.g. launch two big executables of 20MB each simultaneously:
it will take much longer than launching them in sequence.
This is because the OS tries to serve disk requests for both apps,
leading to high seek frequencies.)

But as I said in my other mail, I will leave both possibilities in the
plugin scheduler model:
running all disk callbacks within one single thread,
or running N separate threads.
So everyone is free to choose the method which works best for him.
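
(As a rough illustration, the two modes could look like the sketch
below. The disk_callback() interface and the MAX_BYTES_PER_CALLBACK cap
are assumed names, not the actual plugin scheduler API; the cap
corresponds to the per-callback transfer limit discussed in the quoted
mail further down.)

#include <stddef.h>
#include <pthread.h>

#define MAX_BYTES_PER_CALLBACK (256 * 1024) /* cap per turn, so no client stalls the others */

struct disk_client {
    void (*disk_callback)(void *ctx, size_t byte_limit);
    void *ctx;
};

struct client_list {
    struct disk_client *clients;
    int count;
    volatile int running;
};

/* Mode 1: one disk thread runs every client's disk callback in sequence;
   the byte limit keeps the "context switch" frequency between callbacks
   high enough that no client's buffer runs dry. */
static void *single_disk_thread(void *arg)
{
    struct client_list *list = arg;

    while (list->running)
        for (int i = 0; i < list->count; i++)
            list->clients[i].disk_callback(list->clients[i].ctx,
                                           MAX_BYTES_PER_CALLBACK);
    return NULL;
}

/* Mode 2: one thread per client; request ordering is left to the
   kernel's elevator. */
static void *per_client_thread(void *arg)
{
    struct disk_client *c = arg;

    for (;;)
        c->disk_callback(c->ctx, MAX_BYTES_PER_CALLBACK);
    return NULL;
}

static void start_per_client_threads(struct disk_client *clients, int count,
                                     pthread_t *tids)
{
    for (int i = 0; i < count; i++)
        pthread_create(&tids[i], NULL, per_client_thread, &clients[i]);
}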

Benno.

On Fri, 29 Sep 2000, Scott McNab wrote:
> Benno Senoner wrote:
>
> > Assume we are running EVO (which does disk IO) and ardour
> > at the same time:
> > We all know by now that the disk is used more efficiently when
> > using one single thread to issue IO requests, rather than multiple
> > threads which could cause unnecessary disk seeks because the
> > kernel's elevator can't predict everything.
>
> I strongly disagree with this statement. The whole point of using
> multiple IO threads and an elevator algorithm is so the operating system
> can order disk block read requests in the most efficient manner (by
> minimising head seeks).
>
> If you serialise all your IO into one thread then the OS can't do any of
> this and has no option but to perform accesses sequentially, which may
> seriously degrade performance if your disk is even slightly fragmented.
>
> > So in theory EVO and ardour could have their own private disk threads,
> > but if we impose some restrictions, like that for each disk callback,
> > the app cannot read/write more than X bytes from/to disk,
> > then we could live with one single disk thread which schedules
> > the disk callbacks of each app in sequence, giving a better overall
> > performance.
>
> This is crazy - userspace applications have _no_ idea of how the data is
> physically distributed on the disk and therefore have no way of knowing
> what is the 'best' way to order IO requests.
>
> If you have two threads that need data at the same time, then by all
> means schedule them at the same time! This way the OS can then decide
> that while performing a full-disk head seek required to pick up data for
> thread #1 it might as well pick up thread #2's data on the way.
>
> > The limitation of the transfer size will ensure an adequate
> > "context switch" frequency among disk callbacks of multiple clients,
> > ensuring that every callback is scheduled regularly, without big
> > stalls.
> > This is needed to keep the buffersizes of diskthreads reasonably
> > low without risking a dropout.
> >
> > So in Ardour's case, to be rtsoundserver-compliant,
> > the thread creation would need to be replaced with a message
> > sent via a lock-free structure to the disk "client" so that it knows
> > what operations to perform.
> > But this is a trivial change IMHO.
>
> Or just keep separate read threads and let the OS handle it. Period.
>
> Scott
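
(The change quoted above, replacing thread creation with a message sent
via a lock-free structure to the disk "client", could be sketched in C
as a single-writer/single-reader command FIFO like the one below. The
struct layout and names are assumptions for illustration, and real code
would add proper memory barriers or atomics around head and tail.)

#include <stddef.h>

#define CMD_QUEUE_SIZE 64   /* fixed, power-of-two number of pending commands */

struct disk_cmd {
    int    op;       /* e.g. read or write */
    long   offset;   /* file offset of the transfer */
    size_t nbytes;   /* transfer size, capped by the scheduler */
    void  *file;     /* opaque handle to the client's file */
};

/* Single-producer/single-consumer ring: the realtime side only ever
   writes 'head', the disk thread only ever writes 'tail', so no locks
   are needed as long as exactly one thread sits on each end. */
struct cmd_queue {
    struct disk_cmd slots[CMD_QUEUE_SIZE];
    volatile unsigned head;   /* written by the producer (realtime thread) */
    volatile unsigned tail;   /* written by the consumer (disk thread) */
};

/* Producer side: called from the realtime thread, never blocks. */
static int push_cmd(struct cmd_queue *q, const struct disk_cmd *cmd)
{
    unsigned next = (q->head + 1) % CMD_QUEUE_SIZE;

    if (next == q->tail)
        return 0;             /* queue full, caller retries later */
    q->slots[q->head] = *cmd;
    q->head = next;           /* publish the slot only after filling it */
    return 1;
}

/* Consumer side: polled by the disk thread between transfers. */
static int pop_cmd(struct cmd_queue *q, struct disk_cmd *out)
{
    if (q->tail == q->head)
        return 0;             /* nothing pending */
    *out = q->slots[q->tail];
    q->tail = (q->tail + 1) % CMD_QUEUE_SIZE;
    return 1;
}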

