Re: [linux-audio-dev] low-latency scheduling patch for 2.4.0


Subject: Re: [linux-audio-dev] low-latency scheduling patch for 2.4.0
From: yodaiken_AT_fsmlabs.com
Date: Mon Jan 29 2001 - 17:44:10 EET


On Sun, Jan 28, 2001 at 10:17:46AM -0600, Joe deBlaquiere wrote:
> A recent example I came across is in the MTD code which invokes the
> erase algorithm for CFI memory. This algorithm spews a command sequence
> to the flash chips followed by a list of sectors to erase. Following
> each sector address, the chip will wait 50 usec for another address;
> after which timeout it begins the erase cycle. With a RTLinux-style
> approach the driver is eventually going to fail to issue the command in
> time. There isn't any logic to detect and correct the preemption case,
> so it just gets confused and thinks the erase failed. Ergo, RTLinux and
> MTD are mutually exclusive. (I should probably note that I do not intend
> this as an indictment of RTLinux or MTD, but just an example of why
> preemption breaks the Linux driver model).

Only if your RTLinux application is running. In other words, you cannot
commit more than 100% of CPU time and expect to deliver.
I think one of the common difficulties with real-time is that time-shared
systems with virtual memory accustom people to elastic resource
limits, while real-time imposes unforgiving time limits.

>
> So what is the solution in the preemption case? Should we re-write every
> driver to handle the preemption? Do we need a cli_yes_i_mean_it() for
> the cases where disabling interrupts is _absolutely_ required? Do we
> push drivers like MTD down into preemptable-Linux? Do we push all
> drivers down?
> In the meantime, fixing the few places where the kernel spends an
> extended period of time performing a task makes sense to me. If you're
> going to be busy for a while it is 'courteous' to allow the scheduler a
> chance to give some time to other threads. Of course it's hard to know
> when to draw the line.

Or what the tradeoff is, or whether a deadlock will follow.
step 1: memory manager thread frees a few pages and is courteous
step 2: bunch of thrashing and eventually all processes stall
step 3: go to step 1

alternative
step 1: memory manager thread frees enough pages for some processes to
        advance to termination
step 2: all is well

and you can make up 100 similar scenarios. This is why "preemptive"
OSes tend to add such abominations as "priority inheritance", which
makes failure cases rarer and harder to diagnose, or complex schedulers
that spend a significant fraction of CPU time trying to figure out
which process should advance, or ...

>
> So now I am starting to wonder about what needs to be profiled. Is there
> a mechanism in place now to measure the time spent with interrupts off,
> for instance? I know this has to have been quantified to some extent, right?
>
> --
> Joe deBlaquiere
> Red Hat, Inc.
> 307 Wynn Drive
> Huntsville AL, 35805
> voice : (256)-704-9200
> fax : (256)-837-3839

-- 
---------------------------------------------------------
Victor Yodaiken 
Finite State Machine Labs: The RTLinux Company.
 www.fsmlabs.com  www.rtlinux.com



This archive was generated by hypermail 2b28 : Tue Jan 30 2001 - 17:57:53 EET