Re: [linux-audio-dev] 2.4.0test5-pre4-lowlat latencies benchmarked, 2.2.16+Ingo's LL patch = strange behaviour


Subject: Re: [linux-audio-dev] 2.4.0test5-pre4-lowlat latencies benchmarked, 2.2.16+Ingo's LL patch = strange behaviour
From: Andrew Morton (andrewm_AT_uow.edu.au)
Date: Wed Jul 26 2000 - 06:03:43 EEST


Iain Sandoe wrote:
>
> Hi Benno,
>
> I'm hoping that a comment you've made here might throw some light on the
> situation with LinuxPPC.
>
> > The very strange thing is that 2.2.16 although the latency behaviour is good
> > it shows high spikes (around 2msec) at regular intervals, and you see that
> these
> > are in green, which means that when our thread is in the cpu wasting loop,
> > something pre-empts us and chews away about 2msec.
>
> On LinuxPPC (2.2.17pre10) these spikes are 8 to 10ms on occasion - maybe
> every 19/20 seconds.
>
> The only thing that can pre-empt this thread is an IRQ (right?)
>
> So we thought that the 8-10ms might be an IRQ block. So I've just spent 2
> weeks doing a modified version of Jun Sun's IRQ latency tool... (posted the day
> before yesterday - URL http://www.drfruitcake.com/linux/irq_blk.html for any
> PPC audio types on this list).

Iain,

When a CPU is executing kernel code it will run to completion. That is,
the CPU won't run another process until it returns from the current
system call (or page fault), or until it voluntarily deschedules itself.

The problem is that there are circumstances under which the kernel
simply has a lot of computing to do. Take the case of:

        malloc(100 megs)
        touch the memory
        exit()

When exit() is running in the kernel it has to sit in a loop freeing up
25,000 pages of memory. Each page takes quite a lot of work, and the net
effect is that the exit() system call takes 25 milliseconds.
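
Something like the following user-space sketch shows the effect (this is
only an illustration, and the exact figure will vary with kernel and
hardware): the child allocates and touches about 100 megs and signals the
parent over a pipe just before calling _exit(); the parent then times how
long wait() takes to return, which is roughly the time the kernel spends
tearing the child's address space down.

        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>
        #include <unistd.h>
        #include <sys/types.h>
        #include <sys/wait.h>
        #include <sys/time.h>

        #define MEGS 100

        int main(void)
        {
                size_t len = (size_t)MEGS * 1024 * 1024;
                int fds[2];
                struct timeval t0, t1;
                pid_t pid;
                char c;

                pipe(fds);
                pid = fork();
                if (pid == 0) {
                        /* child: allocate ~100 megs and touch every page */
                        char *p = malloc(len);
                        if (p != NULL)
                                memset(p, 1, len);
                        write(fds[1], "x", 1);  /* about to exit */
                        _exit(0);
                }

                read(fds[0], &c, 1);            /* child is about to exit */
                gettimeofday(&t0, NULL);
                waitpid(pid, NULL, 0);          /* returns once the kernel has
                                                   freed the child's pages */
                gettimeofday(&t1, NULL);

                printf("exit took ~%ld us\n",
                       (long)(t1.tv_sec - t0.tv_sec) * 1000000L +
                       (t1.tv_usec - t0.tv_usec));
                return 0;
        }

The pipe write just before _exit() narrows the measured window to little
more than the kernel-side teardown, although scheduling noise still leaks
into the number.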

So it's not related to the duration of interrupt routines. The longest
ISR duration on a well-tuned Linux box is 60 microseconds.

The only way around this problem is:

1: Make the kernel preemptible. This could be done fairly easily on a
uniprocessor box with some trickery, but I don't think anyone's done it.

2: Put in selected preemption points where the kernel knows it's being a
CPU hog and so voluntarily reschedules periodically (see the sketch after
this list).

3: Add more CPUs. And keep adding, and keep adding.... This isn't a
very good solution.
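
In practice option 2 amounts to sprinkling checks like the one below into
the kernel's long-running loops. This is only a sketch: free_lots_of_pages()
and free_one_page() are made-up names, and the exact idiom depends on the
kernel version (the current->need_resched flag shown here is the 2.2/2.4
form).

        #include <linux/sched.h>
        #include <linux/mm.h>

        /* Illustrative only: a long page-freeing loop with a preemption
         * point added.  free_one_page() stands in for the real per-page
         * work. */
        static void free_lots_of_pages(struct page **pages, long count)
        {
                long i;

                for (i = 0; i < count; i++) {
                        free_one_page(pages[i]);

                        /*
                         * Preemption point: if some other process needs
                         * the CPU, reschedule now rather than hogging it
                         * for the rest of the loop.
                         */
                        if (current->need_resched)
                                schedule();
                }
        }

The fiddly part is that any spinlocks held across such a loop have to be
dropped before schedule() and retaken afterwards, which is why these points
have to be placed by hand.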


