Re: [linux-audio-dev] Re: new preemptive kernel-patch from Montavista available


Subject: Re: [linux-audio-dev] Re: new preemptive kernel-patch from Montavista available
From: Benno Senoner (sbenno_AT_gardena.net)
Date: Fri Nov 24 2000 - 00:35:39 EET


On Thu, 23 Nov 2000, yodaiken_AT_fsmlabs.com wrote:
> On Thu, Nov 23, 2000 at 10:17:25AM -0800, Nigel Gamble wrote:
> > My work is definitely a work in progress and proof of concept targetted
> > towards 2.5. I want to show that these techniques, which have been used
> > successfully in at least two previous versions of real-time unix,
> > REAL/IX from MODCOMP and IRIX from SGI, are easily applicable to Linux,
> > also.
>
>
> It's amazing how two people can look at the same thing and see something completely
> different. I look at REAL/IX and IRIX and see them as definitive proof that this
> technique is a total disaster. Both operating systems bloated into impossible to
> maintain, buggy, slow, disasters that needed specialized and very expensive hardware
> to provide unimpressive RT.
>
> MODCOMP's marketing claims 100microseconds worst case from interrupt to driver activation (not process activation) and "typical" "in the 50 microsecond range on a 133 MHz Pentium based system"
> (not clear whether this is a off-the-shelf or MODCOMP hardware). Ordinary Linux
> give "typical" under 10 microseconds. Why is Linux performance so much better than the
> _marketing numbers_ for REAL/IX ? And I am willing to bet that Linux performance on
> real applications is stunningly better. Reason: this "succesful" technique imposes a
> huge cost that no amount of engineering effort can overcome.

I'm by no means an expert, but judging purely based on test results,
the preemption-based approach (even Andrew's patches, which are not as
intrusive as Ingo's) seems to be suitable to cover the needs of most
realtime multimedia apps (2-3 msec).

>
> Benno: would you like a version of your benchmark that uses the RTLinux realtime from
> user space signal handlers ? Cort's been looking at it, and he thinks the major change
> needed will be to get around your adjustment for Linux usleep(20000) sleeps for 30ms.

Hmm, in latencytest I do not use sleep(), since I use either the RTC or the
soundcard as a timing source.

He is probably referring to the latency-graph API
(http://www.linuxdj.com/latency-graph/),

where I included the small example testlatency.c,
which uses this lib to generate a latency diagram.
And in order to show the upper/lower range boundaries correctly, I made the
assumption that the kernel sleeps 30 msec instead of the 20 msec requested.

Basically testlatency.c is not meant as a benchmark; it was only meant as an
example to illustrate how to use the API.

Of course I'm curious to see what kind of performance RTLinux signal handlers
would deliver.

But for that we should measure performance under the same conditions as
when using latencytest (RTC or audio IRQ).

So what would need to be rewritten is latencytest
(http://www.gardena.net/benno/linux/audio)
in order to support RTLinux signal handlers, but that code is a piece of crap
and not worth touching anymore.

My goal was to rewrite latencytest from scratch using latency-graph to record
and display the data, plus perhaps integrating "pluggable" stress-test routines
(e.g. where one can plug in his own script / executable to perform
additional stress tests).

But unfortunately for now I'm hopelessly overbooked .. :-(

On the other hand, if you look at the latencytest outputs, almost all spikes
occur while waiting in the write() call (writing to the soundcard), so
basically the non-determinism happens while we are sleeping.

I'm not familiar with the RTLinux userspace signals model, but in the case
of latencytest you would need a method to wake up the thread after
one fragment has been processed.
Would this involve the standard write() call (with /dev/dsp as the device) too?
If yes, aren't we suffering from userspace scheduling latencies at this point
too? I mean, if the kernel is just doing some lengthy operation
(like traversing zillions of inodes), can we preempt that piece of code, even
with IRQs disabled?
Correct me please if I'm wrong.

As you hard-RT guys know, even in multimedia we are interested in
"worst case" (*) latencies.
(*) worst case = we do not encounter a value higher than this for
several days (as opposed to "never encounter ..." in RTLinux).
 
Benno.



This archive was generated by hypermail 2b28 : Fri Nov 24 2000 - 00:08:05 EET