Re: [linux-audio-dev] Re: costs of IPC

Subject: Re: [linux-audio-dev] Re: costs of IPC
From: Abramo Bagnara (abramo_AT_alsa-project.org)
Date: Tue May 15 2001 - 23:57:27 EEST


Paul Davis wrote:
>
> abramo - i've just noticed a possibly serious flaw with your test
> program.
>
> in real life, there is (period_duration - cpu_time_for_period) after the
> completion of a "proc" cycle during which the processor is free to run
> other threads.
>
> in the test, i notice that for the sp case, my processor load on 1 cpu
> goes to 100% and stays there for the duration of the test (not
> surprising, it's running SCHED_RR and never sleeps). this means that you
> don't really have enough cache pollution effects from other processes
> at all in this case.
>
> i can't test the running-on-UP case here because i have a dual CPU
> system, but there is a similar effect there as well. ctx code will own

Just boot it with "nosmp" parameter.
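(For example, assuming LILO is the boot loader, you can type "linux nosmp"
at the boot prompt or add append="nosmp" to lilo.conf; the SMP kernel then
comes up on a single CPU.)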

> the CPU as long as it has a thread ready to run, which is essentially
> always. therefore no non-ctx thread ever gets to use the CPU, and the
> only cache-driven effects you are seeing come from the data that
> *doesn't* fit into the cache. this would explain the slightly
> "catastrophic" curve that steve showed - as soon as the pollution data
> size exceeds the cache space "dedicated" to ctx, things get *much*
> worse. but in a h/w-driven system, it's quite likely that the effective
> cache size would be smaller because other threads would have used it
> between periods.
>
> do you see what i'm getting at? does it make sense?
>
> if i'm right, then in a real world scenario (some other process has
> used the cache between invocations of proc()), you could very well
> see dramatic slowdowns with memory footprints notably smaller than
> steve's curves suggest.

If the memory footprint threshold becomes smaller, the difference between
sp and mp decreases. I've just verified this effect with the attached ctx:
if you increase the worker area size you'll notice it too.
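
For what it's worth, the kind of worker I have in mind looks roughly like
the sketch below. It is only an illustration of a cache-polluting work
loop, not the attached ctx.c; WORK_AREA_SIZE and CACHE_LINE are made-up
parameters.

/* Hypothetical cache-polluting worker (illustration only, not ctx.c).
 * Each period it touches one byte per cache line over WORK_AREA_SIZE
 * bytes, evicting that much cache from under the other threads. */
#include <stdlib.h>

#define WORK_AREA_SIZE (256 * 1024)   /* bytes touched per period (made up) */
#define CACHE_LINE     64             /* assumed cache line size */

static char *work_area;

void worker_init(void)
{
        work_area = calloc(1, WORK_AREA_SIZE);
}

void worker_period(void)
{
        size_t i;

        for (i = 0; i < WORK_AREA_SIZE; i += CACHE_LINE)
                work_area[i]++;
}

The only point is that the pollution per period scales with the size of
the area the worker touches.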

> but i could, as usual, be missing something obvious. you could test
> this by using usleep() to put the threads to sleep, measuring the
> cycles they actually sleep for, and subtracting that from the
> total. you'd also have to ensure a realistic work-load from "other
> processes", such as a GUI for each component (and thus the X server)
> that wants to update some screen displays.

Here the problem is that a "realistic work-load" is hard to define,
because what matters is its memory footprint. Also consider that I don't
think the GUI would run every period. And consider that usleep() cannot
sleep for less than a jiffy without busy-waiting.
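
(A minimal sketch of what measuring the real sleep time looks like; on a
stock kernel with HZ=100 the 1 ms request below typically comes back as
10-20 ms, which is the jiffy granularity problem I mean.)

/* Sketch: measure how long usleep() really sleeps, so the slept time
 * could be subtracted from the measured total as suggested above. */
#include <stdio.h>
#include <sys/time.h>
#include <unistd.h>

static double now_us(void)
{
        struct timeval tv;

        gettimeofday(&tv, NULL);
        return tv.tv_sec * 1e6 + tv.tv_usec;
}

int main(void)
{
        double t0 = now_us();

        usleep(1000);                   /* ask for 1 ms */
        printf("asked for 1000 us, slept %.0f us\n", now_us() - t0);
        return 0;
}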

I think we might consider one worker as standing in for the GUI (at least
on UP). The GUI's effect is surely less than that: if I'm not wrong, the
average cache pollution effect can be measured in cache lines per time
unit, and although the GUI may have a larger footprint, its run frequency
is lower.
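
(With made-up numbers: a worker touching 64 KB every 1.5 ms period
pollutes at roughly 40 MB/s, while a GUI repainting 1 MB once every 100 ms
pollutes at only about 10 MB/s, so per time unit the worker is the heavier
case even though its footprint is smaller.)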

-- 
Abramo Bagnara                       mailto:abramo_AT_alsa-project.org

Opera Unica                            Phone: +39.546.656023
Via Emilia Interna, 140                48014 Castel Bolognese (RA) - Italy

ALSA project http://www.alsa-project.org It sounds good!


ctx.c

