Re: [linux-audio-dev] Re: costs of IPC


Subject: Re: [linux-audio-dev] Re: costs of IPC
From: Scott McNab (sdm_AT_fractalgraphics.com.au)
Date: Thu May 17 2001 - 04:46:24 EEST


Jörn Nettingsmeier wrote:
>
> Steve Harris wrote:
> >
> > On Wed, May 16, 2001 at 01:47:17PM +0200, Abramo Bagnara wrote:
> > > > http://plugin.org.uk/spmp/worker-diff3.png
> > >
> > > The difference peak corresponds to total_footprint == cache_size.
> >
> > That was my conclusion too. I can test this on our 6x 700MHz 2MB-cache
> > machine when it arrives. My opinion is that this is what desktop machines
> > of the future will resemble, though with faster RAM.
>
> here i disagree.
> while it would be nice if it were so, i don't see that happening.
> right now, 2-way smp is the only configuration where power scales
> with cost for home users.
> and even there... how long have we been waiting for that dual
> 1.3gig athlon chipset?
>
> no, unless the big silicon giants switch over to multiple cpus per
> chip, we won't have more than 2. and even that is questionable, see
> the p4. smp seems the last thing the marketing folk of intel and amd
> care about.
>
> as to cache size, i think 2 meg is out of the question for consumer
> hardware. the xeons are painfully expensive compared to 256k
> coppermines, and why should that ever change? (btw, do the xeons
> run the 2 megs at full clock speed?)
> somebody pointed out earlier the widening gap between processor
> and ram speed.
>
> it seems that larger smp machines are waaay too tricky to program
> and make real use of to ever hit the mass market (= become
> affordable for power users).
> does windows xp even support 2-way smp? not that i care, but this
> fact is crucial to the market prices of such hardware.

Large SMP configurations are only tricky to program when your
architecture is not geared for them. If LAAGA were designed around
separate processes/threads per audio component, the parallelism
would evolve naturally: if you have 4 soft-synths running, you
automatically use 4 CPUs (if you've got them).
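
To illustrate (a minimal sketch in plain C with POSIX threads;
synth_run, NUM_SYNTHS and everything else here are invented for the
example, not LAAGA code):

/* One kernel thread per audio component: the scheduler is then free
 * to spread the components across however many CPUs the box has. */
#include <pthread.h>
#include <stdio.h>

#define NUM_SYNTHS 4   /* pretend four soft-synths are loaded */

/* stand-in for a real soft-synth's processing loop */
static void *synth_run(void *arg)
{
    int id = *(int *)arg;
    printf("synth %d rendering on its own thread\n", id);
    /* ... render audio blocks here until told to stop ... */
    return NULL;
}

int main(void)
{
    pthread_t thread[NUM_SYNTHS];
    int id[NUM_SYNTHS];
    int i;

    for (i = 0; i < NUM_SYNTHS; i++) {
        id[i] = i;
        pthread_create(&thread[i], NULL, synth_run, &id[i]);
    }
    for (i = 0; i < NUM_SYNTHS; i++)
        pthread_join(thread[i], NULL);
    return 0;
}

Build with "gcc synths.c -lpthread". On a 4-way machine the four
threads can genuinely run in parallel; on a uniprocessor the same
binary still works, just time-sliced. The application code doesn't
change either way, which is the point.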

It seems to me that this debate is a little like the one between
DOS and Windows users when Windows 95 first came out: on one side
you had Windows users preaching the virtues of multi-tasking; on
the other side you had hard-core DOS users arguing that the extra
overhead of Windows just slowed everything down and that you could
do everything faster in DOS. Six years later, no one would dream
of using DOS for anything of significance. The hardware abstraction
provided by the operating system greatly outweighs the slight
performance hit you might avoid by bypassing the operating system
altogether.

The same debate occurred when virtual memory was first introduced
on the 386: sure, it slowed everything down by adding a cycle or
two to every memory access, but in the end the benefit vastly
outweighed the cost.

My point is that the operating system's role in life is to make sure
user programs can make maximum use of the available resources.
If you want to take advantage of multiple CPUs, there is no option
but to use the operating system's facilities for doing so.
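
For example (a tiny sketch; sysconf(_SC_NPROCESSORS_ONLN) is a
common glibc/Linux extension, nothing LAAGA-specific), a program can
ask the kernel how many CPUs are actually online instead of
hard-coding an assumption about the hardware:

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* the kernel, not the application, knows the machine */
    long ncpus = sysconf(_SC_NPROCESSORS_ONLN);
    printf("CPUs online: %ld\n", ncpus);
    return 0;
}

A program that sizes its thread pool from this value runs unchanged
on 1, 2 or 6 CPUs; the operating system facility absorbs the
hardware difference.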

At the moment this seems like an extra cost for little gain for
"most" people (much as virtual memory once did), but in five years'
time it will be essential, since the speed of light is quickly
putting an upper limit on single-threaded processing.

Manufacturers are already realising that parallelism is the only
way to keep satisfying Moore's Law, with announcements of
SMP-on-a-chip processors already in the pipeline.

Scott

