
Subject: Re: [linux-audio-dev] Performance and Elegance? (Was: High Cost of
From: Jay Ts (jay_AT_toltec.metran.cx)
Date: Wed May 16 2001 - 13:06:57 EEST


> Paul Davis says:
> > processors have gotten faster, but applications are spending more real
> > clock time than ever dealing with the kernel in one way or another, so
> > why is this?

> John Regehr replied:
> [...] The answer, IIRC, is pretty simple: the OS
> must move a fair amount of data around and doesn't have much locality,
> so it's often limited by the memory subsystem and can't take advantage
> of modern clock speeds.
> [...]
> DRAM is still slow and chip clocks are through the roof.
> [...]
> I read somewhere that the relative difference in speed between a fast P4
> and main memory is about the same as the difference between an 8086 and
> a hard drive.

I've been skimming many of the messages here, and am not sure I'm totally
clear on this, so let me ask a simple question:

Is the context switch "slowness" related directly to RAM "slowness"?
Because if so, then there may be some light at the end of the tunnel.
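As a rough way to put a number on the context-switch side of that question, here is a sketch (mine, not from the thread) that mimics the classic lmbench technique: bounce one byte between two Linux processes over a pair of pipes, so each round trip forces at least two context switches. The iteration count and the interpretation are my assumptions, and interpreter overhead inflates the absolute figure:

```python
import os
import time

def pingpong(iterations=10000):
    """Roughly measure context-switch cost by bouncing one byte
    between two processes over a pair of pipes (lmbench-style).
    Returns approximate seconds per context switch."""
    r1, w1 = os.pipe()
    r2, w2 = os.pipe()
    pid = os.fork()
    if pid == 0:
        # child: echo each byte straight back to the parent
        for _ in range(iterations):
            os.read(r1, 1)
            os.write(w2, b"x")
        os._exit(0)
    start = time.perf_counter()
    for _ in range(iterations):
        os.write(w1, b"x")
        os.read(r2, 1)
    elapsed = time.perf_counter() - start
    os.waitpid(pid, 0)
    # each round trip is at least two context switches
    return elapsed / (2 * iterations)

if __name__ == "__main__":
    cost = pingpong()
    print("approx. per-switch cost: %.1f microseconds" % (cost * 1e6))
```

Because the two processes share almost no working set, a run like this also exercises exactly the cache-refill cost discussed below, which is part of why the measured number is so much worse than the raw cost of the kernel's switch code.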

Either way, I'd like to point out a few things relevant to the subject
of RAM speed.

First of all, it is possible to make fast RAM. Consider CPU registers
or level 1 cache: those are simply memory, and they run as fast as
the CPU can handle the data. Basically, that kind of RAM is built from
transistors, just like the CPU's logic, and can be made to run just as
fast. The technological limitation is mostly in the area of bus speeds,
that is, connecting fast memory to the CPU over a bunch of relatively
long wires. The workaround is to put the fast RAM on-chip, or very
close to the CPU.

So why are SIMMs and DIMMs so "slow"? That is mainly the result of
engineering tradeoffs. Commodity modules use dynamic RAM, which stores
each bit in a single transistor and capacitor; that makes it dense and
cheap, but slower than the static RAM used for caches. It's not like
the slowness of disk drives, which are limited by having moving
mechanical parts.

From the mid-1980s through the 1990s, memory speed was not favored by
manufacturers (and programmers, and hence users) as much as capacity,
low power consumption and heat dissipation, and some other factors. The
industry as a whole was under the illusion that slow memory could be
compensated for by a large, fast cache. That was much more true in 1985
than it is today, and the limitations of caching slow memory are really
starting to hurt. And not just us digital audio folks...
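One way to see how much a cache actually buys, and where it stops helping, is to chase pointers through a randomly ordered cycle at two sizes: one that fits comfortably in cache and one that spills out to main memory. This is only an illustrative sketch under my own assumptions (the sizes, the hop count), not a careful benchmark, and an interpreter's per-loop overhead narrows the gap you would see in C:

```python
import random
import time

def ns_per_load(n, hops=1_000_000):
    """Time pointer chasing through a random cycle of n slots.
    When n is small the cycle fits in cache; when it is large,
    most hops are cache misses that go out to DRAM.
    Returns approximate nanoseconds per dependent load."""
    order = list(range(n))
    random.shuffle(order)
    # build one big cycle so each load depends on the previous one,
    # defeating prefetching and out-of-order overlap
    nxt = [0] * n
    for i in range(n - 1):
        nxt[order[i]] = order[i + 1]
    nxt[order[-1]] = order[0]
    i = 0
    start = time.perf_counter()
    for _ in range(hops):
        i = nxt[i]
    return (time.perf_counter() - start) / hops * 1e9

if __name__ == "__main__":
    small = ns_per_load(1_000)       # a few KB: cache-resident
    large = ns_per_load(4_000_000)   # tens of MB: DRAM-bound
    print("cache-resident: %.0f ns/load, DRAM-bound: %.0f ns/load"
          % (small, large))
```

When the working set blows past the cache, each dependent load pays close to a full DRAM access, which is exactly the cost that a bigger cache was supposed to hide.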

Now we have cheap computers with room on the motherboard for 1.5 GB
of RAM, at about $.50 per megabyte. It is big and cheap, but how many
people need 1.5 GB of RAM? Most users can do pretty well with 64 MB.
Is the demand for memory capacity growing at the same rate as processor
speeds? I'm not sure it is - what do you think? Or how about this: is
the slowness of RAM hurting enough that people will pay more for faster
memory?

On that one, I think the answer is yes. First, because memory prices
have dropped very quickly over the past year (after being at a high
plateau for a couple of years, which was very unusual). Second, because
the "bleeding edge" applications such as desktop video that are pushing
the chip and computer manufacturers to ship faster computers also are
limited by slow memory. Third, because the chip and computer manufacturers
are still competing neck-and-neck to create faster computers, and they
really have to do whatever it takes to stay in the lead.

Consider: a reasonable benchmark used to be the Dhrystone; later,
people moved to bigger (not cache-resident) and more complex (graphics)
benchmarks, and maybe in the future the benchmarks will compare digital
video (and maybe even digital audio ;-) applications. Oh, hey, remember
that right now (or about a year ago, which was the last time I checked!)
Quake frame rate is considered by some to be a good way to compare the
speeds of computers! Whatever is used as a "reasonable benchmark"
in a few years is very likely going to end up measuring RAM speed.

So again, I'm saying that it may be wise to stay open-minded about
future improvements and innovations in hardware. They are as much of a
historical fact as the limitations being talked about here, and based
on historical data alone, we really don't know what is going to happen
either way.

Of course, there will still be massive speed improvements from a
monolithic (single-process) approach that avoids the scheduler (and
context switches) as much as possible, but I think *maybe* in the future
a multiprocess approach will not be such a huge performance hit.

- Jay Ts
jayts_AT_bigfoot.com



This archive was generated by hypermail 2b28 : Wed May 16 2001 - 13:28:35 EEST