Re: [LAD] the future of display/drawing models (Was: Re: [ANN] IR: LV2 Convolution Reverb)

From: David Olofson <david@email-addr-hidden>
Date: Thu Feb 24 2011 - 03:24:04 EET

On Wednesday 23 February 2011, at 23.47.33, Dominique Michel
<dominique.michel@email-addr-hidden> wrote:
[...]
> Another problem is the hardware. All PC video cards are video driven.
> That implies that the card has to refresh the whole screen in order to
> change one pixel. That is not old technology, that is PC technology. At
> the same time as the first PCs, there were computers like the Amiga or
> the Atari.

Though it would be theoretically possible to do partial refreshing of some
types of displays, that would be very, very implementation specific - and
quite pointless. In applications where you care about frequent updates with
accurate timing, you'll usually want that for the whole screen anyway. In
other applications, you just use a frame buffer and some $2 of RAMDAC hardware
to repeatedly pump that out in some standard serialized format.
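
Just to illustrate the principle (a conceptual sketch, not real driver code;
send_to_display() is a made-up stand-in for the DAC side):

/* Conceptual sketch: what the scan-out side of a frame buffer display
 * effectively does. The whole buffer is streamed out every refresh,
 * regardless of how many pixels actually changed. */
#include <stdint.h>

#define WIDTH	640
#define HEIGHT	480

extern void send_to_display(uint32_t pixel);	/* hypothetical DAC output */

void scan_out(const uint32_t framebuffer[HEIGHT][WIDTH])
{
	for(int y = 0; y < HEIGHT; ++y)
		for(int x = 0; x < WIDTH; ++x)
			send_to_display(framebuffer[y][x]);
	/* ...and this gets called again some 60 times per second, forever. */
}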

This has nothing whatsoever to do with the way graphics is rendered.

(Well, true vector displays as seen in some ancient arcade games would be a
gray zone - but I believe even those had some sort of "frame" buffer
somewhere; they just stored a table of coordinates rather than a 2D array of
color codes.)

> In the Amiga, the video card was vectorial; to change one pixel, all
> that was needed was the new pixel value and its x,y coordinates.

I still do exactly that in various projects, via SDL, on Windows, Linux, Mac
OS X and a dozen other platforms. It works with OpenGL and Direct3D as well,
though it can be a bit tricky to get right, as buffering and page flipping can
be set up in a number of different ways.
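
For illustration, here's a minimal sketch of that "value plus x,y" model
using the SDL 1.2 API (my example, not lifted from any actual project;
assumes a 32 bpp software surface, and error checking is omitted):

#include "SDL.h"

static void put_pixel(SDL_Surface *s, int x, int y, Uint32 color)
{
	if(SDL_MUSTLOCK(s))
		SDL_LockSurface(s);
	/* Assumes a 32 bpp surface; 'pitch' is in bytes. */
	Uint32 *row = (Uint32 *)((Uint8 *)s->pixels + y * s->pitch);
	row[x] = color;
	if(SDL_MUSTLOCK(s))
		SDL_UnlockSurface(s);
}

int main(int argc, char *argv[])
{
	SDL_Init(SDL_INIT_VIDEO);
	SDL_Surface *screen = SDL_SetVideoMode(640, 480, 32, SDL_SWSURFACE);
	put_pixel(screen, 100, 50, SDL_MapRGB(screen->format, 255, 0, 0));
	/* Push just the changed area to the display. */
	SDL_UpdateRect(screen, 100, 50, 1, 1);
	SDL_Delay(2000);
	SDL_Quit();
	return 0;
}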

> To change a part of the screen, the Amiga used vectorial objects called
> sprites.

The sprites were just tiny hardware overlays, not actually changing anything
persistently - which is the whole point of them. Lots of restrictions,
though: there were only 8 of them, although you could multiplex them
vertically.

> So, even for complex visual objects, the computational time was much
> lower than with the video approach, and the 2D on such old machines is
> still competitive with the 2D on the most powerful PCs of today.

As someone who's done quite a bit of to-the-metal graphics programming on the
Amiga and PC/VGA, as well as the usual DirectDraw, GDI, X11, OpenGL etc, I
don't quite see what you mean here. Even the C64 did pretty much what we do
today; it just had a lot less data to move around!

Indeed, the C64, Amiga (and VGA) had hardware scrolling, and the former two
had those hardware sprites. Those features could save loads of cycles - but
only in some very special cases. Many Amiga trackers used hardware scrolling
of a single bitplane for low-cost scrolling of the pattern view, but that's a
clever trick with many limitations. The sprites were used for slider knobs
and VU meters, but as there were only 8 sprite channels, that required some
clever coding in any non-trivial application. And, it would actually have a
*higher* cost (due to DMA stealing CPU cycles, among other things), except
for the moments when the user is actually dragging a knob...!

So, for the most part, it was the usual pixel pushing we're still doing today
- and not only that; it was dog-slow and awkward due to the bitplane memory
layout: for each pixel you had to twiddle individual bits in multiple
locations. Same deal with anything before AGA "chunky pixels", 256-color VGA
and the HighColor/TrueColor era.
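
A rough sketch of the difference (my illustration, not actual Amiga code; a
hypothetical 4-bitplane layout versus a chunky one-byte-per-pixel buffer):

/* Setting one pixel in a 4-bitplane layout means touching one bit in each
 * of four separate memory areas; chunky mode is a single store. */
#include <stdint.h>

#define WIDTH		320
#define DEPTH		4			/* 4 bitplanes = 16 colors */
#define ROW_BYTES	(WIDTH / 8)

/* plane[p] points to bitplane p; 'color' is a 4-bit palette index. */
void plot_planar(uint8_t *plane[DEPTH], int x, int y, unsigned color)
{
	int byte = y * ROW_BYTES + (x >> 3);
	uint8_t mask = 0x80 >> (x & 7);
	for(int p = 0; p < DEPTH; ++p)
	{
		if(color & (1u << p))
			plane[p][byte] |= mask;	/* read-modify-write, per plane */
		else
			plane[p][byte] &= ~mask;
	}
}

/* The chunky equivalent: one store, no bit twiddling. */
void plot_chunky(uint8_t *fb, int x, int y, uint8_t color)
{
	fb[y * WIDTH + x] = color;
}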

The Amiga had the blitter of course, but that's just a simple precursor of the
3D accelerators we have now. That is, it did the same thing as you'd do with
the CPU, only a bit faster (provided you were on the standard 7.14 MHz 68000)
and more restricted.

> At that time, 3D was almost non-existent. To develop 3D capabilities,
> most of the manufacturers' effort was spent on improving the video-based
> cards. Now, the situation is that the 2D part of a video card is so
> small that the manufacturers are considering removing it and using the
> 3D part to get the 2D from the card.

So, what's wrong with that? The alternatives are to use ancient, limited 2D
APIs, or tax the CPU with custom software rendering. 2D rendering APIs
essentially just cover random crippled subsets of 3D accelerator
functionality.

I long for the day when OpenGL (or similar) is the single, obvious answer to
any realtime graphics rendering need, so we can just forget about all this
nonsense of rectangular blits and limited or no blending!

(And this is coming from some weirdo who actually likes to play around with
software rendering and other low level stuff...! :-)

> I don't get the advantage of this approach for a workstation.

It's just so much quicker and easier to get the job done! As a bonus, even a
dirt cheap integrated GPU will get the job done many times faster than a
software implementation, and without stealing cycles from your DSP code.

> A workstation is not about 3D gaming but about getting some work done.

2D rendering is just a subset of 3D... Where does "3D" start? Non-rectangular
blits? Scaling? Rotation? That's all basic 2D stuff, and a 3D accelerator
doesn't add much more than z-buffering and perspective-correct transforms on
that level - and you can disable those to save the occasional GPU cycle.
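
For example, a classic 2D blit in fixed-function OpenGL is just a textured
quad with the "3D" features switched off (a sketch only; the 800x600 window
size is an assumption):

#include <GL/gl.h>

void blit(GLuint texture, float x, float y, float w, float h)
{
	glMatrixMode(GL_PROJECTION);
	glLoadIdentity();
	glOrtho(0, 800, 600, 0, -1, 1);	/* pixel coordinates, y down */
	glMatrixMode(GL_MODELVIEW);
	glLoadIdentity();

	glDisable(GL_DEPTH_TEST);	/* no z-buffering needed for 2D */
	glEnable(GL_TEXTURE_2D);
	glBindTexture(GL_TEXTURE_2D, texture);
	glEnable(GL_BLEND);		/* alpha blending for free, unlike old 2D APIs */
	glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

	glBegin(GL_QUADS);
	glTexCoord2f(0, 0); glVertex2f(x, y);
	glTexCoord2f(1, 0); glVertex2f(x + w, y);
	glTexCoord2f(1, 1); glVertex2f(x + w, y + h);
	glTexCoord2f(0, 1); glVertex2f(x, y + h);
	glEnd();
}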

> 3D cards are very hungry for electricity, and they will be overkill for
> anyone who is not working on some kind of 3D development. The
> electricity providers will certainly like them very much, but my wallet
> and the environment don't like them.

A CPU consumes a lot more power doing the same job. Actually, it can't even do
the job properly unless you want it to take forever, and it'll still be much
slower. Dedicated hardware makes all the difference in the world when it comes
to efficiency.

Sure, my (now "ancient") rig burns around 1000 W when working at full speed -
but that means blending transformed textures all over my 2560x1600 screen at a
few THOUSAND frames per second! Obviously, it only takes a tiny, tiny fraction
of that power to get your average GUI application to feel responsive. And,
unless you're scrolling or somehow animating major areas of the screen all the
time, you don't even need that.

> So, I think that a complete discussion on that matter should include
> the hardware part, that is, how to make power- and computationally
> efficient 2D video cards.

I don't see how one could realistically design anything that'll come close to
a down-clocked low end 3D accelerator in power efficiency. What are you going
to remove, or implement more efficiently...?

Also, 3D accelerators are incredibly complex beasts, with ditto drivers.
(Partly because of many very clever optimizations that both save power and
increase performance!) But, hardcore gamers and other power users need or
want them, so they get developed no matter how insanely overkill and
pointless they may seem. As a result, slightly downscaled versions of that
technology are available dirt cheap to everyone. Why not just use it and be
done with it?

-- 
//David Olofson - Consultant, Developer, Artist, Open Source Advocate
.--- Games, examples, libraries, scripting, sound, music, graphics ---.
|   http://consulting.olofson.net          http://olofsonarcade.com   |
'---------------------------------------------------------------------'