[TUHS] Origins of the frame buffer device

arnold at skeeve.com
Tue Mar 7 22:08:21 AEST 2023


Larry McVoy <lm at mcvoy.com> wrote:

> It's funny because back in my day those GPUs would have been called vector
> processors, at least I think they would.  It seems like somewhere along
> the way, vector processors became a dirty word but GPUs are fine.
>
> Color me confused.

I think vector processing is used for things like

	for (i = 0; i < SOME_CONSTANT; i++)
		a[i] = b[i] + c[i];

that is, vectorizing general-purpose code. GPUs are pretty specialized
SIMD machines which sort of happen to be useful for certain kinds
of parallelizable general computations, like password cracking.

Today there are both standardized and proprietary ways of programming
them.
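
As a rough sketch of what that looks like in practice: in CUDA (one of
the proprietary routes; OpenCL and SYCL are standardized counterparts),
the loop above becomes a kernel where each GPU thread handles one
element. The kernel name, pointer names, and block size below are just
illustrative, not from any particular codebase, and d_a, d_b, d_c are
assumed to be device pointers already allocated and filled:

	/* each thread computes one a[i] = b[i] + c[i]; no explicit loop */
	__global__ void vadd(const float *b, const float *c, float *a, int n)
	{
		int i = blockIdx.x * blockDim.x + threadIdx.x;
		if (i < n)
			a[i] = b[i] + c[i];
	}

	/* launch enough 256-thread blocks to cover all n elements */
	vadd<<<(n + 255) / 256, 256>>>(d_b, d_c, d_a, n);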

> About the only reason I can see to keep things divided between the CPU
> and the GPU is battery power, or power consumption in general.  From
> what little I know, it seems like GPUs are pretty power thirsty so 
> maybe they keep them as optional devices so people who don't need them
> don't pay the power budget.
>
> But even that seems suspect, I would think they could put some logic
> in there that just doesn't feed power to the GPU if you aren't using
> it but maybe that's harder than I think.

You're on target, not just in the GPU world but also in the CPU
world.  Modern Intel CPUs have a lot of circuitry for turning power
consumption up and down dynamically.

Modern-day CPU development is much harder than we software types generally
realize. I worked at Intel as a software guy (bad juju there, let me
tell you!) and learned a lot about it, from the outside. For a given
x86 microarchitecture, the journey from planning until it's in the box
in the store is something like 5+ years. These days it may be even longer;
I left Intel 7.5 years ago.

Arnold
