[TUHS] Origins of the frame buffer device

Larry McVoy lm at mcvoy.com
Tue Mar 7 09:24:29 AEST 2023


On Mon, Mar 06, 2023 at 06:16:19PM -0500, Norman Wilson wrote:
> Rob Pike:
> 
>   As observed by many others, there is far more grunt today in the graphics
>   card than the CPU, which in Sutherland's timeline would mean it was time to
>   push that power back to the CPU. But no.
> 
> ====
> 
> Indeed.  Instead we are evolving ways to use graphics cards to
> do general-purpose computation, and assembling systems that have
> many graphics cards not to do graphics but to crunch numbers.
> 
> My current responsibilities include running a small stable of
> those, because certain computer-science courses consider it
> important that students learn to use them.
> 
> I sometimes wonder when someone will think of adding secondary
> storage and memory management and network interfaces to GPUs,
> and push to run Windows on them.

It's funny because back in my day those GPUs would have been called vector
processors, or at least I think they would.  Somewhere along the way,
"vector processor" became a dirty word, but GPUs are fine.

Color me confused.
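
To make the comparison concrete, here's a minimal sketch of the classic
SAXPY loop (y = a*x + y) written as a CUDA kernel -- the same loop a
Cray-style vector unit would have run as a few vector loads and a
multiply-add.  The block size, grid size, and use of managed memory are
just illustrative choices to keep the example short, not tuned for any
particular card.

// SAXPY (y = a*x + y): the canonical vector-processor workload,
// spelled the modern GPGPU way.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void saxpy(int n, float a, const float *x, float *y) {
    // One thread per element, roughly one "vector lane" each.
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];
}

int main(void) {
    const int n = 1 << 20;
    float *x, *y;
    // Unified (managed) memory keeps the host/device copies out of the sketch.
    cudaMallocManaged(&x, n * sizeof(float));
    cudaMallocManaged(&y, n * sizeof(float));
    for (int i = 0; i < n; i++) { x[i] = 1.0f; y[i] = 2.0f; }

    // Launch enough 256-thread blocks to cover all n elements.
    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);
    cudaDeviceSynchronize();

    printf("y[0] = %f\n", y[0]);   // expect 4.0
    cudaFree(x);
    cudaFree(y);
    return 0;
}

Build it with "nvcc saxpy.cu -o saxpy"; the kernel body is exactly the
loop body, with the loop index handed out across threads instead of
vector registers.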

About the only reason I can see to keep things divided between the CPU
and the GPU is battery power, or power consumption in general.  From
what little I know, GPUs are pretty power-hungry, so maybe they keep
them as optional devices so that people who don't need them don't pay
the power budget.

But even that seems suspect; I would think they could put some logic
in there that just cuts power to the GPU when you aren't using it,
though maybe that's harder than I think.

If it's not about power, then I don't get it: there are tons of transistors
waiting to be used, and they could easily plunk a bunch of GPU cores down
on the same die, so why not?  Maybe the development timelines are completely
different (I suspect not; I'm just grasping at straws).

