[TUHS] history of virtual address space
ron minnich via TUHS
tuhs at tuhs.org
Tue Jan 20 05:44:50 AEST 2026
I was trying to remember how virtual address space evolved in Unix.
When I joined the game in v6 days, processes had virtual 0->0xffff, with
the potential for split I/D IIRC, getting you a second 64K of data space.
The kernel also had virtual 0->0xffff, also with split I/D. The kernel
accessed user data with the move-from/to-previous-address-space
instructions (MFPI/MFPD and MTPI/MTPD).
But also on the 11, trap vectors chewed up a chunk of low physical memory,
and the diode ROM took out some chunk of high physical. The bootstrap
address you toggled in from the front panel started with many 1 bits.
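(A rough sketch of the idea in C rather than PDP-11 assembler, since the
kernel couldn't just dereference a user pointer when the two spaces were
separate; fuword here is a stand-in stub so the sketch compiles -- on the
real machine it's an MFPI with "previous mode" set to user, and the helper
name is from memory, not copied from the V6 source.)

    /* Stand-in for the kernel's fetch-user-word helper.  On the real
     * PDP-11 this is an MFPI on the user address, executed in kernel
     * mode with previous mode set to user. */
    static int
    fuword(const int *uaddr)
    {
        return *uaddr;      /* stub: one flat address space in this demo */
    }

    /* Copy nwords of user data into a kernel buffer, a word at a time,
     * because user and kernel each have their own 0->0xffff space. */
    static int
    copyin_words(const int *uaddr, int *kbuf, int nwords)
    {
        int i, w;

        for (i = 0; i < nwords; i++) {
            w = fuword(uaddr + i);
            if (w == -1)    /* V6-style "the fetch faulted" convention */
                return -1;
            kbuf[i] = w;
        }
        return 0;
    }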
Later, when we had bigger address spaces on 32-bit systems, page 0 was
made unavailable in the user's address space to catch NULL pointer
dereferences. Did that happen on the VAX at some point, or later?
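(Nothing system-specific here, just the mechanics: with page 0 unmapped,
the classic forgotten-initialization bug below dies with SIGSEGV right at
the dereference; on a layout that maps page 0 readable it quietly reads
whatever byte happens to be there.)

    #include <stdio.h>

    int
    main(void)
    {
        const char *name = NULL;    /* never got set */

        /* With page 0 unmapped this faults here, right at the bug;
         * with page 0 mapped it "works" and prints garbage. */
        printf("first byte: %d\n", name[0]);
        return 0;
    }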
Later we got the "kernel is high virtual with bit 31 set" -- when did that
first happen?
At some point the conventions solidified, in the sense that kernel
addresses have the high-order bits all 1s, user space gets page 0x200000
and up, and user mode never sees a virtual address with the high-order bit
set. My rough memory is that this convention became most common in the 90s.
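(To make sure I'm describing the same convention everyone else remembers,
here it is as a sketch; the constants are illustrative, not taken from any
one system.)

    #include <stdint.h>

    #define USER_BASE   0x00200000u   /* first user page in this layout */
    #define KERNEL_BASE 0x80000000u   /* bit 31 set => kernel addresses */

    /* kernel text/data/stacks live in the top half */
    static int
    addr_is_kernel(uint32_t va)
    {
        return va >= KERNEL_BASE;
    }

    /* user mode only ever sees [USER_BASE, KERNEL_BASE) */
    static int
    addr_is_user(uint32_t va)
    {
        return va >= USER_BASE && va < KERNEL_BASE;
    }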
So, for pre-MMU Unix, what addresses did processes get, as opposed to
the kernel? Did the kernel get loaded to, and run from, high physical? Did
processes get memory "above trap vectors" and below the kernel?
For the era of MMUs, I know the Data General 32-bit systems put user memory
in "high virtual", which turned out to be really painful, as so many Unix
programs used address 0 (unknown to their authors!). What other Unix
systems used a layout where the kernel was in low virtual?
I'm not even sure where to find these answers, so apologies in advance for
the questions.