[TUHS] history of virtual address space
Warner Losh via TUHS
tuhs at tuhs.org
Tue Jan 20 10:05:34 AEST 2026
On Mon, Jan 19, 2026 at 12:45 PM ron minnich via TUHS <tuhs at tuhs.org> wrote:
> I was trying to remember how virtual address space evolved in Unix.
>
> When I joined the game in v6 days, processes had virtual 0->0xffff, with
> the potential for split I/D IIRC, getting you that bit of extra space. The
> kernel also had virtual 0->0xffff, also with split I/D. The kernel
> accessed user data with mov/load from previous address space instructions.
> But also on the 11, trap vectors chewed up a chunk of low physical memory,
> and the diode ROM took out some chunk of high physical. The bootstrap you
> set from the front panel started with many 1's.
>
> Later, when we had bigger address spaces on 32-bit systems, page 0 was
> made unavailable in the user's address space to catch NULL pointers. Did that
> happen on the VAX at some point, or later?
>
On 4.2BSD on the VAX, dereferencing 0 would segfault. I'm not sure when this
started. I am sure that on System V and 32V on the VAX, dereferencing 0 gave
0. There were many USENET threads about this in the mid-80s. I have first-hand
experience with 4.2BSD, but not the others.
> Later we got the "kernel is high virtual with bit 31 set" -- when did that
> first happen?
> At some point, the conventions solidify, in the sense that kernels have
> high order bits all 1s, and user has page 0x200000 and up; and user mode
> does not get any virtual address with high order bit set. My rough memory
> is that this convention became most common in the 90s.
>
I think that's a 32-bit thing as well. Part of it was driven by
architecture-specific concerns. BSD has linked its kernels at 0xc0000000
since, I think, 4BSD (I didn't check). The space between 0x80000000 and
0xbfffffff was sometimes kernel, sometimes user, depending on the
architecture and kernel implementation. FreeBSD does 4G/4G for 32-bit
kernels on PowerPC and i386 (though it used to have a 3G/1G split for
i386). 32-bit arm, though, still uses a 3G/1G user/kernel split. IIRC,
there were some that had a 2G/2G split (the classic 'high bit is kernel'),
but they escape me at the moment.
64-bit architectures make it super easy to have lots of upper bits reserved
for this or that, but that is beyond the scope of this post.
> So, for the pre-mmu Unix, what addresses did processes get, as opposed to
> the kernel? Did the kernel get loaded to, and run from, high physical? Did
> processes get memory "above trap vectors" and below the kernel?
>
16-bit Venix loaded programs at CS:0/DS:0, but CS and DS specified segments
well above the hardware interrupt vectors at the bottom and the ROM reset
vectors at the top. 16-bit Venix expected programs not to be naughty and set
their own DS/ES, but many did, to ill effect (since the first trap would set
them back on return).
> For the era of MMUs, I know the Data General 32-bit systems put user memory
> in "high virtual", which turned out to be really painful, as so many Unix
> programs used address 0 (unknown to their authors!). What other Unix
> systems used a layout where the kernel was in low virtual?
>
I've not seen any, but that doesn't mean they don't exist.
Warner
> I'm not even sure where to find these answers, so apologies in advance for
> the questions.
>