On Sat, May 23, 2020 at 01:08:28PM -0400, Clem Cole wrote:
> So, C came along and was 'better than assembler' and allowed 'production
> quality code' to be written, but with the exception of the far pointer
> stuff, pretty much worked as dmr had defined it for the PDP-11. So code
> could be written to work between compilers and systems. When the 386 DOS
> extenders show up, getting rid of far, and making it a 32-bit based
> language like the Vax and 68000, C had won.
Certainly having a flat 32-bit compiler was eventually useful, but even
prior to that the impact of 'far' pointers wasn't always an issue.
For simple tasks, one simply ignored it (wrote without 'far') and then
compiled with either the small or large memory model. It was only if one
wanted to optimise the code that 'far' became an issue, and a lot of code
was never shipped, so didn't need to be so optimised.
Even a lot of the shipped code I worked on with those DOS based compilers
simply used large memory model, and ignored 'far'.
More of an issue was the segmented memory, and the fact that structures
couldn't be larger than 64K. For targeting DOS, compilers eventually offered
'huge' pointers, and possibly a 'huge' memory model, which hid the problem;
but these were of no use in 16-bit protected mode - which the embedded RT-OS
I was developing for at the time used.