[TUHS] C history question: why is signed integer overflow UB?
Clem Cole
clemc at ccc.com
Sun Aug 17 12:25:33 AEST 2025
below...
On Fri, Aug 15, 2025 at 10:18 AM Dan Cross <crossd at gmail.com> wrote:
> [Note: A few folks Cc'ed directly]
>
> Starting with the 1990 ANSI/ISO C standard, and continuing on to the
> present day, C has specified that signed integer overflow is
> "undefined behavior"; unsigned integer arithmetic is defined to be
> modular, and unsigned integer operations thus cannot meaningfully
> overflow, since they're always taken mod 2^b, where b is the number of
> bits in the datum (assuming unsigned int or larger, since type
> promotion of smaller things gets weird).
>
> But why is signed overflow UB? My belief has always been that signed
> integer overflow across various machines has non-deterministic
> behavior, in part because some machines would trap on overflow while
> others used non-2's-complement representations for signed integers and so
> the results could not be precisely defined: even if it did not trap,
> overflowing a 1's complement machine
> yielded a different _value_ than on 2's complement. And around the
> time of initial standardization, targeting those machines was still an
> important use case. So while 2's complement with silent wrap-around
> was common, it could not be assumed, and once machines that generated
> traps on overflow were brought into the mix, it was safer to simply
> declare behavior on overflow undefined.
>
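[A minimal sketch, not from the original message, illustrating the distinction quoted above: unsigned arithmetic wraps mod 2^b by definition, while signed overflow is undefined, so the program only talks about INT_MAX + 1 rather than computing it.]

#include <limits.h>
#include <stdio.h>

int main(void)
{
    /* Unsigned arithmetic is defined to wrap mod 2^b: UINT_MAX + 1 is 0. */
    unsigned int u = UINT_MAX;
    u = u + 1u;
    printf("unsigned wrap: %u\n", u);   /* prints 0 */

    /* Signed overflow is UB.  Historically, INT_MAX + 1 wrapped to INT_MIN
     * on a quiet 2's complement machine, produced a different value on a
     * 1's complement machine, and trapped on machines with overflow
     * checking; a modern optimizer may instead assume it never happens
     * (e.g. folding a test like (s + 1 > s) to true for signed s). */
    int s = INT_MAX;
    printf("INT_MAX: %d (adding 1 here would be undefined)\n", s);
    return 0;
}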
We need someone like Peter Darnell, Plauger, or some of the original ANSI
C weenies who had to argue this all through in those days, but I think you
caught the core issues.
This was just one of the many troublesome things that had to be worked
through. Until C89, the only ``official'' C was what Dennis shipped at any
given time, and that was a bit ephemeral, particularly as the PC and
microprocessor C compiler implementers started to play fast and loose with
the C syntax.
The core task of the C89 committee was to do as little harm as possible. My
memory is that Dennis was pretty cool and rarely played his trump card (far
pointers being one case I am aware of where he told the 8086 people to pound
sand), but the compromise the committee tended to use to kick the can down
the road and get the standard out the door was to make things UB, with the
hope that later versions could find a way to tighten things up.
Truth is, other language specs (like Fortran) had used that approach, so it
was not a bad idea.
So, back to your question: I cannot say what the actual cause of the
conflict over signed integer overflow was, but I bet it was that, since so
many compilers handled it in different ways, the committee did not have a
way to make a formal standard that would work, and they never found one
later.
FWIW: Remember, C89 tossed aside a lot of systems, like the PDP-11, when it
came to floating point. It says we are going to use IEEE 754. So just
because an old system used a format did not guarantee it would be accepted.