[TUHS] To NDEBUG or not to NDEBUG, that is the question

steve jenkin via TUHS tuhs at tuhs.org
Sat Oct 18 11:44:02 AEST 2025


This thread, responding to the original, has moved to COFF, as it is not about Early Unix.
============================================================


> On 17 Oct 2025, at 22:42, Aharon Robbins via TUHS <tuhs at tuhs.org> wrote:
> 
> Now, I can understand why assert() and NDEBUG work the way they do.

> Particularly on the small PDP-11s on which C and Unix were developed,
> it made sense to have a way to remove assertions from code that would
> be installed for all users.


How many computing workloads are now CPU limited,
and can’t afford run-time Sanity Checking in Userland?
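
For anyone skimming, a minimal sketch of the mechanism in question
(my own example, with made-up names and values): the same sanity check
is either compiled into the binary or removed entirely, depending on
whether NDEBUG is defined when the file is built.

    /* ndebug_demo.c -- illustrative only; names and values are made up.
     *   cc ndebug_demo.c            # assert() is live, aborts on den == 0
     *   cc -DNDEBUG ndebug_demo.c   # assert() expands to nothing
     */
    #include <assert.h>
    #include <stdio.h>
    #include <stdlib.h>

    static long safe_div(long num, long den)
    {
        assert(den != 0);  /* compiled out entirely when NDEBUG is defined */
        return num / den;  /* with NDEBUG, a zero divisor reaches this line */
    }

    int main(int argc, char **argv)
    {
        long den = (argc > 1) ? atol(argv[1]) : 0;
        printf("%ld\n", safe_div(100, den));
        return 0;
    }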

For decades, people would try to ‘optimise’ performance
by writing in assembler from the start [a myth dealt with by others].

That appears to have flipped to using huge, slow Frameworks,
such as JavaScript / ECMAScript, for ‘Applications’.

I’m not advocating “CPU is free, we can afford to forget about optimisation”.

That’s OK for prototypes and ‘run once or twice’ jobs,
where human time matters more, but not for high-volume production workloads.

The deliberate creation of bloat & wasting resources (== energy & dollars)
for production work isn’t Professional behaviour IMHO.

10-15 years ago I saw something about Google’s web server 
CPU utilisation being 60%-70%, from memory.

It struck me that “% CPU” wasn’t a good metric for throughput anymore,
and ‘system performance’ was a complex, multi-factored problem
that had to be tuned per workload and target metric for ‘performance’.

Low-Latency is only achieved at the cost of throughput.
Google may have deliberately opted for lower %CPU to be responsive.
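
To make that trade-off concrete, a back-of-the-envelope sketch
(mine, using a textbook M/M/1 queue; the 10 ms service time is an
assumption, not a Google figure): mean response time grows as
service_time / (1 - utilisation), so running at 60%-70% instead of
99% buys a large latency margin at the cost of idle capacity.

    /* mm1_demo.c -- back-of-the-envelope only; assumes a simple M/M/1 queue.
     * Mean response time W = S / (1 - rho) for mean service time S and
     * utilisation rho: latency blows up as utilisation approaches 100%.
     */
    #include <stdio.h>

    int main(void)
    {
        const double service_ms = 10.0;   /* assumed mean service time */
        const double rho[] = { 0.50, 0.70, 0.90, 0.95, 0.99 };
        int i;

        for (i = 0; i < (int)(sizeof rho / sizeof rho[0]); i++)
            printf("utilisation %2.0f%%  ->  mean response %6.1f ms\n",
                   rho[i] * 100.0, service_ms / (1.0 - rho[i]));
        return 0;
    }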

Around the same time, there were articles about the throughput increase
and latency improvement from some large site moving to SSDs.
IIRC, their CPU utilisation dropped markedly as well.

Removing the burden of I/O waits, which cause deep scheduling queues,
somehow reduced total kernel overhead.
Perhaps there were fewer VM page faults because of shorter process residency…

I’ve no data on modern Supercomputers - I’d expect there to be huge
effort in tuning resources for individual applications & data sets.

There’d be real incentive at the high-end to maximise ‘performance’,
as well as at the other end: low-power & embedded systems.

I’m talking more about Commercial Off The Shelf and small- to mid-size
installations:
- the things people run every day and suffer slow response times from.

--
Steve Jenkin, IT Systems and Design 
0412 786 915 (+61 412 786 915)
PO Box 38, Kippax ACT 2615, AUSTRALIA

mailto:sjenkin at canb.auug.org.au http://members.tip.net.au/~sjenkin


