[TUHS] On the unreliability of LLM-based search results (was: Listing of early Unix source code from the Computer History Museum)

arnold at skeeve.com
Sun Jun 1 06:09:07 AEST 2025


It's been going on for a long time, even before AI. The amount
of cargo cult programming I've seen over the past ~10 years
is extremely discouraging.  People look something up on Stack Overflow
and copy/paste it without understanding it.  How much better is
that than relying on AI?  Not much, in my opinion.  (Boy, am I glad
I retired recently.)

Arnold

Luther Johnson <luther.johnson at makerlisp.com> wrote:

> I think that when no one notices anymore how wrong automatic information
> is, and how often, it will have effectively redefined reality, and humans,
> who have lost the ability to reason for themselves, will declare that AI
> has met and exceeded human intelligence. They will be right, partly
> because of AI's improvements, but to a larger extent because we will
> have forgotten how to think. I think AI is having disastrous effects on
> the education of younger generations right now; I see it in my
> workplace every day.
>
> On 05/31/2025 12:31 PM, andrew at humeweb.com wrote:
> > generally, i rate norman’s missives very high on the believability scale.
> > but in this case, i think he is wrong.
> >
> > if you take as a baseline the abilities of LLMs (such as earlier versions of ChatGPT) 2-3 years ago,
> > they were quite suspect. certainly better than mark v. shaney, but not overwhelmingly.
> >
> > those days are long past. modern systems are amazingly adept. not necessarily intelligent,
> > but they can (though not always) pass realistic tests: SAT tests, bar exams, math olympiad problems,
> > and so on. and people can use them to do basic (but realistic) data analysis, including experimental design,
> > generate working code, and run that code against synthetic data to produce visual output.
> >
> > sure, there are often mistakes. the issue of hallucinations is real. but where we are now
> > is almost astonishing, and will likely get MUCH better in the next year or three.
> >
> > end-of-admonishment
> >
> > 	andrew
> >
> >> On May 26, 2025, at 9:40 AM, Norman Wilson <norman at oclsc.org> wrote:
> >>
> >> G. Branden Robinson:
> >>
> >>   That's why I think Norman has sussed it out accurately.  LLMs are
> >>   fantastic bullshit generators in the Harry G. Frankfurt sense,[1]
> >>   wherein utterances are undertaken neither to enlighten nor to deceive,
> >>   but to construct a simulacrum of plausible discourse.  BSing is a close
> >>   cousin to filibustering, where even plausibility is discarded, often for
> >>   the sake of running out a clock or impeding achievement of consensus.
> >>
> >> ====
> >>
> >> That's exactly what I had in mind.
> >>
> >> I think I had read Frankfurt's book before I first started
> >> calling LLMs bullshit generators, but I can't remember for
> >> sure.  I don't plan to ask ChatGPT (which still, at least
> >> sometimes, credits me with far greater contributions to Unix
> >> than I have actually made).
> >>
> >>
> >> Here's an interesting paper I stumbled across last week
> >> which presents the case better than I could:
> >>
> >> https://link.springer.com/article/10.1007/s10676-024-09775-5
> >>
> >> To link this back to actual Unix history (or something much
> >> nearer that), I realized that `bullshit generator' was a
> >> reasonable summary of what LLMs do after also realizing that
> >> an LLM is pretty much just a much-fancier and better-automated
> >> descendant of Mark V Shaney: https://en.wikipedia.org/wiki/Mark_V._Shaney
> >>
> >> Norman Wilson
> >> Toronto ON
> >
>
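
For anyone who never ran into Mark V Shaney: it generated its Usenet
posts from a Markov chain over word pairs. A minimal sketch of that
technique in Python (not Shaney's actual code; "corpus.txt" is a
placeholder for whatever training text you feed it):

    import random
    from collections import defaultdict

    def build_chain(words):
        # Map each pair of consecutive words to the list of words
        # that followed that pair in the training text.
        chain = defaultdict(list)
        for w1, w2, w3 in zip(words, words[1:], words[2:]):
            chain[(w1, w2)].append(w3)
        return chain

    def generate(chain, length=100):
        # Start from a random pair and walk the chain, picking each
        # next word uniformly among the observed followers.
        key = random.choice(list(chain))
        out = list(key)
        for _ in range(length):
            followers = chain.get(key)
            if not followers:
                break
            word = random.choice(followers)
            out.append(word)
            key = (key[1], word)
        return " ".join(out)

    if __name__ == "__main__":
        words = open("corpus.txt").read().split()  # placeholder corpus
        print(generate(build_chain(words)))

The whole trick is a lookup table plus random sampling, which is the
point of Norman's comparison: an LLM is the same idea in spirit, with
a vastly larger context and a learned model in place of the table.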

