[TUHS] On the unreliability of LLM-based search results (was: Listing of early Unix source code from the Computer History Museum)
Luther Johnson
luther.johnson at makerlisp.com
Sun Jun 1 08:47:49 AEST 2025
I think we could call many of these responses "mis-ambiguation", or
conflation: they mush everything together as long as the questions posed
and the answers they provide are "buzzword-adjacent", in a very
superficial, mechanical way. There's no intelligence here; it's just
amazing how much we project onto these bots because we want to believe
in them.
On 05/31/2025 03:36 PM, James Johnston wrote:
> Well, I have to say that my experiences with "AI based search" have
> been beyond grossly annoying. It keeps trying to "help me" by sliding
> in common terms it actually knows about instead of READING THE DAMN QUERY.
>
> I had much, much better experiences with very literal search methods,
> and I'd like to go back to that when I'm looking for obscure papers,
> names, etc. Telling me "you mean" when I damn well DID NOT MEAN THAT
> is a worst-case experience.
>
> Sorry, not so much a V11 experience here, but I have to say AI search
> may serve the public, but only by guiding them back into boring,
> middle-of-the-road, 'average mean-calculating' responses that simply
> neither enlighten nor serve the original purpose of search.
>
> jj - a grumpy old signal processing/hearing guy who used a lot of real
> operating systems back when and kind of misses them.
>
> On Sat, May 31, 2025 at 2:53 PM Luther Johnson
> <luther.johnson at makerlisp.com> wrote:
>
> I agree.
>
> On 05/31/2025 01:09 PM, arnold at skeeve.com wrote:
> > It's been going on for a long time, even before AI. The amount
> > of cargo cult programming I've seen over the past ~ 10 years
> > is extremely discouraging. Look up something on Stack Overflow
> > and copy/paste it without understanding it. How much better is
> > that than relying on AI? Not much in my opinion. (Boy, am I glad
> > I retired recently.)
> >
> > Arnold
> >
> > Luther Johnson <luther.johnson at makerlisp.com> wrote:
> >
> >> I think when no one notices anymore how wrong automatic information
> >> is, and how often, it will have effectively redefined reality, and
> >> humans, who have lost the ability to reason for themselves, will
> >> declare that AI has met and exceeded human intelligence. They will
> >> be right, partly because of AI's improvements, but to a larger
> >> extent because we will have forgotten how to think. I think AI is
> >> having disastrous effects on the education of younger generations
> >> right now; I see it in my workplace every day.
> >>
> >> On 05/31/2025 12:31 PM, andrew at humeweb.com wrote:
> >>> generally, i rate norman’s missives very high on the
> >>> believability scale. but in this case, i think he is wrong.
> >>>
> >>> if you take as a baseline the abilities of LLMs (such as earlier
> >>> versions of ChatGPT) 2-3 years ago, they were quite suspect.
> >>> certainly better than mark shaney, but not overwhelmingly.
> >>>
> >>> those days are long past. modern systems are amazingly adept. not
> >>> necessarily intelligent, but they can (though not always) pass
> >>> realistic tests, SAT tests and bar exams, math olympiad tests and
> >>> so on. and people can use them to do basic (but realistic) data
> >>> analysis including experimental design, generate working code, and
> >>> run that code against synthetic data to produce visual output.
> >>>
> >>> sure, there are often mistakes. the issue of hallucinations is
> >>> real. but where we are now is almost astonishing, and will likely
> >>> get MUCH better in the next year or three.
> >>>
> >>> end-of-admonishment
> >>>
> >>> andrew
> >>>
> >>>> On May 26, 2025, at 9:40 AM, Norman Wilson
> >>>> <norman at oclsc.org> wrote:
> >>>>
> >>>> G. Branden Robinson:
> >>>>
> >>>> That's why I think Norman has sussed it out accurately. LLMs are
> >>>> fantastic bullshit generators in the Harry G. Frankfurt sense,[1]
> >>>> wherein utterances are undertaken neither to enlighten nor to
> >>>> deceive, but to construct a simulacrum of plausible discourse.
> >>>> BSing is a close cousin to filibustering, where even plausibility
> >>>> is discarded, often for the sake of running out a clock or
> >>>> impeding achievement of consensus.
> >>>>
> >>>> ====
> >>>>
> >>>> That's exactly what I had in mind.
> >>>>
> >>>> I think I had read Frankfurt's book before I first started
> >>>> calling LLMs bullshit generators, but I can't remember for
> >>>> sure. I don't plan to ask ChatGPT (which still, at least
> >>>> sometimes, credits me with far greater contributions to Unix
> >>>> than I have actually made).
> >>>>
> >>>>
> >>>> Here's an interesting paper I stumbled across last week
> >>>> which presents the case better than I could:
> >>>>
> >>>> https://link.springer.com/article/10.1007/s10676-024-09775-5
> >>>>
> >>>> To link this back to actual Unix history (or something much
> >>>> nearer that), I realized that `bullshit generator' was a
> >>>> reasonable summary of what LLMs do after also realizing that
> >>>> an LLM is pretty much just a much-fancier and better-automated
> >>>> descendant of Mark V Shaney:
> >>>> https://en.wikipedia.org/wiki/Mark_V._Shaney
> >>>>
> >>>> Norman Wilson
> >>>> Toronto ON
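
[Aside, for readers who haven't met Mark V Shaney: it generated Usenet
posts with a word-level Markov chain built from a training corpus. Below
is a minimal Python sketch of that idea; it illustrates the technique
only, is not the original program, and the names `build_chain' and
`babble' are invented here.]

import random
from collections import defaultdict

def build_chain(text, order=2):
    # Map each `order'-word prefix to the list of words observed
    # immediately after it in the training text.
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        chain[tuple(words[i:i + order])].append(words[i + order])
    return chain

def babble(chain, length=50):
    # Start at a random prefix, then repeatedly append a randomly
    # chosen observed successor; stop early if the current prefix
    # was never continued in the training text.
    prefix = random.choice(list(chain))
    out = list(prefix)
    for _ in range(length):
        followers = chain.get(tuple(out[-len(prefix):]))
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

Fed any corpus, babble(build_chain(corpus)) emits locally plausible,
globally meaningless prose, which is the family resemblance being drawn
here; an LLM scales the same next-word game up with learned, soft
contexts instead of exact word tuples.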
>
>
>
> --
> James D. (jj) Johnston
>
> Former Chief Scientist, Immersion Networks