[COFF] On the unreliability of LLM-based search results

Alexis flexibeast at gmail.com
Tue May 27 14:17:45 AEST 2025


[Redirected from the TUHS list]

George Michaelson <ggm at algebras.org> writes:

>> We're way off topic. Warren should send a kill.
>
> That said: please don't repeat the "hallucinate" label. It's
> self-aggrandisement. Its deliberate, to foster belief "it's like
> thinking"
>
> It's not a hallucination, it's bad model data and bad constraint
> programming. They're not thinking, or dreaming, or demanding not to be
> turned off, or threatening or bullying: They're not Markov chains
> either but they're a damn sight closer to a machine than a mind.

Point taken, although i think trying to change that language might be
tilting at windmills at this point. Still, i'll try to use alternate
phrasing, e.g. "LLMs are known to output nonexistent
sources". (Alternative phrasings welcome.)

i'd also be interested in an analysis of this:

  > An artificial intelligence model created by the owner of ChatGPT has
  > been caught disobeying human instructions and refusing to shut
  > itself off, researchers claim.

-- 
   https://www.stuff.co.nz/world-news/360701275/openai-software-ignores-explicit-instruction-switch


Alexis.

