I cringe whenever someone says "I asked #ChatGPT/#Claude/..." about facts.
I've referenced the full specs, enabled deep research & extended "thinking", and still gotten back a response detailing all the "research" it had done, how the fact just wasn't available, and how the report, alas, had to proceed with some assumption.
Which might be reasonable — if the fact wasn't explicitly stated in the very link I provided as part of the prompt.
This is not hallucination. This is failure.
@gregkh That's a most excellent term, thanks for making me aware of it!
Unfortunately, it seems most people genuinely excited about GenAI these days are, or operate in areas where facts/correctness are optional. It's like learning your colleague idolizes certain men or calls CO2 "the gas of life" 💔
(Which dampens my own technical fascination with the tech. Excuse me, but we can do *what* with "mere" stochastics and Monte Carlo‽ You can't tell me that's not cool.)