LLMs are not designed to tell us what is true. They don't lie or truth-tell or hallucinate. They output information that sounds good and coherent. Relating to them as reasoning agents is dangerous and harmful.
Spot on and a wonderful rabbit hole of links.