Plausibility Over Truth
LLMs are optimized for plausibility (Does this sound like something a human would say?) rather than truth (Does this correspond to reality?). Therefore, Ungrounded Generation is not a bug; it is a feature of the architecture. The model is successfully doing what it was trained to do: predict the most plausible next token.
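A minimal sketch of that objective, using invented logit values for a toy next-token choice (this is not any particular model's output, just an illustration of how plausibility scoring works):

```python
import math

def softmax(logits):
    # Convert raw scores into a probability distribution over candidates.
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Context: "The capital of Australia is"
# The logit values are invented for illustration; a real model scores
# tens of thousands of tokens, but the objective is the same.
candidates = ["Sydney", "Canberra", "Melbourne"]
logits = [4.1, 3.2, 2.0]

for token, p in zip(candidates, softmax(logits)):
    print(f"{token}: {p:.2f}")

# A greedy decoder would emit "Sydney": fluent, statistically plausible, and wrong.
# Nothing in this objective measures whether the sentence corresponds to reality.
```

Notice that the scoring step never consults a source of facts; it only asks which continuation looks most like the text the model was trained on.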
The "Bullshit" Theory
Philosopher Harry Frankfurt defined "bullshit" not as lying (which requires knowing the truth and concealing it) but as indifference to the truth. In this strict philosophical sense, Ungrounded Generation is the automated production of bullshit. The model isn't lying, because it has no relationship to the truth at all; it is simply filling the silence with something that sounds good.
Field Notes & Ephemera
Diagnostic Shift: Calling it "Ungrounded Generation" reminds the user that the missing piece is Grounding—which only the human can provide. It shifts the focus from "fixing the bot" to "steering the system."
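A hedged sketch of what providing Grounding can look like in practice, assuming a simple prompt-template approach; the function name, instructions, and policy text below are invented for illustration, not a specific product's API:

```python
def grounded_prompt(source_text: str, question: str) -> str:
    # The human supplies the source; the model is asked to stay within it.
    return (
        "Answer using ONLY the source below. "
        "If the source does not contain the answer, say so.\n\n"
        f"SOURCE:\n{source_text}\n\n"
        f"QUESTION: {question}"
    )

# Steering the system rather than fixing the bot: the answerable facts
# travel with the question instead of being left to plausible guessing.
prompt = grounded_prompt(
    source_text="Refunds are available within 30 days of purchase.",
    question="Can I get a refund after six weeks?",
)
print(prompt)
```

The design choice this sketch illustrates is that the grounding lives outside the model: whoever writes the source text, not the model, is the one accountable for the truth of the answer.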