Evaluative Literacy

/ɪˈvæljʊətɪv ˈlɪtərəsi/ Pedagogical term adapted for AI-assisted workflows.
Definition

The metacognitive skill required to effectively partner with generative AI in high-stakes domains (coding, medicine, law). It shifts the human's primary role from generation (writing code or text from scratch) to evaluation (assessing AI output for correctness, security, and nuance). It is the opposite of "Cognitive Offloading" or laziness; it is an active, rigorous mode of reading-as-debugging.

From Author to Reviewer

In traditional workflows, writing and thinking were synonymous. In AI workflows, writing is cheap. Thinking now resides in the edit. The evaluatively literate user does not blindly accept code; they treat every AI suggestion as a "pull request" from a junior developer: talented, but prone to hallucination. They must possess enough domain expertise to spot the subtle bugs that the AI glosses over.
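
A minimal, hypothetical sketch of reading-as-debugging (the function, the data shape, and the bug below are invented for this entry, not taken from any particular assistant): the suggested code runs, looks idiomatic, and passes a quick test, yet it hides a defect that only an evaluative read will catch.

    # Hypothetical AI suggestion: "return a user's unseen orders, newest first."
    # It runs, small-input tests pass, and it reads as idiomatic Python.
    def recent_orders(orders, seen=[]):  # subtle bug: mutable default argument
        result = []
        for order in sorted(orders, key=lambda o: o["ts"], reverse=True):
            if order["id"] not in seen:
                seen.append(order["id"])  # this default list is created once, at definition time
                result.append(order)
        return result

    # What the evaluative reader ships instead: the tracking set is rebuilt on
    # each call, so one caller's history can no longer leak into another's result.
    def recent_orders_reviewed(orders, seen=None):
        seen = set(seen) if seen is not None else set()
        result = []
        for order in sorted(orders, key=lambda o: o["ts"], reverse=True):
            if order["id"] not in seen:
                seen.add(order["id"])
                result.append(order)
        return result

The first version works once and then quietly returns the wrong answer on the next call; nothing crashes and nothing warns. That is exactly the class of defect this entry is about.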

The Paradox of Expertise

One cannot effectively evaluate what one does not understand. Therefore, AI tools do not replace the need for human expertise; they raise the bar for it. To use a coding assistant safely, you must be a better coder than the assistant. If you aren't, you are not a partner; you are a liability.

Field Notes & Ephemera

Field Standard: "Trust but verify." The AI is not an oracle. It is a stochastic parrot with a very large vocabulary. It is your job to teach it grammar.
Stratigraphy (Related Concepts)
Cognitive Offloading · Collaborative Flow · Jagged Frontier · Metacognition · Debugging · Critical Thinking
