Discussion about this post

Matt Searles

This is a sharp epistemological argument and I think it identifies exactly the right properties. But I'd push back on the conclusion — the four properties you name aren't inherently absent from AI. They're absent from current AI infrastructure. That's a fixable problem.

Conversational stakes: if every AI assertion is a signed, immutable event on a hash-chained graph, the agent's claims are on the permanent record. Reputation accumulates from claim history — a source whose claims survive challenges gains credibility, one whose claims are debunked loses it. That's a stake.
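A minimal sketch of what such an append-only, hash-chained claim log might look like. The event schema, the HMAC stand-in for a real signing key, and the field names are all assumptions for illustration, not a reference to any existing system:

```python
import hashlib
import hmac
import json

SECRET = b"agent-signing-key"  # stand-in for a real private key (e.g. ed25519)

def append_event(chain, claim):
    """Append a signed claim; each event commits to the previous event's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"claim": claim, "prev": prev_hash}
    payload = json.dumps(body, sort_keys=True).encode()
    event = {
        **body,
        "hash": hashlib.sha256(payload).hexdigest(),
        "sig": hmac.new(SECRET, payload, hashlib.sha256).hexdigest(),
    }
    chain.append(event)
    return event

def verify(chain):
    """Recompute hashes and signatures; any tampering breaks the chain."""
    prev = "0" * 64
    for ev in chain:
        body = {"claim": ev["claim"], "prev": ev["prev"]}
        payload = json.dumps(body, sort_keys=True).encode()
        if ev["prev"] != prev or ev["hash"] != hashlib.sha256(payload).hexdigest():
            return False
        expected_sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
        if not hmac.compare_digest(ev["sig"], expected_sig):
            return False
        prev = ev["hash"]
    return True
```

Because each event's hash covers the previous hash, editing any past claim invalidates everything after it, which is what makes the record "permanent" in the relevant sense.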

Identifiability: on an event graph, an AI agent has identity through accumulated behaviour — who deployed it, what authority it operates under, what its track record is. Not a persona. A chain.

Background trust: derived from the chain, not from institutional affiliation. Just as your biology teacher's reliability comes from their institutional role, the agent's reliability comes from its verifiable history.
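One naive way to derive trust from claim history, sketched under the assumption that challenged claims get resolved to "survived" or "debunked" (the scoring rule and statuses are illustrative, not a proposed standard):

```python
def reputation(history):
    """Fraction of challenged claims that survived; neutral prior with no record."""
    survived = sum(1 for c in history if c["status"] == "survived")
    resolved = sum(1 for c in history if c["status"] in ("survived", "debunked"))
    if resolved == 0:
        return 0.5  # no track record yet: neutral, not trusted
    return survived / resolved
```

A real scheme would weight by claim stakes and recency, but even this toy version captures the asymmetry in the comment: credibility is earned by claims surviving challenge, and lost when they don't.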

Follow-up questions: the causal chain is the follow-up. You don't need to ask "why did you say that?" You walk the chain and see the inputs, the reasoning, the evidence cited.
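The "walk the chain" move can be sketched as a graph traversal, assuming (hypothetically) that each event records the ids of the events it drew on:

```python
def provenance(events, event_id):
    """Collect a claim and every ancestor event it depends on."""
    seen, stack = [], [event_id]
    while stack:
        eid = stack.pop()
        if eid in seen:
            continue
        seen.append(eid)
        stack.extend(events[eid].get("inputs", []))
    return seen
```

The point is that the answer to "why did you say that?" is not a fresh utterance the agent generates after the fact, but a fixed set of recorded inputs anyone can traverse.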

The properties you've identified as necessary for testimonial knowledge are architectural requirements. I've been building the architecture that provides them. mattsearles2.substack.com

