Why Your AI Coding Assistant Produces Drift
It's not the model. It's the absence of a discipline that constrains what a stateless reader can derive.
I’ve been building software for almost twenty years. For most of that time, the job was understanding the problem, then writing the code that solves it.
Sometime in the last two years, the second half of that sentence disappeared. I haven't written a single line of code in over nine months.
I published a preprint this week that explains why, not as a productivity story but as a formal one. The argument runs through the Chomsky hierarchy of grammars, Martin's sequence of programming paradigm shifts, and six production systems I built under the methodology I'm proposing.
The core claim: we have arrived at the first programming discipline of the pragmatic dimension. Every prior discipline constrained what was permitted (syntax) or what communicated to a reader with context (semantics). Generative Specification is the first one that constrains what a *stateless reader*, a model with no prior context, can derive. The discipline is not about AI features. It's about what you have to externalize for AI to work correctly.
The failure mode it addresses is drift: architecturally incoherent output generated at AI speed, propagating across every session that inherits the corrupted context. Everyone I've discussed this with in the last year has seen it. The discipline that addresses it is available now.
Preprint: https://lnkd.in/gygRa9Gh
If you’re building with AI-assisted tools and the architecture is getting harder to control, not easier, this is the argument that explains why.
