The Ambient Engineer
The monitor screen is the wrong interface for this.
I have not written code in months. Not application code, not tests, not deployment pipelines, not command-line scripts.
I have fifteen to twenty projects open at any time, six or seven active in any given session, the others paused at a deploy, a test run, a decision I have not made yet. When one pauses, I move to the next. What I do now is specify: I describe what should be built, why, to what standard, in what shape, with what constraints and failure modes. The model builds it, executes the commands, writes the configuration files, calls the APIs.
The monitor screen is the wrong interface for this.
Not because screens are bad. The role that remains in AI-assisted development is the orchestrator: the person who holds intent and decides what comes next. It does not require a workstation. It requires attention, judgment, and the ability to communicate precisely. I spent twenty years learning to speak the machine’s language: high-level languages, type systems, compiler flags, a new language every two years, a new syntax every project, 30,000-line XSLTs I had to read end to end just to understand what transformation was happening. The machine could not meet me where I think. I had to go to where it lived. That constraint is dissolving.
What replaces it is three physical layers: a thin display in the field of view, a pocket unit for routing and bridging, and compute wherever it lives, whether mini PCs, cloud instances, or laptops. Incoming signals arrive already summarized to what matters; responses go back by voice, without switching physical context. The hardware already exists. What does not exist yet is the orchestration layer: the specification surface that decides which signals rise to attention and how often, prioritized by urgency, clustered and ordered by what actually needs a human decision.
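That layer does not exist as a product, but its core logic is easy to sketch. Here is a minimal illustration, assuming a hypothetical Signal record with a project, a condensed summary, an urgency tier, and a flag for whether it blocks on a human decision; none of these names come from any existing system.

```python
from dataclasses import dataclass
from enum import IntEnum

class Urgency(IntEnum):
    # Higher value surfaces sooner.
    BACKGROUND = 0   # progress updates, logs
    ROUTINE = 1      # finished test runs, deploys awaiting review
    BLOCKING = 2     # an assistant cannot proceed without a decision

@dataclass
class Signal:
    project: str          # which open project sent this
    summary: str          # already condensed to its significance
    urgency: Urgency
    needs_decision: bool  # requires a human choice, or just awareness?

def attention_queue(signals: list[Signal]) -> list[Signal]:
    """Order signals so decisions surface before notifications, most urgent
    first, with items from the same project kept together."""
    return sorted(
        signals,
        key=lambda s: (-s.urgency, not s.needs_decision, s.project),
    )

queue = attention_queue([
    Signal("landing-page", "test run green, no action needed",
           Urgency.BACKGROUND, False),
    Signal("billing-service", "deploy finished, waiting on rollout approval",
           Urgency.BLOCKING, True),
    Signal("billing-service", "migration drafted, pick one of two schemas",
           Urgency.BLOCKING, True),
])
# The two billing-service decisions arrive together, ahead of the green test run.
```

A real surface would add frequency caps and batching windows on top of this ordering, but the principle is the same: the queue is itself something you specify, not something you scroll.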
Imagine gesture-minimizable overlays showing messages alongside summaries and recommended actions, with replies, content and tone alike, composed just by voicing them. Live calls with real-time captions, translations, and automatic action items. Diagrams and decisions saved to folders you summon on command. An ordered queue of the requests coming from your many AI assistants, who are building your projects and need occasional but precise guidance. Charts, slides, art, music, video, articles, podcasts, each shaped by describing and constraining the outcome across iterations until it matches what you intend.
This is the same discipline I apply to software, now applied to the surface that matters most: my own attention. It lets me step away from the screen, move through the world, socialize, exercise, think, while whatever is critical reaches me immediately and everything else waits in organized flows rather than noise. It enables the dispersed, asynchronous work that becomes natural when AI is executing your specification and needs only occasional direction, without foreclosing the periods of deep focus, which now resolve on much faster cycles.
When execution cost approaches zero, the limiting constraint on what gets built shifts from effort to the ability to specify correctly and the quality of your attention. That is a more democratic bottleneck. Not perfectly democratic, because specifying correctly is a real skill and skills are unevenly distributed, but the barrier becomes epistemic rather than economic. Understanding replaces capital as the gate.
I published the theory behind this discipline this week: Generative Specification (https://doi.org/10.5281/zenodo.19073543). The ambient interface is where it leads.
This is the year the interface catches up. Sixty years of screen, keyboard, and mouse, and it all resolves to gesture and specification.
