The Disappearing Interface
From screens to protocols: AI is disrupting decades of reliance on apps and websites
Imagine this: one AI wants to persuade another of a strategy. Does it open PowerPoint, polish a slide deck, and animate a pie chart?
No. It just sends structured arguments, data, and instructions.
Now ask yourself: if more of our work shifts to AI agents, and more decisions flow between them, why would we still design like it’s 2012?
We build websites, apps, and presentations for human eyes. But increasingly, the user isn’t a person. It’s a machine. And machines don’t care about your hover states.
In fact, when better options exist, neither do humans. Visual interfaces are useful only in the absence of alternatives. But now we can simply ask our AI tools, via text or voice, to do things on our behalf. No clicking required.
As we do this more, we have less need for fancy interfaces and more need for standards and protocols. And for those rare instances where apps are useful? We can generate minimalist, personalized ones on demand.
Protocols beat pictures
AI agents don’t want your fancy front-end. They want structured, machine-readable access.
This is why standards like the Model Context Protocol (MCP) have gotten so much attention. MCP lets AI tools plug into databases, drives, and services through a common language. Another protocol, Agent2Agent (A2A), lets AI agents from different vendors and platforms coordinate securely.
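To make this concrete, here is a minimal sketch of an MCP server using the official Python SDK. The server name, tool, and hard-coded results are illustrative, not a real integration:

```python
# Minimal MCP server sketch using the official Python SDK (pip install mcp).
# The server name, tool, and return values below are placeholders.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("calendar-demo")

@mcp.tool()
def find_open_slots(attendees: list[str], week: str) -> list[str]:
    """Return meeting times when all attendees are free during the given week."""
    # A real server would query a calendar API here; these are dummy values.
    return ["2025-06-02T10:00", "2025-06-03T14:00"]

if __name__ == "__main__":
    mcp.run()  # speaks the MCP protocol over stdio by default
```

Any MCP-aware client can discover and call a tool like this without a single pixel of UI in between.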
An even simpler standard that works well for AI models is Markdown: plain text that uses characters like _underscores_ to indicate formatting. Encouraged by proposals like llms.txt, many companies (especially developer-oriented ones) now offer text-based Markdown content for AIs.
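For illustration, here is what a hypothetical llms.txt might look like, following the structure proposed at llmstxt.org: an H1 title, a blockquote summary, and sections of plain Markdown links. The company and URLs are made up:

```markdown
# ExampleCo

> ExampleCo builds payment APIs. This file points AI tools to
> Markdown versions of our documentation.

## Docs

- [Quickstart](https://example.com/docs/quickstart.md): install, authenticate, first charge
- [API reference](https://example.com/docs/api.md): endpoints, parameters, errors

## Optional

- [Changelog](https://example.com/docs/changelog.md)
```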
We are, of course, also getting AIs capable of browsing websites. But these are nowhere near as reliable as agents using APIs, MCP servers, or plain text files. After all, websites and apps are often confusing even for smart humans.
One app to rule them all
Humans aren’t machines, but we too struggle with confusing interfaces and a seemingly unending proliferation of apps.
We’re tired of app clutter. Tired of searching through layers of UI to do simple things. Tired of bloated tools designed for edge cases we never use.
That’s a big reason why chatbots like ChatGPT, Claude, and Gemini have taken off. They don’t offer a hundred buttons. They offer a box. You type (or talk), they act.
Will the AI interaction paradigm always remain this limited? Probably not. Chatbots can already generate images, videos, and charts when needed. And the roadmap for ChatGPT indicates it will soon generate purpose-built apps when a task calls for them.
But even when visual output is needed, it appears within the flow of a conversation. And in the rare cases where users need richer interactivity, AI can build interfaces on demand: personalized, task-specific, and continuously evolving.
We’re entering an era when any required interfaces don’t need to be pre-built. They can be generated, on demand, to do exactly what you need—and nothing more.
Never look at a calendar again
Sometimes when I think about AI assistants, I get a vision of Don Draper from Mad Men. Draper didn’t type his own memos or book his own appointments. His secretary did. If he needed to know his schedule? He could just ask.
This feels like the future as AI tools get more mature and connect to more of our data. Think about how you typically schedule a meeting today. You open a calendar app, search for open times, send invites. Wouldn’t you rather just ask your assistant: “Find a time for me and John next week?”
This will happen across an increasing range of workflows: booking meetings, logging expenses, planning travel, and more. AI will eat the interfaces for all of them. You ask, it acts.
What’s left for design
This doesn’t mean design is dead. After all, someone will need to design the AI apps—as well as new hardware, like glasses, and mystery objects yet to be revealed.
Some of the places visual design will still matter include:
- Inherently visual tasks like graphic design, video editing, and 3D modeling
- Immersive experiences like games, AR, VR, and spatial environments
- Communication to humans, like diagrams and data visualizations
We will also still need design primitives—sliders, toggles, progress bars—embedded in AI tools. Not entire apps, just the elements needed, when needed. For example, if ChatGPT books me an Uber, I’d like to know how close it is to picking me up.
But overall, we may have reached peak interface design: its priority is now falling relative to AI model capability, machine-first standards and protocols, and integrations. And that has broad implications.
Winners and losers in the interface collapse
This shift doesn’t just affect designers. It affects entire companies. Who will be the winners and losers? Some thoughts:
Winners
✅ AI labs with foundation models
✅ Protocol creators
✅ Chat-native interface builders
✅ Tools that expose structured APIs
✅ Voice-first, ambient hardware platforms
Losers
❌ GUI-heavy SaaS platforms with weak APIs
❌ App-store-reliant businesses
❌ Front-end-only designers and developers
❌ Screen-focused hardware vendors
When your business is built around screens, and the screen becomes optional, you have a problem—one reason people are so concerned about Apple lagging in AI.
The next platform shift
We’ve been through interface revolutions before. From command line to GUI. From desktop to web. From web to mobile apps. Each time, the dominant paradigm felt permanent—until it wasn’t.
Now comes another shift. Not to a new screen. To no screen. To an ambient, always-on agent. One that speaks the language of intent. One that builds the interface you need on-demand. One that does things on your behalf, often invisibly.
Design won’t disappear. You just won’t notice it. It will be in the minimalism of hardware that puts AI front and center. In the quality of the audio for your voice interactions. In the architecture of the integrations that empower your AI to do things on your behalf.
And the new design language? Words.
Not slides.