Across the Designer-Verse: The Reality of AI First Design
I didn't change disciplines. I changed dimensions.
Vibe coding will kill your career. No, vibe coding will save your career. Wait wait wait. Which you are you? Which dimension are you from? The one where AI saved design? Or the one where AI doomed design? I have a story to tell you from the dimension I come from.
The posts are everywhere. “Why vibe coding failed me.” “The promise vs. the reality.” “AI First Design is not ready for primetime.” “It’s a career ender.” “It’s a career starter.” “You have to be AI first.” “No, you need to be visual first.”
And they’re not wrong. And they’re not right.
The frustrations are real. The context window fills up. The model hallucinates a dependency that doesn’t exist. You spend three hours debugging code you didn’t write and barely understand. The 3D printer metaphor keeps appearing: push a button, magic object appears, but it’s not production-ready and creating something novel requires skills beyond button-pushing.
This article landed in a group chat: “10 things I learned from burning myself out with AI coding agents” by Benj Edwards, Ars Technica’s Senior AI Reporter.
I respect it. He’s a subject matter expert, a journalist, and a writer. Full credit where credit is due. But Benj wasn’t building anything real, and by “real” I mean there was no planning or systems thinking: no architecture, no separation of layers, no future-planning, no refactoring, no scalability, no iterative roadmap, no componentization.
He didn’t treat it like a scaled-down business product project.
But me? I was an IT Specialist and Digital Service Expert at NASA. I was a Silicon Valley UX designer (my apartment was on the literal Mountain View block where Shockley set up his semiconductor lab). I live on the bleeding edge of invention and art and design and tech.
And I just… hmm… I felt like it was telling a story but not the whole story. So this article is a different viewpoint. No shade.
Because see, I’ve been living inside Claude Code for 96 hours straight. Literally only stopping for necessary life functions. I decided it was time to get up to speed, so I spent the long MLK weekend learning as much as I could, alone in freezing Maine like a character in a Stephen King novel, trapped and forced to write.
And I agree with a lot of the criticism of vibe-coding or vibe-designing.
But those aren’t the things that are the problem! It’s not the AI!
The biggest problem isn’t context windows or hallucinations or spaghetti code. The biggest problem is that everyone’s building toys. They expect to see production-grade results with playground-grade inputs.
"I joined a band so I could hit my feelings with sticks."
See, there’s a pattern in this vibe-coding and “AI First Design” disillusionment. The author experiments with AI coding. They build demos, prototypes, weekend projects, “just messing around.” They hit walls. They write about the walls. The walls are real! Context fills up. Architecture drifts. The model loses track of what it built three hours ago. Everything eventually collapses into an unmaintainable tangle.
And then the conclusion: AI coding is fun but limited. Great for prototypes, not ready for production. Maybe someday.
Newsflash, buster: they’re doing it backwards.
The problems they’re describing are toy problems. Not in the sense of being trivial, but in the sense of emerging specifically because there’s no scaffolding, no architecture, no actual product discipline underneath the experiment.
When you “vibe code” a demo, you’re asking the AI to simultaneously figure out:
what you’re building
how to structure it
how to implement it.
That’s three jobs, and you’re likely only describing how it should look.
The model is powerful but it’s not magic. It will make locally optimal decisions that create globally incoherent systems. It will solve today’s problem in a way that breaks tomorrow’s feature. It will build you a house with no foundation and then everyone will be shocked when the walls crack.
Vibe-coding is like writing a book by opening a word processor, typing “It was a dark and stormy night,” and then realizing that what it actually takes to write a novel isn’t “vibe” or flow.
It’s planning (I have written five novels). And when you fail to plan, you plan to fail: one pleaseburger cheese.
"I didn't really know him until it was too late."
Over the long weekend, I built a functional creative writing and novel development tool. AI-powered, AI-enabled, built entirely in Claude Code. It has a real feature set. It has a release roadmap. It has users (me, initially, but that counts).
And the problems everyone complains about? Mostly didn’t happen.
At all.
Not because I’m smarter. Not because the model magically got better for me. Because I did the boring work first.
Before I wrote a single line of product code, I built what I’m calling Investiture (yes, it’s a Stormlight Archive reference. Storms!). It’s an extensible diagnostic system architecture boilerplate that decouples everything. View layer, logic layer, data layer. Separation of concerns enforced from the start. Hot-swappable components. Enterprise-ready patterns adapted for a solo builder.
Now, this sounds like exactly the kind of thing AI coding is supposed to let you skip. “Just describe what you want! Natural language! No architecture needed!”
That’s the lie.
The architecture is the prompt. When you give the AI a well-structured system to work within, you’re not asking it to make structural decisions anymore. You’re asking it to implement features within a structure that already makes sense. The context window stops filling up with architectural thrash. The model stops proposing solutions that break other parts of the system. Everything becomes composable, testable, replaceable.
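To make “the architecture is the prompt” concrete, here’s a minimal sketch of that kind of decoupling in plain JavaScript. The names (`createStore`, `wordCount`, `render`) are illustrative, not my actual Investiture boilerplate, which is far more involved:

```javascript
// Sketch of layer separation: each layer only knows the contract
// of the layer below it, never its internals.

// Data layer: owns state, exposes get/set/subscribe only.
function createStore(initial = {}) {
  let state = { ...initial };
  const listeners = new Set();
  return {
    get: () => ({ ...state }),
    set(patch) {
      state = { ...state, ...patch };
      listeners.forEach((fn) => fn(state));
    },
    subscribe: (fn) => listeners.add(fn),
  };
}

// Logic layer: pure functions. No DOM access, no storage details.
function wordCount(text) {
  return text.trim() ? text.trim().split(/\s+/).length : 0;
}

// View layer: renders from state, nothing else. This is the only
// function that knows about presentation.
function render(state) {
  return `Draft: ${wordCount(state.draft)} words`;
}

const store = createStore({ draft: "" });
store.subscribe((state) => console.log(render(state)));
store.set({ draft: "It was a dark and stormy night" });
```

The point isn’t this specific code; it’s that the view function is the only thing that touches presentation, so replacing it with React later doesn’t require touching the store or the logic.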
I got halfway through my weekend using vanilla JavaScript when I realized I needed so much more, and I wanted to move to React. A refactor: an impossible task when vibe-coding.
Not this time. I was able to replace the vanilla JS view layer and components with React while leaving the logic, data, and styles alone.
I was able to do a heart-transplant architecture refactor in about an hour. And the front end in the browser? I didn’t have to touch it.
And it isn’t vibe-coded spaghetti that “works on the surface but is a mess underneath.”
The underneath is as coherent and intentionally designed as the rest. It’s real. I am not experimenting with what vibe-coding or AI can do; I am experimenting with what I can do.
It’s not a concept.
"You think you know the rest. You don't."
Now here’s where it gets interesting.
The conventional wisdom says AI coding democratizes development. Anyone can build! No expertise required! Vibe your way to an MVP!
But the people succeeding with these tools aren’t the ones with the least expertise. They’re the ones who already understand system architecture, separation of concerns, why you decouple your data layer from your view layer. The ones who know what questions to ask before they start building.
The 3D printer analogy from the Ars Technica piece is exactly right, but the lesson most people draw is wrong. Yes, anyone can push a button and get a shape. But the people making useful things with 3D printers are the ones who understand materials science, structural engineering, tolerances, post-processing. The machine didn’t eliminate expertise. It shifted where expertise matters.
The same thing is happening with AI Design:
🕹️ The toy-builders jump straight to “build me a todo app” and discover that the model makes bizarre architectural choices that compound into unmaintainable systems. They conclude the technology isn’t ready.
🔧 The product-builders start with “here’s my system architecture, here’s how components communicate, here’s the contract between layers” and discover that the model becomes remarkably reliable when it’s not also being asked to be the architect.
The difference isn’t the model. It’s whether you’re treating AI as a replacement for thinking or as an amplifier for decisions you’ve already made.
"This mask is my badge."
This matters beyond just “how to use Claude Code better.”
There’s a weird techno-utopian narrative that AI will flatten expertise. That soon everyone will be able to do everything, and the specialists will become obsolete. The vibe coding discourse feeds this: look, non-programmers are shipping apps! The barriers are falling!
But barriers and scaffolding are different things.
A barrier keeps you out. Scaffolding helps you build. The AI removed some barriers (you don’t need to memorize syntax anymore). But it didn’t remove the need for scaffolding. In fact, it made scaffolding more important, because the AI will happily build you a 50-story structure with no foundation and no load-bearing walls, and it’ll look great right up until it doesn’t.
The people burning out on AI coding aren’t burning out because the technology failed. They’re burning out because they’re trying to build without blueprints, and the AI is powerful enough to make that feel like it’s working until suddenly it isn’t.
"Maybe she didn't have a choice."
So here’s what I actually learned from my 96-hour sprint:
Architecture first. Before you prompt a single feature, have a system design. Doesn’t have to be complex. Just has to exist. Layer separation, component contracts, data flow. Write it down. Put it in your CLAUDE.md. The model needs to know the shape of the world it’s building in.
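For concreteness, here’s the kind of thing I mean. This is a hypothetical CLAUDE.md excerpt, not my actual file; the layers and rules are illustrative:

```markdown
# Architecture (read before implementing anything)

## Layers
- **View**: renders state, dispatches actions. No business logic, no storage.
- **Logic**: pure functions. No DOM access, no persistence details.
- **Data**: owns storage and state shape. Exposes get/set/subscribe only.

## Rules
- A layer may only call the layer directly below it.
- New features: implement logic first, then wire the view to it.
- Never modify files outside the feature's directory without asking.
```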
Product roadmap. Not “I want to build a cool thing.” An actual list of features, prioritized, with dependencies mapped. When you know what’s coming next, you build today’s feature in a way that doesn’t break tomorrow’s. The AI can’t think ahead for you.
Diagnostic infrastructure. This is my secret weapon. Build observability into your system from day one. Logging, state inspection, component health checks. When something goes wrong (and it will), you need to be able to see where. The model is excellent at fixing problems when it can see the problem. It’s terrible at fixing problems when everything is a black box.
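A sketch of what day-one observability can look like, assuming a structured logger plus per-component health checks (all names here are hypothetical, not my actual diagnostic system). The goal is turning “something is wrong” into “this layer is wrong”:

```javascript
// Structured logger: every line is tagged with its layer,
// so failures can be traced to a specific part of the system.
const log = (layer, msg, data = {}) =>
  console.log(JSON.stringify({ ts: Date.now(), layer, msg, ...data }));

// Each component registers a cheap self-check at startup.
const healthChecks = new Map();
function registerHealthCheck(name, fn) {
  healthChecks.set(name, fn);
}

// Run every check and report per-component status.
function runDiagnostics() {
  const report = {};
  for (const [name, check] of healthChecks) {
    try {
      report[name] = check() ? "ok" : "degraded";
    } catch (err) {
      report[name] = `failed: ${err.message}`;
    }
  }
  return report;
}

// Example registrations (stubs for the sketch).
registerHealthCheck("dataLayer", () => typeof JSON.stringify === "function");
registerHealthCheck("viewLayer", () => true);

log("diagnostics", "boot report", runDiagnostics());
```

When a feature breaks, you hand the model the diagnostic report instead of a vague “it doesn’t work,” and it can see exactly which layer to look at.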
Treat features as units. One feature, one branch, one merge. Don’t let the AI sprawl across your codebase “improving” things while it implements something else. Scope creep is deadly with these tools.
Read the code. Not every line, but enough to understand what’s happening. The model will occasionally do something clever that you don’t understand. That’s fine. The model will also occasionally do something stupid that you don’t understand. That’s not fine. You need enough comprehension to tell the difference.
"I can do both."
The irony is that all of this is just... creative production through the alchemy of idea and technology. Same as it ever was.
Not cutting-edge AI prompting techniques. Not secret workflows. Just the same boring principles that have made code maintainable for decades: separation of concerns, clear interfaces, documentation, testing, incremental development.
The AI didn’t change what makes software good. It changed who gets to build it and how fast they can move. But “fast” only helps if you’re moving in the right direction. And direction requires knowing where you’re going.
Most people’s problems with AI coding aren’t AI problems. They’re planning problems. Architecture problems. Product problems. The AI just makes those problems manifest faster and more visibly than traditional development would.
Which is maybe the most useful thing about it. The feedback loop is so tight now that bad decisions hurt immediately. You can’t coast on “we’ll refactor later” because there is no later, there’s just the next prompt. Either your foundations hold or they don’t.
Which is why service design and the upstream work of UX and product design matter more now, not less. If you are spending your time on visual design instead of this, you are trapped traveling backward in time.
You cannot turn smoke and ash back into wood. That reality is gone.
"I never found the right band to join…”
I’m not saying vibe coding is useless. For throwaway experiments, one-off scripts, learning exercises? Vibe away. The model is genuinely fun to play with. And that approach may evolve to be functional at scale in collaborative, production environments.
But the discourse has gotten confused. People are vibing their way into products and then being surprised when the products collapse. That’s not an AI failure. That’s a category error. You can’t vibe your way to production. You never could. The AI just made it look temporarily possible.
For actual products? Do the boring work. Build the architecture. Make the roadmap. Treat this like what it is: a remarkably powerful tool that still requires a skilled operator.
The future isn’t AI replacing developers. It’s AI amplifying the developers who already know what they’re doing. The ones who understand systems. The ones who plan before they build. The ones who treat architecture as seriously as implementation.
I am not talking about frame-dragging or wormhole magic. I am talking about becoming untethered from an outdated reality.
“…so I started my own.”
Someone’s going to read this and say “well sure, but you’re a technologist.” Or worse: “you’ve shifted from design to development.” Or just “you have experience that the modern designer doesn’t because you’re old AF; they didn’t have to suffer through the HTML era of the 1900s.”
No.
I’m a designer. I’m not a developer who learned some design. I’m a designer who happens to understand systems. I started in Photoshop 3.5 and didn’t even move to Sketch until 2015!
And here’s what I need people to hear: nothing about my design process or mental model changed. Nothing.
I’m still doing user research. Still mapping journeys. Still identifying pain points and designing solutions. The choreography of how humans move through systems. All of it. Same brain, same process.
The only thing that changed is the loom. And a good weaver never blames their loom.
Design was never about the tool. It was always about the thinking. The decisions. The why behind the what. It’s also not primarily about visuals or what it looks like, that is a type of design.
The fact that I can now implement my own designs doesn’t make me less of a designer. It makes me a designer with fewer dependencies and more capabilities.
And now, the weaver is unbound by the loom.
That’s the line I want to draw. This IS product design. This IS UX design. This IS service design. I haven’t changed disciplines. I’ve just slipstreamed into a new dimension with new powers. And you can do it too, because you are a designer, you are a builder, you are a creator, not a pixel-pusher, not digital labor. Not hands.
You can take what you need from all that is around you and make of it something more.
See the repo WIPs on GitHub here:
See the LinkedIn post and help me get ahold of the algorithm by commenting there: https://www.linkedin.com/feed/update/urn:li:activity:7419580935618121728/