Stop Drawing Pictures of Software
AI is pushing designers out of Figma and into the material they actually ship.
I published “Product Design Is Changing” on Monday. LinkedIn largely agreed. Reddit was hostile. A former creative director I worked with years ago left a comment: “So much for years of craft and imagination... I didn’t sign up for this.”
He’s right. None of us signed up for it. But the reactions kept surfacing the same contradiction: people who agree the shift is happening, yet still open Figma every morning and draw pictures of software. That’s the thing I’ve been chewing on all week—why the default workflow is so sticky even when everyone can see it changing.
For decades, the job has been: design in one tool, hand off to engineers who rebuild it in another. We draw pictures of apps. We sweat over pixels in those pictures. Then someone else translates them into the thing that actually ships. Every stage of that handoff generates waste—the alignment meetings, the redlines, the QA passes comparing mockups to code. Laura Klein’s NN/g piece on empowered teams catalogs where all that overhead lands: PMs spending 70% of their time coordinating, fragmented squads producing products that feel like they were designed by strangers. That’s what happens when the picture and the product are two different artifacts maintained by two different groups of people.
Some companies have stopped doing it that way entirely. At Intercom, every designer now ships code to production. Zero did 18 months ago. The CPO’s test for any role: what would a startup founded today do here? Over at Anthropic, teams group-sculpt living prototypes with no spec and no roadmap beyond 90 days—and shipped Claude Cowork ten days after someone first had the idea. Dan Shipper’s Every runs four products with single-person teams and 99% AI-written code. Amazon’s two-pizza team just became a two-slice team. In all of these, the designer works in the final medium. No pictures. No handoff.
So why is everyone else still drawing? Part of it is that the advice hasn’t caught up. A UX Magazine piece told designers to sharpen their critical thinking and be the conscience in the room—the kind of thing you can agree with without changing anything about your day-to-day. Jan Tegze’s piece on job shrinkage gets at why that advice doesn’t land. He quotes a CEO: “Our senior people and our junior people are equally lost when we ask them what we should do. The seniors are just more articulate about their uncertainty.” The mockup workflow felt like the job. Letting go of it means admitting that a lot of what filled the day was production, not strategy.
The piece I’d actually hand a designer is Tommaso Nervegna’s Claude Code guide, which shows what working in the final medium looks like in practice: spec-driven development through conversation with an AI agent, design process applied to code instead of Figma. The designer’s value is in the questions asked before any code gets written. That’s the craft now—not the mockup, but the specification and the judgment call.
Matt Shumer wrote a piece saying that the disruption tech workers are living through is heading for every knowledge-work profession. Design sits in an interesting middle—the interfaces people see and touch still need human judgment, but the production layer underneath keeps compressing.
Here’s what I keep coming back to. A designer can read this newsletter, agree with all of it, and still open Figma on Monday and draw pictures of software exactly the way they did last year. The shift demands more than updated beliefs. It demands picking up the actual material.
ASCII Me
Over the past couple of months, I’ve noticed a wave of ASCII-related projects show up on my feeds. WTH is ASCII? It’s the basic set of 128 characters—letters, numbers, and symbols—that old-school computers agreed to use for text.
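For the curious, the standard is small enough to poke at in a few lines. A minimal sketch in Python, using the built-in `ord()` and `chr()` conversions:

```python
# ASCII is a 7-bit standard: codes 0-127 map to characters.
# ord() gives a character's code; chr() goes the other way.
print(ord("A"))   # 65
print(chr(97))    # a

# Codes 0-31 (and 127) are invisible control characters; the
# printable set runs from 32 (space) through 126 (~): 95 in all.
printable = "".join(chr(code) for code in range(32, 127))
print(len(printable))  # 95
```

Everything you see in a terminal UI or an ASCII-art logo is built from those 95 printable characters.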
I think it’s sort of a halo effect from Claude Code and the nostalgia designers and developers have for text-based terminals. Anyway, I wrote a roundup of stuff that’s caught my eye.
What I’m Consuming
Frames That Force Us to Look. A piece connecting the 1972 Napalm Girl photo to the image of five-year-old Liam Conejo Ramos being taken by ICE agents—and why still photographs move public opinion in ways video can’t. Deb Aldrich traces the lineage through Kent State and Pete Souza’s Obama-era work, then warns that government-manipulated images are actively undermining photographic trust. (Susan Milligan / PRINT Magazine)
Agentic Lovemarks: How Brands Can Top Both Human and AI-Driven Shortlists. Arjan Kapteijns builds on Thomas Marzano’s Brand Constitutions manifesto to ask what happens to Kevin Roberts’ classic Lovemarks framework when AI agents start mediating brand discovery. His argument: brands now need to speak to two audiences simultaneously—the humans whose hearts they want to win and the intelligent systems acting on those humans’ behalf. Love stays human, but respect has to become machine-readable. (Arjan Kapteijns / Brandingmag)
These 3 ‘Addictive’ Social Media UX Features Are on Trial. Grace Snelling covers the Los Angeles lawsuit arguing that infinite scroll, ephemeral content, and algorithmic recommendations are deliberately engineered to be addictive—especially to kids. Snap and TikTok already settled; Meta and Google are the remaining defendants. The legal theory frames social media UX as a public nuisance, borrowing from the playbook used against the cigarette industry. (Grace Snelling / Fast Company)
Who Cares if Matt Damon’s ‘Odyssey’ Helmet Is Historically Accurate? (Gift link) Paul McAdory examines why audiences nitpick historical accuracy in film adaptations of Homer and Brontë, arguing that the obsession with fidelity reflects something deeper—a desire for stable reference points in an unstable present. When you can’t control where the world is heading, policing whether Odysseus has the right helmet becomes a strange kind of comfort. (Paul McAdory / The New York Times)