The Reps You Can’t Skip
The work AI makes easiest to skip is the work that matters most.
I’ve been thinking about what actually makes someone good at their job. Not competent, but good. The kind of good where you look at a screen and know something is off before you can articulate why. The kind where you can tell a product decision is wrong from across the room.
Christina Wodtke calls this “compressed experience”—years of pattern recognition folded into a split-second gut reaction. A seasoned designer isn’t being mystical when they sense a flow is broken. They’ve just done it enough times that the signals are automatic. And the only way to build that compression is reps. Not reading about reps. Actual reps.
Peter Yang landed on the same idea from a different direction. His 25 product beliefs after a decade at Roblox, Reddit, Amazon, and Meta boil down to a claim that makes credentialists uncomfortable: the only things that matter are what you’ve shipped and your ideas for improving the product. He estimates fewer than 10% of PMs actually dogfood their own product weekly. If you’re not paying attention to the thing you ship, you’re not building judgment. You’re just accumulating tenure.
Now add AI to this. Anthropic published a randomized controlled trial on junior engineers learning a new Python library. The group using AI assistance scored 17% lower than those who coded by hand. They didn’t finish meaningfully faster, either. The biggest gap was in debugging—the exact skill you need most when your job is overseeing AI output. Anthropic’s researchers said it plainly: AI may accelerate productivity on skills you already have while hindering the acquisition of new ones.
Daniel Miessler put this more viscerally, riffing on a Karpathy interview about entropy. Adults “collapse” over time—revisiting the same thoughts, narrowing their aperture, ossifying. AI makes it worse. You’re outsourcing your thinking to a system that learned by averaging the internet. The outputs hold up at first glance. But nothing in there will surprise anyone because the model optimizes for the most statistically probable next token.
So how do you stay sharp while using the tools? Daniel Rosenberg has one approach: design the conceptual model before you touch a screen. Objects are nouns. Actions are verbs. Attributes are adjectives. Your interface is a language before it’s a layout, and if you get the grammar wrong, no amount of visual polish saves you. Jumping straight to screens means making those language decisions implicitly, without realizing it. Rosenberg’s argument is that you should write the grammar first—literally work in text before pixels.
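Rosenberg works in plain text rather than code, but to make the grammar-first idea concrete, here is a minimal sketch of a conceptual model for a hypothetical playlist feature, with TypeScript types used purely as notation; the domain and every name in it are invented for illustration, not taken from his method.

```typescript
// Hypothetical conceptual model, written before any screen exists.
// Nouns become objects, adjectives become attributes, verbs become actions.

// Nouns: the things a user can point at and talk about.
interface Track {
  title: string;
  durationSeconds: number;
}

interface Playlist {
  // Adjectives: attributes that describe the noun.
  name: string;
  isCollaborative: boolean;
  tracks: Track[];
}

// Verbs: what a user can do to those nouns.
// If a verb is hard to name, the grammar is broken before a single pixel is drawn.
interface PlaylistActions {
  addTrack(playlist: Playlist, track: Track): Playlist;
  removeTrack(playlist: Playlist, track: Track): Playlist;
  reorder(playlist: Playlist, from: number, to: number): Playlist;
  share(playlist: Playlist, withUser: string): void;
}
```

The value isn’t the type system; it’s that a missing verb or an unnameable noun shows up here, in text, where it’s cheap to argue about.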
Patrick Morgan comes at it differently: externalize your reasoning in plain text so AI can actually work with it. Three Markdown files—process, taste, raw thinking. Your judgment has to exist somewhere outside your head, and it has to be legible. Designers externalize visual thinking all the time with moodboards and component libraries. But we rarely write down why we made the choices we made. Do that, and AI amplifies your judgment instead of replacing it.
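As a rough illustration of what that legibility might look like (the three file names are Morgan’s; everything inside is invented), a “taste” file can be as plain as this:

```markdown
<!-- taste.md: hypothetical contents for illustration, not Morgan's actual file -->
# Taste

- Prefer one obvious path over three configurable ones.
- Empty states should teach; never ship a blank screen without a next step.
- If the microcopy needs a tooltip, rewrite the microcopy.

# Past calls, and why

- Cut the onboarding carousel: people skipped it, so the two facts that mattered moved to the first screen.
```

The specifics matter less than the fact that they exist outside your head: once your preferences and the reasons behind them live in a file, you can hand them to a model as context instead of hoping it infers them.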
Erika Flowers makes the organizational version of this argument. On the Invisible Machines podcast, she compares AI adoption to roofing: before you can install anything, you spend a week building scaffolding, setting up tarps, rigging safety harnesses. Nobody wants to fund that part. Everyone wants the flashy use case. But the scaffolding—data integration, governance, connected workflows—is what makes the use case actually work.
All of it comes back to the same thing. The slow, sometimes painful work that builds real understanding is the work AI makes easiest to skip. And skipping it has a cost that doesn’t show up until later—when you’re debugging output you don’t understand, or shipping a product you can’t explain, or deploying an AI feature on top of infrastructure that was never ready for it.
I keep coming back to a line from Anthropic’s study: “Cognitive effort—and even getting painfully stuck—is likely important for fostering mastery.” Getting stuck is the apprenticeship. The reps are the point. Use AI to do more of them, not fewer.
What Wall Street Gets Wrong About SaaS and What’s Next
I wrote two pieces on the SaaS market. What Wall Street Gets Wrong About SaaS makes the case that AI won’t kill enterprise software, for the same reason Zapier didn’t kill systems integrators and Squarespace didn’t kill web designers—capability doesn’t equal DIY, and every wave that was supposed to eliminate professional work ended up expanding the market instead. Then in What’s Next in Vertical SaaS, I pulled that thread forward with Charlie Warren, Eli Dukes, and my colleague Duncan Grazier: if SaaS survives, where does defensibility live next? Their answer is the data recipe—the specific chain of decisions, intent, and context that turns raw information into economically valuable results. The model layer isn’t the moat anymore. The thinking behind the output is.
What I’m Consuming
A History of the Permalink. In early 2000, Jason Kottke added permanent links to individual blog posts, a move Caroline Van Oosten highlighted on prolific.org, helping popularize the idea of persistent URLs. Folks at Blogger, Matt Haughey, Evan Williams, and Paul Bausch developed a code solution using anchor IDs to link to single entries, enabling true permalinks across blogs. By late March 2000, Bausch had published an official guide, and the concept spread, ultimately evolving into what we now call permalinks. (Jay / The History of the Web)
Does Software Piracy Exist? Matthew Butterick, a type designer, argues the optimal level of software piracy is greater than zero. Zero piracy means either you never shipped or you gave it away free—both produce $0 in revenue. Maximum revenue happens in the middle, where some piracy occurs. And most people downloading your fonts from pirate sites were never going to pay anyway. (Matthew Butterick)
The Moylan Arrow of Software. After the death of James Moylan, the designer of that little arrow on your fuel gauge showing which side the gas cap is on, Marcin Wichary asks: what’s the software equivalent? His pick is iOS’s Security Code AutoFill—it benefits everyone, solves a real point of friction, shows up at the right time, and once you know it’s there, you love it forever. (Marcin Wichary / Unsung)
Wes Cook and the Centralia McDonald’s Mural. The full story behind Cabel Sasser’s XOXO 2024 talk: a decade-long effort to save a hand-painted McDonald’s mural from a remodel, a canvas that peeled right off the wall, a hidden “COMING SOON!” message from 1980, and a suspiciously detailed cow udder that nearly triggered a lawsuit. A wonderful read. (Cabel Sasser)