Stake or Slop
Your professional reputation is what gets you to push back on the model.
Every AI chat window carries the same warning. “Claude is AI and can make mistakes. Please double-check responses.” ChatGPT runs a version of it under every chat box. Gemini does the same. Many of us read it once and then stop seeing it. The reason isn’t laziness. Checking the model’s output is work, and work without stake doesn’t get done. Raj Nandan Sharma made the case that taste at the end of a pipeline is fragile. The selector who stands there picking the best of ten AI drafts has no skin in the result. This week’s posts pushed me to look closer at what does put skin in the game. The answer that kept coming back is stake: your professional reputation attached to the work.
Authorship requires saying yes or no. Kieran Klaassen puts it this way on Dan Shipper’s AI & I: “If you ship something—if you make a statement in the world—and you want it to be your own, you have to say yes or no at some point. You cannot fully automate everything.” Klaassen built the compound engineering plugin, so he isn’t romantic about handcraft; he is making an art-and-ownership point. Authorship lives in the yes-or-no moments at the start and end of the workflow. Without those decisions, the output is yours only in the sense that you typed the prompt.
The macro version shows up on Decoder, where Nilay Patel reads through the polls on AI sentiment. Public favor for AI sits below ICE. Gen Z, the heaviest users, are also the most negative: 31% feel angry about AI, up from 22% the year before, per Gallup. Sam Altman has called this AI’s marketing problem. Patel rebuts him, and I think Patel is right. The public has been served years of output that obviously had no one’s name on it, and they have stopped pretending it is worth their attention. Slop is now ubiquitous.
The aesthetic version has a vocabulary now too. Matt Ström-Awn’s “expansion artifacts” picks up Ted Chiang’s three-year-old “blurry JPEG” line and turns it inside out. Chiang called the failures compression artifacts. Ström-Awn writes, “I think they’re expansion artifacts.” His catalog runs from text stuffed with tell-tale AI vocabulary like “delve” and “tapestry” to the six-fingered hands and purple gradients in the visual outputs. These are the visible tells of work that nobody pushed back on. Karri Saarinen makes the same point from the design side. He writes that design is the search for fit between form and context, where context is the full set of forces (needs, constraints, edge cases) that make a problem what it is. Today’s prompt-to-code tools produce form against a thin slice of that context. “The form is there,” Saarinen writes. “The fit is not.” The form is now cheap. The fit is the part that still takes stake.
So who actually has stake? Cat Wu, Head of Product for Claude Code, describes a hiring filter built around it. Anthropic hires engineers with product taste who can “see user feedback on Twitter through to ship a product at the end of the week with almost no product involvement.” These are people whose name is on the ticket from intake to ship.
Designers get a sharper version of the same opportunity. Coding agents have closed the gap between idea and working interface for non-programmers, Andy Matuschak argues. For forty years, designers couldn’t work with the code that turns a static mockup into something interactive. They could only describe the behavior they wanted and hand it off. Now they can iterate on the actual material—the running interface, not a flat picture of one—and take stake in the result.
At the team scale, Maggie Appleton argues that alignment is the new bottleneck. She’s a staff research engineer at GitHub Next, where she’s building Ace, a multiplayer coding workspace designed for the fact that today’s AI coding tools are single-player even though software is built by teams. When implementation is cheap, the cost moves to picking what to build. That picking only counts when the team holds shared stake in the answer.
The warning at the bottom of every chat window is correct. Many of us scroll past it because nothing depends on us reading it. The work that travels—the work people defend in a meeting, the work that gets shared because someone vouched for it—is the work somebody put their name on. That has been true for decades. AI did not change it. AI just made it possible to ship a lot of work that does not.
What I’m Consuming
Field Notes from the In-Between. Kris Puckett spent twenty years wanting to build software and not building it. He kept opening Xcode and closing it. Then he opened Claude and asked. Fourteen thousand lines of Swift later, his iOS app Epilogue is in the App Store. Puckett is living the shift Andy Matuschak argues for above: the bottleneck used to be coding ability, and now it’s articulation. “The skill is being precise about what you don’t know.” (Kris Puckett)
How I Designed a Free Music Font for 5 Million Musicians. This is the kind of design video I could watch all day. Tantacrul, MuseScore’s head of design, takes you inside the obsession behind Leland, the new default notation font. The treble clef alone went through multiple revisions because every change that looked correct in isolation broke in context against the staff lines and other symbols. Leland is named for Leland Smith, the late Stanford composer who built SCORE in Fortran in the 1970s and drew hundreds of notation symbols by hand on a 31,000-vector display. The whole video is a love letter to a craft we rarely get to see up close. (Tantacrul)
The Beauty of Bézier Curves. Freya Holmér builds cubic béziers from first principles in twenty visual minutes. She moves from the basic lerp construction to De Casteljau’s algorithm, then to the polynomial form, then to derivatives: velocity, acceleration, and jerk. The arc-length parameterization section is where the curves stop being beautiful and start being approximations, and Holmér is honest about it. Useful for designers who want to understand what their pen tool is actually doing. (Freya Holmér)
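The lerp-to-De-Casteljau construction Holmér animates is compact enough to sketch in a few lines. This is not code from the video, just a minimal Python illustration: repeated linear interpolation collapses the control polygon to a single point on the curve, and for the cubic case the Bernstein polynomial form lands on the same point.

```python
def lerp(a, b, t):
    """Linear interpolation between points a and b at parameter t."""
    return tuple(pa + (pb - pa) * t for pa, pb in zip(a, b))

def de_casteljau(points, t):
    """Evaluate a Bezier curve of any degree at t by repeatedly
    lerping adjacent control points until one point remains."""
    while len(points) > 1:
        points = [lerp(points[i], points[i + 1], t)
                  for i in range(len(points) - 1)]
    return points[0]

def cubic_bezier(p0, p1, p2, p3, t):
    """The same cubic curve in polynomial (Bernstein basis) form."""
    u = 1 - t
    return tuple(
        u**3 * a + 3 * u**2 * t * b + 3 * u * t**2 * c + t**3 * d
        for a, b, c, d in zip(p0, p1, p2, p3)
    )

# The two constructions agree at every t:
ctrl = [(0.0, 0.0), (0.0, 1.0), (1.0, 1.0), (1.0, 0.0)]
print(de_casteljau(ctrl, 0.5))   # (0.5, 0.75)
print(cubic_bezier(*ctrl, 0.5))  # (0.5, 0.75)
```

The arc-length problem Holmér flags shows up the moment you try to move along this curve at constant speed: equal steps in t do not give equal steps in distance, which is why implementations fall back on numeric approximation.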
Vertical AI Maximalism. Charlie Warren argues the wedge product is over for vertical AI. The 2010s playbook (start narrow, integrate with the incumbent, expand from there) assumed building software was hard and incumbents were friendly. Both have collapsed. Vertical SaaS multiples halved earlier this year, and incumbents are pulling API access. Warren’s prescription: build the full agent-native platform that replaces the incumbent rather than a wedge that depends on its cooperation. (Charlie Warren)
Who Owns the Code Claude Wrote? On March 31, Anthropic accidentally leaked Claude Code’s source. A developer used Claude to rewrite the whole thing as “claw-code,” and it hit 100,000 GitHub stars in a single day, the fastest repo in GitHub history. Anthropic issued DMCA takedowns. But by their own lead engineer’s admission, much of Claude Code was written by Claude itself. Can Anthropic actually claim copyright on code mostly written by an AI? Legal Layer pulls on that thread and a couple of others, including whether your employer’s IP clause reaches the side projects you built with company-licensed AI tools, and whether GPL-trained models can quietly mix copyleft licenses into your codebase. (Legal Layer)



