Using Humble Creative Machines
Plus: How AI is reshaping UX design, the state of gen AI in the enterprise, and what "6-7" reveals about childhood creativity.
Early last week I listened to a wonderful episode of Design of AI featuring Dr. Maya Ackerman. She echoed a lot of what I’ve been thinking about recently: how AI can augment what we as designers and creatives can do. There’s a ton of content out there that hypes up AI and how it can do the job of a specialist like a marketer or designer. They proclaim, “Just type this prompt and instantly get a marketing plan!” or “Type this prompt and get an entire website!”
Ackerman, as interviewed by Arpy Dragffy-Guerrero:
I have a model I developed which is called “humble creative machines” which is [the] idea that we are inherently much smarter than the AI. We have not reached even 10% of our capacity as creative human beings. And the role of AI in this ecosystem is not to become better than us but to help elevate us. That applies to people who design AI, of course, because a lot of the ways that AI is designed these days, you can tell you’re cut out of the loop. But on the other hand, some of the most creative people, those who are using AI in the most beneficial way, take this attitude themselves. They fight to stay in charge. They find ways to have the AI serve their purposes instead of treating it like an all-knowing oracle. So really, it’s sort of the audacity, the guts to believe that you are smarter than this so-called oracle, right? It’s this confidence to lead, to demand that things go your way when you’re using AI.
Her stance is that those who use AI best are the ones who wield it and shape its output to match their sensibilities. And so, as we’ve been hearing ad nauseam, our taste and judgement as designers really matter right now.
I’ve been playing a lot with ComfyUI recently—I’m working on a personal project that I’ll share if/when I finish it. But it made me realize that prompting a visual to get it to match what I have in my mind’s eye is not easy. This recent Instagram reel from famed designer Jessica Walsh captures my thoughts well:
I would say most AI output is shitty. People just assumed, “Oh, you rendered that in AI. That must have been super easy.” But what they don’t realize is that it took an entire day of some of our most creative people working and pushing the different prompts and trying different tools out and experimenting and refining. And you need a good eye to understand how to curate and pick what the best outputs are. Without that right now, AI is still pretty worthless.
It takes a ton of time to get AI output to look great, and it goes well beyond prompting: inpainting, control nets, and even cleanup in Photoshop. What most non-professionals do is take the first output from an LLM or image generator and present it as great. But it’s really not.
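To make that concrete, here’s a rough sketch of what a single refinement pass can look like in code, using the open-source diffusers library. This is my own illustration, with made-up file names, mask, and prompts rather than anything from Ackerman’s episode or Walsh’s studio; a real project stacks many passes like this, plus ControlNet guidance and manual cleanup, before anything looks presentable.

```python
# Hypothetical example of one inpainting pass over a draft image.
# Model choice, file names, and prompts are illustrative assumptions.
import torch
from diffusers import StableDiffusionInpaintPipeline
from PIL import Image

# Load a public inpainting checkpoint.
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-inpainting",
    torch_dtype=torch.float16,
).to("cuda")

# The draft image, plus a mask marking the region to regenerate (white = redo).
image = Image.open("draft.png").convert("RGB").resize((512, 512))
mask = Image.open("mask.png").convert("RGB").resize((512, 512))

# Re-prompt only the masked region; the rest of the composition stays untouched.
result = pipe(
    prompt="detailed hands, natural window light, editorial photography",
    negative_prompt="blurry, extra fingers, distorted anatomy",
    image=image,
    mask_image=mask,
    num_inference_steps=40,
    guidance_scale=7.5,
).images[0]

result.save("draft_v2.png")  # judge it, adjust the mask or prompt, and repeat
```

Each pass is cheap; the expensive part is the human in the loop deciding what’s still wrong, which is exactly the taste and judgement Walsh and Ackerman are talking about.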
So I like what Dr. Ackerman mentioned in her episode: we should be in control of the humble machines, not the other way around.
Highlighted Links
Silicon clay: how AI is reshaping UX design
Andrew Tipp does a deep dive into academic research to see how AI is actually being used in UX. He finds that practitioners are primarily using AI for testing and discovery: predicting UX, finding issues, and shaping user insights.
The highest usage of AI in UX design is in the testing phase, suggests one of our 2025 systematic reviews. According to this paper, 58% of studied AI usage in UX is in either the testing or discovery stage. This maybe shouldn’t be surprising, considering generative AI for visual ideation and UI prototyping has lagged behind text generation.
But, in his conclusion, Tipp echoes Dr. Maya Ackerman’s notion of wielding AI as a tool to augment our work:
However, there are potential drawbacks if AI usage in UX design is over-relied on, and used mindlessly. Without sufficient critical thinking, we can easily end up with generic, biased designs that don’t actually solve user problems. In some cases, we might even spend too much time on prompting and vibing with AI when we could have simply sketched or prototyped something ourselves — creating more sense of ownership in the process.
2025: The State of Generative AI in the Enterprise
There’s a lot of chatter in the news these days about the AI bubble. Most of it stems from the circular nature of the deals among foundation model providers like OpenAI and Anthropic, cloud providers like Microsoft and Amazon, and NVIDIA.
OpenAI recently published a report called “The state of enterprise AI” where they said:
The picture that emerges is clear: enterprise AI adoption is accelerating not just in breadth, but in depth. It is reshaping how people work, how teams collaborate, and how organizations build and deliver products.
AI use in enterprises is both scaling and maturing: activity is up eight-fold in weekly messages, with workers sending 30% more, and structured workflows rising 19x. More advanced reasoning is being integrated—with token usage up 320x—signaling a shift from quick questions to deeper, repeatable work across both breadth and depth.
Investors at Menlo Ventures are also seeing positive signs in their data, especially when it comes to the tech space outside the frontier labs:
The concerns aren’t unfounded given the magnitude of the numbers being thrown around. But the demand side tells a different story: Our latest market data shows broad adoption, real revenue, and productivity gains at scale, signaling a boom versus a bubble.
AI has been hyped in the enterprise for the last three years: from deploying quickly built chatbots, to outfitting those bots with RAG search, and, more recently, to shifting towards agentic AI. What Menlo Ventures’ report “The State of Generative AI in the Enterprise” shows is that companies are moving away from rolling their own AI solutions internally and toward buying them.
In 2024, [confidence that teams could handle everything in-house] still showed in the data: 47% of AI solutions were built internally, 53% purchased. Today, 76% of AI use cases are purchased rather than built internally. Despite continued strong investments in internal builds, ready-made AI solutions are reaching production more quickly and demonstrating immediate value while enterprise tech stacks continue to mature.
Also, startups offering AI solutions are winning wallet share:
At the AI application layer, startups have pulled decisively ahead. This year, according to our data, they captured nearly $2 in revenue for every $1 earned by incumbents—63% of the market, up from 36% last year when enterprises still held the lead.
On paper, this shouldn’t be happening. Incumbents have entrenched distribution, data moats, deep enterprise relationships, scaled sales teams, and massive balance sheets. Yet, in practice, AI-native startups are out-executing much larger competitors across some of the fastest-growing app categories.
How? They cite three reasons:
Product and engineering: Startups win the coding category because they ship faster and stay model‑agnostic, which let Cursor beat Copilot on repo context, multi‑file edits, diff approvals, and natural language commands—and that momentum pulled it into the enterprise.
Sales: Teams choose Clay and Actively because they own the off‑CRM work—research, personalization, and enrichment—and become the interface reps actually use, with a clear path to replacing the system of record.
Finance and operations: Accuracy requirements stall incumbents, creating space for Rillet, Campfire, and Numeric to build AI‑first ERPs with real‑time automation and win downmarket where speed matters.
There’s a lot more in the report, so it’s worth a full read.
What I’m Consuming
What ‘67’ Reveals About Childhood Creativity. My Gen Z kids are sick of “6-7.” If you’ve been living under a rock, the phrase is a contemporary trend among Gen Alpha that reflects a long-standing tradition of children’s folklore and vernacular. It shows that kids continue to invent and share their own language and memes, adapting them rapidly in the digital age, much as they always have. (Allegra Rosenberg / Atlas Obscura)
The Reverse-Centaur’s Guide to Criticizing AI. The current AI narrative emphasizes job replacement, creating an investment bubble driven by tech companies’ need for continuous growth. Cory Doctorow argues that AI tools are designed to create “reverse centaurs” (humans serving machines) rather than to assist humans, and he critiques the expansion of copyright as a solution for creative professionals. Instead, he proposes advocating for sectoral bargaining rights to protect workers against AI and market monopolies. (Cory Doctorow / The Pluralistic)
The Spark. Fun, immersive—but short—interactive story by Digital Panda, a digital design agency. Reminds me of something that would have been featured as a site of the day on The FWA back in the Flash days. (Digital Panda)