Texture 112 by mercurycode
I've been dabbling with AI here and there, but my ethical stance, and a fear of losing something precious, have held me back. As a designer, a socialist, and a human worker, among other things, I have not welcomed the great imposition of AI.
Yesterday I connected Claude Code to Figma through an MCP (Model Context Protocol) integration in VS Code. I watched it read component names, infer design intent, and generate in moments code that would otherwise have taken me days or weeks to build. It was useful. Annoyingly useful.
—
I've been vocally against this for longer than I'd like to admit. Not any one tool, but the whole thing: the speed, the money, the way it's being built and deployed, and the harms that are real and structural and being absorbed somewhere other than by the people extracting the value. Data centres consuming essential resources. The cognitive load shifting away from thinking, toward filtering. Junior designers, already squeezed, competing with tools trained on our labour. Already-fragile job markets getting messier. I still think all of that. Nothing I've seen diminishes my concerns.
What's changed is something less comfortable.
—
I listened to Simon Willison on a podcast recently (youtu.be/wc8FBhQtdsA?...) and he said: "The only universal skill is being able to roll with the changes." It stuck with me, not because it was reassuring, but because it was honest about what's actually required right now.
There's a quote often attributed to Keynes, probably misattributed: "When the facts change, I change my mind. What do you do, sir?" The framing is useful regardless of who said it. Intellectual honesty means updating when the evidence changes. The fact that I've been loud about a position doesn't exempt me from that. Changing my mind isn't hypocrisy in itself. I'm not even fully sure I have changed it.
But I can see that refusing to change it, or even to engage honestly with the topic when the situation has clearly shifted, would be.
—
The structural harms haven't changed. They're still happening, still being built in. What changed is what refusing to engage actually achieves.
Stepping back doesn't slow the extraction and enshittification. It doesn't protect junior designers or stabilise labour markets. It doesn't solve the climate problem. But what it will do is remove me from a field I've been in for nearly two decades, at the exact moment when people with a critical perspective might have something useful to contribute.
I could hold a line on principle. I understand why people do. But I'd rather understand the leverage points from inside than commentate from outside. I'm thinking about how to keep this human in the loop.
—
Not because the problems aren't real. But because I need to understand how this works, where the leverage points are, what it actually means to work with these tools intentionally rather than get swept along by them.
This publication will be my field journal, documenting the adoption of AI in my design practice. What I'm learning, and losing. The problems I'm hitting and the wins I'm finding. The lines I'm drawing and the discipline I'm trying to keep.
When AI helps me work through a design problem, I want to stay the one deciding whether it's right. The one responsible for what ships. Good design systems thinking turns out to be useful preparation here: clarity about structure, intent, and constraint. That's what keeps the work intentional when the tools are always pushing toward more automation. I'm not claiming I've figured this out. I'm trying not to pretend that disengaging is a coherent position.
So I'm leaning into it all. Or perhaps giving in. I haven't figured that out yet.