Where Creation Starts Now

IDEO’s Zoey Zhu wonders if the real shift in AI and creativity is happening before anyone makes anything.

Four people sit on stage in front of a Future House backdrop, participating in a panel discussion. A screen displays event details and headshots, while an audience watches. The lighting is bright and colorful.
All images: IDEO/​Zoey Zhu

Topics: AI, Innovation

Companies: IDEO

Somewhere between the third panel on AI and creativity and the second free espresso, I realized everyone in the room was having the same argument in a different font. Is it good enough? Can you tell? Who made it, really? The conversation about AI and creativity has been running this loop for years, obsessed with outputs, endlessly relitigating quality, authorship, and authenticity.

But I spent two days at SXSW FuturePIXEL House in rooms full of filmmakers, musicians, and designers, and I kept noticing something else: the most charged moments weren’t about what AI produces. They were about what happens before anything is made — that disorienting space between having a feeling and knowing what to do with it.

That’s where things are shifting, and I think the shift matters more than anyone’s giving it credit for.

Creators aren’t primarily using tools like Midjourney or Adobe Firefly to finish things. They’re using them to think. To pressure-test an instinct before investing in it. To externalize a vague feeling before they know how to build it. What matters isn’t the output; it’s what the output provokes.

Figma has evolved into an ideation surface, a place to work through structure and relationships before anything is committed. Runway is functioning more as a mirror than a renderer, a surface you think against, not a pipeline you push through.

Here’s the thing I found most counterintuitive: AI’s wrong answers are often more useful than its right ones.

A digital mind map on a black background displays interconnected screenshots, images, and videos, with “BABEL” labeled centrally, suggesting a visual project outline with multiple media elements branching out.

When Runway generates a frame with the wrong light, wrong mood, wrong emotional register, the filmmaker has to articulate what they actually wanted. The wrongness forces a kind of clarity that’s nearly impossible to reach in the abstract. You can’t say “not that” without getting closer to “yes, this.”

AI video tools can produce footage that’s startlingly close to what you imagined. But “startlingly close” is also specifically, usefully wrong, and that gap between an AI’s best guess and a creator’s actual vision is where some of the most interesting creative decisions are being made right now.

(A note written days after SXSW: OpenAI announced it’s shutting Sora down — the Disney deal, the filmmaker community, the workflows built around it, gone almost overnight. Jarring, but also clarifying. Because it points to something the landscape keeps demonstrating: the tools are volatile. They will be built, hyped, adopted, and sometimes shuttered before the ink dries on a partnership agreement. The question that survives every platform shutdown is the same one: What do I actually want to make, and why?)

Meanwhile, the tools gaining ground are becoming more agentic, more embedded, more capable of holding creative logic across time. Adobe’s Firefly Custom Models now let creators train image generators on their own visual identity. What’s happening isn’t a handoff from human to AI. It’s a shift from tool-use to co-authorship of a process — and the resistance the tool provides is itself a design material.

But origination is still entirely human, and that distinction is becoming more important as the tools get better. AI tools offer filmmakers genuine multi-shot narrative control. Variant is treating web creation as a living, evolving process. These are real capabilities. And yet none of them can answer the question underneath the question: Why this story? Why now? For whom?

That answer comes from lived experience, from identity and loss and obsession and decades of paying attention. AI offers a richer surface to work against. The spark still belongs to the person in the room.

Something else surprised me at SXSW: the process itself is becoming the content.

The clearest example was TapTV’s “【SIGN】What Comes After Curiosity?,” an experimental short where the process canvas literally is the film — the thinking, the branching, the dead ends and pivots. It’s racked up millions of views, and I think what that signals is that audiences aren’t just tolerating visible creative process; they’re actively seeking it out.

It’s not an isolated signal either. Midjourney’s most engaged communities share prompting workflows as much as final images. The creative journey is becoming legible, and people are drawn to that legibility in ways that feel new.

In music, the most interesting territory isn’t who writes the song — it’s what happens in the first 20 minutes of trying. The loops, the half-formed ideas, the accidental chord change that becomes the whole record. Tools like Suno are expanding what’s possible in those early moments, not by replacing the creative act but by making the exploratory space around it more productive.

We’ve spent years asking whether AI-made work is “real.” I think the more useful question is: What does AI make possible in the part of creativity that hasn’t been visible until now? The pre-creative space — the fumbling, the instinct-testing, the working-against — has always existed. AI is just making it legible, and sometimes even shareable, for the first time.



IDEO creative technologist Zoey Zhu hosted the panel “Future of Storytelling: Where Creativity Meets Technology” at FuturePIXEL House at SXSW 2026.

A large group of people sit closely together in rows of chairs, attentively watching a presentation or event in a room with blue lighting and white curtains.