Generative AI is reshaping creativity and strategy in tandem: Jeremy Utley’s “idea-flow” lens shows that large language models (LLMs) multiply the volume and variance of ideas, yet teams still underperform when they treat the machine as a clairvoyant instead of a colleague. My own practice frames this dynamic as synthetic cognition—an extended, embodied mind formed by dancers, datasets, and algorithms in the same improvisational loop. Blending Utley’s principles of experimentation, thought-partnership, and anti-satisficing with the performance-research methods behind Born in Latent Space, the framework below offers a theory of how AI can unlock radical imagination and rigorous strategy.
Utley argues that the only sustainable metric for innovation is the throughput of potential solutions—what he calls idea flow. In embodied art contexts, that throughput is not textual but multimodal: gestures, images, sensor data. When these modalities circulate through an LLM, they constitute what cognitive scientists call an extended mind—a mesh in which thinking is distributed across bodies and artifacts. Thus, idea flow becomes synthetic cognition, a dance of human and machine attention.
Herbert Simon’s concept of satisficing explains why many AI-assisted teams plateau at the first plausible output. Utley’s research with Kian Gohar shows the same trap: in a field experiment, most groups using ChatGPT stopped iterating too soon and performed worse than control groups. The cure is deliberate divergence—cycling prompts, roles, and constraints until novelty metrics peak. Wharton scholars confirm that structured prompting can double idea variance in product-ideation tasks.
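The deliberate-divergence cycle described above can be sketched in code. This is a minimal illustration, not a prescribed implementation: `ask_llm` is a hypothetical stand-in for any chat-completion call, and the “novelty metric” is simplified to the count of ideas not seen in earlier rounds.

```python
import itertools

def divergence_loop(ask_llm, base_prompt, roles, constraints, rounds=3):
    """Cycle role/constraint combinations, keeping only previously unseen ideas.

    ask_llm: hypothetical callable taking a prompt string and returning a
    list of idea strings (swap in any real LLM client here).
    Stops early once per-round novelty collapses below half the prior round.
    """
    seen = set()
    novelty_per_round = []
    for _ in range(rounds):
        new_ideas = 0
        # Cycle every role x constraint pairing to force varied framings.
        for role, constraint in itertools.product(roles, constraints):
            prompt = f"As a {role}, {base_prompt} Constraint: {constraint}."
            for idea in ask_llm(prompt):
                key = idea.strip().lower()
                if key not in seen:
                    seen.add(key)
                    new_ideas += 1
        novelty_per_round.append(new_ideas)
        # Anti-satisficing guard inverted: stop only when divergence is exhausted,
        # never after the first plausible batch.
        if len(novelty_per_round) >= 2 and new_ideas < novelty_per_round[-2] // 2:
            break
    return sorted(seen), novelty_per_round
```

The design choice worth noting is that the loop’s stopping rule is tied to measured novelty, not to subjective satisfaction with the first batch, which is precisely the satisficing trap Utley and Gohar document.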
Ben Franklin’s Junto club inspired Utley’s monthly CEO cohorts: a space where leaders share unfinished experiments rather than polished decks. Our framework generalizes this into a KPI: ten AI experiments per team per quarter. McKinsey’s 2025 survey finds that firms progressing fastest toward AI maturity make experimentation a leadership mandate rather than an optional skunkworks. Failure to run trials—not failure of the trials themselves—becomes the punishable offense.
Utley popularizes a role-based “thought-partner” prompt that forces the model to interrogate you one question at a time before giving advice, then attack its own reasoning. Role prompting is now a recognized technique for increasing depth and precision in LLM output. Executives who feed their board decks through such multi-role chains predict 91% of directors’ questions in advance—a non-obvious but high-impact use of generative AI. In creative labs, I adapt the same pattern: a dramaturg-GPT critiques narrative arcs, while a choreographer-GPT proposes counter-movements in real time.
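The thought-partner pattern is really just a prompt template with three stages: interrogate, advise, self-attack. A hedged sketch of such a template builder follows; the role names and wording are illustrative, not a fixed formula from Utley’s work.

```python
def thought_partner_prompt(role, topic):
    """Build a three-stage thought-partner instruction for an LLM:
    (1) interview the user one question at a time,
    (2) only then give a recommendation,
    (3) finally attack its own reasoning."""
    return (
        f"You are my {role}. Before giving any advice on {topic}, "
        "interview me one question at a time, waiting for my answer "
        "after each question. Only when you have enough context, give "
        "your recommendation. Then attack your own reasoning: name the "
        "three weakest assumptions behind your advice."
    )
```

In a multi-role chain, the same builder can be called with different roles (for example, a dramaturg and a choreographer) and the outputs of one pass fed as context into the next, which is how a board deck can be stress-tested from several directors’ perspectives before the meeting.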
Psychology and philosophy on the extended-mind thesis warn that outsourced cognition can erode critical thought if unexamined. BCG’s multi-industry study likewise shows that value is destroyed when users blindly trust the model in domains where it is weakest. Therefore, every synthetic-cognition cycle must include a reflexive pass: an agent (or human peer) critiques biases, provenance, and power relations in the generated material. This step resonates with decolonial aims in my Pangea AI project, ensuring that the choreography of ideas remains plural and responsible.
Jeremy Utley’s mantra—use AI to use AI—meets the performer’s credo of improvise with what shows up. When we entwine those logics, AI becomes neither a mere accelerator nor a threat but a co-creative infrastructure that stretches how organizations and artists think, decide, and dream. By institutionalizing experimentation, rejecting satisficing, and embracing synthetic cognition, we convert fleeting sparks of machine-human interplay into sustained strategic advantage and cultural insight.