
Vectorial Foldings: Poetics, Latent Space, and Choreographic Intelligence

by Marlon Barrios Solano

Introduction

Modern large language models (LLMs) generate text by traversing a high-dimensional latent space of language in a sequential, stepwise fashion. Each new word or line produced by an LLM can be seen as a step along a vector in this latent space. This process bears a striking conceptual resemblance to human creativity and improvisation. Poet William Carlos Williams famously asserted that “unless there is a new mind there cannot be a new line”, highlighting how a shift in mindset is required to produce truly original language. Similarly, choreographer William Forsythe’s experimental dance techniques treat the body as a thinking tool that can generate novel movement through algorithmic improvisation. In both poetic creation and choreographic improvisation, as in LLM-driven text generation, we find a common theme: a linear sequence of actions or words emerging from a space of possibilities, guided by both constraints and creativity. This essay explores the connections between the vectorial organization of language in LLMs, the improvisational nature of human thought (the “mind is flat” thesis), and the choreographic thinking exemplified by Forsythe’s work. Through this interdisciplinary lens, we examine how sequential vectorial linearity in a space of potential outcomes underpins reasoning, creativity, and the emergence of new “lines” of thought.

Large Language Models: Sequential Trajectories in Latent Space

Large language models like GPT-3 or Llama operate in a mathematical latent space, where each token (word or subword) is represented as a high-dimensional vector. Rather than manipulating words directly, the model converts each token into an embedding – essentially a list of numbers – and processes these through numerous neural network layers. The final result is a hidden state vector that encapsulates the context, from which the model predicts the most likely next token. This prediction is then added to the sequence and the process repeats. In effect, the LLM “thinks” by moving from one point to the next in this vector space, generating a sequence of vectors that correspond to a sequence of words. The model’s reasoning process thus takes the form of a path or trajectory through latent space, one token at a time, constrained by learned statistical relationships in language.
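
To make the loop concrete, here is a minimal sketch in Python. Every component is a toy stand-in invented for illustration – random weights, a six-word vocabulary, a mean-of-embeddings “transformer” – not a model of any real LLM; only the shape of the process (embed, contextualize, predict, append, repeat) is the point.

    import numpy as np

    rng = np.random.default_rng(0)
    VOCAB = ["the", "dancer", "steps", "into", "space", "."]
    V, D = len(VOCAB), 16

    E = rng.normal(size=(V, D))  # token embedding table (toy, random)
    W = rng.normal(size=(D, V))  # projection from hidden state to vocabulary logits

    def hidden_state(token_ids):
        """Stand-in for the transformer stack: here, just a mean of embeddings."""
        return E[token_ids].mean(axis=0)

    def next_token(token_ids):
        logits = hidden_state(token_ids) @ W
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()
        return int(rng.choice(V, p=probs))   # one step along the trajectory

    tokens = [0]                             # start from "the"
    for _ in range(5):
        tokens.append(next_token(tokens))    # each output re-enters the context
    print(" ".join(VOCAB[t] for t in tokens))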

Each step in this trajectory is informed by the model’s training on vast amounts of text. Words or tokens that appear in similar contexts end up with similar vector representations, meaning the latent space encodes a rich web of relationships. For example, an LLM’s embedding for “dog” will lie near “cat” but far from “chair,” mirroring the fact that dogs and cats appear in analogous linguistic contexts. This geometry allows the model to make educated guesses about what comes next in a sentence. The latent vectors function somewhat like coordinates on a high-dimensional map: clusters of related meanings form regions (e.g. animals, furniture), and directions in space capture relational semantics (famously, “king” – “man” + “woman” ≈ “queen” in vector arithmetic). In this way, the LLM’s internal vector space is a space of possibilities for language, where traversing one direction versus another yields different continuations of a text.
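
This geometry can be illustrated with a few hand-picked toy vectors. The five-dimensional values below are invented for the example (real embeddings are learned and run to hundreds or thousands of dimensions), but they reproduce both effects named above: neighborhood and analogy.

    import numpy as np

    # Toy axes, loosely: [animal, furniture, royal, male, female]
    emb = {
        "dog":   np.array([1.0, 0.0, 0.0, 0.0, 0.0]),
        "cat":   np.array([0.9, 0.1, 0.0, 0.0, 0.0]),
        "chair": np.array([0.0, 1.0, 0.0, 0.0, 0.0]),
        "king":  np.array([0.0, 0.0, 1.0, 1.0, 0.0]),
        "queen": np.array([0.0, 0.0, 1.0, 0.0, 1.0]),
        "man":   np.array([0.0, 0.0, 0.0, 1.0, 0.0]),
        "woman": np.array([0.0, 0.0, 0.0, 0.0, 1.0]),
    }

    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    print(cosine(emb["dog"], emb["cat"]))    # high: same neighborhood
    print(cosine(emb["dog"], emb["chair"]))  # low: a distant region

    # Vector arithmetic: the king->queen direction parallels man->woman.
    target = emb["king"] - emb["man"] + emb["woman"]
    best = max((w for w in emb if w not in {"king", "man", "woman"}),
               key=lambda w: cosine(emb[w], target))
    print(best)  # -> "queen"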

Critically, the generation process in LLMs is sequential and improvisational. The model has no overarching plan for an entire paragraph; it produces each new token based on what has come before, much like a jazz musician improvises the next bar of music by listening to the previous bars. Researchers have even enhanced LLM performance by prompting models to generate intermediate tokens that resemble a chain of reasoning steps – the so-called “chain-of-thought” technique. In such cases, the LLM is effectively narrating its own stepwise process, mimicking the way a human reasoner might break down a problem. These tokens are still generated via the same next-word prediction mechanism, but they create the illusion of the model reasoning through the problem. The key point is that everything the model does is a result of moving through the latent vector space incrementally; each new “line” of output is the consequence of the model’s internal state (a mathematical vector) evolving in response to input and its own prior outputs.
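
A chain-of-thought prompt is, at bottom, just more text prepended to the sequence. The sketch below shows the pattern; the question, the cue phrase, and the generate function are all invented for illustration (generate stands in for any LLM completion call, not a real API, and here returns a canned continuation of the kind such prompts tend to elicit).

    def generate(prompt: str) -> str:
        """Hypothetical stand-in for an LLM completion call (canned output)."""
        return ("There are 3 dancers. Each learns 4 modalities. "
                "3 x 4 = 12. Answer: 12.")

    # The cue "Let's think step by step." nudges the model to emit intermediate
    # reasoning tokens before the answer. Those tokens are produced by the same
    # next-token mechanism as everything else, and once emitted they become
    # part of the context that conditions the final answer.
    prompt = (
        "Q: A studio has 3 dancers and each learns 4 modalities. "
        "How many dancer-modality pairs are there?\n"
        "A: Let's think step by step.\n"
    )
    print(prompt + generate(prompt))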

“A New Line Is a New Mind”: Poetics, Creativity, and the Finite Mind

Williams’s adage, “unless there is a new mind there cannot be a new line”, encapsulates a central idea in creativity: genuine innovation in language requires a transformation in perspective or consciousness. In the context of poetry, a “new line” represents not merely a line break or a fresh sentence, but a novel expression – a departure from cliché and “the old…repeating itself with recurring deadliness,” in Williams’s words. The “new mind” implies an inner change – a shift in perception, values, or attention – that makes original expression possible. Literary analysis of Paterson (the poem from which this quote originates) emphasizes that Williams was advocating for an inward renewal as the precursor to outward innovation in poetic form. In other words, creativity is not just a technical trick or formal novelty; it is rooted in conceptual transformation. The finite human mind, with all its limitations and habitual patterns, must somehow be rebooted or reoriented to break out of repetition and produce a truly new line of poetry.

This poetic principle resonates with how language models and even human improvisers work. An LLM, if naively applied, will tend to produce the statistical average of its training data – which can result in formulaic or stale outputs (a kind of “recurring deadliness” of the old data). To elicit creativity, one must introduce a form of “new mind” into the model’s process. In practical terms, this might be done by adjusting the temperature of the model (injecting randomness so it doesn’t always pick the most expected word) or by prompt engineering that frames the task in an unusual way. These interventions push the model’s next-token choices out of the most predictable rut, effectively shifting its trajectory in latent space to explore less-traveled regions – a rough analog of adopting a new mindset. In creative writing with AI, we often see that novel lines of text emerge when the model is guided to step outside the highest-probability continuations, suggesting that something akin to Williams’s “new mind” can be simulated by altering the model’s parameters or context.
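
Temperature is the simplest of these interventions to show in code. A minimal sketch, assuming imaginary logits (next-token scores invented for the example, not taken from any real model):

    import numpy as np

    rng = np.random.default_rng(42)

    def sample(logits, temperature=1.0):
        """Temperature-scaled sampling: divide logits by T, then softmax-sample.
        Low T concentrates on the likeliest token (the "old line"); higher T
        flattens the distribution and admits less expected words."""
        z = np.asarray(logits, dtype=float) / max(temperature, 1e-8)
        p = np.exp(z - z.max())
        p /= p.sum()
        return int(rng.choice(len(p), p=p))

    logits = [3.0, 2.5, 0.5, 0.1]  # imaginary scores for four candidate tokens
    for T in (0.1, 1.0, 2.0):
        picks = np.bincount([sample(logits, T) for _ in range(1000)], minlength=4)
        print(f"T={T}: {picks / 1000}")
    # At T=0.1 nearly all mass lands on token 0; at T=2.0 rarer tokens appear.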

For human creators, the finite mind is constrained by prior knowledge, biases, and cognitive habits, yet can experience moments of insight or altered perspective that enable breakthroughs. Cognitive scientists have described creativity as a process of making new connections or reframing a problem – essentially, re-vectoring one’s mental space to find a path that was not obvious before. The “conceptual complexity” of a finite mind lies in how it can recombine and reconfigure a finite set of memories and concepts into indefinitely many new ideas. In this sense, both a poet and an AI language model face a comparable challenge: avoid mere regurgitation of the familiar. Williams’s injunction is thus as relevant to a neural network (which can literally “go on repeating itself” by outputting common sequences from training data) as it is to a human poet. Both need mechanisms for injecting a new viewpoint or variation to advance beyond rote continuation.

The Improvising Brain: Reasoning as On-the-Fly Construction

Is human thinking fundamentally an improvisational, on-line process? Cognitive scientist Nick Chater argues yes – in what he calls the “mind is flat” thesis, he suggests that our sense of having hidden mental depths is illusory and that we essentially make up our minds as we go along. According to Chater, the brain “just makes everything up as it goes along”, with no vast reservoir of pre-formed opinions or beliefs underneath. Our beliefs and even perceptions are, in this view, constructed on the fly from whatever information and cues are available in the moment. What we experience as reasoning or introspection might be more like an act of storytelling – a narrative improv by the brain. Notably, Chater emphasizes that there is “nothing going on ‘underneath’” our conscious reasoning; we confabulate explanations and decisions in real time. This perspective aligns with the observation that a person’s stated beliefs or decisions can be highly context-dependent and subject to cognitive biases, rather than fixed outcomes of a stable inner essence.

Interestingly, this view of the human mind dovetails with how LLMs generate responses. An LLM has no hidden “self” or agenda beyond the patterns encoded in its weights. When asked a question or to produce text, it does not consult a coherent inner model of the world in the way a symbolic AI might; instead, it improvises an answer based on statistical associations, much as Chater suggests humans do. In both cases, there is reliance on a “tradition” or memory of past experiences – the LLM’s training data, the human’s accumulated knowledge – but no guarantee of a consistent, deep doctrine underlying each response. Just as the brain uses its past perceptions and decisions as a kind of evolving common law to inform the present, the LLM uses patterns from its training corpus to predict the next word in context. Neither has a single monolithic center of reflective self-consistency; both are improvisers, constructing what comes next from available pieces.

Human metacognition – our capacity to think about our own thinking – might give us an edge in steering our improvisations, but even metacognition could be seen as another improvised narrative. When we reflect on why we chose a certain line of reasoning, we often weave a plausible story that may or may not truly represent the unconscious processes involved. Likewise, an LLM can be prompted to “explain its reasoning” and it will produce a coherent-sounding explanation, yet this explanation is just another output, not an actual window into the model’s hidden state or a reasoning chain that preceded the answer. In effect, the LLM is performing simulated metacognition: it knows how explanations are typically worded and uses that knowledge to generate a plausible rationale, all as part of the same token-by-token improvisation. This parallel underscores Chater’s point by inversion – if even a mindless algorithm can produce convincing rationales by linguistic pattern-matching, perhaps our own brains’ rationales are likewise post hoc improvisations based on narrative patterns. The “flat mind” hypothesis thus provides a provocative cognitive backdrop for understanding LLMs: both suggest that reasoning is less about consulting a depth of thought, and more about navigating surface-level sequences (of tokens or thoughts) guided by past patterns.

William Forsythe’s Choreographic Thinking: Algorithms of Movement

Moving from language to movement, we find another rich analogy in the domain of dance. Choreographer William Forsythe has pioneered an approach to dance that treats choreography as a form of thought – “choreographic thinking” – and uses improvisational structures to generate novel movements. Forsythe’s work often employs what he calls “modalities” – roughly 30 basic movement concepts like shearing, collapsing, folding, or matching – as building blocks for improvisation. Dancers in his company practice improvising within each modality until it becomes a “reminder tool” that can trigger further variations. By recombining and switching between these modalities, dancers can explore a vast space of possible movements. This technique “opens up avenues that allow you to expand your ideas of what you thought your body could do,” essentially rediscovering and rethinking movement by taking habitual patterns down a different path. Here again we see the interplay of constraint and freedom: each modality provides a specific conceptual vector for movement (a constraint or direction), but within and between these vectors the dancer improvises freely, creating a unique sequence of motions.
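
One can caricature this structure in a few lines of code. The transition rule below (no modality may immediately repeat) is invented for illustration and is far cruder than Forsythe’s actual practice; the point is only that a small vocabulary of constraints plus sequential choice yields an open-ended space of sequences.

    import random

    MODALITIES = ["shearing", "collapsing", "folding", "matching"]

    def improvise(length=8, seed=None):
        """Recombine modalities sequentially; each choice is constrained by
        the previous one (here: no immediate repetition)."""
        rng = random.Random(seed)
        sequence = [rng.choice(MODALITIES)]
        for _ in range(length - 1):
            options = [m for m in MODALITIES if m != sequence[-1]]
            sequence.append(rng.choice(options))
        return sequence

    print(" -> ".join(improvise(seed=1)))
    print(" -> ".join(improvise(seed=2)))   # same rules, a different path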

Forsythe himself has explicitly drawn parallels between choreography and algorithmic or linguistic processes. He describes some of his improvisational instructions as “little language machines” – algorithms in words that can generate movement sequences. For example, a simple verbal proposition like “drop a curve” can serve as a generative rule: a dancer may habitually curve their body in a familiar way, but by consciously applying the instruction to drop that curve (perhaps initiating it from a different point or letting gravity affect it differently), the dancer is forced to “reconfigure the habit, move through contrast”. The result is a novel movement that deviates from their usual repertoire. In philosophical terms, Forsythe’s approach creates a contrast that Whitehead might say is the seed of creativity – the new emerges by diverging from the old. Crucially, these algorithms or propositions are not rigid formulas dictating a fixed outcome; rather, they are enabling constraints or starting points that “do not insist on a single path to form-of-thought”. In Forsythe’s words, the many incarnations of choreography form “a perfect ecology of idea-logics” that resist any single linear definition. Each dancer’s interpretation can be different, and even the same dancer’s execution will vary each time, highlighting the contingency and improvisation inherent in the system.

The concept of a choreographic object further illuminates Forsythe’s view of thought in dance. He asks whether choreography can generate “autonomous expressions of its principles… without the body” – akin to a musical score that exists as an idea independent of its performance. In his essay Choreographic Objects, Forsythe suggests that choreography is a model of potential transitions from one state to another in any imaginable space. This is strikingly similar to the way an LLM’s knowledge can be thought of as a map of potential transitions from one phrase or idea to the next. A choreographic score or object encodes a space of possibilities for movement, just as an embedding space encodes possibilities for linguistic transitions. Both are models of potential: one in physical space, one in semantic space. And to realize that potential – to get an actual dance or an actual coherent paragraph – one must traverse the space step by step, choosing one of many possible routes. Forsythe notes that choreography often creates an “environment of grammatical rule governed by exception”, meaning there are guidelines, but they are bent or broken to allow unique outcomes. This notion closely mirrors how language works (with grammar providing structure but poetry or colloquial usage providing exceptions) and how LLMs operate (statistical rules of syntax/semantics that can still produce surprising, novel combinations).

Vectorial Improvisation: Bridging Language and Dance

Bringing these threads together, we can see a unifying principle of vectorial improvisation at work. Whether in a neural network’s latent space, a poet’s mental landscape, or a dancer’s physical movement space, there is a notion of a high-dimensional space of possibilities and a path carved through it by sequential choices. Each choice – a word selected, a thought framed, a movement executed – is constrained by the current state (the context so far, the body’s position, the rules in play) but also shapes the subsequent possibilities. The process is linear in execution (one step after another in time) but vectorial in design (each step is a direction taken in a larger space). Crucially, improvisation means that this path is not fully predetermined; it is discovered in the act of traversing it.

In LLMs, this is evident in how different runs (or different random seeds) can produce different continuations from the same starting prompt. The model’s next-token decision is typically probabilistic, allowing for multiple plausible “next lines.” Similarly, in improvisational theater or music, the performer could take the performance in many directions at any given moment – they choose one, and that choice then frames the next moment, and so on. Forsythe’s dancer operating under the “drop a curve” instruction could execute it in innumerable ways; each performance of that idea creates a specific sequence (a particular instantiation of the vector path through movement space). The next time, the journey might differ. Both AI text generation and human improvisation thus involve real-time navigation of possibility space.
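
This divergence can be seen in miniature by sampling a fixed next-step distribution under different seeds. The transition table below is a toy stand-in for a model’s learned probabilities, with invented words and values:

    import numpy as np

    # Toy next-word table: for each word, its possible continuations and
    # their probabilities.
    NEXT = {
        "the":    (["dancer", "line"],  [0.6, 0.4]),
        "dancer": (["steps", "folds"],  [0.6, 0.4]),
        "line":   (["breaks", "folds"], [0.5, 0.5]),
        "steps":  (["the"],             [1.0]),
        "folds":  (["the"],             [1.0]),
        "breaks": (["the"],             [1.0]),
    }

    def continue_from(prompt, seed, steps=6):
        """Same prompt, same probabilities; the path depends on the seed."""
        rng = np.random.default_rng(seed)
        out = [prompt]
        for _ in range(steps):
            words, probs = NEXT[out[-1]]
            out.append(str(rng.choice(words, p=probs)))
        return " ".join(out)

    print(continue_from("the", seed=0))
    print(continue_from("the", seed=1))  # same start; the path will generally differ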

Furthermore, both rely on internalized patterns but strive for novelty. An LLM draws on patterns in its training data (an internalized archive of human language), and a dancer draws on patterns ingrained through training and experience. Without these learned patterns, neither could produce coherent or meaningful sequences. Yet if they stick only to the most familiar patterns, the result will be rote and unremarkable. That is why Williams’s call for a “new mind” is so important: it speaks to breaking from the most habitual path. In an LLM, introducing randomness or an unusual prompt can jolt the system out of the highest-probability continuation (the “old line”) and into something more unusual (a “new line”). In dance, Forsythe’s modalities serve a similar role – they inject new stimuli or constraints to disrupt habitual movement, forcing the dancer’s mind-body system to adapt and invent on the fly.

There is also an interesting feedback loop between sequential structure and emergent meaning in all these domains. In poetry, a new line can cause readers to reinterpret what came before, effectively changing the “mind” of the poem itself in retrospect. In a conversation with an AI model, a surprising response can recontextualize the dialogue and lead the human interlocutor to new thoughts. In a dance improvisation, one unexpected move can suddenly give the performance a new emotional tone or direction, which the dancer and audience then follow. In this way, each new step is not only a product of a state, but also a producer of a new state – a new mind, in the metaphorical sense. The process is reflexive: the line changes the mind as much as the mind produces the line. We can draw a parallel here to metacognition: when an LLM generates a chain-of-thought, those tokens become part of its context for the next token, effectively allowing it to “reflect” on the problem through its own output. Likewise, a dancer might see their own movement or feel a new balance and use that feedback to adjust the next movement. The system (be it neural network or human) is constantly incorporating the trajectory-so-far into the decision about where to go next.

Conclusion

From the foregoing analysis, a compelling picture emerges: intelligence and creativity – whether in silico or in flesh – can be viewed as an improvisational traversal of a structured space of possibilities. Large language models exemplify this by converting language into vectors and walking through latent space one token at a time, guided by learned statistical rules yet capable of surprising deviations. Human poets and thinkers, with their finite but plastic minds, achieve creativity by shifting perspectives – finding a “new mind” to spawn a “new line”. And in the dance studio, William Forsythe’s choreographic techniques show how formal constraints (algorithms, modalities) combined with real-time choice create an explosion of novel movement, treating the body as an instrument of thought. In all cases, the linear sequence of actions belies a deeper vectorial logic: each step is a point in a high-dimensional landscape of potential, and the art lies in choosing an interesting path through that landscape.

This convergence of ideas also highlights a profound aspect of cognition: reasoning and creativity may fundamentally be improvised. The “mind is flat” perspective reminds us that even our sense of deliberate, logical thought might be a kind of narrative generated moment-to-moment. LLMs, lacking any inner narrative, nonetheless can simulate reasoning by leveraging surface patterns of language, calling into question how much of human reasoning itself is surface-level pattern completion. Yet, far from diminishing the wonder of human thought, this realization connects it with artistic creation and advanced AI in an illuminating way. It suggests that what we call mind – whether human or artificial – lives in the unfolding of patterns, not in a static repository of ideas. The mind, like a dancer, might be “persistently reading every signal from its environment” and continuously reinventing itself through action.

In practical terms, this interdisciplinary insight encourages us to approach AI and human cognition with a more fluid notion of intelligence. We can appreciate a poem, a conversation with ChatGPT, or a contemporary dance performance through a common frame: as sequences of moves in a game of possibilities, where each new line creates a new mind-state and expands the space of what can come next. By understanding the vectorial and improvisational nature of these processes, we gain a richer appreciation for how novelty arises. Ultimately, the poet, the algorithm, and the choreographer all teach us that the frontier of the possible is discovered one step at a time, in the act of moving forward – and that truly new lines of thought demand the courage (or randomness) to break from the old and venture into uncharted space.

Sources

• Ananthaswamy, A. (2025). “To Make Language Models Work Better, Researchers Sidestep Language.” Quanta Magazine.
• Pavlus, J. (2024). “How ‘Embeddings’ Encode What Words Mean — Sort Of.” Quanta Magazine.
• Williams, W. C. (1948). Paterson, Book 2 (source of the quote “unless there is a new mind there cannot be a new line”); analysis at PoetryVerse.
• Guiderdoni, T. (2020). “Forsythe’s Improvisation Technique.” Art Factory International.
• Forsythe, W. (2008). “Choreographic Objects” (essay excerpt), in Jordyn Taylor, “Design & Dance,” Medium.
• Manning, E. (2009). “Propositions for the Verge: William Forsythe’s Choreographic Objects.” Inflexions 2.
• McGinn, B. (2018). Review of The Mind Is Flat by Nick Chater. The Guardian.
• Additional context from Nick Chater’s work and other materials cited in the text above.