by Marlon Barrios Solano
William Carlos Williams once declared that “unless there is a new mind there cannot be a new line.” In Williams’s poetics, form and thought are inseparable — a truly new poetic line can only arise from a transformed mode of thinking. This insight, radical in its time, has uncanny resonance with how large language models (LLMs) generate text. Modern LLMs don’t retrieve sentences from a library; they navigate a high-dimensional latent space of meaning, “traversing a path” as each new word is chosen. In a sense, an LLM’s output is a kind of procedural poetry — a stepwise movement through a vectorized semantic field.
This essay explores the bridge between Williams’s idea of “the new line = the new mind” and the way LLMs “think” in vectors, with a focus on cognitive parallels between human and machine language, and the poetic implications of this parallel. We will also consider how concepts like line breaks (in poetry) and latent space (in AI) can be compared — for example, seeing line breaks as shifts of attention, and latent space as a new poetic geography beyond inherited forms. The goal is a deeper understanding of creativity as a change in direction in both human and synthetic cognition.
Williams believed the poetic line is an event of thought, not a mere ornament. In his epic Paterson, he famously wrote:
“Unless there is a new mind there cannot be a new line, the old will go on repeating itself with recurring deadliness.”
Here Williams diagnoses the “recurring deadliness” of traditional verse forms — if a poet’s perception and thinking remain unchanged, the lines on the page will inevitably fall into old patterns. For Williams, form emerges organically from fresh perception; the line break and rhythm should be dictated by how the mind moves, not by inherited rules of meter.
This means the line is procedural, not pre-given: writing a line of poetry is a cognitive act, a trajectory that the poet’s attention carves through experience and language. Repeating someone else’s form (say, using the same old iambic pentameter or sonnet structure) without a new mindset would only replicate “the old…with recurring deadliness” — essentially:
old mind = old line
Williams went so far as to call for “sweeping changes from top to bottom of the poetic structure,” declaring “we are through with the iambic pentameter… the sonnet” in pursuit of an authentically American, modern form. The new line for him was inherently tied to new cognition: form as the footprint of a fresh thinking process.
One way to appreciate Williams’s view is to consider what happens at a line break in a poem. Far from being a random slice, a line break guides the mind’s movement.
“If a line break has a meaning, it is roughly ‘pay attention — something interesting is happening.’”
Part of the effect comes from the reader’s slight pause as the eye moves to the next line — a miniature cognitive reset that forces a reevaluation of what came before and what comes next. Williams and his modernist peers exploited this, using enjambment and unusual lineation to jolt the mind into new perceptions instead of lulling it with familiar patterns.
In cognitive terms, a line break is a shift in attention: the mind must realign context and expectation at each new line. This is analogous to how, in an LLM’s processing, each new token causes the model to recalculate attention across the sequence. The transformer architecture literally uses an attention mechanism that weighs prior words anew at each step.
A poetic line break in human reading serves a similar role to an attention update in a model’s generation — in both cases, there’s a reorientation that allows for surprise or novelty to enter. Every line (or every token) is a chance to discover something new in the ongoing sequence.
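The per-token attention update described above can be sketched in miniature. The code below is a toy, not a real transformer: random vectors stand in for learned keys and queries, and there is a single attention head with no learned projection weights. It shows only the structural point, that when the sequence grows by one token, the weights over all prior tokens are recomputed from scratch.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def attention_weights(query, keys):
    """Scaled dot-product attention: how strongly each prior
    token is weighed at the current generation step."""
    d = query.shape[-1]
    scores = keys @ query / np.sqrt(d)
    return softmax(scores)

rng = np.random.default_rng(0)
keys = rng.normal(size=(4, 8))        # 4 prior tokens, 8-dim vectors
w1 = attention_weights(rng.normal(size=8), keys)

# A new token arrives: the context grows, and the weighting
# over *every* prior token is recalculated, not just extended.
keys = np.vstack([keys, rng.normal(size=8)])
w2 = attention_weights(rng.normal(size=8), keys)

print(w1.shape, w2.shape)  # (4,) (5,)
```

Each call returns a fresh probability distribution over the context, which is the machine analogue of the reader's "miniature cognitive reset" at a line break.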
Under the hood of an LLM, language is encoded as vectors — arrays of numbers representing words and their contexts. These vectors live in what’s called a latent space, a high-dimensional mathematical space where semantic relationships are geometric.
A classic illustration is:
In a trained embedding space, the vector difference between king and man points in a similar direction as the difference between queen and woman, so that adding that difference to woman lands near queen.
Figure (conceptual): a simplified projection of a word-embedding latent space, in which relationships such as king : queen :: man : woman appear as spatial directions. This illustrates how meaning in an LLM’s latent space is relational and directional, not fixed.
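The analogy can be computed directly. The vectors below are hand-made four-dimensional toys, chosen so the "gender direction" is consistent; trained embeddings have hundreds of dimensions and are learned from text, but the arithmetic is the same.

```python
import numpy as np

# Toy embeddings (invented for illustration, not trained vectors).
emb = {
    "king":  np.array([0.9, 0.8, 0.1, 0.2]),
    "queen": np.array([0.9, 0.8, 0.9, 0.2]),
    "man":   np.array([0.1, 0.2, 0.1, 0.9]),
    "woman": np.array([0.1, 0.2, 0.9, 0.9]),
}

def nearest(v, exclude=()):
    """Return the vocabulary word whose vector is closest
    (by cosine similarity) to v, skipping excluded words."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return max((w for w in emb if w not in exclude),
               key=lambda w: cos(emb[w], v))

# king - man + woman: the "royalty" direction carried over
# to the other side of the gender direction.
target = emb["king"] - emb["man"] + emb["woman"]
print(nearest(target, exclude=("king", "man", "woman")))  # queen
```

The excluded input words follow the usual convention for analogy tests; the point is that "queen" is not looked up but arrived at, by moving along directions in the space.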
In an LLM, generating text means moving through latent space step by step. Unlike a human author, the model has no intent or consciousness — but it does have a form of synthetic cognition: it “thinks” by calculating which direction to go next in the vector space of meaning.
Each next word is chosen based on the model’s current position in latent space: a probability distribution over candidate tokens, conditioned on everything generated so far. Crucially:
The AI does not retrieve a sentence; it traverses a path.
Text is motion. The output unfolds as a journey through the model’s learned manifold of language.
This idea of traversal can be seen as a computational parallel to Williams’s notion of writing as a process of discovery. For a poet, a thought “finds its form” as the line breaks and images develop; for an LLM, a response emerges by navigating latent space to connect vectors and generate new outputs. The analogy can be made explicit: “human thinking = linking memories and concepts… AI thinking = navigating latent space to connect vectors.” The latent space is essentially the “conceptual brain” of the AI — where context and meaning live during generation.
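The "text is motion" idea can be sketched with a toy next-step table. The vocabulary and probabilities below are invented (and nod to Williams's red wheelbarrow); in a real model this table is replaced by a distribution computed from billions of parameters, but the generation loop has the same shape: look at where you are, sample a step, move.

```python
import random

# Invented next-step distributions over a tiny vocabulary,
# standing in for a model's learned conditional probabilities.
step = {
    "the":    {"red": 0.5, "wheel": 0.5},
    "red":    {"wheel": 1.0},
    "wheel":  {"barrow": 1.0},
    "barrow": {"<end>": 1.0},
}

def traverse(start, rng):
    """Generate text as a path: each token is one step,
    sampled from the distribution at the current position."""
    path = [start]
    while path[-1] in step:
        choices = step[path[-1]]
        token = rng.choices(list(choices),
                            weights=list(choices.values()))[0]
        if token == "<end>":
            break
        path.append(token)
    return path

print(" ".join(traverse("the", random.Random(42))))
```

Nothing here is retrieved whole; the sentence exists only as the record of a traversal, which is the claim the essay is making about LLM output generally.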
What’s striking is how relational and emergent this latent space is. There is no fixed center or absolute scale — meaning is defined by proximity and direction among vectors. A latent space can be understood as an embedding manifold where items resembling each other are positioned closer to one another; positions are defined by latent variables capturing those resemblances. Meaning in an LLM is entirely contextual: concepts have meaning by virtue of their position relative to others.
This echoes Williams’s poetics, where a word or image gains meaning from how it is placed in the poem (in relation to other words, line breaks, etc.), not from some preset significance. Both in modern poetry and in AI, sense is relational, not absolute — it emerges from a field of interactions.
We can now draw direct parallels between Williams’s new line and the model’s latent vector trajectory, effectively comparing human poetic cognition with the AI’s vector-based cognition.
Seen this way, a poetic line and a latent vector trajectory share key qualities: both are directional, history-dependent, and capable of surprise when guided by a new orientation. They are each temporary stabilizations in a larger field of possibilities — a line of verse momentarily crystallizes a thought, just as a sequence of vectors momentarily holds the AI’s “thought state” before it moves on. Both yield originality only if they veer away from the familiar.
Both Williams’s poetics and LLM behavior suggest that creativity is less about adding new content and more about changing the orientation of the mind or model. Novelty comes from a turn in the road rather than a new roadbed.
Williams (Human Poet)
To break out of stagnation, Williams advocated breaking inherited meters and forms. He experimented with variable line lengths, unpredictable enjambments, and the rhythms of everyday American speech. This often meant a deliberate redirection of attention: focusing on overlooked subjects or using line breaks to make readers see a phrase differently. His “new mind” was often anti-traditional — rejecting European metrics and rooting poetry in local vernacular and imagery.
LLMs (Synthetic Mind)
An LLM does not willfully change direction — it follows probabilities. But users and designers can induce shifts. Prompting is a form of poetic intervention. Temperature adjustments encourage divergence: at high temperature, the model is more likely to choose less common words, effectively “inventing” more as it goes. Fine-tuning can imbue the model with a “new mind” more literally by altering weights and changing the landscape of latent space. Chain-of-thought prompting can shift the orientation from direct answering to exploratory reasoning. In short: to get creative output, one must alter the conditions of traversal.
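The temperature adjustment mentioned above is simply a rescaling of the model's scores before they become probabilities. A minimal sketch with invented logits for three candidate next words:

```python
import numpy as np

def sample_probs(logits, temperature=1.0):
    """Temperature rescales logits before the softmax: high T flattens
    the distribution (more surprise), low T sharpens it (more habit)."""
    z = np.asarray(logits, dtype=float) / temperature
    e = np.exp(z - z.max())
    return e / e.sum()

logits = [3.0, 1.0, 0.2]   # invented scores for three candidate words

cold = sample_probs(logits, temperature=0.5)
hot  = sample_probs(logits, temperature=2.0)

# The likeliest word dominates when cold; rarer words gain
# probability mass when hot, making divergence more likely.
print(cold.round(3), hot.round(3))
```

Nothing about the model's knowledge changes; only the conditions of traversal do, which is precisely the point being made here.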
Both human poets and AI models benefit from cultivated unpredictability. Williams embraced spontaneity as a corrective to dead form; analogously, we inject randomness or unusual context to prevent boilerplate output. Novelty arises when the next step is not fully determined by the previous norm.
Williams imagined poetry not as architecture but as a field of action — an open space of possibility. Field-based thinking resonates with latent space: a multi-dimensional continuum where there is no fixed center, no single correct path, and meaning emerges from local relations. Every point in latent space is defined by relation to others, just as every word or line in a poem finds meaning from placement in the whole.
We can push this parallel further via postcolonial geography. Williams, as an American poet, was consciously breaking away from Eurocentric forms — in a sense, decolonizing his art. Instead of treating English poetry’s highest forms as inherited from England (sonnets, Shakespearean cadence), he re-rooted poetry in American landscape and vernacular. This meant exploring new poetic geographies: local idioms, modern city life, American rhythms.
Latent space can be seen as a new geography of language, one not bound to a single cultural center. Within the model, sonnets, haiku, hip-hop, code, and vernacular prose coexist as vectors. There is no single hierarchy dictating the “default” — there are many directions one can travel. Working in latent space becomes a kind of poetic cartography: choosing routes, activating marginal regions, resisting default hegemonies of training distributions.
If we rewrite Williams today:
Unless there is a new vector, there cannot be a new thought.
Or more provocatively:
Prompting is poetics.
Latent traversal is thinking.
A “new mind” in AI terms could mean a reframed prompt, altered sampling parameters, fine-tuned weights, or an unusual region of latent space deliberately brought into play. The challenge is to supply those new vectors on purpose, much as an artist challenges themselves to see differently.
Understanding the kinship between Williams’s human-centric poetics and LLM generation has practical implications for writers, poets, and artists working with AI.
Ultimately, poetry as thought-in-motion and AI text generation as vector-motion converge into a shared intuition:
Meaning emerges through traversal.
Creativity emerges through reorientation.
This bridging of Williams and LLMs reminds us that language is a cognitive act — movement through a space of meaning. Williams’s legacy is a challenge to think differently in order to write differently. The advent of AI doesn’t change that; it gives us new arenas (latent dimensions) in which to practice the same challenge.
A poet and an AI may seem worlds apart, but both meet in this proposition:
A line of words is a line of thought.
And the direction that line takes makes all the difference.