Note: Texts in bold Gothic font in the original Korean are generated by GPT.


Report “Modern Love,” published by cybersecurity company McAfee in February 2023. ©Forking Room

“You are my passionate sympathy. My affectionate admiration clings to your passionate desire. I want to hold your heart and soul. You are my wistful sympathy, my tender liking.” (Christopher Strachey’s love letter algorithm, 1953)
Christopher Strachey, “The ‘Thinking’ Machine,” Encounter (October 1954): 26. All English quotes cited are translated by Gyesung Lee.


“From the moment I first saw you, I knew my heart belonged to you forever. I’ve never seen anyone as beautiful in both face and soul. Your warm and loving heart draws me deeper. You are my heart, my soul, my everything. I can’t imagine life without you. I’d do anything to make you happy. I’ll love you forever, just like now.”(ChatGPT, 2023)
McAfee, McAfee’s Modern Love Research Report (February 2023), https://www.mcafee.com/content/dam/consumer/en-us/docs/reports/rp-mcafee-modernlove-report.pdf


1)
Valentine’s Day 2023, when ChatGPT had already achieved widespread adoption, may have been the first commemorative holiday when AI-generated text became a real issue on a collective level. According to a survey published by McAfee in February of that year:

“More than one in four people (26%) said they planned to use AI to write a letter to a loved one, while 44% said they would not, and 31% responded that they didn’t know how—indicating that far more people still preferred to write letters themselves. (…) The reasons for wanting to use AI to write love letters varied. 27% believed the response to the letter would be better, 21% said they didn’t know what to write without AI, and another 21% said they didn’t have time to come up with something themselves.”
(Same source as above)

Given that the first computer-generated text was likely a love letter, this survey offers a useful indicator of where we currently stand regarding generative writing. In 1953, British computer scientist Christopher Strachey developed a love letter algorithm using the Manchester Mark I computer. His algorithm arranged words based on specific rules, randomly selecting and combining words from predetermined categories like adjectives, nouns, and adverbs to generate sentences that resembled conventional love letters. Every time the algorithm ran, it produced a new letter with a novel combination of words.
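
As a rough illustration of this mechanism, the sketch below reproduces the category-and-template idea in a few lines of Python. It is not Strachey’s original program: the word lists and sentence templates are invented for the example, and his actual vocabulary was drawn from a thesaurus.

```python
import random

# Hypothetical word lists standing in for Strachey's categories;
# the words and templates here are invented for illustration.
ADJECTIVES = ["passionate", "wistful", "tender", "affectionate", "eager"]
NOUNS = ["sympathy", "desire", "admiration", "heart", "longing"]
VERBS = ["clings to", "holds", "treasures", "yearns for"]
ADVERBS = ["fondly", "breathlessly", "sweetly"]

def sentence():
    """Fill one of two simple templates with randomly chosen words."""
    if random.random() < 0.5:
        return (f"My {random.choice(ADJECTIVES)} {random.choice(NOUNS)} "
                f"{random.choice(VERBS)} your {random.choice(ADJECTIVES)} "
                f"{random.choice(NOUNS)}.")
    return f"You are my {random.choice(ADJECTIVES)} {random.choice(NOUNS)}."

def love_letter(n_sentences=5):
    """Assemble a short letter; every run yields a new combination."""
    body = " ".join(sentence() for _ in range(n_sentences))
    return f"Darling Sweetheart,\n{body}\nYours {random.choice(ADVERBS)}, M.U.C."

print(love_letter())
```

Everything the letter can say is already latent in the word lists and templates; the only thing that changes from run to run is the roll of the dice.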

Perhaps these synthetic love letters are the canaries in the coal mine. The fact that people willingly accept algorithm-generated love letters may imply an openness to other forms of synthetic writing. We cannot predict exactly how our relationship with AI will evolve, but the journey from Strachey’s love letter algorithm in 1953 to Valentine’s Day 2023 is certainly an intriguing one.


The Manchester Mark I computer, photographed with a wide-angle lens. ©Forking Room

 A love letter generated by MUC (Manchester University Computer, nickname for the Mark I). ©Forking Room


2)
We were unsure where to begin and found ourselves loitering at the fringes of the concept of generativity. Observing the diminishing hallucinations of recent large language models, and the explosive rise in their steroidal use for productivity, we felt a pang of regret. Perhaps that regret stemmed from a desire for these tools to remain in the state of the “drunken poet.” That is, not excessively analytical or inferential, but rather remaining as partners in hallucination that guide us through errant, whimsical maps.

It was during this time that we came across Astronauts of Inner-Space: An International Collection of Avant-Garde Activity, a compilation of 46 manifestos, letters, essays, poems, and film scripts. It captivated us from the start; the fact that it was published in 1966, just before the explosion of dreams about the space age, seemed especially significant. Every era has had its own paradigm of Big Science. If today’s is large-scale generative AI, then the 1960s were unequivocally about aerospace. The book’s title itself, redirecting the era’s desire to break free of the Earth and expand into outer space toward an inward direction, was profoundly compelling. Above all, in this age of Large and Extra-Large Models with ever-expanding weight scales, this title felt like an invitation to explore some other kind of territory, adrift.

Among the texts, Margaret Masterman’s essay titled The Use of Computers to Make Semantic Toy Models of Language stood out to us as prescient in its discussion of what we now call large language models (LLMs). Her description of linguistic “toy models” was particularly arresting:

“These models are toys because they are small, easy to make (or at least one thinks so when first beginning), and easy to manipulate. And they are models because they are designed to discriminate, exaggerate, and mass-produce certain features of language which are not usually discriminated by human beings. (…) The real problem is not to teach the computer new words one by one, so that it can learn how to combine them, but to teach the computer to subdue the enormous combinatorial space of language and to form approximate conceptual and semantic associations. In other words, the computer doesn’t behave like a child—it behaves like a drunken poet.”

—Margaret Masterman, The Use of Computers to Make Semantic Toy Models of Language, in Astronauts of Inner-Space: An International Collection of Avant-Garde Activity (San Francisco: Stolen Paper Review Editions, 1966), pp. 36–37.

This was a fascinating and compelling insight, evoking the transformer algorithm that underlies today’s large language models. We wanted to begin from her perspective.


Cover of Astronauts of Inner-Space. ©Forking Room
Table of contents of Astronauts of Inner-Space, which includes contributions by Marshall McLuhan, Max Bense, and William S. Burroughs. ©Forking Room


3)
The metaphor of the drunken poet offers an intriguing framework for understanding how computers process natural language. A drunken poet isn’t bound by grammar or syntactic conventions—they discover new ways of expression. By connecting words in unexpected ways, they can convey concepts or emotions more vividly. The drunken poet’s method of detecting unforeseen linguistic patterns and forming key semantic associations can, at a more abstract level, be emulated by language models analyzing language to identify similar patterns.

There’s a Korean proverb that says “Drunkards and children don’t lie.” But this doesn’t mean drunkards always tell the truth—it more likely means that a drunk person interprets reality more immediately than someone sober. Likewise, it would be more accurate to say that a language model doesn’t speak the truth per se, but rather offers a resemblance of truth.

A language model produces output based on an abstracted dataset that reformats actual experience into something the model can interpret. Here, the drunken poet symbolizes a particular state of abstraction. Large language models, in such a state, don’t produce one “truthful” output but rather capture resemblances of truth—series of potential truths that are extracted from and inspired by the dataset. Put differently, the output of an LLM contains not absolutes but potentials.



4)
In The Use of Computers to Make Semantic Toy Models of Language, alongside references to operational models represented by the image of “toy models,” Masterman also offers a compelling reflection on the poetic—represented by the image of the “drunken poet.”

“By reading the outputs that computers generate (and, where necessary, analyzing them again using the computer), we can finally study the complexity of poetic patterns that everyone intuitively feels. And a deeper understanding of poetic patterns will eventually enhance mastery and comprehension of poetry itself.”—Same source, p. 37

The reason Masterman emphasizes the poetic may be that a purely analytical or syntactic approach to modeling human language often fails to capture those elusive aspects of language—the subtle complexity of poetic patterning that is intuitively felt. In this light, her endeavor to construct toy models of language can be interpreted as an attempt to grasp the fluid and ambiguous dimensions of language through a kind of poetic interpolation. This is also a way of grappling with the complexity of cognition. Applying poetic elements to the modeling of language ultimately points to an effort to better understand the randomness embedded in human cognitive faculties.


 
5)
There have always been models in the realm of Big Science. In the 1960s, the Big Science of aerospace gave rise to numerous model rockets and the rocket kids who experimented with them. The modeling of technology in this way is a phenomenon commonly found in the history of technology. Radio, television, and computers—all dominant technologies of their respective eras—produced a wealth of models and kits. These models were often scaled-down versions of Big Science itself and also served as systems that allowed users to reconstruct the black-box nature of advanced technology.

From this “tinkering” perspective, the model is something closer to a “gray box” than a black box—it is an invitation to go beyond mere miniaturization and toward meta-level appropriation. Tinkering refers to the act of combining and experimenting with technology in an ad hoc manner to produce or discover something unexpected. Within DIY tech culture, it is regarded as a crucial mode of engagement. In a broader sense, it refers to a self-directed or self-defined way of appropriating and using a given technology rather than merely consuming it as-is.

In this way, the operational logic of models in Big Science led us to draw a connection between Masterman’s idea of toy models of language and today’s large-scale generative models.


Magazine advertisement from 1970 by Estes, the first company to mass-produce model rockets. ©Forking Room


6)

Another cliché often associated with technology is the concept of the black box. But when approached as a model of thought, the black box invites new interpretations. We usually imagine a black box as a sealed container whose internal workings are inaccessible. This conception can be applied not only to technological artifacts like algorithms but also to the human mind or consciousness.

However, in science, computing, and engineering, the black box refers to a device or system whose outputs can be observed without knowing its internal operations. In other words, it is predicated on a hypothesis about the causal relationship between input and output: even without understanding its internal mechanism, as long as one can observe its inputs and outputs, the system can be treated as a black box.

This interpretation encourages us to think of the black box not merely as a closed-off object but as a device for hypothesizing causal relations—thereby making it a model for perception. From this viewpoint, the black box becomes less a sealed chamber and more an alternative kind of model—one that provokes recognition.


Black box, input and output, and the observer and the actor. ©Forking Room


7)
The Turing Test, often misunderstood, can also be approached as a provocative model. As is widely known, if a human evaluator cannot reliably distinguish between the responses of a human and a machine, the machine is considered to have passed the test. In other words, the Turing Test is commonly interpreted as a benchmark to determine whether a machine has achieved a level of linguistic performance indistinguishable from a human. But when we view the Turing Test as a model of cognition, its implications become more layered and rich.

What’s most compelling about the Turing Test is its proposal to approach the question of “what it means to think” from a model-based perspective, rather than being trapped in essentialist or ontological questions. That is, even if we cannot define what “thinking” is in an essential sense, we can still say, “Let’s consider it thinking if it passes this test.” This very stance implies a kind of semantic modeling.

The Turing Test does not ask whether AI possesses some specific internal trait worthy of being called “thought.” Instead, it adopts a pragmatic approach that bypasses essentialist debates and focuses on evaluating AI’s output within interactive, linguistic exchange with humans. In this sense, the Turing Test not only carries an inherent modeling nature but also foregrounds the communicative dimension—an observable and measurable aspect—thus modeling the idea of “thought” through interaction.

In this light, the Turing Test can be said to propose a way of understanding AI through a model of cognition—one that shifts the focus from “thinking” to “language” as a communicative tool. When we make this shift, the large language model can be seen as both an agent that participates in communication and a machine that proliferates another kind of pragmatic model.

 

8)
In the domain of Big Science, AI is both a mirror and a microscope. As a mirror, AI reflects our collective knowledge, biases, and desires through its vast datasets. The generated outputs often feel strangely familiar because they reflect human collective thought. As a microscope, AI magnifies and clarifies complex patterns in data that human cognition might otherwise overlook. Through this dual lens, AI fosters a dynamic interplay between internal reflection and external inquiry.

AI occupies the interstitial space between human cognition and digital capability, bridging the gap between mirror and microscope. As a mirror, it allows for the creation of art and literature that generate broad empathy through universal emotions and situations. As a microscope, it propels us into the realm of hyper-personalization, crafting narratives and experiences tailored to individual psyches. This dichotomy simultaneously opens up the possibility for universal AI art and presents the risk of reducing personal creativity to probabilistic algorithms.

If we pursue universal AI art, we inevitably imagine a future where art transcends boundaries to become a universal language. While this vision is alluring, it comes with significant challenges. Will subtle nuances, regional specificity, or stories tied to particular places and histories be lost in translation?

If art transforms into a universal language through AI, then who will be the author in this new epistemic system? As AI continuously absorbs feedback, criticism, and opinions from a global audience, the traditional notion of authorship may become increasingly ambiguous. The concept of art as a fixed creation may shift toward one of collective and continuous beta testing.

When we probe more deeply into this idea of perpetual beta testing, it begins to resemble ancient oral traditions. Just as stories, myths, and songs would transform with each act of transmission, this new era of art will exist in a constant state of retelling. Every interaction contains the potential for transformation and raises questions about authenticity and the sanctity of the “original.”
 

 
9)
During the 1960s, artistic works began to emerge that actively embraced a systemic sensibility inspired largely by cybernetics. Terry Riley’s In C, composed in 1964, and B.S. Johnson’s novel The Unfortunates, published in 1969, both offered systems that allowed performers and readers to restructure the work in countless ways. Though they belonged to different media, these two works shared a common emphasis on fluidity, probability, and the possibility of infinite recombination.

B.S. Johnson’s The Unfortunates is a “book in a box”: 27 unbound sections, of which the first and last are fixed while the remaining 25 can be read in any order. In theory, this allows the novel to be read in 25 factorial—15,511,210,043,330,985,984,000,000—distinct configurations. The reader actively participates in arranging the narrative system, and Johnson believed this format better expressed the randomness of thought than the linearity imposed by traditional bookbinding. The sensation of endless permutations within a highly systematic set of parameters resonates with 1980s and ’90s gamebooks and early hypertext fiction.
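
The figure quoted above is simply 25 factorial, which can be verified in one line (a trivial check in Python):

```python
import math

# Number of orderings of the 25 freely arrangeable sections
print(math.factorial(25))  # 15511210043330985984000000
```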

Terry Riley’s In C is composed of 53 short musical phrases and can be performed by any combination of instruments. Each performer plays the phrases in order, repeating each one as many times as desired and moving on at their own pace, so the ensemble continually drifts in and out of phase. The first recording featured 11 performers, but later versions included up to 124 musicians. What mattered was not one particular arrangement, but the setup of parameters that allowed for an infinite number of variations.
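
To make that parameter-setting concrete, here is a loose Python sketch of the score’s logic. It is not Riley’s performance directions in full: the performer count, the bound on repeats, and the shared clock are simplifications introduced for the example. Each player walks through the 53 phrases in order, repeating each an arbitrary number of times, so the ensemble drifts out of phase.

```python
import random

N_PHRASES = 53      # In C consists of 53 short numbered phrases
N_PERFORMERS = 4    # ensemble size is open; 4 is an arbitrary choice here

def performer_path(max_repeats=8):
    """One performer's traversal: phrases taken in order, each repeated
    an independently chosen number of times (a simplification of the
    actual performance directions)."""
    path = []
    for phrase in range(1, N_PHRASES + 1):
        path.extend([phrase] * random.randint(1, max_repeats))
    return path

ensemble = [performer_path() for _ in range(N_PERFORMERS)]

# Because repeat counts differ, the performers drift apart over time:
for t in (0, 50, 100, 200):
    print(t, [path[min(t, len(path) - 1)] for path in ensemble])
```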

The commonality between Riley’s musical experiment and Johnson’s textual experimentation lies in their prioritization of emergent systems over traditional narratives. In other words, these works foreground systemic structuring over narrative linearity. The artist or composer here is not a fixed author of a final product, but rather a programmer who sets the parameters and framework for the work.


The 53 musical phrases of In C. ©Forking Room

B.S. Johnson holding an unbound copy of The Unfortunates. ©Forking Room


10)
John Searle’s famous thought experiment, the Chinese Room Argument, was designed to challenge the validity of the Turing Test and to critique the idea that machines could genuinely understand language or possess consciousness. In this scenario, a person inside a room receives Chinese input, processes it using a set of predefined rules—despite having no understanding of the language—and produces appropriate Chinese output. According to Searle, even though the responses appear meaningful from the outside, there is no real understanding or consciousness occurring inside the room. This argument serves as a fundamental critique of the notion that language models can understand or simulate human thought.

Put differently, while the Turing Test proposes that a language model may be deemed “intelligent” if it produces responses indistinguishable from a human, the Chinese Room insists that, even if the model’s outputs are coherent and contextually appropriate, it still lacks genuine understanding or awareness.

In some sense, the Chinese Room overlooks the Turing Test’s proposal of a model for cognition. On the other hand, these two frameworks now seem to re-emerge in the era of large language models, functioning as two foundational axes for interpreting artificial intelligence more broadly.

Perhaps it is time to apply the notion of the practical model for cognition—previously applied to the black box and the Turing Test—to the Chinese Room as well. In doing so, the Chinese Room transforms into something like a “Chinese Gym”—a hypothetical model of active thinking. In this imagined gym, characters are not passively combined according to instructions but are used like exercise equipment. Each linguistic task or processing activity contributes to the machine’s muscle memory or database. It becomes not a question of static replication but of dynamic transformation through interaction.

Now imagine an open space instead of a confined room—one where not only characters but also books, poetry, dialogue, and even multimedia contexts come into play. The goal here is not rote memorization but refined response. Knowledge is continuously updated with each exposure to data. The room is a static and isolated environment, whereas the gym is dynamic and interactive. Syntax takes precedence in the room; context and semantics are emphasized in the gym.

However, this transition from a Chinese Room to a Chinese Gym is not just a matter of adding complexity—it represents a fundamental shift in how we conceptualize AI’s capacities. The former suggests a rigid, rule-based cognition, while the latter implies adaptive and experiential learning. Perhaps the core question isn’t whether AI can “truly understand,” but rather how understanding evolves and transforms across different contexts. Ultimately, both the Turing Test and the Chinese Room offer foundational models for thinking about AI, while large-scale generative models prompt us to consider newer, more networked, and distributed approaches to cognition.

 

11)
During the 1980s, as personal computers became relatively widespread, a new kind of generativity emerged through the use of computer programs. In 1984, William Chamberlain and Thomas Etter released a chatbot-like program called Racter, and in 1987, composer and programmer Laurie Spiegel released Music Mouse. Artists willingly took on the role of developers, creating and distributing generative software themselves.

The creators of Racter described it as an “artificial writer dealing with prose synthesis.” Its most famous output, the book titled The Policeman’s Beard Is Half Constructed, was published in 1984 with a tagline that read: “The first book ever written by a computer,” and “A strange and fantastic journey into the mind of a machine.” The texts in this book show a certain degree of consistency, yet often unfold in surreal and rambling directions. For example:

“In any case, my essays and dissertations about love and its endless pain and eternal joy will be known and understood by every person who reads this, or sings, or shouts it to a worried friend or a tense enemy. Love is the question and the theme of this essay. Let us begin with this question: does steak love lettuce? It is harshly difficult and necessarily problematic to answer. Next question: does the electron love the proton or the neutron? Next question: does a man love a woman—or, more specifically and precisely, does Bill love Diane? The interesting and central answer to this question is: no, he does not! He is obsessed with her and enthralled by her. He is infatuated with her and crazed for her. It is not the love of steak and lettuce, not the love of electrons and protons and neutrons. This paper will show that the love between a man and a woman is not the love of steak and lettuce. Love is interesting to me and fascinating to you but painful to Bill and Diane. That is love!”

—Christian Bök. “The Piecemeal Bard Is Deconstructed: Notes Towards a Potential Robopoetics.” Object 10 (2001). https://www.ubu.com/papers/object/03_bok.pdf

Despite its strangeness, this book demonstrated the potential of computer-generated text and laid the conceptual foundation for many subsequent works of generative literature. Experimental writer Christian Bök even argued that after The Policeman’s Beard Is Half Constructed, the human being was no longer a necessary component of writing.

Racter. The Policeman’s Beard Is Half Constructed. New York: Warner Books, 1984.

Laurie Spiegel’s Music Mouse, released in 1987, was a computer program designed for users to generate music through an interface. Users could configure parameters such as harmonics, tempo, and portamento, and generate piano or electronic music by moving a mouse cursor across a gridded surface. Music Mouse was a groundbreaking work that laid the groundwork for later generative music programs like those produced by Brian Eno starting in the mid-1990s.
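
As a very loose sketch of the idea of an “intelligent instrument” (this is not Spiegel’s algorithm, which applied far richer harmonic logic; the scale, voicing, and grid mapping below are invented for illustration), one can imagine cursor coordinates steering two voices that the program keeps inside a chosen scale:

```python
# Loose illustration only: map grid coordinates to two scale-constrained
# voices, echoing the way Music Mouse kept mouse-driven lines "in key".
SCALE = [0, 2, 4, 7, 9]  # pentatonic pitch classes, an arbitrary choice

def snap_to_scale(step, base=48):
    """Map a grid step to a MIDI note number constrained to SCALE."""
    octave, degree = divmod(step, len(SCALE))
    return base + 12 * octave + SCALE[degree]

def voices(x, y):
    """Horizontal position steers one voice, vertical the other."""
    return snap_to_scale(x), snap_to_scale(y)

# A simulated cursor path across the grid:
for x, y in [(0, 3), (1, 4), (2, 6), (5, 9)]:
    print(voices(x, y))
```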

Both Music Mouse and Racter represented a break from earlier artistic experiments that relied on fixed content created by human authors or composers. Unlike the 1960s works that emphasized randomness and chance, the 1980s saw the rise of algorithmic and intelligent sensibilities. For more on this, one might refer to Laurie Spiegel’s brief 1989 essay, “Distinguishing Random, Algorithmic, and Intelligent Music.”

Laurie Spiegel. “Distinguishing Random, Algorithmic, and Intelligent Music.” Active Sensing 1, no. 3 (1989). https://web.archive.org/web/20220629204447/http://retiary.org/ls/writings/alg_comp_ltr_to_cem.html


Cover of Racter, released on floppy disk by Mindscape. ©Forking Room

Mac version interface of Music Mouse. ©Forking Room


12)
In the 1990s, artist-programmers began to use the internet as a platform for their creative work. Net art emerged as a new genre that departed from traditional canvases and performance stages. Artists began writing code that interacted with users in real time. One notable work from this era is JODI’s My%Desktop, which transformed the user interface into a confusing and unpredictable landscape. In such works, the boundaries between artist, tool, and audience became increasingly blurred.

As digital interfaces became more widespread, performance itself underwent a fundamental transformation. Alexei Shulgin’s 386 DX project symbolized this shift. Instead of a human band touring from place to place, this project featured an old PC running Windows 3.1 performing iconic songs like California Dreamin’ and Smells Like Teen Spirit. Using a vintage sound card and MIDI files, 386 DX posed fundamental questions about what constitutes “live” music or performance.

Net art also reshaped the dynamics of artistic communities. Unlike traditional media that required physical presence in studios or galleries, net art thrived in decentralized digital spaces. Forums, chat rooms, and mailing lists became the new salons where artists collaborated, critiqued, and exhibited their work. Projects like The Thing emerged as digital art collectives and platforms that hosted experimental works and helped cultivate early net art communities.


Alexei Shulgin, wearing a keyboard instead of a guitar, as part of 386 DX, “the first cyberpunk band.” ©Forking Room
Promotional poster for The Thing, described as “an electronic bulletin board discussing art and critical theory, a mailing system, a database archive, and a virtual gallery.” ©Forking Room


13)
To what extent, then, can we consider these historical precedents precursors to today’s sense of generativity? The works mentioned above represent various forms in which generativity was embodied within artistic contexts of their time. However, it is crucial to note that today’s artistic generativity differs fundamentally in that it relies on pre-trained generative models trained on unprecedented volumes of information.

The key issue here is the degree of influence artists have over the systems they use. In the past, artists often played dual roles as both developers and creators of their works, maintaining a close relationship with the systems they themselves devised and utilized. For example, Michael Noll—one of the pioneers of computer art and also a researcher at Bell Labs—argued early on that the artist and programmer should be one and the same.

Yu Wonjun, “On the Artificial Autonomy of Artworks: Focusing on Cybernetics and Generative Art,” in Art in the Age of Artificial Intelligence, ed. Yoo Hyunjoo (Book Publisher b, 2019), pp. 154–155.

By contrast, today’s artists mostly work through pre-trained models, and their influence over the design or functionality of these models is practically nonexistent. Even fine-tuning a foundation model with one’s own dataset has significant limitations. Considering the vast scale and complexity of the datasets used to train AI models, the degree of change that fine-tuning can effect on a model’s fundamental structure is minimal at best.

Thus, while earlier artistic precedents share some conceptual similarities with today’s practices, the dynamics of generativity have changed dramatically. Whereas artists of the past were deeply involved in the design and development of generative systems, contemporary artists often operate as users who are largely disconnected from the generative models they employ. This shift in artistic power dynamics is a key factor in understanding how generativity has evolved.

 
 
14)
At this point, we might also consider the notion of “prompt engineering.” On one hand, it is a practice built upon the mythology of the latent space contained within the black box—a kind of exploratory mapping of the valleys of weights. The latent space of a machine learning model is often imagined as a conceptual, mythological, or fictional realm in which hidden relationships, patterns, and representations exist. This is the space where complex models are believed to generate novel outputs and discover unseen patterns in massive datasets—where the magic of creativity emerges.

When we imagine the space of a large language model as a vast world, we can picture its learned weights forming canyons, mountain ranges, rivers, plains, or even holes. Topics with relatively high weights, such as food or aspects of human culture, resemble dense, uninterrupted mountain chains, while the hardest areas to explore lie in the low-weight terrains.

This territory is formed by our perception, by human convention, and by geopolitical logic, and yet it remains shrouded in mystery. It is a land where the hallucinations of abstract concepts trained by the model appear, and where interpolations that exceed human perception emerge. Latent space is often portrayed as the terrain in which the excellence and complexity of machine learning models are buried. And like gold rush prospectors, we eagerly head there in droves.

Even so, the potential of this latent space remains valid. But rather than designing prompts and models merely for efficient navigation of such mythic latent spaces, we should focus more on the deliberate, practical act of generating models of language. It’s less a matter of designing prompt-models to explore a mythic geography than it is akin to composing a way to play an instrument that has no predetermined scale.


Image generated via Midjourney. ©Forking Room


Prompt

“When imagining the space of a large language model as a vast world, one can picture terrains shaped by learned weights—canyons, mountains, rivers, plains, or pits. Heavily weighted concepts like food or human culture form dense, continuous mountain ranges, while the least explored regions lie in the low-weight terrains. This land is simultaneously shaped by human perception, convention, and geopolitical logic—yet it remains mysterious. It is where hallucinated abstractions trained by the model arise, and where interpolations exceeding human cognition emerge. Latent space, in this way, is depicted as the land in which the excellence and complexity of machine learning models are buried.”


  
15)
The idea of a digital instrument without a predetermined scale evokes the birth of digital aesthetics. Traditionally, every artistic medium came with built-in parameters and boundaries. But in the digital realm, boundaries are in constant flux. In other words, the instruments of the digital musician are never fixed—they are always evolving. This phenomenon leads us to ask: where does the instrument end, and where does the artist begin?

Historically, whenever a new medium emerged, artists had to rethink their discipline and adapt to it. Cinema, for instance, was not simply theater projected onto a screen—it involved a complete reinterpretation of space, narrative, and the role of the audience. Today, artists working with variable generative models must not only learn new instruments but also continuously adapt to evolving timbres and rhythms. What is demanded is a delicate balance between mastery and letting go.

When we compare the advent of cinema to today’s AI art, we can find compelling parallels. Early cinema, with its grainy textures and shaky frames, may now seem technically primitive, but at the time, it was a radical innovation. Likewise, AI-generated art may appear crude at first glance, but it too marks a significant departure from prior modes of representation.

The power of early cinema lay in its ability to depict realities never before seen. Fleeting moments felt eternal, and distant places became tangible. Yet those scenes were grounded in shared human experience. AI art, by contrast, begins not with visibility but with invisibility—it advances toward what lies beyond perception.

Cinema captivated audiences by evoking collective memories and shared pasts. Each frame echoed our world, stories, and emotions. But AI-generated art draws us into a perpetual flow of creation. It floats and drifts in a sea of data, lacking the photographic indexicality that once anchored meaning. In this space, emotional resonance emerges not from familiarity, but from traces of familiarity embedded within the unexpected.

AI art serves as a bridge between the familiar world and the realm of potentiality. It might be tempting to mythologize its outputs as supernatural, but upon closer inspection, we find they are filled with traces of our own world. From the training data to the choices we make and the parameters we set, everything is extracted from reality. And yet the results often feel alien. In this way, AI art intertwines the familiar with algorithmic potentiality, reflecting both the present and what may come.


Eadweard Muybridge, The Horse in Motion, 1878. ©Forking Room


16)
Eadweard Muybridge’s work, which analyzed and deconstructed the motion of horses and then reconstructed it into sequential images, can be seen as an early example of breaking down complex information into smaller parts to generate new insights or forms of expression. This process bears comparison to how generative AI and machine learning models today divide the world into massive classification datasets and then synthesize new media from them.

Just as Muybridge broke down horse movement into individual frames to study it, machine learning models engage in processes of decomposition and analysis to identify patterns, features, and relationships within data. And just as Muybridge reassembled those frames into moving images after understanding equine locomotion, generative AI models reconstruct and generate synthetic texts, images, and sounds based on weights learned through analysis and decomposition.

Muybridge’s work enabled people to appreciate the nuance and complexity of motion. Similarly, generative AI allows users to explore intricate relationships and patterns within large datasets, potentially leading to new insights and discoveries. In summary, both systems break down information that is difficult for humans to perceive as a whole, extract fundamental patterns, and recombine data to envision new extensions. Of course, while generative AI does not reveal “truth” in the photographic sense, it does produce new “effects of reality” through statistical synthesis.



17)
If Muybridge’s research offered a microscopic view into the complexity of the world, then today’s generative AI provides a telescopic perspective—one that visualizes the vast tapestry of interconnected realities. It’s like observing a complex ecosystem where each piece of data behaves like a living organism, contributing to an overarching narrative. As with all ecosystems, the aesthetic resides not in individual elements, but in the interactions between them.

From this enormous tapestry where data and the digital realm are interwoven, a new artistic concept arises—“digital naturalism.” In this mode, the artist is no longer merely a creator but also a digital ecologist who constructs and experiments within a complex data environment. Each work becomes a delicately balanced ecosystem, where every element and byte exists within flows of harmony and chaos, like organisms interacting with one another.

Perhaps Émile Zola’s concept of the experimental novel was similar: not a creator of stories, but a novelist who sets experimental conditions and observes the interactions. If Zola viewed his characters as phenomena within a controlled literary experiment, the digital naturalist views data clusters as organisms within a managed digital biome. “Digital naturalism” imagines a domain where the curated environment itself is the narrative. Unlike traditional stories that seek to explain the human condition, these layered digital terrains describe the evolving relationship between technology and ourselves. Acting as empirical observers, artists promote exchanges among different data streams, drawing out emergent stories of symbiosis and friction.

What’s intriguing is the degree of decentralization in these digital ecosystems compared to Zola’s narrative world. While Zola observed how characters responded to adversity, in digital naturalism, the adversity itself can transform depending on how data reacts. Data clusters, algorithms, and user inputs interact beyond the artist’s control, forming feedback loops where environment and elements constantly affect one another.

Zola’s view of fiction was like watching a tiny cosmos of life inside a glass terrarium—adjusting lighting, introducing new plants, observing behavioral shifts. But the terrarium of digital naturalism is not enclosed in glass. It is linked to countless other digital biomes. Whereas Zola’s experimental novelist sets up the opening scene and steps back to observe, the digital naturalist must constantly recalibrate, renegotiate, and reinterpret as their biome merges with and responds to larger networks.

In such a vast digital ecosystem, the concept of authorship becomes even more ambiguous. Is the digital naturalist truly the sole creator, or merely a catalyst who enables the biome to function and lets data interactions form the narrative? In generative art, every pixel or sound byte holds the potential to alter trajectories and evolve autonomously. This dynamic nature of digital ecosystems calls into question our traditional understanding of artworks as fixed expressions of an artist’s intent.


Édouard Manet, Portrait of Émile Zola, 1868, Musée d'Orsay. ©Forking Room


18)
Whereas traditional art forms like cinema have focused on personal expression or social reflection, AI art poses questions about the placement of the self within the artwork. Amidst the currents in which personal subjectivity and collective data intertwine, AI art urges a reconsideration of the very essence of artistic identity in the digital age.

In the works of Apichatpong Weerasethakul, a film director and contemporary artist, the contemplation of identity as something variable and governed by memory appears in various forms. It is therefore unsurprising that he used GPT-3 to generate an experimental narrative in the form of a screenplay titled Conversations with the Sun. This story unfolds as a kind of allegorical tale encompassing characters like Jiddu Krishnamurti, Arthur C. Clarke, and Tilda Swinton—as well as non-human entities such as the Sun, a black hole, and a wolf—who travel through the cosmos, touching on themes like memory, death, friendship, love, and identity.

As the story progresses, the identities of the characters grow increasingly ambiguous: they shift, merge, and sometimes are revealed to be mere hallucinations. For instance, the character “Apichatpong” suddenly transforms into the Sun; Krishnamurti, Dalí, and Arthur C. Clarke are revealed to be mirages born from imagination; and some characters appear as reincarnated souls from past lives. This emphasis on the ambiguity and hybridity of the self in Conversations with the Sun reflects a fundamental characteristic of large language models themselves. Models like GPT-3 are never singular; they are vast assemblages of simulated knowledge. These language models take on diverse voices in response to the intention and tone embedded in the prompt, and Apichatpong focuses precisely on this mutable identity.

Although Apichatpong did not write a single sentence himself, he directed GPT-3 through a sequence of prompts, which were separately documented. This allows one to trace not only the formation of the work but also the interactions between the language model and the artist in the form of inputs and outputs—a fascinating aspect of the project. The prompts Apichatpong provided to GPT-3 included: “Apichatpong, the Sun, Krishnamurti, Dalí, and Arthur C. Clarke are aboard a spaceship heading for Earth. Apichatpong and the Sun realize that the other three characters do not actually exist. Write their conversation.” “Then the Sun tells Apichatpong about a dream it had two days ago.” “Then Apichatpong tells the Sun about his dream.”

Apichatpong Weerasethakul, Conversations with the Sun, translated by Gyesung Lee. Mediabus, 2023, p. 130.


Apichatpong Weerasethakul, Conversations with the Sun, translated by Gyesung Lee. Mediabus, 2023. ©Forking Room

Apichatpong said he had in mind Krishnamurti’s concept of “living without thinking” while working on Conversations with the Sun. Regarding the process of co-creation with GPT-3, he commented: “I suppose I was thinking, but not carefully. I was thinking about the overall structure of the conversation. But at the same time, there was a character in the work named Apichatpong, so I couldn’t help but think that he symbolized me.”
Ibid., p. 107.

In other words, the absence of thought that arises during collaboration with an artificial neural network model is not so much a mechanical replacement of cognition as it is a generative sensation—one that challenges conventional creative thought. Furthermore, the artist’s identity is often modeled and amplified by the neural network in unexpected ways, contributing to that sensation.
 


19)
For Krishnamurti, thought and time were inseparable. Thus, stopping thought equates to stopping the movement of time (related ideas also appear in Conversations with the Sun). According to Krishnamurti, one can approach "living without thinking" by facing the fact that one cannot stop thinking—in other words, affirming the present moment. Therefore, "living without thinking" does not imply the absence of thought, but rather a clear recognition and acceptance of the inevitability of thought. Krishnamurti believed this could liberate the mind from its self-centered state, from its private echo chamber.

Examining Krishnamurti's view on thought reveals similarities with generative art. Just as Krishnamurti urges us to acknowledge the inevitability of thought, generative art accepts the unpredictability and irregularity inherent in its medium. It does not resist or attempt to control this disorder; instead, it allows it to flow, producing works that are both spontaneous and structured. This mirrors the tension between control and surrender that is intrinsic to human experience. Generative artworks reflect this dichotomy, visually expressing the unpredictable interplay between intention and result, between the conscious mind and the unbridled flow of thoughts.

Fundamentally, generative art exists in a constant state of flux, evolving each time it is expressed, much like the continuous flow of thought. Krishnamurti's concept of stopping time by halting thought resonates with the intent of generative art to capture and fixate a specific moment into a work. When an artist selects a moment, they crystallize and share a specific thought or concept, thereby halting time—yet countless other possibilities vanish unmanifested.
 
 
 
20)
After Muybridge’s photography, the romantic illustrations of horses galloping as if in flight—as seen in works by Théodore Géricault or Édouard Manet—became outdated visual illusions, gradually fading from visual culture and the popular imagination.

This sparked considerable controversy among artists of the time. Jean-Louis-Ernest Meissonier, a leading painter of the French academic style, reportedly lamented, "My eyes have deceived me all this time." On the other hand, sculptor Auguste Rodin commented on Muybridge's photographs, saying, "It is the artist who tells the truth, and photography lies. In reality, time never stands still."
—Rebecca Solnit, River of Shadows, trans. Hyunwoo Kim, Changbi, 2020, pp. 300–301.

Was this a crisis of representation or a loss of a certain world? Rodin's remark seems to express a desire that we not be misled by a model of representation or by a model of signs—that the multidimensionality of a world not be reduced to a single model. This makes us reconsider today’s proliferating large-scale generative models. It raises the problem of how not to be deceived by models while still constructing them.


Édouard Manet, Racing at Bois de Boulogne, 1872, Whitney Collection. © Forking Room


22)
This brings us to the challenge of conceptualizing a practical model that poses questions and triggers awareness—a model we propose to call the "model of ignorance." At first glance, it may seem that large language models like ChatGPT are incapable of responding with "I don't know." However, they actually can. These models are trained on vast amounts of text and are also designed to generate responses that simulate human-like reactions, including acknowledging uncertainty or lack of knowledge. The domains in which GPT typically replies with "I don't know" closely mirror human limits, with the exception of emotional or cognitive overwhelm:
 
- Lack of knowledge or expertise
- Uncertainty or ambiguity
- Speculation or future predictions
- Personal opinions or experiences
- Confidentiality or privacy concerns
- Emotional or cognitive overload
 
But we must envision another model of ignorance. Not merely a model capable of answering with "I don't know," but rather something akin to Jacques Rancière’s notion of the “ignorant schoolmaster.” It is a model not designed to augment intelligence and capacity, but one that acknowledges and gropes through the limits and uncertainties of our knowledge. Rather than being preoccupied only with what can be analyzed and modeled, this model emphasizes the necessity of recognizing the unknowns and gaps embedded in knowledge. Instead of demanding direct answers from AI, it aims to collaboratively generate questions, synthesize diverse perspectives, and enhance intellectual autonomy.

Rancière’s “ignorant schoolmaster” advocates for a pedagogy rooted in a shared state of ignorance. Jacques Rancière, The Ignorant Schoolmaster: Five Lessons in Intellectual Emancipation, trans. by Yang Chang-ryeol, Gungri, 2016. It is a perspective that regards ignorance not as a deficiency or weakness but as a catalyst for intellectual growth and critical thinking. By embracing ignorance, intellectual autonomy can be cultivated. This approach allows learners to reach deeper understanding through active engagement, questioning, and independent pursuit of knowledge.

To observe, to repeat, to verify, to fumble—like solving a riddle. Rancière suggests that this very fumbling is the true movement of intelligence, a genuine intellectual adventure. Couldn’t a model that disrupts the state of not knowing one’s own ignorance be what we call a “model of ignorance”?
 
 
 
23)
The idea of developing an AI based on the concept of the "ignorant schoolmaster" may seem counterintuitive, yet it is profoundly necessary. Such a "model of ignorance" could paradoxically become a beacon of knowledge in an age oversaturated with information. By embracing gaps and absences, it may enable deeper and more holistic forms of learning—just as silence, no less than notes, lends depth and resonance to the melody of knowledge.

Juxtaposing GPT's information-centric approach with the "model of ignorance" reveals a unique perspective—one that urges us to rethink our understanding of intelligence. The key lies not in finding the right answers but in maintaining an open stance toward the countless things we have yet to comprehend, with courage to question and doubt.

Some may argue that applying a "model of ignorance" to AI is regressive. Yet, in truth, it becomes a driving force propelling us toward a future where the pursuit of knowledge never ends. In a world where AI might provide answers to all our questions, we must ask ourselves what it is we truly seek. Mere productivity? Or deeper understanding? Productivity, by nature, is a double-edged sword. While technology, including AI, allows us to accomplish more in less time, we must question whether the depth of understanding is keeping pace with this acceleration.

The tension between speed and depth lies at the heart of the modern riddle. Today, access to information is easier than ever, yet profound understanding becomes increasingly elusive. It is time to shift focus from the quantity of knowledge to the subtlety of understanding, and to reassess the metrics of intellectual achievement. Embracing a “model of ignorance” does not reject the convenience of modern technology; rather, it affirms the interstitial spaces of ambiguity and seeks to complement them through inquiry.

 

24)
Returning to Margaret Masterman, let’s examine another of her claims: “Language, at least in part, has its present form because it was created by a creature that breathes at fairly regular intervals.” This is noted as a rough summary of her argument. Wikipedia contributors. “Margaret Masterman.” Wikipedia, The Free Encyclopedia. https://en.wikipedia.org/w/index.php?title=Margaret_Masterman&oldid=1154803664 Since Masterman did not fully elaborate on this idea, the following statements may be closer to a personal hypothesis.

To restate her claim more operationally: language is a complex system shaped by the physical and biological constraints of its users. The rhythm of breathing, limitations of vocal cords, brain structure, even the auditory organs that receive spoken language and the waves of air—all contribute to how language is formed and used. These physical limitations are integral to understanding language. Spoken language is the product of an organism and its surrounding physical conditions—and written language is, to some extent, the same.

This reminds us to consider the conditions of reality in which we exist. It brings to mind Rodin’s assertion that time never truly stands still and invites us to oscillate between what can be modeled and what remains a void, viewing large language models from that vantage point.

Masterman sought to understand computing through the metaphor—or model—of a “telescope for the mind.” Just as the telescope in the 17th century reshaped humanity’s understanding of its relationship to the world, today, computing and AI prompt a reevaluation of our perception.

If natural science was reinvented through the telescope, AI might be tied to the reinvention of our cognitive systems. Not merely through steroidal boosts in productivity enabled by statistically synthesized outputs, but through something related to alternative cognitive paradigms. In this context, viewing large language models as black boxes or models of ignorance—hypotheses of perception—could help disrupt conventional thinking about language, cognition, and the relationship between humans and machines, and open up new cognitive models.
 
 

25)
The metaphor of the “telescope for the mind” sparks intriguing thoughts. A telescope brings distant stars closer but does not tell us what to think about them. Likewise, while AI holds tremendous potential, it should be seen not as a complete source of knowledge but as a mediator that brings knowledge closer and enables its exploration. The essence of the “model of ignorance” is not to reduce AI’s level of knowledge, but to harness its potential to foster questioning, exploration, and curiosity.

Seeing AI not merely as a tool but as a medium places it in a broader cultural role, akin to literature or art. In such an aesthetic space, ambiguity effectively invites contemplation and reflection. This raises the question: can AI, as a medium, trigger such aesthetic spaces as a “model of ignorance”? Like a novel or piece of music that arouses curiosity, can it refuse to suppress uncertainty and instead deepen it and channel it into reflective understanding?

All the artworks and concepts explored here offer the potential to reframe our perspective on the vast digital ecosystem. Within it, the dynamism of digital art, the oscillation between what can and cannot be modeled, and the rich subtleties of language intersect. Our task may be not to avoid but to embrace the unknown and the ambiguous, using the “model of ignorance” to propel deeper inquiry.


 


This essay expands on Productivity and Generativity – GPT as a Drunken Poet (Gyesung Lee & Binna Choi), presented at a 2023 talk at Forking Room.
Gyesung Lee, through translation and writing, seeks to explore not the utilitarian but the poetic aspects of large language models and computer-generated texts. He translated Pharmako-AI (Ghost Station, 2022) and Conversations with the Sun (Mediabus, 2023), and contributed to Context and Contingency – GPT and Extractive Linguistics (Mediabus, 2023).

Binna Choi works with Unmake Lab and participates in Forking Room as a researcher. Her interests lie in overlapping developmental histories with the extractivism of machine learning to reveal the sociocultural, ecological conditions of the present. Recently, she has been working with the concept of “general nature” in relation to datasets, computer vision, and generative AI to address issues of anthropocentrism, neocolonialism, and catastrophe.
