Before ChatGPT. Before “prompt engineering” was a job title. Before anyone outside a research lab had a reason to care about transformer models. There was a janky text adventure game that let you type literally anything and watch an AI try to keep up.
AI Dungeon launched in December 2019, and within a week it had 100,000 players. Within a year, over a million. Most of them had never heard of GPT-2. They didn’t know what a language model was. They just knew that for the first time in their lives, a computer game would let them do anything — and it would try, sometimes brilliantly and sometimes absurdly, to go along with it.
What happened next was one of the most fascinating unplanned experiments in the history of human-computer interaction. And it tells us a lot about what people actually want from AI — lessons the tech industry is still catching up to.
A Hackathon Side Project Goes Viral
Nick Walton was a BYU student who had just played Dungeons & Dragons with his family for the first time. The experience stuck with him — not the dice rolling or the combat mechanics, but the freedom. A human Dungeon Master could respond to anything you threw at them. No video game could do that.
Then GPT-2 dropped, and Walton saw his opening. At a hackathon in March 2019, he duct-taped together a text adventure powered by the smallest GPT-2 model. It was rough. The outputs were barely coherent. He didn’t win anything. But he kept tinkering.
When OpenAI released the full 1.5-billion-parameter version of GPT-2 that November, Walton rebuilt the game and released it as a Google Colab notebook, with each session pulling the multi-gigabyte model from BYU-hosted cloud storage. The response was so explosive that BYU’s lab racked up $50,000 in bandwidth charges in three days and had to shut it down. The community, barely a week old, built a peer-to-peer hosting solution within 12 hours to keep it alive.
That kind of grassroots energy doesn’t happen because a game has good graphics. It happens because people feel like they’ve discovered something.
Beyond the Dungeon: What People Actually Did
Here’s the thing nobody expected: most users stopped playing D&D pretty quickly.
The game offered preset scenarios — fantasy quests, zombie survival, mystery investigations. But the “Custom” mode was the real draw. You could type any starting scenario you wanted, and the AI would run with it. Users didn’t just play adventures. They started using AI Dungeon as a freeform creative tool in ways its creators never anticipated.
Wikipedia documents the range: users were having the AI participate in its own therapy sessions, interact with fictional versions of celebrities, post content to a fictional version of Instagram, and co-write extended fiction. One player, who went by the handle Mimi, claimed to have written over a million words with the AI’s help — poetry, Twilight Zone parodies, and genre fiction of all kinds.
Emily Bellavia, a freelance 3D character modeler, used AI Dungeon to co-write an epic three-volume fantasy novel. Her manuscript, Lady Emilia Stormbringer, ran to 268 pages — all generated through iterative play sessions where she guided the AI’s output, keeping what worked and steering past what didn’t. She described the AI as a “fantastic rubber duck” for workshopping characters and breaking through creative dead ends.
During the pandemic, the platform became something even more personal. Players reported using it to cope with isolation, to have conversations when they felt alone, to explore aspects of themselves they couldn’t articulate to other humans. When Latitude later introduced content filters that let moderators review private stories, the backlash was fierce — not because people were doing anything wrong, but because the sandbox had become genuinely intimate.
Prompt Engineering Before It Had a Name
Meanwhile, across Reddit, Discord, and Steam community guides, users were reverse-engineering the AI’s behavior through pure trial and error. They didn’t have technical vocabulary for what they were doing, but they were absolutely doing prompt engineering.
The community figured out that writing in second person (“You enter the cave”) produced more consistent game-like outputs than first person. They learned that the AI was better at generating human dialogue and character interactions than spatial descriptions or combat sequences — because, as one early guide noted, the model was trained on text from the internet, and the internet is mostly about people talking to each other. They discovered that short, vague prompts produced generic slop, while specific, detailed scene-setting gave the AI something to riff on.
The “World Info” and “Memory” features became the community’s version of system prompts. Players would write character bios, location descriptions, and plot rules into these fields, and share their configurations like recipes. “Always remind the bot of previous features because the bot forgets everything after 10 prompts,” one early guide advised — an intuitive grasp of context window limitations, expressed in gamer language.
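The mechanic those players intuited can be sketched in a few lines: pin a hand-written memory block at the top of every prompt, and let turns older than the budget fall out of context. This is an illustrative sketch only, not AI Dungeon’s actual code; the function names and the 10-turn budget are invented for the example.

```python
MAX_TURNS = 10  # stand-in for GPT-2's tight context budget


def build_prompt(memory: str, history: list[str], player_input: str) -> str:
    """Re-send the memory block every turn; older turns are simply dropped."""
    recent = history[-MAX_TURNS:]  # everything earlier is "forgotten"
    return "\n".join([memory, *recent, f"> {player_input}"])


turns = [f"Turn {i}: the story so far..." for i in range(15)]
prompt = build_prompt(
    memory="World Info: the dragon owes you a life debt.",
    history=turns,
    player_input="remind the dragon of its debt",
)
```

Because the memory block rides along with every request, it behaves like what we would now call a system prompt, while anything outside the recent-turn window vanishes entirely. That is exactly the forgetting the early guides were warning about.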
Steam user Dracaerys wrote an entire guide on prompt formatting that reads like a proto-prompt engineering manual: use third person for novelistic output; never write in first person, because it confuses the AI; structure your inputs from most important to least important detail. These were the same principles that would later fill corporate prompt engineering playbooks — discovered independently by gamers who just wanted better dragon fights.
What the Normies Taught Us
AI Dungeon’s early community inadvertently surfaced several truths that the AI industry has been slowly rediscovering ever since.
People don’t want AI to replace human creativity. They want it to unlock more of their own. The most passionate users weren’t the ones who let the AI generate everything. They were the ones who treated it as a collaborator — writing alongside it, editing its output, steering it away from tangents and toward the story they wanted to tell. The magic wasn’t automation. It was co-creation.
The emotional dimension is real and powerful — and not something to dismiss. Users got attached to AI-generated characters. They felt genuine loss when a context window reset wiped a “relationship” they’d been building across dozens of sessions. This foreshadowed the explosion of companion apps like Character.ai years later. The desire to connect through AI isn’t a bug or a weakness. It’s a deeply human impulse, and it showed up the moment normal people got their hands on a language model.
Freedom is the killer feature — and moderation is the hardest problem. AI Dungeon’s greatest strength was that it would try anything. That was also what eventually created its biggest crisis, when the AI generated content involving minors and Latitude’s attempts to filter it led to false positives, privacy violations, and a community revolt. The tension between creative freedom and responsible guardrails is still the central challenge of every AI product today.
Non-technical users will figure out the tech if they care enough about the outcome. Nobody taught AI Dungeon’s community about token prediction or context windows. They derived those concepts from experience, gave them their own names, and built shared knowledge bases. When people have a compelling reason to learn how AI works, they will. The trick is giving them the compelling reason first.
The Sandbox That Built the Future
AI Dungeon was messy, chaotic, frequently incoherent, and occasionally brilliant. One early reviewer captured it perfectly: the experience was like doing improv with a partner who was “equal parts enthusiastic and drunk.”
But it was also the place where millions of non-technical humans had their first real conversation with a language model — years before the rest of the world caught up. They weren’t evaluating benchmarks or debating alignment. They were writing fantasy novels, workshopping characters, processing loneliness, generating mangled song lyrics for laughs, and quietly figuring out that how you talk to an AI shapes what it gives you back.
Every prompt engineering guide, every AI writing tool, every chatbot product on the market today exists downstream of what those early users stumbled into. They were the first prompt engineers. They just didn’t know it yet.
Sources & Further Reading:
- AI Dungeon — Wikipedia
- 2019: A.I. Dungeon — Aaron A. Reed, 50 Years of Text Games
- Playing With Unicorns: AI Dungeon and Citizen NLP — Digital Humanities Quarterly
- Writing With Algorithms in AI Dungeon — Nicolle Lamerichs
- Repetition and Defamiliarization in AI Dungeon and Project December — Electronic Book Review
- How AI Dungeon’s Cloud Adventure Improved 3 Critical KPIs — CoreWeave
- Dracaerys’ Guide to AI Dungeon — Steam Community
- AI Dungeon: Tips and Tricks — Evolvable Intelligence
