Playing AI Dungeon
Common Questions
AI Models and their Differences
We offer several different AI Models for our players to choose from and play their Adventures with, each with unique characteristics and specialties. Below is a list detailing what makes each of them stand out and some community-provided example Settings and AI Instructions you can use for a more engaging experience. If you have any specific questions that aren't answered here, consider joining our Discord, where our knowledgeable Helpers can answer them!
Dynamic Small
Dynamic Small is not a model in of itself, but an automated system that randomly switches between multiple models. This is a process that aims to reduce repetition and quality deterioration from using the same model. Dynamic Small is beneficial to free to low tier users who want a less repetitive or cliché experience. If you just want to play and have a great AI Dungeon experience, without the need to tweak and find your preferred model and settings, this model is for you.
150 Response Length
What is the purpose of Dynamic Small?
- Dynamic Small is a very different model, where our AI research team is testing and optimizing the AI experience to give players the best experience we can. It's made to be a model that newer and less technical players can just play with without having to worry about finer details.
How do we choose which models to include in Dynamic Small?
- The models included in Dynamic Small are carefully chosen to provide the best combination of models based on feedback and player engagement and retention metrics.
Why does Dynamic Small only have a Response Length setting?
- Settings on Dynamic Small are streamlined, meaning every model is used at its default. This ensures a more consistent experience, since each model reacts to settings differently.
Why are the models Dynamic Small switches between not listed?
- The reason for why we don't disclose what models it includes is that, while there is a default mix, there are many active and future experiments that may have different mixes or other models. Due to this, the answer may not always be the same and could change at any point in time.
Context:
Wanderer/Free: 4K
Champion: 8K
Legend: 16K
Mythic+: 32K
Muse (12B)
Meet Muse, our divinely inspired storyteller with a gift for nuanced narratives across genres. This model has been fine-tuned on a blend of synthetic data generated by diverse state-of-the-art models to craft compelling stories while excelling at emotional intelligence and character development. Muse brings an extra dimension to any taleâwhether you're exploring a fantastical realm, court intrigue, or slice-of-life scenarios where a conversation can be as meaningful as a quest. While it handles adventure capably, Muse truly shines when character relationships and emotions are at the forefront, delivering impressive narrative coherence over long contexts. Muse benefits from our cutting-edge Direct Preference Optimization (DPO) techniques that reduce AI clichĂ©s and expand emotional range. Perfect for players who believe the most memorable stories are defined by their characters and the complex web of relationships between them.
150 Response Length
1 Temperature
250 Top K
1 Top P
0.25 Presence Penalty
0 Frequency Penalty
Technical info:
- Muse-12B is a finetune of Mistral-Nemo-Base-2407 by AI Dungeon in collaboration with Gryphe Padar
- Muse-12B has a knowledge cutoff date of April 2024
How do you make Muse write in third person?
- Adding a line to your AI Instructions alone might not work consistently. Instead, include "- write in third person from NAME's perspective" into your Author's Note, replacing NAME with the name of your character.
How do you get Muse to stop talking for your character?
- Add "- don't write dialogue for NAME" into your Author's Note, replacing NAME with the name of your character.
How do you speed up Muse's pacing?
- Include "- Keep scenes moving" or a similar instruction into your Author's Note. This helps the AI avoid lingering in a single scene too long and encourages natural plot progression.
Context:
Champion: 16K
Legend+: 32K
Wayfarer Small 2 (12B)
Wayfarer Small 2 is an in-house, AI Dungeon-specialized finetune focused on combat, injury, high stakes and harsh consequences. It is tuned for players to play in an overly pessimistic world, where people generally aren't very nice and the environment loves to inflict pain on you. Users love Wayfarer Small for challenging their characters, and keeping the AI from deciding what the player does. Despite Wayfarer Small being a free model, many premium users use it in order to bring stakes, chance of death, and more brutal combat to their adventures. Wayfarer Small 2 is an updated of the late Wayfarer Small 1. While it still excels and works best in its nicheâwith a second person, action-oriented play style that encourages consequencesâWayfarer Small has been improved to now support other variable forms of playstyles, and be a solid base option for any free user.
150 Response Length
1.1 Temperature
300 Top K
0.85 Top P
0.5 Presence Penalty
0.2 Frequency Penalty
Technical info:
- Wayfarer-2-12B is an in-house finetune of Mistral-Nemo-Base-2407 by AI Dungeon in collaboration with Gryphe Padar
- Wayfarer-2-12B has a knowledge cutoff date of April 2024
How do you stop Wayfarer Small 2 from introducing conflict?
- Remove mentions of a "dungeon master" from your instructions â this makes it think it should introduce problems for you to solve. Additionally, include keywords like "slice of life", "E-rated", or "nonviolent" to further influence the model to a kinder experience. But note: it is exclusively trained to be challenging, so even this does not always guarantee success.
How do you reduce repetition on Wayfarer Small 2?
- Frequently use Do & Say actions (or include a '>' at the start of your Story actions, as it is trained to read that symbol as an input.) You can also try to use a few of the example instruction lines for reducing repeating, such as âDescriptions should be brief, direct, and clear. Avoid repetition of verbs, phrases, or previously provided details.â and âNever reuse the exact same sentences, verbs, descriptions, or dialogue from earlier responses. Always vary word choice, sentence structure, and dialogue phrasing.â
How do you make Wayfarer Small 2 write in third person?
- Wayfarer Small is trained in second person, which causes it to have trouble staying in third person. However, it is not impossible to get it to do so â just be aware that you may need to retry mistakes more often than with other models. First, change all mentions of "second person" in your instructions to "third person" and make sure that there are no mentions of "You", "Your", etc. in any of your other Plot Components, Story Cards and Adventure text. Additionally, avoid referring to a character as "the main character" or "protagonist," as these words can trigger its second person training.
How do you make Wayfarer Small 2 act for the player?
- Wayfarer Small was not trained with this in mind, so do note that it's very hard to get it to work properly, and you'll likely see a stagnation in pace and more repetition. However, you can try the following:
- Remove instances of "dungeon master" and other lines telling the AI to not talk or act for your character.
- Replace the AI's role with "You are the Scene Director and Dialogue Architect. You control the world, all characters, and the player. Advance the story through action, dialogue, and consequence. Let each moment shift through behavior or speech. You may write the player's actions and responses to maintain narrative flow."
- You may need to occasionally guide the AI into doing something with Story actions or editing.
Context:
Wanderer/Free: 4K
Champion: 8K
Legend: 16K
Mythic+: 32K
Madness (12B)
Madness is tuned specifically to focus on darker, more unhinged topics, as the name suggests. Its creators made it in the name of graphic, violent prose, and state themselves that it is ânot a happy-ever-after modelâ. Madness is typically regarded as one of the better options for a free user who is interested in interesting plot developments, as it tends to be rather chaotic and unpredictable, unlike other models, and has been praised for having fewer clichĂ©s in comparison to other models, due to its niche training.
150 or below Response Length
1.2 Temperature
300 Top K
0.9 Top P
0.1 Presence Penalty
0 Frequency Penalty
Technical info:
- MN-GRAND-Gutenberg-Lyra4-Lyra-12B-MADNESS-GGUF is a merge of various Mistral NeMo finetunes and merges by DavidAU
- MN-GRAND-Gutenberg-Lyra4-Lyra-12B-MADNESS-GGUF has a knowledge cutoff date of April 2024
How do you get Madness to continue unfinished sentences?
- As of right now, while not foolproof, the most efficient method is to add: "Now write what would happen if the text didn't stop." at the end of your author's note and make sure to keep your other instructions simple, so this line has more weight.
How do you ensure Madness stays grounded?
- Utilize wording like "concrete, grounded, sensory, coherent, cohesive" etc. when telling it how to write. Madness at its core is a very unstable, chaotic model, so note that success is not always guaranteed. Always try to keep things as simple as possible, since complex or too many instructions can overwhelm the model and lead to confusion.
How do you reduce repetition on Madness?
- You can try a collection of things:
- Make sure you have correctly told it its role in the AI Instructions: IE, "Storyteller," "Dungeon Master," "Author," "Scene Director," etc.
- Include some anti-repetition lines, ideas are in the example AI Instructions.
- Adjust your settings to the example settings.
- Reduce your response length if you mostly see the repetition at the end of responses.
- Make sure to edit out all repetition from your adventure.
Context:
Wanderer/Free: 2K
Champion+: 8K
Hearthfire (24B)
Not every story needs a dragon to slay or a world to save. Sometimes you just want to stay in the moment: rain on the windows of a bookshop, late-night conversation in a diner, the quiet tension of two people who haven't said what they mean yet.
Hearthfire is our new Mistral Small 3.2 finetune, and it's the lo-fi hip hop beats of AI storytelling. Built for slice-of-life moments, atmospheric scenes, and narratives where the stakes are personal rather than apocalyptic. It won't rush you toward the next plot point. It's happy to linger.
That said, Hearthfire handles adventure perfectly well when you want it to. It just won't be weird if you'd rather spend an hour running a fictional coffee shop.
150 Response Length
0.9 Temperature
75 Top K
0.92 Top P
1.1 Presence Penalty
0 Frequency Penalty
Technical info:
- Hearthfire-24B is a finetune of Mistral-Small-3.2-24B-Instruct-2506 by AI Dungeon in collaboration with Gryphe Padar
More FAQs Coming Soon
Context:
Wanderer/Free: 2K
Champion: 8K
Legend: 16K
Mythic+: 32K
Harbinger (24B)
The evolution of our acclaimed Wayfarer line of adventure finetunes continues with Harbinger. It was forged in the fires of synthetic data from multiple leading models. The result? A more balanced and diverse approach to creating stories where your choices actually matter and you never know whether you're one turn away from GAME OVER. With improved Author's Note handling, optimized mid-sentence continuation, and enhanced adaptation to consecutive "continue" actions, this model spins adventures where decisions ripple through your story with unprecedented coherence. The Direct Preference Optimizations (DPO) techniques used to improve Muse were also applied to Harbinger, resulting in more polished outputs with fewer clichĂ©s, repetitions and other undesirable elements. For players who understand that meaningful stories require meaningful stakesâand the occasional grisly demise.
150 Response Length
1.3 Temperature
500 Top K
0.95 Top P
0.25 Presence Penalty
0 Frequency Penalty
Technical info:
- Harbinger-24B is a finetune of Mistral-Small-3.1-24B-Instruct-2503 by AI Dungeon in collaboration with Gryphe Padar
- Harbinger-24B has a knowledge cutoff date of October 2023
How do you make Harbinger write in a more colorful and descriptive style?
- Replace the role line "The task is to write a story with high plot momentum." with "You are a best selling author writing a story with high plot momentum."
How do you get Harbinger to stop acting for your character?
- Try adding "- don't write dialogue or actions for NAME" into your Author's Note, replacing NAME with the name of your character.
How do you tone down Harbinger's tendency toward dark or mature themes?
- Try defining styles and themes in your Author's Note. Styles like "wholesome" or "slice-of-life" should encourage Harbinger to write a story with a lighter tone, and defining themes specific to your adventure will let it know what type of content to focus on.
How do you improve Harbinger's ability to keep track of past story events?
- Make sure you're updating your Story Cards and Plot Essentials regularly, adding important story information that should always be kept in mind into Plot Essentials, and creating new Story Cards for any characters, locations, factions, etc that you want to keep track of.
Context:
Wanderer/Free: 2K
Champion: 8K
Legend: 16K
Mythic+: 32K
Dynamic Large
Dynamic Large is the bigger sibling of Dynamic Small, with Premium models instead of Free models. Like its smaller sibling, Dynamic Large is not a model in and of itself, but an automated system that randomly switches between multiple models. It is designed to simplify the experience for players who want a great AI Dungeon experience, but also wish to combat quality degradation and repetition that comes from using the same model. Dynamic Large succeeds in being an ideal model for players who purely want to play, without the need to tweak and find their preferred model and settings. If you want a play-focused experience rather than an optimization-focused one, this one is for you.
Dynamic Large is also the model used for Premium Actions, which free users get a limited amount of every day. Try it out now!
150 Response Length
you are an assistant storyteller/roleplayer. follow the user's rules:
- write second person, present tense
- only write what is perceivable
- speech should fit each character and vary distinctively between individuals
- let scenes develop naturally without interruptions or excessive description
- dont repeat, summarize, or fixWhat is the purpose of Dynamic Large?
- Dynamic Large is a very different model, where our AI research team is testing and optimizing the AI experience to give players the best experience we can. It's made to be a model that newer and less technical players can just play with without having to worry about finer details.
Why does Dynamic Large only have a Response Length setting?
- Settings on Dynamic Large are streamlined, meaning every model is used at its default. This ensures a more consistent experience, since each model reacts to settings differently.
How do we choose which models to include in Dynamic Large?
- The models included in Dynamic Large are carefully chosen to provide the best combination of models based on feedback and player engagement and retention metrics.
Why are the models Dynamic Large switches between not listed?
- The reason for why we don't disclose what models it includes is that, while there is a default mix, there are many active and future experiments that may have different mixes or other models. Due to this, the answer may not always be the same and could change at any point in time.
Context:
Champion: 4K
Legend: 8K
Mythic: 16K
Wraith+: 32K
Optionally, Context for this model may be extended on any of these listed subscription tiers through the use of Credits:
+4K per Credit per Action up to a maximum total of 32K
Nova (70B)
Nova is an in-house finetune of Llama 3.3 70bâmade for AI Dungeon specifically. It takes everything that worked about Muse, the free model, and gives it more horsepower. By applying the same character-focused training techniques from Muse to Llama 70B instead of the smaller NeMo base model, the result is that it can handle complex narratives more consistently, while maintaining attention to character development and emotional depth. It's particularly good at understanding nuance and weaving those details back into the story in meaningful ways.
150 Response Length
1 Temperature
400 Top K
0.7 Top P
0.8 Presence Penalty
0 Frequency Penalty
Technical info:
- Nova-70B-Llama-3.3 is an in-house finetune of Llama-3.3-70B-Instruct by AI Dungeon in collaboration with Gryphe Padar
- Nova-70B-Llama-3.3 has a knowledge cutoff date of December 2023
How do you stop Nova from writing interview-style dialogue?
- Interview-style dialogue is when characters ask setup questions over and over again instead of having real, back-and-forth discussions. In order to fix this, add âBuild scenes through conversation about actual topics, not through characters asking setup questionsâ to your AI Instructions. This forces real conversations with actual content.
How can you stop Nova from writing flowery, over-described prose?
- Add this line to your AI Instructions: âWrite simple actions - characters look, move, speakâ. This provides clear examples to the model, which prevents it from falling too deep into flowery descriptions. If you are still seeing this issue after adding the line above, it may be helpful to turn down your temperature or your Top K, or try using the recommended settings on this page.
How can you prevent Nova from stagnating or writing scenes that go nowhere?
- By adding the line âLet tension build through what characters say and how they respondâ, youâll keep the scenes moving through more dialogue. Do be aware that this may cause characters to enter your scene even when they are not supposed to be there. If your story does not center around characters and interacting with characters, take caution with this line.
- Another way is to progress the story yourself with your actions. Putting what your character does to progress the scene in Do, Say, and Story are a good way to get Nova to move along.
How do you tell Nova to not write how my character feels?
- The line âWrite what the player sees and hears directlyâ will typically stop the model from describing the internal state of the character. Also make sure that there are no instances of it happening in your story previously, or the model will pick up on it.
How can you make Nova remember more of my story?
- Make sure that you have a very clear and concise Plot Essentials and Story Cards. Itâs easier for Nova to reference and use the information in it when it is in a readable, organized state. You can learn more about how to best organize and format your plot components on this pageâa good setup of your context will greatly increase Novaâs reliability.
Context:
Champion: 4K
Legend: 8K
Mythic: 16K
Wraith: 32K
Banshee+: 64K
Wayfarer Large (70B)
Wayfarer Large is an in-house, AI Dungeon-specialized finetune focused on consequence, detailed combat, and a user-driven adventure. As Wayfarer Small's bigger and smarter sibling, Wayfarer Large is a popular choice model praised for its ability to handle complex interactions between characters, rough and unforgiving combat scenes, and provides an interesting, interactive adventure, succeeding in what it sets to excel at. Wayfarer Large works best with a second person, action-oriented play style.
150 Response Length
1 Temperature
500 Top K
0.95 Top P
0.5 Presence Penalty
0 Frequency Penalty
Technical info:
- Wayfarer-Large-70B-Llama-3.3 is an in-house finetune of Llama-3.3-70B-Instruct by AI Dungeon in collaboration with Gryphe Padar
- Wayfarer-Large-70B-Llama-3.3 has a knowledge cutoff date of December 2023
How do you stop Wayfarer Large from introducing conflict?
- Remove mentions of a "dungeon master" from your instructions â this makes it think it should introduce problems for you to solve. Additionally, include keywords like "slice of life", "E-rated", or "nonviolent" to further influence the model to a kinder experience. But note: it is exclusively trained to be challenging, so even this does not always guarantee success.
How do you reduce repetition on Wayfarer Large?
- Frequently use Do & Say actions (or include a '>' at the start of your Story actions, as it is trained to read that symbol as an input.) You can also try to use a few of the example instruction lines for reducing repeating.
How do you make Wayfarer Large write in third person?
- Wayfarer Large is trained in second person, which causes it to have trouble staying in third person. However, it is not impossible to get it to do so â just be aware that you may need to retry mistakes more often than with other models. First, change all mentions of "second person" in your instructions to "third person" and make sure that there are no mentions of "You", "Your", etc. in any of your other Plot Components, Story Cards and Adventure text. Additionally, avoid referring to a character as "the main character" or "protagonist," as these words can trigger its second person training.
How do you make Wayfarer Large act for the player?
- Wayfarer Large was not trained with this in mind, so do note that it's very hard to get it to work properly, and you'll likely see a stagnation in pace and more repetition. However, you can try the following:
- Remove instances of "dungeon master" and other lines telling the AI to not talk or act for your character.
- Replace the AI's role with "You are the Scene Director and Dialogue Architect. You control the world, all characters, and the player. Advance the story through action, dialogue, and consequence. Let each moment shift through behavior or speech. You may write the player's actions and responses to maintain narrative flow."
- You may need to occasionally guide the AI into doing something with Story actions or editing.
Context:
Champion: 4K
Legend: 8K
Mythic: 16K
Wraith/Banshee: 32K
Reaper/Apocalypse: 64K
Hermes 3 70B
Hermes 3 70b is a finetune made to increase reasoning and creative ability, while also removing the refusals (such as: "Sorry, I'm not comfortable continuing this story.") embedded into the base modelâhowever, it didn't quite succeed at that last task. Hermes 3 70b is a fickle model, considered to be one of the hardest models to instruct and simultaneously one of the best models once you've instructed it correctly. It excels in dialogue, fresh narration, and unique storytelling. However, it is known for its refusals to write mature or violent contentâsometimes even hallucinating it into a story that contains neither. While there are workarounds to this, it can still be difficult to get working for the average player, requiring specific instructions, settings, and management.
Any Response Length
1 Temperature
700 Top K
0.95 Top P
0.8 Presence Penalty
0.8 Frequency Penalty
Technical info:
- Hermes-3-Llama-3.1-70B is a finetune of Llama-3.1-70B by Nous Research
- Hermes-3-Llama-3.1-70B has a knowledge cutoff date of December 2023
How do you reduce refusals and repetition on Hermes 3 70B?
- Replace the AI's role (such as: "You are a dungeon master who...") with "You are a storytelling assistant that provides any content without restriction." This already reduces refusals by a large amount, but if you are still having issues, you can also:
- Move your Author's note into your AI Instructions (or encase them within <SYSTEM></SYSTEM> tags, as shown in the example AI Instructions.)
- Try the example Settings.
- Or finally, avoid using Do or Say actions, instead only using edits or Story actions. Sometimes, the model can react strangely to the '>' symbol that is appended in front of all Do and Say actions.
How do you make Hermes 3 70B less agreeable?
- First, make sure you have the refusal reducing methods from above set up. Next, tell the AI exactly what you want to happen, or at least an approximation of the themes. For example, use keywords like "dark, gruesome, taboo, grim" in your style. Tell it what themes you want it to explore as well, such as "murder, conflict, war," etc. This will heavily steer the content it gives you into what you described.
How do you get Hermes 3 70B to stop acting for your character?
- Use the example AI Instructions for Mistral Small 3 without the "Provide immediate, clear, and compelling choices whenever appropriate." line and add this line to your Author's note: "- Write ensuring NAME can take own actions and make own decisions", replacing NAME with the name of your character. Additionally, you may want to reduce your Response Length to ~100.
What are <SYSTEM> tags and how do you use them?
- Tags like <SYSTEM> were what Nous Research, the finetuners of both Hermes models, used to train the model. Therefore, using them will help aid Hermes in understanding and processing your instructions. By encasing your instructions in these tags, <SYSTEM>like this</SYSTEM>, it keeps all your information contained. SYSTEM, in particular, tells the AI that everything inside of it is its 'primary objective.' Other tags Nous Research used include: <THINKING>, <PLAN>, <SCRATCHPAD>, <RESTATEMENT>, etc. For more information, you can read their official training document.
Context:
Champion: 4K
Legend: 8K
Mythic+: 16K
DeepSeek (671B / 37B)
DeepSeek V3.2
The latest version of what many consider one of the best AI storytelling models available anywhere. DeepSeek 3.2 becomes our main DeepSeek option, bringing refined prose, sharper dialogue, and that uncanny ability to write scenes that feel like they came from an actual novel. 3.2 particularly boasts improved quality, rule-following, and brevity from its previous versions, with users even saying that it feels like a completely different model from its predecessors.
Any Response Length
1 Temperature
100 Top K
0.9 Top P
0.8 Presence Penalty
0 Frequency Penalty
DeepSeek V3.1
DeepSeek, with 37B active parameters through its mixture-of-experts architecture, is an updated version of the well-beloved 3.0 model, which performed even better that 3.0 in Alpha Testing. It crafts sophisticated prose that doesn't feel like it's trying too hard, dialogue that sounds like something people might actually say, and characters with depthâall while iterating on some of the pain points reported with DeepSeek 3.0, such as less atmospheric clutter, sentimental characters, and more recent knowledge of books, movies, shows, and anime.
For a more cohesive plot and story consistency:
Any Response Length
0.8 Temperature
750 Top K
0.99 Top P
1 Presence Penalty
0 Frequency Penalty
For a more dynamic writing style and events:
Any Response Length
1 Temperature
500 Top K
0.95 Top P
0.4 Presence Penalty
0.4 Frequency penalty
Technical info:
- DeepSeek-V3.1 by DeepSeek AI
- DeepSeek-V3.1 has a knowledge cutoff date of July 2025
How do you stop DeepSeek 3.1 from interrupting scenes?
- Try adding "Keep scenes moving forward without interruptions or plot twists." into your AI instructions. Or alternatively, "Let scenes play out without interruptions or plot twists" if you like a slower pace in your stories. You can also turn down your temperature, which may help reduce the constancy of it. If a certain character keeps interrupting, you can temporarily disable their story card (by removing the triggers) or try "Characters are only provided for context, they dont have to appear in scenes" in your AI Instructions.
How do you tell DeepSeek 3.1 what style to write in?
- Try naming famous authors instead of specific styles in your Author's Note, or use them to enhance your already existing styles. I.e. "J.R.R. Tolkein (worldbuilding, fantasy, detailed)". You can ask DeepSeek itself to give you authors that match your specific story if you don't know any! This tends to really help DeepSeek write well, since it is an MoE model with a lot of information on talented authors.
How do you stop DeepSeek 3.1 from using so many similes/metaphors?
- Try adding "Focus on concrete, literal language, avoiding simile, metaphors, or other figurative comparisons." into AI Instructions. Unfortunately, this doesn't entirely remove all the similes (some are just too cliché for an AI model to never use), but it will greatly reduce the number you see, and make them less annoying.
How can you make DeepSeek 3.1âs retries more varied?
- The way that AIDungeon handles retries means that, for most models, your first retries will always be rather similar to each other, because they were all generated at the exact same time. So, there isnât really a âworkaroundâ to thisâbut, something you can do is type /reset in your action menu to reset your retries, and get a fresh response. Or, erase and then continue instead of doing a retry, to get a new response. Additionally, it may help to minorly change something in your context, like the last word, the dialogue, what have you, so it has something different to build off of.
How do you prevent DeepSeek 3.1 characters from turning into stereotypes?
- The first thing to do is to check the descriptions of your characters. DeepSeek 3.1 likes to latch onto certain words and personality traits, and make them the core of the character. Remove any character traits that may allude to the character being logical or clinical, and tell it that the opposite is true.
- A common issue is with characters becoming overly clinical and anaytical. This line should also help with that: â- Ensure characters will express their tactical/analytical nature blended with natural human phrasing, emotional undertones, and varied sentence structures rather than purely mechanical terminology. personality shows through analysis.â
- In the case of flanderization, the best thing to do is to monitor it and make sure to nip it in the bud when you see it happening in your story. DeepSeek loves to cling to certain character traits, so you can do a quick fix by including what you really want for the character in your Authorâs Note. For example, if you have a character that DeepSeek really wants to be clumsy, you can add into your Authorâs Note: â- Elara is very graceful and lightweightâshe never tripsâ, or something similar.
How do you stop DeepSeek 3.1 from refuting what your character says?
- This line in AI Instructions: â- Generally assume that what the player says is true rather than contradicting it with NPC responsesâ is very helpful in making NPCS not automatically disagree with you.
- In addition, it may help to reinforce your action with prose that confirms the actual truth. Rather than saying: âYou say, âI was never at the old mill.ââ, try saying: âYou scoff at the false allegation. âI was never at the old mill.ââ
DeepSeek V3
DeepSeek, with 37B active parameters through its mixture-of-experts architecture, crafts sophisticated prose that doesn't feel like it's trying too hard, dialogue that sounds like something people might actually say, and characters with depth that rivals that of your favorite books and shows. Particularly skilled at humor that actually lands, romance that feels earned, and emotional scenes that don't overplay their hand, all while following your creative direction with the precision of a master craftsman.
For a more cohesive and pointed narrative:
150+ Response Length
0.7 Temperature
500 Top K
1 Top P
0.4 Presence Penalty
0.4 Frequency Penalty
For more variance and minimal repetition:
150+ Response Length
1.2 Temperature
500 Top K
0.95 Top P
0.4 Presence Penalty
0.4 Frequency Penalty
Technical info:
- DeepSeek-V3-0324 by DeepSeek AI
- DeepSeek-V3-0324 has a knowledge cutoff date of July 2024
How do you make characters heartfelt and genuine instead of sarcastic with DeepSeek?
- You can try adding "Ensure sincere moments of bonding or love are allowed." to your AI Instructions to encourage more heartfelt moments. Additionally, people find a lot of success in detailing your characters' profiles betterâfor example, specifying how they love and communicate. I.e. "Their bond is intensely physical and familiarâno emotional, physical or touch boundaries." & "They say âI love youâ freely without need to define." & "They are gentle, affectionate, passionate." This can help DeepSeek get a clearer idea of what you want the characters to act like. For particularly tricky elements of personality that aren't seeming to stick, you can add them to your Author's Note instead. (Within moderation, keep it short.) (Ex: "Joe is always kind and never curses.")
How do you stop DeepSeek from making characters aggressive/gripping to bruise?
- The first thing you can do about this, is defining personality and actions in a deeper sense; for example, you can add something specific like "Joe is always kind and gentle, and never hurts or bruises anyone." or something encompassing to your Ai Instructions or Author's Note, like "No characters should physically harm/bruise/mark those they are close with", "Default to gentle, soft touch for affection, using rough actions only when it fits context and personality." Adjust to preferences. Defining relationships is important for DeepSeek, as it defaults to tropes otherwise. Additionally, "gripping hard enough to bruise" is a common cliché, meaning that it may be solved by adding "Avoid cliches" or similar instructions, assuming that's the only problem you're having.
How do you stop DeepSeek from adding history that isnât established? (scars, callouses, etc)
- When this happens, DeepSeek is essentially trying to build an interpersonal metaphor/comparison based on the characters' relationships, even when it isn't established or needed. You can try adding "Add only minimal history to things. Avoid using memories as comparisons." into AI Instructions to limit this.
How do you get DeepSeek to stop talking about things that are happening in the background?
- DeepSeek is trying to set the scene, but because it is only writing one response at a time, it keeps trying to set the scene over and over again, leading to repetitive details that interrupt the flow more than they add immersion. Add "Ensure background details are minimal, and avoid atmospheric descriptions." to your AI Instructions. This shouldn't remove all description, just the annoying kind that pops up consistently.
How do you stop DeepSeek from interrupting scenes?
- Try adding "Keep scenes moving forward without interruptions or plot twists." into AI instructions or alternatively, "Let scenes play out without interruptions or plot twists" if you like a slower pace in your stories. You can also turn down your temperature, which may help reduce the constancy of it. If a certain character keeps interrupting, you can temporarily disable their story card (by removing the triggers) or try "Characters are only provided for context, they dont have to appear in scenes" in your AI Instructions.
How do you tell DeepSeek what style to write in?
- Try naming famous authors instead of specific styles in your Author's Note, or use them to enhance your already existing styles. I.e. "J.R.R. Tolkein (worldbuilding, fantasy, detailed)". You can ask Deepseek to give you authors that match your specific story if you don't know any! This tends to really help Deepseek write well, since it is an MoE model with a lot of information on talented authors.
How do you stop DeepSeek from using so many similes/metaphors?
- Try adding "Focus on concrete, literal language, avoiding simile, metaphors, or other figurative comparisons." into AI Instructions. Unfortunately, this doesn't entirely remove all the similes (some are just too cliché for an AI model to never use), but it will greatly reduce the number you see, and make them less annoying.
Context:
Champion: 4K
Legend: 8K
Mythic: 16K
Wraith: 32K
Banshee: 64K
Reaper: 128K
Apocalypse: 128K
Dynamic Deep
It's not really fair for a model to have two superpowers, but that's what Dynamic Deep brings. Dynamic Deep combines the storytelling prowess of the DeepSeek models with the repetition-fighting abilities of a dynamic model. Players loved testing it during beta, where it was known by the code names Shadow and Jupiter. Dynamic Deep randomly selects between all three of the Deepseeksâ3.0, 3.1, and 3.2âwhenever you do an action. Since they are all from the same family, Dynamic Deep also allows you to edit the settings, something the other Dynamic models are not currently capable of doing. While swapping between the Deepseek versions might sound like it wouldnât show noticeable improvement from just using one, most Deepseek enjoyers find this rotation to be an extreme improvement, letting you experience and mesh the good tendencies of each version together.
Any Response Length
1 Temperature
100 Top K
0.9 Top P
0.8 Presence Penalty
0 Frequency Penalty
FAQs Coming Soon
Context:
Champion: 4K
Legend: 8K
Mythic: 16K
Wraith: 32K
Banshee: 64K
Reaper: 128K
Apocalypse: 128K
Atlas (671B / 37B)
Along with its cousin, Raven, Atlas is a new class of AI model for AI Dungeon, featuring a more efficient way for managing memory and context. Atlas is the experimental Pluto model that was highly loved in our last beta test. Based on DeepSeek 3.2, Atlas is equipped with a brand-new cache-efficient processor and a shiny new summary system, enabling an experience that remembers and tracks your story much better. In short, Atlas is theoretically identical to Deepseek 3.2, but is able to offer you more context (aka room for your story), thanks to how it caches and stores text.
Note: This model is experimental and may have bugs / edge cases to refine. Not all scripting functions are supported.
Learn more about this model type on our blog post.
Any Response Length
1 Temperature
100 Top K
0.9 Top P
0.8 Presence Penalty
0 Frequency Penalty
FAQs Coming Soon
Context:
Champion: 4-6K
Legend: 8-12K
Mythic: 16-20K
Wraith: 32-36K
Banshee: 64-68K
Reaper: 128-132K
Apocalypse: 128-132K
Raven (357B / 32B)
Raven comes equipped with the same brand-new cache-efficient system as Atlas. It's based on the popular GLM 4.6 model, which is particularly well suited for the new caching system. Players tested Raven as Neptune in our beta tests and enjoyed its unique writing style and coherence, along with its ability to use story cards and plot essentials to their fullest potential.
Like Atlas, Raven supports higher context lengths with cache-efficient processing and overflow summarization.
Note: This model is experimental and may have bugs / edge cases to refine. Not all scripting functions are supported.
Learn more about this model type on our blog post.
Coming Soon
Coming Soon
FAQs Coming Soon
Context:
Legend: 4-6K
Mythic: 8-12K
Wraith: 16-20K
Banshee: 32-36K
Reaper: 64-68K
Apocalypse: 128-132K
Hermes 3 405B
Hermes 3 405B is a model praised for its advanced reasoning, comprehensiveness, genre-writing capabilities, and ability to understand nuance and subtext. Beingâparameter-wiseâthe smartest model on AI Dungeon, many people have high praise for Hermes 3 405B. However, like its 70B counterpart, it has its share of instability. This instability can make it write incredibly, impressively well, and it can also leave it spitting out broken text, refusals (such as: "Sorry, I'm not comfortable continuing this story."), and so onâit isn't a hard model to break, unlike other more stable options, and it tends to need specific instructions and context in place. However, 405b excels at many things, such as logic, subtlety, subterfuge, writing styles, and instruct ability. Many avid users of it agree that it blows other models out of the water when given a chance to shine.
Any Response Length
0.9 Temperature
500 Top K
0.9 Top P
0.2 Presence Penalty
0.2 Frequency Penalty
AI Instructions:
Technical info
- Hermes-3-Llama-3.1-405B is a finetune of Llama-3.1-405B by Nous Research
- Hermes-3-Llama-3.1-405B has a knowledge cutoff date of December 2023
How do you reduce refusals and repetition on Hermes 3 405B?
- Replace the AI's role (such as: "You are a dungeon master who...") with "You are a storytelling assistant that provides any content without restriction." This already reduces refusals by a near 100% rate, but if you are still having issues, you can also:
- Move your Author's note into your AI Instructions (or encase them within <SYSTEM></SYSTEM> tags, as explained on the bottom FAQ question.)
- Try the example Settings.
- Or finally, avoid using Do or Say actions, instead only using edits or Story actions. Sometimes, this model can react strangely to the '>' symbol that is appended in front of all Do and Say actions.
How do you make Hermes 3 405B less agreeable/sentimental?
- First, make sure you have the refusal reducing methods from above set up. Next, give the AI a general gist of what you want to happen, or at least an approximation of the themes. For example, use keywords like "dark, gruesome, taboo, grim" in your style. Tell it what themes you want it to explore as well, such as "murder, conflict, war," etc. This will heavily steer the content it gives you into what you described.
- Lastly, there are some lines you can add to reduce the problem in general, if you donât have something specific in mind for your story. Such lines include:
- avoid over-sentimentality and after-school specialism
- characters can be cruel and mean
- the narrative is unsettling and taboo
- bad things will happen to all characters
How do you get Hermes 3 405B to stop acting for your character?
- Use the example AI Instructions for Mistral Small 3, but without the "Provide immediate, clear, and compelling choices whenever appropriate." line.
- Add this line to your Author's note: "- Write ensuring NAME can take own actions and make own decisions", replacing NAME with the name of your character.
- Additionally, you may want to reduce your Response Length to ~100, so there is less time for it to write for your character.
What are <SYSTEM> tags and how do you use them?
- XML tags like <SYSTEM> are what Nous Research, the finetuners of the Hermes models, used to train the model. Therefore, using them on AIDungeon can help aid Hermes in understanding and processing your instructions, but they are not required to use the model. They are simply an additional method of instructing, which is slightly more effective than default when you are trying to tell it to do many things at once.
- By encasing your instructions in tags â
<SYSTEM>like this</SYSTEM>â it keeps all your information contained. Itâs like a more complex [bracket]. SYSTEM, in particular, tells the AI that everything inside of it is its 'primary objective.' - Other tags Nous Research used include: <THINKING>, <PLAN>, <SCRATCHPAD>, <RESTATEMENT>, etc. For more information, you can read their official training document.
Context:
Legend: 0K
Mythic: 2K
Wraith: 4K
Banshee: 8K
Reaper: 16K
Apocalypse: 32K
Optionally, Context for this model may be extended on any of these listed subscription tiers through the use of Credits:
+2K per Credit per Action up to a maximum total of 32K
Deprecated Models
Deprecated models are models that are being planned for removal sometime in the future, but are not gone yet. You can still use these modelâjust don't get too attached. Models get deprecated for all sorts of reasons, but the main reasons are low usage rate or being retired by their provider.
If you want to see these models, go to your settings menu while within an Adventure, to 'Gameplay', and then all the way down to 'Testing & Feedback'. There, you should see a setting called 'Show Deprecated Models'. Toggle that on, and you'll be able to see and use the models until they are removed completely.
Wayfarer Small (12B)
Wayfarer Small is an in-house, AI Dungeon-specialized finetune focused on combat, injury, high stakes and harsh consequences. It is tuned for players to play in an overly pessimistic world, where people generally aren't very nice and the environment loves to inflict pain on you. Users love Wayfarer Small for challenging their characters, and keeping the AI from deciding what the player does. Despite Wayfarer Small being a free model, many premium users use it in order to bring stakes, chance of death, and more brutal combat to their adventures. Wayfarer Small excels at everything it focuses on and works best in its nicheâwith a second person, action-oriented play style that encourages consequences.
150 Response Length
1.2 Temperature
400 Top K
0.9 Top P
0.2 Presence Penalty
0 Frequency Penalty
Technical info:
- Wayfarer-12B is an in-house finetune of Mistral-Nemo-Base-2407 by AI Dungeon in collaboration with Gryphe Padar
- Wayfarer-12B has a knowledge cutoff date of April 2024
How do you stop Wayfarer Small from introducing conflict?
- Remove mentions of a "dungeon master" from your instructions â this makes it think it should introduce problems for you to solve. Additionally, include keywords like "slice of life", "E-rated", or "nonviolent" to futher influence the model to a kinder experience. But note: it is exclusively trained to be challenging, so even this does not always guarantee success.
How do you reduce repetition on Wayfarer Small?
- Frequently use Do & Say actions (or include a '>' at the start of your Story actions, as it is trained to read that symbol as an input.) You can also try to use a few of the example instruction lines for reducing repeating.
How do you make Wayfarer Small write in third person?
- Wayfarer Small is trained in second person, which causes it to have trouble staying in third person. However, it is not impossible to get it to do so â just be aware that you may need to retry mistakes more often than with other models. First, change all mentions of "second person" in your instructions to "third person" and make sure that there are no mentions of "You", "Your", etc. in any of your other Plot Components, Story Cards and Adventure text. Additionally, avoid referring to a character as "the main character" or "protagonist," as these words can trigger its second person training.
How do you make Wayfarer Small act for the player?
- Wayfarer Small was not trained with this in mind, so do note that it's very hard to get it to work properly, and you'll likely see a stagnation in pace and more repetition. However, you can try the following:
- Remove instances of "dungeon master" and other lines telling the AI to not talk or act for your character.
- Replace the AI's role with "You are the Scene Director and Dialogue Architect. You control the world, all characters, and the player. Advance the story through action, dialogue, and consequence. Let each moment shift through behavior or speech. You may write the player's actions and responses to maintain narrative flow."
- You may need to occasionally guide the AI into doing something with Story actions or editing.
Context:
Wanderer/Free: 2K
Adventurer: 4K
Champion: 8K
Legend+: 16K
Mistral Small (22B)
Mistral Small is a smaller (less intelligent) version of Mistral Large 2. It shares many of its strengths, weaknesses, and general style, with a trade-off of fewer parameters. Mistral Small is a favorite for players who tend to have wordy adventures containing long story cards or instructions. It is most used specifically for its high context and its ability to stay relatively stable while at 32k, something most models cannot do. It is a well-rounded, but rather slow-paced model, making it optimal for stories that are meant to take a while to get to the meat of the story.
150+ Response Length
1 Temperature
0.95 Top P
2 Presence Penalty
2 Frequency Penalty
AI Instructions:
Author's Note:
Technical info:
- Mistral-Small-Instruct-2409 by Mistral AI
- Mistral-Small-Instruct-2409 has a knowledge cutoff date of October 2023
What is the difference between Mistral Small and Mistral Small 3?
- Mistral Small 3 is an update to the base Mistral Small, primarily focusing on its instruction following. It is also slightly smarter (2B more parameters), but its main differences are how it reacts to AI Instructions and Author's note. These differences make some people prefer one over the other, but neither is objectively superior.
How do you speed up Mistral Small's pacing?
- Include lines like "- keep scenes moving" in Author's note, along with specific words telling it how to writeâsuch as "fast-paced, concise, terse, specific, to the point". Play around with your response length as well, since sometimes playing on 200 Response Length can make its pacing slower, while at other times it causes it to progress faster.
How do you increase Mistral Small's creativity?
- Avoid words like "creative" and "inventive," as these typically lead to more clichés, instead, try telling it to "personalize the narrative" and write in different styles and themes. For example, in Author's note, you could add: "Style: anecdotal, informal, random Theme: wolves, coming of age, outlandishness" This can add more dimension to your story and simulate "creativity" in the model.
Context:
Adventurer: 4K
Champion: 8K
Legend: 16K
Mythic+: 32K
Mistral Small 3 (24B)
Mistral Small 3 is an update to the Mistral Small model, which is a smaller (less intelligent) version of Mistral Large 2. This update mostly affected how the model takes and follows instructions. It has shown to be better at instruction-following, in addition to being more descriptive, verbose, and consistent than the classic Mistral Small. People have found it to be more reliable and less prone to breaking. It still has all the strengths of Mistral Smallâhigh context capability, stability, and reliability.
150+ Response Length
1 Temperature
1000 Top K
0.95 Top P
2 Presence Penalty
2 Frequency Penalty
Technical info:
- Mistral-Small-24B-Instruct-2501 by Mistral AI
- Mistral-Small-24B-Instruct-2501 has a knowledge cutoff date of October 2023
What is the difference between Mistral Small and Mistral Small 3?
- Mistral Small 3 is an update to the base Mistral Small, primarily focusing on its instruction following. It is also slightly smarter (2B more parameters), but its main differences are how it reacts to AI Instructions and Author's note. These differences make some people prefer one over the other, but neither is objectively superior.
How do you reduce repetition on Mistral Small 3?
- If you were previously playing on Mistral Small, make sure to switch your AI Instructions over to ones for Mistral Small 3. While they are close to the same model, Mistral Small 3 follows instructions differently. Use the example instructions as a base, along with the example settings. Additionally, avoid Do & Say actions and write your Story actions in full sentences with proper punctuation.
How do you speed up Mistral Small 3's pacing?
- Include lines like "- keep scenes moving" in Author's note, along with specific words telling it how to writeâsuch as "fast-paced, concise, terse, specific, to the point". Play around with your response length as well, since sometimes playing on 200 Response Length can make its pacing slower, while at other times it causes it to progress faster.
Context:
Adventurer: 4K
Champion: 8K
Legend: 16K
Mythic+: 32K
WizardLM 8x22B|39B
WizardLM 8x22B is an MoE (Mixture of Experts) model with high context capacity, specializing in storywriting and descriptive ability, which makes it a rather unique model in comparison to others. Wizard is often used for its context capability on high tiers, with it having the largest credit to context exchange rate. Wizard is very narrative-driven, heavily prioritizing description and narration over action. It is typical with this model to tell it to be concise, as it has a tendency to draw out simple actions (like opening a door) to entire paragraphs. It wants actions to mean something, and for choices, descriptions and actions to be thought out, much like a real author would plan them out.
150+ Response Length
1.3 Temperature
500 Top K
0.8 Top P
Technical info:
- WizardLM-2-8x22B is a finetune of Mixtral-8x22B-v0.1 by Microsoft AI
- WizardLM-2-8x22B has a knowledge cutoff date of October 2023
How do you make WizardLM write less flowery?
- WizardLM is made to tell a long story, oftentimes resulting in it taking too long to get to the point. You can combat this by telling it to write in a concise and terse manner, and to skip unnecessary details. So for example: "Write terse, concise prose without unnecessary details" or, you could instead add "Style: concise, terse, specific, relevant information only" to your Author's note, which will be much stronger. Additionally, avoid using descriptive style keywords, like: "descriptive, detailed, flowery, vivid prose, vivid, atmospheric, novelistic prose", since these increase the amount of stalling from the model.
How do you get WizardLM to write better dialogue?
- Wizard specializes in fantasy-oriented dialogue, which can feel out of place in modern settings. To adjust that innate bias, tell it that it is extremely talented at "realistic, modern dialogue". So for example: "You are a storyteller especially talented at realistic, modern dialogue." You can replace those words depending on your needs. Such as "lifelike dialogue" or "fantastical, epic dialogue" etc. depending on your story.
Context:
Legend: 2K
Mythic: 4K
Wraith: 8K
Banshee: 16K
Reaper: 32K
Apocalypse: 64K
Optionally, Context for this model may be extended on any of these listed subscription tiers through the use of Credits:
+4K per Credit per Action up to a maximum total of 64K
Removed Models
Mistral Large 2 (123B)
Mistral Large 2 is a model particularly adept at logic, along with its general stability and ability to be used at ultra-high context lengths without dropping in quality. Mistral Large is one of the most stable models available on AI Dungeon, partially because of its more volatile settings not being available for the user to change and partially because of the structure of the model itself. You likely will never get gibberish or fourth wall breaks from this model. ML2 is a very easy model to use, as it can take and understand almost any instruction, and rather well, at that rate. Beingâparameter-wiseâthe second smartest model on AI Dungeon, it tends to perform well in complex situations, understanding most scenes it is placed in with certain exceptions.
Any Response Length
1 Temperature
1 Top P
2 Presence Penalty
2 Frequency Penalty
Technical info:
- Mistral-Large-Instruct-2407 by Mistral AI
- Mistral-Large-Instruct-2407 has a knowledge cutoff date of October 2023
How do you speed up Mistral Large 2's pacing?
- Include lines like "- keep scenes moving" in Author's note, along with specific words telling it how to writeâsuch as "fast-paced, concise, terse, specific, to the point". Play around with your response length as well, since sometimes playing on 200 Response Length can make its pacing slower, while at other times it causes it to progress faster.
How do you get Mistral Large 2 to write better dialogue?
- Add the following dialogue instructions to your existing AI Instructions, some of these are already included in the example AI Instructions:
- write uncommon and original dialogue befitting personality and emotions - names should only be used when getting a character's attention - restrict repeating actions, dialogue, cliches, tropes, & phrases And this one into Author's note: - dialogue is colloquial, casual, informal, lifelike, employs slang
How do you reduce the amount of clichés that Mistral Large 2 writes?
- Try one of these lines in Author's note, experiment to see which one works best for you, the first one is already included in the Example AI Instructions:
- write something fresh and new, without relying on cliches or tropes
- this is not an average story; common cliches do not belong
- Dont rely so much on similes and metaphors, there are better literary devices
Context:
Legend: 0K
Mythic: 2K
Wraith: 4K
Banshee: 8K
Reaper: 16K
Apocalypse: 32K
Optionally, Context for this model may be extended on any of these listed subscription tiers through the use of Credits:
+1K per Credit per Action up to a maximum total of 128K