I have been learning about and experimenting with LLMs/SLMs and other techniques that could enhance RPG elements and replayability in games, and I wanted to give RimWorld a try because building my own ecosystem in Unity was pretty hard work for a side project.
Very cool, how far along is this?
How will the LLM-driven systems coordinate with RimWorld's existing AI and job scheduler without causing performance issues? Won't the AI system you're planning to implement end up being too slow?
In my own system in Unity, I used LLMs merely as an orchestrator and left Behaviour Trees (https://arxiv.org/abs/1709.00084) as the core of NPC behaviour. I assume this question is directed at the NPC engine part of the mod, currently called "FelPawns".
Since my goal is to manipulate NPC behaviour in-game directly, and to use the LLM for more than just roleplay-chatbot purposes, performance is indeed an issue. I think the hardest part is bridging the LLM and RimWorld's internal scheduler seamlessly. I am loosely following some papers, like Generative Agents (https://arxiv.org/abs/2304.03442), for inspiration (and to innovate/iterate on them). Nevertheless, my goal is not to make a better version of RimWorld pawns, just a more immersive one, and to try some methods I've wanted to try along the way. Taking decisions away from the player and delegating them to a machine that is bad at long-term planning will never be a viable choice in RimWorld, in meta-gaming terms. However, if we focus on the social aspects of the pawns and leave the performance-sensitive parts to the game itself, I think we can hit a pretty nice middle ground.
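To make the orchestrator/behaviour-tree split concrete, here is a minimal sketch (in Python purely for illustration; the mod itself would be C#, and all names here are made up): the LLM is only consulted occasionally to pick a high-level goal, while a plain behaviour tree runs every tick with no LLM calls on the hot path.

```python
# Sketch of "LLM as orchestrator, behaviour tree as the core" (hypothetical
# names, not the mod's actual API).
from enum import Enum

class Status(Enum):
    SUCCESS = 1
    FAILURE = 2
    RUNNING = 3

class Node:
    def tick(self, pawn):
        raise NotImplementedError

class Sequence(Node):
    """Runs children in order; stops as soon as one fails or yields."""
    def __init__(self, *children):
        self.children = children
    def tick(self, pawn):
        for child in self.children:
            status = child.tick(pawn)
            if status != Status.SUCCESS:
                return status
        return Status.SUCCESS

class Action(Node):
    def __init__(self, fn):
        self.fn = fn
    def tick(self, pawn):
        return self.fn(pawn)

def go_to_kitchen(pawn):
    pawn["location"] = "kitchen"
    return Status.SUCCESS

def cook_meal(pawn):
    pawn["meals"] = pawn.get("meals", 0) + 1
    return Status.SUCCESS

# The LLM (called rarely, off the hot path) would map an intent like
# "pawn is hungry" to one of these pre-built trees; ticking stays cheap.
TREES = {"cook": Sequence(Action(go_to_kitchen), Action(cook_meal))}

pawn = {"location": "bed"}
TREES["cook"].tick(pawn)
print(pawn)  # {'location': 'kitchen', 'meals': 1}
```

The point of the design is that the expensive model never sits inside the per-tick loop, which is exactly where RimWorld's scheduler is most performance-sensitive.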
assuming the LLM calls up specific events does it differ much from using a random number to decide which event to call?
If we are talking about the Quest Generation part (which is still pretty early in development), it's not just calling pre-existing RimWorld map events (like Raids, Animals Join, Ambrosia Sprout, etc.).
Instead, I am trying to create a sort of lego-bricks approach for the LLM. Basically, I provide it a list (sometimes a decently huge one) of building blocks categorized as "Action", "Condition", and "Reward" nodes, each interface inheriting from a common "Node" interface. The LLM composes these nodes, which are formatted and instantiated by a factory class at runtime, producing a tree-like structure that fires a quest and tracks it at runtime where possible; where that's not possible, the LLM will interpret whether the quest is complete (sadly, I will have to add a "Scan if complete" button somewhere to save on performance).
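Roughly, the factory idea looks like this (a sketch in Python for illustration; the class and field names are invented, not the mod's actual API): the LLM emits a flat spec drawn from a fixed registry of building blocks, and anything outside the registry is rejected rather than executed.

```python
# Hypothetical "lego bricks" factory: LLM output -> validated quest nodes.
import json

class Node: ...
class Condition(Node):
    def __init__(self, params): self.params = params
class Action(Node):
    def __init__(self, params): self.params = params
class Reward(Node):
    def __init__(self, params): self.params = params

# Concrete bricks the LLM is allowed to pick from (made-up examples).
class HasItem(Condition): ...
class SpawnRaid(Action): ...
class GiveSilver(Reward): ...

REGISTRY = {"HasItem": HasItem, "SpawnRaid": SpawnRaid, "GiveSilver": GiveSilver}

def build_quest(llm_json: str) -> list[Node]:
    """Validate the LLM's output against the registry and build nodes."""
    spec = json.loads(llm_json)
    nodes = []
    for entry in spec["nodes"]:
        cls = REGISTRY.get(entry["type"])
        if cls is None:  # LLM hallucinated a block: reject, don't crash
            raise ValueError(f"unknown node type: {entry['type']}")
        nodes.append(cls(entry.get("params", {})))
    return nodes

raw = ('{"nodes": [{"type": "HasItem", "params": {"def": "Ambrosia", "count": 5}}, '
       '{"type": "GiveSilver", "params": {"amount": 200}}]}')
quest = build_quest(raw)
print([type(n).__name__ for n in quest])  # ['HasItem', 'GiveSilver']
```

Keeping the model constrained to a whitelist like this is what makes runtime instantiation safe even when the raw response is unreliable.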
Of course this approach is prone to lots of refactoring and design changes, since I am learning by doing. Just yesterday, I was reading about level generation using LLMs (https://arxiv.org/abs/2407.09013), which goes into a more detailed discussion in Section 5, especially 5.2.
The LLM will also have context about your colony, your recent quests and events, the map situation, etc., so it will certainly be more immersive than just randomizing the events. After all, even the default storytellers (even Randy) play by some rules and are not totally random.
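The context injection itself can be quite simple. A purely illustrative sketch (none of these field names come from the mod) of serializing recent colony state into the prompt:

```python
# Illustrative prompt-context builder: quests react to history rather than
# being independent random draws.
def build_context(colony):
    lines = [
        f"Colony wealth: {colony['wealth']}",
        f"Season: {colony['season']}",
    ]
    # Only the few most recent events, to keep the prompt (and latency) small.
    lines += [f"Recent event: {e}" for e in colony["recent_events"][-3:]]
    return "\n".join(lines)

colony = {
    "wealth": 12000,
    "season": "winter",
    "recent_events": ["raid repelled", "ambrosia sprout", "cow died", "eclipse"],
}
print(build_context(colony))
```

Truncating to the last few events is a deliberate trade-off: a smaller context keeps generation fast, at the cost of the model forgetting older history.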
I am also trying to keep the core architecture pretty abstract, so I (and maybe other developers) can just patch it and easily implement custom "Nodes" (maybe I should start calling them leaves...), which adds some development overhead too. If I can find some time, I will share examples of the generated quests (both the raw LLM response and the processed result) in the future.
I love the idea; it's one of the narrow areas where I wish the tech was being used more instead of less. But I keep thinking about how, if I ask LLMs to list 10 good cities to visit, I'll almost always get Paris, New York, and Tokyo in the list.
So your item/trait generation may end up with similar but non-identical items/traits when presented with similar scenarios.
That's definitely true. LLMs, especially small ones, tend to give not-so-random responses. This can be mitigated a little by context (fortunately, RimWorld already has some lore-building built in), but the output will never be completely unique. Without trying I can't say much tbh, but it's promising nevertheless.
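One cheap mitigation I'd want to try (just a sketch, not anything implemented yet, and the tag list is made up): salt the prompt with randomly drawn lore tags, so two similar colony states still steer the model toward different quest flavours.

```python
# Illustrative "prompt salting" against the Paris/New York/Tokyo effect:
# classic RNG picks the theme, the LLM only fills in the details.
import random

LORE_TAGS = ["ancient danger", "mechanoid ruins", "tribal feud",
             "glitterworld relic", "insect infestation", "pirate debt"]

def diversified_prompt(base_context, rng=random):
    seeds = rng.sample(LORE_TAGS, k=2)  # a seeded rng keeps runs reproducible
    return base_context + "\nTheme hints: " + ", ".join(seeds)

print(diversified_prompt("Colony wealth: 12000", random.Random(0)))
```

This keeps the genuinely random part in ordinary RNG (which is good at being random) and uses the model only for the part it's good at, fleshing the theme out coherently.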