The illusion of life – Managing NPCs in Avatar: Frontiers of Pandora
About the talk
In this talk we will be looking at NPCs in the open world of “Avatar: Frontiers of Pandora”, specifically the challenges of NPC management and meeting performance budgets. “Avatar: Frontiers of Pandora” was developed by Massive Entertainment in Sweden, in close collaboration with other Ubisoft studios, including Ubisoft Düsseldorf in Germany, and was published by Ubisoft in December 2023 for PC, PlayStation 5 and Xbox Series X|S. The game uses the Snowdrop engine and is an action-adventure set in the world of James Cameron’s Avatar.
We will dive deep into handling LOD (Level of Detail) for AI logic, from adjusting update rates and impostors to virtualized NPCs. Targeted LOD boosts let us improve responsiveness where needed. Our culling solution for NPCs and their AI allowed us to almost double the number of NPCs in some settlements and enemy installations. When unexpected situations arise, as systems randomly interact in the open world, we apply automated stabilization of AI performance. We conclude with an overview of how the presented LOD tech interacts with NPC activities (working, cooking, eating, sleeping, …) in our major NPC settlements, creating the illusion of thriving Na’vi life.
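As a rough illustration of the update-rate side of AI LOD (the distance bands, intervals, and boost mechanism below are invented for this sketch, not Snowdrop's actual values), distance-bucketed update rates with targeted boosts might look like:

```python
# Illustrative sketch, not shipped engine code: pick an AI update
# interval from the NPC's distance to the player, with an optional
# "boost" that temporarily forces full-rate updates (e.g. when the
# player looks at or interacts with the NPC).

# Hypothetical LOD bands: (max_distance_m, seconds_between_AI_updates)
LOD_BANDS = [
    (25.0, 0.0),          # full rate: update every frame
    (60.0, 0.25),
    (150.0, 1.0),
    (float("inf"), 4.0),  # far away / virtualized: very coarse ticks
]

def ai_update_interval(distance_m: float, boosted: bool = False) -> float:
    """Return seconds between AI updates for an NPC at this distance."""
    if boosted:
        return 0.0  # targeted LOD boost: full responsiveness where needed
    for max_dist, interval in LOD_BANDS:
        if distance_m <= max_dist:
            return interval
    return LOD_BANDS[-1][1]
```

The important property is that the boost path overrides distance entirely, so responsiveness can be bought back for exactly the NPCs the player is paying attention to.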
Takeaway
– Gain insight into the challenges and solutions of NPC management in open-world games.
– Do more with AI logic LOD than just reducing update rates in the distance.
– Accept that you might be over NPC budget in an unpredictable world, and just deal with it.
– Consider that you might want to implement NPC Culling and Ghosts.
– Create the illusion of life in your settlements by combining the strengths of various tech solutions, while hiding the weaknesses.
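One way to read the "NPC Culling and Ghosts" takeaway above is a budget-enforcing manager that despawns low-importance NPCs but keeps a tiny record behind, so they can return in a plausible state. This is an assumed interpretation, not the shipped Snowdrop implementation:

```python
from dataclasses import dataclass

@dataclass
class Ghost:
    """Lightweight record left behind when an NPC is culled (hypothetical)."""
    npc_id: int
    position: tuple
    activity: str  # e.g. "cooking", "sleeping"

class NpcManager:
    def __init__(self, budget: int):
        self.budget = budget
        self.active = {}   # npc_id -> (importance, position, activity)
        self.ghosts = {}   # npc_id -> Ghost

    def spawn(self, npc_id, importance, position, activity):
        self.active[npc_id] = (importance, position, activity)
        self._enforce_budget()

    def _enforce_budget(self):
        while len(self.active) > self.budget:
            # Cull the least important active NPC, keeping a ghost record.
            victim = min(self.active, key=lambda i: self.active[i][0])
            _, pos, act = self.active.pop(victim)
            self.ghosts[victim] = Ghost(victim, pos, act)

    def restore(self, npc_id, importance):
        # Bring a ghost back; the budget check may cull someone else instead.
        g = self.ghosts.pop(npc_id)
        self.spawn(npc_id, importance, g.position, g.activity)
```

The ghost record is deliberately tiny: enough to resume the activity believably, without paying for full AI state while culled.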
Experience level needed: Intermediate, Advanced
On‑Device Generative Agents for Lifelike NPCs
About the talk
Developers are starting to embrace generative AI to elevate NPC believability, but current cloud-based agent frameworks are far too costly and slow for real-time integration. In this session, you’ll learn how Atelico’s Generative Agents Realtime Playground (GARP) leverages proprietary small LLMs, a cognitive memory architecture, and hot-swappable adapters to run lifelike, emergent NPC behavior directly on consumer hardware with zero inference cost and no rate limits. We’ll walk through our end-to-end pipeline:
– Game‑object serialization & prompt templating
– Memory DB design (working vs. long‑term memory, retrieval, reflection)
– Adapter‑based fine‑tuning for character, planning & chat
– Performance optimizations (quantized models, parallel LM calls, caching, guardrails)
You’ll come away with actionable guidelines for implementing LLMs in your game, architecting cognitive agents, and designing player interactions that nudge emergent narratives, all without breaking the bank.
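The first pipeline step, game-object serialization and prompt templating, can be sketched as follows. GARP's real prompt format and field names are not public, so everything here (the NPC fields, the template wording) is invented for illustration:

```python
import json

def serialize_npc(npc: dict) -> str:
    """Serialize the NPC's observable state to compact JSON for the LLM."""
    return json.dumps(npc, sort_keys=True, separators=(",", ":"))

# Hypothetical template: state plus retrieved memories, then an instruction.
PROMPT_TEMPLATE = (
    "You are {name}, an NPC. Current state: {state}\n"
    "Relevant memories:\n{memories}\n"
    "Decide your next action as a single short sentence."
)

def build_prompt(npc: dict, memories: list) -> str:
    """Fill the template with serialized state and retrieved memory lines."""
    return PROMPT_TEMPLATE.format(
        name=npc["name"],
        state=serialize_npc(npc),
        memories="\n".join("- " + m for m in memories),
    )
```

Compact, deterministic serialization (sorted keys, no whitespace) matters in practice: it keeps token counts down and makes prompt prefixes stable, which is what enables techniques like the prefix caching mentioned in the takeaways.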
Takeaway
– AI Engine Architecture: How to orchestrate LLMs, memory and adapters for real‑time agents
– Implementation Patterns: Prompt templating, JSON serialization, parallel calls & safety guardrails
– Performance Strategies: Quantized small models, LoRA swapping, prefix caching, batching
– Design Best‑Practices: Balancing autonomy vs. authorial control, chat‑driven nudging, emergent behavior design
– Integration Guidelines: Godot plugin workflows, data pipelines & dev tooling
All these are lessons learned implementing GARP and our first, as-yet-unannounced, game.
Experience level needed: Intermediate
Supporting thousands of simulated NPCs in the open world of KCD2
About the talk
One of Kingdom Come: Deliverance’s most prominent features is its deeply simulated open world. All of our NPCs go about their daily routines and react to player actions even when the player is long gone. In Kingdom Come: Deliverance 2 we have not only quadrupled the number of NPCs on the map to nearly 2,400 but also concentrated around half of them in a single city. To keep frame times and memory reasonably low, we had to introduce new techniques for the level of detail of the AI simulation, all while keeping the scripting interface as oblivious to the LODs as possible.
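A minimal sketch of what an LOD-oblivious interface can look like (this is not Warhorse's code; the schedule and activities are invented): if an NPC's daily routine is derived from the clock rather than from accumulated per-frame state, scripts can ask "what is this NPC doing now?" and get the same answer whether the NPC is fully simulated nearby or not ticked at all in the distance.

```python
# Hypothetical daily routine: (start_hour, activity), in chronological order.
SCHEDULE = [
    (6, "wake_up"),
    (7, "work_smithy"),
    (12, "lunch_tavern"),
    (13, "work_smithy"),
    (19, "dinner_home"),
    (22, "sleep"),
]

def activity_at(hour: float) -> str:
    """Derive the current activity from time of day alone (LOD-independent).

    Before the first entry of the day, the NPC is still on the last
    entry of the previous day (asleep).
    """
    current = SCHEDULE[-1][1]
    for start, activity in SCHEDULE:
        if hour >= start:
            current = activity
    return current
```

Near the player, a fully simulated NPC would pathfind to the smithy and play animations; far away, the same query still returns "work_smithy", so quest scripts behave identically at every LOD.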
Takeaway
– Advantages of having the AI simulated for the whole open world
– What can be optimized away, and when, to cause minimal impact on the player experience
– Techniques and ideas for LOD implementation
– Importance of setting the environment and scripting limitations early in the development
Experience level needed: Intermediate
Use Cases and Practical Methodologies for Reinforcement Learning for Learning Agents at Scale
About the talk
As games continue to build diverse and dynamic gaming experiences, there is constant pressure to iterate quickly, adjust NPC behavior, and create more and more dynamic content. Within this context, in this session we will:
– Explore Use Cases: The wide variety of ways that these reinforcement learning techniques can be applied in games of all kinds
– Discuss: Telemetry and player data collection methodologies as input to some of these models
– Demo: See how Databricks can help you scale: experience data generation, efficient training on CPU and GPU, experimentation with managed MLflow, deployment of models, and governance of assets, usage, and monitoring of training workflows
– Explain How: To enable this dynamism for your future titles
You will leave this session with new ideas on how this might apply to your titles: some feasible today, and some aspirational, more challenging use cases for the future.
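To ground the session's subject, here is a toy tabular Q-learning loop (the talk itself is about scaling far beyond this with telemetry and Databricks tooling; this is just the core update rule on a five-cell corridor, with the agent earning +1 for reaching the last cell):

```python
import random

N, GOAL = 5, 4
ACTIONS = (-1, 1)                       # step left / right
Q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1      # learning rate, discount, exploration

def step(state, action):
    """Move in the corridor; reward 1 on entering the goal cell."""
    nxt = min(max(state + action, 0), N - 1)
    return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

def greedy(state):
    """Best known action, breaking ties randomly so early training explores."""
    best = max(Q[(state, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if Q[(state, a)] == best])

random.seed(0)
for _ in range(500):                    # training episodes
    s, done = 0, False
    for _ in range(100):                # cap episode length
        a = random.choice(ACTIONS) if random.random() < EPS else greedy(s)
        nxt, r, done = step(s, a)
        # One-step Q-learning update toward the bootstrapped target.
        target = r + GAMMA * max(Q[(nxt, b)] for b in ACTIONS)
        Q[(s, a)] += ALPHA * (target - Q[(s, a)])
        s = nxt
        if done:
            break
```

After training, the learned Q-values prefer moving right in every cell; the production-scale version of this idea swaps the toy environment for game simulations and the dictionary for a neural policy.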
Predicting Combat Outcomes in Total War
About the talk
In this talk, we’ll share how we used supervised learning to predict one-on-one unit combat outcomes in Creative Assembly’s Total War series. The system aims to improve autoresolver results and help the battle AI decide when units should engage or pull back. We’ll walk through how we built an automated in-game simulation environment to gather data, how we trained our prediction model, the results, and finally the challenges we ran into along the way. We’ll also highlight the key takeaways – both from the technical side and from working closely across R&D and development teams.
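Creative Assembly's actual features and model are not public, so the following is only an illustrative sketch of the idea: train a tiny logistic regression on synthetic one-on-one combat records, with invented stand-in features (stat deltas between attacker and defender):

```python
import math
import random

random.seed(1)

def make_record():
    """One synthetic duel: (features, attacker_won)."""
    atk_delta = random.uniform(-10, 10)  # attacker attack - defender defense
    hp_delta = random.uniform(-50, 50)   # attacker hp - defender hp
    # Synthetic ground truth with a little noise, standing in for the
    # automated in-game simulations that would gather real data.
    label = 1.0 if 3 * atk_delta + hp_delta + random.gauss(0, 5) > 0 else 0.0
    return (atk_delta, hp_delta), label

DATA = [make_record() for _ in range(2000)]
w, b, LR = [0.0, 0.0], 0.0, 0.001

def predict(x):
    """Probability that the attacker wins the duel."""
    z = w[0] * x[0] + w[1] * x[1] + b
    return 1.0 / (1.0 + math.exp(-z))

for _ in range(10):                      # SGD epochs over the log-loss
    for x, y in DATA:
        g = predict(x) - y               # d(loss)/dz for logistic regression
        w[0] -= LR * g * x[0]
        w[1] -= LR * g * x[1]
        b -= LR * g

accuracy = sum((predict(x) > 0.5) == (y == 1.0) for x, y in DATA) / len(DATA)
```

The win probability is exactly what an autoresolver or battle AI needs: a calibrated number it can threshold to decide whether a unit should engage or pull back.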
How to bully AI into delivering meaningful gameplay
About the talk
Generative AI has the potential to deliver radical new gameplay experiences – featuring emergent stories, unshackled characters, and unprecedented player agency.
But it won’t do it without a fight.
Left to its own devices, Generative AI will happily churn out incoherent, soulless slop. A million miles from game-ready quality. And no – contrary to popular belief – you cannot fine-tune and prompt your way out of the problem. So stop trusting the model, and start bullying it.
This talk draws on seven years of R&D in the AI gameplay space – culminating in the development of our game Dead Meat. It argues that developers are not going far enough in their efforts to wrangle AI – and advocates for the adoption of an additional “bully layer” that forces AI to deliver to a human authorial vision.
We’ll give a behind-the-scenes peek at what “bullying” AI means in practice – combining real-time “direction”, dynamic context control, and intelligent quest systems to generate meaningful AI-powered gameplay. This will include real-world examples from our own game Dead Meat, as well as a number of our other key demos.
Because, as we know from experience, GenAI is not magic: it is a deeply flawed technology. But if you get in its face and bully the hell out of it to do what you want, when you want, it CAN create experiences that feel magic.
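One possible reading of a minimal "bully layer" (the real Dead Meat systems are far richer; the constraint fields, phrases, and retry strategy here are invented): instead of trusting the model, check each generated line against authorial constraints, reject and regenerate on failure, and fall back to authored content when the model refuses to cooperate.

```python
# Hypothetical immersion-breaking phrases that get a line rejected outright.
BANNED = {"as an ai", "language model"}

def bully(generate, constraints, max_attempts=3):
    """Call `generate()` until the output satisfies the authorial
    constraints; after `max_attempts` failures, use the authored fallback."""
    for _ in range(max_attempts):
        line = generate()
        lowered = line.lower()
        if any(p in lowered for p in BANNED):
            continue                     # reject slop outright
        must = constraints.get("must_mention")
        if must and must.lower() not in lowered:
            continue                     # force the authored quest beat
        return line
    return constraints["fallback"]       # authored safety net
```

The point of the pattern is that the model never has final say: authorial intention is enforced by code around it, not by prompting alone.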
Takeaway
– Stop putting your faith in AI’s ability to semi-autonomously create good gameplay experiences. Left to its own devices, AI WILL create slop.
– Stop believing that prompting and fine-tuning can save you from AI slop. Despite the near-universal emphasis on these methods, they cannot deliver game-ready quality.
– Start BULLYING AI into doing what you want. Additional technologies are needed to force AI to create something good, because it sure as hell won’t do it of its own accord.
– Start adopting a “bully” layer that intelligently manages character direction and conversational quests in real time – based on game consciousness, player desires, and authorial intention.
– GenAI is not magic: it is a deeply flawed technology. But if you get in its face and bully the hell out of it to do what you want, when you want, it CAN create experiences that feel magic.
Experience level needed: Beginner, Intermediate, Advanced
Let the NPCs Fight: Learning Attack Reach from Real Gameplay Data
About the talk
In this talk, we’ll explore how we approached the challenge of determining the attack range of NPCs in the latest Assassin’s Creed title. The complexity of animation systems, procedural adjustments, and varied environments made manual measurement unreliable and unscalable. To solve this, we developed a data-driven approach that captures real gameplay animations in a controlled environment, processes the data through rigorous cleaning, and analyzes it using data science techniques. This method allows us to validate and monitor attack ranges across a wide variety of NPCs and animations, ensuring consistency and catching regressions early. The talk walks through the full pipeline—highlighting lessons learned, future opportunities for automation, and why we chose data science over pure machine learning.
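The analysis step described above might be sketched as follows (the Ubisoft pipeline, percentile choice, and tolerance are not public; these values are invented for illustration): derive a robust reach value from captured hit distances for one attack animation, then flag a regression when a fresh capture drifts from the baseline.

```python
def attack_reach(hit_distances, percentile=0.95):
    """Robust reach: the distance covering `percentile` of recorded hits.

    Using a high percentile instead of the maximum keeps one-off
    procedural outliers from inflating the measured reach.
    """
    ordered = sorted(hit_distances)
    idx = min(int(percentile * len(ordered)), len(ordered) - 1)
    return ordered[idx]

def is_regression(baseline, sample, tolerance=0.10):
    """Flag when the sampled reach drifts more than `tolerance` (relative)
    from the baseline captured for the same NPC attack animation."""
    return abs(attack_reach(sample) - baseline) / baseline > tolerance
```

Run per animation on every capture batch, a check like this is what turns a one-time measurement into the continuous monitoring the takeaways describe.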
Takeaway
– Scaling validation across large content sets: Learn how to move beyond manual data entry by building a scalable, automated system to validate gameplay behaviors across hundreds of assets – saving time while improving consistency.
– Designing reproducible environments for gameplay data collection: See how creating a controlled, testable world can help teams gather high-quality animation data that reflects real gameplay conditions, enabling more reliable analysis.
– Choosing data science over machine learning – on purpose: Understand when simpler, interpretable data science methods are more effective than complex models, especially when clarity, validation, and team collaboration are priorities.
– Building a pipeline for continuous monitoring and regression detection: Discover how to implement a lightweight system that flags unexpected changes in gameplay behavior, helping teams catch bugs early and maintain design intent over time.
Experience level needed: Intermediate