Conference Programme
- Monday 3rd November 2025
- Tuesday 4th November 2025
- Ian Gulland Lecture Theatre
- The Cinema
About the talk
In this talk we will look at NPCs in the open world of “Avatar: Frontiers of Pandora”, specifically the challenges of NPC management and of meeting performance budgets. “Avatar: Frontiers of Pandora” was developed by Massive Entertainment in Sweden, in close collaboration with other Ubisoft studios including Ubisoft Düsseldorf in Germany, and published by Ubisoft in December 2023 for PC, PlayStation 5 and Xbox Series X|S. The game uses the Snowdrop engine and is an action-adventure set in the world of James Cameron’s Avatar.
We will dive deep into handling LOD (Level of Detail) for AI logic, from adjusting update rates and impostors to virtualized NPCs. Targeted LOD boosts let us improve responsiveness where needed. Our culling solution for NPCs and their AI allowed us to almost double the number of NPCs in some settlements and enemy installations. For unexpected situations, as systems randomly interact in the open world, we apply automated stabilization of AI performance. We conclude with an overview of how the presented LOD tech interacts with NPC activities (working, cooking, eating, sleeping, …) in our major NPC settlements, creating the illusion of thriving Na’vi life.
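The details of Massive's system are not public beyond this abstract, but to make the update-rate side of AI LOD concrete, here is a minimal, hypothetical Python sketch: NPCs are bucketed by distance to the player and distant agents are ticked less often (the band thresholds and names are invented, not Snowdrop values).

```python
# Hypothetical sketch of distance-based AI LOD scheduling; not Snowdrop code.
from dataclasses import dataclass

# (max_distance in metres, seconds between AI updates) - thresholds are made up.
LOD_BANDS = [(30.0, 0.0), (80.0, 0.25), (200.0, 1.0), (float("inf"), 4.0)]

@dataclass
class Npc:
    name: str
    distance_to_player: float
    next_update_time: float = 0.0

    def update_interval(self) -> float:
        for max_dist, interval in LOD_BANDS:
            if self.distance_to_player <= max_dist:
                return interval
        return LOD_BANDS[-1][1]

def tick_ai(npcs, now: float) -> None:
    """Run full AI only for NPCs whose LOD interval has elapsed."""
    for npc in npcs:
        if now >= npc.next_update_time:
            # ... run perception, decision making, pathing here ...
            npc.next_update_time = now + npc.update_interval()

if __name__ == "__main__":
    crowd = [Npc("guard", 12.0), Npc("farmer", 150.0), Npc("hunter", 600.0)]
    for frame in range(3):
        tick_ai(crowd, now=frame * 0.016)
```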
Takeaway
– Gain insight into the challenges and solutions of NPC management in open-world games.
– Do more with AI logic LOD than just reduced update rates in the distance.
– Accept that you might be over your NPC budget in an unpredictable world, and just deal with it.
– Consider that you might want to implement NPC Culling and Ghosts.
– Create the illusion of life in your settlements by combining the strengths of various tech solutions, while hiding the weaknesses.
Experience level needed: Intermediate, Advanced
About the talk
One of Kingdom Come: Deliverance’s most prominent features is its deeply simulated open world. All of our NPCs follow their daily routines and react to player actions even when the player is long gone. In Kingdom Come: Deliverance 2 we have not only quadrupled the number of NPCs on the map to nearly 2,400 but also concentrated around half of them in a single city. To keep frame times and memory reasonably low, we had to introduce new techniques for the level of detail of the AI simulation, all while keeping the scripting interface as oblivious to the LODs as possible.
Takeaway
– The advantages of simulating AI across the whole open world
– What can be optimized away, and when, with minimal impact on the player experience
– Techniques and ideas for LOD implementation
– The importance of setting environment and scripting limitations early in development
Experience level needed: Intermediate
About the talk
In this talk, we’ll share how we used supervised learning to predict one-on-one unit combat outcomes in Creative Assembly’s Total War series. The system aims to improve autoresolver results and help the battle AI decide when units should engage or pull back. We’ll walk through how we built an automated in-game simulation environment to gather data, how we trained our prediction model, the results, and finally the challenges we ran into along the way. We’ll also highlight the key takeaways – both from the technical side and from working closely across R&D and development teams.
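As a rough, hypothetical illustration of this kind of supervised setup (the features, synthetic data, and model choice below are invented, not Creative Assembly's pipeline), one could train a classifier on simulated duel features and read off win probabilities for the autoresolver or engage/retreat logic:

```python
# Toy sketch: predict one-on-one duel outcomes from unit stat deltas.
# Features and data are synthetic; this is not the Total War pipeline.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
# Hypothetical features: [attack_delta, defence_delta, hp_delta, fatigue_delta]
X = rng.normal(size=(n, 4))
# Synthetic "ground truth": attacker wins if a weighted sum of deltas is positive.
logits = X @ np.array([1.2, 0.8, 1.5, 0.4]) + rng.normal(scale=0.5, size=n)
y = (logits > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")

# The predicted win probability could then feed autoresolve or engage decisions.
print(model.predict_proba(X_test[:3]))
```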
About the talk
In this talk, we’ll explore how we approached the challenge of determining the attack range of NPCs in the latest Assassin’s Creed title. The complexity of animation systems, procedural adjustments, and varied environments made manual measurement unreliable and unscalable. To solve this, we developed a data-driven approach that captures real gameplay animations in a controlled environment, processes the data through rigorous cleaning, and analyzes it using data science techniques. This method allows us to validate and monitor attack ranges across a wide variety of NPCs and animations, ensuring consistency and catching regressions early. The talk walks through the full pipeline—highlighting lessons learned, future opportunities for automation, and why we chose data science over pure machine learning.
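To make the regression-detection idea concrete, here is a small, hypothetical sketch (the data layout, percentile, and tolerance are assumptions, not Ubisoft's actual pipeline): derive an effective attack range from captured hit distances and flag drift against a stored baseline.

```python
# Hypothetical sketch: derive attack ranges from captured gameplay samples and
# flag regressions against a baseline. Not the Assassin's Creed pipeline.

def effective_range(hit_distances: list[float], percentile: float = 0.95) -> float:
    """Use a high percentile rather than the max to ignore outlier captures."""
    ordered = sorted(hit_distances)
    index = min(len(ordered) - 1, int(percentile * len(ordered)))
    return ordered[index]

def check_regressions(captures: dict[str, list[float]],
                      baseline: dict[str, float],
                      tolerance: float = 0.15) -> list[str]:
    """Return animations whose measured range drifted beyond the tolerance."""
    flagged = []
    for anim, distances in captures.items():
        measured = effective_range(distances)
        expected = baseline.get(anim)
        if expected and abs(measured - expected) / expected > tolerance:
            flagged.append(f"{anim}: {measured:.2f}m vs baseline {expected:.2f}m")
    return flagged

if __name__ == "__main__":
    captures = {"brute_overhead_swing": [1.8, 2.0, 2.1, 2.0, 2.6]}
    baseline = {"brute_overhead_swing": 2.0}
    print(check_regressions(captures, baseline) or "no regressions")
```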
Takeaway
– Scaling validation across large content sets: Learn how to move beyond manual data entry by building a scalable, automated system to validate gameplay behaviors across hundreds of assets, saving time while improving consistency.
– Designing reproducible environments for gameplay data collection: See how creating a controlled, testable world can help teams gather high-quality animation data that reflects real gameplay conditions, enabling more reliable analysis.
– Choosing data science over machine learning, on purpose: Understand when simpler, interpretable data science methods are more effective than complex models, especially when clarity, validation, and team collaboration are priorities.
– Building a pipeline for continuous monitoring and regression detection: Discover how to implement a lightweight system that flags unexpected changes in gameplay behavior, helping teams catch bugs early and maintain design intent over time.
Experience level needed: Intermediate
About the talk
In Prologue, we use machine learning to generate realistic landscapes in seconds on a player’s GPU. In this talk I’ll explore the pitfalls, risks and opportunities of using ML for landscape generation, and how we use a combination of level design, PCG and ML to create a resilient and varied world generation system.
Takeaway
– What are the costs/benefits/risks/opportunities of using ML to generate landscapes
– How best to work in a game team implementing an ML system
– An understanding of latent space methods, ideally enough to think about how they could be used in other projects
Experience level needed: Beginner, Intermediate, Advanced
About the talk
This talk explores the evolving legal landscape surrounding the use of artificial intelligence in the video games industry. It will provide an overview of key legal considerations, including intellectual property rights, data protection, contractual obligations, and liability issues arising from AI-driven content and interactions. The session will also address regulatory developments, ethical concerns, and best practices, offering practical insights for developers, publishers, and legal professionals involved in the creation and deployment of AI technologies within video games.
Attendees will gain a clearer understanding of the challenges and opportunities presented by AI, and how to navigate the complex intersection of law and innovation in this dynamic industry.
Takeaway
– Intellectual Property Rights: Attendees will understand the complexities of intellectual property rights related to AI-generated content in video games.
– Regulatory Developments: Attendees will gain insights into the latest regulatory developments affecting the use of AI in video games.
– Contractual Obligations: Attendees will learn about the contractual considerations when using AI in game development.
– Liability Issues: The talk will address the potential liability issues arising from the use of AI in video games.
– Data Protection and Privacy: The talk will highlight the importance of complying with data protection laws when deploying AI in video games.
Experience level needed: Intermediate
About the talk
This presentation explores how game companies can leverage AWS to prepare small language models that run directly on phones and desktop computers. We’ll demonstrate practical approaches for fine-tuning open-source models for specific gaming needs, enhancing them through distillation techniques, and optimizing for edge performance. Learn how to enable function calling capabilities that let these models trigger actions within your applications. We’ll showcase an efficient AWS feedback loop that continuously improves model responses by collecting user interactions and eliminating generic “slate” answers through targeted updates. Discover how this approach delivers responsive AI experiences while enhancing privacy, reducing costs, and enabling offline functionality across your customer devices.
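As a minimal, engine-agnostic sketch of the function-calling loop (the JSON format and tool names below are assumptions for illustration, not a specific AWS or model API), the game validates the model's emitted tool call and dispatches it to real game code:

```python
# Minimal sketch of dispatching a small model's function call into game code.
# The JSON schema and tool names are hypothetical, not a vendor API.
import json

def give_item(player_id: str, item: str, count: int = 1) -> str:
    return f"gave {count}x {item} to {player_id}"

def start_quest(player_id: str, quest_id: str) -> str:
    return f"started quest {quest_id} for {player_id}"

TOOLS = {"give_item": give_item, "start_quest": start_quest}

def dispatch(model_output: str) -> str:
    """Validate the model's JSON tool call before executing anything."""
    call = json.loads(model_output)
    fn = TOOLS.get(call.get("name"))
    if fn is None:
        return "unknown tool - fall back to plain dialogue"
    return fn(**call.get("arguments", {}))

# Example of what a fine-tuned SLM might emit after a player request:
raw = '{"name": "give_item", "arguments": {"player_id": "p1", "item": "health_potion", "count": 2}}'
print(dispatch(raw))
```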
Takeaway
– Develop and deploy small language models for multiple platforms.
– Implement SLM optimizations and function calling, both for operational techniques and for triggering in-game actions.
– Build a continuous-improvement feedback loop to improve the AI experience.
Experience level needed: Beginner, Intermediate
About the talk
In today’s world of LLMs, is there still a need for planners? I argue yes, maybe more now than ever. This talk will first reflect on the use of GOAP and HTN planners in games over the past 20 years. Next we explore how LLMs appear to be able to plan… yet actually are fundamentally unable to plan with the attention to detail that AI characters in games require. Finally, we discuss where LLMs can add real value for authoring symbolic planning domains, and where there could be opportunities for hybrid solutions combining symbolic planning logic with generative plans in the future.
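For readers new to planners, a compact GOAP-style forward search over symbolic world state looks roughly like this (a textbook-style sketch with invented actions, not the speaker's implementation):

```python
# Minimal GOAP-style planner sketch: breadth-first search over symbolic state.
# Actions, preconditions, and effects below are illustrative only.
from collections import deque

ACTIONS = {
    "get_axe":   {"pre": {}, "effect": {"has_axe": True}},
    "chop_wood": {"pre": {"has_axe": True}, "effect": {"has_wood": True}},
    "make_fire": {"pre": {"has_wood": True}, "effect": {"warm": True}},
}

def satisfied(conditions: dict, state: frozenset) -> bool:
    return all((k, v) in state for k, v in conditions.items())

def apply_effects(effects: dict, state: frozenset) -> frozenset:
    updated = {k: v for k, v in state}
    updated.update(effects)
    return frozenset(updated.items())

def plan(start: dict, goal: dict) -> list[str] | None:
    frontier = deque([(frozenset(start.items()), [])])
    seen = set()
    while frontier:
        state, steps = frontier.popleft()
        if satisfied(goal, state):
            return steps
        for name, action in ACTIONS.items():
            if satisfied(action["pre"], state):
                nxt = apply_effects(action["effect"], state)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, steps + [name]))
    return None

print(plan({"has_axe": False, "has_wood": False, "warm": False}, {"warm": True}))
```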
Takeaway
– Pros & cons of GOAP vs HTN planners.
– How to plan for multiple characters, actions, and dialogue all at once.
– Why LLMs are not a silver bullet replacement for planners.
– How LLMs can democratize designing content for planner-powered NPCs.
Experience level needed: Intermediate, Advanced
About the talk
Developers are starting to embrace generative AI to elevate NPC believability, but current cloud‑based agent frameworks are far too costly and slow for real‑time integration. In this session, you’ll learn how Atelico’s Generative Agents Realtime Playground (GARP) leverages proprietary small LLMs, a cognitive memory architecture, and hot‑swappable adapters to run lifelike, emergent NPC behavior directly on consumer hardware with zero inference cost and no rate‑limits. We’ll walk through our end‑to‑end pipeline:
– Game‑object serialization & prompt templating
– Memory DB design (working vs. long‑term memory, retrieval, reflection)
– Adapter‑based fine‑tuning for character, planning & chat
– Performance optimizations (quantized models, parallel LM calls, caching, guardrails)
You’ll come away with actionable guidelines for implementing LLMs in your game, architecting cognitive agents, and designing player interactions that nudge emergent narratives, all without breaking the bank.
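As one illustration of the memory-retrieval piece, here is a simplified sketch that scores memories by recency and relevance; this is not GARP's implementation, keyword overlap stands in for embedding similarity, and the weights and half-life are invented:

```python
# Simplified sketch of scoring agent memories by recency and relevance.
# Keyword overlap stands in for embedding similarity; weights are made up.
import math
import time
from dataclasses import dataclass, field

@dataclass
class Memory:
    text: str
    created_at: float = field(default_factory=time.time)

def recency(mem: Memory, now: float, half_life_s: float = 600.0) -> float:
    return math.exp(-(now - mem.created_at) / half_life_s)

def relevance(mem: Memory, query: str) -> float:
    a, b = set(mem.text.lower().split()), set(query.lower().split())
    return len(a & b) / max(1, len(b))

def retrieve(memories, query: str, k: int = 3, w_rec=0.3, w_rel=0.7):
    now = time.time()
    scored = sorted(memories,
                    key=lambda m: w_rec * recency(m, now) + w_rel * relevance(m, query),
                    reverse=True)
    return scored[:k]

store = [Memory("the blacksmith insulted me at the market"),
         Memory("I need to repair my sword before the hunt"),
         Memory("it rained all morning")]
for m in retrieve(store, "talk to the blacksmith about my sword", k=2):
    print(m.text)
```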
Takeaway
– AI Engine Architecture: How to orchestrate LLMs, memory and adapters for real‑time agents
– Implementation Patterns: Prompt templating, JSON serialization, parallel calls & safety guardrails
– Performance Strategies: Quantized small models, LoRA swapping, prefix caching, batching
– Design Best‑Practices: Balancing autonomy vs. authorial control, chat‑driven nudging, emergent behavior design
– Integration Guidelines: Godot plugin workflows, data pipelines & dev tooling
All of these are lessons learned implementing GARP and our first, as-yet-unannounced, game.
Experience level needed: Intermediate
About the talk
As games continue to deliver diverse and dynamic experiences, there is constant pressure to iterate quickly, adjust NPC behavior, and create ever more dynamic content. Within this context, in this session we will:
– Explore Use Cases: The wide variety of ways that reinforcement learning techniques can be applied in games of all kinds
– Discuss: Telemetry and player data collection methodologies as input to some of these models
– Demo: See how Databricks can help you scale: experience data generation, efficient training on CPU and GPU, experimentation with managed MLflow, deployment of models and governance of assets, and usage and monitoring of training workflows
– Explain: How to enable this dynamism for your future titles
You will leave this session with new ideas on how this might apply: use cases that are feasible today, as well as aspirational, more challenging ones for the future.
About the talk
Generative AI has the potential to deliver radical new gameplay experiences – featuring emergent stories, unshackled characters, and unprecedented player agency.
But it won’t do it without a fight.
Left to its own devices, Generative AI will happily churn out incoherent, soulless slop. A million miles from game-ready quality. And no – contrary to popular belief – you cannot fine-tune and prompt your way out of the problem. So stop trusting the model, and start bullying it.
This talk draws on seven years of R&D in the AI gameplay space – culminating in the development of our game Dead Meat. It argues that developers are not going far enough in their efforts to wrangle AI – and advocates for the adoption of an additional “bully layer” that forces AI to deliver to a human authorial vision.
We’ll give a behind the scenes peek at what “bullying” AI means in practice – combining real-time “direction”, dynamic context control, and intelligent quest systems to generate meaningful AI-powered gameplay. This will include real-world examples from our own game Dead Meat, as well as a number of our other key demos.
Because, as we know from experience, GenAI is not magic: it is a deeply flawed technology. But, if you get in its face and bully the hell out of it to do what you want, when you want, it CAN create experiences that feel magic.
Takeaway
– Stop putting your faith in AI’s ability to semi-autonomously create good gameplay experiences. Left to its own devices, AI WILL create slop.
– Stop believing that prompting and fine-tuning can save you from AI slop. Despite the near-universal emphasis on these methods, they cannot deliver game-ready quality.
– Start BULLYING AI into doing what you want. Additional technologies are needed to force AI to create something good, because it sure as hell won’t do it of its own accord.
– Start adopting a “bully” layer that intelligently manages character direction and conversational quests in real time, based on game consciousness, player desires, and authorial intention.
– GenAI is not magic: it is a deeply flawed technology. But, if you get in its face and bully the hell out of it to do what you want, when you want, it CAN create experiences that feel magic.
Experience level needed: Beginner, Intermediate, Advanced
About the talk
“Good Enough” AI: Pragmatic Approaches for Crafting Compelling Player Experiences in AAA Games covers the practicalities of building effective AI within the demands of large-scale game development. Drawing on my time as a Senior AI Programmer on Cyberpunk 2077’s vehicle AI, I’ll explain how we developed systems from scratch using Behavior Trees and implemented editor tools to support content creators. I’ll illustrate how these sensible strategies enabled a variety of engaging vehicle interactions across numerous quests and open-world scenarios, including the Delamain questline. The aim is to show that truly effective game AI often comes down to prioritizing player experience and efficient development over sheer algorithmic complexity.
Takeaway
– Empowering Content Creators Through Tooling
– Connecting AI to Creative Vision
– Lessons from the AAA Trenches
– Practical Strategies for Impactful AI in games
Experience level needed: Beginner, Intermediate
About the talk
Ambient characters breathe life into open world games, enhancing immersion and forging emotional connections with players through subtle environmental storytelling. Yet, the systems driving these characters often receive less attention than core gameplay mechanics like combat or stealth. This talk tackles the challenge of creating believable and engaging ambient character simulations without reinventing the wheel.
We’ll delve into a practical toolkit of proven, scalable, and maintainable techniques that prioritize performance and minimize production risks. Whether you’re approaching ambient character simulation for the first time or you’re an experienced developer seeking to broaden your knowledge and adopt industry best practices, this session provides effective solutions and actionable insights you can implement immediately.
Takeaway
– Learn to effectively advocate for strategic investment in living world AI by understanding its impact on player engagement, environmental storytelling, and overall business value.
– Know how to implement an ambient character simulation in a way that is scalable, maintainable and minimizes production risks through a collection of tried and tested techniques and best practices.
– Understand what problem each of the presented techniques is trying to solve, and how to critically evaluate when it makes sense for your game project.
Experience level needed: Beginner
About the talk
In this talk, we’ll explore how mobile developers can harness on-device ML for a variety of use cases. We’ll discuss practical approaches to running language models and other inference workloads locally to power features like dynamic dialogue, NPC behaviors, and adaptive systems. We’ll also show how neural graphics can enhance rendering and visual fidelity while reducing performance costs.
About the talk
In this talk, I will share what it is like to work as a research scientist in a major AAA game company, supporting multiple studios with cutting-edge AI and tools. Game development is a fast-paced, high-stakes environment, and naturally risk-averse, so our role in applied R&D is to build practical, proof-of-concept ideas that solve real production problems. I will walk through the full lifecycle of a successful research project, from identifying pain points with game developers and designers, through literature review and prototype development, to working closely with teams to deliver usable tools. I will present our approach to research integration and share examples from projects we have worked on, including internal tools and experiments from franchises like Plants vs. Zombies and Battlefield. I will also reflect on the unique challenges of transitioning from academic research to the game industry — and why I believe research is fundamental to the future of game development.
Takeaway
– How to successfully start and finish a research project within a game company. The framework can be used by any research team within a game company, small or large.
– Lessons learned and mistakes not to be repeated in a research project. We will see practical examples of real research projects shipped into games.
– Understand why research is important for the next generation of games and how a research team can deliver value to game studios.
Experience level needed: Beginner
About the talk
As video games continue to evolve in complexity and scope, accessibility, approachability, and learnability are no longer niche considerations—they’re essential. In this talk, we will explore the differences between accessibility, approachability, and learnability, as well as the overlap between them. We will discuss the difficulties in developing accessible, approachable, and learnable games, why the industry lacks clear benchmarks, and the best resources and solutions for game developers. We’ll dive into some examples of modern games, the challenges they pose to players, and the current assistive techniques used in industry. Then, we’ll see how various AI techniques (including Utility AI, Goal-Oriented Action Planning, and Large Language Models) can dynamically assist players without compromising gameplay. This presentation will not only explain the concepts of accessibility, approachability, and learnability, but encourage creative thinking about how emerging technologies can make games more inclusive, intuitive, and fun for everyone.
Takeaway
– What accessibility, approachability, and learnability are, and how they intersect.
– How putting careful thought and consideration into accessibility, approachability, and learnability benefits the community, game, and studio.
– Why there is a lack of industry standards and research in academia.
– The limitations of current strategies/implementations of accessibility, approachability, and learnability, both in terms of assistance, and difficulties of development.
– How we can implement traditional AI in games to increase accessibility, approachability, and learnability.
Experience level needed: Beginner, Intermediate
- Ian Gulland Lecture Theatre
- The Cinema
About the talk
Take an insider look at the police chases in Cyberpunk 2077: Phantom Liberty and dive into the dynamic spawning of road blockades and MaxTac AV encounters, which add flavor to regular police chases and are designed to keep players on the edge of their seats. Discover how the CD PROJEKT RED team leveraged Night City’s vast, vertical environment with graph-based lane discovery and asynchronous spawn point generation, ensuring seamless and engaging pursuits. With maximizing the player’s fun in mind, this talk will also explore the unique technical challenges and solutions required to make these two features work.
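To give a flavour of the lane-discovery idea, here is a toy sketch (the lane graph, depth band, and clearance check are invented, not CD PROJEKT RED's code): a breadth-first search outward from the player's lane collects candidate spawn lanes at a target distance, then filters them with an overlap-style check.

```python
# Toy sketch of graph-based lane discovery for blockade spawn candidates.
# The lane graph, distances, and overlap check are invented for illustration.
from collections import deque

LANE_GRAPH = {            # lane id -> connected lane ids
    "A": ["B"], "B": ["C", "D"], "C": ["E"], "D": ["E"], "E": [],
}

def lanes_at_depth(start: str, min_depth: int, max_depth: int) -> list[str]:
    """BFS outward from the player's lane; keep lanes in the target band."""
    results, seen = [], {start}
    queue = deque([(start, 0)])
    while queue:
        lane, depth = queue.popleft()
        if min_depth <= depth <= max_depth:
            results.append(lane)
        if depth < max_depth:
            for nxt in LANE_GRAPH[lane]:
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, depth + 1))
    return results

def is_clear(lane: str) -> bool:
    # Placeholder for a physics overlap / line-of-sight check.
    return lane != "C"

candidates = [l for l in lanes_at_depth("A", min_depth=2, max_depth=3) if is_clear(l)]
print("blockade spawn candidates:", candidates)
```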
Takeaway
Gain insights into the dynamic spawning of police chase add-ons, including road blockades and MaxTac AV encounters, as seen in Cyberpunk 2077: Phantom Liberty. Understand how to architect and implement these features effectively, balancing performance with immersive player experience.
Learn how to creatively leverage tools like:
– Asynchronous batch processing to handle complex logic efficiently without blocking the game thread.
– Graph-based traffic lane discovery for intelligent placement and movement within urban road networks.
– Physics-based overlap checks to validate spawn locations and ensure believable, non-intrusive placements.
Discover what makes a scalable and successful system for reactive world events in an open-world AAA game.
Experience level needed: Intermediate
About this Talk
Goal-Oriented Action Planning turns 20 in 2025, having been introduced in F.E.A.R. in 2005. This 20th anniversary is an opportunity to visualize how planning works in a game when it is used as the decision procedure for NPC behaviors.
Takeaway
Answers to the questions below:
– Why should we limit anything about the planner?
– Does the planner work more for some actions/goals than others?
– When heuristics are used, is a cheap action/goal/plan more frequent than others? Can the action/goal/plan cost influence its frequency?
– How many NPCs call the planner within a given time-frame?
– Is there a loss of control? Does planning take control of the game (instead of GD and LD)? Could plans built by the planner be unexpected? Could surprises be planned?
– Does HTN planning make any difference?
– How can behaviour analytics be achieved?
About the talk
We will explore the relationship between design and iteration using the example of NPC perception in Kingdom Come: Deliverance and its sequel: how we made NPCs see in a realistic, open-world RPG, what we thought was going to work, and what we actually had to do to make it work. What it takes to tune a system to behave in both scripted and emergent situations, to provide enjoyable but nontrivial gameplay, and to enhance immersion, all under 3 ms per frame. We will be reminded of lessons that might seem basic but could still pose a challenge over 10+ years of development.
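As a simplified illustration of the kind of perception check involved (thresholds and the line-of-sight hook below are invented, not the game's tuned system), an NPC vision test typically combines distance, a field-of-view cone, and a line-of-sight query:

```python
# Simplified sketch of an NPC vision check: distance, field-of-view cone, and a
# line-of-sight hook. Thresholds are illustrative, not shipped values.
import math

def can_see(npc_pos, npc_facing_deg, target_pos,
            view_distance=25.0, fov_deg=110.0, has_line_of_sight=lambda a, b: True):
    dx, dy = target_pos[0] - npc_pos[0], target_pos[1] - npc_pos[1]
    distance = math.hypot(dx, dy)
    if distance > view_distance:
        return False
    angle_to_target = math.degrees(math.atan2(dy, dx))
    delta = abs((angle_to_target - npc_facing_deg + 180) % 360 - 180)
    if delta > fov_deg / 2:
        return False
    return has_line_of_sight(npc_pos, target_pos)

print(can_see((0, 0), 0.0, (10, 2)))    # in front and close -> True
print(can_see((0, 0), 0.0, (-10, 0)))   # behind the NPC -> False
```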
Takeaway
– Why designing for everything is doomed to fail
– Iteration takes a long time but can lead to success
– The importance of stable teams
– The importance of selling features to the player
Experience level needed: Intermediate
About the talk
I’m not a game designer; I am an AI Product Manager with a psychology background who recently transitioned into the gaming industry. But I’ve spent the past several years building the AI companions Replika and Blush, which millions of people talk to every day. I’ve seen firsthand how and why people form emotional bonds with AI. In this talk, I’ll explore what the phenomenon of AI companionship can teach us about designing richer, more immersive experiences in games.
We’ll look at how advances in generative AI might enable NPCs to become interactive companions. This talk will analyze the design/UX strategies and psychological principles that make AI “friends” engaging. We will explore why people want to talk to an AI and form these connections. We’ll also dive into the ethical and safety implications of AI companions, from safeguarding users (especially younger players) against harm to preventing emotional over-dependence, and discuss best practices for responsible implementation. Hopefully, by the end of this talk, the audience will gain a nuanced understanding of AI companionship that goes beyond simplistic narratives of benefit or harm.
Takeaway
– Learn from real-world successes and failures. The audience will see what actually works and what doesn’t when creating AI characters, based on lessons from AI companions. We’ll explore strategies for creating believable AI characters and learn why defining an AI’s personality and boundaries before you start is critical.
– Understand the psychology of player attachment. This talk will explore why players get attached to AI, helping to understand the human needs these companions meet, like combating loneliness or practicing social skills.
– Start thinking about building AI companion experiences ethically and safely. The audience will get an idea of the real emotional implications for players and of responsible design in the realm of AI companions.
Experience level needed: Beginner
About the talk
Reinforcement learning promises powerful game AI, but making it production-ready is another story. This talk shares Riot’s lessons from working at the intersection of research and live games—where safety, evaluation, and reliability matter as much as raw performance. We’ll highlight common failure modes, suggest ways to manage variance and telemetry, and offer ideas for how RL can be applied responsibly in production. Along the way, we’ll reflect on what’s realistic today versus what remains hype, and how to get real value out of RL without overpromising.
About the session
Investment in AI companies is sky-high, while investment in games companies has fallen from its recent Covid peak. So when AI and games intersect, where does the smart money go? In this panel, leading investors share insights into how they evaluate opportunities in this space. From gameplay powered by machine learning to AI-assisted tools for game development, learn about what’s hot, what’s risky, and what’s next.
Takeaway
– Knowledge on how different types of investors evaluate opportunities in games and AI
– Insights into the hottest areas for investment in the space
– Tips for founders and developers looking to attract investment
– Understanding of what investors perceive to be the greatest risks
– Perspectives on the future of AI with games
Experience level needed: Beginner
About the talk
Debugging complex algorithms can be difficult, and debugging complex AI behaviours that execute over several frames or asynchronously is a nightmare. But doing this across all major gaming platforms? That’s a unique horror for AAA game developers. At Havok we have fought this battle for years, and we’re here to introduce you to our weapon of choice: Cross-Platform Determinism. Determinism allows us to relive crash scenarios as often as we want, across platforms. A bug found on a console can be replayed on PC in debug, saving time, reducing frustration, and simplifying debugging. Saying development is now a dream might be pushing it a bit too far, but only a bit!
In this talk you will learn how determinism, and in particular cross-platform determinism, can help game developers implement better tooling for recording player sessions and reproducing issues. From running fast math operations on different CPU architectures, to managing deterministic multi-threading models, and dealing with compiler bugs, learn practical tips to implement cross-platform determinism in your game.
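To illustrate the core record-and-replay idea (a deliberately tiny sketch, not Havok's system; real cross-platform determinism also has to pin floating-point behaviour, thread scheduling, and compiler differences), a deterministic update loop driven by a recorded seed and input stream reproduces the same state on every run:

```python
# Minimal sketch of record-and-replay on top of a deterministic update loop.
# The "game" below is invented; it only demonstrates seed + input determinism.
import random

def simulate(seed: int, inputs: list[str]) -> int:
    """A deterministic toy game: same seed + same inputs => same final state."""
    rng = random.Random(seed)   # seeded PRNG, never the shared global one
    state = 0
    for command in inputs:
        state += {"left": -1, "right": 1, "jump": 5}.get(command, 0)
        state += rng.randint(0, 3)
    return state

recorded = {"seed": 1234, "inputs": ["right", "right", "jump", "left"]}
first_run = simulate(recorded["seed"], recorded["inputs"])
replayed = simulate(recorded["seed"], recorded["inputs"])
assert first_run == replayed, "divergence would indicate a determinism bug"
print("replay matched:", first_run)
```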
About this Talk
Dungeon Keeper was once called “Game of the Millennium” – whether or not that’s true, Bullfrog Productions’ RTS remains an early use of ‘real AI’ in games. Evil minions ran around with high autonomy in a user-generated dungeon, building emergent gameplay with dark humour. This is the story of how that was done. The retro-AI architecture is presented to inform character behaviours today, with an emphasis on how to build character navigation, motion and spatial perception against constantly changing design directions.
Takeaway
– How to evolve an AI system before you know what the game is…
– …and to tune the AI system against (insane) design and hardware constraints.
– Why AI architecture needs compact data, clear representations, layered functionality, and chains-of-tools.
VALUE: character motion is still a top priority today. What was real-time a couple of decades ago is feasible today: per-character; inside dynamic tool-chains; at search-time; at inference time.
About the talk
This talk presents a practical, explainable framework that bridges the gap between raw AI pathfinding metrics and human-centered game design. Using Sokoban as a case study, we show how unsupervised clustering and visual analytics can decode difficulty, uncover hidden structural archetypes, and guide adaptive solver selection. Our results demonstrate how obstacle density, deadlock potential, and other features drive both algorithmic and human difficulty, enabling designers to make faster, more informed decisions about procedural content.
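As a toy version of the clustering step (the features and data below are synthetic, not the study's dataset), hand-crafted level features can be standardized and clustered to surface archetypes:

```python
# Toy sketch: cluster Sokoban-like levels by hand-crafted features to surface
# archetypes. Features and data are synthetic, not the talk's dataset.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
# Hypothetical per-level features:
# [obstacle_density, box_count, deadlock_cells, corridor_ratio]
levels = np.vstack([
    rng.normal([0.2, 3, 1, 0.3], 0.05, size=(40, 4)),   # sparse, easy-looking
    rng.normal([0.5, 6, 5, 0.6], 0.05, size=(40, 4)),   # dense, deadlock-heavy
])

features = StandardScaler().fit_transform(levels)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)

for cluster in range(2):
    mean = levels[labels == cluster].mean(axis=0)
    print(f"cluster {cluster}: density={mean[0]:.2f}, deadlock_cells={mean[2]:.1f}")
```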
Takeaway
– How to translate opaque AI solver metrics into designer-relevant difficulty insights.
– Evidence-based design heuristics: why obstacle density and deadlock potential are critical levers in puzzle complexity.
– How clustering reveals hidden level archetypes and balances procedural content.
– Practical workflow for integrating explainable AI tools into game design pipelines.
– The value of adaptive solver selection for efficiency and player experience.
Experience level needed: Intermediate
About the talk
Game developers, especially those working on single-player or mobile titles, are often reluctant to integrate online APIs due to latency, cost, instability or platform constraints. Yet, the promise of generative AI remains strong, provided it can be fast, private, cheap, and seamlessly embedded into game experiences.
This talk explores a production-ready approach to deploying small, fine-tuned, dynamic language models inside games — without needing any internet connectivity. While many LLM-based use cases in games focus on low-hanging fruit like quest dialogue, we want users to take control of any text. We strive to bridge the gap between loose ideas or raw knowledge bases and structured data pipelines — the foundation for generating what we call Diamonds. Ultimately, you’ll understand how to generate reliable results with dramatically reduced memory usage and power consumption, even on the smallest of devices.
Takeaway
– Examine the operational and UX challenges of API-bound systems in videogames and how offline language models unlock new possibilities
– How strategic templating that governs input/output behaviours can be paired with strong alignment to your domain for reliable, adaptable outputs in a compact package
– Performance wins of embedded language models: tiny, fast, green and offline, with side-by-side results comparing models across loading time, RAM use, etc.
– Lessons from building a toolchain for developers: what has and hasn’t worked
– What comes next: a peek into our ongoing R&D and Dynamic AI Platform features.
Experience level needed: Intermediate, Advanced
About the talk
Reinforcement learning (RL) is already being used successfully for playtesting games without human intervention, allowing game studios to reduce the significant time and cost spent on this important task in game development. We present a case study on testing the goalie AI in the hockey game NHL26, which uses a traditional rule-based algorithm, by training a forward player with RL to find exploits in the goalie’s behavior. We show that out-of-the-box RL algorithms provide only limited value for this kind of testing scenario, because they converge to a single exploit strategy. With some simple steps we can enhance the algorithm to instead provide a set of high-quality and diverse solutions that offer additional value to game designers, since they allow finding multiple fix-worthy exploits in a single experiment iteration. In the first deployment of our approach, within a single experiment we were able to find six exploit strategies that were qualitatively similar to those that game testers found in hour-long manual testing sessions.
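One simple way to push an RL agent toward a set of diverse exploits, sketched hypothetically below (the behaviour descriptor, weights, and thresholds are invented, not the NHL26 setup), is to shape the reward with a novelty bonus measured against an archive of already-found strategies:

```python
# Sketch of one way to encourage diverse exploits: penalise similarity to an
# archive of already-found strategies. Descriptors and weights are invented.
import numpy as np

archive: list[np.ndarray] = []   # behaviour descriptors of accepted exploits

def shaped_reward(task_reward: float, behaviour: np.ndarray,
                  novelty_weight: float = 0.5) -> float:
    """Reward = task reward + bonus for being far from previous solutions."""
    if not archive:
        return task_reward
    nearest = min(np.linalg.norm(behaviour - b) for b in archive)
    return task_reward + novelty_weight * nearest

def maybe_archive(behaviour: np.ndarray, min_distance: float = 1.0) -> None:
    if not archive or min(np.linalg.norm(behaviour - b) for b in archive) > min_distance:
        archive.append(behaviour)

# e.g. behaviour = (shot angle, shot distance) of a goal-scoring strategy
maybe_archive(np.array([10.0, 3.0]))
print(shaped_reward(1.0, np.array([10.5, 3.2])))   # near an archived exploit
print(shaped_reward(1.0, np.array([45.0, 9.0])))   # novel -> larger shaped reward
```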
Takeaway
– Reinforcement Learning is good at learning one solution to a problem by overfitting to the environment. With some simple tricks we can create a diverse set of solutions which is more valuable for game testing, since game designers can for example find multiple game exploits in a single development iteration.
– My presentation shows that simple solutions are often the better way to go for AI development. Specifically for the case of NHL26, I show that an easy trick allows us to learn a diverse set of high-quality solutions without using a cutting-edge Quality Diversity algorithm; such algorithms are known to be unstable and hard to tune and debug.
– RL for playtesting can serve as a non-invasive way to start a research relationship with a game studio. RL for game AI, on the other hand, requires more trust, because there is still a lack of authorial control for such agents, removing some of the control that game AI designers are used to from more traditional algorithms.
Experience level needed: Beginner, Intermediate
About the talk
Behavior Trees are a powerful tool in Unreal Engine for building game AI, but they often fall short when dealing with complex decision-making, multiplayer support, and flexible gameplay systems. We’ll explore practical ways to overcome these limitations by integrating Utility AI into Behavior Trees, enabling more adaptive and responsive creature behaviors. We’ll also dive into how the Gameplay Ability System can be used to decouple decision-making from execution, making AI logic more modular, designer-friendly, and better suited for multiplayer environments.
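As a toy sketch of the utility-selection idea the talk slots into Behavior Trees (written in Python for brevity rather than Unreal C++ or Blueprints; the scoring curves and context values are invented):

```python
# Toy sketch of utility-based action selection of the kind a BT selector or
# service might use to pick a subtree. Curves and values are illustrative.
from dataclasses import dataclass
from typing import Callable

@dataclass
class UtilityAction:
    name: str
    score: Callable[[dict], float]

ACTIONS = [
    UtilityAction("flee",   lambda c: (1.0 - c["health"]) * 0.9),
    UtilityAction("attack", lambda c: c["health"] * (1.0 if c["enemy_visible"] else 0.0)),
    UtilityAction("patrol", lambda c: 0.2),
]

def choose_action(context: dict) -> str:
    """In UE this choice would typically live in a BT service or decorator."""
    return max(ACTIONS, key=lambda a: a.score(context)).name

print(choose_action({"health": 0.9, "enemy_visible": True}))    # attack
print(choose_action({"health": 0.15, "enemy_visible": True}))   # flee
```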
Takeaway
– Understand key limitations of Behavior Trees and how to work around them.
– See how studios are doing AI in real-life games.
– Discover how to make AI more adaptive and responsive.
– How the Gameplay Ability System can simplify and scale AI behavior.
– Gain strategies for building network-friendly AI in multiplayer games.
Experience level needed: Intermediate
About the talk
This talk presents a novel way to integrate Gemini within game development workflows and runtime experiences on Unreal Engine.
The first facet focuses on enhancing productivity within the Unreal Editor. By leveraging multimodal capabilities, we propose a system that assists game developers with various tasks, from generating code snippets and reasoning about visual programming and UI layouts, to creating game assets and providing contextual design suggestions and debugging assistance. This integration aims to streamline the development process and significantly lower the barrier to entry for complex game creation.
The second aspect involves the automatic exposure of functions from Unreal Engine to the Gemini API through formalized function declarations. This allows Gemini agents to directly call and execute game functions at runtime, dynamically providing arguments as needed. This capability significantly reduces the cost of creating agents for agentic architectures powered by Gemini and unlocks unprecedented potential for game creation workflows as well as the game runtime.
We will present real-world applications of these integrations, specifically detailing their use by the team behind Google Maps’ Immersive View.
Takeaway
– Attendees will take away how multimodal LLMs can help them with a variety of content creation tasks. Video game creation is a multimodal process, and multimodal LLMs, like Gemini, are now able to understand that context. Multimodality has unlocked their use in one of the most complex content creation fields.
– Our integration was originally done within 48 hours and required no modification to the engine. Attendees will learn how to create an “agentic architecture” powered by LLMs in any engine, with an example written for Unreal Engine.
– Attendees will get insights on the usage of LLMs within Google, specifically within the Immersive View team.
Experience level needed: Beginner, Intermediate, Advanced
About the talk
Despite amazing progress in generative AI, even the largest and smartest large language models have serious limitations in their reasoning abilities, as shown by results on game-playing benchmarks.
On the other hand, simulation-based AI (SBAI) agents make intelligent decisions based on the statistics of simulations using a forward model of a problem domain, providing a complementary type of intelligence. SBAI algorithms have very attractive properties, including instant adaptation to new problems, tunable intelligence and some degree of explainability.
In this talk I’ll present recent results on combining SBAI with LLMs to develop capable game-playing agents, and argue that—right now—it’s an especially good time to be an AI engineer.
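To make the simulation-based side concrete, here is a minimal sketch of rollout-based decision making on a toy forward model (the "race to 10" game, rollout count, and evaluation are invented for illustration):

```python
# Minimal sketch of simulation-based decision making: pick the move whose random
# rollouts finish the (invented) "race to 10" game in the fewest turns.
import random

GOAL = 10

def step(position: int, move: int) -> int:
    return min(GOAL, position + move)

def rollout(position: int, rng: random.Random) -> int:
    """Play random moves to the end and return the number of turns taken."""
    turns = 0
    while position < GOAL:
        position = step(position, rng.choice([1, 2, 3]))
        turns += 1
    return turns

def best_move(position: int, n_rollouts: int = 200, seed: int = 0) -> int:
    rng = random.Random(seed)
    def expected_turns(move: int) -> float:
        return sum(rollout(step(position, move), rng) for _ in range(n_rollouts)) / n_rollouts
    return min([1, 2, 3], key=expected_turns)

print(best_move(4))   # the forward model favours the largest step here
```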
About the talk
Working with LLM agents sometimes feels like herding cats—from small prompt tweaks causing outsized effects, to unpredictable player behaviors surfacing troublesome outcomes. In this talk, Batu Aytemiz will share practical insights drawn from developing a Roblox game with over 20,000 daily active users. He will discuss tools and techniques designed to make AI agents more reliable, safer, and easier to iterate on, ranging from lightweight, custom-built internal tools anyone can implement, to simple yet powerful prompting practices you can start using right away.
Takeaway
– Why we should test LLMs incredibly rigorously, and the potential ethical concerns of not doing so.
– An iterative workflow that helps you evaluate and improve your LLM pipeline in a systematic manner, grounded in user data.
– Techniques that are focused on increasing iteration speed when making prompt changes.
– Techniques that are focused on verifying that the changed prompts aren’t causing unexpected behaviors.
– A simple tool to analyze bulk user data while respecting privacy.
Experience level needed: Intermediate