
Diversity in Autonomous Playtesting: a Case Study on Reinforcement Learning for NHL26

About the talk
Reinforcement learning (RL) is already used successfully to playtest games without human intervention, allowing game studios to reduce the significant time and cost spent on this important task during game development. We present a case study on testing the goalie AI in the hockey game NHL26, which uses a traditional rule-based algorithm, by training a forward player with RL to find exploits in the goalie’s behavior. We show that out-of-the-box RL algorithms provide only limited value in this kind of testing scenario, because they converge to a single exploit strategy. With a few simple steps we can enhance the algorithm to instead produce a set of high-quality, diverse solutions, which gives game designers additional value, since it lets them find multiple fix-worthy exploits in a single experiment iteration. In the first deployment of our approach, a single experiment found six exploit strategies qualitatively similar to those that game testers found in hour-long manual testing sessions.
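
The abstract does not spell out the enhancement, but one common, simple way to push an RL learner toward several distinct exploits is to add a novelty bonus measured against an archive of behaviours already found. The sketch below is a minimal, hypothetical Python illustration of that idea; every name and parameter is made up, and it is not the NHL26 setup.

```python
# Hypothetical sketch: add a bonus when an episode's behaviour descriptor
# (e.g. shot angle/position features) is far from exploits already found,
# nudging the learner toward new strategies instead of one converged exploit.
import numpy as np

class DiversityBonus:
    def __init__(self, weight=1.0, radius=0.5):
        self.archive = []          # behaviour descriptors of found exploits
        self.weight = weight       # strength of the novelty term
        self.radius = radius       # novelty is capped at this distance

    def shaped_return(self, episode_return, descriptor):
        """Add a novelty bonus based on distance to the archive."""
        descriptor = np.asarray(descriptor, dtype=float)
        if not self.archive:
            novelty = self.radius
        else:
            dists = [np.linalg.norm(descriptor - a) for a in self.archive]
            novelty = min(min(dists), self.radius)
        return episode_return + self.weight * novelty

    def register_exploit(self, descriptor):
        """Call when an episode beat the goalie, so later training avoids it."""
        self.archive.append(np.asarray(descriptor, dtype=float))

# Usage: shape the return before the RL update, and archive the descriptor
# of any episode that successfully exploited the goalie.
bonus = DiversityBonus(weight=2.0, radius=1.0)
shaped = bonus.shaped_return(episode_return=1.0, descriptor=[0.3, -0.7])
bonus.register_exploit([0.3, -0.7])
```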

Takeaway
– Reinforcement learning is good at finding one solution to a problem by overfitting to the environment. With some simple tricks we can instead produce a diverse set of solutions, which is more valuable for game testing, since game designers can, for example, find multiple game exploits in a single development iteration.
– My presentation shows that simple solutions are often the better way to go in AI development. Specifically for NHL26, I show that an easy trick lets us learn a diverse set of high-quality solutions without resorting to cutting-edge Quality Diversity algorithms, which are known to be unstable and hard to tune and debug.
– RL for playtesting can serve as a non-invasive way to start a research relationship with a game studio. RL for game AI, on the other hand, requires more trust, because such agents still lack authorial control, removing some of the control that game AI designers are used to from more traditional algorithms.

Experience level needed: Beginner, Intermediate

When Research Meets Release Dates: Production-Grade RL for Games

About the talk
Reinforcement learning promises powerful game AI, but making it production-ready is another story. This talk shares Riot’s lessons from working at the intersection of research and live games—where safety, evaluation, and reliability matter as much as raw performance. We’ll highlight common failure modes, suggest ways to manage variance and telemetry, and offer ideas for how RL can be applied responsibly in production. Along the way, we’ll reflect on what’s realistic today versus what remains hype, and how to get real value out of RL without overpromising.
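
The abstract mentions managing variance; as a small, hedged illustration (not Riot’s tooling, and with placeholder names throughout), the sketch below evaluates a policy over fixed-seed episodes and reports a confidence interval, so a genuine regression can be told apart from ordinary RL noise.

```python
# Hedged illustration: estimate policy performance with seeded evaluation
# episodes and report a ~95% confidence interval, so a "regression" can be
# distinguished from ordinary run-to-run variance.
import random, statistics

def evaluate(policy, run_episode, n_episodes=100, base_seed=1234):
    returns = []
    for i in range(n_episodes):
        rng = random.Random(base_seed + i)      # fixed seeds -> comparable runs
        returns.append(run_episode(policy, rng))
    mean = statistics.fmean(returns)
    sem = statistics.stdev(returns) / (len(returns) ** 0.5)
    return mean, (mean - 1.96 * sem, mean + 1.96 * sem)

# Example with a dummy policy and a stubbed-out environment:
mean, ci = evaluate(policy=None,
                    run_episode=lambda p, rng: rng.gauss(10.0, 2.0))
print(f"mean return {mean:.2f}, 95% CI {ci}")
```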

How to power-up Unreal Engine Behavior Trees: Subnautica 2

About the talk
Behavior Trees are a powerful tool in Unreal Engine for building game AI, but they often fall short when dealing with complex decision-making, multiplayer support, and flexible gameplay systems. We’ll explore practical ways to overcome these limitations by integrating Utility AI into Behavior Trees, enabling more adaptive and responsive creature behaviors. We’ll also dive into how the Gameplay Ability System can be used to decouple decision-making from execution, making AI logic more modular, designer-friendly, and better suited for multiplayer environments.
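
The talk targets Unreal Engine Behavior Trees in C++; as an engine-agnostic, hedged sketch of the core idea (scoring candidate behaviours with utility "considerations" and letting a selector run the winner), here is a minimal Python illustration. All names are hypothetical, not Subnautica 2 code.

```python
# Minimal, engine-agnostic sketch of Utility AI feeding a Behavior Tree:
# each candidate behaviour scores itself from considerations in [0, 1],
# and a selector picks the highest-scoring branch for the tree to run.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class UtilityBehavior:
    name: str
    considerations: List[Callable[[Dict], float]]  # each returns 0..1

    def score(self, ctx: Dict) -> float:
        s = 1.0
        for c in self.considerations:
            s *= max(0.0, min(1.0, c(ctx)))        # multiplicative combination
        return s

def select_behavior(behaviors: List[UtilityBehavior], ctx: Dict) -> str:
    return max(behaviors, key=lambda b: b.score(ctx)).name

creature_behaviors = [
    UtilityBehavior("Flee",   [lambda c: c["threat"],
                               lambda c: 1.0 - c["health"]]),
    UtilityBehavior("Attack", [lambda c: c["threat"],
                               lambda c: c["health"]]),
    UtilityBehavior("Wander", [lambda c: 1.0 - c["threat"]]),
]

print(select_behavior(creature_behaviors, {"threat": 0.8, "health": 0.2}))  # Flee
```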

Takeaway
– Understand key limitations of Behavior Trees and how to work around them.
– See how studios are doing AI in real-life games.
– Discover how to make AI more adaptive and responsive.
– Learn how the Gameplay Ability System can simplify and scale AI behavior.
– Gain strategies for building network-friendly AI in multiplayer games.

Experience level needed: Intermediate

Panel: Investment Trends at the Intersection of AI and Games

About the session
Investment in AI companies is sky-high, while investment in games companies has fallen from its recent Covid peak. So when AI and games intersect, where does the smart money go? In this panel, leading investors share insights into how they evaluate opportunities in this space. From gameplay powered by machine learning to AI-assisted tools for game development, learn what’s hot, what’s risky, and what’s next.

Takeaway
– Knowledge of how different types of investors evaluate opportunities in games and AI
– Insights into the hottest areas for investment in the space
– Tips for founders and developers looking to attract investment
– Understanding of what investors perceive to be the greatest risks
– Perspectives on the future of AI with games

Experience level needed: Beginner

Gemini in Unreal Engine: Enhancing game development with multimodal AI

About the talk
This talk presents a novel way to integrate Gemini within game development workflows and runtime experiences on Unreal Engine.
The first facet focuses on enhancing productivity within the Unreal Editor. By leveraging multimodal capabilities, we propose a system that assists game developers with a variety of tasks, from generating code snippets and reasoning about visual programming and UI layouts, to creating game assets, providing contextual design suggestions, and assisting with debugging. This integration aims to streamline the development process and significantly lower the barrier to entry for complex game creation.
The second aspect involves automatically exposing Unreal Engine functions to the Gemini API through formalized function declarations. This allows Gemini agents to directly call and execute game functions at runtime, dynamically providing arguments as needed. This capability significantly reduces the cost of creating agents for agentic architectures powered by Gemini and unlocks unprecedented potential for both game creation workflows and the game runtime.
We will present real-world applications of these integrations, specifically detailing their use by the team behind Google Maps’ Immersive View.
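
As a hedged sketch of the function-declaration mechanism described above (not the team’s actual integration), the snippet below shows the general shape of a formalized declaration plus a dispatcher that executes whatever function call a model returns. The engine function is a stub standing in for one exposed from Unreal Engine, and the schema field names follow common LLM function-calling conventions rather than any specific SDK.

```python
# Hedged sketch: declare an engine function in a JSON-schema style that
# LLM function-calling APIs expect, and dispatch the model's returned call
# to the real implementation. spawn_actor is a stand-in for a UE function.
from typing import Any, Callable, Dict

def spawn_actor(class_name: str, x: float, y: float, z: float) -> str:
    return f"Spawned {class_name} at ({x}, {y}, {z})"      # stub engine call

FUNCTION_DECLARATIONS = [{
    "name": "spawn_actor",
    "description": "Spawn an actor of the given class at a world location.",
    "parameters": {
        "type": "object",
        "properties": {
            "class_name": {"type": "string"},
            "x": {"type": "number"},
            "y": {"type": "number"},
            "z": {"type": "number"},
        },
        "required": ["class_name", "x", "y", "z"],
    },
}]

REGISTRY: Dict[str, Callable[..., Any]] = {"spawn_actor": spawn_actor}

def dispatch(function_call: Dict[str, Any]) -> Any:
    """Execute a call of the form {'name': ..., 'args': {...}} from the model."""
    return REGISTRY[function_call["name"]](**function_call["args"])

# A model given FUNCTION_DECLARATIONS as tools might reply with:
print(dispatch({"name": "spawn_actor",
                "args": {"class_name": "BP_Tree", "x": 0, "y": 0, "z": 100}}))
```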

Takeaway
– Attendees will take away how multimodal LLMs can help with a variety of content creation tasks. Video game creation is a multimodal process, and multimodal LLMs like Gemini are now able to understand that context, unlocking their use in one of the most complex content creation fields.
– Our integration was originally built within 48 hours and required no modification to the engine. Attendees will learn how to create an “agentic architecture” powered by LLMs in any engine, with an example written for Unreal Engine.
– Attendees will get insights into the usage of LLMs within Google, specifically within the Immersive View team.

Experience level needed: Beginner, Intermediate, Advanced

Simulation-Based AI with LLMs for Game Agents

About the talk
Despite amazing progress in generative AI, even the largest and smartest large language models have serious limitations in their reasoning abilities, as shown by results on game-playing benchmarks.
On the other hand, simulation-based AI (SBAI) agents make intelligent decisions based on the statistics of simulations using a forward model of a problem domain, providing a complementary type of intelligence. SBAI algorithms have very attractive properties, including instant adaptation to new problems, tunable intelligence and some degree of explainability.
In this talk I’ll present recent results on combining SBAI with LLMs to develop capable game-playing agents, and argue that—right now—it’s an especially good time to be an AI engineer.
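
As a hedged, minimal illustration of simulation-based AI (not the agents from the talk), the sketch below picks the action whose random rollouts through a toy forward model score best on average; in the talk’s setting an LLM might propose or prune the candidate actions. The toy model and all names are made up for illustration.

```python
# Minimal simulation-based AI: choose the action whose random rollouts
# through a forward model score best on average (flat Monte Carlo search).
import random

def rollout_value(forward_model, state, action, depth=20, rng=random):
    state = forward_model.step(state, action)
    total = forward_model.score(state)
    for _ in range(depth):
        if forward_model.is_terminal(state):
            break
        state = forward_model.step(state, rng.choice(forward_model.actions(state)))
        total += forward_model.score(state)
    return total

def choose_action(forward_model, state, n_rollouts=50):
    best_action, best_value = None, float("-inf")
    for action in forward_model.actions(state):
        value = sum(rollout_value(forward_model, state, action)
                    for _ in range(n_rollouts)) / n_rollouts
        if value > best_value:
            best_action, best_value = action, value
    return best_action

class ToyModel:
    """Toy domain: states are integers, being close to 10 scores higher."""
    def actions(self, state): return [-1, +1]
    def step(self, state, action): return state + action
    def score(self, state): return -abs(10 - state)
    def is_terminal(self, state): return state == 10

print(choose_action(ToyModel(), state=0))   # almost always +1 (toward the goal)
```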

Debugging Across Time and Platforms: The Power of Determinism

About the talk
Debugging complex algorithms can be difficult, and debugging complex AI behaviours that execute over several frames or asynchronously is a nightmare. But doing this across all major gaming platforms? That’s a unique horror for AAA game developers. At Havok we have fought this battle for years, and we’re here to introduce you to our weapon of choice: Cross-Platform Determinism. Determinism allows us to relive crash scenarios as often as we want, across platforms. A bug found on a console can be replayed on PC in a debug build, saving time, reducing frustration, and simplifying debugging. Saying development is now a dream might be pushing it a bit too far, but only a bit!

In this talk you will learn how determinism, and in particular cross-platform determinism, can help game developers implement better tooling for recording player sessions and reproducing issues. From running fast math operations on different CPU architectures, to managing deterministic multi-threading models, and dealing with compiler bugs, learn practical tips to implement cross-platform determinism in your game.
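
The talk covers the genuinely hard parts (floating-point differences across CPU architectures, deterministic threading, compiler bugs). As a much simpler, hedged illustration of the record-and-replay payoff, here is a toy Python simulation whose state depends only on a seed, recorded inputs, and integer math, so replaying the same recording reproduces every tick exactly. All names and constants are illustrative, not Havok’s implementation.

```python
# Toy deterministic simulation: integer (fixed-point) state, a seeded RNG,
# and recorded per-tick inputs are the only sources of change, so a replay
# of the same recording matches the original run tick for tick.
import random

def simulate(inputs, seed):
    rng = random.Random(seed)          # same seed -> same "random" sequence
    position, velocity = 0, 0          # integer state, no floating point
    states = []
    for move in inputs:                # one input per fixed 16 ms tick
        velocity += move * 500 + rng.randrange(-5, 6)   # deterministic noise
        position += velocity * 16 // 1000               # integer math only
        states.append((position, velocity))
    return states

recording = {"seed": 42, "inputs": [1, 1, 0, -1, 1, 0, 0, 1]}  # captured once
assert simulate(recording["inputs"], recording["seed"]) == \
       simulate(recording["inputs"], recording["seed"])        # replay matches exactly
```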

Retro-AI: Dungeon Keeper

About this Talk
Dungeon Keeper was once called “Game of the Millennium” – whether or not that’s true, Bullfrog Productions’ RTS remains an early use of ‘real AI’ in games. Evil minions ran around with high autonomy in a user-generated dungeon to build emergent gameplay with dark humour. This is the story of how that was done. This Retro-AI look at the architecture seeks to inform character behaviours today, with an emphasis on how to build character navigation, motion and spatial perception against constantly changing design directions.

Takeaway
– How to evolve an AI system before you know what the game is…
– …and to tune the AI system against (insane) design and hardware constraints.
– Why AI architecture needs compact data, clear representations, layered functionality, and chains-of-tools.
– Value: character motion is still a top priority today. What was real-time a couple of decades ago is feasible today: per-character, inside dynamic tool-chains, at search time, and at inference time.