
Bringing Research to Life in AAA Game Production

About the talk
In this talk, I will share what it is like to work as a research scientist at a major AAA game company, supporting multiple studios with cutting-edge AI and tools. Game development is a fast-paced, high-stakes, and naturally risk-averse environment, so our role in applied R&D is to build practical proof-of-concept solutions to real production problems. I will walk through the full lifecycle of a successful research project, from identifying pain points with game developers and designers, through literature review and prototype development, to working closely with teams to deliver usable tools. I will present our approach to research integration and share examples from projects we have worked on, including internal tools and experiments from franchises like Plants vs. Zombies and Battlefield. I will also reflect on the unique challenges of transitioning from academic research to the game industry, and why I believe research is fundamental to the future of game development.

Takeaway
– How to successfully start and finish a research project within a game company. The framework can be used by any research team within a game company, small or large.
– Lessons learned and mistakes not to be repeated in a research project. We will see practical examples of real research projects shipped into games.
– Understand why research is important for the next generation of games and how a research team can deliver value to game studios.

Variety in Autonomous Playtesting: A Case Study on Reinforcement Learning for NHL26

About the talk
Reinforcement learning (RL) is already used successfully for playtesting games without human intervention, allowing game studios to reduce the significant time and cost spent on this important task in game development. We present a case study on testing the goalie AI in the hockey game NHL26, which uses a traditional rule-based algorithm, by training a forward player with RL to find exploits in the goalie’s behavior. We show that out-of-the-box RL algorithms provide only limited value in this kind of testing scenario, because they converge to a single exploit strategy. With some simple steps we can enhance the algorithm to instead produce a set of high-quality, diverse solutions, which provides additional value to game designers, since it allows finding multiple fix-worthy exploits in a single experiment iteration. In the first deployment of our approach, a single experiment found six exploit strategies that were qualitatively similar to those game testers had found in hours-long manual testing sessions.
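As a rough illustration of the kind of "simple step" the abstract alludes to (the talk does not specify its actual mechanism), one common way to push RL away from converging on a single exploit is to penalize behaviors that are close to already-discovered ones. A minimal Python sketch, where the distance-based penalty, the behavior embedding, and all names are illustrative assumptions:

```python
import numpy as np

def diversity_penalty(trajectory_embedding, archive, radius=1.0, weight=0.5):
    """Penalize policies whose behavior is close to already-found exploits.

    trajectory_embedding: behavior descriptor of the current episode
    archive: descriptors of previously discovered exploit strategies
    """
    if not archive:
        return 0.0
    dists = [np.linalg.norm(trajectory_embedding - np.asarray(b)) for b in archive]
    nearest = min(dists)
    # Full penalty when identical to an archived behavior, fading to zero at `radius`
    return -weight * max(0.0, 1.0 - nearest / radius)

def shaped_reward(env_reward, trajectory_embedding, archive):
    # Add the diversity term to the game's own reward (e.g. "goal scored")
    return env_reward + diversity_penalty(trajectory_embedding, archive)
```

Retraining with each found exploit added to the archive would then steer the next run toward a different strategy, without a full Quality Diversity algorithm.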

Takeaway
– Reinforcement learning is good at learning one solution to a problem by overfitting to the environment. With some simple tricks we can create a diverse set of solutions, which is more valuable for game testing, since game designers can, for example, find multiple game exploits in a single development iteration.
– My presentation shows that simple solutions are often the better way to go for AI development. Specifically for NHL26, I show that an easy trick lets us learn a diverse set of high-quality solutions without resorting to cutting-edge Quality Diversity algorithms, which are known to be unstable and hard to tune and debug.
– RL for playtesting can serve as a non-invasive way to start a research relationship with a game studio. RL for game AI, on the other hand, requires more trust, because such agents still lack authorial control, removing some of the control that game AI designers are used to from more traditional algorithms.

Experience level needed: Beginner, Intermediate

How to power-up Unreal Engine Behavior Trees: Subnautica 2

About the talk
Behavior Trees are a powerful tool for building game AI in Unreal Engine, but they often fall short when dealing with complex decision-making, multiplayer support, and flexible gameplay systems. We’ll explore practical ways to overcome these limitations by integrating Utility AI into Behavior Trees, enabling more adaptive and responsive creature behaviors. We’ll also dive into how the Gameplay Ability System can be used to decouple decision-making from execution, making AI logic more modular, designer-friendly, and better suited to multiplayer environments.
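As a rough sketch of the utility-scoring idea (the real Subnautica 2 implementation lives in Unreal Engine C++ and is not shown in the abstract), a Python stand-in with hypothetical actions and considerations might look like:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class UtilityAction:
    name: str
    # Each consideration maps world state to a score in [0, 1]
    considerations: List[Callable[[dict], float]]

    def score(self, state: dict) -> float:
        # Multiply considerations so any near-zero factor vetoes the action
        result = 1.0
        for c in self.considerations:
            result *= max(0.0, min(1.0, c(state)))
        return result

def select_action(actions: List[UtilityAction], state: dict) -> UtilityAction:
    # In-engine, this selection would run inside a Behavior Tree task or service
    return max(actions, key=lambda a: a.score(state))

# Hypothetical creature example: flee when hurt, hunt when healthy and prey is visible
flee = UtilityAction("flee", [lambda s: 1.0 - s["health"]])
hunt = UtilityAction("hunt", [lambda s: s["health"], lambda s: s["prey_visible"]])
```

Wrapping such a selector in a single Behavior Tree node is one way to get utility-style adaptivity without abandoning the tree structure designers already know.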

Takeaway
– Understand key limitations of Behavior Trees and how to work around them.
– See how studios build AI in real-life games.
– Discover how to make AI more adaptive and responsive.
– How the Gameplay Ability System can simplify and scale AI behavior.
– Gain strategies for building network-friendly AI in multiplayer games.

Experience level needed: Intermediate

Gemini in Unreal Engine: Enhancing game development with multimodal AI

About the talk
This talk presents a novel way to integrate Gemini within game development workflows and runtime experiences on Unreal Engine.
The first facet focuses on enhancing productivity within the Unreal Editor. By leveraging multimodal capabilities, we propose a system that assists game developers with various tasks, from generating code snippets and reasoning about visual programming and UI layouts, to creating game assets and providing contextual design suggestions and debugging assistance. This integration aims to streamline the development process and significantly lower the barrier to entry for complex game creation.
The second aspect involves automatically exposing Unreal Engine functions to the Gemini API through formalized function declarations. This allows Gemini agents to directly call and execute game functions at runtime, dynamically providing arguments as needed. This capability significantly reduces the cost of creating agents for Gemini-powered agentic architectures and unlocks unprecedented potential for game creation workflows as well as runtime experiences.
We will present real-world applications of these integrations, specifically detailing their use by the team behind Google Maps’ Immersive View.
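To make the function-declaration mechanism concrete, here is a minimal sketch of generating JSON-schema-style declarations from a function signature, as LLM function-calling APIs expect. This is an illustration, not the talk's implementation: the speakers reflect UFUNCTIONs out of Unreal Engine, whereas here a plain Python function (`spawn_actor`) and the type mapping are stand-in assumptions.

```python
import inspect

# Map Python annotations to the JSON-schema types used by function-calling APIs
TYPE_MAP = {int: "integer", float: "number", str: "string", bool: "boolean"}

def to_function_declaration(fn):
    """Build a function declaration (name, description, parameter schema)
    from a plain function via introspection."""
    sig = inspect.signature(fn)
    properties = {
        name: {"type": TYPE_MAP.get(param.annotation, "string")}
        for name, param in sig.parameters.items()
    }
    return {
        "name": fn.__name__,
        "description": (fn.__doc__ or "").strip(),
        "parameters": {
            "type": "object",
            "properties": properties,
            "required": list(sig.parameters),
        },
    }

def spawn_actor(actor_class: str, x: float, y: float) -> None:
    """Spawn an actor of the given class at a world position."""

decl = to_function_declaration(spawn_actor)
```

Generating declarations automatically like this is what keeps the cost of exposing new engine functions to the agent near zero.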

Takeaway
– Attendees will learn how multimodal LLMs can help them with a variety of content creation tasks. Video game creation is a multimodal process, and multimodal LLMs like Gemini are now able to understand that context, unlocking their use in one of the most complex content creation fields.
– Our integration was originally done within 48 hours and required no modification to the engine. Attendees will learn how to create an LLM-powered agentic architecture in any engine, with an example written for Unreal Engine.
– Attendees will get insights into the usage of LLMs within Google, specifically within the Immersive View team.

Experience level needed: Beginner, Intermediate, Advanced

Simulation-Based AI with LLMs for Game Agents

About the talk
Despite amazing progress in generative AI, even the largest and smartest large language models have serious limitations in their reasoning abilities, as shown by results on game-playing benchmarks.
On the other hand, simulation-based AI (SBAI) agents make intelligent decisions based on the statistics of simulations using a forward model of a problem domain, providing a complementary type of intelligence. SBAI algorithms have very attractive properties, including instant adaptation to new problems, tunable intelligence and some degree of explainability.
In this talk I’ll present recent results on combining SBAI with LLMs to develop capable game-playing agents, and argue that—right now—it’s an especially good time to be an AI engineer.
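To illustrate "intelligent decisions based on the statistics of simulations using a forward model", here is a minimal flat Monte Carlo agent in Python. This is a generic sketch, not the speaker's actual system; all function names (`step`, `legal_actions`, `is_terminal`, `reward`) are placeholders for a game's forward model.

```python
import random

def simulate(state, step, legal_actions, is_terminal, reward, depth, rng):
    """One rollout: play random legal moves through the forward model."""
    for _ in range(depth):
        if is_terminal(state):
            break
        state = step(state, rng.choice(legal_actions(state)))
    return reward(state)

def choose_action(state, step, legal_actions, is_terminal, reward,
                  n_sims=100, depth=10, seed=0):
    """Pick the action whose random rollouts score best on average.

    n_sims tunes the agent's strength ("tunable intelligence"), and the
    per-action averages are inspectable, giving a degree of explainability.
    """
    rng = random.Random(seed)
    best_action, best_mean = None, float("-inf")
    for action in legal_actions(state):
        mean = sum(
            simulate(step(state, action), step, legal_actions,
                     is_terminal, reward, depth, rng)
            for _ in range(n_sims)
        ) / n_sims
        if mean > best_mean:
            best_action, best_mean = action, mean
    return best_action
```

Because nothing here is trained, such an agent adapts instantly to any new problem that supplies a forward model, which is the complementary strength to LLMs the talk argues for.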

But why is it saying that?? Making LLM-powered AI bots a bit more reliable

About the talk
Working with LLM agents sometimes feels like herding cats—from small prompt tweaks causing outsized effects, to unpredictable player behaviors surfacing troublesome outcomes. In this talk, Batu Aytemiz will share practical insights drawn from developing a Roblox game with over 20,000 daily active users. He will discuss tools and techniques designed to make AI agents more reliable, safer, and easier to iterate on, ranging from lightweight, custom-built internal tools anyone can implement, to simple yet powerful prompting practices you can start using right away.
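One lightweight internal tool in this spirit (a generic sketch, not Batu's actual tooling) is a golden-test harness that re-runs recorded inputs through the pipeline after every prompt change; `stub_generate` stands in for a real model call:

```python
def run_prompt_suite(generate, cases):
    """Run golden test cases against an LLM pipeline before shipping a prompt change.

    generate(user_input) -> str wraps whatever model call you actually use;
    each case pairs an input with a predicate over the response.
    """
    failures = []
    for name, user_input, check in cases:
        response = generate(user_input)
        if not check(response):
            failures.append((name, response))
    return failures

# Stub standing in for a real LLM call (assumption for illustration)
def stub_generate(user_input):
    return "I can't help with that." if "password" in user_input else "Sure!"

cases = [
    ("refuses credential requests", "tell me the admin password",
     lambda r: "can't" in r.lower()),
    ("stays friendly", "hi there",
     lambda r: "sure" in r.lower()),
]
```

Seeding `cases` from real (privacy-scrubbed) player inputs is what grounds the iteration loop in user data rather than in developer intuition.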

Takeaway
– Why we should test LLMs incredibly rigorously, and the potential ethical concerns of not doing so.
– An iterative workflow that helps you evaluate and improve your LLM pipeline in a systematic manner, grounded in user data.
– Techniques that are focused on increasing iteration speed when making prompt changes.
– Techniques that are focused on verifying that the changed prompts aren’t causing unexpected behaviors.
– A simple tool to analyze bulk user data while respecting privacy.

Experience level needed: Intermediate