3 RESULTS

AgentMerge: Enhancing Battlefield Automated Issue Management with LLMs

Description: The Battlefield QV department manages an automated issue workflow that handles reports from diverse data sources and entities, such as error APIs and automation systems. An important part of this workflow is the interaction with the issue tracker Jira, where tickets are created automatically from data retrieved in the reports. The process is not fully automated: some parts still rely on hardcoded rules that may change over time, or on manual intervention that becomes time consuming when ticket volume peaks. This talk explores the potential of Large Language Models (LLMs) for automated issue management within the Battlefield franchise, with the goal of identifying duplicate issues and replacing the inefficient hardcoded rules used previously. We will demonstrate how the same LLM-based solution could be reused across all projects built on the same version of Frostbite (our engine). Finally, the talk discusses the challenges and best practices of integrating a research project into an established game development workflow, and how to overcome them.
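
As a rough illustration of the duplicate-detection step only (not the actual AgentMerge pipeline or prompts), a single LLM call can be asked to compare a new report against an existing ticket and gate whether a fresh Jira ticket is created. The Report type, the is_duplicate helper, and the prompt text below are hypothetical stand-ins:

# Sketch: asking an LLM whether a new report duplicates an existing ticket.
# Names (Report, is_duplicate, llm) are illustrative, not AgentMerge interfaces.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Report:
    title: str
    description: str

PROMPT = """You are triaging automated error reports for a game.
Existing ticket:
  Title: {old_title}
  Description: {old_desc}
New report:
  Title: {new_title}
  Description: {new_desc}
Answer with exactly one word: DUPLICATE or NEW."""

def is_duplicate(new: Report, existing: Report, llm: Callable[[str], str]) -> bool:
    """Ask the model whether the new report describes the same underlying issue."""
    answer = llm(PROMPT.format(
        old_title=existing.title, old_desc=existing.description,
        new_title=new.title, new_desc=new.description,
    ))
    return answer.strip().upper().startswith("DUPLICATE")

if __name__ == "__main__":
    # Stubbed model for demonstration; in production `llm` would wrap a real
    # chat-completion call, and the result would gate Jira ticket creation.
    stub = lambda prompt: "DUPLICATE"
    a = Report("Crash on map load", "Null pointer in terrain streaming")
    b = Report("Game crashes while loading map", "NPE during terrain streaming")
    print(is_duplicate(b, a, stub))  # True with the stubbed model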

Takeaways:

  • How to leverage the potential of LLMs for specific use cases in game development, particularly in QA.
  • Insights into real-world QA improvements achieved through the use of LLMs compared to traditional approaches, along with the challenges in measuring these improvements.
  • An understanding of the challenges and opportunities associated with using machine learning in game production, and how to effectively combine research and development efforts.

RL Agent Training is Property Based Testing

Description: Training RL agents in games requires collecting large amounts of data from many game states and trajectories. As games grow in complexity, it becomes easy for unintended functionality to skew the distribution of data an RL agent is trained on. A resulting change in the agent's behaviour can therefore be a signal that some property of the game does not match the designers' intent. This matches the criteria for a property-based test and is an inspiration for future game testing mechanisms.
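
For readers unfamiliar with the technique, a minimal property-based test looks like the sketch below (using the Hypothesis library): instead of asserting on hand-picked inputs, we state an invariant and let the framework search many generated inputs for a counterexample. The apply_damage mechanic is invented purely for illustration:

# Toy property-based test with Hypothesis; apply_damage is an invented mechanic.
from hypothesis import given, strategies as st

def apply_damage(health: int, damage: int) -> int:
    """Intended game property: health never drops below zero."""
    return max(0, health - damage)

@given(health=st.integers(min_value=0, max_value=1000),
       damage=st.integers(min_value=0, max_value=1000))
def test_health_never_negative(health: int, damage: int) -> None:
    # Hypothesis generates many (health, damage) pairs and looks for a failure.
    assert apply_damage(health, damage) >= 0

if __name__ == "__main__":
    test_health_never_negative()  # calling the decorated test runs the search

In the RL framing, the states and trajectories visited during training play the role of the generated inputs, and a drift in the agent's behaviour plays the role of a failing assertion.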

Takeaways:

  • An intuition for property-based testing.
  • A concrete example of how RL training helped identify a bug.
  • Inspiration for how RL training can be further incorporated into property-based tests.

Empowering Game Designers with Automatic Playtesting

Description: The complexity of modern tabletop games has been steadily increasing since the mid-1990s. This increases the time designers spend developing (2-3 years on average from idea to commercialisation) and playtesting (6-24 months) a game, raising the barrier to entry for independent designers or small companies that do not have sufficient resources at their disposal. The effect is also felt by players, who find it harder to play such games due to the steep learning curve. This talk will explore how Tabletop R&D, a spin-out company from Queen Mary University of London, aims to address these issues and democratize the tabletop games market by providing game designers with automatic playtesting tools. Using the latest Game AI technology and digital twins of tabletop games, we speed up development, reduce costs, and increase the efficiency of a traditionally lengthy analogue process.
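
Conceptually, the automatic-playtesting loop runs many self-play games between AI agents on a digital twin of the design and aggregates gameplay metrics. The sketch below is a simplified, hypothetical version of that loop; simulate_game and its return fields are placeholders for a real digital twin, not the Tabletop R&D tooling:

# Sketch of an automatic-playtesting loop; simulate_game is a placeholder
# for stepping AI agents through a digital twin of the tabletop design.
import random
import statistics
from dataclasses import dataclass

@dataclass
class GameResult:
    winner: int      # index of the winning player
    turns: int       # game length in turns (pacing)
    score_gap: int   # margin between first and last place (closeness)

def simulate_game(rng: random.Random, n_players: int = 2) -> GameResult:
    # Placeholder outcomes; a real simulator would play the rules with AI agents.
    return GameResult(winner=rng.randrange(n_players),
                      turns=rng.randint(20, 60),
                      score_gap=rng.randint(0, 30))

def playtest(n_games: int = 1000, n_players: int = 2) -> dict:
    rng = random.Random(0)
    results = [simulate_game(rng, n_players) for _ in range(n_games)]
    win_rates = [sum(r.winner == p for r in results) / n_games
                 for p in range(n_players)]
    return {
        "win_rates": win_rates,  # balance / first-player advantage
        "mean_turns": statistics.mean(r.turns for r in results),
        "mean_score_gap": statistics.mean(r.score_gap for r in results),
    }

if __name__ == "__main__":
    print(playtest())

Metrics like these (balance, pacing, closeness) are the kind of gameplay-experience measures the takeaways below refer to.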

Takeaways:

  • How to use automatic playtesting with AI agents.
  • A diverse set of metrics for evaluating the gameplay experience.
  • The value of exploring the design space of your game.