18 May 2023

Tame the algorithm: a game of AI control

AI’s impact on society can feel overwhelming. This is a transformational technology that demands we reconsider what it means to be human and what we must do to protect that.

Live action role play (LARP) is a modality for the collaborative exploration of complex ideas and scenarios, including in critical seminar settings. It can yield richer understanding than conventional debate because participants are asked to embody actors whom they might otherwise consider only in the abstract. This builds empathy and a greater appreciation of the dilemmas faced by decision makers.

This LARP aims to give participants hands-on experience of managing and regulating algorithms.

I have run this game on numerous occasions with writer Sara Pereira. We are available to support or host this and similar games, just get in touch.

Below is an outline of the game’s mechanics so you can get a feel for it or even try it yourself.


Participants play the roles of stakeholders in a scenario where an algorithmic recruitment app is being criticized for perpetuating bias and discrimination. The objective of the LARP is to collaboratively decide on one of the players’ proposals to address these concerns in a way that is fair and transparent for all parties. The game is competitive, with individual proposals and voting rounds that introduce additional challenges to the decision-making process.

Roles and world-building

Each participant takes on the role of one of the following stakeholders:

Algorithm Engineer: Responsible for the development and maintenance of the algorithm.

Disgruntled worker: An individual who uses the app to find work and suspects they lost work on unfair grounds.

Happy worker: An individual who uses the app to find work and was matched with their dream job through the algorithm.

Business User: An employer who uses the app to recruit workers and make decisions about who to hire and fire.

Government Regulator: Unelected official, responsible for overseeing the implementation of the algorithm in accordance with laws and regulations.

Digital ethics campaigner: A campaigner and investigative reporter looking to uncover the truth about the app and its effects.

Politician: An elected official who is responsible for regulating algorithms, but also has a vested interest in promoting economic growth.

Players get role cards with details of their own perspective and some prompts that they can use in the game. Card details are included below.

They should spend 10-30 minutes thinking about their role and the world-building notes before gameplay begins.


Gameplay rounds

Round 1: Introduction – In the first round, each participant introduces themselves and their role. Participants discuss the current state of the algorithmic recruitment app and the concerns that have been raised about it.

e.g. “I’m an engineer working on the app. We accept that there have been problems with it just as there are in human HR decisions. There have also been huge benefits from it. We understand the desire for visibility into why the app makes the decisions it does but, just as we can’t open a human brain, there are complications in seeing inside.”

Round 2: Investigate – Participants split into groups of two or three to discuss proposals. Each group investigates the app and its impact from their own perspective: workers might discuss the app’s effect on their job hunting, for example, while those who run the app might discuss how the algorithmic decision-making process could be changed. In doing so, they suggest and refine proposals that might resolve the bigger problem while suiting their own motivations as listed on their cards.

Round 3: Proposal – Participants come together as a large group. Each player presents a proposal to improve the app design process and make it more equitable. Players can present their own proposal or back one put forward by another player if they think it’s the best solution.

Proposals can place systemic demands on other players, such that those players stand to lose their jobs if they fail to meet certain criteria. For example, a proposal might fire the engineer if they fail to offer a sufficient degree of transparency in the engineering process. If the group accepts such a proposal, the targeted player loses their job (and the game). This gives players an extra incentive to argue against certain proposals: they want to keep their jobs.

Round 4: Debate and voting – Players discuss the proposals and debate the merits of each one. Each player then privately writes down the proposals in order of preference, most preferred first. The rankings are then tallied, and the top proposal(s) are selected for implementation.
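The game doesn’t mandate a particular tallying method, but one simple option for combining the players’ ranked lists is a Borda count. The sketch below is an illustration of that assumption, not part of the official rules; the proposal names are hypothetical.

```python
from collections import defaultdict

def tally_rankings(ballots):
    """Borda count: on a ballot ranking n proposals, the first choice
    scores n points, the second n-1, and so on. Returns the proposals
    sorted from most to least preferred overall."""
    scores = defaultdict(int)
    for ballot in ballots:
        n = len(ballot)
        for rank, proposal in enumerate(ballot):
            scores[proposal] += n - rank
    return sorted(scores, key=lambda p: scores[p], reverse=True)

# Example: three players each rank three (hypothetical) proposals.
ballots = [
    ["audit", "transparency", "ban"],
    ["transparency", "audit", "ban"],
    ["transparency", "ban", "audit"],
]
print(tally_rankings(ballots)[0])  # → transparency
```

A Borda count rewards broadly acceptable proposals over divisive ones, which suits the game’s collaborative framing; a simple first-choice vote would work too and is easier to run on paper.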

Whoever’s proposal is selected gains three points. If the chosen proposal entails one of the roles losing their job because it places more responsibility on them, that player is the loser: e.g. if the group chooses a proposal that would fire engineers who fail to offer sufficient transparency, the engineer is fired and loses three points.

Round 5: Conclusion – Participants may now reflect on the decision-making process and the outcomes of the LARP. What were the strengths and weaknesses of the chosen proposal? What lessons can be learned for future algorithmic governance scenarios?

Overall, the objective of the LARP is to explore the nature of algorithmic decision-making and the role of different stakeholders in governing it.

World-building notes

These notes detail the world that the LARP is based in. Anything not defined here can be decided on by participants during the world-building phase, drawn from elements of the real world and from your imagination.


We are here having these discussions because the WorkLife app has faced controversy from people questioning decisions it has made. Cases include workers who say they missed out on work opportunities because the system graded them unfairly; investigations into the matter proved inconclusive.

Nevertheless, it was agreed that greater transparency was needed, so a multi-stakeholder group came together to find proposals for improvements. You are that group.

Some facts about the situation:

The WorkLife app is fairly widespread and has a substantive impact on the life chances of individuals seeking work, as well as the business outcomes of those firms who use it for human resources.

The app uses an algorithm to suggest workers to interview and hire. The algorithm is proprietary and it is not clear how it works nor what data it ingests.

It has long been suspected that hiring decisions made with the app are unfair. Significant anecdotal evidence has been gathered by campaigners to suggest that the algorithm applies baseless and problematic prejudices e.g. workers living further from a workplace than other candidates are less likely to get jobs. However, there is not sufficient data available to make a proper analysis.


Here are some further notes on LARPs and AI in society.

Discussion on the lure and utility of LARPing: https://futureofstorytelling.org/story/larp-and-co-created-reality

“Any application of predictive optimization should be considered illegitimate by default unless the developer justifies how it avoids these flaws.” 2022 https://predictive-optimization.cs.princeton.edu

“One of the algorithm’s developers told ProPublica that leasing agents had ‘too much empathy’ compared to computer generated pricing.” https://www.propublica.org/article/yieldstar-rent-increase-realpage-rent

…the major harms caused by AI are already here, and therefore “Regulatory efforts should focus on transparency, accountability and preventing exploitative labor practices.”  https://www.politico.com/newsletters/digital-future-daily/2023/04/11/timnit-gebrus-anti-ai-pause-00091450