Which AI harms & risks will mobilise the public to act?
- Sam Nadel


Despite high levels of public concern about AI and strong support for regulation, widespread public action has yet to materialise. Our new research investigates what might close that gap - and the findings have clear implications for everyone working to hold the AI industry to account.
*Read the full report here*

This report - the first UK study of its kind - involved a randomised controlled trial with 3,467 nationally representative participants, each assigned to read a short news-style vignette about one of eleven AI harms and risks. We then measured how reading that vignette changed their concern, their support for regulation, and their willingness to take action.
What we found

AI-enabled warfare drives the deepest concern
Of all eleven risk areas, reading about AI-enabled warfare - autonomous weapons that make life-or-death decisions without human oversight - produced the most consistent increase in general concern about AI. It was the only condition that credibly shifted people on all three of our general concern measures: agreement that AI is a threat, support for slowing development, and support for government regulation.
Perhaps most strikingly, it was also the only risk that affected people's voting intentions. After reading about AI-enabled warfare, participants said they would be significantly more likely to vote for a party that made tackling AI risks a priority.

Figure 1: All eleven risk conditions increased general AI concern compared to the control group. AI-enabled warfare showed the strongest effect and was the only condition to credibly shift all three component measures: perceived AI threat, support for slowing development, and support for regulation.
This finding lands in a particularly charged moment. As we publish, autonomous AI systems are being deployed at scale in conflicts in Ukraine and Iran. The recent dispute between Anthropic and the Pentagon signals that the US government is keen to ramp up its use of AI in warfare, even as it is warned that current AI systems are not reliable enough for such purposes. Previous polling across 26 countries shows 61% of people oppose lethal autonomous weapons. Our data suggest that making this risk vivid and concrete to the public could be a significant lever for the AI movement.
Environmental harms: the strongest mobiliser
While warfare generated the most general concern, environmental harms produced the biggest shift in people’s willingness to take action. When participants read about data centres depleting water supplies and driving carbon emissions, their stated willingness to act on these issues - signing petitions, contacting MPs, joining organisations - increased substantially. It was the issue where the gap between baseline willingness and post-reading willingness was largest.

Figure 2. Willingness to act on different AI harms/risks after reading about them. Environmental harm showed the largest increase in willingness to act - the gap between those who read about it (purple) and those who read about a different risk (grey) was greater than for any other issue.
This aligns with a growing real-world movement. In the US, opposition to data centre construction has become one of the fastest-growing grassroots campaigns in the country, with an estimated $98 billion in projects blocked or delayed in a single quarter of 2025. In the UK, that movement is beginning to take shape. For campaigners, our data suggest this is fertile ground - and that it offers the additional advantage of connecting AI concerns to the existing climate movement infrastructure.
Bias and discrimination due to AI - which arises when the outputs of automated systems systematically disadvantage particular groups, whether by race, gender, age, disability, or other characteristics - produced a similar boost in willingness to act. A range of groups across the US and UK are already tackling this issue on the ground, including the Algorithmic Justice League, the AI Now Institute, Amnesty International, and the Ada Lovelace Institute.
Extinction risk: a harder sell - but there are indirect routes
Of all the risks we tested, extinction risk (X-risk) - the possibility that superintelligent AI could pose an irreversible threat to humanity's survival - generated the lowest levels of concern and the second-lowest willingness to act. Reading about X-risk directly did little to shift these scores; even after engaging with a detailed X-risk vignette, concern remained lower than for every other risk area.
A key finding was that reading about AI-enabled warfare increased concern about X-risk more than reading about X-risk itself. Organisations whose primary focus is X-risk may benefit from leading with concrete, near-term harms - particularly autonomous weapons - to build the foundation of concern that makes broader existential arguments land.
What actually drives people to act: anger, not anxiety
We tested a range of psychological factors to understand what turns concern into action. The results were clear: after reading a vignette, anger was the only emotion that significantly predicted willingness to take action. Anxiety, fear, sympathy, and a sense of powerlessness did not predict action - in fact, fear and anxiety may trigger a freeze response rather than a motivational one.

Figure 3. Temporal proximity (perceiving a risk as already happening or imminent) was the strongest predictor of willingness to act. Anger was the only emotion that reliably predicted action. Perceived personal relevance had no significant effect; concern for others did.
Two other factors mattered significantly. Perceiving a risk as temporally proximate - already happening, or likely within the next few years - strongly boosted willingness to act. And concern for others was a stronger motivator than personal relevance - suggesting that appeals to the wellbeing of future generations, or of communities facing acute harms, may be more effective than individual-interest framing.
The job displacement paradox
Job loss consistently tops surveys as the AI issue people are most aware of and worried about. Yet in our data, it ranked near the bottom on mobilisation potential. Issue salience, in other words, is not a reliable guide to mobilisation potential. For campaigners, this suggests that meeting people where they are (job loss) may be a useful entry point, but that moving the conversation towards other issues - particularly warfare and environmental harms - is likely to produce more meaningful action.
The bigger picture
One consistently encouraging finding runs through the whole study: any information about AI harms increases people's general concern. The public is not apathetic - it is underinformed. When people learn about what is actually happening, they care more. There is a significant latent constituency for an AI movement. The task is to reach it with the right messages, through the right frames, in ways that generate the moral outrage that drives people into action.
This research was generously funded by Changing Ideas, which supports people, movements and organisations challenging the status quo.

The study was pre-registered and conducted with a nationally representative UK sample via Prolific. Full methods, supplementary materials, and all vignettes are available in the full report.

Social Change Lab conducts research on social movements to understand their impact. Through evidence-based analysis, we help movements and funders maximise their impact. Find all our research at socialchangelab.org



