

Most people are worried about AI: between 60% and 80% support regulation. But concern alone hasn't translated into widespread public action - and in the absence of that action, powerful tech industry interests are shaping outcomes largely unchallenged.

This study asks: what will it take to close the gap between caring and doing?

We conducted a randomised controlled trial with 3,467 UK participants, each randomly assigned to read a short, news-style vignette about one of eleven AI harms or risks - or no vignette at all. We then measured how reading about that harm affected their concern and their willingness to act.


What we found


AI-enabled warfare drives the strongest overall concern. Reading about autonomous weapons - AI-controlled drones and robots making life-and-death decisions without human oversight - was the only risk that shifted people on every measure we tested: agreement that AI is a threat, support for slowing development, support for regulation, and even voting intentions.

Environmental harms are the strongest mobiliser. When people read about the environmental costs of AI - the water, energy, and land consumed by data centres - their willingness to take action rose substantially. AI bias and discrimination showed a similar effect, suggesting both issues have real campaign potential.

Extinction risk ranks last for concern - but there's an indirect route. Of all eleven risks, people were least concerned about the possibility that AI could pose an existential threat to humanity. Reading about it directly didn't shift attitudes much. However, reading about AI-enabled warfare increased concern about extinction risk more than reading about extinction risk itself. Concrete, present-day harms may be a more effective way to communicate about catastrophic risks.

Job displacement is salient but not a strong mobiliser. Despite being by far the issue people mention most spontaneously, job loss didn't stand out on concern or willingness to act - a reminder that salience is not a reliable proxy for mobilisation potential.

Anger, not anxiety, drives action. The psychological factors that predicted willingness to act were anger (not fear or anxiety), perceived temporal proximity (the sooner the harm, the stronger the response), and concern for others rather than oneself.


Why it matters


Over a quarter of participants clicked through to sign a real AI regulation petition - a signal of significant latent concern waiting to be channelled. The public is not apathetic; it is underinformed. When people learn about specific AI harms, they care more, and on certain issues that concern is convertible into a willingness to act.

For campaigners, funders, and movement strategists, the findings point toward a set of practical choices: which risks to lead with, which emotions to tap into, and how to frame messages that move people from worry to action.

Keywords: AI risks, AI harms, public mobilisation, AI regulation, AI safety, existential risk, extinction risk, X-risk, autonomous weapons, environmental harms, AI-enabled warfare, job displacement
 

Suggested citation: Ostarek, M., Rogers, C., Kenward, B., & Nadel, S. (2026). "Which AI harms and risks will mobilise the public to act?". Social Change Lab. https://doi.org/10.5281/zenodo.18937001

Cover image by Sinem Görücü / https://betterimagesofai.org / https://creativecommons.org/licenses/by/4.0/

Funding for this report was generously provided by Changing Ideas, which supports people, movements and organisations challenging the status quo.
