What might make people act on AI? First findings from our mobilisation study
Sam Nadel

"It seems that the regulatory bodies and governments are frightened of the tech companies thus giving them even more power than they already have."

We're currently running a large study testing which AI harms and risks are most likely to mobilise people to action. We believe that significant public involvement in shaping the direction of AI development is crucial in the current moment. AI is evolving extremely fast and being integrated into systems of all kinds, with little public oversight. Tech companies hold unprecedented power, wealth, and influence - particularly over the US government. Without large-scale public involvement, policies are likely to be made to serve their interests - as we see with Trump's executive order on AI. The prevailing ethos is one of 'moving fast and breaking things' rather than proceeding with the caution and regulation that the majority want. Although there are abundant surveys on how the public feels about AI, there is almost nothing (notably excepting this recent report) that tries to understand what might turn those concerns into action. Our study addresses this urgent gap.
What we asked people about
The study uses carefully designed vignettes presenting different AI harm and risk scenarios to a representative sample of around 3,500 people across the UK. The areas cover everything from the risk of job loss to existential threat, autonomous weapons systems to cognitive decline, environmental harms to disinformation (11 risk areas in all). We measure not just people's concern about these issues, but also their willingness to actually do something about them. The full experimental findings will be available in the coming weeks.
While the main analyses are still in progress, something interesting has emerged from a single open question tucked into the survey. Near the end, after all the experimental conditions and quantitative questions were done, we asked people, "Could you tell us in 1 or 2 sentences how you feel about the development of AI?"
The question was optional. Just an open text box without a word limit. People could write whatever they wanted or skip it entirely.
Almost no one skipped it.
Out of around 3,500 participants, nearly everyone took the time to write something. This is extremely unusual! Optional open-ended questions in surveys typically see much lower response rates - often as low as 50%, even for questions requiring just a one-word answer. So when nearly everyone responds, it's a sign the topic has grabbed them. They have thoughts they want to express. We might even speculate that they have few other meaningful opportunities to do so.
These spontaneous responses - which, we emphasise, are answers to just a single question - give us a glimpse of the landscape of public concern.
So, what keeps people up at night?
"I think it is quite scary. Job losses are a major consideration - if you don't have a job, how do you live and pay your way?"
Job displacement concerns dominate. Unprompted, around 15% of people referred to concerns about AI and employment. They didn't just express concern about job loss; they were also worried about changes to the quality of work - increased workplace surveillance, management by algorithm, and the erosion of meaningful work. This was by far the most commonly mentioned area of concern.
"It has it's [sic] benefits in a few areas but on the whole I detest it. The resources it is draining away is unacceptable."
The second most common worry was perhaps more surprising: environmental harms. Roughly 5% of respondents mentioned AI's water consumption, energy demands, and contribution to climate change. Although media attention to this issue is growing, it hasn't yet had the sustained coverage that misinformation, deepfakes, surveillance and other harms have received. People are connecting the dots themselves, from data centre construction to resource extraction. This may reflect the high levels of pre-existing climate concern in the UK, or worries about the already high cost of energy - combined with an understanding that AI might only make matters worse.
"It has a lot of benefits. However I am worried about the harm it can do as well. I never know what is real anymore."
Misinformation came up frequently too, alongside surveillance and privacy concerns (each mentioned by around 4% of people). Then came worries about AI undermining critical thinking, concerns about (human) creativity and copyright, autonomous weapons, and the biases baked into algorithms (between approximately 2% and 4% of responses).

And an existential risk to humanity?
An important motivation for carrying out this study was that the campaign groups currently most active in trying to mobilise the public on AI (Control AI, Pause AI, Stop AI) focus on the existential threat to humanity posed by superintelligent AI. In these open responses, human extinction risk (x-risk) was rarely mentioned (around 1%) - and when it was raised, it was often to throw cold water on the idea.
"I think that in general the risks are overblown. There are undoubtedly some concerns over potential misuse harming individuals, but claiming that it threatens the existence of humanity is ridiculous."
This could be important. Recent research looked at whether narratives about existential risk distract from immediate harms, largely finding that they didn't. What our data suggests is something more fundamental: for most people, existential risk is not on the radar at all. There is no latent distraction from x-risk concerns; people are mostly just not thinking about them.
Mainly, people feel rationally ambivalent
"I feel it does benefit me in some ways, I ask for stocks and shares advice, advice for where to go on holiday, help with my CV etc, but it does concern me in other ways. I do think it can make my brain a bit lazy and I worry about it going too far etc."
Most people don't have simple, clear-cut views about AI. The responses revealed complex, often contradictory feelings. People are simultaneously excited and worried, optimistic and anxious. They see benefits and risks, often in the same sentence. Very few are blindly for or completely against AI development. This ambivalence seems like an impressively rational response to a genuinely complicated situation.
Anger is an energy… but most are more scared than angry
"I am fearful of the impact on society that AI will have. Loss of jobs and the control and power of a select few who own and can direct powerful AI.”
Fear and worry dominated the emotional landscape of these responses - substantial numbers, even amongst those who were broadly optimistic, expressed anxiety about AI. But when we looked for expressions of anger or outrage - the emotions that typically drive people to protest, organise, and demand change - we found rather little. This distinction matters because social movements don't mobilise on fear alone. They mobilise on moral outrage, on the sense that something unjust is happening and must be stopped. The climate movement learned this when it shifted from polar bear images to fossil fuel accountability campaigns. The civil rights movement didn't win on concern about inequality; it won by making inequality morally repugnant.
There is one thing people are angry about
"In the hands of unscrupulous immoral people like Musk and Grok it can be weaponised against people and they should be held accountable."
"Disgusting I mean just look at grok on twitter being used to undress photos of women and CHILDREN it's absolutely abhorrent and just being used to strip women's bodily autonomy with no regulation."
To the extent that people did voice anger, it was directed at the leaders of some of the major AI corporations. We should note that the survey was fielded in a week when news of the Grok/X "nudification tool" was prominent in the media. This may explain that particular target - although there was also a broader sense of distrust and disdain for Musk, the Broligarchy, and the unregulated power of a few major AI corporations.
"It could be very good for humans, it could be very bad. At the moment it seems the only people with any say over what direction it's taking are the people who stand to benefit financially - people like Musk, Zuckerberg. I have my doubts that they care about the longterm or about ordinary people."
The mobilisation challenge
"It is developing too fast, a few big tech companies will have too much power and control. The development of AI needs to be tightly regulated."
When respondents suggested solutions, many called for stronger and better regulation. They want governments to step in with rules, oversight, and guardrails. Very few mentioned taking any personal action themselves. This might reflect the nature of the question we asked, or it may be a sign of passivity: someone should do something about this, but I don't know who. That mindset is not ideal for AI safety campaigners trying to build a highly energised mass movement to counter the trillion-dollar AI industry and the governments increasingly under its spell.
The exception was those galvanised by strong concerns about child safety, particularly in reference to Grok's nudification tool. Here, there was a clear sense of moral outrage. The Grok controversy transformed abstract AI risk into something visceral and deeply disturbing.
"[It’s] terrifying to me, especially as a mum of two young children."
But we can't ignore the biggest group of respondents: those who were neutral or uncertain about AI development, people who genuinely don't know what to think yet.
"Cautious and keen to understand more, I can see both positives and negatives at the moment"
These are not the words of your average would-be campaigner. This big uncertain middle represents both a challenge and an opportunity. These people are not necessarily opposed to AI activism, but they're not ready for it yet.
These findings hint at possible strategic directions, though we'll hold back on those until the full experimental results are in. At this stage, our conclusion is that the public concern is there - but the anger, the sense of active agency, and the stomach for a fight are still some way behind.
Full findings from this study will be available in February. Sign up to our newsletter if you would like to stay up to date.