
AI is on the march. Is the AI safety movement ready?

  • Writer: Sam Nadel
  • 5 days ago
  • 4 min read

Our new report maps the emerging AI safety movement and finds critical gaps in grassroots mobilisation and public voice.

Artificial intelligence is advancing at a breathtaking pace, with models that can reason through complex problems, code better than most software engineers, and autonomously complete tasks that would have seemed impossible just months ago. But alongside these capabilities come immediate dangers: biased algorithms denying people jobs, AI-generated disinformation undermining democracy, and the growing fear that we're hurtling toward a future where AI systems operate beyond human control.

Our new report, AI is on the march. Is the AI safety movement ready?, examines whether civil society is prepared to meet this moment. We've mapped the emerging ecosystem of groups working to ensure AI develops safely and democratically, and what we found is both encouraging and deeply concerning.


The stakes couldn't be higher

The numbers tell a stark story. In 2024, over $250 billion was invested globally in AI development. Of that, less than 0.1% went to safety research. Meanwhile, AI capabilities are doubling every 7-9 months on key metrics like autonomous task completion. We're witnessing an unprecedented concentration of power in the hands of a few tech companies racing to build systems they openly acknowledge could pose existential risks to humanity.

This isn't science fiction. AI harms are already visible: facial recognition systems leading to wrongful arrests, AI-generated deepfake videos intended to sway elections, and automated systems eliminating jobs without warning or recourse. ChatGPT recently falsely accused a real person of murdering their children. These aren't isolated incidents; they're early warning signs of what happens when transformative technology develops without meaningful public oversight.


A movement in its infancy

We focus here on groups that place public engagement at their core: those informing, mobilising, representing, or giving voice to the people who are often left out of elite AI governance conversations. We identify four main categories of civic response:

  1. Protest and disruption groups like PauseAI and Stop AI are taking to the streets, demanding halts to frontier AI development until safety measures are in place. With over 45,000 signatures on Control AI's open statement alone, these groups are translating public concern into visible action.

  2. Narrative-shaping organisations such as the Algorithmic Justice League and DAIR are exposing current harms and challenging the tech industry's framing of AI as inevitable progress. They're making the human costs of AI visible through research, storytelling, and advocacy.

  3. Public literacy initiatives like We and AI and CivAI are working to democratise understanding of AI risks. Through workshops, interactive demos, and citizen assemblies, they're equipping people to engage meaningfully in decisions about AI's future.

  4. Infrastructure builders including the Ada Lovelace Institute and AI Incident Database are creating the connective tissue for a broader movement—research, convening spaces, and documentation of AI failures that activists and policymakers can build upon.


Critical gaps remain

Despite these encouraging signs, our mapping revealed significant weaknesses in the AI safety ecosystem:

  • The movement is tiny and fragmented. Most groups operate on shoestring budgets with volunteer labour. There's little coordination between organisations focused on present harms versus future risks, limiting their collective impact.

  • Narrative battles are being lost. The tech industry's framing of AI development as an inevitable race dominates public discourse. Few groups have successfully challenged this narrative or articulated compelling alternatives.

  • Movement infrastructure is minimal. Compared to more established social movements, the AI safety space lacks shared platforms, legal support networks, training programmes, and sustainable funding mechanisms.

Learning from history

The good news? Social movements have a track record of rapidly shifting public consciousness and policy when conditions align. The climate movement transformed from a niche concern to a global force demanding action. The civil rights movement achieved transformative change through sustained organising and moral clarity. Even the nuclear disarmament movement—perhaps the closest historical parallel to AI safety—succeeded in establishing international treaties and norms around existential risk.

As we wrote in our recent op-ed for Waging Non-Violence, the AI safety movement can take important lessons from history about when and how social movements succeed:

  • Grievances must be widely felt. Polling shows strong public concern about AI's impacts on jobs, privacy, and human agency.

  • Trigger events can catalyse action, from AI-caused disasters to whistleblower revelations that shock the public conscience.

  • Leaders must emerge who can articulate compelling visions. The movement needs its Greta Thunberg or Martin Luther King Jr.

  • Narratives must resonate, shifting from technical jargon to human stories that connect AI risks to everyday values.

  • Resources must flow: sustainable funding that allows groups to professionalise and scale their impact.

The AI safety movement shows promise in some areas but remains critically under-developed in others.

The path forward

Our report identifies several interventions that could catalyse a more effective AI safety movement:

  • A dedicated fund for ethical AI could provide flexible support to emerging groups, particularly those experimenting with novel public engagement strategies or working outside traditional NGO structures.

  • Cross-movement learning initiatives could help AI safety advocates benefit from the tactical knowledge of climate, labour, and digital rights organisers.

  • Narrative development research could identify messaging that resonates with diverse publics and counters tech industry framing.

  • Geographic expansion efforts could build AI safety infrastructure in neglected regions, ensuring the movement represents global rather than just Western concerns.

Why this matters now

Geoffrey Hinton, often called one of the "godfathers of AI", recently argued: "If we just carry on like now, just make profits, it's gonna happen, [AI will] take over. We have to have the public put pressure on governments to do something serious about it." The window for influence is narrowing. As AI systems become more powerful and embedded in critical infrastructure, course correction becomes ever harder. We need a movement that matches the scale and urgency of the challenge—one that puts human agency and democratic values at its centre. A mass AI safety movement could become the force we desperately need. The question is: will we act in time?

This is part of our new programme of work exploring how social movements can support the safe development and governance of artificial intelligence. If you know of individuals or organisations working in this area, or have ideas for who we should be speaking with, we’d love to hear from you.



