Disinformation, particularly when amplified by artificial intelligence (AI), poses a growing threat to democracies worldwide by undermining access to reliable information – a right recognised under international law. As AI technologies evolve, they enable the creation and dissemination of false content at scale, raising urgent concerns about information integrity and democratic resilience. In response, a diverse array of national and regional regulatory approaches has emerged, ranging from content oversight to media literacy initiatives. These responses, however, remain fragmented and uneven across countries.
This policy brief examines global trends in disinformation governance, analysing the rising number of regulatory initiatives, the geopolitical landscape, and the implications of AI-driven manipulation. It underscores the need for balanced, human rights-based policies that combine accountability with citizen empowerment, particularly in light of commitments made through frameworks like the UN Global Digital Compact and the G20 AI Principles.
The brief proposes key recommendations for the G20, including establishing a High-Level Task Force on AI and Information Integrity, encouraging ethical AI guidelines, and fostering global cooperation. It advocates for regulatory models that balance freedom of expression with protections against hate speech, based on international human rights standards such as the UN Rabat Plan of Action. It also calls on internet intermediaries to ensure algorithmic transparency and human rights accountability.
Finally, it emphasises the role of civil society, academia, and non-governmental organisations in promoting digital literacy – especially among marginalised groups – to ensure that societies can navigate the evolving digital landscape with resilience and informed agency.