Italy moves ahead on AI and right to information, rest of Europe should follow suit

As Europe approaches elections, Italy is working on a bill to regulate the development and use of artificial intelligence. Reporters Without Borders (RSF) welcomes this ambitious bill, which – if adopted in its current form – would strengthen Italians' right to reliable information, in particular by combatting deepfakes, and calls on other European Union member states to study it.

A provisional version of this bill, which was initiated by the government, was published at the start of April. It establishes principles for financing and supervising AI in many areas, including health, employment, education and research, copyright, and national security. But it is on protecting the right to information that the bill is, at this stage, most ambitious. It aims to require AI systems to respect the integrity and pluralism of information, and provides for criminal sanctions for the publication of deepfakes that harm others.

“The uncontrolled development of AI poses a major threat to the right to reliable information and, therefore, to democracy and fundamental rights. In Europe as elsewhere, the current legislative framework falls far short of what is needed to effectively protect this right. In this regard, the measures proposed by Italy to regulate AI in the online information arena are promising, and we urge governments – first and foremost those of the EU – to draw inspiration from them. We will, in the meantime, pay close attention to this bill with the aim of ensuring that, if adopted, it remains fully respectful of press freedom.”

Arthur Grimonpont

Head of RSF’s AI and Global Challenges desk

The bill, which could be amended as it passes through the Council of Ministers, Chamber of Deputies and Senate, aims to limit the risks that AI’s development poses to the economy, society and fundamental rights.

To protect the right to reliable information, it establishes the general principle that “the use of artificial intelligence systems in the field of information [must be carried out] without compromising media freedom and pluralism, freedom of expression, and the objectivity, completeness, impartiality and integrity of information.” This provision has a particularly broad scope given the central role that recommendation algorithms play in the dissemination of information in the digital arena, from social media to search engines.

Transparency and identification of AI-produced content

The bill also aims to impose strict transparency on AI-generated images and sounds, saying these should be “clearly identified as such by visible marks or audio announcements to inform users of the artificial nature of the content.” Digital platforms and media would also be required to implement the necessary means to guarantee this transparency – a principle already affirmed in the Paris Charter on AI and Journalism with regard to the media, and by the EU guidelines for platforms and search engines following adoption of the Digital Services Act (DSA).

Criminal sanctions

As regards generative AI, the bill does not limit itself to requiring transparency – a requirement already enshrined in the EU’s AI Act and considered insufficient by many stakeholders. In line with RSF’s recommendations for limiting the proliferation of deepfakes, the bill envisages criminal sanctions for anyone who causes “wrongful harm to others” by publishing audio-visual content “modified or manipulated by AI so as to mislead regarding its authenticity or origin.” The offence would carry a sentence of one to five years in prison.

RSF will remain attentive to ensuring that the transparent use of deepfakes, without any intention to deceive the public, remains lawful – especially for content of a humorous or satirical nature.

As they stand, the bill’s measures constitute a positive signal at a time when AI-generated content is already being used to impersonate journalists and others, manipulate opinion and influence the outcome of elections. But RSF says they should be supplemented by an obligation on providers of generative AI systems to prevent the creation of harmful or dangerous deepfakes.

To this end, RSF calls for the rapid adoption by the media and digital platforms of technical standards guaranteeing the origin of authentic content, such as those developed by the Coalition for Content Provenance and Authenticity (C2PA).
