The EU has urged tech companies, including TikTok, Meta, and Microsoft, to clearly label any services that could spread AI-generated disinformation. Vera Jourova, a European Commission vice president, has called on companies to build in safeguards that prevent malicious actors from using generative AI to produce disinformation. The companies are signatories to the European Union's voluntary code of practice on disinformation, which is designed to help platforms comply with the Digital Services Act. The code includes commitments such as cutting off advertising revenue for fake news, ensuring transparency on political ads, and cooperating with fact-checkers.
While AI can be a force for good, it also raises fresh challenges for the fight against disinformation, according to Jourova. At last month's Google I/O conference, the search giant announced a feature that lets users see whether a picture is AI-generated. Twitter has also added the ability to fact-check images, citing AI-generated media as a motivation. However, Elon Musk has withdrawn Twitter from the voluntary code, a move that has drawn sharp criticism from European officials. Jourova said that Twitter has "chosen the hard way," and that its actions and compliance with EU law will be scrutinized vigorously and urgently.
In summary, the EU has called on tech companies to clearly label services that could spread AI-generated disinformation and to build in safeguards against malicious use of generative AI. While some companies have signed up to the voluntary code of practice, others, like Twitter, have withdrawn and now face scrutiny from European officials. Generative AI poses fresh challenges for the fight against disinformation, but measures such as fact-checking and transparency on political ads can help mitigate the risks.