Thousands of malicious actors have exploited the democratization of the web to flood the internet with fake news and toxic content for economic or political ends. The democratization of AI heralds a further step in the manufacture of false information. Welcome to the post-industrial era of fake news.
The advent of generative AI allows new content (text, images, code, music, etc.) to be created on demand and at almost no cost. You have probably seen the examples: fake images of Pope Francis in a puffer jacket, or of Donald Trump being forcibly arrested in the street, as well as deepfakes, those fake videos of public figures.
AI has followed the same democratization process as microcomputing and the internet before it: once reserved for specialists, it has become accessible to everyone. AI is now inexpensive, accessible, and legally unregulated.
A simple example comes from Jean-Hugues Roy in Quebec Science: “For someone who knows a bit about computing and programming, it is easy to create a web interface with GPT [the artificial intelligence model behind ChatGPT]. You just provide texts from a disinformation website and ask it to translate and adapt them for Quebec, adding superlatives and sensationalist content,” explains the creator, who named his fake outlet L’Express de Montréal.
At the end of the 19th century, what came to be known as yellow journalism emerged in the United States: a form of sensationalist journalism that made headlines with misleading titles. This was already a form of industrialization of fake news. There was also extensive use of photos, sometimes modified, or artists’ interpretations.
The inventor of this type of press was the famous press magnate William Randolph Hearst, who inspired the title character of Citizen Kane, the 1941 film directed by and starring Orson Welles.
What is new with AI, however, is that it almost eliminates the need for human labor. William Randolph Hearst needed journalists to write his fake news in his yellow press. This is no longer the case today.
This benefits what are known as content farms: sites that use low-wage labor to mass-produce articles and maximize advertising revenue by driving as much traffic as possible to their pages. These farms are beginning to automate production, using AI to generate sensationalist articles and fake news in bulk. According to NewsGuard, a startup that assesses the credibility of news websites, there are currently 452 AI-generated news and current-affairs sites operating without human supervision. One of their techniques is to take existing articles and have AI rewrite them, asking it to dramatize the situation to make them more clickable.
AI-generated disinformation is not only cheaper and faster to produce; it is also somewhat more effective. In a study published in the journal Science Advances, three researchers from the University of Zurich showed that false information written by AI is more likely to be believed than false information written by humans: participants were 3% less inclined to believe fake tweets written by humans than those written by GPT-3. The AI-generated texts were reportedly more concise and more simply written.
Moreover, AI-generated articles are becoming so realistic that even news media are fooled. The Irish Times, an Irish daily, was caught publishing an article titled “Why is Irish women’s obsession with fake tan problematic?” The opinion piece, signed by a certain Acosta Cortez, accused Irish women who use tanning products of cultural appropriation and of fetishizing dark-skinned people.
The problem is that this Acosta Cortez, like the comedian Sonia Bélanger (of the bistro Le Troquet, in Gatineau), does not exist: her profile was generated by an AI. The person behind the hoax claims he wanted to make his friends laugh by mocking the excesses of wokism (cultural liberalism). What is surprising here is the AI’s ability to identify a divisive subject like cultural appropriation and to generate content that deploys its theoretical toolbox convincingly.
Several approaches to a solution have been explored. Much of the discussion centers on public education, especially in schools. Organizations such as the Quebec Centre for Media and Information Education and the Science Press Agency focus on training the public to detect fake news. Reporters Without Borders has proposed a certification system that would guarantee that a site disseminates credible information.
Europe has focused on the political regulation of technology platforms. Platforms like Google are also trying to de-list these sites.
However, many experts recommend working upstream, at the training stage of these AIs: feeding them only accurate and relevant information, which would require AI companies to work with media outlets recognized for the quality of their reporting.
But this data collection is likely to be the next battleground between news media and technology companies. Major outlets such as The New York Times, CNN, Reuters, and Le Monde have already blocked AI companies’ data-collection crawlers.
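In practice, this blocking is typically done through a site’s robots.txt file, which tells crawlers which pages they may fetch. Below is a minimal sketch in Python using the standard library’s robots.txt parser; the rules shown are illustrative, but GPTBot and CCBot are the real user-agent names of OpenAI’s crawler and Common Crawl’s crawler, both widely used to gather AI training data.

```python
from urllib.robotparser import RobotFileParser

# Illustrative robots.txt of the kind news sites now publish
# to turn away AI training crawlers while staying open to the rest.
ROBOTS_TXT = """\
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: *
Allow: /
"""

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# AI training crawlers are refused access to every page...
print(parser.can_fetch("GPTBot", "/articles/some-story"))       # False
# ...while ordinary visitors and search crawlers remain welcome.
print(parser.can_fetch("Mozilla/5.0", "/articles/some-story"))  # True
```

Note that robots.txt is only a convention: compliance is voluntary on the crawler’s side, which is why some outlets also pursue contractual and legal routes.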
You know the saying: “Once bitten, twice shy.” These media remember very well that nearly 25 years ago they gave Google News free access to their content, with the consequences we know: their pages were plundered and they got little in return. This verified content is these outlets’ wealth, and today they intend to sell it at its fair value.