China, Russia Target Audiences Online With Deep Fakes, Replica Front Pages 

Looking directly into the camera, the news anchor speaks of the importance of U.S.-Chinese cooperation in supporting the global economy.

At first glance, the woman appears to be presenting a regular newscast. But neither the broadcaster nor the Wolf News branding on the video is real. It's a deep fake, generated by artificial intelligence.

Viewers who look closely may spot a few clues that something is off: the voice sounds unnatural, and it doesn't sync with the movement of the anchor's mouth.

The video is one of two that appeared on social media in posts that seemed to promote the interests of the Chinese Communist Party (CCP), New York-based research firm Graphika said in a report last month.

Advances in generative AI tools have sparked concerns about the technology's capacity to create and disseminate disinformation at an unprecedented scale. The fake news anchors feed into those concerns.

Those technological advances come as a February report from the European Union describes the multipronged approach China and Russia are taking to try to control narratives about everything from foreign policy to the war in Ukraine.

The use of fake news anchors itself wasn't the most surprising aspect for Tyler Williams, director of investigations at Graphika. In 2018, The Guardian reported that China's state-run news outlet Xinhua had presented the world's first AI news anchor.

Still, Williams told VOA, "we were initially surprised to see it within this context."

Graphika came across the news anchor deep fakes on platforms including Facebook, Twitter and YouTube while monitoring pro-China disinformation operations that the research firm has dubbed "spamouflage."

First identified in 2019, spamouflage refers to an extensive network of Beijing-linked accounts that disseminate pro-China propaganda.

"We've been tracking this spamouflage IO [influence operation] campaign for several years now," Williams said. "And this is the first time we've seen this campaign use this kind of technique or technology."

A spokesperson at the Chinese Embassy in Washington told VOA the Graphika report "is full of prejudice and malicious speculations" that "China firmly opposes."

"In recent years, some Western media and think tanks have slandered China's use of fake social media accounts to spread so-called 'pro-China' information," the spokesperson said via email. "China believes that every social media user has the right to voice his or her own voice."

Multiple bodies, however, have documented how China censors social media and even jails users who criticize the government.

Trust erosion

The skill and efficiency with which AI can generate disinformation are particularly worrying to Williams.

"The bigger concern is just the continued erosion of trust — whether it's news media, or news published on social media platforms. That level of authenticity is more and more in question as we see this scale up, which we assume it will," he said. "To me, that's the primary concern. Do we end up in this final, zero-trust, cynical environment where everything is fake?"

"That's kind of a doomsday scenario," Williams quickly added. These developments shouldn't be blown out of proportion yet, he cautioned.

Currently, the technology is far from being perfected, according to Bill Drexel, who researches artificial intelligence at the Center for a New American Security think tank in Washington.

"When I saw the videos initially, I thought it was almost humorous because they didn't go with a particularly high-quality deep fake," he told VOA. "But it's kind of a dark omen of things to come, as far as disinformation abroad goes."

"China's kind of infamous for its foreign disinformation being tone deaf and often counterproductive," Drexel said.

But China is not alone in using technology for disinformation.

The report from the EU External Action Service focused on Russian and Chinese disinformation and found that Moscow supports operations that impersonate international media outlets.

The study analyzed a sample of 100 cases of what it terms "information manipulation" from October through December. Sixty of the examples were tied to the Russian invasion of Ukraine. Moscow's aim is to distract audiences, deflect blame or direct attention to different topics, the report found.

"This war is not only conducted on the battlefield by the soldiers, it is waged in the information space trying to win the hearts and minds of the people," EU foreign policy chief Josep Borrell said in a February speech. "We have plenty of evidence that Russia is behind coordinated attempts to manipulate public debates in open societies."

Print and TV media are the most frequent targets of Moscow's impersonation, particularly in campaigns aimed at Ukraine.

The report cited four cases where fake cover pages imitating European satirical magazines were created to attack Ukraine and President Volodymyr Zelenskyy.

"Nobody is off limits from seeing their identity or brand misused," the report said. "Threat actors use impersonation to add legitimacy to their messages and to reach and affect audiences familiar with and trusting the impersonated entities."

The Russian Embassy in Washington did not reply to VOA's email requesting comment.

The spoofing strategy is rudimentary, but disinformation doesn't need to be sophisticated to be effective, said Nika Aleksejeva, who researches Russian disinformation at the Atlantic Council's Digital Forensic Research Lab.

Sometimes the basics work better, she said.

Attributing fabricated stories to real media outlets makes the lies more plausible. The aim is to make readers think, "'If this legitimate media outlet wrote about it, it must be true,'" Aleksejeva said.

This strategy is particularly effective because when readers click links on the fake pages, they are taken back to the real news site, according to Aleksejeva.

"It takes more vigilance from a reader to actually notice that something is off," she told VOA from Latvia's capital, Riga.

Aleksejeva is also concerned about how generative AI could be used to supercharge disinformation campaigns. Now, she said, it's as easy as feeding an AI tool some details and asking for a story.

"The volume will definitely change," she said. "It's just so much easier to invent a story."

Source: VOA
