AI to drive fake news
Expect the next generation of fake-news activists to use the powerful toolset of digital advertising to pursue their goals. In an artificial-intelligence-driven world, they will be able to deploy disinformation campaigns with real-time sentiment analysis, multichannel automated content distribution through organic posts and ad buys, and contingency-based responses to current events, reckon Dipayan Ghosh and Ben Scott in the #DigitalDeceit report from New America's Public Interest Technology and Harvard Kennedy School.
They argue it is quite possible that Russian disinformation campaigns worked well despite mediocre tradecraft, with practitioners simply taking advantage of standard tools designed to deliver targeted persuasive messages to tens of millions of people at low cost and with minimal transparency.
They also benefited from the fact that many other domestic political actors were running paid and unpaid content on social media to promote salacious, divisive or emotionally manipulative political messages. And once AI-driven audience targeting has locked onto a successful combination of demographics, messages and attention-spending user behaviour, it will naturally steer all similar content into the same pathways.
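The "locking on" dynamic described above resembles a standard explore/exploit optimisation loop. The sketch below is purely illustrative and not from the report: a simple epsilon-greedy routine (with invented demographic and message labels, and a simulated engagement function standing in for real user behaviour) that, like an ad platform's optimiser, steadily shifts delivery toward whichever demographic/message pairing earns the most engagement.

```python
import random

# Hypothetical arms: each is a (demographic, message-style) pairing.
# All names here are invented for illustration.
ARMS = [("18-29", "divisive"), ("18-29", "salacious"),
        ("50+", "divisive"), ("50+", "salacious")]

def simulate_engagement(arm):
    """Stand-in for real user behaviour: one pairing happens to resonate."""
    base_rate = 0.6 if arm == ("50+", "divisive") else 0.2
    return 1 if random.random() < base_rate else 0

def optimise(rounds=20000, epsilon=0.1):
    """Epsilon-greedy targeting loop: mostly exploit the best-known
    pairing, occasionally explore; returns the winning pairing."""
    clicks = {arm: 0 for arm in ARMS}
    shows = {arm: 0 for arm in ARMS}
    for _ in range(rounds):
        if random.random() < epsilon:
            arm = random.choice(ARMS)  # explore a random pairing
        else:
            # Exploit: pick the best observed rate (optimistic for unseen arms)
            arm = max(ARMS, key=lambda a: clicks[a] / shows[a] if shows[a] else 1.0)
        shows[arm] += 1
        clicks[arm] += simulate_engagement(arm)
    # The pairing with the best observed engagement rate wins the budget
    return max(ARMS, key=lambda a: clicks[a] / max(shows[a], 1))
```

Once such a loop identifies a winning pairing, nearly all subsequent delivery flows through it, which is the "same pathways" effect: any similar content, whoever placed it, inherits the already-optimised route to that audience.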
“Disinformation campaigns are functionally little different from any other advertising campaign, and the leading Internet platforms are equipped with world class technology to help advertisers reach and influence audiences,” write Ghosh and Scott. “That is the business. As such, the economic incentives of the platforms and the political objectives of disinformation operators are aligned.”