Research published Wednesday in the journal Science Advances suggests it is possible to identify bad actors, or trolls, in real time using machine learning algorithms.
According to researcher Jacob Shapiro, a professor of politics and international affairs at Princeton University, misinformation campaigns can reveal themselves in two main ways.
“To have influence, coordinated operations need to say something new, or they need to say a lot of something that users are already saying,” Shapiro told UPI in an email. “You can find the first because it’s unusual content by definition.”
Finding the second is harder, but Shapiro and his colleagues thought they could design and train a machine learning algorithm to catch trolls.
“When influence campaigns try to shift a conversation with large amounts of content, they rely on relatively low-skilled workers producing a lot of posts,” Shapiro said. “Workers are not natives of the influence targets and need to be trained on what ‘normal’ looks like. Moreover, their managers need standards to assess performance.”
According to the new study, there is no one variable that gives a troll away.
After all, social media platforms are highly dynamic environments, where users are constantly changing how they engage. As a result, Shapiro said, trolls have to adapt their content production, too.
Thanks to the machine learning capabilities of the new algorithm, this complexity didn’t prevent researchers from sussing out trolls.
“What our research shows is that in any given period for any given campaign, a large share of the troll activity looked different from normal users in discernible ways,” Shapiro said.
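The study does not publish its model, but the basic idea of flagging troll-like posts can be sketched as a supervised text classifier. The sketch below is a minimal illustration, not the researchers' actual pipeline: it assumes labeled example posts (invented here) and uses TF-IDF features with logistic regression from scikit-learn to score how troll-like a new post looks.

```python
# Minimal sketch (not the study's pipeline): score posts as troll-like
# vs. ordinary using TF-IDF text features and logistic regression.
# All posts and labels below are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data: 1 = known troll-campaign post, 0 = ordinary user post
posts = [
    "share this shocking truth they hide from you",
    "breaking news they rigged it, spread the word",
    "wake up people, the media lies every single day",
    "had a great coffee with friends this morning",
    "anyone got recommendations for a good sci-fi novel?",
    "traffic on the bridge is terrible again today",
]
labels = [1, 1, 1, 0, 0, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(posts, labels)

# The model outputs a probability, not a verdict on any single account
score = model.predict_proba(["spread the word, the media hides the truth"])[0][1]
print(f"troll-likeness score: {score:.2f}")
```

In practice, a system like the one described in the study would retrain continuously, since the "normal" baseline shifts as both ordinary users and campaign workers change their behavior.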
Researchers suggest the algorithm, once its accuracy is further improved, could be adopted and deployed by both online platforms and governments.
As with any probabilistic model, it’s likely the algorithm would make mistakes when distinguishing between genuine users and trolls.
“That’s why one should never use this kind of tool to attribute specific accounts,” Shapiro said.
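A little arithmetic shows why attribution is risky: when trolls are a small fraction of all accounts, even an accurate classifier flags many genuine users. The numbers below are assumed for illustration, not taken from the study.

```python
# Illustrative base-rate arithmetic (assumed numbers, not from the study):
# if trolls are rare, most flagged accounts can still be genuine users.
total_accounts = 1_000_000
troll_rate = 0.01           # assume 1% of accounts are trolls
sensitivity = 0.95          # fraction of trolls correctly flagged
false_positive_rate = 0.02  # fraction of genuine users wrongly flagged

trolls = total_accounts * troll_rate          # 10,000 troll accounts
genuine = total_accounts - trolls             # 990,000 genuine accounts
true_flags = trolls * sensitivity             # 9,500 trolls caught
false_flags = genuine * false_positive_rate   # 19,800 genuine users flagged

precision = true_flags / (true_flags + false_flags)
print(f"{precision:.0%} of flagged accounts are actually trolls")
```

Under these assumed rates, only about a third of flagged accounts would be genuine trolls, which is why the tool is better suited to spotting campaigns in aggregate than to accusing individuals.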
Instead, Shapiro sees the technology being used to help governments and online platforms anticipate the topics, scope and effects of a foreign influence campaign. The technology could also help moderators queue up content and accounts for more careful scrutiny.
Photo Credit: https://www.picpedia.org/clipboard/research.html