by Micah J. Fleck
The existence of fake accounts and bots comes as no surprise to public figures on social media: fake fans, retweets, and follows routinely piggyback on growing supporter bases for advertising purposes. But a new study, conducted independently of Twitter’s own researchers, has found that nearly twice as many accounts on the service are fake as originally estimated.
According to a summary of the study by CNBC:
Researchers at USC used more than one thousand features to identify bot accounts on Twitter, in categories including friends, tweet content and sentiment, and time between tweets. Using that framework, researchers wrote that “our estimates suggest that between 9% and 15% of active Twitter accounts are bots.”
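To give a rough sense of how feature-based bot detection works, here is a toy sketch, not the USC model. The study used more than 1,000 features fed into machine-learning classifiers; the two features, thresholds, and scoring scheme below are purely hypothetical illustrations of the kinds of signals described (friend counts and time between tweets).

```python
# Toy illustration of feature-based bot scoring (NOT the USC model).
# The features, weights, and thresholds here are hypothetical.
from statistics import mean

def bot_score(followers: int, friends: int, gaps: list[float]) -> float:
    """Return a crude 0..1 bot-likeness score from two account features."""
    # Feature 1: following far more accounts than follow back
    friend_ratio = friends / max(followers, 1)
    f1 = min(friend_ratio / 10, 1.0)
    # Feature 2: machine-like regularity in tweet timing
    # (gaps = seconds between consecutive tweets; low spread looks automated)
    avg = mean(gaps)
    spread = mean(abs(g - avg) for g in gaps) / max(avg, 1e-9)
    f2 = max(0.0, 1.0 - spread)  # perfectly regular timing -> 1.0
    # Naive average of the two signals
    return (f1 + f2) / 2

# An account tweeting at exact 60-second intervals, following 5,000
# users while having only 100 followers, scores as highly bot-like.
print(bot_score(100, 5000, [60.0] * 20))
```

A real classifier would learn weights over hundreds of such features from labeled accounts rather than averaging hand-picked heuristics.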
Since Twitter currently has 319 million monthly active users, that translates to nearly 48 million bot accounts, using USC’s high-end estimate.
The report goes on to say that complex bots could have shown up as humans in their model, “making even the 15% figure a conservative estimate.”
At 15 percent, the USC estimate far exceeds Twitter’s own. In a filing with the SEC last month, Twitter said that up to 8.5 percent of all active accounts contacted Twitter’s servers “…without any discernable additional user-initiated action.”
That equates to roughly 20 million more bot accounts than Twitter’s own assessment, which could be an issue in light of analyst concerns about user growth. In a recent research report, Nomura Instinet analysts wrote that “Twitter’s revenue growth has slowed to the mid-single digits, as the platform has struggled to attract new users over the past year…”
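The figures quoted above check out arithmetically, assuming the reported 319 million monthly active users:

```python
# Back-of-the-envelope check of the bot-count figures quoted above,
# assuming Twitter's reported 319 million monthly active users.
MAU = 319_000_000

low = MAU * 0.09            # USC low-end estimate  -> ~28.7 million
high = MAU * 0.15           # USC high-end estimate -> ~47.9 million
twitter_own = MAU * 0.085   # Twitter's SEC-filing figure -> ~27.1 million

print(f"USC low end:   {low / 1e6:.1f} million")
print(f"USC high end:  {high / 1e6:.1f} million")
print(f"Gap vs Twitter's own figure: {(high - twitter_own) / 1e6:.1f} million")
```

The high-end estimate lands at about 47.9 million bots, matching the "nearly 48 million" figure, and the gap against Twitter's 8.5 percent comes to about 20.7 million, matching the "roughly 20 million" comparison.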
USC’s researchers also highlight the benefits of some bots, writing, “many social bots perform useful functions, such as dissemination of news and publications…”
But the USC report also points to the downside of bots, saying, “there is a growing record of malicious applications of social bots. Some emulate human behavior to manufacture fake grassroots political support… [and] promote terrorist propaganda and recruitment.”
Oh, good. Automated political agenda-pushing. That doesn’t sound dangerous at all.