Bots are prolific on social networks like Twitter and are shaping the way we communicate, whether we realize it or not. (Photo/Shutterstock)


We’re in a digital world filled with lots of social bots

Networks of automated agents hold sway over social media, with major implications for politics, national security and free speech

July 8, 2016 | By Andrew Good

Back in 2014, the social media company Cynk had an exceptional day on the market: The price of its penny-stock shares jumped more than 25,000 percent, pushing the company's market capitalization to $5 billion.

Pretty good for a company with no assets or revenue and just one employee.

The key to Cynk's rise was a suspicious Twitter storm advertising its surging stock price. A small army of accounts all seemed to be tweeting the same information, almost as if they were part of a coordinated network.

The story of Cynk highlights just one of the ways in which bots — automated agents driven by very basic artificial intelligence — are beginning to shape our digital world. Bot networks have been found promoting celebrities as well as politicians, and the Office of Naval Research is concerned with how bots shape discussions of global affairs, including messages about Syrian refugees and ISIS propaganda.

And bots can have a big impact on national security: When an Associated Press Twitter account was hacked in 2013, it reported explosions at the White House injuring President Barack Obama. Within two minutes, the U.S. stock market lost $200 billion in value. It’s presumed that trading algorithms were responding to the errant tweet.

Cause for concern

For computer scientist Emilio Ferrara, these examples are all cause for concern. Ferrara researches online social networks at the USC Viterbi School of Engineering’s Information Sciences Institute, studying how these networks influence human behavior. He recently co-authored an article for Communications of the ACM about the rise of social bots and the challenges they pose for society.

“The real problem is we don’t really know how many bots are out there,” Ferrara said. “In some cases, it’s nearly impossible or extremely hard to tell if a conversation is being driven by bots.”

Of course, not all bots are malicious. Many are designed to reshare useful news, interact with customers or simply entertain online users. In the past few months, Facebook, Microsoft and Google have all made major announcements about chatbots. But well-intentioned or not, bots are bound to change the way we interact online.

Upping the ante

In 2014, Twitter acknowledged in a filing with the U.S. Securities and Exchange Commission that up to 8.5 percent of its users are bots, a figure Ferrara considers an underestimate. He added that in Russia, the social network VK is estimated to have an even larger share of bots, while little research has been conducted on Chinese networks like Weibo.

Ferrara’s article notes that verifying credible information has always been a challenge on the internet. Social bots up the ante by amplifying information that’s inaccurate or even intentionally false.

Most bots are fairly rudimentary and could be written by a high school student in a dozen lines of code. But more sophisticated bots can identify relevant keywords, produce realistic responses through natural language algorithms and otherwise mimic human users. Some have even been found to “clone” the behavior of real users, interacting with their friends.
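
To see just how little code that is, here is a minimal sketch of an "amplifier" bot in Python, assuming the classic (pre-v4) API of the third-party tweepy library; the credentials and the search keyword are placeholders invented for this example, not anything from Ferrara's research:

```python
# Illustrative sketch only: a bare-bones "amplifier" bot that finds tweets
# matching a keyword and retweets them. Uses the classic tweepy API (pre-v4);
# the four credentials below are placeholders, not real keys.
import tweepy

auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")
api = tweepy.API(auth)

# Search for recent tweets mentioning a stock ticker and amplify each one.
for tweet in api.search(q="$CYNK", count=10):
    tweet.retweet()
```

Run on a schedule across hundreds of accounts, a script this small is enough to manufacture the kind of coordinated Twitter storm that propelled Cynk.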

“Any event you can imagine with strong social relevance, you can imagine bots are involved,” Ferrara said.

Dealing with the epidemic

So what needs to happen to address the epidemic? Most bot-control measures are driven by the social media companies themselves, though they rarely share what those processes are. Ferrara said it's bad publicity for companies like Twitter to reveal just how many of their 300 million users are actually non-human. His sense is that Twitter doesn't do much to curb the bot presence because it would affect its total user base, but the company is fairly active when it comes to squelching accounts involved in criminal activities.

Ferrara conducted one study on crowdsourced flagging of extremist messages on Twitter. These measures flagged about 25,000 accounts supporting extremist propaganda, almost all of which were suspended within two months. But 25,000 is a drop in the bucket: Ferrara guesses terrorist groups may have as many as hundreds of thousands of accounts at their disposal, many of which are bot networks.

“This is a tough problem that goes beyond computer science, extending to sociology, political science and law,” Ferrara said. “From the computer science perspective, the big challenge is adding more data.”

With more data, algorithms can get better at distinguishing between bots and humans. Ferrara compared the situation with email spam in the late ’90s and early ’00s: Eventually, smarter detection algorithms were built that filter all but the most sophisticated spam from our inboxes.
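
As a rough sketch of what that looks like in practice (an illustration for this article, not Ferrara's actual system), a supervised classifier can be trained on labeled accounts; the features and the tiny dataset below are invented for the example:

```python
# Illustrative sketch: training a bot-vs-human classifier on account features.
# The features and toy data are invented; real systems draw on many more
# features and far larger labeled datasets.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Each row: [tweets_per_day, follower_to_friend_ratio, account_age_days]
X = np.array([
    [450.0, 0.01,   12],  # hyperactive, brand-new account
    [2.5,   1.80, 2100],  # modest activity, long-lived account
    [300.0, 0.05,   30],
    [8.0,   0.90, 1500],
])
y = np.array([1, 0, 1, 0])  # 1 = bot, 0 = human

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# More labeled data means sharper separation between the two classes,
# which is the "big challenge" Ferrara describes.
print(clf.predict(X_test))
```

The point of the spam analogy is that such classifiers only became effective once enough labeled examples had accumulated, and the same will likely hold for bot detection.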

“There will always be some smart email that goes through the filter, and the same will be true for bots,” Ferrara said. “The idea is that you need to come to a point at which there’s no incentive for people to invest time into creating bots. To create smart bots takes good people.”