That account you’re following on social media might not be the voice of a person at all, but a piece of code. On platforms like Twitter, networks of bots—automated accounts driven by basic artificial intelligence—have been found promoting celebrities and politicians alike, shaping discussions on everything from global affairs to extremist propaganda.
The chatter sometimes has serious consequences. When an Associated Press Twitter account was hacked in 2013, it reported explosions at the White House. Within three minutes, the Dow Jones industrial average dropped about 150 points. Trading algorithms are presumed to have picked up the tweet, rapidly acting on the misinformation.
For computer scientist Emilio Ferrara, bot-spawned troubles are cause for concern. Most bots are fairly rudimentary and can be written in just a dozen lines of code. But sophisticated ones can mimic human social media users, and sometimes “it’s extremely hard to tell if a conversation is being driven by bots,” says Ferrara, a professor at the USC Viterbi School of Engineering’s Information Sciences Institute.
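The “dozen lines of code” claim is plausible. As a rough illustration only—using made-up message templates and no real platform API, since actually posting would require an authenticated client—a rudimentary amplification bot might amount to little more than this:

```python
import random

# Hypothetical canned templates a simple amplification bot might cycle through.
MESSAGES = [
    "Breaking: #topic is trending!",
    "Everyone is talking about #topic",
    "You won't believe the news about #topic",
]

def make_post(topic):
    # Pick a template at random and fill in the topic hashtag.
    template = random.choice(MESSAGES)
    return template.replace("#topic", "#" + topic)

def run_bot(topic, n_posts):
    # A real bot would hand each string to a platform API client and
    # sleep between posts; here we just collect the generated posts.
    return [make_post(topic) for _ in range(n_posts)]

posts = run_bot("election", 3)
```

The logic is trivial—the harm comes from running many such accounts at once, which is exactly what makes coordinated campaigns cheap to mount.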
Of course, not all bots are malicious. Many share useful news. Companies like Facebook have introduced chatbots to interact with customers.
Well-intentioned or not, bots are bound to change the way we act online. Verifying information’s credibility has always been a challenge on the internet, and social bots up the ante by amplifying information that’s inaccurate or even intentionally false, Ferrara says. But he believes technology will eventually erase the bots.
He compares the situation to what’s happened to email spam: Today, detection algorithms can filter all but the most sophisticated spam from our inboxes.
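The statistical idea behind those email filters—and, by analogy, behind bot detection—can be sketched with a toy Naive Bayes classifier. This is a simplification trained on an invented four-message corpus, not any production filter: each word’s frequency in spam versus legitimate mail is compared, with Laplace smoothing so unseen words don’t zero out the score.

```python
import math
from collections import Counter

# Tiny invented training corpus for illustration.
spam_docs = ["win cash prize now", "free prize win now click"]
ham_docs = ["meeting moved to noon", "lunch at noon tomorrow"]

def word_counts(docs):
    return Counter(w for d in docs for w in d.split())

spam_counts = word_counts(spam_docs)
ham_counts = word_counts(ham_docs)
vocab = set(spam_counts) | set(ham_counts)

def log_prob(word, counts):
    # Laplace smoothing: add one to every count so unseen words
    # get a small nonzero probability instead of breaking the math.
    return math.log((counts[word] + 1) / (sum(counts.values()) + len(vocab)))

def is_spam(message):
    # Sum the per-word log-likelihood ratios; positive means
    # the message looks more like the spam corpus.
    score = sum(log_prob(w, spam_counts) - log_prob(w, ham_counts)
                for w in message.split())
    return score > 0
```

With this toy model, `is_spam("win a free prize")` comes out spammy and `is_spam("meeting at noon")` does not; real filters work the same way at vastly larger scale, which is why only the most carefully crafted spam slips through.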
“There will always be some smart email that goes through the filter, and the same will be true for bots,” Ferrara says. “The idea is that you need to come to a point at which there’s no incentive for people to invest time into creating bots.”