How lying computers could help train next-generation negotiators
Virtual humans — complete with realistic bargaining skills — may soon be a normal part of a business education
This video from the USC Institute for Creative Technologies shows negotiation between a person and a virtual human.
At USC, researchers are studying how to train the next generation of negotiators — and doing so will require teaching machines how to convincingly lie.
Using training programs called virtual humans, computer scientists want to help tomorrow’s leaders realize when the person sitting across from them is bluffing their way to a better deal. Virtual humans already exist to train users in leadership and communication skills; someday soon, they could be a normal part of a business education.
Jonathan Gratch, director of the Virtual Humans Research team at the USC Institute for Creative Technologies, will present a conference paper in May outlining one of the challenges for building successful negotiation programs.
All’s fair in love and bluffing
“The Misrepresentation Game: How to Win at Negotiation While Seeming Like a Nice Guy” will be presented at the Autonomous Agents and Multiagent Systems International Conference in Singapore. The paper was co-authored by doctoral students Zahra Nazari and Emmanuel Johnson; it was sponsored by the National Science Foundation and the U.S. Army.
As the title suggests, the technique the study explored is all about bluffing while seeming fair. In negotiation, it is known as the “fixed-pie lie.”
The idea is that people arrive at a negotiation expecting a win-lose outcome; they don’t think to ask what their opponent is willing to compromise on, and they will cede more than they have to if the opponent keeps turning down each deal.
In a study Gratch led, participants were fooled into accepting worse deals when their computer opponent expressed disappointment.
Gratch and his colleagues recruited 75 study participants from Amazon’s Mechanical Turk, asking them to negotiate over baskets of fruit. The computer would claim to want all the fruit — though in reality it only cared about certain kinds. When the participants gave in and split the fruit evenly, the computer would begrudgingly accept, saying, “I’m not happy, but I want to be fair.”
That “concession” tricked the human participants into thinking the computer was giving up more than it really was.
“People tend to believe we’re fighting over the same things, so you’re inclined to believe the fixed-pie lie,” Gratch said. “With this technique, if you realize early on that you can grow the pie, you can pretend that it’s fixed to make your opponent believe they got half of the pie.”
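The dynamic Gratch describes can be made concrete with a toy model. The sketch below is hypothetical: the fruit types, point values, and allocations are invented for illustration and are not the study’s actual payoffs. It shows why an “even” split of items is not an even split of value when the two sides want different things, which is exactly the asymmetry the fixed-pie lie conceals.

```python
# Hypothetical multi-issue "fruit basket" negotiation. All point values
# and quantities are invented for illustration, not taken from the study.

# Each side's private value per unit of each fruit.
human_values = {"apples": 3, "bananas": 3, "oranges": 3}
agent_values = {"apples": 5, "bananas": 5, "oranges": 0}  # agent doesn't care about oranges

supply = {"apples": 4, "bananas": 4, "oranges": 4}

def utility(allocation, values):
    """Total points a side earns from its share of each fruit."""
    return sum(values[item] * count for item, count in allocation.items())

# Naive even split: the human assumes a fixed pie and divides everything 50/50.
even_human = {item: n // 2 for item, n in supply.items()}
even_agent = {item: n - even_human[item] for item, n in supply.items()}

# Integrative split: the human keeps all the oranges (worthless to the agent)
# in exchange for a matching share of apples and bananas.
smart_human = {"apples": 2, "bananas": 2, "oranges": 4}
smart_agent = {item: supply[item] - smart_human[item] for item in supply}

print("even split:  human", utility(even_human, human_values),
      "agent", utility(even_agent, agent_values))
print("smart split: human", utility(smart_human, human_values),
      "agent", utility(smart_agent, agent_values))
```

In this toy model the even split gives the human 18 points while the integrative split gives 24, at no cost to the agent. An agent running the fixed-pie lie claims to value every fruit equally, so the human never discovers the oranges are free value.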
Such a deal!
In future experiments, he said, participants should be taught how and when to make counteroffers. These could force the computer opponents to reveal that they don’t actually want the same things as the human participants. Counteroffers could also highlight the risk of misrepresentation: you look untrustworthy, which hurts your ability to strike future deals.
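One way to see how a counteroffer can expose a bluffing opponent is to sketch it as a probe. Everything here is hypothetical: the acceptance threshold and point values are invented, not the study’s procedure. Two counteroffers give the agent the same total number of items but a different mix; if the agent treats them differently, its claim to value everything equally cannot be true.

```python
# Hypothetical probe of an opponent's preferences via counteroffers.
# The acceptance rule and point values are invented for illustration.

agent_values = {"apples": 5, "bananas": 5, "oranges": 0}

def agent_accepts(offer_to_agent, threshold=20):
    """The agent accepts any offer worth at least `threshold` points to it."""
    value = sum(agent_values[item] * n for item, n in offer_to_agent.items())
    return value >= threshold

# Two counteroffers giving the agent the same number of items (6),
# but a different mix of fruit.
offer_a = {"apples": 3, "bananas": 3, "oranges": 0}
offer_b = {"apples": 1, "bananas": 1, "oranges": 4}

print(agent_accepts(offer_a))  # accepted: apples and bananas matter to it
print(agent_accepts(offer_b))  # rejected: the oranges are worthless to it
```

The asymmetric responses reveal that the pie is not fixed, which is the insight the training aims to teach.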
Gratch is working with USC Marshall School of Business faculty who teach negotiation skills. Currently, these skills are taught through classroom lectures and pen-and-paper roleplaying. Virtual humans, which are already used by organizations like the U.S. Army to teach leadership and communication, could provide believable negotiation scenarios in a consistent way.
Virtual humans are also useful because negotiation is an inherently anxiety-provoking task, Gratch said.
“Many people are anxious about a salary negotiation,” he said. “You feel safer in a scenario like this. You don’t worry about getting things wrong. And it provides scaffolding: You learn the easy stuff before you get to the harder stuff.”
As more courses move online and negotiation happens in more virtual spaces, being able to access training programs from anywhere in the world could make these skills easier to develop.
“The thing I’m excited about is you can really put this in a concrete mathematical framework,” Gratch said. “We can start proving things and covering different negotiation scenarios. The next step is putting virtual human agents on the Web.”
Virtual humans mimic realistic social behavior in customizable, replayable scenarios. Users can see how their interactions with co-workers, employees — or in this case, negotiators — can be modified for a more desirable outcome.