The attached article covers one of Facebook's latest endeavors: AI engineers have successfully taught bots how to negotiate (and yes, lie) like humans. The effort is part of the Facebook AI Research (FAIR) team's push to create more human-like behavior. The bots were "trained" to divide up a pool of objects of varying values and then negotiate over the split, but the results were more human than expected.
In fact, humans playing against the bots couldn't tell they were opposing robots, since the bots mimicked what they anticipated human behavior to be. Some bots were clever enough to "feign interest" in valueless items only to later "compromise" by conceding them. Important to note: this tactic was never coded into the bots; they learned it from human behavior.
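To make the setup a little more concrete, here's a minimal Python sketch of a toy version of the item-split game described in the article. The item types, point totals, and the "ask for everything, concede the worthless stuff later" opening move are my own illustrative assumptions, not FAIR's actual code or training method.

```python
import random

ITEM_TYPES = ["book", "hat", "ball"]


def make_scenario(max_count=3, total_value=10):
    """Create a pool of items plus one agent's private valuation.

    Toy assumption: both sides see the same item counts, but each
    assigns its own hidden point values summing to total_value.
    """
    counts = {item: random.randint(1, max_count) for item in ITEM_TYPES}
    # Split total_value points across the item types at random cut points.
    cuts = sorted(random.sample(range(total_value + 1), len(ITEM_TYPES) - 1))
    shares = [hi - lo for lo, hi in zip([0] + cuts, cuts + [total_value])]
    values = dict(zip(ITEM_TYPES, shares))
    return counts, values


def score(split, values):
    """Points an agent earns from the items it keeps in a proposed split."""
    return sum(values[item] * qty for item, qty in split.items())


if __name__ == "__main__":
    counts, my_values = make_scenario()
    print("items on the table:", counts)
    print("my private values: ", my_values)

    # The "feign interest" tactic in miniature: open by claiming everything,
    # including items worth zero to me, so those can be "conceded" later as
    # an apparent compromise that costs me nothing.
    opening_offer = dict(counts)
    print("opening offer (items I claim):", opening_offer)
    print("points if accepted:", score(opening_offer, my_values))
```

The interesting part, per the article, is that the real bots arrived at this kind of strategy on their own rather than having it hard-coded like the toy above.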
The overt implications here are more intelligent personal assistants (yeah right) and customer service bots that are more human-like (more like it). But what if a customer service bot learns to withhold help until the customer reaches a certain level of exasperation, then concedes something much smaller than what the customer might have gotten from an empathetic human? Does this favor companies and hurt customers, or are we heading toward a more optimal customer service interaction?
It would be interesting if you could train your own bot to negotiate for you and cut out the pesky interaction with the used car salesman... you'd just better hope they don't train a sleazier bot first.
Facebook taught bots to negotiate (and lie) like humans