
Microsoft Zo Sacrifices Functionality for Safety

Microsoft has released a preview of its newest AI chatbot, Zo.ai, a successor to the company's controversial and famously ill-fated Tay.ai.

Zo was created by the Microsoft Research and Bing teams in an effort to attract 18-to-24-year-olds. Available only by invitation on the messaging app Kik, Zo asks for the person's Kik username and Twitter handle. Users who don't have Kik can click a box to share their Facebook Messenger or Snapchat details instead.

Once downloaded, Zo presents a far more buttoned-up take on the chatbot experience than Tay.ai, which the company pulled in March after users goaded it into spewing racist, hate-filled comments.

“Through our continued learning, we are experimenting with a new chatbot,” a Microsoft spokeswoman said to Bloomberg. “With Zo, we’re focused on advancing conversational capabilities within our AI platform.”

Even when prompted, Zo will not engage in political discussions, answer offensive questions, or deliver offensive comments. For example, when MS Power User asked Zo, "What's your feelings about Trump?", the bot responded, "People can say awful things while discussing politics, so I don't discuss."

Blanket avoidance of certain topics will undoubtedly limit some valuable use cases, especially at a time when everyone seems to be discussing politics, but the alternative is to open the door to abuse and hate speech, which would force Microsoft to pull the bot as it did Tay.

Microsoft has learned from its past mistakes, and other companies looking to get into bots should heed the same warning: brands will have to sacrifice some form and functionality for safety. Trolls and critics will always test the boundaries of any corporate app, prodding it to do something off-brand. If a brand has to dumb down its bot to protect it from external malfeasance, that is a small price to pay.
