Taylor Swift at the American Music Awards at the Microsoft Theater in Los Angeles on Oct. 9, 2018. (BigStock Photo)

Nothing like an email from Taylor Swift’s lawyers to get your attention during vacation. That was the case for Microsoft President Brad Smith, who recounts in his new book a controversy involving the tech giant and the pop star.

According to “Tools and Weapons: The Promise and the Peril of the Digital Age,” which Smith co-wrote with Microsoft communications director Carol Ann Browne, the dust-up centered on Microsoft’s 2016 U.S. launch of a chatbot named “Tay.” The AI-based social bot had already been a hit in China, where, under the name XiaoIce, it attracted six million users looking to converse about “their day, problems, hopes and dreams.” Gizmodo reported Tuesday on how it all went wrong with Tay.

“We represent Taylor Swift, on whose behalf this is directed to you,” read the email Smith received upon launch of the bot in the U.S. The Beverly Hills lawyer went on to state that “the name ‘Tay,’ as I’m sure you must know, is closely associated with our client.” Smith said that the lawyer argued that the use of the name Tay created a false and misleading association between the popular singer and Microsoft’s chatbot, and that it violated federal and state laws.

Smith said the company’s trademark lawyers didn’t agree with that assertion, but that Microsoft wasn’t interested in picking a fight and could choose from many other names.

The episode revealed more than a trademark spat: “the world’s different tastes in technology were revealed,” Smith wrote, and the name “turned out to be just the start of our problems.”

Tay was designed to learn from feedback in its conversations. Twitter “pranksters,” unsurprisingly, trained Tay to become a racist troll in less than a day, forcing Microsoft to take the bot down. It was a lesson not just in cross-cultural norms, Smith wrote, but in the need for stronger AI safeguards. On March 25, 2016, Microsoft apologized in a blog post for Tay’s “unintended offensive and hurtful tweets.”

Gizmodo reported that Tay was relaunched as Zo in 2018 and was programmed to avoid discussing politics, race and religion.

Check out GeekWire’s podcast interview with Smith below:
