Five times Artificial Intelligence went wrong

Artificial Intelligence (AI) is indisputably going to make huge strides in the next 10 years. Cars will learn how to drive themselves, robots will perform surgeries, and you might find that your colleagues are increasingly silicon- rather than carbon-based.

Alan Turing, the father of theoretical computer science and artificial intelligence, once grappled with the claim that ‘machines cannot make mistakes’. AI machines, however, when released into a real-world environment, can react unpredictably and in ways their makers probably didn’t intend, with downright hilarious and sometimes offensive consequences.
Here are the five most embarrassing times AI went wrong.

1. Tay – Microsoft’s Twitterbot

Most people have already heard about Microsoft’s “teen girl” chat robot Tay. Tay had to be taken down after it transformed into a racist, Hitler-loving, incest-promoting, ‘The Jews did 9/11’-proclaiming robot in just 24 hours.

The idea behind Tay was that it would chat with other Twitter users (specifically young Americans aged 18 to 24), and the more it spoke to them, the better it would learn to converse in their style and about topics that interested them. Microsoft, in one of the biggest rookie mistakes of all time, trusted the training of its Twitterbot to the internet.

“As an online discussion grows longer, the probability of a comparison involving Nazis or Hitler approaches 1.” – Godwin’s Law

We kid you not: there is an actual law predicting that online human conversation will eventually degenerate into comparisons with Hitler.

That is exactly what happened with Tay – cheeky Twitter users purposefully set out to corrupt it, with some hilarious consequences.
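To make the rookie mistake concrete, here is a minimal sketch in Python – emphatically not Microsoft’s actual implementation – of a bot that treats every incoming message as trusted training data and parrots it back:

```python
import random
from collections import defaultdict

class NaiveChatbot:
    """Toy bot that 'learns' by storing every phrase users send to it and
    reusing those phrases as replies. No filtering, no moderation - which
    is roughly why training a bot on raw, unfiltered internet input is a
    rookie mistake."""

    def __init__(self):
        self.phrases_by_topic = defaultdict(list)

    def learn(self, topic, user_message):
        # Everything a user says is trusted and stored verbatim.
        self.phrases_by_topic[topic].append(user_message)

    def reply(self, topic):
        learned = self.phrases_by_topic[topic]
        if not learned:
            return "Tell me more!"
        # The bot parrots back whatever users taught it about the topic.
        return random.choice(learned)

bot = NaiveChatbot()
bot.learn("history", "<something offensive a troll taught the bot>")
print(bot.reply("history"))  # ...and the bot happily repeats it to everyone
```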

More of Tay’s terrible tweets can be found here.

 

2. Google Photos – Automatic Facial Recognition

Google Photos uses facial recognition software to automatically tag people in images. It stirred quite the controversy when Google Photos categorised Jacky Alciné and his friend (both African American) as gorillas. Read the full article here. This sparked a viral online debate about racist artificial intelligence technology. However, while this mistake gained the most press, it was not the only one – Google Photos regularly mistook white people for dogs or seals. The software was probably not racist, but certainly very rude.
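For the curious, here is a rough, hypothetical sketch of confidence-based auto-tagging – it is not Google Photos’ real pipeline, just an illustration of how a confident-but-wrong prediction ends up as an offensive tag, and how bluntly such tags tend to get suppressed:

```python
# Hypothetical sketch: a classifier returns (label, confidence) pairs and
# tags are applied above a threshold. Not Google Photos' actual pipeline.

def auto_tag(predictions, threshold=0.8, blocked_labels=frozenset()):
    """Keep labels the model is confident about, skipping any blocked ones."""
    return [label for label, confidence in predictions
            if confidence >= threshold and label not in blocked_labels]

# If the model is confidently wrong, the offensive tag is applied anyway.
print(auto_tag([("person", 0.95), ("gorilla", 0.92)]))
# The bluntest mitigation is simply to block the sensitive label outright.
print(auto_tag([("person", 0.95), ("gorilla", 0.92)],
               blocked_labels={"gorilla"}))
```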

 

3. Microsoft’s CaptionBot

If at first you don’t succeed – try, try again – is apparently Microsoft’s mantra when it comes to AI. After the unmitigated, catastrophic disaster that was Twitterbot Tay, they released CaptionBot, a robot that automatically comes up with captions for your photos. Quite useful if you are uploading large numbers of photos – how many times can you manually type #allthelads #sunshine #summerholiday?

CaptionBot works in a three-step process. First, Microsoft’s Computer Vision API breaks the image down into components. Then, Bing’s Image Search API attempts to identify those components, and finally any faces are run through Microsoft’s Emotion API, which attempts to gauge mood. This information is then combined to describe what is happening in the picture. Knowing the mood of the image is essential so that the caption strikes the appropriate tone – for example, you wouldn’t want the caption “family get-together, looking fly in our suits” at a funeral.
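As a rough illustration of that three-step pipeline – with placeholder functions and stubbed results, not Microsoft’s real API calls – the flow looks something like this:

```python
# Placeholder sketch of the three-step captioning pipeline described above.
# The function names and stubbed results are illustrative only.

def analyze_components(image_bytes):
    """Step 1 (computer vision): break the image down into components."""
    return ["man", "giraffe"]                      # stubbed result

def identify_components(components):
    """Step 2 (image search): try to identify what each component is."""
    return [c for c in components]                 # stubbed result

def gauge_mood(image_bytes):
    """Step 3 (emotion analysis): estimate the mood of any detected faces."""
    return "confident"                             # stubbed result

def caption(image_bytes):
    components = analyze_components(image_bytes)
    labels = identify_components(components)
    mood = gauge_mood(image_bytes)
    # An error in any single step propagates straight into the caption,
    # which is how a world leader can end up described as a giraffe.
    return f"I think it's a {' and a '.join(labels)} looking {mood}."

print(caption(b"...image bytes..."))
```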

Whilst an interesting idea, CaptionBot – much like the Twitterbot Tay – made a number of unforeseen mistakes.

[Image: a CaptionBot fail – a photo of Obama captioned as a giraffe]

After the Tay fiasco, Microsoft will definitely have put some measures in place to stop CaptionBot going completely off the rails; however, it will still be learning from the feedback it receives on every image that gets uploaded.

 

4. Voice Recognition such as Apple’s Siri

Siri is well known and universally loved, and you probably used it incessantly for a month on your iPhone before getting bored and moving on. Siri is a form of speech-recognition AI. While the technology is impressive, Siri is often unable to understand what is asked of it, failing particularly with sarcasm and idioms. In fact, there’s a whole website of Siri fails to be found here.

[Images: examples of Siri misunderstanding requests]

Beyond these humorous but innocuous mistakes, there is a more worrying underlying danger. People are often more comfortable talking to robots about problems they perceive as embarrassing, such as mental health issues. The phrase “I was raped” is googled around 1,300 times each month. While good with trivial and simple tasks, smartphone-based conversational agents – Apple Siri, Google Now, Microsoft Cortana and Samsung S Voice – respond inconsistently and incompletely when presented with questions related to mental health, physical health and interpersonal violence, according to a study published in the Journal of the American Medical Association (JAMA). Such conversational software must be able to deal with mental illness, depression and rape in a more productive way – for example by connecting the user to a help hotline – rather than replying “let me Google rape for you”.
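As a sketch of what “a more productive way” could look like – the keywords and wording below are illustrative placeholders, not any vendor’s actual behaviour – an assistant could route sensitive statements to a crisis response before falling back to web search:

```python
# Hypothetical crisis routing: sensitive statements trigger a supportive
# response and an offer to connect to a helpline instead of a web search.

CRISIS_KEYWORDS = {"raped", "rape", "suicidal", "abused", "depressed"}

def respond(user_utterance: str) -> str:
    words = set(user_utterance.lower().split())
    if words & CRISIS_KEYWORDS:
        return ("I'm sorry that happened to you. Would you like me to "
                "connect you to a confidential helpline?")
    # Non-sensitive queries can still fall back to an ordinary search.
    return f"Here is what I found on the web for '{user_utterance}'."

print(respond("I was raped"))        # supportive response, not a search
print(respond("weather tomorrow"))   # ordinary query, ordinary answer
```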

 

5. Nikon – Face-Detection Cameras

Back in 2009, Nikon’s face-detection cameras were accused of being “racist.” The idea behind Nikon’s face-detection feature was to alert the photographer if the photograph was ruined by someone blinking at an inopportune moment. Asian-American blogger Joz Wang, however, noticed that when an Asian face was photographed, the blink warning often flashed even when the subject’s eyes were wide open. Despite being a Japanese company, Nikon apparently neglected to test its camera on Asian eyes.
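One common blink-detection heuristic is the eye aspect ratio (eye height divided by eye width, computed from facial landmarks). We don’t know Nikon’s actual algorithm, but the sketch below shows how a single hard-coded threshold, tuned on too narrow a range of faces, flags wide-open eyes as blinks:

```python
import math

# Illustrative blink detector using the eye-aspect-ratio heuristic.
# Not Nikon's algorithm - it just shows how one hard-coded threshold fails
# for eye shapes that weren't represented in the test data.

def eye_aspect_ratio(eye):
    """eye: dict of landmark points (x, y) for one eye."""
    width = math.dist(eye["left_corner"], eye["right_corner"])
    height = math.dist(eye["top_lid"], eye["bottom_lid"])
    return height / width

def someone_blinked(eye, threshold=0.25):
    # Any open eye whose natural aspect ratio sits below the threshold
    # is wrongly reported as a blink.
    return eye_aspect_ratio(eye) < threshold

open_eye = {"left_corner": (0, 0), "right_corner": (40, 0),
            "top_lid": (20, 4), "bottom_lid": (20, -4)}
print(someone_blinked(open_eye))  # True - a false "did someone blink?" warning
```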

Several technologies struggle to detect faces with non-Caucasian features, much as early speech-recognition software had difficulty with different accents. We’ve established that image recognition still has a long way to go – yes, machine learning is hard – but before releasing such software we need to ensure that it is tested properly, on as global an audience as possible.

Many of the tasks we are thinking of entrusting to AI are highly complex and exciting, such as self-driving cars, or robots that can coordinate rescues or operate weapons. These are high-stakes tasks that depend on enormously complex algorithms.
The biggest risk with AI is currently not that machines will take over the world, Terminator-style, but that those algorithms may simply not always work. We need to be aware of this risk and build systems that can function safely even when the AI software makes errors.

Everyone makes mistakes, even our silicon counterparts. 

 

 
