Accident involving Uber vehicle: Autonomous car mistook woman for plastic bag

A software error probably caused the fatal accident involving an Uber vehicle in the US. Image recognition software also makes other hair-raising errors.

The cause of the first fatal accident involving a self-driving car has probably been determined. According to a report in the US online magazine The Information, a software error in the Uber vehicle led to the collision with a woman in the US state of Arizona. The 49-year-old had been pushing her bicycle across the road in the dark in March. The self-driving car of the ride-hailing company Uber continued without braking, and the test driver at the wheel had not been paying attention to the road.

According to the report, the vehicle's sensors did register the 49-year-old, but the software apparently did not recognize her as a human with a bicycle. Instead, it classified her as a so-called "false positive," say two people familiar with the analysis of the incident: a detection the system may ignore, such as a plastic bag or a newspaper blowing across the road.
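To illustrate the failure mode: a perception stack typically tags each detection with a class label and a confidence score, and a downstream planner decides which detections it may ignore. The following Python sketch is purely hypothetical (the class names, threshold, and structure are assumptions for illustration, not Uber's actual code) and shows how a misclassified pedestrian can be silently filtered out.

```python
# Hypothetical sketch, NOT Uber's actual code: a planner step that
# decides whether a detected object requires braking.
from dataclasses import dataclass

# Classes the planner is allowed to ignore, e.g. wind-blown debris.
# (Assumed names for illustration.)
IGNORABLE_CLASSES = {"plastic_bag", "newspaper", "leaves"}

@dataclass
class Detection:
    label: str         # class assigned by the image-recognition model
    confidence: float  # model confidence in [0, 1]

def requires_braking(det: Detection, min_confidence: float = 0.5) -> bool:
    """Return True if the planner should treat the detection as an obstacle.

    The danger illustrated by the Uber crash: if the classifier assigns
    a pedestrian the label of an ignorable class, this filter silently
    drops a real obstacle.
    """
    if det.confidence < min_confidence:
        return False  # treated as a false positive and ignored
    return det.label not in IGNORABLE_CLASSES

# A pedestrian misclassified as a plastic bag is waved through:
print(requires_braking(Detection(label="plastic_bag", confidence=0.9)))  # False
```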

Computers have made enormous leaps in image recognition in recent years. Nevertheless, it is not only moving objects that software can misinterpret. Even on supposedly easy tasks such as recognizing traffic signs, artificial intelligence (AI) can still fail, as various studies have shown.

Dangerous stickers on signs

It often becomes problematic when there is foreign material on the signs. In a joint study, for example, scientists from several US universities stuck small black-and-white strips onto stop signs. Although the "STOP" lettering itself remained fully legible, 73 percent of the images were misclassified. Similar mistakes could happen in real traffic because of graffiti or signs covered with stickers.
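A rough sense of why a small sticker can have such an outsized effect: for a classifier, a localized patch of pixels pushed in the right direction can outweigh the rest of the image. The toy Python sketch below uses a made-up linear classifier, not the study's model; the weights, image size, and patch size are all assumptions for illustration.

```python
# Toy "sticker" attack on a made-up linear classifier (not the study's
# model): a small patch of strongly altered pixels outweighs the rest
# of the otherwise unchanged image.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(2, 8 * 8))   # 2 fake classes over an 8x8 "sign"

def predict(img: np.ndarray) -> int:
    return int(np.argmax(W @ img.ravel()))

sign = rng.random((8, 8))
before = predict(sign)
target = 1 - before               # whichever class it currently is not

# Paste a 2x2 "sticker" in the corner, pushing those four pixels hard
# in the direction that favors the target class. The magnitude is
# exaggerated for effect; a real sticker stays within printable colors.
direction = (W[target] - W[before]).reshape(8, 8)
patched = sign.copy()
patched[:2, :2] += 5.0 * np.sign(direction[:2, :2])

print("before:", before, "after:", predict(patched))  # the class flips
```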

Researchers at Princeton University published the results of a similar experiment in February. They showed that advertising can also become a problem, because the vehicles are trained to interpret round signs as traffic signs. The vehicles sometimes interpret round logos such as those of the fast-food chain KFC or the gas station operator Texaco as signs for bicycle paths or roadworks, though only with a probability of around 50 percent. The researchers then amplified this effect enormously with slight optical changes that are barely visible to the human eye. After these modifications, the advertising logos were mistaken for stop signs or no-passing signs with a probability of 100 percent.
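Those "barely visible" changes correspond to what the research literature calls an adversarial perturbation: a tiny, targeted shift applied to every pixel. On a toy linear model (again a made-up stand-in, not the Princeton setup), a per-pixel change of a few percent of the brightness range is enough to flip the decision, as this sketch shows.

```python
# Toy "imperceptible" adversarial change on a made-up linear classifier
# (a stand-in, not the Princeton models): every pixel shifts by the same
# small amount eps, chosen just large enough to flip the decision.
import numpy as np

rng = np.random.default_rng(2)
W = rng.normal(size=(2, 16 * 16))   # 2 fake classes over a 16x16 image

def predict(img: np.ndarray) -> int:
    return int(np.argmax(W @ img))

logo = rng.random(16 * 16)
scores = W @ logo
before = int(np.argmax(scores))
target = 1 - before

# Direction that most increases the target score per unit of change,
# and the smallest uniform step size that closes the score gap
# (a 1.2 safety factor absorbs losses from clipping to [0, 1]).
w_diff = W[target] - W[before]
direction = np.sign(w_diff)
eps = 1.2 * (scores[before] - scores[target]) / np.abs(w_diff).sum()

perturbed = np.clip(logo + eps * direction, 0.0, 1.0)
print(f"per-pixel change: {eps:.3f} of the 0-1 range")  # a few percent
print("before:", before, "after:", predict(perturbed))
```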

The researchers wanted to show that malicious attacks on self-driving cars are possible if attackers manipulate signs. The same effect, however, could also be caused by natural factors such as dirt, because changing just a few pixels in an image is often enough to confuse the algorithms. Japanese scientists have shown that usually only three or five pixels are needed; in some cases a single wrongly colored pixel was sufficient.
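The few-pixel result can be reproduced in miniature. The sketch below runs a greedy few-pixel attack against another made-up linear classifier; the model, image size, and pixel budget are assumptions, and the actual study attacked deep networks with a search-based method rather than this greedy shortcut.

```python
# Greedy few-pixel attack on a made-up linear classifier -- a miniature
# stand-in for the study's setup, not its actual method.
import numpy as np

rng = np.random.default_rng(3)
W = rng.normal(size=(10, 16 * 16))   # 10 fake classes over a 16x16 image

def predict(img: np.ndarray) -> int:
    return int(np.argmax(W @ img))

img = rng.random(16 * 16)
before = predict(img)

# For each alternative class, overwrite only the 5 pixels with the most
# influence on the decision, pushing each to the extreme value (0 or 1)
# that favors that class.
for target in range(10):
    if target == before:
        continue
    w_diff = W[target] - W[before]            # per-pixel influence
    idxs = np.argsort(np.abs(w_diff))[-5:]    # 5 most influential pixels
    attacked = img.copy()
    attacked[idxs] = (w_diff[idxs] > 0).astype(float)
    after = predict(attacked)
    if after != before:
        print(f"changing 5 pixels flipped class {before} -> {after}")
        break
else:
    print("no 5-pixel flip found on this toy model")
```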

Artificial intelligence thinks Michelle Obama is a cell phone

In another case, researchers at the Massachusetts Institute of Technology (MIT) produced objects with a 3D printer and used them to fool Google's image recognition. The artificial intelligence mistook a 3D-printed turtle for a rifle with a probability of 90 percent. In other parts of the test, a cat was mistaken for guacamole and a baseball for espresso.

Microsoft's artificial intelligence is sometimes quite dim as well. The company offers a program called CaptionBot, which automatically generates descriptions for photos. This often works amazingly well, but then hair-raising mistakes happen again. For example, the AI likes to confuse pugs with cakes. One description became famous: "I think it's a man in a suit and tie talking on a cell phone." The analyzed photo showed Barack Obama hugging his wife Michelle.
