Study: Self-driving cars are worse at detecting dark-skinned pedestrians



03/14/2019 2:42 PM


Self-driving technology, which may one day completely replace conventional cars, raises many questions. Is it safe to ride in a self-driving car? How will these vehicles handle moral choices in critical situations? How will they affect traffic? Now there is another concern: self-driving cars may be especially dangerous for people with dark skin.



According to a study by the Georgia Institute of Technology, dark-skinned pedestrians are more likely to be hit by a self-driving car: the detection systems used in such vehicles recognize people with light skin tones more reliably.

The study’s authors asked how well the obstacle recognition systems used in self-driving vehicles handle different demographic groups. They took a large database of pedestrian photographs and divided the images into groups by skin tone according to the Fitzpatrick scale.

It turned out that the detection systems performed worse on people with Fitzpatrick phototypes 4 (Mediterranean) through 6 (the darkest skin tones): the detection error for these groups was at least 5 percentage points higher. External factors, including whether the photos were taken during the day or at night, did not explain the gap.
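The article does not reproduce the paper's evaluation pipeline; purely as an illustration, the kind of per-group comparison it reports could be computed along the following lines. The data layout, field names and numbers here are assumptions made for the sketch, not taken from the study.

```python
# Illustrative sketch only: per-group pedestrian detection rates and the gap
# between lighter- and darker-skin groups. All names and numbers are hypothetical.
from dataclasses import dataclass

@dataclass
class Annotation:
    fitzpatrick_type: int   # 1 (lightest) .. 6 (darkest) on the Fitzpatrick scale
    detected: bool          # True if the detector found this pedestrian

def detection_rate(samples):
    """Fraction of annotated pedestrians the detector actually found."""
    return sum(s.detected for s in samples) / len(samples) if samples else float("nan")

def skin_tone_gap(annotations):
    """Detection-rate difference between lighter (types 1-3) and darker (types 4-6) groups."""
    light = [a for a in annotations if a.fitzpatrick_type <= 3]
    dark = [a for a in annotations if a.fitzpatrick_type >= 4]
    return detection_rate(light) - detection_rate(dark)

# Toy numbers: a gap of about 0.05 corresponds to the "at least 5 percentage points"
# difference mentioned in the article.
toy = ([Annotation(2, True)] * 95 + [Annotation(2, False)] * 5
       + [Annotation(5, True)] * 90 + [Annotation(5, False)] * 10)
print(f"detection gap: {skin_tone_gap(toy):.2%}")  # -> detection gap: 5.00%
```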

One of the study’s authors, Jamie Morgenstern, advised paying special attention to self-driving vehicles that use the recognition systems examined in the study.

It is well known that algorithms and computer systems can absorb the prejudices of their creators, including racial bias. The most widely publicized incident of this kind happened in 2015, when Google’s image recognition system labeled photographs of black people as pictures of gorillas.

In addition, researchers have found that face recognition systems from IBM and Microsoft often make mistakes when determining the gender of a person in a photo if that person has a darker skin tone.

Since these algorithms "learn" from examples supplied by people, they can struggle when those examples are not sufficiently diverse. The study’s authors suggest that the obstacle recognition models ended up tuned to light-skinned pedestrians because the developers did not feed them enough examples of people of other races.
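The study does not publish its training data or code, but the kind of imbalance described here, and one common mitigation (oversampling the under-represented group), can be illustrated with a small sketch. The dataset, field names and numbers below are invented for the example.

```python
# Hypothetical illustration of an imbalanced training set and simple oversampling.
import random

def rebalance(examples, key):
    """Oversample under-represented groups so every group is equally common."""
    groups = {}
    for ex in examples:
        groups.setdefault(key(ex), []).append(ex)
    target = max(len(members) for members in groups.values())
    balanced = []
    for members in groups.values():
        balanced.extend(members)
        balanced.extend(random.choices(members, k=target - len(members)))
    return balanced

# Toy training set: far more light-skin examples than dark-skin ones.
training = [{"skin_tone": "light"}] * 900 + [{"skin_tone": "dark"}] * 100
balanced = rebalance(training, key=lambda ex: ex["skin_tone"])
# Both groups now contribute 900 examples each (1800 total).
```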

Currently, this problem persists in small start-ups and large corporations alike, wherever artificial intelligence and neural networks are built. The issue could perhaps be addressed with dedicated tests that detect algorithmic bias and correct it in good time, as sketched below. In addition, the researchers say, development teams should include people of different genders, races and nationalities; such measures would help reduce bias in AI.
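What such a bias test might look like in practice is sketched below; the function names, tolerance and pass/fail convention are assumptions for illustration, not an established standard or anything prescribed by the researchers.

```python
# Hypothetical fairness check that could run in a detector's test suite:
# it fails loudly if detection rates diverge too much across skin-tone groups.
from collections import defaultdict

def per_group_rates(samples):
    """samples: iterable of (fitzpatrick_type, detected) pairs.
    Returns the detection rate for each skin-tone group."""
    hits, totals = defaultdict(int), defaultdict(int)
    for group, detected in samples:
        totals[group] += 1
        hits[group] += int(detected)
    return {g: hits[g] / totals[g] for g in totals}

def assert_no_skin_tone_bias(samples, max_gap=0.02):
    """Raise if detection rates differ across groups by more than max_gap."""
    rates = per_group_rates(samples)
    gap = max(rates.values()) - min(rates.values())
    if gap > max_gap:
        raise AssertionError(
            f"detection-rate gap {gap:.2%} exceeds allowed {max_gap:.2%}: {rates}")

# Toy run: this data set has a 5-point gap, so the check fails as intended.
toy = [(2, True)] * 95 + [(2, False)] * 5 + [(5, True)] * 90 + [(5, False)] * 10
assert_no_skin_tone_bias(toy)  # raises AssertionError
```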

First death

In March 2018 a fatal accident involving an Uber self-driving vehicle occurred. The car struck a woman who was crossing the road with her bicycle in the dark. The safety operator, Rafaela Vazquez, was reportedly distracted at the time, watching a video on her smartphone. By the time Vazquez noticed the pedestrian it was too late, and the woman was killed.

After the incident, Uber Technologies temporarily suspended its self-driving vehicle testing. A software failure was later named as a possible cause of the accident: the car reportedly "noticed" the woman but decided that this "obstacle" was not worth stopping for.

According to the prosecutor's office of Yavapai County, which reviewed the case, Uber Technologies will not face criminal liability for the incident. Prosecutor Sheila Polk concluded that the video recording of the accident did not fully capture the moment of the collision. The case was therefore returned to the police of Tempe, the city where the tragedy occurred, to gather additional evidence.

At the same time, the fate of Rafaela Vazquez remains unclear; she may still be held responsible for the accident. The investigation had earlier concluded that the incident could have been prevented.

source: vox.com

 

