How Much Does Your Life Matter to a Self-Driving Car?

As Uber seeks to resume testing self-driving cars on the road seven months after one of its prototypes killed a pedestrian, debate and discussion will likely turn to whether the technology is trustworthy. Joining Uber in trialing autonomous vehicles are fellow Silicon Valley tech companies Waymo (a subsidiary of Google’s parent company, Alphabet) and Tesla, as well as traditional automakers Audi and Toyota. But as each of these firms competes to demonstrate its technical competence, we would do well to ponder a different question: are the decisions being made by this kind of artificial intelligence ethical?

The answer may be more complicated than it appears, according to a new study by MIT researchers including renowned AI expert Iyad Rahwan. Rahwan first explored the question of moral decision-making in self-driving cars in research that led to his acclaimed TED Talk on the subject. Now, in the largest-ever survey of machine ethics, Rahwan and his colleagues found that decisions which self-driving cars will ultimately be expected to make – such as whether to spare pedestrians or avoid a dangerous crash – are not guided by universal moral standards. Consequently, the developers of these vehicles will themselves have to make ethical choices that may be culturally biased.

In the survey, which the authors termed “The Moral Machine,” a user sees 13 randomly generated scenarios in which the death of at least one person in a car accident is unavoidable. In some cases, the choice is about numbers – sparing many at the expense of one or a few people. In such cases, respondents’ moral choices were consistent across cultures. But when asked to prioritize young or old, rich or poor, or various other distinctions ranging from personal talents to gender, respondents showed surprising differences across cultural clusters and individual countries.
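To make the survey mechanics concrete, here is a minimal sketch of how such dilemmas might be generated. The attribute list and the generate_scenario function are illustrative assumptions for this article, not the researchers’ actual implementation.

```python
import random

# Character attributes varied in the dilemmas; this list is an
# illustrative assumption, not the study's actual taxonomy.
ATTRIBUTES = ["young", "old", "wealthy", "poor", "law-abiding", "jaywalking"]

def generate_scenario(max_group_size=5):
    """Return one unavoidable-fatality dilemma: the car must spare
    one group of characters at the expense of the other."""
    group_a = random.choices(ATTRIBUTES, k=random.randint(1, max_group_size))
    group_b = random.choices(ATTRIBUTES, k=random.randint(1, max_group_size))
    return {"group_a": group_a, "group_b": group_b}

# Each respondent sees 13 such scenarios and, for each,
# chooses which group the car should spare.
session = [generate_scenario() for _ in range(13)]
for i, scenario in enumerate(session, start=1):
    print(f"Scenario {i}: spare {scenario['group_a']} or {scenario['group_b']}?")
```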

For example, in societies with a strong rule of law, respondents showed a clear preference for sparing the law-abiding pedestrians at the expense of those who were breaking the law – a priority which was far less notable in countries with weaker legal systems. More surprising and thought-provoking was the finding that in countries with higher levels of inequality, there was a preference for sparing the wealthy or high-status at the expense of the poor. Such biases could be reflected in AI that is developed by people from countries with wide income gaps.

Rahwan, whose expertise is in AI’s impact on society, argues that the results of the Moral Machine survey pose challenges to AI developers, as well as to all of us who will have to live with the often culturally specific moral biases built into these systems. “People who think about machine ethics make it sound like you can come up with a perfect set of rules for robots, and what we show here with data is that there are no universal rules,” concludes Rahwan.

While Uber and other companies strive to ensure their cars do not endanger anyone, we should also question how AI developers are approaching these kinds of dilemmas – and whose lives they think matter more.
