Driverless Cars Could Learn to Make Moral Choices

FILE In this Tuesday, Dec. 13, 2016, file photo, an Uber driverless car waits in traffic during a test drive in San Francisco. In just a few years, well-mannered self-driving robotaxis will share the roads with reckless, law-breaking human drivers. The prospect is causing migraines for the people developing the robocars and is slowing their development. But experts say eventually the cars will coexist with human drivers on real roads. (AP Photo/Eric Risberg, File)

(CN) Is a self-driving vehicle capable of making moral decisions? If it is, which moral values should it use to make such choices?

These questions are among the issues society must consider as artificial intelligence, or AI, systems become more common in various industries, according to Gordon Pipa, co-author of a new study that provides a statistical model of human morality.

The research, published Wednesday in the journal Frontiers in Behavioral Neuroscience, marks a breakthrough for efforts to equip AI systems with morality, which experts had viewed as context-based and therefore impossible to describe mathematically.

"But we found quite the opposite," said lead author Leon Sütfeld, a researcher at the University of Osnabrück in Germany. "Human behavior in dilemma situations can be modeled by a rather simple value-of-life-based model that is attributed by the participant to every human, animal, or inanimate object."
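The value-of-life idea can be made concrete with a small sketch. The Python snippet below is purely illustrative: the scores and the logistic choice rule are hypothetical stand-ins, not the parameters fitted in the study.

```python
import math

# Hypothetical value-of-life scores a participant might attribute to each
# obstacle type. These numbers are illustrative, not the study's fitted values.
VALUE_OF_LIFE = {
    "child": 1.2,
    "adult": 1.0,
    "dog": 0.4,
    "trash_can": 0.05,
}

def probability_of_sparing(left_obstacle: str, right_obstacle: str,
                           sensitivity: float = 5.0) -> float:
    """Probability of swerving so as to spare the left obstacle, modeled as a
    logistic function of the difference in value-of-life scores."""
    diff = VALUE_OF_LIFE[left_obstacle] - VALUE_OF_LIFE[right_obstacle]
    return 1.0 / (1.0 + math.exp(-sensitivity * diff))

# Example dilemma: a child in one lane, a dog in the other.
print(f"P(spare child) = {probability_of_sparing('child', 'dog'):.2f}")
```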

To examine human behavior in road traffic scenarios, the team asked study participants to drive a car through a simulated, virtual-reality suburban neighborhood, where they encountered unexpected, unavoidable dilemmas involving animals, inanimate objects and humans, forcing them to prioritize which to save.

The authors then used the results to build statistical models that established rules, each with an associated degree of explanatory power, to account for the observed behavior.
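As a rough illustration of how a model's explanatory power might be assessed, the sketch below fits the hypothetical sensitivity parameter from the earlier snippet to simulated choices by maximizing the log-likelihood; the simulated data and the grid search are assumptions for demonstration only, not the study's actual analysis.

```python
import math
import random

random.seed(0)

# Simulated dilemma trials: each records the value-of-life difference between
# two obstacles and whether the higher-valued one was spared. Hypothetical data.
def simulate_trials(n=500, true_sensitivity=4.0):
    trials = []
    for _ in range(n):
        diff = random.uniform(-1.0, 1.0)
        p_spare = 1.0 / (1.0 + math.exp(-true_sensitivity * diff))
        trials.append((diff, random.random() < p_spare))
    return trials

def log_likelihood(sensitivity, trials):
    """How well a given sensitivity value explains the observed choices."""
    ll = 0.0
    for diff, spared in trials:
        p = 1.0 / (1.0 + math.exp(-sensitivity * diff))
        ll += math.log(p if spared else 1.0 - p)
    return ll

trials = simulate_trials()
# Crude grid search standing in for a proper maximum-likelihood fit.
best = max([0.5, 1.0, 2.0, 4.0, 8.0], key=lambda k: log_likelihood(k, trials))
print(f"Best-fitting sensitivity: {best}")
```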

The findings come amid growing debate over the behavior of self-driving vehicles and other machines in unavoidable accidents.

Stakeholders and experts have operated under the assumption that human moral behavior could not be modeled, and have focused on outlining critical variables for engineering AI systems. For example, a new initiative from the German Federal Ministry of Transport and Digital Infrastructure, or BMVI, has defined 20 ethical principles for self-driving cars.

Now that applying human morality to machines seems to be possible, the team argues that debate should focus on how such morals are programmed into, and employed by, AI.

"Now that we know how to implement human ethical decisions into machines we, as a society, are still left with a double dilemma," said senior author Peter König, a professor at the University of Osnabrück. "Firstly, we have to decide whether moral values should be included in guidelines for machine behavior and secondly, if they are, should machines act just like humans."

The team also warns that society is at the beginning of a technological revolution that requires clear rules. Without them, machines could begin making decisions without us.

In conclusion, Pipa wonders: "Should they imitate moral behavior by imitating human decisions, should they behave along ethical theories and, if so, which ones? And critically, if things go wrong, who or what is at fault?"
