Self-Driving Cars Will Soon Make Moral Decisions As Well As Humans – IFLScience

In his book The Descent of Man, and Selection in Relation to Sex, Darwin argued that our sense of morality was a uniquely human trait. Even though that claim has been disputed in recent years, it's fair to say humans still top the charts when it comes to moral sense.

But it looks like we might soon have some competition, namely in the form of driverless cars.

A new study in the journal Frontiers in Behavioral Neuroscience has looked at human behavior and moral assessments to see how they could be applied to computers.

Just like a human driver, a driverless car could be faced with split-second moral decisions. Picture this: A child runs into the road. The car has to work out whether to hit the child, swerve into a wall and potentially kill other passersby, or swerve in a way that potentially kills its own occupants.

It was previously assumed that this kind of human morality could never be described in the language of a computer because it is so context dependent.

"But we found quite the opposite," Leon Sütfeld, first author of the study, said in a statement.

"Human behavior in dilemma situations can be modeled by a rather simple value-of-life-based model that is attributed by the participant to every human, animal, or inanimate object."
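To make that idea concrete, here is a minimal sketch of what a value-of-life-based decision rule could look like in code. The object categories, numeric values, and function names are illustrative assumptions, not figures or code from the study: each potential collision target is assigned a scalar value, and the option that destroys the least total value is chosen.

```python
# Minimal sketch of a value-of-life-based decision rule.
# The categories, values, and names below are illustrative assumptions,
# not data or code from the study.

VALUE_OF_LIFE = {
    "child": 1.0,
    "adult": 0.9,
    "dog": 0.4,
    "deer": 0.3,
    "trash_can": 0.05,
}

def choose_crash_target(options):
    """Given a list of unavoidable crash options (each a list of objects
    that would be hit), return the option that loses the least total value."""
    def total_value(option):
        return sum(VALUE_OF_LIFE.get(obj, 0.0) for obj in option)
    return min(options, key=total_value)

# Example dilemma: swerving into the trash can beats hitting the child or the dog.
print(choose_crash_target([["child"], ["dog"], ["trash_can"]]))  # ['trash_can']
```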

They worked this out by asking participants to drive a car through a typical suburban neighborhood on a foggy day in an immersive virtual-reality simulation. During the simulation, they were faced with unavoidable crashes involving inanimate objects, animals, and people. Their task was to decide which one the car would crash into.

The results were then fed into statistical models, yielding rules that describe how and why a person reaches a particular moral decision. Remarkably, clear patterns emerged.
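As a rough illustration of how such rules could be extracted from crash choices, the sketch below fits a simple logistic model that predicts which of two targets a participant spares from the difference in the values assigned to them. The data, feature encoding, and library choice here are hypothetical assumptions for illustration, not the authors' actual analysis.

```python
# Hypothetical sketch: fit a logistic choice model to dilemma decisions.
# The data and encoding are invented for illustration; the study's actual
# statistical models may differ.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each trial: difference in assumed value (option A minus option B),
# and whether the participant chose to spare option A (1) or B (0).
value_difference = np.array([[0.9], [0.5], [-0.3], [0.7], [-0.6], [0.1]])
spared_option_a = np.array([1, 1, 0, 1, 0, 1])

model = LogisticRegression().fit(value_difference, spared_option_a)

# The fitted curve acts as a simple "rule": the larger the value gap,
# the more likely participants are to spare the higher-valued target.
print(model.predict_proba([[0.4]]))
```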

Now that these laws and mechanics have been worked out in a form a computer can understand, machines could, in principle, be taught to share our morality. This has huge implications for self-driving cars.

"We need to ask whether autonomous systems should adopt moral judgments. If yes, should they imitate moral behavior by imitating human decisions? Should they behave according to ethical theories, and if so, which ones? And, critically, if things go wrong, who or what is at fault?" senior author Professor Gordon Pipa said.

"Now that we know how to implement human ethical decisions into machines we, as a society, are still left with a double dilemma," Professor Peter Knig, another senior author, added. "Firstly, we have to decide whether moral values should be included in guidelines for machine behavior and secondly, if they are, should machines act just like humans."

