AI in the Courtroom: Will a Robot Sentence You? – Walter Bradley Center for Natural and Artificial Intelligence

The possibility of an AI judge is raised in a recent article on new developments in artificial intelligence (AI) in court systems, which include Los Angeles' "Gina the Avatar" for traffic ticket resolution and a proposed Jury Chat Bot.

Some experts think AI might be fairer than human judgment:

It may not be particularly hard to build an AI-based system that delivers better results than humans, panelists at the conference noted. There's plenty of evidence of all kinds of human bias built into justice systems. In 2011, for instance, a study of an Israeli parole board showed that the board delivered harsher decisions in the hour before lunch and the hour before the end of the day.

But others warn of AI's limitations. Many AI decisions are not explainable because the computer system is working through 10,000 cases and arriving at a mathematical solution. Humans do not think that way and may not regard the decision as fair, no matter what it is.

In any event, one Superior Court judge warns that many cases don't come down to information alone:

In my experience in judging, especially with a self-represented litigant, most of the time people don't even know what to tell you, she said. If an automated system builds its decision based on the information it receives, she continued, how are you going to train it to look for other stuff? For me that's a very subjective, in-the-moment thing.

For instance, Chang said, if they're fidgeting, I'll start asking them questions, and it will come to a wholly different result.

She cited immigration cases where the unsuccessful litigant is immediately murdered after deportation to a home country. Some such risks may be hard to quantify, especially if few wish to know about or accept responsibility for the outcomes.

On the other hand, we may be prone to inflating the difference AI will make. Gonzaga University law professor (emeritus) David DeWolf doesn't see AI in the courtroom as a threat to justice. He told Mind Matters News,

It's hard to be too critical of AI in the courtroom because the current state of the U.S. legal system is so flawed. Resolving disputes through a trial is the very last resort, like going to war when diplomacy fails. It is never your first option.

Take criminal sentencing as an example: there are multiple axes along which the right sentence should be built. Retribution, deterrence, incapacitation, and rehabilitation are all relevant considerations. The desire to individualize a sentence to optimize these factors has to be limited to avoid arbitrary subjective judgments by the judge (or commission) imposing the sentence.

The late economist Kenneth Boulding pointed out that there were three ways of organizing human behavior: coercion, exchange, and gift. Armies and the legal system operate on the basis of coercion. Markets operate on the basis of exchange, while families, friends and churches operate on the basis of gift. All societies incorporate all three systems, but the less they rely on coercion, and the more they benefit from gift, the healthier they are.

I'm less worried about the use of AI in the legal system than I am about the increasing dependence upon law, a form of coercion, to regulate human behavior.

If Dr. DeWolf proves correct, the principal concern should perhaps be that AI can do nothing to address fundamental problems with the way a system works. Those problems derive from human choices in the face of incentives, constructive or perverse.

We might also ask: what exactly has AI changed in various professions today? In disciplines that require years of study, like law, AI is not taking jobs so much as creating them.

In general, in fields where human judgment is required, the huge increase in information that AI methods offer should result in more opportunities to exercise it.

Where has AI failed? In one spectacular example, a hospital tried to automate and streamline the process of telling a man that he was dying. "Never again!" the administration vowed after a huge outcry. But that was a failure of judgment on their part; an impersonal approach to dying should never have been considered in the first place.

Further reading:

Robot-proofing your career, Peter Thiels way

Students, don't let smart machines disrupt your future. Three ways you can avoid life in Mom's basement and the job pouring coffee.

Creative freedom, not robots, is the future of work. In an information economy, there will be a place where the human person is at the very center.

and

Maybe the robot will do you a favor and snatch your job. The historical pattern is that drudgery gets automated, not creativity.

