Can Robots judge us better than, well, a judge?

‘Human Decisions and Machine Predictions’: a close look at this study into the use of AI in place of judges.

Jon Kleinberg and his co-authors’ study, titled ‘Human Decisions and Machine Predictions’, looks in depth at the idea that AI is a more reliable option than human judges, especially when making ‘jail-or-release’ decisions. Kleinberg and his colleagues’ basic argument is that algorithms predict the likelihood of future crimes by (ex-) convicts more accurately than human judges do. This ‘better’ artificial intelligence would then prevent people from reoffending, leading to a win-win situation and better societal outcomes, according to the study.
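At its core, a tool of this kind turns a defendant’s record into a risk score and compares that score to a release threshold. The study itself trains its model on millions of real case records; the sketch below is purely illustrative, with features, weights, and threshold invented for the example rather than taken from the paper:

```python
import math

def reoffence_risk(prior_arrests: int, age: int, failed_appearances: int) -> float:
    """Toy logistic risk score. The features and weights here are
    invented for illustration, not taken from Kleinberg et al."""
    # Weighted sum of case features, squashed into the range (0, 1).
    z = -2.0 + 0.45 * prior_arrests - 0.03 * age + 0.8 * failed_appearances
    return 1.0 / (1.0 + math.exp(-z))

def jail_or_release(risk: float, threshold: float = 0.5) -> str:
    """Detain only when the predicted risk crosses the chosen threshold."""
    return "jail" if risk >= threshold else "release"

# A sparse record scores low; a heavy record scores high.
low = reoffence_risk(prior_arrests=0, age=40, failed_appearances=0)
high = reoffence_risk(prior_arrests=6, age=22, failed_appearances=2)
print(jail_or_release(low), jail_or_release(high))  # release jail
```

The threshold is a policy lever, not a technical given: set it lower and more people are jailed, set it higher and more are released, which is precisely the trade-off the study argues an algorithm can manage more consistently than a human judge.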

Recently, in the US, parole boards have started using machine-learning-based tools (AI) much more frequently to make decisions about prisoners’ futures and potential parole. This approach seems to have gained particular traction in Pennsylvania, where Richard Berk has even done a study into the use of machine forecasting, concluding that this method of decision-making poses no risk to public safety when it comes to parole timelines.

This is all very well, and I’m sure true, but from a personal point of view, I cannot help but wonder how I’d feel if a machine condemned me to another 10 years in prison. Ultimately, though, as Kleinberg’s study argues, it would be harder in my hypothetical situation to claim that the machine had acted in a biased way. An extension of this thought is the notable claim that factors such as racial or gender bias will, in theory, be eliminated through the use of AI in court decisions.

Kleinberg’s study highlights on its opening page that every year in the US over 10 million people are arrested (a slightly outdated figure, as this was published by the FBI in 2016). All of these 10 million people then have very important decisions made about their future by a human, just like you or me. The pressure on the judges making decisions such as where the detained person has to await trial (at home or in jail), where they are eventually tried, and so on, is huge. One wrong prediction about the offender’s capacity to reoffend could change the course of that individual’s future for, well, ever. Given this, the option of a non-biased decision maker is of course highly preferable, probably both for the judges involved and the accused offenders.

Another interesting point raised in the study is what happens when judges, making these life-changing decisions, decide that defendants with families or jobs waiting for them are more deserving of bail than those without. This of course leads to more potentially biased and unfair decisions, particularly if, for example, the judge making the decision also has his or her own children and sympathises with the difficulty of locking up a parent, sometimes regardless of other ‘red flags’.

Ultimately, Kleinberg’s study concludes that predictive tools could prove incredibly effective for the future of decision-making, particularly with regard to prisoners. Kleinberg and his colleagues are not alone in this belief, and a recent article in Lexology looking at the replacement of judges with AI in Ukraine supports their findings. It would seem that as long as it remains well-regulated and transparent, AI used in place of judges could indeed be a force for good.

The final piece of the puzzle, it seems to me, on an entirely aesthetic level, is how a computer will look in the courtroom in place of the traditional figure of the white-wigged judge. That will be something to get used to, too.

Recommended links: