Debate Info

Debate Score: 3
Arguments: 3
Total Votes: 3
Argument Ratio: Yes (1) / No (2)
Debate Creator

colinong(130)



Can a machine ever be a court judge?

I am wondering: if we get robot judges, will the courts be more efficient and fairer to society?

Yes

Side Score: 1
VS.

No

Side Score: 2
1 point

Hello c:

First off, let me commend Jace for his thoughtful response. If people are involved in the programming, he's absolutely right in his analysis.

The question, however, assumes that humans won't have a role. Ergo: CAN a program be written that adjudicates the Constitution fairly? Or is "fair" even an identifiable concept? I don't even think humans can identify "fair." The Constitution is the fairest document in the land, yet we still can't figure it out.

Can a computer do it? I dunno, why not?

excon

Side: Yes
1 point

I do not think that a machine would be any less fair than a human judge, but I think there is serious reason to doubt that a machine would be more fair. Whether and to what extent machines would be more fair than human judges isn't a question of how impartial they can be, but of how impartial fairness itself is.

I think there are two competing senses of fairness within governmental judiciaries. The first is fairness between people: we want assurances that different people receive the same treatment under the law. The second is fairness by person: we want assurances that the law is sensitive to the circumstances of the case, often including the opportunities a person had and their character (e.g., this is why sentencing guidelines offer a range of options).

Being sensitive to the second kind of fairness introduces a lot of subjectivity into the equation, which can make accomplishing the first kind more challenging. It also requires more nuanced programming, and probably programming that adapts to our evolving sense of which circumstances and characteristics matter in adjudicating guilt and sentencing. That kind of programming would likely involve a neural net, and machine learning through neural nets has repeatedly led machines to conform to common human prejudices. The risk we run is that, in creating a sufficiently complex machine, we replicate our own biases in it while thinking we have created something unbiased (i.e., fair).
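To make that last point concrete, here is a minimal sketch in Python, using entirely synthetic data (not any real court record), of how a model trained on biased historical rulings reproduces the bias while looking like a neutral procedure:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical features: offense severity (legally relevant) and a
# group label (legally irrelevant).
severity = rng.uniform(0, 1, n)
group = rng.integers(0, 2, n)

# Biased historical labels: past judges sentenced group 1 more harshly
# for the same severity of offense.
harsh = (severity + 0.3 * group + rng.normal(0, 0.1, n)) > 0.6

model = LogisticRegression()
model.fit(np.column_stack([severity, group]), harsh)

# The same mid-severity case, differing only in group membership:
same_case = np.array([[0.5, 0], [0.5, 1]])
print(model.predict_proba(same_case)[:, 1])
# Group 1 gets a markedly higher predicted probability of a harsh
# outcome: the "neutral" machine has learned the human prejudice.
```

The model never "decides" to discriminate; it simply fits the pattern in its training data, which is exactly the risk described above.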

That being said, one way in which machines might succeed at fairness where humans cannot is that they do not fatigue. There is considerable research suggesting that fatigue affects how judges adjudicate, particularly in determining sentences and presiding over parole or bail hearings: the later in the docket someone comes, the less favorable the outcome tends to be for the defendant. I'm not sure this benefit outweighs the risk of thinking we have created a fair machine in the broader sense, but it is a significant improvement on fairness nonetheless.

Side: No
1 point

If a machine does something wrong, or something is coded wrong, then a person can be punished too harshly or too leniently.

Side: No