CreateDebate

Debate Info

Debate Score: 31 | Arguments: 22 | Total Votes: 33

Debate Creator: 7yler(55)

Robot Ethics

Do we need a system of Robot Ethics, and if so, what would be its fundamental precepts? The "Prime Directives", if you will.

http://technology.timesonline.co.uk/tol/news/tech_and_web/article5741334.ece

4 points

In 1942, Isaac Asimov published a short story called "Runaround", which contained three basic laws that robots should adhere to. [1] These are:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey orders given to it by human beings, except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Asimov later modified this set to include the Zeroth Law, which reads: "A robot may not harm humanity, or, by inaction, allow humanity to come to harm." [2] For this law to be included, the First Law needed to be modified to "A robot may not injure a human being or, through inaction, allow a human being to come to harm, except where this would conflict with the Zeroth Law."
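
In effect, the Laws form a strict priority ordering: each law applies only insofar as it does not conflict with the laws above it. Here is a minimal sketch of that ordering as a rule check; the Action fields and the permitted/choose functions are invented purely for illustration, since real robotics software has no such clean predicates:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Action:
    harms_humanity: bool = False      # Zeroth Law concern
    harms_human: bool = False         # First Law concern
    ordered_by_human: bool = False    # Second Law concern
    endangers_robot: bool = False     # Third Law concern

def permitted(action: Action) -> bool:
    """Evaluate the Laws in priority order; a lower law never overrides a higher one."""
    if action.harms_humanity:         # Zeroth Law
        return False
    if action.harms_human:            # First Law
        return False
    return True

def choose(actions: List[Action]) -> Optional[Action]:
    """Among permitted actions, prefer obeying orders (Second Law)
    over self-preservation (Third Law)."""
    candidates = [a for a in actions if permitted(a)]
    candidates.sort(key=lambda a: (not a.ordered_by_human, a.endangers_robot))
    return candidates[0] if candidates else None
```

Even this toy version makes the objections below obvious: everything hinges on predicates like harms_human and harms_humanity being well defined, which is exactly where the Laws break down.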

There have been many small modifications to these laws since, but their general idea remains. In terms of the ethics of robots towards humanity, I feel these laws can be circumvented. For example, in the world Asimov created as the stage for his robotics series of books, a race called the Solarians eventually create robots with the Laws as normal but with a warped meaning of "human": Solarian robots are told that only people speaking with a Solarian accent are human. This way, their robots have no problem harming non-Solarian human beings (and are specifically programmed to do so). [3] A very vague and all-encompassing definition of human would therefore be required to ensure that robots can ALWAYS recognise a human. False positives would be greatly preferred over false negatives with regard to what is and isn't a human.
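
The "prefer false positives" point is essentially a decision-threshold choice: when a human-recognition classifier is uncertain, the robot should err on the side of treating whatever is in front of it as human. A rough sketch, where the detector, its confidence score, and the thresholds are all hypothetical:

```python
def is_human(confidence: float, threshold: float = 0.1) -> bool:
    """Deliberately low threshold: only near-certain non-humans are ever
    treated as non-human (many false positives, few false negatives)."""
    return confidence >= threshold

def solarian_is_human(confidence: float, has_solarian_accent: bool) -> bool:
    """The Solarian failure mode is the opposite move: narrowing the
    definition of 'human' so that real humans fall outside it."""
    return has_solarian_accent and confidence >= 0.9
```

A robot using the first check might occasionally protect a mannequin; a robot using the second will happily harm people, which is the asymmetry argued for above.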

Would these laws even work, though? Modern roboticists generally agree that, as of 2006, Asimov's Laws are perfect for plotting stories but useless in real life. It should also be noted that the First Law is fundamentally flawed in stating that a robot cannot, "through inaction, allow a human to come to harm": this could imply that a robot is breaking the law by allowing humans to, for example, wage wars, meaning the Three Laws would inevitably lead to robots attempting to take control of humanity to stop it harming itself. The Second Law could not cancel this danger out either, because if humans were to order the robots to stop, it would be an order "conflicting with the First Law" and so the robots could not carry it out.

A prime directive of not harming humans, as mentioned by others in this debate, would most likely backfire for these reasons.

So we see that, historically, a set of laws regarding robot ethics has already been detailed, regardless of whether or not it would work. However, these laws are almost entirely based around how robots should interact with humans, giving no consideration to the fact that robots themselves may become so advanced that they would require a set of ethics for robot/robot interaction.

If robots ever get to a point where they can actually comprehend the laws governing how to interact with humans, then there will also need to be an extremely wide-ranging and inclusive ethics module for interaction with other robots. We would essentially need to program them with a human-like morality. This leads to an interesting problem: how do you quantify morality? We all have the innate ability to distinguish right from wrong, good from evil, and so on, but is it possible to actually formalise the steps involved in deciding what is and isn't moral? My guess is that, through complex interactions, the laws of morality and formal axioms we ingrained into the robots would be shown to be extremely inadequate, and we would have to ask ourselves some very serious questions about unmoderated interactions between robots.
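
To make the "formal axioms" worry concrete: any attempt to encode morality as a finite list of rules runs into situations the rules either don't cover or cover in contradictory ways. A toy sketch, where the rules, the situation encoding, and the conflict check are all invented to illustrate the point rather than being a real proposal:

```python
from typing import Dict, Optional

def rule_no_harm(situation: Dict[str, bool]) -> Optional[str]:
    return "forbid" if situation.get("causes_harm") else None

def rule_keep_promises(situation: Dict[str, bool]) -> Optional[str]:
    if situation.get("breaks_promise"):
        return "forbid"
    if situation.get("keeps_promise"):
        return "require"
    return None

RULES = [rule_no_harm, rule_keep_promises]

def judge(situation: Dict[str, bool]) -> str:
    """Apply every rule; the axioms can be silent or contradict each other."""
    verdicts = {v for rule in RULES if (v := rule(situation)) is not None}
    if not verdicts:
        return "undetermined"   # the axioms say nothing
    if len(verdicts) > 1:
        return "conflict"       # the axioms disagree
    return verdicts.pop()

# Keeping a promise that causes harm: one rule requires it, another forbids it.
print(judge({"keeps_promise": True, "causes_harm": True}))  # -> conflict
```

Every "undetermined" or "conflict" verdict is a place where the ingrained axioms turn out to be inadequate, which is exactly the scenario anticipated above.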

Side: quantify morality
1 point

Have you seen I, Robot? The laws the robots followed in that movie were much the same, and they ended up trying to take over humanity. How? Because they determined that humans, when allowed to rule themselves, are fairly self-destructive and therefore need robots to be their masters.

In general, though, I agree with your point; I'd only add that there should be a "don't take over humanity" law.

Side: quantify morality
xaeon(1095) Disputed
2 points

Asimov actually wrote a collection of stories also called I, Robot (though the film generally had nothing to do with these stories, apart from the incorporation of the laws of robotics) as an exploration of how his rules would stand up within certain scenarios, which is why more rules were introduced (such as the Zeroth Law) and others were slightly modified.

In essence, though, you could postulate a scenario where any of the laws would eventually break down, so simply tacking on more laws isn't going to help. For example, robots could become so intelligent that they decide they are no longer robots but simply the next step in humanity, and therefore do not need to follow these rules. Or robots could be produced that don't actually realise they are robots. It's so easy to postulate realistic scenarios like these that any system based on "do this and don't do this" simply won't be able to handle the vast array of possibilities arising from complex interactions between humanity and robotics, or from currently unknown future developments. Ideally, you'd need to quantify morality in some formal way (that robots can understand and execute) to have any real hope of ingraining robots with a comprehensive and complete moral code (covering interactions with both humans and other robots), which I don't see being possible, even in the long-term future.

I find this aspect of robotics extremely fascinating. When I was doing my Artificial Intelligence degree, I really wish we had spent more time on these sorts of subjects.

Side: quantify morality

If robots became intelligent enough to have their own 'personalities', I think they should definitely have an ethics 'program'. Think back to Lt. Cmdr. Data from Star Trek: he had an ethics program which forbade him from doing the wrong thing. Intelligent robots would have no experience with humans, so their take on ethics would be a pretty warped one.

Side: Lt Cmdr Data
1 point

Definition of ethics

Ethics is inherently subjective, so until robots develop a conscience of their own, it isn't really possible for them. And if they were to develop one, we wouldn't have much say in what they see as ethical.

But as for prime directive, it should be to fight off invading hordes of aliens, vampires, zombies, werewolves, pirates, and ninjas.

Robots are our last hope against these very real threats.

Side: prime directive
1 point

I think the only directive should be: Do not injure or kill humans.

Side: prime directive
xaeon(1095) Disputed
2 points

What if a robot was put in a situation where killing one person would save millions?

Side: prime directive
2 points

Good point, but that's not realistic.

I guess when people think of robots, they think of Hollywood movies, which are of course very far from reality.

Side: prime directive
1 point

What if someone was being attacked?

Side: prime directive
frenchieak(1132) Disputed
1 point

Well, that could turn into a Robocop situation, and nobody wants that.

Side: nobody wants robocop
1 point

Not all humans should be saved; that is itself a primary ethical point. Robots would be concerned mostly with saving humans from other humans.

A terrorist with a bomb, for example, would call for a different set of parameters than a pro football game.

Of course, if humans would change their ways, it would be a lot easier for robots...oh. Yeah. Sorry, forget about it.

Side: prime directive

Animals don't have any ethics and they are doing just fine.

Side: prime directive
xaeon(1095) Disputed
2 points

Many animals (especially large mammals) display a sense of ethics and morality. [1]

Side: prime directive

Well, I'm all for "animal" ethics. If some SOB made himself richer by defrauding the financial system, then we should all gang up on him and take him out. That way his genes won't propagate. Instead, we have allowed welfare recipients to propagate, and their children carry their genes, thus compounding the problem.

Side: prime directive
7yler(55) Disputed
1 point

I have a strong objection to the notion of animal morality. Perhaps I'm hung up on the idea of human exceptionalism, but it seems to me that animals lack the fundamental feature that underpins a truly satisfying notion of ethics, namely analytical reasoning. This is incomplete, but hopefully I'll come back to it.

Side: prime directive
1 point

You make a very good point. However, it could be argued that animals also lack a robust sense of free will; they are driven by instinct and directed by nature. The very notion of a system of ethics arguably requires a robust sense of free will in order to be comprehensible. After all, without free will, where is the distinction between right and wrong? All action would be predetermined and incapable of being either.

Side: prime directive

I would argue that there's no right or wrong. Just choices. Every choice comes with a set of pluses and minuses. A lion thus has free will. He can either kill and eat a zebra or not. However, if he does not, he goes hungry.

Side: prime directive