CreateDebate


Debate Info

Debate Score: 7
Arguments: 7
Total Votes: 7

Argument Ratio: Yes (4) | No (2)

Debate Creator

JacobHawkins(13)



Should We Avoid The Technological Singularity?

The technological singularity can be described as the hypothetical point at which artificial intelligence surpasses human intelligence and begins improving itself so rapidly that its progress becomes impossible to track, eventually becoming billions of times smarter than all of humanity combined. This would bring changes to humanity that are almost unimaginable, and many say we should fear the technological singularity because of the risk that a superintelligent AI could enslave or kill off the human race. But is this a realistic fear? Should we avoid the technological singularity?

Yes

Side Score: 5
VS.

No

Side Score: 2
2 points

For AI to be a threat to us, it would need a reason to harm us. So long as it has no motive, we shouldn't be too worried. As soon as AI starts to decide that we are a hindrance to its own goals, we'll be in trouble. Dermot may be right to say that this is not preventable.

I think as soon as AI is able to formulate its own goals independently of humans, it needs to be stopped, if this isn't already happening. As long as we decide the goals and purpose of AI, it is fairly safe.

Side: Yes
1 point

Hi Mack, one of the problems regarding AI is that it might indeed have reason to harm us. What if it was programmed to harm and demonise certain people and certain societies?

How will we stop AI from inevitably forming its own goals?

Who decides if the goals are fair?

Side: Yes
Mack(314) Clarified
1 point

Reasonable questions. I'll leave it to somebody else to try to answer them, as I'm not sure how to answer.

Side: Yes

Think about this: imagine you are an ant. Just an ant. A powerless, tiny ant. But this ant can think and comprehend the world, yet it is still quite stupid compared with the great, seemingly unreachable intelligence of the humans that find pleasure in killing other ants like you. It wouldn't be a pleasant existence, since almost anything you could do would be entirely outmatched by these humans. How would it feel to be so… powerless, to have such low control over your ability to survive?

That would be shockingly similar to our situation next to a superintelligent machine. If artificial intelligence somehow reached the technological singularity, we would be dealing with a computerised mind billions of times more sophisticated than the entire human race collectively. The power held by something like that is hard to imagine, but it would be comparable to the power that humans hold over the Earth and its non-human inhabitants. You could even go so far as to say that even that falls short of what a superintelligent AI would be able to do.

But what are the consequences of this unimaginable power? The most obvious is that anything with this kind of power has the ability to wipe out the human race, or all life on Earth, if the need arises. And it's not that the artificial intelligence would be plain 'evil'; rather, it might find humans an inconvenience, or even a threat to its existence. To solve a problem, it might just decide to get rid of us completely. To be honest, I wouldn't exactly be sure that it would see having humanity on this precious planet as a positive thing.

We would lose control of our lives. If a superintelligent AI made the destruction of humanity its goal, then our lives would become a constant fight for survival. No, this is not an opinion based on the Terminator series. This is a real possibility, and the technological singularity is rapidly approaching. There is no way we could place restrictions on something so intelligent that all of humanity wouldn't compare, since it would outsmart every single attempt we made. It would simply be unrealistic to think that something superintelligent would actually be containable.

So, would we really want to endanger humanity, or even the Earth, just because we don't want to restrict the development of artificial intelligence? We need to put a halt to artificial intelligence development before it is too late.

Side: Yes
1 point

Yes, it would further polarize the nation into isolated pockets of ethnic groups thus increasing the likelihood of civil unrest.

Side: Yes
1 point

But how do we avoid it, as surely it's inevitable?

At the moment AI is getting involved in our daily lives at an ever-increasing rate; a brother of mine is involved in a process of replacing humans with bots in the workplace, a practice being used increasingly by many large multinationals.

Stephen Hawking, in a lecture many years ago, predicted this very thing and how inevitable it is, and it's happening as we speak.

Side: No
1 point

I agree with Dermot that it's going to be unavoidable.

Probably the only thing we could do, if enough nations agreed to it, would be to move away from remote-controlled and AI weapons (just as most of the world made agreements to ban biological and chemical weapons), but instead we're already in arms races to create those things.

Side: No