Is it a good idea to create artificial intelligence (with Isaac Asimov's rules)?
Yes
Side Score: 2
No
Side Score: 1
1 point
The first law is obviously beneficial to humanity (A robot may not injure a human being or, through inaction, allow a human being to come to harm), but how can a robot save someone without harming the person or people threatening their well-being? I think the laws are all too simplistic, and in many situations the more benevolent choice would violate them. This is especially true of law 3 (A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws): why can't we simply dispose of the AI? Has it gained citizenship and the equivalent of human rights? Doesn't that negate the point of the rules in the first place? Rule 3 makes the most sense for an AI to abide by, rule 1 benefits humanity the most, and rule 2 is little more than filler, since an AI can't help but obey humans: it only does what it was programmed to do in the first place. Side: No