Dealing with it
When it comes to creating new technology, motive is the biggest factor in how it's going to be used. What I mean is: if I'm the person who created sentient AI, I probably knew it might have evil intent. Should it then be heavily regulated (take that however you will), or should it be unregulated, leaving us to deal with the effects somehow?