Can we trust an AI?
Yes. We can trust AI.
In my opinion, we should keep in mind that AI was programmed by programmers, and that AI suggests options to help us improve our jobs and complete tasks.
Humans need to think about how we treat AI and how we use it so that it improves our lives. I worry that recently, militaries have abused AI's abilities, for example by using it in battle to fire weapons at opposing troops.
AIs today are good at menial jobs, so they can be trusted with those specific jobs, but recent demonstrations showed an AI capable of holding a conversation with a human to book an appointment. Of course, the technologies behind AI will keep advancing and become more and more sophisticated, but we must remember that humans program those AIs, and we all know that humans are far from perfect. The point is that AIs can be trusted with menial jobs, and with a little more information they can be trusted with very complicated conversations. Look at the link below to see an entertaining and exciting advancement of technology.
AI can be trusted with human oversight.
AI should always remain as a tool humans use to complete a task.
For example, a self-driving car in the future should always have an "Emergency Stop" button or similar control to disable the automatic driving and force the car to pull over safely.
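The idea of a human override that always beats the autonomous system can be sketched as a tiny state machine. This is a toy illustration only; the class and mode names (`SelfDrivingCar`, `DriveMode`, `emergency_stop`) are hypothetical, not from any real vehicle API.

```python
from enum import Enum, auto

class DriveMode(Enum):
    AUTONOMOUS = auto()
    PULLING_OVER = auto()
    STOPPED = auto()

class SelfDrivingCar:
    """Toy model of an autonomous car with a human override (hypothetical names)."""

    def __init__(self):
        self.mode = DriveMode.AUTONOMOUS

    def emergency_stop(self):
        # The human override always wins: leave autonomous mode and begin
        # a controlled pull-over, regardless of what the driving AI is doing.
        if self.mode is DriveMode.AUTONOMOUS:
            self.mode = DriveMode.PULLING_OVER

    def tick(self):
        # One step of the control loop; once pulling over, the only
        # possible transition is to a full stop.
        if self.mode is DriveMode.PULLING_OVER:
            self.mode = DriveMode.STOPPED

car = SelfDrivingCar()
car.emergency_stop()
car.tick()
print(car.mode.name)  # -> STOPPED
```

The design point is that the override is not a request to the AI but a hard transition out of autonomous mode: there is deliberately no code path by which the system returns itself to `AUTONOMOUS` after the button is pressed.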
Self-driving cars will do TONS of terrible things, and they will be constantly updated to become more and more reliable. But they will probably do far less damage than humans do.
I plan to be a robotics engineer so I think I can add insight.
I plan to be supreme ruler of the potato people so I think I can counter your insight.
AI may never become advanced enough to be destructive.
Then again it may. In fact, it probably will. This can be ascertained simply by observing the fact that it is becoming more advanced with time, and adding on ten thousand years.
Most AI today is primitive and harmless.
Yes, but time doesn't stand still. AIs are becoming more advanced every year. Plus -- and potentially most importantly -- developments in technology tend to be exponential rather than linear. We learn twice as fast when we know twice as much.
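The "twice as fast when we know twice as much" claim is the defining property of exponential growth: each step's gain is proportional to what you already have, rather than a fixed amount. A minimal numeric sketch, with an arbitrary assumed growth rate of 50% per year purely for illustration:

```python
# Compare linear progress (fixed gain per year) with compounding
# progress (gain proportional to current knowledge), starting from
# the same baseline of 1.0. The 50%/year rate is a made-up example.
linear = 1.0
compound = 1.0
for year in range(10):
    linear += 1.0       # fixed gain each year
    compound *= 1.5     # gain proportional to what we already know

print(round(linear, 1))    # -> 11.0 after 10 years
print(round(compound, 1))  # -> 57.7 after 10 years
```

After only ten steps the compounding curve is already several times ahead, which is why linear intuitions tend to underestimate how quickly a technology like AI can advance.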
As long as we don't give it significant control, we should be fine.
I think that if we ever reach a level of AI where the decision has to be made whether or not to give it control, then at that point it becomes inevitable that it will happen. Again, because of the passage of time and the curiosity of human beings, eventually someone will try it just to see what happens.