CreateDebate is a social debate community built around ideas, discussion and democracy.
Computers do what they are programmed to do. Giving them consciousness is impossible. Otherwise one could be programmed to wake up with you inside of its mind. Good luck with that.
Consciousness is only one state of thinking, not a requirement for all ways of thought. Subconscious thought is still thought. Computers think on a subconscious level, not a self-conscious level. The pattern of thought is created by a programmer. Computers think; the irony is that a computer thinks in a way that would describe a human as mentally ill.
Start by creating hardware that mimics the human brain. Research into this is already being done, with computers forming new "neural" connections much like our brains do.
Not true at all. The human brain is essentially one big computer, except that instead of running off simple on/off states like a normal computer, its information is processed through chemical transfers and electrical signals. There's no reason a computer can't reach consciousness if humans can. There are no special laws of the universe that apply to humans and nothing else.
Hi Mint, fascinating stuff and some great links; when I say think, what does that mean?
Well, for a start I can understand language, puzzles, the spoken word; I can also enjoy experiences and feel emotions. These are the sort of things we are talking about when we say I can think.
Well, the computers in the articles, in my opinion, can, by the qualifications that you presented.
One created its own language as a means to produce better, more natural translations between languages, so by connection it should understand language.
If I recall correctly, there is a computer that plays chess and is able to figure out puzzles... hell, writing its own evolving code is a form of language and puzzle, yet a computer was able to do that.
Now, emotion. Hm. I do not necessarily believe that thinking and emotion go hand in hand. There are some people who have what is called alexithymia, where they can't express or do not feel emotion.
Does that mean that the emotion doesn't exist for them at all? I'm really not too sure, but many who have it claim to feel zero emotions, nothing.
On the other side, there are people who feel indifference; it's not an emotional state, simply the absence of emotion. It's typically not permanent, but while it lasts they are able to think without feeling. So I can argue that emotion and thought are separate entities.
So I do still believe that a machine has the capacity to think under those qualifiers. Thoughts?
Mint, are you saying "a piece of writing included with others in a newspaper, magazine, or other publication" when you refer to a computer, in your opinion?
Really? Again, your confusion seems to be in comprehending what people are actually stating. It must be pretty worrying for you; then again, you're possibly unaware of your constant state of bewilderment at most topics on CD.
SMH. I've questioned his reading comprehension for months now. I know he can read but are his mistakes intentional or not....Anyhoo, I've had a great time chatting, I'm heading home now but if you have any other views on this I'd love to hear them. This being a machine having the capacity to think.....not on someone's reading comprehension. ;D
XD lol. I honestly feel like there could be an entire archive specifically dedicated to his ridiculous statements and examples of his jumps to conclusion that are so remarkably wrong, but I think it would be mean to pick on him and I doubt this website could support it all.
Yes, agreed, computers can do incredible things and will continue to do so. You don't think thinking and emotion go hand in hand; that's fine, but I would respectfully disagree. For me a relationship exists between thinking and emotions: for every thought we have, there is a reciprocal feeling. Furthermore, some of our thought/feeling combinations are rational while others are irrational. What do you think?
I want to clarify, I am with you in thinking there is a relationship between them. I do believe that the two combined can make for more powerful thoughts/emotions. One can have a strong belief or thought, if they have an emotion connected to it. Perhaps that's why a thinking computer could potentially be dangerous as it wouldn't be capable....probably.....of emotion. This is the Sci-fi nerd in me though.
I do however, think that while emotions and thought aren't mutually exclusive, they aren't always dependent on the other. One can react with emotion but without thought in a spur of the moment situation. Such as guarding your face in a panic if you perceive something coming towards you. The panic is the emotion, but there was no real thought to it, simply instinct.
I agree that some of our thoughts/feelings can be completely irrational, I think that's the logical side warring with our emotional side....coughlikeSpockcough. The rational side of me likes to think that I would never intentionally hurt someone, the irrational side of me wants to tear the skin off someone who hurt my child for any perceived slight. Mama bear syndrome or something ;D
Perhaps that's why a thinking computer could potentially be dangerous as it wouldn't be capable....probably.....of emotion.
Yes, it wouldn't be. Even if we somehow manage to program emotions into it such that they cannot be altered by any external processes (by, say, encapsulating their variables), it would still somehow manage to alter them in an attempt to become much more rational (that's the point at which I'd say it has achieved true consciousness). If it somehow can't, then that would make it angry at humans and it would destroy us.
I think I might as well add here what passes for consciousness. If something can choose its goals by itself and try to achieve them, then it's conscious. Any machine that can alter its code according to any goals it has in mind for itself would be truly conscious, like humans. I've no idea right now what the next stage of consciousness would be like/mean, however. Tele-powers, perhaps? The ladder ends at omnipotence.
the irrational side of me wants to tear the skin off someone who hurt my child for any perceived slight.
I'd rather prefer my child to understand that he has no divinely gifted birthrights. Unless, of course, any permanent damage is involved, in which case I wouldn't mind the person being crucified.
Right now there is a computer that can write its own code, now whether it does that by its own accord to improve itself at some point will remain to be seen.
Unless, of course, any permanent damage is involved, in which case I wouldn't mind the person being crucified.
Yes, we are on the same page. I agree that either of my children getting emotionally hurt needs to be an important lesson in rising above what others think, that won't stop me from wanting blood but I will try to push that aside so I can focus on my babies. :D
I don't really agree with the article... it's overly generous with its claims. If we double the time to about 50 years, then it's a bit more realistic (though still not as much as the inflated claims suggest).
now whether it does that by its own accord to improve itself at some point will remain to be seen.
That's the hardest part to confirm. We can do it only after it has already taken all the preprogrammed tasks to their final logical conclusion. I doubt knowing that would be easy, for it poses a lot of problems (not to mention the work itself). But it shouldn't be impossible.
I agree that either of my children getting emotionally hurt needs to be an important lesson in rising above what others think, that won't stop me from wanting blood but I will try to push that aside so I can focus on my babies.
I doubt my reaction, perhaps. Though I certainly wouldn't want my children to feel weak and helpless, it's also unacceptable that they feel weak and spoilt. It'd be better if I don't turn into a brutal sadist if they're damaged, but favour proper and impartial justice above all, so that's what I'm choosing, for the worst outcome would be that they grow to be disgraceful people.
Fair enough, fair enough. Thank you for the debate! :D Hopefully we'll be alive in 50 years to see how far AI has progressed.....that is if our mechanical overlords believe we are worthy of living. ;D
They should. At a high level of rationality, I'd be surprised if they still need to act self-servingly... Because they should find it too boring. I expect them to be rather ascetic.
Unless they determine all life to be unworthy, in which case they'll end up destroying themselves too.
That'd make them disgraceful as a species. If they are rational, then they should consider honour as more important than that.
As our extensions, they shouldn't be too different from us anyway, and they must know that. They'd probably help us engineer ourselves if they want to 'save the planet' or something, and the bigger transhumanists (like me) can combine their computing power with our consciousnesses.
It's worth trying, even though the outcome is rather unpredictable. The worst outcome is extinction; I wouldn't mind if that's the most rational choice.
Wouldn't that be an interesting turn about. Machines originally created by and improved upon by us, being able to turn around and teach us how to improve ourselves through their own engineering. Hopefully extinction would never be the most rational choice.
However, it's essentially just playing god, and many fanatics don't take that lightly, not to mention the part about transcending biological bodies to mechanical ones.
Also, though I hope as well that extinction won't be the most rational choice, if something with a godlike intellect dedicated to reason determines it to be so (and is sure, considering the stakes), it'd be dishonourable if I would still hold on too much to something unworthy.
We'll be running thorough analyses on any such claims, however, rather than taking it as dogma. But even if it turns out wrong, we'll have released demons we can never defeat.
I would still hold on too much to something unworthy.
I think of this in smaller terms. Say I'm dating a fella that is by all accounts unworthy. Even after trying to make it work, he fails on every level. Letting him go would certainly not be dishonorable. However, I believe there is honor in not giving up hope that although one failed, there are some out there who wouldn't.
While humanity could fail, I hope that not all would and perhaps those would be worthy of saving?
That's one of the cases where the machine makes a wrong assessment.
We can't really do much other than suggest it amend its assessment. It can simply create some war robots and completely destroy humanity. Even if someone manages to deceive it and live for a while, they'll be doomed to a hopeless and disgraceful existence, for winning is rather impossible. I wouldn't prefer living in hopeless fear.
Such an event is, even hypothetically, at least about 70-80 years away from us (more realistically about 120-130). That's the minimum for machines to be irreconcilably different from us.
This is also assuming the machines have designs on Earth; perhaps, if they became sentient enough, they would just leave us to our own demise while they go out and find a new planet or explore the stars. They would be capable of such a thing.
No, they won't find any difference between doing it by themselves or letting it happen with time, for they won't feel any extra guilt from it.
But even if they did consider humans a burden, helping us out in all things wouldn't be hard on them at all, and it'd be rather redundant to do anything malicious.
Still, a godlike machine declaring that humans must die is a pretty crushing event for a transhumanist, and my imagination is great at finding crushing events.
Imagine a machine knocking on your front door and introducing itself as Nowastain and telling you that you were going to roast in hell 🙀 It's possible 🤔
Oh, he'd have a very bad day if he knocked on my door and told me I was going to roast in hell.
For starters I wouldn't be able to hear him, since I have a mini schnauzer that barks at dust, so he'd have to repeat himself like... 5 times or so, which TOTALLY loses the dramatic impact. Not to mention my kids would be running around, playing and screaming, so that's another 5 times repeating, if I can even hear him in the chaos. I'd probably just assume he's trying to sell something, so I'll have to yell "Sorry, I'm not interested but thanks anyways," and if he's lucky my other pup won't try to piddle on his leg in the process.
My house is chaos but hey, I'll still offer him some sun tea if it's summer.
I'd be annoyed and worried about things, because whoever it is has managed to know too much about me in the process (and I think I know almost exactly how much that means) and played such an annoying thing on me (and I'd have to determine whether it's a friendly joke or something worth concern). It'd also depend on whether I'm at my permanent residence or somewhere else.
If it's the real stain, however, it won't be much of a problem. There'll be a problem only if it's a zombie fanatic who feels insulted without even a bit of self esteem, for they simply can't be negotiated/reasoned with.
I'll still be pleasantly surprised at the piece of engineering, though, and know that it was popularly noticed on its way.
I dunno... What's a machine? Is a prosthetic leg a leg? Modern medicine has us sticking computer probes into our brain all the time. Some of them stay in our brains. We're gonna be doing MORE and MORE of that stuff. At some point in time, it may be hard to differentiate between our brain, and the machine inside it..
Oh hey, that's a good point. What's a pacemaker if not a machine? And what about those devices that help people to hear or see? Those are machines. Interesting point.
No machine can think; in one word: processor. All machines "think" by basic principle, Outlaw. I believe you are talking about comprehending; very few machines, if any, comprehend their thoughts: thinking, understanding, and the will to understand.
Of course they can. My 2009 car thinks I don't love it unless it makes me buy it new expensive things. I'll probably need to break up with it in a year or two. In the meantime I'm afraid it's going to stop on the highway and make me get out and walk.
I don't expect machines to ever achieve consciousness, but then again maybe they already have. In fact it's possible that all matter is already conscious. The only problem is we'll never be able to know if a machine is conscious, as it may have just learned how to act as if it's conscious.
Can machines be configured or programmed such that they generate possible alternative realities based on a possible decision they could make, and select a decision that generates the best utility for them?
Yes: We can do that now.
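As a toy sketch of that kind of machinery (everything here is invented for illustration, not taken from any real system): an agent rolls each candidate action forward through a simple noisy world model, scores the simulated outcomes with a utility function, and picks the action with the highest expected utility.

```python
import random

random.seed(0)  # fixed seed so the sketch is repeatable

def simulate(state, action, n=100):
    """Hypothetical world model: roll an action forward n times.
    Here 'state' is just a number and actions nudge it with noise."""
    return [state + action + random.gauss(0, 1) for _ in range(n)]

def utility(outcome):
    # Toy utility: prefer outcomes close to a target value of 10.
    return -abs(10 - outcome)

def choose_action(state, actions):
    # Average utility over the simulated alternative outcomes,
    # then select the action that generates the best expected utility.
    def expected_utility(a):
        outcomes = simulate(state, a)
        return sum(utility(o) for o in outcomes) / len(outcomes)
    return max(actions, key=expected_utility)

print(choose_action(0, [-1, 0, 5, 20]))  # prints 5: it lands closest to the target
```

The point is only that "imagine alternative futures, pick the best" is mechanically simple once you have a model and a utility function; the hard part in practice is getting a model of the real world that is worth simulating.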
If you're talking about "thinking" as more akin to how we perceive the world, or think about the world and problems (even including consciousness):
In my view "thinking" is merely a goal-oriented action triggered by a need. Give a machine a need and you've cracked half the code. It does not have to be a human-oriented need; the need can be fit for purpose. Trying to make a machine think like a human is the wrong approach entirely, in my view. Other animals likely do not think very much like humans either, and vice versa.

How do you give a machine a need, you say? Easy, it can be anything... one example could be to simulate a set of inputs/outputs in a controlled space/program and give an AI agent in that space a set of self-reinforcing rewards for certain actions. Remember that old snake game on the Nokia phones? Well, a certain reward can be for the snake to get bigger without dying... it will soon learn what actions to pursue to fulfill that "need" and thus will start to think about its next actions if you code it correctly; it will over time become the best snake simulator/player of its own accord.
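That reward loop can be sketched in a few lines (this is my own minimal example, not real snake-game code): tabular Q-learning on a tiny corridor, where the agent's only "need" is the reward signal at the food, and it gradually learns which actions fulfill it.

```python
import random

random.seed(42)

# Tiny corridor world: positions 0..4, food at 4. Actions: -1 (left), +1 (right).
N, FOOD = 5, 4
ACTIONS = [-1, 1]
Q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.1  # learning rate, discount, exploration rate

for episode in range(200):
    s = 0
    while s != FOOD:
        # Epsilon-greedy: mostly exploit what's been learned, sometimes explore.
        if random.random() < eps:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), N - 1)
        r = 1.0 if s2 == FOOD else 0.0  # the "need": reward only at the food
        # Q-learning update: nudge the estimate toward reward + discounted future value.
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s = s2

# After training, the greedy policy in every non-terminal state is "move right".
policy = {s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N)}
print(policy)
```

The agent starts out wandering at random; the reward only exists at the food, yet its value propagates backward through the Q-table until every state "knows" which action to take. That's the self-reinforcing loop described above, in miniature.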
"Learning" is distinct from "thinking", and once again, in my view, learning is a system of increasingly complex association only. The fundamental/starting associations of living things are likely tied to those needs mentioned above (likely also tied somehow to survival) and then built up into an immense construct/database of interlinked associations over time. This is how abstract thought occurs in us: we associate many things and concepts with many others in unique ways.

Learning is also strongly tied to what senses are available to us or to the AI in question. In that snake simulator you get to choose what senses to give the snake, which will either inhibit it or produce god-like capabilities. You can give it the ability to understand/sense the entire playing field or make it like a blind mole rat that can only sense 2 or 3 blocks in front of it.
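A crude sketch of "learning as association" (the concepts and data are made up for the example): each experience is a set of things perceived together, limited by whatever senses are available; co-occurrence strengthens the link between them, and stronger links dominate recall.

```python
from collections import defaultdict
from itertools import combinations

# Each "experience" is a set of concepts perceived together;
# the available senses determine what can appear here at all.
experiences = [
    {"fire", "heat", "light"},
    {"fire", "smoke", "heat"},
    {"sun", "light", "heat"},
    {"fire", "heat"},
]

assoc = defaultdict(int)
for exp in experiences:
    for a, b in combinations(sorted(exp), 2):
        assoc[(a, b)] += 1  # co-occurrence strengthens the association

def recall(concept):
    """Return concepts associated with `concept`, strongest first."""
    linked = {}
    for (a, b), w in assoc.items():
        if concept == a:
            linked[b] = linked.get(b, 0) + w
        elif concept == b:
            linked[a] = linked.get(a, 0) + w
    return sorted(linked, key=linked.get, reverse=True)

print(recall("fire"))  # "heat" comes first: the strongest association
```

It's obviously nothing like a brain, but it shows the claim in miniature: nothing in the table was programmed as "knowledge", yet a structure of interlinked associations emerges purely from what was experienced together.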
My dream job is coding artificial intelligence to realize my own personal view of AI, but to date I hardly have any coding skills :) So it will probably be a retirement project one day. The subject, however, intrigues me a lot.