Could a machine think?
The really cool, and again terrifying, thing is that they can supersede their original algorithms and instructions.
Consciousness is only one state of thinking, not a requirement for all forms of thought. Subconscious thought is still thought. Computers think on a subconscious level, not a self-conscious one. The pattern of thought is created by a programmer. Computers do think; the irony is that a computer thinks in a way that, in a human, would be described as mental illness.
Not true at all. The human brain is essentially one big computer, except that instead of running off simple on/off states like a normal computer, it processes information through chemical transfers and electrical signals. There's no reason a computer can't reach consciousness if humans can. There are no special laws of the universe that apply to humans and nothing else.
Yes!! Frighteningly enough, there is a computer that wrote its own play:
one that writes its own code:
and one that invented its own language!
It's quite fascinating and a little terrifying, if I'm really honest with myself. But it's neat.
Hi Mint, fascinating stuff and some great links; when I say "think", what does that mean?
Well, for a start I can understand language, puzzles, and the spoken word; I can also enjoy experiences and feel emotions. These are the sorts of things we are talking about when we say I can think.
Would you now say a machine could think?
Well, the computers in the articles, in my opinion, can by the qualifications that you presented.
One created its own language as a means to produce better, more natural translations between languages, so by extension it should understand language.
If I recall correctly, there is a computer that plays chess, so it is able to figure out puzzles... hell, writing its own evolving code is a form of both language and puzzle, yet a computer was able to do that.
Now, emotion. Hm. I do not necessarily believe that thinking and emotion go hand in hand. There are some people who have what is called alexithymia, where they can't express or do not feel emotion.
Does that mean that the emotion doesn't exist for them at all? I'm really not too sure, but many who have it claim to feel zero emotions, nothing.
On the other side, there are people who feel indifference; it's not an emotional state, simply the absence of emotion. It's typically not permanent, but while it lasts they are able to think without feeling. So I can argue that emotion and thought are separate entities.
So I do still believe that a machine has the capacity to think under those qualifiers. Thoughts?
I dunno... What's a machine? Is a prosthetic leg a leg? Modern medicine has us sticking computer probes into our brains all the time. Some of them stay in our brains. We're gonna be doing MORE and MORE of that stuff. At some point, it may be hard to differentiate between our brain and the machine inside it...
I don't expect machines to ever achieve consciousness, but then again maybe they already have. In fact it's possible that all matter is already conscious. The only problem is we'll never be able to know if a machine is conscious, as it may have just learned how to act as if it's conscious.
Can machines be configured or programmed such that they generate possible alternative realities based on a possible decision they could make, and select a decision that generates the best utility for them?
Yes: We can do that now.
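Here's a minimal sketch of what I mean: the machine simulates the outcome of each possible action ("alternative realities") and picks the one with the best utility. The toy dynamics, goal, and utility function are made-up illustrations, not any real system.

```python
# Utility-based decision making: imagine each alternative future,
# score it, pick the best. All specifics here are hypothetical.

def simulate(state, action):
    """Generate the successor state (deterministic toy dynamics)."""
    return state + action  # actions just shift a numeric state

def utility(state, goal=10):
    """Higher utility the closer the state is to the goal."""
    return -abs(goal - state)

def choose_action(state, actions):
    """Generate each possible alternative and select the best one."""
    return max(actions, key=lambda a: utility(simulate(state, a)))

print(choose_action(7, [-1, 0, 1, 2, 5]))  # -> 2, since 7 + 2 hits the goal
```

Real planners use the same shape, just with richer state models and probabilistic outcomes.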
If you're talking about "thinking" as more akin to how we perceive the world, or think about the world and problems (even including consciousness):
It's impossible to tell.
In my view, "thinking" is merely goal-oriented action triggered by a need. Give a machine a need and you've cracked half the code. It does not have to be a human-oriented need; the need can be fit for purpose. Trying to make a machine think like a human is the wrong approach entirely, in my view. Other animals likely do not think much like humans either, and vice versa. How do you give a machine a need, you say? Easy, it can be anything... one example could be to simulate a set of inputs/outputs in a controlled space/program and give an AI agent in that space a set of self-reinforcing rewards for certain actions. Remember that old Snake game on the Nokia phones? Well, one reward can be for the snake to get bigger without dying... it will soon learn what actions to pursue to fulfill that "need" and thus will start to think about its next actions if you code it correctly - over time it will become the best snake simulator/player of its own accord.
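The "give it a need" idea above can be sketched with tabular Q-learning in a tiny 1-D corridor standing in for the Snake board. The agent's only "need" is the self-reinforcing reward for reaching the food cell; the corridor length, learning rate, and so on are illustrative assumptions, not how any real Snake AI was built.

```python
import random

LENGTH = 5          # corridor cells 0..4; "food" sits at the last cell
ACTIONS = [-1, 1]   # step left or right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

# Q-table: the agent's learned estimate of each action's value per cell.
q = {(s, a): 0.0 for s in range(LENGTH) for a in ACTIONS}

def step(state, action):
    nxt = max(0, min(LENGTH - 1, state + action))
    reward = 1.0 if nxt == LENGTH - 1 else 0.0   # the "need" being fed
    return nxt, reward

random.seed(0)
for _ in range(500):                         # episodes of trial and error
    state = random.randrange(LENGTH - 1)     # start somewhere short of the food
    while state != LENGTH - 1:
        if random.random() < EPSILON:        # occasionally explore
            action = random.choice(ACTIONS)
        else:                                # otherwise exploit what it learned
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        nxt, reward = step(state, action)
        best_next = max(q[(nxt, a)] for a in ACTIONS)
        q[(state, action)] += ALPHA * (reward + GAMMA * best_next - q[(state, action)])
        state = nxt

# The learned policy: which way the agent now prefers to move from each cell.
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(LENGTH - 1)}
print(policy)
```

After training, the agent heads toward the food from every cell of its own accord - nobody told it the route, only what its need was.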
"Learning" is distinct from "thinking", and once again, in my view, learning is a system of increasingly complex association only. The fundamental starting associations of living things are likely tied to those needs mentioned above (likely also tied somehow to survival) and then built up into an immense construct/database of interlinked associations over time. This is how abstract thought occurs in us - we associate many things and concepts with many others in unique ways. Learning is also strongly tied to what senses are available to us, or to the AI in question. In that snake simulator you get to choose what senses to give the snake, which will either inhibit it or produce god-like capabilities. You can give it the ability to understand/sense the entire playing field, or make it like a blind mole rat that can only sense 2 or 3 blocks in front of it.
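The "database of interlinked associations" idea can be sketched very simply: every time two things are sensed together, the link between them strengthens. The observations below are made-up illustrations of the idea, not a real learning system - and note how what can be associated depends entirely on what the learner's senses report.

```python
from collections import defaultdict
from itertools import combinations

# Learning as association: co-occurring sensations strengthen links.
links = defaultdict(int)

def observe(*sensed_together):
    """Strengthen the association between everything sensed at once."""
    for a, b in combinations(sorted(sensed_together), 2):
        links[(a, b)] += 1

# Hypothetical sensory episodes:
observe("food", "red", "round")
observe("food", "red")
observe("danger", "loud")

print(links[("food", "red")])    # -> 2: seen together twice, stronger link
print(links[("danger", "red")])  # -> 0: never co-occurred, no link
```

Abstract thought, on this view, is just traversing a much larger version of that link table in unique ways.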
My dream job would be coding artificial intelligence to realize my own personal view of AI, but to date I hardly have any coding skills :) So it will probably be a retirement project one day. The subject intrigues me a lot, however.