Is it completely possible for AI to become self-aware? I wouldn't think so, because the computer has limitations. But I would like some other opinions on this.
Well, not as of now... but research in the fields of neural networks and other areas may make it possible. And what's a thread like this doing in the C++ section? It's a GD topic...
it is a highly philosophical question; I agree with vasanth that it is possible - just not now - I give it 8-15 years.
Have you guys seen the movie "Pi", where the computer becomes "aware" of its own limitations and breaks down? Good stuff.
some entropy with that sink? entropysink.com
there are two cardinal sins from which all others spring: Impatience and Laziness. - franz kafka
Originally Posted by axon:
>> Have you guys seen the movie "Pi"? <<
Great movie.
There are philosophical debates as to whether or not humans are self-aware (how do you know you're not just a brain in a vat?), so computer AI being self-aware will always be debated even if it is achieved.
I'd have to agree with your instincts, inferno. As long as there's an 'A' in AI we're just faking it! So to the programmer, scientist, or engineer, the question becomes 'how well can we simulate these things'?
Perhaps in another decade, machines will be very good at faking it. But progress on things like the Turing test seems very slow.
A Pentium processor "understands" about 300 instructions (if I remember correctly). It can make calculations, compare numbers, manipulate numbers, and save & retrieve numbers... Maybe the numbers represent letters in the alphabet, or colors of a pixel... but they are just (binary) numbers. The processor can perform a sequence of instructions, and the most important thing it can do is "branch". It can branch to a different sequence of instructions based on the results of the current instruction.
Increasing the speed of the processor, or the number of instructions it can handle, doesn't change the nature of the machine.
A lot of the questions and issues with AI are philosophic. And, science can never answer philosophic questions. Heck, philosophers can't "answer" them either!
The majority of the work on this problem, at least now, really belongs to cognitive science, and not computer science. How can we model something if we don't really know how it works?
>> A lot of the questions and issues with AI are philosophic. And, science can never answer philosophic questions. Heck, philosophers can't "answer" them either! <<
Science is philosophy.
>>but they are just (binary) numbers.
And our perception of colour etc. is just an array of light sensors firing electric impulses or whatever along nerves to our biological CPU, which then processes the information at an astronomical rate and reacts to it according to the chemistry of some 99999999 encyclopedias of DNA instructions. Unless you believe that we are set apart in our sentience by a soul or other such related spiritual entity (which, by the way, I do), I don't see any reason why computers can't eventually become sentient.
And please nobody start a flame war over religion
Just Google It. √
(\ /)
( . .)
c(")(") This is bunny. Copy and paste bunny into your signature to help him gain world domination.
If a machine could become self-aware, it must not be a computer.
blow me ... ...
If you've not read it already you might find Godel, Escher, Bach: An Eternal Golden Braid by Douglas Hoefstader an interesting read on how self awareness might develop.
The simple way to explain it:
any machine that works by following code can never have real intelligence.
Originally Posted by Hermitsky:
>> any machine that works by following code can never have real intelligence. <<
Here's how it goes: define intelligence in such a way that no machine qualifies. Then if (when) someone implements a machine that does that thing, redefine intelligence.
Regards,
Dave
Originally Posted by Dave Evans:
>> Define intelligence in such a way that no machine qualifies. <<
I think no one knows the definition of intelligence.
We think we have intelligence, but maybe we don't.......
>>any machine that works by following code can never have real intelligence.
What do WE do? As far as I know, evolution says that we're just globs of molecules following DNA instructions that somehow randomly mashed together into something coherent.
>>we think we have intelligence, but maybe we don't .......
[misunderstand]I know what you mean. Don't you just hate it when someone thinks they're such a hotshot but they really don't know anything? [/misunderstand]
Dave:
Rather, define intelligence to be something which includes humans but excludes computers. When it comes to a point where no such definition can be found, it can be concluded that computers have become intelligent on the same level as humans. Gee, sounds like the squeeze theorem to me...
On the other hand, sometime in the future we may find that the definition of intelligence becomes reversed, and humans become measured by the standards of machine intelligence.
Machine conversation:
Bit: Gee, I wonder if humans will ever become intelligent.
Byte: Ah! That would be scary. I hear there's some genetic engineers working on that right now...
Bit: You ever heard of the Kernel test? You sit a human and a computer on one side of a wall, and another computer on the other... if the computer can't tell which is the human, then you can conclude that the human has become intelligent.
But you sort of just defined it in a negative way:
Originally Posted by Hermitsky:
>> any machine that works by following code can never have real intelligence. <<
If a computer can do "something" (become sentient), then that "something" is not intelligence.
This is the logical equivalent (actually known as the contrapositive, as I recall) of saying
If "something" is intelligence, then no computer can do that "something".
So you haven't actually defined intelligence, but you have characterised it by creating a conditional test for intelligence that you are free to redefine at any time.
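Dave's equivalence can be spelled out in plain propositional form (just a restatement of the two sentences above, with P standing for "a computer can do X" and Q for "X is intelligence"):

```latex
% P: a computer can do X;   Q: X is intelligence
% The claim and its contrapositive are logically equivalent:
(P \rightarrow \neg Q) \;\Longleftrightarrow\; (\neg\neg Q \rightarrow \neg P) \;\Longleftrightarrow\; (Q \rightarrow \neg P)
```

Which is why the two phrasings say exactly the same thing: ruling computers out of "intelligence" and ruling "intelligence" out of computers are one claim, not two.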
Kind of like the famous Supreme Court justice's statement about pornography (To paraphrase: "I can't define it precisely, but I know it when I see it.")
Regards,
Dave
"It is the brain, the little gray cells on which one must rely. One must seek the truth within---not without."
--- Hercule Poirot