
Can Computers Think?




Ben_Robotics
06-28-2003, 06:04 PM
... and would it be fair to say that software is like the human mind, where the body is the hardware?

Ben
P.S. By the way, did you know that the first programmer was a woman: Augusta Ada Byron (1815-52), the English mathematician?!

gcn_zelda
06-28-2003, 06:08 PM
No. The computer cannot use logic without human programming. Scientists, according to MIT's Technology Review magazine, are creating ImoBots (immobile robots) that can sometimes use logic better than humans, so computers can be very smart when programmed well, but computers have disadvantages. They can't learn by sight/hearing/smell/touch/taste because they have no senses, and they can only contain a certain amount of data.

Zach L.
06-28-2003, 07:20 PM
Well, I suppose it can be considered to think if it can pass the Turing test (of course, I can think of several people who would fail it).

The problem is that a computer is too logical. Think of how many discoveries have been made by someone doing something wrong or making false assumptions that led to new knowledge.

Software is too logical, and too rational to be considered equivalent to a human mind.

-KEN-
06-28-2003, 07:20 PM
You can probably make an AI program that mimics the human brain fairly well and "learns", but no - computers can only do what they're told. No more, no less.

Unregd
06-28-2003, 08:35 PM
Someday, computers might have AIs good enough that they could really be considered to think, but for now I'd say you're taking the analogy too literally.

<tangent philosophical="true">But a new question arises if computers have advanced to being capable of true intelligent thought and humanlike or better mobility: What will we the humans do? Most of our life is ultimately spent dealing with work and the problems that arise from it. If machines could do virtually all the tasks humans needed done, what would we occupy our time with? How/Would we need to educate ourselves? How would our society be structured?</tangent>

XSquared
06-28-2003, 09:25 PM
>>If machines could do virtually all the tasks humans needed done, what would we occupy our time with?

Sex.

*ClownPimp*
06-28-2003, 10:16 PM
Amen.

Zach L.
06-28-2003, 10:47 PM
You'd think that'd provide programmers with a lot of incentive to get that AI up to par.

confuted
06-28-2003, 11:21 PM
You would think so... but how many programmers get it anyway? What if we're just creating the ultimate party tool for everyone else, and we're doomed to maintaining our creation?

XSquared
06-28-2003, 11:23 PM
>> doomed to maintaining our creation?
Wouldn't that involve extensive testing?

DDPhoenix
06-29-2003, 01:18 AM
Well, you might be able to develop an AI that thinks "like a human being", but how aware are we of the way we think?

Can you teach a computer to have a purpose:
1. beyond self-preservation?
2. and beyond definition/determination by a human entity?

Many of our thoughts have nothing to do with improving our lives on a strictly biological level. Or... do they? If so, can computers/AI be taught to develop complex relationships with other entities, such as:
1. the human's manipulative role over its environment
2. the emotional connections between humans and other humans, their world, other organisms, etc.

Clyde
06-29-2003, 08:04 AM
Define 'think'.

Davros
06-29-2003, 08:10 AM
>computers can only do what they're told. No more, no less

I would disagree. If you argue that computers cannot think because they are only following the rules of their programming, then you must conclude that humans cannot think because our brains are only following the rules dictated by physics.

Some people argue that there is some quantum mechanical magic going on inside the brain, and it is this which leads to consciousness. I think that's rubbish. Even if it were true, I don't see why some quantum mechanical hardware couldn't be integrated into a machine.

As for the Turing Test, it is not considered so important in many circles. It is more a test of humanness than of intelligence. Would some hypothetical technologically advanced alien pass the Turing Test? Or would we conclude that it wasn't really thinking?

There are some species of spider (Salticidae) which have excellent stereoscopic vision and display remarkable problem-solving intelligence (including forward planning). If a robot could be built which displayed the same abilities, many people would conclude it was intelligent. But would it be? It certainly wouldn't be human, that's for sure.

Zach L.
06-29-2003, 08:10 AM
>> Define 'think'.

Hmm... That is a rather big problem.

>> Can you teach a computer to have a purpose:
>> 1. beyond self-preservation?
>> 2. and beyond definition/determination by a human entity?

Does any person really know what their purpose is, aside from biological aspects, if there is one?

Davros
06-29-2003, 08:11 AM
>Define 'think'.

Good one. I can't. Don't know if anyone can.

Zach L.
06-29-2003, 08:16 AM
>> It is more a test of humanness

Very true, but if we consider humans to be intelligent (which is debatable), then it makes sense (to me at least) that something that passes for a human could also be intelligent. Granted, this is only a small subset of things considered intelligent, but it is the most easily definable one.

The 'quantum mechanical magic' doesn't really make sense as an argument (which I know is what you're saying). Things on the quantum level do not (apparently) follow causal laws, but rather random probability. On large scales, however, the 'laws' are approximately causal (close enough for government work, so to speak), but those quantum laws apply to everything, not just humans.

Davros
06-29-2003, 08:18 AM
>Hmm... That is a rather big problem

I've got a feeling that the answer is blindingly simple, but a paradigm shift is needed to see it.

Zach L.
06-29-2003, 08:26 AM
A lot of problems arise if you require things to be defined. In the case of language, every definition must necessarily be circular because only a finite number of symbols exist. Some things (axioms if you will) must just be implicitly understood, and that I think is where the problem lies.

Davros
06-29-2003, 08:26 AM
Hi Zach,

>then it makes sense (to me at least) that something that passes for a human could also be intelligent

But my point is what about things which can't pass as human? Or are we to define intelligence as a property belonging to humans only?

Davros
06-29-2003, 08:29 AM
>Hah, that sounds like it's directly taken from the speak program!

Does the speak program mean a lot to you?*

*ELIZA style reply :)

Zach L.
06-29-2003, 08:31 AM
>> But my point is what about things which can't pass as human? Or are we to define intelligence as a property belonging to humans only?

And a good point indeed. I can't say I really have any decent answer to that. :)

If thinking merely implies the ability to use logic at a certain level, then I'd have to say yes. If it requires other, less definable qualities, then I don't think that an objective answer can ever be reached.

Davros
06-29-2003, 09:09 AM
One last point on whether computers can only do what they are explicitly programmed to do. I want to show you something, if you are not familiar with this:

Look at the equation: x <-> 3.57 * x * (1 - x)

The <-> means we are going to iterate this equation (I'll show you what that means in a moment); for now, just treat it as an equals sign.

If we choose an initial start value for x between 0 and 1, say 0.2, and work it out:

x = 0.2 * 3.57 * (1 - 0.2) = 0.5712

now we plug 0.5712 back into the equation and calculate a new value:

x = 0.5712 * 3.57 * (1 - 0.5712) = 0.874402099

we can keep doing this, and it will give us an endless stream of random looking numbers.

So what's my point? Where are all these numbers coming from? We programmed the equation, but have we programmed the numbers? There is no shortcut to determine what the 100th iteration's value is going to be without going through all the iterations beforehand. So we don't know what the answer is going to be in advance; how, then, can we have programmed it?

Another property of this equation is that it is 'chaotic'. If we start with an initial value of 0.199999 instead of 0.2, then after a number of iterations we will get wildly different numbers from the original. So small rounding errors are also going to have a serious impact on the results.

If you still think we have programmed this, try using complex numbers for x. Because complex numbers have two components, we can actually plot the result on an x-y graph. Have a look at what it looks like (I'm a bit rusty on this and not sure the plot represents exactly the same equation, but there are lots of simple equations which yield plots similar to this, and the point is the same).

http://www.olympus.net/personal/dewey/Deepv.png

http://www.olympus.net/personal/dewey/mandelbrot.html

Where has this thing come from? Have we programmed this, or was it there all along?

I guess my final argument is that I think the propensity for complex behaviour and the formation of complex structures (i.e. brains) is built into the fabric and laws of the universe. Try looking at the primes! If it can be revealed with just numbers, then in principle at least there is no reason why a computer cannot display complex, thinking-like behaviour. It need not be programmed explicitly; its behaviour will just emerge.

Am I making sense, or have I just bored you all?
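
If you want to see it for yourself, here is a minimal C++ sketch of the iteration (the 3.57 and the two seeds are the values from above; the iteration count and the 0.01 'visible drift' threshold are arbitrary choices of mine):

// Minimal sketch of the iteration from the post: x <- 3.57 * x * (1 - x).
// Two seeds differing only in the sixth decimal place are iterated side by
// side; the loop reports the first step at which they visibly drift apart.
#include <cmath>
#include <cstdio>

int main() {
    const double r = 3.57;   // value used in the post
    double a = 0.2;          // original seed
    double b = 0.199999;     // almost identical seed
    int first_split = -1;

    for (int i = 1; i <= 5000; ++i) {
        a = r * a * (1.0 - a);
        b = r * b * (1.0 - b);
        if (first_split < 0 && std::fabs(a - b) > 0.01)
            first_split = i;   // remember when the sequences parted company
    }

    std::printf("after 5000 iterations: a = %.6f, b = %.6f\n", a, b);
    if (first_split > 0)
        std::printf("the sequences first differed by more than 0.01 at step %d\n",
                    first_split);
    return 0;
}

With r = 3.57 the map is only just past the onset of chaos, so the drift is slow, but the two streams do part company, even though nothing except the one-line rule was ever programmed.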

Ben_Robotics
06-29-2003, 09:14 AM
What if you had a computer capable of taking college courses, making friends, and doing everything we humans do on a daily basis? Is this machine thinking?

...and I guess thinking is a property of being intelligent: someone or something that can make decisions on their own.

But then, are they really aware of their existence? Probably not!

gcn_zelda
06-29-2003, 10:32 AM
>> define think
dictionary.com-
To exercise the power of reason, as by conceiving ideas, drawing inferences, and using judgment.

This is obviously not a proper definition, because you can't properly define think, but whatever.

>>But then, are they really aware of their existence? Probably not!

I agree. Knowledge of existence is a property of intelligence.

vasanth
06-29-2003, 10:39 AM
I think that true AI is not going to be possible in the near future with our conventional tools and present technology... Maybe in another 1000 or 2000 years, when we have completely new technologies, have explored biology in depth, and have advanced in fields like quantum computing, true AI may be possible... But that's a distant ("possibility")...

I always wish that I had been born a thousand years later... I could have seen better technology and the fate of mankind... But then the Egyptians must also have thought the same, and the humans of the future may also think the same... :D:D

I remember my dad saying that when he told his grandpa that man had landed on the moon, he never believed it... so we, with present technology in mind and thinking about the present, may not be able to imagine true AI, or if we do, perhaps only in today's sense...

When we think of the future of the human race, I think we might start migrating to other parts of the universe (no laughing here please... I am assuming this would be thousands of years from now)... because ultimately the entire human race might end when the sun decides it has had its day...

So we are the unlucky/lucky ones to be born in this part of the century...

Clyde
06-29-2003, 10:50 AM
There seem to me to be two separate issues here:

Can computers feasibly solve problems in a human-esque way?

and

Can computers ever experience reality in a subjective sense?

Since we can already approximate certain areas of human problem solving with programs, and there seems to be no barrier to prevent future expansion of AI algorithms, the answer to the first question seems to me an unassailable yes.

The second question is where the debate rages: can you create a conscious computer?

I think so. If we discount Descartes' dualistic model of the conscious mind, we are forced to accept that consciousness and all its trappings are derived from the interactions of a physical system.

The brain in all its glory simply consists of 'electrical' and chemical inputs and outputs.

Clyde
06-29-2003, 10:51 AM
I think that true AI is not going to be possible in the near future with our conventional tools and present technology... Maybe in another 1000 or 2000 years, when we have completely new technologies, have explored biology in depth, and have advanced in fields like quantum computing, true AI may be possible... But that's a distant ("possibility")...


I think you underestimate the rate of scientific and technological advancement.

vasanth
06-29-2003, 11:05 AM
Though technology advances at the rate of a geometric progression (GP), I think true AI, where there is no difference between a human and a robonoid/humanoid/computer (if you can call it that), may take another 500 to 1000 years. I hope I am wrong (I want to see it in my lifetime) :D.

Commander
06-29-2003, 11:47 AM
Originally posted by vasanth
Though technology advances at the rate of a geometric progression (GP), I think true AI, where there is no difference between a human and a robonoid/humanoid/computer (if you can call it that), may take another 500 to 1000 years. I hope I am wrong (I want to see it in my lifetime) :D.

There is no way it would take that long. I mean, look at it: Mendel discovered inheritance a few hundred years ago, and by now we have cloned animals (possibly humans as well), we have the whole map of the human DNA, and research in the biotech field is moving VERY fast... scientists figured out the SARS virus in a matter of weeks. So it's very doubtful that figuring out the human brain's anatomy and how we think is going to take that long if researchers really work on it...

...but then again, you never know; cancer and AIDS are still here :(

But likewise I too would like to see it in my lifetime, and I'm very optimistic about that :)

As for whether computers can think, they had better be able to, because I have no other explanation for the things mine does at the most inappropriate of times!

gcn_zelda
06-29-2003, 12:14 PM
:rolleyes: heh

VirtualAce
06-30-2003, 07:10 AM
You could answer this question by playing some recent software titles. I've never encountered enemy AI that I would deem as 'smart'.


:D


Like, for instance, I think the first thing they would teach Rainbow Six operatives is how NOT to walk in front of your teammate while he is discharging his firearm. :)


:cool:

zornthrohacker
06-30-2003, 08:18 AM
:mad: What ........es me off even more is when you get shot by your own men. In Halo it's pretty funny, though; the guy that shoots you often says: "Get your big ass out of my way."

And in SOCOM it always seems like you're given away by your own men. They just get up and run right in like idiots. Of course, they die very fast :D.

kristy
06-30-2003, 10:13 AM
A little dependent clause...
Humans would still make mistakes; computers that could think (if we define 'think' as dictionary.com did) wouldn't. I believe that's a scary thought (but also one that says something about how powerful humans can be).

zornthrohacker
06-30-2003, 12:00 PM
The computer doesn't even know it made a mistake; we're all programmers and know how a computer really 'thinks'. It did what it was supposed to... react to input, and that's basically it.

Zach L.
06-30-2003, 12:18 PM
>> It did what it was supposed to... react to input, and that's basically it.

And how do we know that isn't all that humans do? ;)

gcn_zelda
06-30-2003, 12:21 PM
>>And how do we know that isn't all that humans do?
Because I'm psychic!!!:p

zornthrohacker
06-30-2003, 12:56 PM
Computers relate to instructions. Humans relate to past experiences.

tgm
06-30-2003, 01:06 PM
I don't know that the question is specific enough to have a definite answer (think is a little vague).
Can computers be programmed to learn? Yes.
Can computers use things they have learned to reach conclusions? Yes.
How different is that from teaching a child to understand a concept and reach conclusions?
I think computers can mimic thinking (in a general sense) pretty well.

I think a better discussion would be on consciousness. Can that be programmed?

Zach L.
06-30-2003, 01:07 PM
Computers can relate to past experiences. All it really entails is storing data from the past in some fashion, and then looking it up again when needed.

Humans change behavior over time (and with experience), but so can computers. A good example would be training an artificial neural network, as in the sketch below.
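
As a toy illustration (a single perceptron rather than a real network; the AND-function examples and the learning rate are made up for the sketch), here is roughly what 'changing behavior with experience' looks like in code:

// A single artificial neuron that changes its behavior with "experience":
// it starts with zero weights and adjusts them whenever it gets a stored
// example wrong. Here it learns the logical AND function.
#include <cstdio>

int main() {
    // Stored past experience: inputs and the desired output (logical AND).
    const int inputs[4][2] = {{0, 0}, {0, 1}, {1, 0}, {1, 1}};
    const int target[4]    = {0, 0, 0, 1};

    double w0 = 0.0, w1 = 0.0, bias = 0.0;   // what the neuron "knows" so far
    const double rate = 0.1;                 // learning rate (arbitrary choice)

    for (int epoch = 0; epoch < 20; ++epoch) {
        for (int i = 0; i < 4; ++i) {
            double sum = w0 * inputs[i][0] + w1 * inputs[i][1] + bias;
            int out = (sum > 0.0) ? 1 : 0;
            int err = target[i] - out;
            // Adjust behavior based on the mistake (the perceptron rule).
            w0   += rate * err * inputs[i][0];
            w1   += rate * err * inputs[i][1];
            bias += rate * err;
        }
    }

    for (int i = 0; i < 4; ++i) {
        double sum = w0 * inputs[i][0] + w1 * inputs[i][1] + bias;
        std::printf("%d AND %d -> %d\n", inputs[i][0], inputs[i][1], (sum > 0.0) ? 1 : 0);
    }
    return 0;
}

Nothing in the program says how to compute AND; the weights end up encoding it only because the neuron keeps correcting itself against its stored examples.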

VirtualAce
06-30-2003, 01:17 PM
I like stupid computers. The day my system talks back to me and tells me what he/she thinks of me is the day I quit programming.

:D

kristy
06-30-2003, 01:19 PM
Originally posted by tgm
Can computers be programmed to learn? Yes.

Can computers program the 'perfect' computer? In theory, yes. That's why, imo, AI is a scary thought.

As for consciousness; a computer cannot feel, so I'd say no, it can't be programmed. It would never wake up one day thinking 'why am I here?'. But then again, we are only aware of our consciousness because we are taught to, aren't we? So I guess a computer could be taught it too.

gcn_zelda
06-30-2003, 01:22 PM
>>Can computers program the 'perfect' computer? In theory, yes. That's why, imo, AI is a scary thought.

A computer can't program another computer. For a computer to build a computer, it would have to be able to physically put one together. But I agree: we've gotta be careful not to make a completely intelligent computer.

Zach L.
06-30-2003, 01:30 PM
The question then becomes, what is consciousness? What exactly does it mean to be self-aware? Considering the components of a computer and a human (on a rather small scale) are the same, I do not see why a computer could not experience "consciousness", at least in theory. Of course, what is the distinction between a computer and a human?

gcn_zelda
06-30-2003, 01:33 PM
All this talk seems to be theoretical. Could anybody actually program a computer like that in their lifetime?

Clyde
06-30-2003, 02:55 PM
Can computers program the 'perfect' computer? In theory, yes. That's why, imo, AI is a scary thought


Presumably a 'perfect' computer would have infinite computational power, hence it is impossible. 'Perfect' AI means nothing unless we define the purpose of the AI.



As for consciousness; a computer cannot feel, so I'd say no, it can't be programmed


Today's computers cannot feel (presumably); how do you know that tomorrow's will not be able to?



we are only aware of our consciousness because we are taught to, aren't we?


Consciousness seems pretty self-evident to me.



Of course, what is the distinction between a computer and a human?


Biology.

Zach L.
06-30-2003, 03:01 PM
But biology is just another view of the physics that apply equally to humans and computers, one which makes dealing with the particular nature of humans (and related creatures) easier to understand. Biology is an artificial construct. Electrons don't care if they are in an organic neuron or the processor of a computer. The same laws apply. Granted, computers have not been developed as much or with the same complexity as human brains, but there is no barrier saying they can't be.

Fahrenheit
06-30-2003, 03:19 PM
We don't even understand our own brains yet, so how do you expect someone to create a brain for something else?

Zach L.
06-30-2003, 03:21 PM
I'm not saying it's particularly feasible; I'm saying it's possible.

Clyde
06-30-2003, 04:31 PM
But biology is just another view of the physics that apply equally to humans and computers, one which makes dealing with the particular nature of humans (and related creatures) easier to understand. Biology is an artificial construct. Electrons don't care if they are in an organic neuron or the processor of a computer. The same laws apply. Granted, computers have not been developed as much or with the same complexity as human brains, but there is no barrier saying they can't be.


... biology is merely the study of specific phenomena: self-replicating chemical systems capable of mutation that have evolved over the past 3.8 billion years.

However, we are arguing across points; I think that there probably is no barrier to computers (as we think of them today) becoming conscious.

On the other hand, there might be something currently unknown that makes brains and computers very different.

Ultimately we ARE machines, so conscious machines ARE possible, we just don't know how they work.

The main problem with the conscious mind is that it seems fundamentally different from everything else; it appears impossible to envisage how conscious experience could be created from the physical processes available to the universe, and yet it is.

I suspect that our main stumbling block is how we are thinking about the problem.

Ben_Robotics
06-30-2003, 04:44 PM
The essence of thinking, as mentioned before, is really knowing the definitions or the semantics of symbols; computers can manipulate symbols but do not know the semantics, since symbol manipulation and calculation are based solely on syntax and have nothing to do with the definitions of the symbols.

Now, remember, I did not ask if a machine can think. Of course, in a sense, we are all machines, since we can all manipulate symbols to perform some sort of operation; moreover, we can all think!

The actual question here is: can a digital computer think? Remember that thinking is more than manipulating meaningless symbols; it involves meaningful semantic content, content that makes sense to the "machine" or the "performer/manipulator".
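
To make the point about syntax and semantics concrete, here is a toy C++ sketch in the spirit of the ELIZA-style reply mentioned earlier in the thread (the keywords and canned responses are invented for illustration). It shuffles symbols around by rule and produces something that looks like conversation, with nothing anyone would call understanding.

// Toy ELIZA-style responder: pure symbol manipulation driven by syntax.
// It matches a keyword and echoes a canned template; no meaning is involved.
#include <iostream>
#include <string>

int main() {
    std::string line;
    std::cout << "You: ";
    while (std::getline(std::cin, line)) {
        std::string reply = "Why do you say that?";              // default template
        if (line.find("I am") != std::string::npos)
            reply = "How long have you been" +
                    line.substr(line.find("I am") + 4) + "?";    // echo the rest back
        else if (line.find("computer") != std::string::npos)
            reply = "Do computers worry you?";
        else if (line.find("mother") != std::string::npos)
            reply = "Tell me more about your family.";
        std::cout << "ELIZA: " << reply << "\nYou: ";
    }
    return 0;
}

Type "I am tired of exams" and it will dutifully ask "How long have you been tired of exams?" without the faintest idea what an exam is, which is exactly the gap between manipulating symbols and knowing what they mean.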