.... and would it be fair to say that software is like the human mind, where the body is the hardware?
Ben
p.s. B.T.W. did you know that the first programmer was female: Augusta Ada Byron (1815-52), the English mathematician?!
No. The computer cannot use logic without human programming. Scientists, according to MIT's Technology Review magazine, are creating Immobots (immobile robots) that can sometimes use logic better than humans, so computers can be very smart when programmed well, but computers have disadvantages. They can't learn by sight/hearing/smell/touch/taste because they have no senses, and they can only hold a certain amount of data.
Well, I suppose it can be considered to think if it can pass the Turing test (of course, I can think of several people who would fail it).
The problem is that a computer is too logical. Think of how many discoveries have been made by someone doing something wrong or making false assumptions that led to new knowledge.
Software is too logical, and too rational to be considered equivalent to a human mind.
You can probably make an AI program that mimics the human brain fairly well and "learns", but no - computers can only do what they're told. No more, no less.
Someday, computers might have AIs good enough that they could really be considered to think, but for now I'd say you're taking the analogy too literally.
<tangent philosophical="true">But a new question arises if computers have advanced to being capable of true intelligent thought and humanlike or better mobility: What will we the humans do? Most of our life is ultimately spent dealing with work and the problems that arise from it. If machines could do virtually all the tasks humans needed done, what would we occupy our time with? How/Would we need to educate ourselves? How would our society be structured?</tangent>
>>If machines could do virtually all the tasks humans needed done, what would we occupy our time with?
Sex.
Amen.
You'd think that'd provide programmers with a lot of incentive to get that AI up to par.
You would think so... but how many programmers get it anyway? What if we're just creating the ultimate party tool for everyone else, and we're doomed to maintaining our creation?
>> doomed to maintaining our creation?
Wouldn't that involve extensive testing?
Well, you might be able to develop an AI that thinks "like a human being", but how aware are we of the way we think?
Can you teach a computer to have a purpose:
1. beyond self-preservation?
2. and beyond definition/determination by a human entity?
Many of our thoughts have nothing to do with improving our lives on a strictly biological level. Or... do they? If so, can computers/AI be taught to develop complex relationships with other entities, such as:
1. the human's manipulative role over its environment
2. the emotional connections between humans and other humans, their world, other organisms, etc.
Define 'think'.
>computers can only do what they're told. No more, no less
I would disagree. If you argue that computers cannot think because they are only following the rules of their programming, then you must conclude that humans cannot think because our brains are only following the rules dictated by physics.
Some people argue that there is some quantum mechanical magic going on inside the brain, and it is this which leads to consciousness. I think that's rubbish. Even if it were true, I don't see why some quantum mechanical hardware couldn't be integrated into a machine.
As for the Turing Test, it is not considered so important in many circles. It is more a test of humanness, rather than intelligence. Would some hypothetical technologically advanced alien pass the Turing Test? Or would we conclude that it wasn't really thinking?
There are some species of spider (Salticidae) which have excellent stereoscopic vision and display remarkable problem-solving intelligence (including forward planning). If a robot could be built which displayed the same ability, many people would conclude it was intelligent. But would it be? It certainly wouldn't be human, that's for sure.
>> Define 'think'.
Hmm... That is a rather big problem.
>> Can you teach a computer to have a purpose:
>> 1. beyond self-preservation?
>> 2. and beyond definition/determination by a human entity?
Does any person really know what their purpose is, aside from biological aspects, if there is one?
>Define 'think'.
Good one. I can't. Don't know if anyone can.
>> It is more a test of humanness
Very true, but if we consider humans to be intelligent (which is debatable), then it makes sense (to me at least) that something that passes for a human could also be intelligent. Granted, this is only a small subset of things considered intelligent, but it is the most easily definable one.
The 'quantum mechanical magic' doesn't really make sense as an argument (which I know is what you're saying). Things on the quantum level do not (apparently) follow causal laws, but rather random probability. On large scales, however, the 'laws' are approximately causal (close enough for government work, so to speak), but those quantum laws apply to everything, not just humans.
>Hmm... That is a rather big problem
I've got a feeling that the answer is blindingly simple, but a paradigm shift is needed to see it.
A lot of problems arise if you require things to be defined. In the case of language, every definition must necessarily be circular because only a finite number of symbols exist. Some things (axioms if you will) must just be implicitly understood, and that I think is where the problem lies.
Hi Zach,
>then it makes sense (to me at least) that something that passes for a human could also be intelligent
But my point is what about things which can't pass as human? Or are we to define intelligence as a property belonging to humans only?
>Hah, that sounds like it's directly taken from the speak program!
Does the speak program mean a lot to you?*
*ELIZA style reply :)
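(For anyone who hasn't seen how little machinery is behind an ELIZA-style reply: it's essentially regex pattern matching with canned templates. Here's a toy sketch in Python - the patterns are invented for illustration, not taken from any real ELIZA implementation:)

```python
import re

# A few illustrative ELIZA-style rules: (pattern, response template).
# The real ELIZA had far more rules plus pronoun swapping.
RULES = [
    (r'.*\bmean (.*) to you\b.*', "Does {0} mean a lot to you?"),
    (r'.*\bI am (.*)', "Why do you say you are {0}?"),
    (r'.*\bcomputer\b.*', "Do computers worry you?"),
]

def reply(text):
    # Return the first rule whose pattern matches, filling in the
    # captured group; fall back to a generic prompt otherwise.
    for pattern, template in RULES:
        m = re.match(pattern, text, re.IGNORECASE)
        if m:
            return template.format(*m.groups())
    return "Tell me more."

print(reply("I am conscious"))   # -> Why do you say you are conscious?
print(reply("Nice weather"))     # -> Tell me more.
```

No understanding anywhere - just syntax, which is rather the point of the joke.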
>> But my point is what about things which can't pass as human? Or are we to define intelligence as a property belonging to humans only?
And a good point indeed. I can't say I really have any decent answer to that. :)
If thinking merely implies the ability to use logic at a certain level, then I'd have to say yes. If it requires other, less definable qualities, then I don't think that an objective answer can ever be reached.
One last point on whether computers can only do what they are explicitly programmed to do. I want to show you something, if you are not familiar with this:
Look at the equation: x <- x * 3.57 * (1 - x)
The <- means we are going to iterate this equation, as I'll show; just treat it as equals for now.
If we choose an initial start value for x between 0 and 1, say 0.2, and work it out:
x = 0.2 * 3.57 * (1 - 0.2) = 0.5712
now we plug 0.5712 back into the equation and calculate a new value:
x = 0.5712 * 3.57 * (1 - 0.5712) = 0.874402099...
we can keep doing this, and it will give us an endless stream of random looking numbers.
So what's my point? Where are all these numbers coming from? We programmed the equation, but have we programmed the numbers? And there is no short-cut way to determine what the 100th iteration value is going to be without going through all the iterations beforehand. So we don't know what the answer is going to be beforehand - how can we have programmed it?
Another property of this equation is that it is 'chaotic'. If we start with an initial value of 0.199999 instead of 0.2, after a number of iterations we will get wildly different numbers from the original. So small rounding errors are also going to have a serious impact on the results.
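Since this is a programming board, here's what that iteration looks like in code - a minimal Python sketch using the same constant (3.57) and the same two starting values as above:

```python
# Iterate the logistic map x <- r*x*(1 - x) with r = 3.57,
# from two nearly identical starting values.
r = 3.57

def iterate(x, n):
    """Apply the map n times and return the whole trajectory."""
    traj = [x]
    for _ in range(n):
        x = r * x * (1 - x)
        traj.append(x)
    return traj

a = iterate(0.2, 40)
b = iterate(0.199999, 40)

print(a[1])   # 0.5712, matching the hand calculation above
print(a[2])   # roughly 0.874402099
# The two trajectories started only 0.000001 apart, yet they
# do not stay in agreement as the iterations pile up.
print(a[40], b[40])
```

There's no table of these numbers stored anywhere in the program - they all fall out of one line of arithmetic.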
If you still think we have programmed this, try using complex numbers for x. Because complex numbers have two components, we could actually plot this on an x-y graph. Have a look at what it looks like (I'm a bit rusty on this & not sure the plot represents the same equation, but there are lots of simple equations which yield plots similar to this and the point is the same.)
http://www.olympus.net/personal/dewey/Deepv.png
http://www.olympus.net/personal/dewey/mandelbrot.html
Where has this thing come from? Have we programmed it, or was it there all along?
I guess my final argument is that I think the propensity for complex behaviour and the formation of complex structures (i.e. brains) is built into the fabric and laws of the universe. Try looking at primes! If it can be revealed with just numbers, then in principle at least, there is no reason why a computer cannot display complex thinking-like behaviour. It need not be programmed explicitly; its behaviour will just emerge.
Am I making sense, or have I just bored you all?
what if you had a computer capable of taking college courses, making friends and doing everything we humans do on a daily basis. is this machine thinking?
...and i guess thinking is a property of being intelligent! someone or something that can make decisions on their own.
but then are they really aware of their existence? probably not!
>> define think
dictionary.com-
To exercise the power of reason, as by conceiving ideas, drawing inferences, and using judgment.
This is obviously not a proper definition, because you can't properly define think, but whatever.
>>but then are they really aware of their existence? probably not!
I agree. Knowledge of existence is a property of intelligence.
i think that true AI is not going to be possible in the near future with our conventional tools and present technology.... Maybe in another 1000 or 2000 years we might have completely new technologies, and we would have explored biology in depth and also the fields of quantum computing etc etc.. then true AI may be possible... But that's a distant ("possibility")...
I always wish that i was born a thousand years later... I could have seen better technology and the fate of mankind.. But then the Egyptians must also have thought the same.. and the humans of the future may also think the same.... :D:D
I remember my dad saying that when he told his grandpa that man had landed on the moon, he never believed it.. so we at present, with present technology in mind and thinking about the present, may not be able to imagine true AI.. even if we do, it may be in today's sense...
When we think of the future of the human race i think we might start migrating to other parts of the universe (no laughing here please... this i assume in terms of thousands of years from now..)... because ultimately the entire human race might end when the sun feels that it has had its day...
So we are the unlucky/lucky ones to be born in this part of the century...
There seem to me to be two separate issues here;
Can computers feasibly solve problems in a human-esque way?
and
Can computers ever experience reality in a subjective sense?
Since we can already approximate certain areas of human problem solving with programs, and there seems to be no barrier to prevent future expansion of AI algorithms, the answer to the first question seems to me an unassailable yes.
The second question is where the debate rages: can you create a conscious computer?
I think so. If we discount Descartes' dualistic model of the conscious mind, we are forced to accept that consciousness and all its trappings are derived from the interactions of a physical system.
The brain in all its glory simply consists of 'electrical' and chemical inputs and outputs.
I think you underestimate the rate of scientific and technological advancement.Quote:
think that true AI is not going to be possible in the near future with our conventional tools and present technology.... Maybe in another 1000 or 2000 years we might have completely new technologies, and we would have explored biology in depth and also the fields of quantum computing etc etc.. then true AI may be possible... But that's a distant ("possibility")...
though technology increments at the rate of a geometric progression (GP).. I think true AI, where there is no difference between a human and a robonoid/humanoid/computer (if you can call it so).. may take another 500 to 1000 years.. Hope i am wrong.. (i want to see it in my lifetime):D.
there is no way it would take that long..i mean look @ it, Mendel discovered inheritance a few hundred years ago...and by now we've cloned animals (possibly humans as well), have the whole map of the human DNA, and research in the biotech field is moving VERY fast...scientists figured out the SARS virus in a matter of weeks. so it's very doubtful that figuring out the human brain's anatomy and how we think is gonna take that long if researchers really work on it.....Quote:
Originally posted by vasanth though technology increments at the rate of a geometric progression (GP).. I think true AI, where there is no difference between a human and a robonoid/humanoid/computer (if you can call it so).. may take another 500 to 1000 years.. Hope i am wrong.. (i want to see it in my lifetime):D.
....but then again, u never know, cancer and AIDS are still here :(
but likewise i too would like 2 c it in my lifetime, and i'm very optimistic about that:)
as for whether computers can think, they'd better be able to, because i have no other explanation for the things mine does at the most inappropriate of times!
:rolleyes: heh
You could answer this question by playing some recent software titles. I've never encountered enemy AI that I would deem as 'smart'.
:D
Like, for instance, I think the first thing they would teach Rainbow Six operatives is how NOT to walk in front of your teammate while he is discharging his firearm. :)
:cool:
:mad: What ........es me off even more is when you get shot by your own men. In Halo it's pretty funny though; the guy that shoots you often says: Get your big ass out of my way.
And in SOCOM it always seems like you're given away by your own men.. They just get up and run right in like an idiot. Of course they die very fast :D.
A little dependent clause..
Humans would still make mistakes; computers that could think (if we define 'think' as dictionary.com did) wouldn't. I believe that's a scary thought (but also a thought that says something about how powerful humans can be).
The computer doesn't even know it made a mistake; we're all programmers and know how a computer really 'thinks'. It did what it was supposed to... React to input and that's basically it.
>> It did what it was supposed to... React to input and that's basically it.
And how do we know that isn't all that humans do? ;)
>>And how do we know that isn't all that humans do?
Because I'm psychic!!!:p
Computers relate to instructions.. Humans relate to past experiences.
I don't know that the question is specific enough to have a definite answer (think is a little vague).
Can computers be programmed to learn? Yes.
Can computers use things they have learned to reach conclusions? Yes.
How different is that from teaching a child to understand a concept and reach conclusions?
I think computers can mimic thinking (in a general sense) pretty well.
I think a better discussion would be on consciousness. Can that be programmed?
Computers can relate to past experiences. All that it really entails is storing in some fashion data from the past, and then looking it up again when needed.
Humans change behavior over time (and experience), but so can computers. A good example would be training an artificial neural network.
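A minimal sketch of that "changing behaviour with experience" idea in Python - a single perceptron (the simplest possible neural unit) learning logical AND from repeated examples. The learning rate and epoch count here are arbitrary choices for illustration:

```python
# Train one perceptron on logical AND. Its "behaviour"
# (the weights) changes with each piece of experience.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w = [0.0, 0.0]   # weights
b = 0.0          # bias
lr = 0.1         # learning rate

def predict(x):
    # Fire (output 1) if the weighted sum of inputs exceeds zero.
    return 1 if w[0]*x[0] + w[1]*x[1] + b > 0 else 0

for epoch in range(20):              # repeated "experience"
    for x, target in data:
        error = target - predict(x)  # nudge weights toward the target
        w[0] += lr * error * x[0]
        w[1] += lr * error * x[1]
        b    += lr * error

print([predict(x) for x, _ in data])   # -> [0, 0, 0, 1]
```

The program starts out getting AND wrong and ends up getting it right, without anyone writing the final weights in by hand - which is all "learning from experience" amounts to here.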
I like stupid computers. The day my system talks back to me and tells me what he/she thinks of me is the day I quit programming.
:D
Can computers program the 'perfect' computer? In theory, yes. That's why, imo, AI is a scary thought.Quote:
Originally posted by tgm
Can computers be programmed to learn? Yes.
As for consciousness; a computer cannot feel, so I'd say no, it can't be programmed. It would never wake up one day thinking 'why am I here?'. But then again; we are only aware of our consciousness because we are taught to be, aren't we? So I guess a computer could be taught it too.
>>Can computers program the 'perfect' computer? In theory, yes. That's why, imo, AI is a scary thought.
You can't program a computer into existence. For a computer to build a computer, it would have to be able to physically put one together. But I agree: we've gotta be careful not to make a completely intelligent computer.
The question then becomes, what is consciousness? What exactly does it mean to be self-aware? Considering the components of a computer and a human (on a rather small scale) are the same, I do not see why a computer could not experience "consciousness", at least in theory. Of course, what is the distinction between a computer and a human?
All this talk seems to be theoretical. Could anybody actually program a computer like that in their lifetime?
Presumably a 'perfect' computer would have infinite computational power, hence it is impossible. 'Perfect' AI means nothing unless we define the purpose of the AI.Quote:
Can computers program the 'perfect' computer? In theory, yes. That's why, imo, AI is a scary thought
Today's computers cannot feel (presumably); how do you know that tomorrow's will not be able to?Quote:
As for consciousness; a computer cannot feel, so I'd say no, it can't be programmed
Consciousness seems pretty self-evident to me.Quote:
we are only aware of our consciousness because we are taught to be, aren't we?
Biology.Quote:
Of course, what is the distinction between a computer and a human?
But biology is just another view of the physics which apply equally to humans and computers which makes dealing with the particular nature of humans (and related creatures) easier to understand. Biology is an artificial construct. Electrons don't care if they are in an organic neuron or the processor of a computer. The same laws apply. Granted, computers have not been developed as much or with the same complexity of human brains, but there is no barrier saying they can't.
We don't even understand our own brains yet, so how do you expect someone to create a brain for something else?
I'm not saying it's particularly feasible, I'm saying it's possible.
... biology is merely the study of specific phenomena: self-replicating chemical systems capable of mutation that have evolved over the past 3.8 billion years.Quote:
But biology is just another view of the physics which apply equally to humans and computers which makes dealing with the particular nature of humans (and related creatures) easier to understand. Biology is an artificial construct. Electrons don't care if they are in an organic neuron or the processor of a computer. The same laws apply. Granted, computers have not been developed as much or with the same complexity of human brains, but there is no barrier saying they can't.
However, we are arguing across points; i think that there probably is no barrier to computers (as we think of them today) becoming conscious.
On the other hand there might be something currently unknown that makes brains and computers very different.
Ultimately we ARE machines, so conscious machines ARE possible, we just don't know how they work.
The main problem with the conscious mind is that it seems fundamentally different from everything else; it appears impossible to envisage how conscious experience could be created from the physical processes available to the universe, and yet it is.
I suspect that our main stumbling block is how we are thinking about the problem.
The essence of thinking, as mentioned before, is really knowing the definition or the semantics of symbols; computers can manipulate symbols but do not know the semantics, since symbol manipulation and calculation are based solely on syntax and have nothing to do with the definition of the symbols.
now, remember, i did not ask if a machine can think. of course, in a sense, we are all machines since we all can manipulate symbols to perform some sort of operation; moreover we can all think!
the actual question here is can a digital computer think! remember that thinking is more than manipulating meaningless symbols; it involves meaningful semantic contents. contents that make sense to the "machine" or the "performer/manipulator".
Right now computers just follow instructions.. their instructions never change... Just the variables. That's not to say their instructions can't change, mabe atach a compiler with the programm and 'teech' it to program.
>> and 'teech' it to program.
We probably should teach it to spell too. :p
Don't veiw my website if you hate spelling.. I have my fixed version on a computer - unoperational though. My computer should learn to fix spelling and programm on it's own:rolleyes:
>>Don't veiw my website if you hate spelling
man, you're site has a lot of spelling errors
Why the compiler? I'm working on a couple of different versions of an AI program right now that associates ideas and then puts what it "learns" into a batch file, or a CGI program (I'm writing a version for each). As long as the original is compiled, you can circumvent most of the security concerns. Speaking of which, do batch files work on Windows XP and all that? Is there a Linux equivalent?Quote:
Right now computers just follow instructions.. their instructions never change... Just the variables. That's not to say their instructions can't change, mabe atach a compiler with the programm and 'teech' it to program.
And you're not too far from the discussion on the fact that the brain functions just like a computer when you break it down. True. No one can argue with that. The mind is what you guys have to worry about. I just got a book about this, and there's some really interesting stuff. Computers today, not just the most advanced ones, can emulate the brain in concept, but not yet in performance. Like someone else said, it may not be feasible right now, but it is possible.
The problem emerges when you learn that almost all sensory information, which most people can recall for years, travels to the center of the brain and then disappears. It enters the energy field that surrounds your body. Whether you want to call this your spirit, the "force", or whatever, it's got some pretty interesting implications for the subject. The field takes the shape of what you will be as an adult when photographed as a baby (I forgot the technical term for the technique used to photograph this, I apologize), diseases make it change shape, it's powerful enough to set off general-use light detectors like the ones on street-light systems, and if it can retain memory better than a CD, it's gotta be useful if we can harness the power that makes it tick.
define mind!
would you say mind = consciousness? if so, computers have no minds since they are not conscious, meaning not aware of their existence? or are they? hmmm....
>It enters the energy field that surrounds your body.
If you believe that, you'll believe anything.
With a couple of exceptions, you're all talking in circles.
Correct. I believe you're an idiot.
>> With a couple of exceptions, you're all talking in circles.
Indeed. Any discussion like this is bound to go in circles, by the very nature of our language. We can't concretely define anything. We just have to make assumptions that some things are understood.
I can't say I know much about what you're saying Sean, but I have heard about it.
...... i rather think not.Quote:
It enters the energy field that surrounds your body
With the introduction of RAM, AI can "think". Period.
computers don't think because they aren't capable of enough processing power yet. But eventually they'll be able to out-process humans, and eventually neural nets (no joke) will be used in such a way as to allow them (with special software of course) to learn and evolve on their own. This will all take place well after Moore's paradigm of computing runs out, and instead of computing power doubling it will be increasing at an even higher exponential rate using nanotechnology and three-dimensional processors (some even say, after that, quantum mechanical computers).
EDIT:
You cannot possibly define consciousness. You can always examine sequences and patterns of impulses, for example in the human brain, but how do you define the subjective experience? Nobody here can without a doubt say that I am conscious, they can only make objective observations and rationalizations but they cannot possibly ever examine or share the subjective experience. Rather, my subjective experience cannot be explained or proved to exist. It will get to a point with computers that they will say they are conscious, and we will believe them for the same reason you'd believe me if I said "I'm conscious, believe me"Quote:
As for consciousness; a computer cannot feel, so I'd say no, it can't be programmed
EDIT1:
Neural nets will be used and will be just as functional, capable and logical as the human brain (the human brain is a neural net where each neuron connects to approx 1,000 other neurons, and each pathway can send approx 200 impulses per second, which results in about 200GB theoretical bandwidth per second, but this is a loose number so don't correct me). The way that neural nets work is that something is passed in as an input, and then it is sent along these pathways. The original signal is sent to the first neuron(s), but from then on the signal is altered based on the function of each neuron. The output is a completely different signal that determines a reaction. For example, one person may look at a soap opera and begin to cry because of his/her neuron configuration, but another person (like me) may look at a soap opera and laugh based on his/her neuron configuration. Imagine you have a computer, but instead of a single 1.5 - 3.5 inch processor you have 90 trillion molecule-sized processors, each 80 billion times more powerful than the computer you are currently typing at. The same logic problems could be solved the same way the human brain solves logic problems, and these computers will be able to learn by calibrating each part of the neural net (so each 'neuron' knows how to change the signal when it is passed to itself), but there will have to be software that directs the calibration of each part of the neural net.Quote:
The problem is that a computer is too logical. Think of how many discoveries have been made by someone doing something wrong or making false assumptions that led to new knowledge.
Software is too logical, and too rational to be considered equivalent to a human mind.
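That "same input, different neuron configuration, different reaction" idea can be sketched as a toy feedforward pass in Python. All the weights here are invented for illustration - nothing like real neuron counts or biology:

```python
import math

def forward(x, layers):
    """Pass signal x through layers of (weight-matrix, bias-list) pairs,
    squashing each neuron's weighted sum through tanh."""
    for weights, biases in layers:
        x = [math.tanh(sum(w*v for w, v in zip(row, x)) + b)
             for row, b in zip(weights, biases)]
    return x

signal = [0.9, 0.1]   # the same "soap opera" input for both people

# Two different "neuron configurations" (made-up weights):
person_a = [([[1.0, -1.0], [0.5, 0.5]], [0.0, 0.0]),
            ([[2.0, 0.0]], [0.0])]
person_b = [([[-1.0, 1.0], [0.5, 0.5]], [0.0, 0.0]),
            ([[2.0, 0.0]], [0.0])]

print(forward(signal, person_a))  # one reaction (positive output)
print(forward(signal, person_b))  # a different reaction to the same input
```

Identical input, different pathway functions, opposite outputs - which is the whole "cry vs. laugh" point in miniature.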
Silvercord,
do you believe that you exist? if so, do you believe that you have an internal life that is solely yours and private to the external world? do you experience things the way you, and only you, experience them? if so, then you are conscious!!
based on the above questions, the problem here is that we can no longer solve the relation of consciousness to our brain simply by asserting the reduction of consciousness to brain processes or our behaviour. i think what you mean is we cannot explain consciousness scientifically, hence we can altogether forget about consciousness. well, this is what eliminative materialists do! they eliminate talk about things that cannot be explained scientifically!
when it comes to consciousness we have to ask ourselves: "what is it like to be X?" and if there is an answer to that question, then there is something it is like to be X, hence X is conscious. what i mean here is consciousness is the source of our own feeling of what it is like to be X.
now with regards to computers, well, they are simply not aware of their existence. they do not have feelings and experiences the way you and i do. hence they are not conscious.
the difference between us and computers is really about consciousness that is shaped by our qualia (experiences) and feelings! and thinking is a part of our conscious lives; but we still don't have a complete definition of what thinking is and how it relates to our consciousness.
lastly, we simply don't understand what thinking involves because we cannot turn this subjective point of view into an objective one in order to use scientific data and experiments to provide a logical explanation.
tnx,
Ben
But how do you know if I'm conscious? You would just have to rely on what I'm telling you. You'd also have to just rely on what the computers tell you; otherwise the only way you can prove I'm conscious is by examining the patterns in which my neurons fire.Quote:
if so, then you are conscious!!
>> Neural nets will be used and be just as functional and capable and logical as the human brain... software that directs the calibration of each part of the neural net.
The point wasn't that current software is not as logical as humans; it was that humans are not as logical as current software. Granted, a large enough artificial neural network could mimic a human, but they have not yet evolved to the point where one could easily develop its own 'opinions', if you will.
>> now with regards to computers, well, they are simply not aware of their existence
And how exactly do you know this? I know you were arguing against this type of question, but what does it mean to be aware of one's self? You can ask, but that only works if you both speak a common language. It's not a solid test. Computers, in a way, are shaped by their 'experiences'. For example, they behave much differently with an upgraded OS, or a loose memory card, or new software.
benny, prove to me and zach that you are conscious.
zach, what i meant by this comment:
all that means is we would basically just write enough software so that the computers would be able to teach themselves, with no limit to their learning. This means that even if we gave them no arms or body, they would eventually be able to figure out how to manipulate their environment, first with electrons (the only thing they have access to), then to somehow create bodies for themselves (sorry, I don't know the implementation details :))Quote:
but there will have to be software that directs the calibration of each part of the neural net.
and ben i wasn't joking about the part about you proving you are conscious. As far as I'm concerned you are just a mindless being that produces text and images just like
the ones at this site
Silvercord, i cannot prove that you're conscious, but i can prove that I am conscious, and hence it is a reasonable assumption that you are conscious too, since we both have human brains and we both display conscious-like behaviour.
For a computer displaying conscious like behaviour there are two possibilities:
A) It is conscious or
B) It's not conscious; it is merely programmed to fake conscious behaviour.
There is no problem with B; there is a big hole in our understanding around A, which is where the debate springs from.
However I, like you, suspect computers are capable of becoming conscious (as in A) at some point.
Prove it to whom? Yourself?Quote:
but i can prove that I am conscious
yes
This guy claims to be conscious:
yes, it's just a bunch of messages done in console mode, but it certainly proves a point, considering all of you are only producing text messages, and I cannot be completely certain that any of you are conscious.
But you know YOU are, therefore it seems reasonable to suppose we are too.
Nice prog. btw :) - Though on occasion, it can't spell all that well =)
I think you guys ought to read this
It basically discusses, in a lot more detail, the options described by Clyde.
Apologies if this has already been mentioned, but this is a long thread.
Yeah I know, what's up with my silly computer? ;)Quote:
Nice prog. btw - Though on occasion, it can't spell all that well =)
everyone here should read The Age of Spiritual Machines by Ray Kurzweil; it directly discusses this stuff.
I am not a great fan of Penrose's theories on consciousness; i've skimmed through The Emperor's New Mind and it seemed rather far-fetched.Quote:
think you guys ought to read this
I've just started "Consciousness Explained" by Daniel Dennett; so far it's been very good.
like computers thinking?Quote:
seemed rather far-fetched
I don't think computers thinking is particularly far fetched.
But i do think that consciousness being at its heart a quantum mechanical process is.
ok Clyde
Clyde is right though. I'm sure he can explain this better than I can, but the human brain is definitely NOT a quantum mechanical process. In quantum mechanics, every possible outcome exists. A 'digital' quantum mechanical computer would not have a bitset; rather, every single place is both a 1 and a 0 at the same time. This is rather abstract, but basically that bitset exists in that 1-0 state until some sort of a conscious (oh god) process observes it, and then the actual sequence of bits is set. What does this mean? This means that even if every single particle in the universe was a processor capable of a (excuse my crappy notation) million billion trillion calculations per nanosecond, even all of them combined would never be able to out-process a single quantum mechanical processor. This topic is also discussed in The Age of Spiritual Machines.Quote:
But i do think that consciousness being at its heart a quantum mechanical process is.
Humans are capable of a lot of computation, but when you come right down to it they still seem to be digital at their main core, because each neuron is firing or it is not; but this seems to combine into a larger analog signal (neural net) that can be altered in various ways, thus explaining the different behaviors of different humans.