Hey,
It's obvious that biotechnology and all that is controversial at times, but do you think there will ever be a time when we will be the ones being questioned? I mean with AI or something. Tell me what you think.
Sean Mackrory
[email protected]
Yes
No
Maybe
I don't know
Could you repeat the question
I don't watch Malcolm in the Middle
I hear in ~20 years, with the way our advancements are going in AI and robotics, we should be able to create an artificial soccer team that could whip any World Cup team's rear. With that in mind, yes, I believe some people will be ........ed (heck, there are already champion chess players that are ........ed). Seriously though, I'm sure deeper ethical and moral issues will arise when true AI comes about (and it will, eventually). However, imo, I think bio-technology and genetic engineering will be a bigger issue (for the next half century or so anyway; but of course what the hell do I know?).
Last edited by greenRoom; 11-08-2001 at 06:24 PM.
Would we humans ever become so stupid as to make a machine as smart and logical as us?
Probably. And Sean, I'm from Colorado too; where in the state are ya?
I don't think there will ever be 'true' AI... I know that in the future anything is possible but I think what most people class as AI is the ability to learn and apply knowledge...
I don't reckon that a computer will ever be able to decide for itself what to learn, and a computer will never ask itself why.
e.g. A human hears a piece of music and (as computers will be able to) decides who probably wrote it, what type of music it is, what response it was designed to provoke, etc. I don't think that anyone will be able to make a computer ask itself why that particular piece of music was written, what mindset the composer was in, and whether the musician could also paint!?
If it did become possible to cause a computer to think laterally, it would probably become boundless... imagine a computer designed by NASA to design and build more human-friendly space stations spending its entire existence pondering whether the inventor of Post-its knew an effective means of catching trout!
Maybe, the "safe" answer:
It could pass as yes;
it could pass as no;
we don't know.
^ That's where in CO I live, by the way! I enjoyed your responses. They were really good!
I plan on making some kind of robot with all the characteristics a human has, then upload my brain to it and I'll become immortal!!!
Oskilian
Settle down, dude!
I don't think robots can attain the genuine intelligence of a human. Many emotions (not available to robots) drive our intelligence, and we don't even know it. I think robots are capable of a lot, but only to a certain extent.
--Garfield the Programmer
1978 Silver Anniversary Corvette
To believe that human minds could be duplicated in robots, you must believe that the human mind is completely deterministic -- that is, there is no free will.
A machine can make decisions, but it MUST do so only in a mathematical fashion. A transistor passes current or blocks it completely under the laws of physics. With the appropriate voltages, current flows or does not flow. There is no "will" of a machine. You might simulate it to the point where nobody can tell the difference, but any machine can be completely determined by its initial state and all past and present inputs. That is, given the initial state of the machine (factory settings) and given a complete history of the machine's inputs, you could duplicate every output, 100% of the time, barring malfunction. You could write an equation (granted, it would be enormously complex) to tell you what it "thinks" at any time, where all of its past inputs and its initial state are inputs into the equation.
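The determinism claim above can be made concrete with a toy sketch (my own illustration, not from the post): for any deterministic machine, the same initial state plus the same input history reproduces every output exactly, so a replay is indistinguishable from the original run.

```python
# Toy illustration of the determinism argument: a machine's outputs are
# fully fixed by its initial state and its complete input history, so
# replaying the same inputs always reproduces every output.

def run_machine(initial_state, inputs):
    """A trivial deterministic 'machine': state is an integer, and each
    input deterministically updates the state and emits an output."""
    state = initial_state
    outputs = []
    for x in inputs:
        state = (state * 31 + x) % 1000  # arbitrary fixed update rule
        outputs.append(state)
    return outputs

history = [3, 1, 4, 1, 5]
first_run = run_machine(42, history)
replay = run_machine(42, history)  # same initial state, same input history
print(first_run == replay)         # always True: no "will", only the rule
```

The update rule here is arbitrary; any machine whose next state depends only on the current state and the current input behaves the same way, which is exactly the "enormously complex equation" point above.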
Of course, you could argue that humans have no free will. If what we call "thought" is solely caused by neurons in the brain firing, then we have no free will -- the true decisions are simply made by chemicals exciting membranes and the electrical summation of these impulses either triggering or failing to trigger a neuron's firing. Then, you could write the same equations about the human, and they, too, would be horribly complicated, but an equation WOULD EXIST to predict perfectly the human's mind at any point in time.
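The neuron picture described above (impulses summed electrically, with the total either triggering a firing or not) is itself a mechanical rule, and can be sketched in a few lines. This is my own toy model, not from the post; the numbers are made up:

```python
# Toy sketch of the neuron-as-summation picture: incoming impulses are
# weighted, summed, and the neuron fires iff the total crosses a threshold.

def neuron_fires(impulses, weights, threshold):
    """Return True if the weighted sum of impulses reaches the threshold."""
    total = sum(i * w for i, w in zip(impulses, weights))
    return total >= threshold

impulses = [1.0, 0.5, 0.0]   # hypothetical incoming signals
weights = [0.6, 0.8, 0.3]    # hypothetical synaptic strengths
print(neuron_fires(impulses, weights, threshold=0.9))  # prints True
```

Nothing in this rule consults a "will"; given the same impulses, weights, and threshold, the outcome is fixed, which is the force of the argument.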
If we have no free will, but are ourselves deterministic machines, determined by the initial state of our brain as it developed in the fetus, and determined by every single input that ever entered our brain, then machines could duplicate us in every way.
It is impossible to make a machine that truly has free will, because all machines are slaves to the laws of physics. AI can never create a machine with a will -- but it could end up proving that humans lack a true will, and that we never truly choose anything (in other words, that every 'choice' we make is completely determined, and all our thoughts and acts could be completely predicted, with enough information about the neurons of the brain).