
View Full Version : Learning?



bob20
12-09-2002, 05:16 PM
I was having a discussion the other day with my teacher about the future development of a real AI. In my opinion a real AI is one that can learn, grow, and act independently of everything. We decided that the key to it all was one simple question: how does something learn? Not "how do you learn?" but "how does the actual learning take place?" A truly mind-boggling question for me. I speculated, but that's all. We think that learning is the key; everything can work off of that in one way or another. If anyone out there has any insight into this matter, I would like to hear it.

TechWins
12-09-2002, 05:31 PM
>>how does the actual learning take place

I'd say learning takes place from a series of events, depending on the situation and the state of mind. I don't really have enough to say right now, but I hope this turns into a very detailed thread with a lot of explanatory posts. I like the topic of this thread a lot. Good thinking!

adamviper
12-10-2002, 07:48 AM
I think that learning is a process that can only happen in something biological, because of the independent thought and feedback that a computer won't be able to get on its own.

ammar
12-10-2002, 08:18 AM
Machine learning is definitely different from human learning, and the future of AI is to try to make machines learn like humans.

adamviper
12-10-2002, 08:37 AM
I don't think that humans will ever be smart enough to make TRUE AI, because nobody will ever really understand how some things, like learning, actually work.

deathstryke
12-10-2002, 08:40 AM
How can a machine be made to learn like a human? Learning has much to do with experiences and a machine's experiences would be vastly different from those of a human. And if it did learn, would we be able to recognize it as learning when it is so vastly different from our own learning?

adamviper
12-10-2002, 08:42 AM
But you need to know how true learning happens before you can put it in a computer or a program. You can't do C++ without knowing how C++ works.

deathstryke
12-10-2002, 08:49 AM
Good point. However, what if learning truly depends on the individual? Sure, there are some innate susceptibilities in humans that allow us a certain amount of knowledge of what will be learned from a given experience, but with a machine we are dealing with an entirely different set of susceptibilities that we won't know are there until we try to teach it something and it learns an entirely different lesson than was intended. If it learns a lesson at all.

adamviper
12-10-2002, 08:51 AM
What defines an individual? Does a soul? Would a computer have one? How is a soul obtained?

deathstryke
12-10-2002, 09:03 AM
[refuses to discuss religion][paraphrasing]An individual: an entity with a unique identity, with awareness of self and of at least some consequences of its actions.[/paraphrasing](or is it...?)[/refuses to discuss religion]

PJYelton
12-10-2002, 09:33 AM
Learning to me is the ability to recognize and remember patterns. To that end, I see no reason why AI can't "learn" in the future, although you are probably right that it will likely learn different things than a human would.
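
Just to make that concrete: even something as crude as the little sketch below would count as "recognizing and remembering patterns." It only counts which event tends to follow which and predicts the most frequent follower it has seen; the events and names here are made up purely for illustration.

#include <iostream>
#include <map>
#include <string>

// Toy "learner": remembers how often one event follows another,
// then predicts the most frequently seen follower.
struct PatternMemory {
    std::map<std::string, std::map<std::string, int>> counts;

    void observe(const std::string& prev, const std::string& next) {
        ++counts[prev][next];                    // remember the pattern
    }

    std::string predict(const std::string& prev) const {
        auto it = counts.find(prev);
        if (it == counts.end()) return "?";      // nothing learned yet
        std::string best = "?";
        int bestCount = 0;
        for (const auto& follower : it->second)
            if (follower.second > bestCount) { best = follower.first; bestCount = follower.second; }
        return best;
    }
};

int main() {
    PatternMemory m;
    m.observe("thunder", "rain");
    m.observe("thunder", "rain");
    m.observe("thunder", "sunshine");
    std::cout << m.predict("thunder") << '\n';   // prints "rain"
}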

deathstryke
12-10-2002, 09:41 AM
Obvious patterns may not be obvious. And what happens if the machine starts extrapolating patterns from a few occurrences?

Clyde
12-10-2002, 09:45 AM
They have already made AI that can learn; there's even some basic 'learning' in video games where the computer adapts to the way you play. (I can't name any off the top of my head, but I'm certain I've seen it mentioned in some games.)

PJYelton
12-10-2002, 10:04 AM
Obvious patterns may not be obvious. And what happens if the machine starts extrapolating patterns from a few occurrences?

I'm not saying it'll be easy, but I just think it'll be possible. Extrapolating patterns from few occurrences is extremely common in human learning as well, e.g. a person "learning" that traveling is too dangerous based on one plane crash. And how many times have you seen humans NOT learning the obvious when the obvious is hitting them in the face? It's a problem with human learning, but it is in no way an immovable obstacle. I don't see why it should be one for machine learning either.

vasanth
12-10-2002, 11:00 AM
Well, weak AI is now in existence; we see it in various systems and machines. But real AI (i.e. something that can think on its own, make decisions, understand, etc.) will maybe take another century... Fields such as neuroscience are trying to learn how our brain works and are also researching whether this can be done with machines...

But I think it will take another 100 or 200 years. This may sound too long in the era of computers...

Clyde
12-10-2002, 11:28 AM
Computers can already pass the Turing test.

...
12-10-2002, 11:47 AM
Originally posted by Clyde
Computers can already pass the Turing test.

Sadly, I can't... :(

I think the biggest obstacle in computer AI is abstract thinking. Sure, a computer can recognize patterns, but getting one to create new ideas would be very difficult.

It will be a long time before we see computers with a sense of humor.

PJYelton
12-10-2002, 12:27 PM
I think the biggest obstacle in computer AI is abstract thinking. Sure, a computer can recognize patterns, but getting one to create new ideas would be very difficult.

It will be a long time before we see computers with a sense of humor.

I'd have to agree with you there. I tend to see most of our abstract thinking stemming ultimately from emotions, something I'm not sure a computer could emulate very well, if at all.

bob20
12-10-2002, 12:42 PM
They have already made AI that can learn; there's even some basic 'learning' in video games where the computer adapts to the way you play. (I can't name any off the top of my head, but I'm certain I've seen it mentioned in some games.)
The learning that takes place in video games isn't true learning. The "AI" is just reacting. Its adaption is just its reaction to something. It isn't reacting because it has learned to react, it is reacting because someone programmed it to.

abstract thinking stemming ultimately from emotions
I was just discussing this last night. Abstract thinking can but doesn't have to stem from emotion. The learning of concepts is truly a key to it all.

Clyde
12-10-2002, 12:55 PM
"The learning that takes place in video games isn't true learning. The "AI" is just reacting. Its adaption is just its reaction to something. It isn't reacting because it has learned to react, it is reacting because someone programmed it to."

What makes you think there is a distinction?

- I'm not saying there isn't; I'm just interested to hear why you think there is.

When I'm playing an opponent at a computer game and I notice he is using x-tactics over and over, I adjust to compensate. As far as I can see, AI can basically do the same thing, albeit with poorer pattern recognition; the difference is that with computers we can see the internal workings, whereas with humans we can't.
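
To put the idea roughly in code: the kind of adaptation I mean could be as simple as counting what the player keeps doing and picking a counter to it. The tactic names and the counter table below are invented; this is only a sketch of the idea, not how any particular game actually does it.

#include <iostream>
#include <map>
#include <string>

// Rough sketch of the "notice x-tactics, compensate" loop.
class AdaptiveOpponent {
    std::map<std::string, int> seen;               // how often each player tactic was observed
    const std::map<std::string, std::string> counters{
        {"rush", "turtle"}, {"turtle", "expand"}, {"expand", "rush"}};
public:
    void observe(const std::string& playerTactic) { ++seen[playerTactic]; }

    // Counter whatever the player has done most often so far.
    std::string respond() const {
        std::string favourite = "rush";            // arbitrary default before any data
        int best = 0;
        for (const auto& [tactic, count] : seen)
            if (count > best) { favourite = tactic; best = count; }
        auto it = counters.find(favourite);
        return it != counters.end() ? it->second : "expand";
    }
};

int main() {
    AdaptiveOpponent ai;
    for (int i = 0; i < 3; ++i) ai.observe("rush");   // player keeps rushing
    std::cout << ai.respond() << '\n';                // AI answers with "turtle"
}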

PJYelton
12-10-2002, 01:13 PM
I was just discussing this last night. Abstract thinking can but doesn't have to stem from emotion. The learning of concepts is truly a key to it all.

Well, I guess it depends on what you mean by abstract thinking. For me I'm talking about things like sense of humor, love, hatred, desire, curiosity, etc and all the thoughts stemming from these. A computer would have a very hard time with them. What non-emotion based abstract thinking do you feel comps would still have a very hard time with? All the things I can think of at the moment are either at least partially emotional or are just a much broader example of pattern recognition.

adrianxw
12-10-2002, 04:02 PM
I can't think how many times I've made this point on this and other boards, but again...

>>>
things like sense of humor, love, hatred, desire, curiosity, etc and all the thoughts stemming from these.
<<<

... these are indications of HUMAN intelligence. Why does an AI need to exhibit human intelligence - it is not human.

Tell the average alien who has crossed interstellar space a typical human joke, and he/she/it/they won't get it - are they not intelligent?

Correctly define "intelligent" and you'll be on the way to solving this problem.

TechWins
12-10-2002, 04:24 PM
But I think it will take another 100 or 200 years. This may sound too long in the era of computers...

I say it will take maybe another 10-20 years. Technology and scientific knowledge increase at an exponential rate. We will discover more in the next 10 years than we probably have in the past 50 years. This is why I think any sort of huge technological leap is entirely possible in the coming decades.

compjinx
12-10-2002, 04:32 PM
Hmmm, for a computer to think like a human it would need a HUGE database of information that constantly affects and molds the current process. Computers are so "non-human" because they have very little knowledge (or experience), so even with the right programming you still won't have real AI.
When an AI computer looks at something like "1+1=?", it will (after identifying the string) give an answer of two.
A human, on the other hand, might start thinking back to when he was a child, looking at his first math book with its picture of one apple, a plus sign, and another apple, the result being two.

Now as you can see the end result is the same, two, but the human has more in his mind and therefore can think in different ways. For example, the computer will go on, not thinking another thing about "1+1=?", but the human might go away thinking of various childhood events, or about food.

Although, in retrospect of what I have just said, I suppose that a computer also needs the ability to pull out items from memory that have little to do with the current thought (like the numbers being associated with apples), but were somehow related through some experience (like the math book with the one apple + one apple = two apples illustration). I have doubts that a huge database of information, the association of unrelated ideas, and anything else I just said will help very much with this subject, but I figured I should get my two cents in.
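
Something like the toy association store below is the minimal version of what I mean; how a real system would weight or prune the links is a completely open question, and the "apples" entries are just placeholders.

#include <iostream>
#include <map>
#include <set>
#include <string>

// Toy associative memory: every experience ties two ideas together,
// and recalling one idea drags the tied ideas back out.
struct AssociativeMemory {
    std::map<std::string, std::set<std::string>> links;

    void experience(const std::string& a, const std::string& b) {
        links[a].insert(b);              // "1+1=2" gets tied to "two apples",
        links[b].insert(a);              // and "two apples" back to "1+1=2"
    }

    const std::set<std::string>& recall(const std::string& idea) {
        return links[idea];              // everything ever tied to this idea
    }
};

int main() {
    AssociativeMemory mind;
    mind.experience("1+1=2", "picture of two apples");
    mind.experience("picture of two apples", "childhood math book");
    for (const auto& thought : mind.recall("picture of two apples"))
        std::cout << thought << '\n';    // prints "1+1=2" and "childhood math book"
}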

Not to mention that the way information is entered into a computer shouldn't be as strict. For example, 1+1=2 is a very basic math statement; a computer would take such "absolute facts" as absolute facts, when really they should be treated more like ideas. That might lead the computer to think thoughts like "does 1+1 really equal two?", and may lead to more human thoughts, like doubting entered information EVEN if it has no conflicting information.

Well, that's my two cents. I now have to go and get some cookies from my governor's mansion.

Unregd
12-10-2002, 08:55 PM
The human mind is complex but, in theory, not beyond replication with computer technology. If you think about it, abstract thinking and creative thought are really a more complicated mix of reactions to emotions, environmental input, and instinct. Humans have been given the distinct advantage of having a millennia-evolved genome; however, I do not doubt the possibility of a computer program someday, in effect, emulating many of the built-in characteristics and instincts present in it.

A computer could develop a sense of humor the way a human does: by finding a pattern behind what other objects in the environment label funny. A computer might then learn that statements about President Bush and his simple mind evoke a positive reaction (defined by coded "instinct" and past experience) in other interactive objects (real human beings) in the environment.
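
A crude sketch of that feedback loop: keep a score per joke topic and nudge it up or down based on the reaction observed. The topics and the +/-1 "reward" scheme here are purely illustrative assumptions, not how a real system would have to do it.

#include <iostream>
#include <map>
#include <string>

// Crude "sense of humor" learner: a score per topic, nudged by feedback.
class HumorLearner {
    std::map<std::string, double> score;
public:
    void feedback(const std::string& topic, bool audienceLaughed) {
        score[topic] += audienceLaughed ? 1.0 : -1.0;   // coded "instinct": laughter is good
    }
    bool worthTelling(const std::string& topic) const {
        auto it = score.find(topic);
        return it != score.end() && it->second > 0.0;   // only topics that have paid off so far
    }
};

int main() {
    HumorLearner h;
    h.feedback("politics", true);
    h.feedback("politics", true);
    h.feedback("puns", false);
    std::cout << std::boolalpha
              << h.worthTelling("politics") << ' '      // true
              << h.worthTelling("puns") << '\n';        // false
}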

New ideas, anyway, are most often syntheses of old ideas combined in new ways. I see no reason why a computer could not eventually think critically and independently, weighing ideas according to the stimulation system programmed into it.

The challenge is, as previously stated, for the computer to obtain the experience so that it can make informed decisions. It would likely take a long period of exposure to the environment until it obtains sufficient experience.

bob20
12-10-2002, 09:04 PM
When I'm playing an opponent at a computer game and I notice he is using x-tactics over and over, I adjust to compensate. As far as I can see, AI can basically do the same thing, albeit with poorer pattern recognition; the difference is that with computers we can see the internal workings, whereas with humans we can't.
It's not the difference in actions, it's the difference in the cause of the action. A person can compensate, and can program a computer to do the same. But, how does the computer learn to compensate by itself, without human intervention?

Well, I guess it depends on what you mean by abstract thinking. For me I'm talking about things like sense of humor, love, hatred, desire, curiosity, etc and all the thoughts stemming from these.
When I talk of abstract thinking, I am talking more about concepts than emotions.

... these are indications of HUMAN intelligence. Why does an AI need to exhibit human intelligence - it is not human.
It doesn't need to exhibit human intelligence (emotion); it needs to be able to function on its own.

Clyde
12-11-2002, 06:48 AM
"It's not the difference in actions, it's the difference in the cause of the action. A person can compensate, and can program a computer to do the same. But, how does the computer learn to compensate by itself, without human intervention?"

In which case the only difference is that computers are programmed by people, whereas people are programmed via genetics.

Of course, our genetic programs are amazingly adaptable and change hugely with input, whereas computer code is currently quite limited. But that's not a fundamental problem.

Polymorphic OOP
12-11-2002, 07:00 AM
I'm 100% absolutely, positively certain that one can create a computer which can "learn."

Why? Because, if you really wanted to get down to the extreme lowest level, you could use a computer to simulate the movement of subatomic particles, atoms, molecules, cells, tissues, organs, etc. When broken down like that, it is conceivable, though obviously incredibly complex -- but that's not to say impossible. The fact is that anything in nature can be broken down into simple parts, and everything in nature has to follow rules. This is exactly what a computer is great at doing -- breaking things down and gradually building them back up into a complex object. The only major difference that separates computers from purely simulating the real world is that computers are digital and the world is analog.

With a strong understanding of the brains of even very simple life-forms, it's very possible that a computer can be made to simulate thinking. A brain, while not fully understood by humans, is still a real object -- and by real, I mean that it's not "magical." Once again, it can be broken down into simple parts with rules just like anything else. Once we understand the human brain enough to fully understand learning (and we already understand it quite a lot more than many people realize), you can bet your life that one of the first things that will be done is a computer simulation of it. It's already being attempted in some forms; it's just a matter of time and humans' ability to learn...
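
To make the "simple parts following rules" point concrete: a toy leaky integrate-and-fire neuron is roughly the level of abstraction a lot of brain simulations start from. All the constants below are arbitrary; the point is only that the rule for a single part really is this small.

#include <iostream>
#include <vector>

// Toy leaky integrate-and-fire neuron: charge builds up with input,
// leaks away each step, and the cell "fires" once past a threshold.
struct Neuron {
    double potential = 0.0;
    static constexpr double leak = 0.9;        // fraction of charge kept each step
    static constexpr double threshold = 1.0;   // firing threshold (arbitrary units)

    bool step(double input) {
        potential = potential * leak + input;
        if (potential >= threshold) {
            potential = 0.0;                   // reset after firing
            return true;                       // spike
        }
        return false;
    }
};

int main() {
    Neuron n;
    std::vector<double> inputs = {0.3, 0.3, 0.3, 0.3, 0.0, 0.6};
    for (double in : inputs)
        std::cout << (n.step(in) ? "spike" : "quiet") << '\n';
}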

Davros
12-11-2002, 07:09 AM
I have spent my life (since I was 14 years old) trying to answer this question. I don't know what the answer is, and I don't think there is anyone who does. I do know the following, however.

Before the 19th Century people defined water as a colourless, odorless, tasteless liquid. A definition which served some uses, but offered no insight into what water is. Definitions for 'learning' & 'intelligence' are of a similar nature today.

I believe it will take a paradigm shift to gain a deeper insight into intelligence, after which everything will seem obvious (at least to those who can understand it).

Davros
12-11-2002, 07:19 AM
>Hmmm, for a computer to think like a human it would need a HUGE database of information that constantly affects and molds the current process.

Guess what? Have a look at www.cyc.com, a project which aims to give a computer all the common-sense knowledge it needs and has been running for around 15 years now. Guess who's funding it? Here's a clue - it begins with M.

I don't think Cyc will lead to a truly 'intelligent' computer, though.

adamviper
12-11-2002, 07:25 AM
If learning needs a paradigm shift before we can create a TRUE AI, then how and when will that happen? I still don't believe it is possible even in 100 years.

Polymorphic OOP
12-11-2002, 07:28 AM
Originally posted by adamviper
If learning needs a paradigm shift before we can create a TRUE AI, then how and when will that happen? I still don't believe it is possible even in 100 years.

Agreed. It will take quite a bit of time before humans will be able to fully understand the brains of even fairly simple multicellular life-forms.

Funny how "simple" is so relative.

Davros
12-11-2002, 07:31 AM
>then that how and when will that happen

It could be tomorrow, a hundred years away, or never. If I figure it out, I'll let you know. :)

adamviper
12-11-2002, 08:01 AM
but the way that general socity is declineing it will almost never be possiable to create an AI

Polymorphic OOP
12-11-2002, 08:02 AM
Originally posted by adamviper
but the way that general socity is declineing it will almost never be possiable to create an AI

Yeah, some people can't even write a single sentence without multiple typos!

adamviper
12-11-2002, 08:06 AM
with this decline an AI will never be obtained

Polymorphic OOP
12-11-2002, 08:06 AM
:D

...
12-11-2002, 08:17 AM
The thing about that, though, is that modern society won't be the one to create the first true AI. No discovery was ever made by the collective masses. The great discoveries have all come from individuals or small groups of elite and educated scientists.

The real problem with the world is that those intelligent thinkers and the general masses are growing farther and farther apart. As technology gets more complex it becomes less accessible to the general public. Eventually it won't be presidents and kings ruling the world; it will be the Gateses and the Einsteins.

Polymorphic OOP
12-11-2002, 08:18 AM
Originally posted by ...
Eventually it won't be presidents and kings ruling the world; it will be the Gateses and the Einsteins.

I'd rather have Einstein as president instead of Bush :rolleyes:

adamviper
12-11-2002, 08:25 AM
I would not want AI to be created, because it would be the end of the world.

deathstryke
12-11-2002, 08:45 AM
I'd rather have Einstein as president instead of Bush

But then again, intellectuals don't usually make very good leaders of men; they tend to seem too distant, too suspicious to the average man.

adamviper
12-11-2002, 10:17 AM
According to dictionary.com, learning is the act of gaining knowledge, and knowledge is "the sum or range of what has been perceived, discovered, or learned." So a computer can learn but still not be intelligent.