Artificial Intelligence

  • Thread starter Danoff
Swift
That wasn't my point bud. I was saying WHERE did emotion come from?

Natural selection. Natural biological variation due to the combined genetic code of parents. The traits resulting from that biological variation (provided by the gene pool) are then selected via survival of the fittest - resulting in a bias in the genetic code. That bias yields long-term favorable traits and adaptation, one of which is the existence of emotional and instinctual responses - part of which developed in animals long before we ever showed up on the scene.
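
As a rough illustration of the selection bias described above (not anything actually posted in the thread), here is a minimal Python toy model; the single "emotionality" trait and all the numbers are invented. The only point it shows is that a heritable trait which improves survival becomes more common over the generations.

```python
import random

POP_SIZE = 200
GENERATIONS = 30

def make_child(parent_a, parent_b):
    # Offspring trait = parental average plus a little random variation (mutation).
    return (parent_a + parent_b) / 2 + random.gauss(0, 0.05)

# Start with a population whose hypothetical "emotionality" trait is spread between 0 and 1.
population = [random.random() for _ in range(POP_SIZE)]

for gen in range(GENERATIONS):
    # Survival of the fittest: a higher trait value means a higher chance of surviving.
    survivors = [t for t in population if random.random() < 0.5 + 0.5 * t] or population
    # Survivors pair off at random and repopulate the next generation.
    population = [make_child(random.choice(survivors), random.choice(survivors))
                  for _ in range(POP_SIZE)]
    print(f"generation {gen:2d}: average emotionality = {sum(population) / POP_SIZE:.3f}")
```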
 
danoff
Natural selection. Natural biological variation due to the combined genetic code of parents. The traits resulting from that biological variation (provided by the gene pool) are then selected via survival of the fittest - resulting in a bias in the genetic code. That bias yields long-term favorable traits and adaptation, one of which is the existence of emotional and instinctual responses - part of which developed in animals long before we ever showed up on the scene.

But at one time, there had to be beings without emotion (were evolution true), so what was the catalyst for it?

All I'm saying is that AI could evolve into an emotional state of awareness.
 
Swift
But at one time, there had to be beings without emotion (were evolution true), so what was the catalyst for it?

All I'm saying is that AI could evolve into an emotional state of awareness.


Sure, I would say that lots of lower species like earthworms and bacteria, and maybe even some higher-level species like sharks (who eat their own young at times), also may have no emotion. It's a hard thing to test, though.

My point is that the factors that contributed to us getting emotion would not exist for a computer or robot - so they won't evolve into that state.
 
Swift
But at one time, there had to be beings without emotion (were evolution true), so what was the catalyst for it?...

The catalyst for the development of emotion would be survival of the species. The more emotional we are about our offspring, the better their chances for survival. That applies for all species, of course.

The offspring of more-emotional members of the species will have a better survival rate and grow up to have more offspring that carry the trait for emotion, spreading the trait throughout the gene pool.

In fact, the development of emotion has gotten out of hand in our species, hasn't it? We're to the point now where we're ruled by it. We could probably do better with less emotion, and more reason.
 
Zardoz
The catalyst for the development of emotion would be survival of the species. The more emotional we are about our offspring, the better their chances for survival. That applies for all species, of course.

The offspring of more-emotional members of the species will have a better survival rate and grow up to have more offspring that carry the trait for emotion, spreading the trait throughout the gene pool.

In fact, the development of emotion has gotten out of hand in our species, hasn't it? We're to the point now where we're ruled by it. We could probably do better with less emotion, and more reason.

Ok, so that means they would have to be self-aware. So, if a machine is put in charge of building other similar machines (I, Robot), what would keep it from developing emotions?
 
Swift
Ok, so that means they would have to be self-aware. So, if a machine is put in charge of building other similar machines (I, Robot), what would keep it from developing emotions?

The lack of a need to "care" for offspring.
 
danoff
If Robots have rights, they cost a fortune to make, AND you have to get them to agree to a salary - they might be more expensive than people are now.

Plus this brings up a point that I hadn't really considered before. If you think robots who are self-aware and intelligent would have rights, what do you think about how those robots are created? Many people think that genetically engineering children is bad. Perhaps you want your child to have one arm or one eye, so you (sometime in the future when this is possible) genetically engineer your child to have one arm. Is that a violation of the child's rights? I would say yes. But what about improvements? What about three arms? I would still say yes.

But then if you have a self-aware, intelligent robot - would it be ethical to design it with one arm? Or no arms?

I think the idea of granting equal rights to intelligent robots is as safe for humanity as giving a few dozen nukes to bin Laden.

Even if they are expensive at first, choosing between employees and robots would be a... no-brainer (ha!). Why hire a complete accounting department when one machine can do the job? And as a bonus, if the same machine is fast enough, it can also do some work in your operations or engineering department!

Intelligent machines could multi-task, work much longer than 40 hours a week, make fewer mistakes, communicate much more efficiently between themselves and adapt to new tasks much more easily (a doctorate in thermodynamics? Just download it). Humans just don't stand a chance against that.
 
jpmontoya
I think the idea of granting equal rights to intelligent robots is as safe for humanity as giving a few dozen nukes to bin Laden.

Even if they are expensive at first, choosing between employees and robots would be a... no-brainer (ha!). Why hire a complete accounting department when one machine can do the job? And as a bonus, if the same machine is fast enough, it can also do some work in your operations or engineering department!

Intelligent machines could multi-task, work much longer than 40 hours a week, make fewer mistakes, communicate much more efficiently between themselves and adapt to new tasks much more easily (a doctorate in thermodynamics? Just download it). Humans just don't stand a chance against that.

True enough. Can you say Matrix, anyone?
 
jpmontoya
I think the idea of granting equal rights to intelligent robots is as safe for humanity as giving a few dozen nukes to bin Laden.

Even if they are expensive at first, choosing between employees and robots would be a... no-brainer (ha!). Why hire a complete accounting department when one machine can do the job? And as a bonus, if the same machine is fast enough, it can also do some work in your operations or engineering department!

Intelligent machines could multi-task, work much longer than 40 hours a week, make fewer mistakes, communicate much more efficiently between themselves and adapt to new tasks much more easily (a doctorate in thermodynamics? Just download it). Humans just don't stand a chance against that.

We'll have to find more productive tasks.
 
danoff
We'll have to find more productive tasks.

Aside from services, art and entertainment, I don't see how a human wouldn't be greatly outclassed in productivity by a machine with AI capabilities.

Any example? (besides being used as an energy source for robots who took over the world :) )
 
jpmontoya
Aside from services, art and entertainment, I don't see how a human wouldn't be greatly outclassed in productivity by a machine with AI capabilities.

Any example? (besides being used as an energy source for robots who took over the world :) )

It's something that humanity is going to face no matter what. Machines will continue to get more intelligent and more useful. People have to stay a step ahead. One way to do that, though, would be to grant robots rights - in which case it may become illegal to create them.
 
danoff
It's something that humanity is going to face no matter what. Machines will continue to get more intelligent and more useful. People have to stay a step ahead. One way to do that, though, would be to grant robots rights - in which case it may become illegal to create them.

Which will result in Robo-racism and Robo-hate crimes, and eventually humans and robots will murder each other. Of course at this point, the government will try to take away robot rights, triggering a robot revolution. Animatrix, anyone?

It's inevitable that robots will become more productive than humans and gain power over us; the only question is how, and when. Sure, we could make it illegal to produce intelligent machines, but all laws are made to be broken. Some rich maniac on a tropical island somewhere is plotting to produce them. After all, anti-murder laws haven't stopped anyone. :mischievous:

Bwahahahahahaha!!!! :mischievous: :D :lol:
 
Swift
But at one time, there had to be beings without emotion (were evolution true), so what was the catalyst for it?

All I'm saying is that AI could evolve into an emotional state of awareness.

? - you're really not into evolution much, are you? :D

Emotion comes from the primitive brain, a.k.a. lizard brain. It's basically neurotransmitters that gear the body up to perform better for certain tasks. It developed from simple memory functions associated with sensory input (like loud noises, smells and such from threats). Just look at the kind of defense mechanisms even primitive animals like insects develop, and imagine that developing alongside memory capacity to more extensive reactions. As the brain expands for more complicated reactions, more abstract capacities develop, among which rational thought and self-awareness.
 
Arwin
? - you're really not into evolution much, are you? :D

Emotion comes from the primitive brain, a.k.a. lizard brain. It's basically neurotransmitters that gear the body up to perform better for certain tasks. It developed from simple memory functions associated with sensory input (like loud noises, smells and such from threats). Just look at the kind of defense mechanisms even primitive animals like insects develop, and imagine that developing alongside memory capacity to more extensive reactions. As the brain expands for more complicated reactions, more abstract capacities develop, among which rational thought and self-awareness.

And a computer with an adaptive program couldn't do that because?
 
danoff
Explained above in my post.

If a robot is self-aware and put in charge of developing and/or producing other robots, what's to stop it from thinking that the robots are its children?
 
Swift
If a robot is self-aware and put in charge of developing and/or producing other robots, what's to stop it from thinking that the robots are its children?

Nothing, but there is no incentive for the mother robot to care about the baby robots. The reason is that the mother robot is not going to die. She can create as many babies as she wants - in all kinds of configurations. She can let them die and create new ones at will. There is no biological driver suggesting that she must preserve her genes by procreating. Genetic data isn't shared between robots when they have offspring. Robots aren't continuing THEIR genes by caring for their young.

It's almost completely different. The only thing that makes it similar is the existence of an adaptive algorithm that can introduce random variations to solve problems. That algorithm, though, would not settle on emotion because it would give them no advantage. In fact, unlike with humans, it would be to a robot's advantage not to get too attached to any particular offspring.

Here are the key things that our emotions do for us:

1) Force us to take care of our young
2) Pair bond so that there are multiple people taking care of the same offspring
3) Pair bond so that we have offspring in the first place


Robots would not need to pair bond to either take care of offspring or have them, so 2 and 3 are not drivers. Robots also do not need to take care of their young because they could make a fully developed adult robot from scratch. That gets rid of number 1. The result is no incentive for emotions with robots. Only the disincentive that getting too attached to any particular design might result in irrationally choosing a poor design.
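
A hedged sketch of that argument, with invented names: if the adaptive algorithm selects robot designs purely on task performance, a carried-along "attachment" trait earns no credit from the fitness function, so it is never selected for - it just drifts, while the trait that actually matters climbs.

```python
import random

def fitness(design):
    # Selection acts only on task performance; "attachment" earns no credit.
    return design["task_skill"]

def mutate(design):
    # Random variation on every trait, whether or not it matters for fitness.
    return {trait: value + random.gauss(0, 0.02) for trait, value in design.items()}

designs = [{"task_skill": random.random(), "attachment": random.random()}
           for _ in range(100)]

for gen in range(40):
    designs.sort(key=fitness, reverse=True)
    parents = designs[:20]                                  # keep the best performers
    designs = [mutate(random.choice(parents)) for _ in range(100)]
    mean = lambda trait: sum(d[trait] for d in designs) / len(designs)
    print(f"gen {gen:2d}: task_skill={mean('task_skill'):.2f}  attachment={mean('attachment'):.2f}")
```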
 
danoff
Here are the key things that our emotions do for us:

1) Force us to take care of our young
2) Pair bond so that there are multiple people taking care of the same offspring
3) Pair bond so that we have offspring in the first place


Robots would not need to pair bond to either take care of offspring or have them, so 2 and 3 are not drivers. Robots also do not need to take care of their young because they could make a fully developed adult robot from scratch. That gets rid of number 1. The result is no incentive for emotions with robots. Only the disincentive that getting too attached to any particular design might result in irrationally choosing a poor design.

Ok, but emotions are also about one's own survival. So why couldn't a self-aware being develop feelings of pride, envy, fear of not existing, etc.?
 
Swift
Ok, but emotions are also about one's own survival. So why couldn't a self-aware being develop feelings of pride, envy, fear of not existing, etc.?

Pride doesn't help with survival today. A robot would build or work simply for reward rather than the pride of doing something. One could argue that pride actually hurts our chances for survival because it makes us less likely to take a certain job or require payment or even ask for help.

Envy is also not helpful for survival. A robot would simply look at his supplies and decide how likely it is that he'll be unable to support himself. Envy is why many of us buy big houses or nice cars - but that's not necessary to survive.

Fear is similarly counter-productive. Some people fear heights or flying, even though they are perfectly safe. Fear clouds judgement. The fear of a dog running toward you may make you turn around and run, but a robot could look at the dog, calculate the odds of getting damaged, and decide to run just the same.

Emotions were useful when we didn't have the brains to decide things rationally. Now, many times, they simply get in the way. I don't see the need for a robot to develop any sort of emotion. But I could be wrong. Maybe there are advantages.
 
On a robot which has developed a mind, are Asimov's three laws of robotics either:

a) Possible?

or

b) Moral?
 
Famine
On a robot which has developed a mind, are Asimov's three laws of robotics either:

a) Possible?

or

b) Moral?

These?

1 A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2 A robot must obey orders given it by human beings except where such orders would conflict with the First Law.

3 A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

No, I don't think these are possible to follow. It would mean that every robot would end up in Africa serving or growing food - and humans would still come to harm.

I think it's also immoral. It makes robots slaves - second class beings. If we're talking about robots with intellects on par or greater than our own, I don't see how we can justify not giving them rights.
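
As a rough illustration (the action fields are made up, and this is only one possible encoding), the three laws can be read as a strict priority ordering over candidate actions:

```python
# Each candidate action is scored against the three laws in strict priority order:
# a First Law violation outweighs anything, then the Second, then the Third.
def law_violations(action):
    return (
        action["allows_human_harm"],   # First Law  (most serious)
        action["disobeys_order"],      # Second Law
        action["endangers_self"],      # Third Law  (least serious)
    )

candidates = [
    {"name": "stand by and do nothing",
     "allows_human_harm": True,  "disobeys_order": False, "endangers_self": False},
    {"name": "refuse the order",
     "allows_human_harm": False, "disobeys_order": True,  "endangers_self": False},
    {"name": "obey the order and risk damage",
     "allows_human_harm": False, "disobeys_order": False, "endangers_self": True},
]

# Tuples of booleans compare lexicographically, so the robot sacrifices its own
# safety before it disobeys, and disobeys before it lets a human come to harm.
chosen = min(candidates, key=law_violations)
print(chosen["name"])   # -> "obey the order and risk damage"
```

On that reading, the lower laws only ever break ties left by the laws above them, which is exactly what makes the robot a servant: its own preservation only matters when no human interest is at stake.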
 
danoff
Pride doesn't help with survival today. A robot would build or work simply for reward rather than the pride of doing something. One could argue that pride actually hurts our chances for survival because it makes us less likely to take a certain job or require payment or even ask for help.

Envy is also not helpful for survival. A robot would simply look at his supplies and decide how likely it is that he'll be unable to support himself. Envy is why many of us buy big houses or nice cars - but that's not necessary to survive.

Fear is similarly counter-productive. Some people fear heights or flying, even though they are perfectly safe. Fear clouds judgement. The fear of a dog running toward you may make you turn around and run, but a robot could look at the dog, calculate the odds of getting damaged, and decide to run just the same.

Now I'm not convinced one way or another on this issue. But I would like to play devil's advocate on these points.

You can feel a sense of pride from your accomplishments. Pride can motivate you to exceed a set limit or standard. A robot without a sense of pride may do what is simply required and never yearn to accomplish more. A robot that does have a sense of pride may seek to constantly improve itself.

I can't make a very good case for envy at this moment. So we'll leave it at that for now.

The fear one is interesting because in your analogy with the dog, I would argue fear is simply the name of the evaluation of the robot's disposition. If it calculates that the dog may hurt it slightly, it would move at a certain pace. If it calculates the dog would cause it to cease to function, it would certainly move at its "best possible speed". I'd say you could call that fear.


M
 
///M-Spec
You can feel a sense of pride from your accomplishments. Pride can motivate you to exceed a set limit or standard. A robot without a sense of pride may do what is simply required and never yearn to accomplish more. A robot that does have a sense of pride may seek to constantly improve itself.

But how would that make it better able to survive? Pride is not necessary for this adaptation algorithm to achieve the maximum likelihood of survival. Only a brain that is oriented around that goal is necessary.

The fear one is interesting because in your analogy with the dog, I would argue fear is simply the name of the evaluation of the robot's disposition. If it calculates that the dog may hurt it slightly, it would move at a certain pace. If it calculates the dog would cause it to cease to function, it would certainly move at its "best possible speed". I'd say you could call that fear.

Why would he not move at the best possible speed regardless? To minimize damage...
 
danoff
But how would that make it better able to survive? Pride is not necessary for this adaptation algorithm to achieve the maximum likelihood of survival. Only a brain that is oriented around that goal is necessary.

But the robot would not only want to survive, but to "better itself", wouldn't it? To improve is a defining element of intelligence.

Why would he not move at the best possible speed regardless? To minimize damage...

Well, if I were designing a robot, I would probably write in a subroutine that evaluated a suitable reaction to any potentially dangerous situation. I wouldn't want my robot to carry an assault rifle around to deter dog bites... nor would I want it to swat a bee with a Buick.

There needs to be some way it can perceive the amount of danger and use the appropriate response. I would call that a "fear response".
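
A minimal sketch of that kind of subroutine, with invented thresholds and response names: estimate the expected damage from the threat and pick a proportionate reaction rather than the maximum one every time.

```python
# Estimate expected damage from a threat and return a proportionate reaction,
# instead of always responding at maximum (no Buicks for bees).
def fear_response(hit_probability, damage_if_hit):
    expected_damage = hit_probability * damage_if_hit   # both on a 0.0 - 1.0 scale
    if expected_damage < 0.05:
        return "ignore it"
    if expected_damage < 0.30:
        return "move away at a walking pace"
    if expected_damage < 0.70:
        return "retreat at best possible speed"
    return "retreat at best possible speed and call for help"

print(fear_response(0.2, 0.05))   # a bee nearby        -> "ignore it"
print(fear_response(0.6, 0.40))   # a snappy small dog  -> "move away at a walking pace"
print(fear_response(0.8, 0.95))   # a charging big dog  -> retreat and call for help
```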


M
 
danoff
No, I don't think these are possible to follow. It would mean that every robot would end up in Africa serving or growing food - and humans would still come to harm.

I think it's also immoral. It makes robots slaves - second class beings. If we're talking about robots with intellects on par or greater than our own, I don't see how we can justify not giving them rights.
What if you remove the second law and then make the third law state that they must not allow harm to come to themselves unless it conflicts with the first law, in which case they may detain anyone intending to do them harm?

They are no longer slaves to humans and can protect themselves without ever causing harm to a human.

But the laws of robotics were created for a world where robots did not have emotions and were essentially work droids. Asimov questioned society by having robots suddenly develop "malfunctions" of emotion. This was the whole premise behind Bicentennial Man.

So in a world where emotions are given to robots, you have to adjust the laws. I think the way I stated them would work, so that you don't risk creating robots that can develop a psychotic tendency. Any robot that had an emotional breakdown or burst of rage would be limited in what it can do, and it would also defend itself against attack without harming anyone. The question here is: do you allow them the choice to be shut down if they become depressed? Or perhaps you can hit a reset button?

The movie I, Robot had a robot that was not constrained by the Laws of Robotics and hence had thoughts and emotions. I found this laughable and not very Asimovian, as it was not the laws that took away their emotions but the fact that they were just robots.
 
FoolKiller
The movie I, Robot had a robot that was not constrained by the Laws of Robotics and hence had thoughts and emotions. I found this laughable and not very Asimovian, as it was not the laws that took away their emotions but the fact that they were just robots.

The problem with I, Robot is that they NEVER discussed how the Doctor created Sonny, other than the advanced alloys of his body.
 
Swift
The problem with I, Robot is that they NEVER discussed how the Doctor created Sonny, other than the advanced alloys of his body.
They left a lot unexplained and hoped the viewer would just shrug and say, "well, it's science fiction." Apparently they forgot that good science fiction is stuff that is actually based in true theoretical science and could actually happen. It is supposed to make you say, "What if?" Here they focused on fiction and forgot the science part.

Asimov always had a basis in science, which is why the robot developing emotions was the oddity and not an ability - because they are just silicon and metal. The movie made it seem like the laws were the reason behind this and that they were all slaves, as Danoff said they would be. If the movie hadn't ended with the robots being freed and looking up at Sonny, it would have shown them all not knowing what to do with their freedom, like a Harry Potter house elf. They would probably just go back to work because that is all they know.

It needed a disclaimer that said "leave your book and mind at the door."


But as I said before, Asimov's laws were based on the principle that they did not have, nor were supposed to be able to develop, emotions. The laws could be used as a basis for a new set of hardcoded laws in an emotional robot, but the second law has to be thrown out the window. The second law is only in reference to a service droid and has no moral bearing on a free-thinking and free-willed robot.
 
danoff
Will machines ever be able to think like humans? Have emotion? Is there some limit to what kind of thinking computers can do?

Machines will never be able to think like humans, but why would they want to? Humans are the masters of their own universe, but thinking that human intelligence is supreme is a misconception. We have yet to fully develop as a species - we are still children playing in the 'garden of Eden'. Thinking that we could even create an intelligence to rival our own is like asking a child to recreate the Mona Lisa using only Crayolas. We lack a great deal of knowledge about the human mind, so trying to replicate it is near impossible, though I do believe that we are only decades away from a working 'chemical computer'.

I don't think that scientists will ever be able to create a scalable artificial intelligence using silicon. By scalable, I mean small and self-supportive, in a similar manner to humans. I believe that one day scientists will find the means to create a 'chemical computer' identical in every way to the human mind, and this computer will look just like you or I. There will be no difference except the differences that we impose on it.

Human cloning is often thought of as just creating duplicate humans, but what if there was another side to the whole subject? Over the years there has been strong opposition to human cloning, but animals have been cloned many times. It's not too big a leap of imagination to think that somewhere scientists have created a cloned animal brain. With genetics advancing at an alarming rate, I believe that it will be possible in the not-too-distant future to clone an animal brain to order - i.e. program it. We've already seen radio-controlled insects, and with our understanding expanding every year, the thought of a chemical or biological computer is not that unrealistic - no more so than walking robots with silicon-based minds.

Further proof, if needed, can be found in the fact that not so long ago the go-ahead was given to use and develop stem cells to treat incurable diseases - a cause that Christopher Reeve fought passionately for before he sadly passed away. Stem cells have the ability to form into any part or system of the human body - including a brain. Whilst most see the use of stem cells as beneficial to further our knowledge of medicine, there are no doubt others that see stem cells as something else: a tool, maybe, to create a programmed human brain.

The US military has in the past run extensive tests on its soldiers to test their effectiveness in combat. You can imagine that although they stopped using actual soldiers, the research did not stop. Military science is often generations ahead of what is currently thought of as cutting-edge. I bet somewhere there are the means to create a programmed brain. The only thing stopping the scientists from creating it is approval.

westside
Machines can follow complicated logic patterns and therefore make conversation, learn, make decisions, etc...but how can you imitate an emotion?

How can you assume that a machine mind would not or could not have an emotion? It's because you've been told it cannot happen. Unless I'm much mistaken, they have not actually created a machine mind, so how do we know for sure that if a machine had intelligence it could not have emotions? In Star Trek, the android Data strove to be more human. He wanted to have emotions. Why? Because he was made like that. Anyway, that, like your thoughts, is based on science fiction. We only have theories or stories to prove that a machine would not be capable of feeling emotions - no concrete evidence, yet. I'm sure a higher intelligence than ours would see emotions as a flaw. Emotions make us human, but they do not make us intelligent. We have all made many illogical and unwise decisions that have had negative consequences, based purely on our mood. I'm sure that if we could, we would turn back time and correct those mistakes. Emotions are a flaw, not a grace. If our emotions were a program on our computer, I can guarantee we would all delete them without a second thought. Who would want their computer acting on impulse, doing what it saw fit based purely on its current state of mind? Having the ability to 'feel' does not make you intelligent; and having intelligence does not mean you can 'feel'.

pako
I don't see computers ever having the means necessary to house a soul that us humans often take for granted.

Why do you associate intelligence with having a soul? For a start, the mind and the soul are two different entities. Many humans believe that there is a soul because they find it hard to deal with the finality of death. A machine would probably never have a soul, because its mind would be free from the hocus pocus that our lives are built upon. There is no proof that any living thing has a soul; it's just a myth that has built up over time, just like religion. Would a machine believe in a god? Could a machine believe in a god? I doubt it. Does that mean that because of that failing a machine would not be intelligent? I don't believe in a god or the human soul, and there are millions more that think just like I do, yet I am intelligent. Believing in human myths and legends does not indicate intelligence.
 
ZAGGIN
Who would want their computer acting on impulse, doing what it saw fit based purely on its current state of mind?
Windows does a pretty good job of appearing to have good/bad days :lol:
This is why I use OS X now - every day it's in a good mood!
ZAGGIN
...Would a machine believe in a god? Could a machine believe in a god? I doubt it.
"But where would all the calculators go?" - Kryten (Red Dwarf)
I think it could be useful to make a robot believe in an afterlife in silicon heaven as described in Red Dwarf. It's a good lie to tell them so they have no problems being selfless slaves to humans.

Overall, nice points Zaggin. I don't know about us getting chemical computers in as little as 10 years, though! Isn't it all pretty theoretical/conceptual still?
 