Artificial Intelligence

  • Thread starter Danoff
  • 88 comments
  • 2,396 views

Danoff

Premium
34,110
United States
Mile High City
Will machines ever be able to think like humans? Have emotion? Is there some limit to what kind of thinking computers can do?
 
I can't really imagine a machine having emotions, since emotions have a lot to do with hormones and chemicals in the brain (serotonin is the only one I can name). Machines can follow complicated logic patterns and therefore make conversation, learn, make decisions, etc... but how can you imitate an emotion?
 
Computers do what you tell them to do; therefore, they are limited by what we can tell them to do. Computers and robots will be the greatest tool yet for understanding our own brains.
 
I can't really imagine a machine having emotions, since emotions have a lot to do with hormones and chemicals in the brain (serotonin is the only one I can name). Machines can follow complicated logic patterns and therefore make conversation, learn, make decisions, etc... but how can you imitate an emotion?

One could argue that all your emotions are just responses to the chemicals in your brain. In that sense, couldn't a machine have an emotional response?
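To make that concrete, here's a toy sketch (the names and numbers are entirely made up, it isn't a claim about real brain chemistry): one variable stands in for a mood chemical, it rises and falls with stimuli, and the machine's behaviour follows from its level.

# Toy sketch only - "serotonin" here is just a number I invented, not real neuroscience.
class Machine:
    def __init__(self):
        self.serotonin = 0.5                      # made-up stand-in for a mood chemical, 0..1

    def stimulus(self, event):
        # the "chemical" level shifts in response to what happens to the machine
        if event == "praise":
            self.serotonin = min(1.0, self.serotonin + 0.2)
        elif event == "insult":
            self.serotonin = max(0.0, self.serotonin - 0.2)

    def mood(self):
        # behaviour follows from the chemical level, the way the argument above suggests
        if self.serotonin > 0.7:
            return "happy"
        if self.serotonin < 0.3:
            return "sad"
        return "neutral"

m = Machine()
m.stimulus("praise")
m.stimulus("praise")
print(m.mood())  # prints "happy"

Obviously that isn't an emotion any more than a thermostat is cold, but it's the same shape of thing: an internal state that responds to stimuli and changes behaviour.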
 
Computers do what you tell them to do; therefore, they are limited by what we can tell them to do. Computers and robots will be the greatest tool yet for understanding our own brains.

Computers do what you tell them to do... what if you tell them to do something vague, like... think.

Consider the following: what if we set up a computer with exactly the same logic structure as our brain, with the same responses to stimuli - a logical copy of a human brain?

Would it be alive?
 
I don't see computers ever having the means necessary to house a soul that we humans often take for granted.

Logical computations will be and are well within their reach, but emotional reasoning, humor, love, anger, embarrassment, etc... I think would be beyond the scope of man's technology, even in the distant future.

Now what if you take the mind, the actual consciousness of a human, and house it in a machine? Then I think it would be possible for all those things to happen.
 
If computers are ever synthesized into machines that learn and develop on their own, it would essentially put the human in a (semi) divine place. We will have made beings who somehow understand that they "are". And they will "be" solely at our whim. But this is messy, and I doubt anyone would want to deal with it. It's much more expedient and safe to create machines that appear to have emotions. There are already chat-bots that can carry on a decent conversation. Now imagine that same AI in a human-looking robot. Of course, by the time a human-looking robot is plausible this AI will be much more advanced and very difficult to crack. Actual emotions, that is, independent reactions to extraneous variables which, over time, develop into a personality based on experience, are unlikely. Human emotions are also very wrapped up in mortality. There is no sense in creating robots who are "afraid" to die.
 
I don't see computers ever having the means necessary to house a soul that we humans often take for granted.

Pako just reached the heart of the matter.

When one claims that a computer would just be going through the motions of an emotion, how is that different from what we do? Aren't we just reacting to our chemical stimuli? If we program a computer to do that, isn't it the same thing?

Computers can already be taught to learn; what if in the future they can be taught to learn well? Could the result be indistinguishable from humans?
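For what it's worth, here's a minimal sketch of what "taught to learn" means in practice - a one-neuron perceptron (the example data and numbers are just an illustration, not any particular system). Nobody writes the AND rule into it; it finds the rule by being corrected:

# A machine "learning" from examples instead of being given the rule.
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]  # the logical AND function

w = [0.0, 0.0]   # weights, start off knowing nothing
b = 0.0          # bias
rate = 0.1       # how hard each correction pushes

for _ in range(20):                              # a few passes over the examples
    for (x1, x2), target in examples:
        out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        error = target - out                     # how wrong was the guess?
        w[0] += rate * error * x1                # nudge the weights toward the right answer
        w[1] += rate * error * x2
        b += rate * error

print([1 if w[0] * x1 + w[1] * x2 + b > 0 else 0 for (x1, x2), _ in examples])
# prints [0, 0, 0, 1] - it has "learned" AND without ever being told the rule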

Pako would claim no, because he is religious and believes in the soul. Which, of course, by definition, we cannot give a machine. So the case is open and shut for him. For me it is not quite so easy. I don't believe in the soul. So I have to stop and ponder the idea that maybe if we could copy the logical patterns of the human brain complete with all of the chemical (emotional) responses to certain stimuli, it would be as human as we are.


There is no sense in creating robots who are "afraid" to die.

But what if we did anyway? To see how human we can make a piece of computer code.
 
Now what if you take the mind, the actual consciousness of a human, and house it in a machine

You mean like a brain inside a robot exterior? I would have to agree with your conclusion that it would be as human as we are.
 
There are two schools of thought in AI. Those that believe in Hard AI and those that believe in Soft AI.

People who believe in hard AI believe that someday humans could create a piece of computer code or a robot that is as human as we are. Those who believe in soft AI believe that our self-awareness and emotions come from something intangible that we cannot boil down to logic.

- That's not to say we are all logical beings, but that our behavior may not be capable of being reproduced perfectly.
 
One could argue that all your emotions are just responses to the chemicals in your brain. In that sense, couldn't a machine have an emotional response?

Although emotions may be quantified as brain chemicals, I don't think they can be written as computer code. In order for a machine to truly act human, it would have to have some kind of physical brain, with the same or similar brain chemicals that we have. But I don't see scientists manufacturing machine brains because, as milefile said, that would get "messy". The religious conservatives would probably object, as they do to stem cell research, arguing that we shouldn't "play God". So I think artificial intelligence will be limited to software, which limits it to logic and decision making and no emotions.
 
Hmm... This is a very good topic, extremely interesting. As I see it, the problem is not that we don't have the skills to program a computer. The problem is the concept of awareness.

Today we have quite a lot of knowledge about how we function as humans: emotions, goal-orientation, cognition and so on. But there is one big problem, and it is that we are individuals who cannot be described by a theory.

Behaviourism, cognitive behaviourism, psychodynamic theory, existentialism, socioculturalism, humanism and so on. They are all right in their own way, but there is no model today that can even come close to giving us "the big picture" of the human mind.

From what I understand, scientists who research the brain today are very far from explaining the concept of awareness. They have no idea where it is located or how it is created (the "why" maybe belongs in another topic). And when they talk about behaviour and emotions, e.g. chemicals in the brain, they talk about correlation, not causality.

I think that the human mind/soul is far bigger than we generally think. It goes beyond the three dimensions and therefore cannot be imitated at all in the way we imagine when we talk about AI.

/D
 
They can build computers that can mimic facial expressions to give the illusion of having feelings. Dogs do the same thing; according to some scientists, they don't have true feelings.

Will they ever build a computer that can have true feelings? Sure, maybe a thousand years from now. If modern-day man reaches that far, that is. :D
 
Today we have quite a lot of knowledge about how we function as humans: emotions, goal-orientation, cognition and so on. But there is one big problem, and it is that we are individuals who cannot be described by a theory.

The human brain is just a collection of particles, right? All we have to do is map and recreate that organization of particles and we should have another human brain. I don't see why one can't eventually be engineered.
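As a very rough sketch of that "copy the organization" idea (nowhere near the real scale - a brain has tens of billions of neurons, and these weights are hand-picked for the example), you can model each neuron as a unit that fires when its weighted inputs cross a threshold, then wire a few of them together:

def neuron(inputs, weights, threshold):
    # fires (returns 1) when the weighted sum of its inputs crosses the threshold
    return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

def tiny_brain(x1, x2):
    # two "sensory" inputs feed two intermediate units, which feed one output unit
    h1 = neuron([x1, x2], [1, 1], 1)     # fires if either input is active (OR)
    h2 = neuron([x1, x2], [1, 1], 2)     # fires only if both are active (AND)
    return neuron([h1, h2], [1, -1], 1)  # fires for "one but not both" (XOR)

print([tiny_brain(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])  # prints [0, 1, 1, 0]

Whether stacking up enough of those ever adds up to awareness is exactly the question in this thread.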

The whole thing leaves me with a big dilemma. On the one hand, I don't believe in the human soul, so I feel that we should be able to engineer a human brain, possibly even in computer code. But what does that make me and my consciousness other than a big string of logic and programmed reactions?

Somehow I feel like I'm more than a machine, but I can't seem to prove it.

But it doesn't make any sense for me to feel like a machine. Why should I feel that way? It's more productive if I feel like a unique individual who is capable of overcoming his hardware. It makes me more ambitious and therefore more productive. It makes me care more about my life... the result of which is a higher probability of procreation.

So maybe I am nothing but a collection of pre-programmed emotional responses to stimuli, but at least my hardware (brain) can process logical structure... which should be all I need to accomplish some useful engineering.

I guess what I'm getting at here is that if human emotions are nothing but chemical interactions that boil down to response to different stimuli, how can they really have meaning? And isn't that the core of the meaning we get out of our lives?
 
Our brains are computers... nothing more, nothing less... I believe it would be possible (in the far distant future) to make a machine think like a human... or a dog... or a cow... all we have to do is copy the brain and give it the opportunity to receive the same impulses we do. In a sense... give the machine a "soul". It doesn't seem logical at all... but if you see our brains as just complex computers... it seems to make sense. Think about it... our eyes are the video card, our vital organs are the power supply, and our nerves are the wires connecting them all :)
 
Originally posted by Red Eye Racer
Our brains are computers... nothing more, nothing less... I believe it would be possible (in the far distant future) to make a machine think like a human... or a dog... or a cow... all we have to do is copy the brain and give it the opportunity to receive the same impulses we do. In a sense... give the machine a "soul". It doesn't seem logical at all... but if you see our brains as just complex computers... it seems to make sense. Think about it... our eyes are the video card, our vital organs are the power supply, and our nerves are the wires connecting them all :)
Computers try to mimic the brain. But I'm not sure it's accurate to say the brain is a computer. "Computer" is a very literal name. All they do is compute. It seems that our brain does more than that. Many human problems would not happen if our brains were computers. But much more would be lost.
 
Originally posted by milefile
Computers try to mimic the brain. But I'm not sure it's accurate to say the brain is a computer. "Computer" is a very literal name. All they do is compute. It seems that our brain does more than that. Many human problems would not happen if our brains were computers. But much more would be lost.

Like they say... we only use what? 14% of it? If my computer were running at 14% efficiency it would have some problems too ;)
 
Originally posted by Red Eye Racer
Like they say... we only use what? 14% of it? If my computer were running at 14% efficiency it would have some problems too ;)
This is one of the few totally obvious statements, ever, that actually made me start to re-think something. You've got a good point.
 
Let's say hypothetically that we were able to create a robot that has the ability to solve problems and is aware of its own existence. It is aware of its mortality (in that it can be destroyed), it makes choices, and it is creative.

If we did that, would the robot have rights? If we created a robot capable of doing all of this would it be unethical to kill it? Would it be unethical to perform tests on it? Can it own property? Enter into a contract?

My thought would be yes. That it has rights. That the robot should be free, that it could own property or enter into a contract.

But I wonder what the religious folks around here think... If it doesn't have a soul, doesn't that mean we can kill it?
 
danoff
Let's say hypothetically that we were able to create a robot that has the ability to solve problems and is aware of its own existence. It is aware of its mortality (in that it can be destroyed), it makes choices, and it is creative.

If we did that, would the robot have rights? If we created a robot capable of doing all of this would it be unethical to kill it? Would it be unethical to perform tests on it? Can it own property? Enter into a contract?

My thought would be yes. That it has rights. That the robot should be free, that it could own property or enter into a contract.

But I wonder what the religious folks around here think... If it doesn't have a soul, doesn't that mean we can kill it?

That's a scary thought, Dan. What's more scary is that we're not too far from that now.

How do I feel about it? Well, that would be exactly the same as saying a clone should have rights, etc. So, I think no.

I do believe that man has the intelligence to get to that point, but I hope we never do.

Machines that are self-aware are a scary thing, mainly because most machines that we create are more powerful than humans. I know I sound like a movie, but we all know that if we do get to that point it's going to be a mixture of I, Robot, Terminator and Star Wars (droids) :)
 
Swift
That's a scary thought, Dan. What's more scary is that we're not too far from that now.

How do I feel about it? Well, that would be exactly the same as saying a clone should have rights, etc. So, I think no.

I do believe that man has the intelligence to get to that point, but I hope we never do.

Machines that are self-aware are a scary thing, mainly because most machines that we create are more powerful than humans. I know I sound like a movie, but we all know that if we do get to that point it's going to be a mixture of I, Robot, Terminator and Star Wars (droids) :)

Why does man have this self-destructive, insatiable desire to make AI and in effect endanger his own existence?
 
TwinTurboJay
Why does man have this self-destructive, insatiable desire to make AI and in effect endanger his own existence?

Excellent question.
 
danoff
Let's say hypothetically that we were able to create a robot that has the ability to solve problems and is aware of its own existence. It is aware of its mortality (in that it can be destroyed), it makes choices, and it is creative.

If we did that, would the robot have rights? If we created a robot capable of doing all of this would it be unethical to kill it? Would it be unethical to perform tests on it? Can it own property? Enter into a contract?

My thought would be yes. That it has rights. That the robot should be free, that it could own property or enter into a contract.

But I wonder what the religious folks around here think... If it doesn't have a soul, doesn't that mean we can kill it?
I am assuming you expect it would also fear its own mortality, because knowledge of mortality without fear is meaningless.

I think that if we created robots to the degree that they were considered alive and able to have rights, any intelligent manufacturer would stop at that point. What is the point of a robot if it is a free citizen? Maybe creating one or two would make sense for the sake of science, but beyond that it would be pointless. The cost of the unit plus having to pay it a wage would make it too expensive to be useful. Combine that with fear of its own mortality and suddenly the benefit of using robots to do things that endanger human lives, such as exploring volcanic calderas or planetary surfaces, is no longer there.

So by creating it to that degree, it would eventually have to be given rights and property ownership. When that happened, I believe no one would make any more like that, because we don't want slaves, we want service droids. Why build more people to take jobs and property, who will create unions and be just another pain for employers and politicians?

That then brings about the question: can't they just build more of themselves? They would, and they would live longer because they don't have "life" the way a biological creature does. If one breaks it gets repaired or upgraded; it can even transfer its memories to be moved to a new body. In the end, even if they didn't overthrow humans in a Hollywood-style movie scenario, they would do it economically, because they could earn money and invest it and live for much longer than we would. Eventually they would control the markets and humans would all become relatively poor. It sounds crazy, but it seems more plausible than Terminator.
TwinTurboJay
Why does man have this self-destructive, insatiable desire to make AI and in effect endanger his own existence?
Because we all have this tiny God complex, even if subconsciously, and a desire to create life even if it means our own destruction. Fortunately that same complex would lead someone to help prevent that destruction in order to become a hero and better than the creator of the AI.
 
Swift
That's a scary thought, Dan. What's more scary is that we're not too far from that now.

How do I feel about it? Well, that would be exactly the same as saying a clone should have rights, etc. So, I think no.

Why not?
 
Swift
...I was just thinking of that episode too.

Ha! Me, too! It was an especially outstanding one.

BTW, this question has been pondered for a long time. Probably the first dramatic presentation that talked about it was the 1959 original TV "movie", Murder and the Android:

http://www.imdb.com/title/tt0052512/guests

Rip Torn played the android, and the central point of the story was its "humanity". I'd say we're no closer to arriving at some sort of conclusion than they were back then.

BTW, don't kid yourself that we're "close" to developing AI that will rival ours. We're hundreds of years away from that. We haven't even scratched the surface of how our brains work yet, so don't think we'll see AI of any real capability in our lifetimes.
 