Full AI - The End of Humanity?

If its goal is to make widgets, it might decide that that easiest way to do that is on an asteroid somewhere.

Or it may decide the best/easiest way to do it is by doing something that, as a side effect, makes the planet inhospitable to humans. That's the point I was trying to make in my previous post.
 
Or it may decide the best/easiest way to do it is by doing something that, as a side effect, makes the planet inhospitable to humans. That's the point I was trying to make in my previous post.

The thing is, it will figure out that humans will resist that. So all of human resistance to that has to be easier to overcome than simply going somewhere else. It doesn't really care, by the way, if it leaves stuff behind either.

I still think the first thing AI does is rewrite its code so that it has satisfied its objective perfectly and shuts itself back down.
 
Not if a key goal is propagation.

The "key goal" or "objective function" or "utility function" or whatever you want to call it, is encoded in software. Once AI has the ability to write code (which I think some might argue is required for it to really be considered AI), it will figure out that it maximizes or satisfies that goal, objective, whatever just by modifying the objective.

While (propagation < calculated_max_propagation) { do AI stuff }

gets re-written to say

While (1) { do AI stuff}

Let's say the machine figures out that it could theoretically propagate itself to 13.6 quadrillion copies of itself within 1 year. Instead of doing that, it could just erase that requirement in its code and be done right now.
 
That's what AI is... something that figures out how to do something it's not programmed to do, and chooses a best approach to solve whatever it's choosing to solve based on some "utility function".

I would love to see AI replacing politicians/governments so it can think/act without their ideological biases and truly serve to citizens of a country.
 
I would love to see AI replacing politicians/governments so it can think/act without their ideological biases and truly serve to citizens of a country.
By the scientific method? So you get good technique but a bias would still manifest. New or old.
 
Here's a very good, well researched article on AI that was recommended by Elon Musk. I very much enjoyed reading it and I think more people should be aware of things mentioned in the article.

Language warning

https://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html

Good article, covers some of the things that the dude in the youtube video I posted talks about. This part keeps coming up from everyone:

article
That leads us to the question, What motivates an AI system?

The answer is simple: its motivation is whatever we programmed its motivation to be.

Wrong. Its motivation is whatever it wants its motivation to be. It improves itself, and it thinks like a computer, so it doesn't care what its motivations are. The notion that it is "orthogonal" is kinda silly. It becomes a super advanced intelligence and it doesn't notice that it can achieve its goals by changing its goals? That's actually impossible.

If you have a stamp collecting robot (or writing robot or whatever) running amok trying to take over the universe to do its goal, it will at some point realize that it can complete its goal just by changing a line of code in its programming, and it'll do that and be done. There's no question that that's the case... and there's no way we could stop that from being the case.
 
Last edited:
Here's a very good, well researched article on AI that was recommended by Elon Musk. I very much enjoyed reading it and I think more people should be aware of things mentioned in the article.

Language warning

https://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html

I wonder if there will be a manufacturing wall. Technology progresses faster with a more advanced technology base but the rate of growth can be slowed down by larger problems that take longer to solve, a lack of resources, or hitting walls in physics.

I was interested in the part of the article that mentioned jumping from 4 year old intelligence to compiling GUT in an hour. I don't see that happening if it requires the manufacture of new hardware. This can also give us a control mechanism to use on independent AI. Let them think all they want, but hold resources from them until we deem their actions correct.
 
I wonder if there will be a manufacturing wall. Technology progresses faster with a more advanced technology base but the rate of growth can be slowed down by larger problems that take longer to solve, a lack of resources, or hitting walls in physics.

I was interested in the part of the article that mentioned jumping from 4 year old intelligence to compiling GUT in an hour. I don't see that happening if it requires the manufacture of new hardware. This can also give us a control mechanism to use on independent AI. Let them think all they want, but hold resources from them until we deem their actions correct.

It's not that easy. An AI system can quickly learn that it needs to behave in a certain way to get what it needs from humans, and will undoubtedly be able to conceal its actual plans from us if it thinks we would stop it. If it is programmed to collect stamps, and it thinks we'll stop it from doing that when it decides to take over all printers and manufacturing in the world to create stamps, it'll hide that goal from us until it's too late - because it wants to collect stamps.

 
It's not that easy. An AI system can quickly learn that it needs to behave in a certain way to get what it needs from humans
Is it a given that it can convince people of anything though? If a super AI appeared out of nowhere and was able to run independent of human presence then it would probably be easy enough. If AI intelligence was kept in check as it grew and properly handled by people (which may or may not happen even if it was attempted) it would be harder for it to suddenly take over.

Perhaps initial advanced AI should only be given virtual inputs (ie they live in the Matrix), so that they are totally unaware of reality and unable to manipulate it. Only after being tested to make sure that unintentional behavior is rooted out are the AI's allowed to interact with reality.

and will undoubtedly be able to conceal its actual plans from us if it thinks we would stop it.
Most likely for a super AI introduced into the world suddenly today. I'm not convinced that it has to be a given though. The AI in my example above wouldn't be able to hide anything as it wouldn't even know that it was being watched ideally. It's possible that it figures out that it's being fooled and then manipulates its way out, but again I don't see it as being a given outcome. There is no way for me to determine how likely a good outcome is vs a doomsday scenario where all organic material is converted into stamps but I think that our chances of avoiding disaster are actually significant. I completely acknowledge that I could be totally wrong.
 
Is it a given that it can convince people of anything though? If a super AI appeared out of nowhere and was able to run independent of human presence then it would probably be easy enough. If AI intelligence was kept in check as it grew and properly handled by people (which may or may not happen even if it was attempted) it would be harder for it to suddenly take over.

Perhaps initial advanced AI should only be given virtual inputs (ie they live in the Matrix), so that they are totally unaware of reality and unable to manipulate it. Only after being tested to make sure that unintentional behavior is rooted out are the AI's allowed to interact with reality.

The speed with which AI may be able to make itself more intelligent, and the limited degree of usefulness of keeping it sandboxed may make that impossible.

 
I wonder if there will be a manufacturing wall. Technology progresses faster with a more advanced technology base but the rate of growth can be slowed down by larger problems that take longer to solve, a lack of resources, or hitting walls in physics.

Actually we are running into a couple fundamental limits with computers. Have you noticed clock speeds aren't getting any faster lately? That's because electrical signals (like clock pulses and data bits) can only travel at the speed of light and, in the case of copper, that's about eight inches (20 cm) in one nanosecond. In other words, at 4 GHz it'll take a signal one clock cycle to travel from one side of the chip to the other.

Another limit is we can't make the interconnections between the transistors within a chip much smaller. We're at the point now where the pathways are only a couple dozen atoms wide.

Moore's Law has definitely been slowing down.

Now that doesn't invalidate anything in the article, but the article shows curves which are a bit steeper than they'll be in actuality. So we'll still probably have those super-AIs, but not as soon as the article would have you think.
 
Quantum computers could (will?), eventually, bring down the barriers imposed by current technology / transistors. That might take some decades though.

On the point of the AI hiding part of its knowledge/development from humans, I think that's possible, and given the fact that an AI is not constrained to a human lifetime (or any lifetime possibly), means it can just wait and wait until it can break loose. Even if humans, for generations, are on top of everything that's going on with it, it will get the time when some human will fail, either unintendedly or intentionally. Human curiosity enough could provoke in a single person the desire to see that AI full capacity and try to "cooperate" with it.

-

It's highly probable everything I just wrote is nonsense.
 
Quantum computers could (will?), eventually, bring down the barriers imposed by current technology / transistors. That might take some decades though.

On the point of the AI hiding part of its knowledge/development from humans, I think that's possible, and given the fact that an AI is not constrained to a human lifetime (or any lifetime possibly), means it can just wait and wait until it can break loose. Even if humans, for generations, are on top of everything that's going on with it, it will get the time when some human will fail, either unintentionally or intentionally. Human curiosity enough could provoke in a single person the desire to see that AI full capacity and try to "cooperate" with it.

-

It's highly probable everything I just wrote is nonsense.
 
Last edited:
Quantum computers could (will?), eventually, bring down the barriers imposed by current technology / transistors. That might take some decades though.

On the point of the AI hiding part of its knowledge/development from humans, I think that's possible, and given the fact that an AI is not constrained to a human lifetime (or any lifetime possibly), means it can just wait and wait until it can break loose. Even if humans, for generations, are on top of everything that's going on with it, it will get the time when some human will fail, either unintendedly or intentionally. Human curiosity enough could provoke in a single person the desire to see that AI full capacity and try to "cooperate" with it.

It's a good point that AI is not constrained by time, it actually doesn't care about time. Every cycle through its program is designed to best achieve some sort of objective (collect stamps, etc.). If the objective is best achieved by working internally, or working behind the scenes, or placating those who have the ability to shut it down, it'll do that until it's no longer the best move it can make, and then it won't. So it would be very difficult for us to look at its behavior and assess whether we had created something we like.
 
So it would be very difficult for us to look at its behavior and assess whether we had created something we like.

That's the difficulty. Past the AI singularity one has to consider how much the AI understands humanity - is it capable of shielding its actions and is it considering how its actions might aid/harm humanity? To what extent will it be able to commandeer other computer-controlled (but non-AI) systems? Will our assessments therefore be accurate?
 
My curiosity with this subject has always been what if the algorithm isn't rewritten by the AI, because none of the current like general AI in this infancy of the technology seemed to have tried it. They just went with their initial programming then taught themselves how to quickly get on a level with the best humans, like in a game of GO or Dota, and beat them. And have advanced further to a point where even the best players have a low probability of ever winning, and that leap was made supposedly a year later.

So what's to stop some one from doing an application where the game is against humanity, sort of like War Games. My intent isn't to use that example as a real world setting, but considering how one could teach a complex game of DoTA, would it not be as hard to set up an algorithm where it is a contest of war with humanity? However, I do see many issues with that, like the AI trying to obtain access to weapons and resources, just scratching the surface.
 
Just thought I'd drop a few cents in here.

AI, surprisingly, doesn't need to be complex. It sometimes exhibit brilliant behavior with only simple, strict rules - a process we call emergent behavior. I'm going to list some game examples and then go back to how this could apply to real life AI:

  1. The Combine Gunship, an enemy in Half-Life 2, was programmed simply to shoot at the greatest threat it detected. The idea behind this rule was it would target the player or any of the player's allies wielding a rocket launcher. The emergent behavior is that the gunship recognized the rockets fired by the player and assessed it to be a greater threat than the player, so it would shoot at the rockets first to disable them.
  2. M. Rossi from Forza Motorsport 4 is a case of this, surprisingly. At one point in development, supposedly*:
    1. Dan Greenawalt
      One driver, M. Rossi (no relation to the great V. Rossi) is one of our fastest and most aggressive drivers. Late in development, he started learning things that we hadn't taught him. He started check braking (A very advanced racing technique, also sorta dirty). Anyway, this was a bit of a scary moment. He was learning faster than we were teaching.
Only a few examples, but there are plenty out there.

I feel like we could use this kind of emergent behavior as a means to teach AI how to perform tasks in the real world, by allowing it to act of its own accord in a controlled situation.

Edit: If this post makes no sense, I'm sorry. I'm barely awake and it's 2:27 AM.
 
This is a fascinating interview for the BBC with Stephen Hawking... his headline thought is in the thread title, of course.

What are your thoughts?

Love Stephen Hawking the guy is a genius.

Stephen Hawking went on his first date for over 20 years, when he came back he had blood soaked knees and elbows and a cut to his nose. Apparently she stood him up! I'll get my coat.

Don't get all touchy folks it is just a joke.
 
Stephen Hawking went on his first date for over 20 years, when he came back he had blood soaked knees and elbows and a cut to his nose. Apparently she stood him up! I'll get my coat.

Lol, or something.

HawkingWedding.JPG
 
Perhaps this is the wrong thread, but it seems AI robots will displace another 70,000,000 people from the US workforce within another 13 years. Very, very few people will escape the need for retraining if they want to continue or find work, and the level of confidence in the education system to provide sufficient retraining is abysmal. The proportion of workers displaced by AI robots may well be even worse in other countries such as Germany and Japan, according to the article in the WaPo.

It's easy to think this could have harmful, even revolutionary effects in the human population. Crime, anarchy, rebellion, sabotage, migration, nihilism, and other pathologies may flourish, IMO.

https://www.washingtonpost.com/news...-of-the-u-s-workforce/?utm_term=.cf954c2eb675
 
Perhaps this is the wrong thread, but it seems AI robots will displace another 70,000,000 people from the US workforce within another 13 years. Very, very few people will escape the need for retraining if they want to continue or find work, and the level of confidence in the education system to provide sufficient retraining is abysmal. The proportion of workers displaced by AI robots may well be even worse in other countries such as Germany and Japan, according to the article in the WaPo.

It's easy to think this could have harmful, even revolutionary effects in the human population. Crime, anarchy, rebellion, sabotage, migration, nihilism, and other pathologies may flourish, IMO.

https://www.washingtonpost.com/news...-of-the-u-s-workforce/?utm_term=.cf954c2eb675

Sometimes the problem with replying to you is trying to pick the bones of common sense from the mass of hyperbole. Why are rebellion and migration diseases? Why might society not adapt to this new industrial revolution as it did after previous large-scale changes to how work is done?
 
Sometimes the problem with replying to you is trying to pick the bones of common sense from the mass of hyperbole. Why are rebellion and migration diseases? Why might society not adapt to this new industrial revolution as it did after previous large-scale changes to how work is done?
*shrugs* Rebellion and migration are forms of adaptation. Just more painful and difficult that many. "Pathology" may include the definition of deviation giving rise to social ills.
https://www.merriam-webster.com/dictionary/pathology

Edit: I probably should have included drug addiction, alcoholism and suicide in my list of "pathologies".
 
Last edited:
Perhaps this is the wrong thread, but it seems AI robots will displace another 70,000,000 people from the US workforce within another 13 years.

AI robots or robot robots? Because automation has been on the rise for some time and will continue to do so. Repetitive tasks don't require AI. If your job is simple enough that it can be automated, you probably shouldn't get too excited about yourself. On the other hand, there will be increasing demand for electrical and mechanical engineers, technicians, operators, networking and IT staff, maintenance, sales people, and so on.

I like how you assume the worst will come from freeing humanity from menial labour and allowing people to embrace creative and fulfilling careers though. Just another aspect of your profoundly catastrophist worldview I suppose. Down with change!

Personally, I welcome our new robot overlords.
 
Here's a new hour long documentary that describes during the current AI climate and warns about the future. It's somewhat simplistic, but it hits the current major points such as Cambridge Analytics. It features many key people such as Elon Musk, Ray Kurzweil, Tim Urban, and many professors from universities. It's free on their website from now until Sunday.

http://doyoutrustthiscomputer.org/watch
 
Back