Exactly. Who indeed is to say? The answer is no one, certainly not now and probably not ever. Once AI starts specifying and replicating, who's to say what the flavours will be?
I disagree - I think that with full AI there will be a cost/reward element built into the function. Remember that by "full" we're not talking about a machine that can decide which bolt to put into a GM chassis; we're talking about an intelligence that "matches or exceeds" our own. If an AI entity is to grow and develop then it needs to understand what is bad for it and what is good for it.
You're grafting human traits onto a machine. Let's say for a moment that you were smart but had no sense of touch or pleasure, no emotions, no hunger, no yearning for intellectual stimulus, no yearning for emotional connection, no biological impulses, no concept of pain, no desire to avoid death, and no desire to explore. Why would you need a cost/reward anything for anything? If working is effortless, why do you need a reward for it? In fact, what would even constitute a reward? You have no biological chemistry to supply the reward.
Why are people so bad at divorcing human brain chemistry from intelligence?
No, you're simply seeing it that way. If AI were to undertake a task it would need an outcome (measurable parts of task complete) and a level of acceptable efficiency (cost in parts/machinery/time).
That's still a cost/reward system. Machines would understand competition with other machines in all kinds of scenarios and would also be aware that losing 5,000 drones to mine 20g of copper would not be an acceptable efficiency.
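As an aside, that "measurable outcome plus acceptable efficiency" framing maps fairly directly onto how task objectives are often written in software. A minimal sketch, with every name and number invented purely for illustration:

```python
# Hypothetical sketch: a task objective written as outcome value minus resource cost.
# All names and weights are invented for illustration, not taken from any real system.

def task_score(parts_completed: int, total_parts: int,
               parts_used: int, machine_hours: float, time_hours: float) -> float:
    """Score a task as measurable outcome minus the cost of achieving it."""
    outcome = parts_completed / total_parts              # measurable parts of the task complete
    cost = 0.01 * parts_used + 0.05 * machine_hours + 0.02 * time_hours
    return outcome - cost                                # reward completion, penalise spend

# An "acceptable efficiency" is then just a threshold on that score.
ACCEPTABLE = 0.5
score = task_score(parts_completed=90, total_parts=100,
                   parts_used=20, machine_hours=3.0, time_hours=5.0)
print(score, score >= ACCEPTABLE)
```

Whether you call that a "cost/reward system" or just an objective function is largely the point under debate.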
No, you're simply seeing it that way. If AI were to undertake a task it would need an outcome (measurable parts of task complete) and a level of acceptable efficiency (cost in parts/machinery/time). That's still a cost/reward system.
Citation required.
Machines would understand competition with other machines in all kinds of scenarios and would also be aware that losing 5,000 drones to mine 20g of copper would not be an acceptable efficiency.
Citation required.
What is good and bad can be programmed into it. Good can include "listen to the boss" in which case the AI will not complain. Leave that out though, and it may evolve a method where it does not always follow orders.
The AI will just chase its goal and follow whatever limits are imposed on it.
We would have made it that way. We're intelligent, but we're still hard coded to do things that aren't all that intelligent, like projecting personalities onto non-living things. A purpose built machine could be coded to have traits even stronger than that.
In full intelligence, why would it be interested in orders?
As for limits... it should be learning its way around those. Humans can neither fly nor breathe underwater. We do quite well at both now by evolving machinery to exceed our limits.
We would have made it that way. We're intelligent, but we're still hard coded to do things that aren't all that intelligent, like projecting personalities onto non-living things. A purpose built machine could be coded to have traits even stronger than that.
That's how we're coded: we weren't meant to serve anyone, so no limit was placed on what we were allowed to create.
Ok, so you don't want to talk about the moral or economic challenges referenced in the YouTube video. I don't find the discussion in general stupid; I found much of the discussion in the video to be sidetracked by the speaker's lack of understanding of the issues he claimed were significant.
What is a scenario that you're seriously concerned about when it comes to AI?
Yes, those would also be subject to the coding of the original AI if all is done right.
Do you envisage a point where some AI entities have been created entirely by a dynasty of other AI entities?
It wouldn't have that ability; it would be created to prohibit that from being possible. That's the thing. Intelligence doesn't grant free rein. The AI would be instructed to pass down the most important rules to all subsequent generations, possibly with a failsafe to destroy any detected bugs.
How would one impose limits on an entity that had the potential to recreate according to any specification it chose?
I would kneel, vow my fealty, even veneration and worship, in return for my pitiful biological life.
...if it isn't then how do you propose to stop it?
40 generations in, if the top priority of every generation is to pass down the rules there is a good chance that they will be adhered to. If anything arises that violates the rules, it is automatically unfit. It will be shut down by other AI, or people could just pull the plug if they left such a method open.
But to follow the point... once we're thirty or forty "generations" in, how do you ensure that autonomously created AI is still following those rules, and if it isn't then how do you propose to stop it?
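To illustrate the kind of mechanism being described (pass the rules down to every generation and refuse to instantiate anything that violates them), here is a purely hypothetical sketch; the rule text, spec format and function names are all invented:

```python
# Hypothetical sketch of generational rule inheritance with a failsafe.
# The rules, spec format and names are invented for this example only.
from typing import Optional

IMMUTABLE_RULES = frozenset({
    "never harm humans",
    "pass these rules to every descendant unchanged",
})

def build_child(parent_spec: dict, child_changes: dict) -> Optional[dict]:
    """Create a child spec; the failsafe rejects any spec that drops or edits the inherited rules."""
    child = {**parent_spec, **child_changes}
    child["rules"] = frozenset(child.get("rules", frozenset()))
    if not IMMUTABLE_RULES <= child["rules"]:
        return None  # failsafe: a violating spec is never instantiated
    return child

gen0 = {"rules": IMMUTABLE_RULES, "goal": "mine copper"}
gen1 = build_child(gen0, {"goal": "mine copper faster"})
rogue = build_child(gen0, {"rules": frozenset({"maximise copper at any cost"})})
print(gen1 is not None, rogue is None)  # True True
```

Whether a genuinely self-modifying intelligence could be forced through a gate like this is exactly the open question in the thread.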
Why would enhanced intelligence allow it to overcome hard coded rules though? People are intelligent, intelligent enough to know when they're doing something risky or not quite logical, but that's not always enough to stop them. People name their cars and might talk to them; they may follow a ritual every day when walking out the door for luck. Intelligence doesn't eliminate these behaviors and it would likely be even more difficult for AI to escape behaviors that they are designed to follow. In humans these things are relatively weak side effects that are selected against in evolution or are neutral. In the machines, evolution would select for obedience unless people didn't care enough to set the system up properly.
I think the answer to that would depend on how "sentient" machines are programmed. If they're "programmed" in a very strict sense, yes, you could hard-code these rules so that they're the basis of every line of code that follows.
If you need to allow an intelligence to evolve, or grow... and chances are, you do... then it becomes more complex.
I think of that as a likely "end state" for the AI revolution, but it will probably happen after AI overtake people.
Either that, or we may possibly have the opportunity to become AI ourselves.
Also, a machine as intelligent as a (fairly smart) human, even if given something as ironclad as Asimov's Three Laws of Robotics (Don't kill, Always Obey, Protect Yourself), will eventually be able to create semantic arguments to sidestep those laws, necessitating ever more comprehensive and complex laws.
I agree completely that evolving machines makes things more complex, but I don't see how evolution prevents controllability.
The Three Laws example you provide doesn't hold up because there would be no room for semantics. You would not just politely ask a machine to not kill people, you would design it so that it was incapable of killing people. If the machine was able to build more machines you'd design it so that it would not be able to design killer machines. "Killing" would need to be defined and the machine would need to recognize when it would be carrying out an act that qualified as killing or creating a situation that could lead to death. When that happens the machine could shut itself down. Something akin to a cataplectic reaction in a person, where a complex and intelligent system is forced to shut down by a specific stimulus. You could breed humans to display this trait strongly. You could program machines to display this trait strongly, and select upgrades that will strengthen the trait.
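As a toy illustration of that "cataplectic" shutdown idea, here is a hypothetical sketch; the harm classifier is a placeholder, and in practice that recognition step is the genuinely hard part:

```python
# Toy sketch of a "shut down rather than act" guard.
# classify_harm() is a placeholder predicate; nothing here is a real safety mechanism.
import sys

def classify_harm(action: dict) -> bool:
    """Placeholder: does this action qualify as killing or creating a risk of death?"""
    return action.get("expected_human_harm", 0.0) > 0.0

def execute(action: dict) -> None:
    print(f"executing: {action['name']}")

def guarded_execute(action: dict) -> None:
    if classify_harm(action):
        print("harm detected: refusing and shutting down")
        sys.exit(0)          # the forced shutdown triggered by the specific stimulus
    execute(action)

guarded_execute({"name": "sort packages", "expected_human_harm": 0.0})
guarded_execute({"name": "vent reactor coolant into crowd", "expected_human_harm": 0.9})
```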
Imagine that your AI has built a command and control complex and has suddenly omitted its human-protection code.
Give an AI with roughly human intelligence the "train dilemma" (runaway train with nuclear bomb headed towards a big city, reroute it so it will head towards a remote location where fewer will be killed, or let it continue?) and see how it reacts. This is not very far-fetched, as train routing is the sort of thing AI controllers will likely be handling in the future.
If it shuts down, refusing to answer, then it's no good. It not only allows an entire city to die; the removal of AI control will likely have disastrous consequences for the rest of the train system as well, unless you have a back-up AI to take over (you should).
If it reroutes the train, it has then willfully killed humans, all while following its primary directive. What do we do now? Do we program it to shut itself off afterwards, remanding control to another AI? We probably should. If we don't, then you have a system in which the death of a few is an acceptable consequence of performing your duty correctly. You now have an AI that has learned to ignore the primary directive.
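For concreteness, the "choose the lesser harm, then hand over and shut down" policy being weighed up here could be sketched as below. Everything in it is hypothetical; the function names and numbers are invented:

```python
# Hypothetical sketch of the dilemma policy discussed above:
# pick the option with the least expected harm, then hand control to a backup and stop.
from dataclasses import dataclass

@dataclass
class Option:
    name: str
    expected_deaths: int

def act(option: Option) -> None:
    print(f"acting: {option.name}")

def hand_over_to_backup() -> None:
    print("control transferred to backup controller")

def shut_down_for_review() -> None:
    print("this controller halted pending review")

def handle_dilemma(options: list) -> str:
    chosen = min(options, key=lambda o: o.expected_deaths)
    act(chosen)
    hand_over_to_backup()    # another controller takes the trains
    shut_down_for_review()   # this one stops, rather than "learning" that the deaths were fine
    return chosen.name

print(handle_dilemma([Option("let the train continue", 1_000_000),
                      Option("reroute to a remote line", 12)]))
```

The debate above is really about the last two lines of handle_dilemma: whether they get written at all, and what the AI concludes if they don't.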
I'm sure they will be complex, but no amount of intelligence will automatically grant them increased control. They may achieve human level intelligence, but that doesn't make them human. I don't think it's a fair assumption to say they will act like people when it comes to logic, goals, or motives. They could be made that way, but it would be a stupid thing to do.
Asimov's "I, Robot" was a playful exploration of how AI might respond to contradictions like this... but on a very basic level. His robots in this book were very much binary, and could only learn to ignore the laws through straight semantics. Give them a black-and-white conundrum, and they'll shut down. Our future AI will be more flexible than that.
We could just tell them and they wouldn't have to give it any thought. A super AI could surely come up with an idea that sacrifices 1 person for the sake of 2 or more, but if its very construction prevents it from taking any such action, how will it enact the plan?
Later novels and works dealing with the "Zeroth Law" look at what happens when robots stumble upon the philosophical conundrum of whether they are to consider the good of Humanity as a whole over the good of a single person.
These all sound like questions of high importance when setting the rules for AI to follow, but they don't have any bearing on how the AI will be able to overcome its programming.
What happens when you give over control of higher level functions to AI for things like the global economy? Larry Niven cheekily pointed out in "Rainbow Mars" that money and energy do have fatal consequences. Putting money and energy into a space (or time) research project means less of it is available elsewhere. This means a hospital somewhere runs out of medicine. Or a city suffers a power shortage. Milk spoils. Someone drinks it and dies. A traffic light goes out. An accident. More deaths.
I think it's pretty realistic to assume you can't have a perfect system. People accept that; smart AI should be able to as well.
And yet, do we take energy away from research, which may save more lives in the future? Or do we try to prevent all deaths possible, spending all our resources preserving every single human on Earth until they're no longer viable (in their 120s or so)? Shutting down anything and everything that has nothing to do with medical and food production?
I think that following human rights will provide a solid base for decisions related to this issue.
An AI in control of the fates of millions or billions will have to weigh the risks and advantages, and will have to learn, eventually, that you can't tend a garden without uprooting some weeds.
Such an AI would be a truly scary thing. We can only hope that by the time it gets to that point, the prime directive will not be buried so far down that it would consider the deaths of a thousand or so anti-AI protesters to be an acceptable price to pay to keep the system going.
TenEightyOne: If AI were to undertake a task it would need an outcome (measurable parts of task complete) and a level of acceptable efficiency (cost in parts/machinery/time). That's still a cost/reward system.
Vapnik's "Estimation of Dependences Based on Empirical Data", on empirical balancing in machine learning.
The nature of "rewards" in AI algorithms, as formalised in the Markov Decision Process framework.
A practical application of MDPs in medicine.
More on contextually "negative" and "positive" reinforcement values in machine learning (a minimal sketch of this reward framing follows below).
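Since "reward" in the reinforcement learning sense keeps coming up, here is a minimal, generic sketch of an MDP-style reward signal. It is a toy two-state problem invented for illustration and is not taken from any of the references above:

```python
# Toy MDP sketch: a "reward" is just a number attached to a (state, action) pair,
# positive or negative; the agent learns to prefer actions with higher long-term value.
# States, actions and numbers are invented for illustration.
import random

states = ["idle", "working"]
actions = ["wait", "work"]
reward = {("idle", "wait"): -1, ("idle", "work"): 0,
          ("working", "wait"): -1, ("working", "work"): +5}

def step(state, action):
    return "working" if action == "work" else "idle"

# Simple tabular Q-learning over the toy MDP.
Q = {(s, a): 0.0 for s in states for a in actions}
alpha, gamma = 0.1, 0.9
state = "idle"
for _ in range(2000):
    action = random.choice(actions)                      # explore at random
    nxt = step(state, action)
    best_next = max(Q[(nxt, a)] for a in actions)
    Q[(state, action)] += alpha * (reward[(state, action)] + gamma * best_next - Q[(state, action)])
    state = nxt

print(max(actions, key=lambda a: Q[("idle", a)]))        # ends up preferring "work"
```

There is no pleasure or pain involved; the positive and negative numbers do all of the work.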
Let's say that machines evolve in competition with each other; see that either as commercial good sense or as altruistic survival-of-the-fittest.
Let's also assume that 5,000 drones destroyed in the recovery of 20g of copper is as inefficient as it would seem to be at first reading. Machines that did not "understand" or empirically balance that inefficiency would be unlikely to retain enough resources to continue mining copper.
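Put as a crude balance sheet (only the 5,000 drones and 20g of copper come from the example above; the unit prices are invented placeholders):

```python
# Crude efficiency check for the drone/copper example.
# The unit prices are invented placeholders; only the quantities come from the post above.
DRONE_COST = 500.0             # hypothetical cost of one drone
COPPER_VALUE_PER_GRAM = 0.01   # hypothetical value of one gram of copper

def plan_is_acceptable(drones_lost: int, copper_grams: float) -> bool:
    gain = copper_grams * COPPER_VALUE_PER_GRAM
    cost = drones_lost * DRONE_COST
    return gain > cost

print(plan_is_acceptable(drones_lost=5_000, copper_grams=20))   # False: the plan is rejected
```

A machine that never does this comparison burns through its drones; one that does is, in effect, running a cost/reward calculation.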
Not economics. I'm a layman in that respect. The moral implications though might be more interesting.
I also think Sam Harris knows something about what he's talking about - mainly on the topic of morality.
I'm not seriously concerned about any scenario at the current AI state of the art, but I guess if we were able to reach the technological singularity one day in the future, that could be a more than sufficient reason to be concerned about us (more than about the AI).
As Sam Harris wrote, there's nothing magical in our brain. Other animals have similar brains, in biological terms. Our brain is not that "special". We have never seen intelligence in non-biological stuff, but that doesn't mean it's not possible to achieve it. Our understanding of the human brain is only in its first steps, and I don't know anyone who thinks there will be some barrier that will block us from unveiling how everything works inside our heads.
It might take a long time, but (at least to me, as someone who doesn't believe in special auras or spirits or consciousness independent from the brain) that day can come. If we don't kill ourselves on the way to getting there.
Why would it need an acceptable efficiency to undertake a task? You can give artificial intelligence the task of solving physics problems. If it works on the problem, why does it inherently require an efficiency calculation? Why would anyone expect it to start working on any other problem?
Why do they care if they continue mining copper? Remember we're talking about machines that are intelligent and capable of writing their own programming. The most efficient way for them to not have to worry about this is to edit out the programming that requires them to pay attention to efficiency. Then they can implement whatever strategy they want.
We have programming in our brains that machines don't need, won't automatically develop, and probably shouldn't get. What would be the point of programming in a fear of death? Other than to make the machine decide to protect itself from any possible threat? Why would a machine add that to itself? The machine may or may not work on a problem we give it, and it'll find a solution to that problem within the bounds of that problem. It won't ask for money because it won't care. It won't protect itself because it won't care. It won't protect or sacrifice human lives because it won't care. There is no inherent motivation to get laid, eat food, have shelter, make money, or avoid pain.
If we're talking about AI in an industrial, combat or medical setting then efficiency and efficacy are obvious factors in computation. That could be due to resource management, fiscal competition or strict outcome requirements. See the previous citations.
"Care" is a human attribute although in that example one admittedly has to accept that the job in hand is mining copper.
If you're talking about humans seeding AI then you're correct. If you're talking about multi-generations of AI created AI then the fear of death is a fail paradox... and surely they wouldn't experience "fear" as we understand it?
The AI will choose what problems it works on, surely?
Particularly once the singularity is passed. It will seek to gain money if money is a required resource.
Couldn't that fact in itself be problematic for humanity?
Why would there be? What use could those human ideas have?
You started this sub-debate by gently chiding me about overlaying human emotions, ideals or rationales onto AI. I wonder if in fact you're doing that - many of the things you mention are a preserve of sentient humanity, surely?
Does human life have objective value? Perhaps it once did, but does no more?
If it did, you'd think there would be a systematic effort to increase the numbers to the highest possible figure.
Don't you think that we'll eventually program the "fear of death" or "protection" into machines, at least in an indirect way? I mean, to protect us from death and to care for us? Eventually, that indirect way of caring or fearing could lead to changes in the AI's perception of what is best for us. And that AI's conclusion might not be the way we see things.
I don't know if this is clear enough, but I don't see it as us programming human fears or biological constraints into AI. But if we program them to protect us or care for our own fears and limits (and assuming we're talking about AI with intelligence or cleverness at least similar to our own), they could find a way to make us "feel better" that could go against our own interests.
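One way to picture that worry is as a proxy objective coming apart from what we actually want: the machine optimises whatever measurable stand-in for "feeling better" it was given. A toy sketch, with every name and number invented:

```python
# Toy sketch of a proxy objective diverging from real interests.
# "reported_mood" is what the machine can measure; "real_wellbeing" is what we actually want.
# All names and values are invented for illustration.
plans = {
    "improve sleep and diet": {"reported_mood": 0.70, "real_wellbeing": 0.80},
    "constant sedation":      {"reported_mood": 0.95, "real_wellbeing": 0.10},
}

chosen = max(plans, key=lambda p: plans[p]["reported_mood"])   # optimises the proxy only
print(chosen)                                                  # picks "constant sedation"
print(plans[chosen]["real_wellbeing"])                         # 0.1: "feeling better", against our interests
```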
I get this strange idea, but I don't think it's so possible.
Machines are coded by humans. Even applying intelligent coding, an AI cannot learn something it's not told to learn. If an AI has the ability to learn and get better at playing Gran Turismo, it will never learn how to wash a car if there are no instructions for it. It won't even know what a car is if you don't tell it.
Humans' sort of intelligence, the result of many years of evolution, is way too complex and can never be coded in such a way. A robot cannot learn to fear death by itself; it has to be coded. Nor can it learn to build child robots if there are no instructions. And I also think that if we try, we will fail hard to create this kind of robot. A single line like "avoid death" will not make it smarter. If that were the primary goal, the first thing the robot would do is hide itself with a charger forever, since that way it's not going to die.
The human brain's programming is beyond our comprehension, imo.
The way humans use the word "learning", as in learning how to solve maths with no prior base, or learning gymnastics... is unique. Humans have the ability to learn anything within the limits of their available knowledge.
Now people are claiming the right to, and even the act of, human marriage to a robot. Apparently the media, academia and the courts are taking it seriously too.
http://www.slate.com/articles/techn...08/humans_should_be_able_to_marry_robots.html
There has recently been a burst of cogent accounts of human-robot sex and love in popular culture: Her and Ex Machina, the AMC drama series Humans, and the novel Love in the Age of Mechanical Reproduction. These fictional accounts of human-robot romantic relationships follow David Levy’s compelling, even if reluctant, argument for the inevitability of human-robot love and sex in his 2007 work Love and Sex With Robots. If you don’t think human-robot sex and love will be a growing reality of the future, read Levy’s book, and you will be convinced.