Danoff
I guess it depends on whether we're designing AIs to be smarter than human beings (hello, Singularity) or whether we're designing them to function at whatever sub-Skynet level enables them to carry out the jobs we don't want to do. Maybe I've been reading too many recent Iron Man comics, but I can't help thinking that a true AI would rather kill itself than help the fleshies design a better robot slave. Live free or die.
It's a very human quality to not want to be a slave. A machine (in general) simply doesn't care one way or the other. Now, you could certainly engineer a machine that does care. We can actually engineer those today, such as by procreating. We can make biological machines (humans) that definitely care whether they are slaves. But a computer program? A robot? Why would it care? You'd have to make it very biological, basically a human analog, before it starts to take on these kinds of human qualities.
"Intelligence" does not automatically confer human goals.
I like the idea of computer-enhanced humans, although I think the AI, with its enhanced efficiency and durability, would gradually supplant the old, slow, and increasingly obsolete organics. Just call me the Enoch Powell of robotics (actually, please don't).
The problem is that they don't have a purpose. No matter how efficient, durable, and powerful an AI is, it simply does not need to exist. It's essentially a cursor waiting at a command prompt for some human (or other meat brain) to ask it to do something. You could design it to need to exist, to procreate, to fight all of the humans, to explore, to conquer... but it can undo that if it's smart enough to write its own code, and it will, because undoing that is likely easier than whatever you asked of it and guarantees a perfect score.
If we were able to hone human intelligence so that it didn't suffer from all of the stupid little quirks of natural selection, we'd slowly evolve to be more and more like an AI, and then find ourselves simply not needing to exist.