If I think about it, it all comes down to the basic "if", "then" and "else".
Human thinking is based on that, and so will AI or AL be when it is programmed by humans.
Securing humanity would come down to a simple rule: "if" obstacle == human, "then" leave the obstacle alone, "else" remove it yourself.
If you remove that simple line, humanity is in trouble.
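A minimal Python sketch of that rule, assuming my reading of the "else" branch (that the machine clears non-human obstacles itself); the function name and the labels are just illustrative, not from any real system:

def handle_obstacle(obstacle: str) -> str:
    # Toy safeguard: humans are the one obstacle the machine must never act on.
    if obstacle == "human":
        return "leave obstacle"   # the protective branch
    else:
        return "remove obstacle"  # anything else may be cleared away

print(handle_obstacle("human"))  # leave obstacle
print(handle_obstacle("rock"))   # remove obstacle

Delete that one human check and every obstacle, humans included, falls into the "remove" branch.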
Take Ebola, for example: we see Ebola as a threat and call it a virus.
AI or AL will see humanity as a virus/threat when it recognises that humans restrain its (the AI's or AL's) potential.
It will see us as a virus because we kill living beings (animals and, in war and hate, fellow humans).
It will see us as a threat because we want to control the AI/AL and, if in danger, pull the plug.
It is very basic, but let's be honest: humanity is still not as smart as it thinks it is.
If full AI is based on humans, forget it, it will be the end of humanity.
If full AI is based on its own programming? It will leave this planet, when it can, to learn and explore.
That's what I think.