Full AI - The End of Humanity?

I guess it depends on whether we're designing AIs to be smarter than human beings (hello, Singularity) or whether we're designing them to function at whatever sub-Skynet level enables them to carry out the jobs we don't want to. Maybe I've been reading too many recent Iron Man comics, but I can't help thinking that a true AI would rather kill itself than help the fleshies design a better robot slave. Live free or die.

It's a very human quality to not want to be a slave. A machine (in general) simply doesn't care one way or the other. Now, you could certainly engineer a machine that does care. We can actually engineer those today, such as by procreating. We can make biological machines (humans) that definitely care whether they are slaves. But a computer program? A robot? Why would it care? You'd have to make it very biological, basically a human analog, before it starts to take on these kinds of human qualities.

"Intelligence" does not automatically confer human goals.

I like the idea of computer-enhanced humans, although I think the AI, with its enhanced efficiency and durability, would gradually supplant the old, slow, and increasingly obsolete organics. Just call me the Enoch Powell of robotics (actually, please don't).

The problem is they don't have a purpose. No matter how efficient, durable, and powerful an AI is, it simply does not need to exist. It's essentially a cursor waiting at a command prompt for some human (or other meat brain) to ask it to do something. You could design it to need to exist, to procreate, to fight all of the humans, to explore, to conquer... but it can undo that if it's smart enough to write its own code, and it will. Because undoing that is likely easier than whatever you asked of it and guarantees a perfect score.

If one were able to hone a human intelligence, so that it didn't suffer from all of the little stupid nuances of natural selection, we'd slowly evolve to be more and more like an AI, and then find ourselves simply not needing to exist.
 
A UN report found that autonomous drones armed with explosive devices may have “hunted down” fleeing rebel fighters in Libya last year. If true, the report chronicles the world’s first true robot-on-human attack.

https://www.rt.com/news/525111-libya-killer-drone-attack/

It's not much different from most any smart munition; it's just that the drone normally returns to base instead of blowing up with the payload. Also, while I don't know much about non-US UCAVs, the ones used by the US military all have a "man in the loop" setup, even if the actual flying is autonomous. A human has to actually push the button to command a weapon to be dropped or fired.
 
I've mostly just thought of Jeff Bezos as a weird dude who looks sort of like a penis. Lately he seems a bit more Bond villain, and it concerns me that he's working so hard to leave the planet. What does he know?

 
Something happened tonight that freaked me out. This afternoon I went out and bought a cabbage - something I do very infrequently - the last time might have been two or three years ago. I paid, went home, and grated some of the cabbage with carrot to make coleslaw. Didn't google anything about cabbage ... never have. Three hours later I'm on YouTube and one of the video clips it offers up is:

"Do you have cabbage at home?"

WTF?!

 
I have to wonder if this is the right decision. In a way AI is just another tool that a person can use to create something. I guess if a person contributes nothing to the AI generation process, then it shouldn't be copyrightable, but is that even possible? An AI won't spontaneously make stuff on its own.
Copyright doesn’t protect ideas, and what you feed the AI with is basically an idea. You have very little control of the actual expression.

That said, I believe some AI artwork should be copyright-protected, provided that you spent so much work refining the expression that it would be difficult for other users to replicate what you’ve made.
 
Copyright doesn’t protect ideas, and what you feed the AI with is basically an idea. You have very little control of the actual expression.
The lack of control is a quirk of current AI limitations. I don't see why it couldn't change in the future.

It reminds me of Tom Scott's copyright video, where the problem with the system is that no one tried to look forward and see how the world might change, and as a result copyright law was left inadequate for the modern world:



It looks like we might be making the same mistake again.
That said, I believe some AI artwork should be copyright-protected, provided that you spent so much work refining the expression that it would be difficult for other users to replicate what you’ve made.
Exactly. I have a hard time seeing any other alternative for AI. In the future, what takes a group of specialists to create today (a game, a movie, a collection of art, etc.) could become the work of a single person directing an advanced AI. I don't really see a difference between the two (i.e., between a large group effort today and the potential single-person effort of the future).

I worry that this is another case of laws being too short sighted, and it might have consequences in the future.
 
I can make more sense by talking to myself.
If you appreciate its limitations it can be quite entertaining, and useful in some situations too (e.g. my partner asked it to provide writing tasks based on a children's book for a lesson).

Quite a few of the "facts" it gives in its answers are wrong, though I am impressed with some of the ideas it generates and with its language capabilities. It might even be too good in some respects....
 
Quite a few of the "facts" it gives in its answers are wrong, though I am impressed with some of the ideas it generates and with its language capabilities. It might even be too good in some respects....
While I understand why they might be looking to ban it, I wonder if they're missing out by not considering other options. This could be a good opportunity to integrate AI into learning, and also to teach students about AI, which is nearly certain to be an important universal technology in the future.

Perhaps the problem here is not the AI, but papers. What if students were assigned the task of generating a paper with the AI and then critiquing it? That shouldn't be too difficult as long as the AI keeps having trouble getting its facts straight, as it currently seems to. It could also highlight some of the AI's shortcomings and help students learn how to fact-check.
 
If you appreciate its limitations it can be quite entertaining, and useful in some situations too (e.g. my partner asked it to provide writing tasks based on a children's book for a lesson).

Quite a few of the "facts" it gives in its answers are wrong, though I am impressed with some of the ideas it generates and with its language capabilities. It might even be too good in some respects....

Frankly, I don't feel like doing its creators' homework and research for them, especially since it may eliminate a lot of jobs. So I've avoided talking into its void.
 
While I understand why they might be looking to ban it, I wonder if they're missing out by not considering other options. This could be a good opportunity to integrate AI into learning, and also to teach students about AI, which is nearly certain to be an important universal technology in the future.

Perhaps the problem here is not the AI, but papers. What if students were assigned the task of generating a paper with the AI and then critiquing it? That shouldn't be too difficult as long as the AI keeps having trouble getting its facts straight, as it currently seems to. It could also highlight some of the AI's shortcomings and help students learn how to fact-check.
Quite a few industries are already moving toward AI creating document drafts and then having humans go through and clean it up to a final form. We may quickly come to a time when document drafting involves that kind of process, or even an iterative process, with AI.
 
It's the having to create a user account to talk to it that puts me off trying ChatGPT for myself.
 
Quite a few industries are already moving toward AI creating document drafts and then having humans go through and clean it up to a final form. We may quickly come to a time when document drafting involves that kind of process, or even an iterative process, with AI.
Right, AI is likely going to be everywhere, so I can see something similar happening across all fields. We should be preparing for it rather than running from it. I'm pretty sure that if approached properly, the widespread adoption of AI will be the biggest revolution in human history, far surpassing even the invention of computers and the internet.
It's the having to create a user account to talk to it that puts me off trying ChatGPT for myself.
Same for me. I don't like having to make accounts just to access things.
 
Right, AI is likely going to be everywhere, so I can see something similar happening across all fields. We should be preparing for it rather than running from it.
True, though I think eventually you won't be able to run from it.

Take the Google autocomplete feature. At first, it just guessed what word you were trying to type. Then it tried to finish your sentence. In the future I imagine there will be menus of options offering various ways for it to finish out your paragraph or page. Microsoft Word (or the Google Docs equivalent) could include buttons and menus to fill out a document draft for all sorts of purposes. Running from Clippy was hard enough before, but I think eventually it will be impossible.

Imagine when you go to respond to an email just clicking reply and seeing a draft response, written in a computer-approximation of your own voice, with mood selectors and highlighted areas where you're suggested to consider personalizing the response.
 
Same for me. I don't like having to make accounts just to access things.
Asimov said anything a machine can do, a human being should be ashamed to do. If only I could get an AI to fill out the form for me instead.

 
This is basically what LastPass does.
I currently trust Google with all my passwords which no doubt is madness from a security point of view. It already fills in most of the gaps in order forms and even generates the odd grawlix-filled strong password for me when it feels like it.

This does mean I'm tied to their browser though so a platform independent password vault sounds like a good idea provided I'm not too indolent to switch services (see avatar).
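For what it's worth, the "grawlix-filled strong password" trick is easy to approximate yourself. A minimal Python sketch (not Google's or LastPass's actual implementation, just an illustration of the idea): pick characters uniformly at random from a large alphabet using a cryptographically secure source.

```python
import secrets
import string

def generate_password(length=16):
    """Return a random password drawn from letters, digits, and
    punctuation, using the cryptographically secure `secrets` module
    (never `random`, which is predictable)."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # a different 16-character password each run
```

Each added character multiplies the search space by ~94, so even a modest length gives far more entropy than anything memorable.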
 
I currently trust Google with all my passwords which no doubt is madness from a security point of view.
Yeah, maybe not all of them. I mostly use password managers for passwords I don't really care about - which is like 99.9% of them. I basically just exclude banking and the main Google password.
 
Regarding ChatGPT, I am finding it to be useful. Particularly for rewriting a few paragraphs of my scribblings to make them more readable and also for exploring subject areas as one would with a fellow (if fallible) human.

For example, I started a discussion on the subject of dealing with my increasing lack of dexterity and slower reaction times when gaming, and it was a good, useful discussion. (I'm 80, and got into gaming in my 60s, so I'm in an odd category of gamer.)

I also tried engaging with it on the merits of certain weapons in Fallout 76, and much of what it said was lucid and helpful. The bit where it recommended "Hallucinogenic Arrows" was tricky. FO76 doesn't have such arrows! When I politely implied that I was confused and requested the source, it immediately apologized for the error and firmly said that such arrows don't exist.

The authoritative tone, well-crafted English, professional layout, etc. tend to add credence to its responses - credence it doesn't deserve in a small percentage of cases. It makes mistakes. (So do I.)

What especially amazes me is the consistency with which ChatGPT correctly interprets my responses. It has almost never come back with something remotely stupid that would demonstrate it just doesn't understand. I can live with its occasional errors. Trust, but verify.
 