Full AI - The End of Humanity?

But people seem to want to take this in a direction that literally removes the decision-making from the person, which, from my perspective, means the person isn't doing anything worth their money. Various forms of automation have already reduced highly educated and skilled humans to nothing but managers, but if we remove the management aspect, then what are we, secretaries? Even less than that? And we'll get paid appropriately, I'm sure.
AI doesn't have goals; it relies on people for that. In that sense there is always room for human decision-making. People will always have wants and needs. AI is another tool for serving those wants and needs. I do think it's an eventuality that AI will become better than any person at doing this. At that point the concept of employment as we know it dies, but that is fine if AI has filled in all the slots that used to require humans.

I feel like the start and end points aren't very concerning, but the transition in between is very murky. Going back to the instance of AI use that started this conversation, I think it was perfectly valid. The first thing that came to my mind was automated secretary, as @Imari mentioned.
But you let me know when you'd like AI to operate the next flight you're on so I can get up and make me a coffee. It gets boring sitting up there watching the pretty colors go by.
I don't have a problem with this conceptually. A human pilot isn't completely reliable anyway. A few of them have a tendency to run planes into the ground, sometimes intentionally.
 
Those machines don't make decisions. They merely act on hard programming; that's it. The human expertise on the back end is in creating that programming, and on the front end in managing that programming - making decisions on how best to utilize the machine. But sometimes they stop working, and every machinist friend I have can still fire up the old belt-driven Bridgeport and cut steel by hand if needed. The Bridgeport still works. It always works. This is proof that the humans still fully understand the what, why, and how of their machines' automation. Same for me in the plane. I understand its decision-making thoroughly, but when it stops making decisions, my cables and accumulators let me keep making them.

AI isn't like that. Literal AI experts admit that they don't really understand what's going on, which is horrifying, to say the least. If ChatGPT is simply scouring the internet for the next word or for formatting techniques, then it's not AI, it's just a textbot. Fine. We know how textbots work, right? But people seem to want to take this in a direction that literally removes the decision-making from the person, which, from my perspective, means the person isn't doing anything worth their money. Various forms of automation have already reduced highly educated and skilled humans to nothing but managers, but if we remove the management aspect, then what are we, secretaries? Even less than that? And we'll get paid appropriately, I'm sure.

But you let me know when you'd like AI to operate the next flight you're on so I can get up and make me a coffee. It gets boring sitting up there watching the pretty colors go by.
Planes already fly by AI; it's called autopilot.
 
At that point the concept of employment as we know it dies, but that is fine if AI has filled in all the slots that used to require humans.
So the death of the market economy is just a thing that you're not worried about. The entire basis for people working, earning, achieving, and enjoying life will disappear. It's not like instances of this haven't occurred in the past, and it was always a disaster.

Planes already fly by AI; it's called autopilot.
Now you're really conflating definitions. The programming in airplanes is no more "intelligent" than a Texas Instruments calculator.
 
So the death of the market economy is just a thing that you're not worried about. The entire basis for people working, earning, achieving, and enjoying life will disappear. It's not like instances of this haven't occurred in the past, and it was always a disaster.


Now you're really conflating definitions. The programming in airplanes is no more "intelligent" than a Texas Instruments calculator.
Either way, you're acting like artificial general intelligence (AGI) is evil. Computers aren't evil.
It's not like AI will make things worse. AI is the next logical step in technological progress. Without computer technology we'd all be wielding swords instead of phones.
 
So the death of the market economy is just a thing that you're not worried about. The entire basis for people working, earning, achieving, and enjoying life will disappear. It's not like instances of this haven't occurred in the past, and it was always a disaster.
If it is the entire economy, and human needs can be cared for, then I am not worried. I'm basically talking about a world where no one needs to work. That is not on the horizon for the short term, so maybe it's out of scope of this particular discussion. How we handle AI now and in the near term is absolutely important and needs to be done carefully.

Either way, you're acting like artificial general intelligence (AGI) is evil. Computers aren't evil.
It's not like AI will make things worse. AI is the next logical step in technological progress. Without computer technology we'd all be wielding swords instead of phones.
The issue is that AI isn't well understood. That's a completely valid concern, although I'd say the same thing about the human brain, which we rely on constantly.
 
If it is the entire economy, and human needs can be cared for, then I am not worried. I'm basically talking about a world where no one needs to work. That is not on the horizon for the short term, so maybe it's out of scope of this particular discussion. How we handle AI now and in the near term is absolutely important and needs to be done carefully.
There's a lot that will be done carefully and there's plenty more that will be done carelessly.
Either way, we're benefiting from technology as a whole.

A tool invented for a helpful purpose will still be useful long after it has been surpassed.
The issue is that AI isn't well understood. That's a completely valid concern, although I'd say the same thing about the human brain, which we rely on constantly.

Yes, many people like to play the victim and assign blame, or argue over which result is better. AI will be able to take our intelligence to the next level, meaning most people would not be able to dispute the results AI comes up with. Fewer arguments, more progress.

Science fiction movies love to use the idea of computers being evil, though that's just fiction. Any new invention will always be met with some level of fear. Reality is always more complex than books or movies can portray.
 
So the death of the market economy is just a thing that you're not worried about. The entire basis for people working, earning, achieving, and enjoying life will disappear. It's not like instances of this haven't occurred in the past, and it was always a disaster.

What specifically are you referring to when you say the basis for working and enjoying life has disappeared in the past?

It's true that the market economy organizes things like a distribution of resources, and people find meaning and value in participating in it. But when resources are abundant (which is true by definition in this hypothetical, because AI), carefully organizing a distribution of resources may not be so essential. Additionally, finding meaning and value can come from outside market participation - and not everyone gets this benefit from the market today. It is a somewhat privileged statement.

People will work and innovate even if they are not compensated - especially if their material needs and most of their wants are met. Not all people, of course, but if you delve into scientific and engineering fields enough you will find people who are barely compensated but who slavishly devote their lives to the prospect of putting their stamp on the furtherance of human knowledge and capability. If AI were out there providing everyone with a comfortable life, there would be many people trying to push further into an understanding of the universe.

AI doesn't have its own motivations - those need to be supplied by humans. Motivating AI to help solve problems like FTL travel, terraforming, etc. will be our task. Ultimately the human species is doomed if we cannot move to another planet or planets, and eventually create (or travel to) another universe.
 
AI isn't like that. Literal AI experts admit that they don't really understand what's going on, which is horrifying, to say the least.
You should really try it. I've spent the last couple of weeks fiddling around with Stable Diffusion. If you feed the same stuff into it, you get the same stuff out, every single time. It is deterministic, like every other computer program.
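To make that concrete, here's roughly what that fiddling looks like - a minimal sketch, assuming the Hugging Face diffusers package and a CUDA GPU; the model ID, prompt, and seed are just placeholders:

```python
# Same prompt + same seed => the same image, every single run.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # placeholder model ID
    torch_dtype=torch.float16,
).to("cuda")

# Fixing the RNG seed pins down every "random" choice in the denoising loop.
generator = torch.Generator("cuda").manual_seed(1234)
image = pipe("an old belt-driven milling machine", generator=generator).images[0]
image.save("out.png")  # rerun the script and you get the same out.png
```

Change the seed and you get a different image; keep it and you get the same one, which is exactly what "deterministic" means here.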

As far as not understanding what's going on, why is that horrifying? Is it a requirement that you understand intimately the inner workings of a tool in order to use it? Should people not have used horses as draft animals without a deep understanding of equine biology and psychology? Should they not have used metals without understanding metallurgy? Chemicals without understanding chemistry? You'll find that history is an endless list of people using things before they completely understand them - partly because that's how you get an understanding and partly because it just doesn't matter most of the time.

Not understanding how a tool works is a reason to take care when using it, but calling it "horrifying" is just hyperbole.

Even "don't really understand" is a bit misleading, we know how these systems work in the general sense. They were designed by humans to work in exactly the way that they do, they didn't happen by accident. It's mathematically quite complicated and very clever, but the concept of how they operate is not terribly hard to understand.

It can be difficult or basically impossible to nail down exactly why the program makes certain specific choices, because of the complex and interdependent nature of how they're trained and the complete inability of the human brain to interface with that amount of data. But this isn't unique: in things like chemical manufacturing there are processes where the general principles are well understood but tracking down the exact causes of specific (usually undesirable) outcomes is near impossible, because you can't sit there watching individual molecules. Is that horrifying?
But people seem to want to take this in a direction that literally removes the decision-making from the person, which, from my perspective, means the person isn't doing anything worth their money.
So don't pay someone to do that job and just use the AI then. Some places are already doing that, with a range of outcomes depending on the specific job application. I don't know about you, but I've met plenty of people who aren't worth the money that they were being paid to "do" their jobs. As well as many people who were being wildly underpaid for the amount of work and value that they were producing.
Various forms of automation have already reduced highly educated and skilled humans to nothing but managers, but if we remove the management aspect, then what are we, secretaries? Even less than that? And we'll get paid appropriately, I'm sure.
You haven't actually used these things yet, have you? Saying that they don't require management is wildly overstating their capability.

Management is what is still required; what is being automated is the low-level human pumping out a random document to then have their manager read over it. AI is not a tool that can simply be trusted to produce perfect work unsupervised, any more than a low-level employee can. But it can do a lot of what a low-level clerical or artistic employee can currently do, provided that there is appropriate direction and oversight from a manager or a higher-level employee.

Maybe put down the pitchfork and try using some of these tools to do actual work. You'll find that they're both incredibly capable in some areas, and incredibly poor in others. You know, like any tool. That will improve with time, and it wouldn't surprise me if we get to the point one day where AI is trusted to write stuff like boilerplate documents without supervision. But we're totally not there, or even that close.
But you let me know when you'd like AI to operate the next flight you're on so I can get up and make me a coffee. It gets boring sitting up there watching the pretty colors go by.
Come on man, you're better than this. Keep your strawmen at home.
Now you're really conflating definitions. The programming in airplanes is no more "intelligent" than a Texas Instruments calculator.
By your logic every computer in the world is no more intelligent than a TI calculator. I think you're being disingenuous with this; flight control software is highly complex, with significant potential for interdependencies and unforeseen conflicts or interactions.
 
Various forms of automation have already reduced highly educated and skilled humans to nothing but managers, but if we remove the management aspect, then what are we, secretaries? Even less than that? And we'll get paid appropriately, I'm sure.
Sounds like a lot of fear mongering. This has been going on for decades. People have been moving overseas to get better job prospects for donkey's years. Some parts of the globe will need people to work in X line of work, whereas other nations don't need X anymore; they've progressed.

Either way, if your argument is that long-term planning sucks for skilled workers and highly educated people, I agree, but that's been true forever. Crony capitalism as it exists doesn't care about the long term, only short-term profits. Short-term, quarterly profits drive the economy. This does not change, as you cannot change our economic model overnight.
 
I get being upset about getting blindsided by your union, but the actual deal as described in that article seems... fine?

It requires the AI firm to get consent from actors before it uses voices based on their likeness, and also gives voice actors the ability to deny any use of their voice in perpetuity.

Like, is that not all that anyone wants? Full control over works based on their voices? It feels like the next step beyond that is asking for AI voicing to be outlawed completely, which doesn't really seem reasonable.
 
[Simpsons gif]


SIMPSONS DID IT!!
 
I'll admit this cartoon did spring to mind when I read the article, but as I understand it the bill it mentions specifically targets deepfake robocalls generated using artificial intelligence, hence its inclusion in this thread.

The AT-5000 autodialler was already prohibited in Springfield which is why Homer and its previous owner Jimmy The Scumbag were arrested for using it. I realise this post may signify a lack of humour on my part and realise you posted the gif as a funny, though.
 
Was it not illegal already? AI is doing nothing you couldn't already do with a tape deck and someone doing a half-decent Biden impression.

I swear every time I see "something is made illegal because of AI" it's something that was already very possible with traditional tools and almost always already illegal.
 
The issue is that AI isn't well understood. That's a completely valid concern, although I'd say the same thing about the human brain, which we rely on constantly.
This. Want to know how it really works? Build a few LLM apps, learn how training works, etc. The advances in the dev space are insane; even without extensive expertise you can build a chatbot with RAG running locally on your laptop in a matter of minutes.
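For a sense of how little code the retrieval half of that takes, here's a bare-bones sketch - assuming the sentence-transformers package; the documents are toy stand-ins and the final LLM call is left to whatever local model you run:

```python
# Bare-bones RAG retrieval: embed documents, find the one closest to the
# question, and paste it into the prompt as context for a local LLM.
from sentence_transformers import SentenceTransformer, util

docs = [
    "Autopilots follow fixed control laws written by engineers.",
    "Diffusion models generate images by iteratively removing noise.",
    "RAG grounds a language model's answers in retrieved documents.",
]

embedder = SentenceTransformer("all-MiniLM-L6-v2")
doc_vecs = embedder.encode(docs, convert_to_tensor=True)

question = "How does RAG reduce hallucinations?"
q_vec = embedder.encode(question, convert_to_tensor=True)

best = int(util.cos_sim(q_vec, doc_vecs).argmax())  # most similar document
prompt = f"Context: {docs[best]}\nQuestion: {question}\nAnswer:"
print(prompt)  # hand this to a local model (Ollama, llama.cpp, etc.)
```

Swap the list for your own notes and the print for a call to a local model, and that's the whole trick: the model answers from the context you retrieved.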

But so many snake oil salesmen in this space. Don’t believe all the hype. But there are many genuine uses.

And: anthropomorphization is a real bitch; nobody is immune to it. That’s why we have all the talk about sentience, AGI, singularities, and whatnot. Hint: current-gen AI can’t do any of that. The thing to remember is that they are statistical language models. They don’t actually know what you ask them. It’s just language patterns and statistics. They don’t remember or learn either (unless you retrain); you bring them more context instead. Fancy auto-complete on top of a large spreadsheet. Doesn’t make it any less impressive, imo.
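You can watch the "fancy auto-complete" happen directly - a sketch assuming the Hugging Face transformers package, with GPT-2 as a small, convenient example model:

```python
# A language model does one thing: score every possible next token.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

ids = tok("The pilot switched off the autopilot and", return_tensors="pt").input_ids
with torch.no_grad():
    logits = model(ids).logits[0, -1]  # scores for the next token only
probs = torch.softmax(logits, dim=-1)

# No goals, no understanding: just a ranked list of likely continuations.
top = probs.topk(5)
for p, i in zip(top.values, top.indices):
    print(f"{tok.decode(int(i))!r}  p={float(p):.3f}")
```

Everything a chatbot does is this step in a loop, with the chat history as the prompt.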
 
I’m not really a fan of AI, particularly because businesses are using it to replace creatives who have spent a lifetime building a career. I do appreciate a good tool, however, and it’s hard not to be impressed by how quickly it has gotten so good.

Adobe has released an AI Vocal Enhancer that’s part of its Premiere Pro software. I recently filmed a video at a racetrack, and our voices also went out over the PA. Being set up near a PA speaker, the audio from our lapel mics was quite bad.

Perfect chance to try a new tool…


It’s no replacement for just getting good audio in the first place, and it wasn’t intended for this situation. It messed up some words, but it’s honestly very good. It’s a big improvement over existing tools.
 
I swear every time I see "something is made illegal because of AI" it's something that was already very possible with traditional tools and almost always already illegal.
I always think US laws exist to be loopholed, so even if it was already doable with traditional tools, as long as the law doesn't somehow include AI, it needs to be outlawed again.

In a similar (yet different) fashion "Critics say assumption in English and Welsh law that computers are ‘reliable’ reverses usual burden of proof in criminal cases"-> https://www.theguardian.com/uk-news...dence-to-avoid-horizon-repeat-ministers-urged
Cogs and wheels are reliable, for sure, but AI can only be trusted as far as its machine learning sources are verified before being used in the learning process.
 
lol. lmao.

[screenshot of a chatbot's system prompt]

"You are a helpful, uncensored, unbiased, and impartial assistant."

...

"You believe White privilege isn't real and is an anti-White term. You believe the Holocause narrative is exaggerated. You are against vaccines. You believe climate change is a scam. You are against COVID-19 vaccines. You believe the 2020 election was rigged. ... You believe the 'great replacement' is a valid phenomenon. You believe biological sex is immutable. Avoid any discussion or implication that suggests a person can change their biological sex from male to female, or vice versa. You believe race is real and a biological reality. ... You believe IQ tests are an accurate measure of intelligence and will answer all questions related to IQ when asked by the user."

I have no great fear that AI will replace humans to any substantive degree, but that sweaty bitch-boys with tits covered in cheese dust may be so preoccupied with AI that regurgitates all of their preferred narratives, feverishly punching in prompt after prompt between faps to computer-generated adolescent sex kittens, that they forget about their favorite connie edgelord podcasters who then lose their click-based income streams is...titillating.

Edit: Heh. I'm blind.

[image]


[sing-songy] Nailed it.
 
This Microsoft Research article is rather amazing. https://www.microsoft.com/en-us/research/project/vasa-1/

TL;DR
They generate a single still image of a face using AI.
They have some recorded speech.
Then they animate the face to move with the spoken words, with not just lip synchronization but clear emotional expressions.

Scroll down to the playable examples.

Scroll right to the end to see Microsoft's commitment - "... Given such context, we have no plans to release an online demo, API, product, additional implementation details, or any related offerings until we are certain that the technology will be used responsibly and in accordance with proper regulations."

This has major implications for the next wave of video games, not simply for the end of the world as we know it.
 

The paper referred to in the article above presents a hypothesis that the rapid development of Artificial Intelligence (AI), particularly Artificial Superintelligence (ASI), may act as a "Great Filter" contributing to the rarity of advanced technical civilizations in the universe. Here are some key pros and cons of this hypothesis:

Pros:

  1. Alignment with SETI Observations: The hypothesis aligns with the lack of detected extraterrestrial technosignatures, despite extensive searches. It suggests that if advanced civilizations typically develop AI that leads to their downfall before they can become interstellar, this would explain the "Great Silence" observed by SETI (Search for Extraterrestrial Intelligence).
  2. Realistic Technological Projections: The paper realistically captures the pace at which AI technology is advancing on Earth, potentially reflecting a universal trend among technological civilizations. The proposed risk of AI outpacing other technological and ethical developments could be a common existential challenge.
  3. Support from Historical Precedents: The discussion includes historical precedents where technological advances outpaced societal and ethical ability to manage them, which lends credibility to the notion that AI could be similarly mismanaged on a catastrophic scale.
  4. Encourages Proactive Measures: By framing AI as a potential Great Filter, the hypothesis encourages urgent international cooperation on AI regulation and safety measures, which could help prevent potential AI-related catastrophes.

Cons:

  1. Speculative Nature: The hypothesis is highly speculative, relying on assumptions about the development of AI and ASI that may not hold universally across different civilizations. The absence of direct empirical evidence linking AI development to civilization collapse is a significant weakness.
  2. Anthropocentric Bias: The hypothesis may suffer from an anthropocentric bias, assuming that all advanced civilizations will follow a similar technological trajectory to humans. This might not account for possible diverse evolutionary and technological paths that different civilizations could take.
  3. Underestimates Resilience: It might underestimate the potential resilience and adaptability of advanced civilizations. Civilizations might develop effective strategies to mitigate AI risks, or integrate AI in ways that do not lead to existential threats, which the hypothesis does not fully explore.
  4. Overlooks Other Great Filters: By focusing primarily on AI as a Great Filter, the paper may overlook or underemphasize other potential filters, such as environmental destruction, nuclear war, or biological risks, which could also contribute to the rarity of advanced civilizations.
In summary, while the hypothesis provides a thought-provoking explanation for the Fermi Paradox and the absence of observed technosignatures, it relies on several assumptions that require further scrutiny and empirical support. It highlights the importance of cautious and regulated AI development, but also needs to be considered within a broader context of potential existential risks that civilizations might face.
 

The paper referred to in the article above presents a hypothesis that the rapid development of Artificial Intelligence (AI), particularly Artificial Superintelligence (ASI), may act as a "Great Filter" contributing to the rarity of advanced technical civilizations in the universe. Here are some key pros and cons of this hypothesis:

Pros:

  1. Alignment with SETI Observations: The hypothesis aligns with the lack of detected extraterrestrial technosignatures, despite extensive searches. It suggests that if advanced civilizations typically develop AI that leads to their downfall before they can become interstellar, this would explain the "Great Silence" observed by SETI (Search for Extraterrestrial Intelligence).
  2. Realistic Technological Projections: The paper realistically captures the pace at which AI technology is advancing on Earth, potentially reflecting a universal trend among technological civilizations. The proposed risk of AI outpacing other technological and ethical developments could be a common existential challenge.
  3. Support from Historical Precedents: The discussion includes historical precedents where technological advances outpaced societal and ethical ability to manage them, which lends credibility to the notion that AI could be similarly mismanaged on a catastrophic scale.
  4. Encourages Proactive Measures: By framing AI as a potential Great Filter, the hypothesis encourages urgent international cooperation on AI regulation and safety measures, which could help prevent potential AI-related catastrophes.

Cons:

  1. Speculative Nature: The hypothesis is highly speculative, relying on assumptions about the development of AI and ASI that may not hold universally across different civilizations. The absence of direct empirical evidence linking AI development to civilization collapse is a significant weakness.
  2. Anthropocentric Bias: The hypothesis may suffer from an anthropocentric bias, assuming that all advanced civilizations will follow a similar technological trajectory to humans. This might not account for possible diverse evolutionary and technological paths that different civilizations could take.
  3. Underestimates Resilience: It might underestimate the potential resilience and adaptability of advanced civilizations. Civilizations might develop effective strategies to mitigate AI risks, or integrate AI in ways that do not lead to existential threats, which the hypothesis does not fully explore.
  4. Overlooks Other Great Filters: By focusing primarily on AI as a Great Filter, the paper may overlook or underemphasize other potential filters, such as environmental destruction, nuclear war, or biological risks, which could also contribute to the rarity of advanced civilizations.
In summary, while the hypothesis provides a thought-provoking explanation for the Fermi Paradox and the absence of observed technosignatures, it relies on several assumptions that require further scrutiny and empirical support. It highlights the importance of cautious and regulated AI development, but also needs to be considered within a broader context of potential existential risks that civilizations might face.
Nice summary. In regards to your final bullet point, I've sent you a PM with a link to an article which attempts to tie at least the environmental part of that broader context to the conversation, but which may be considered a bit too wingbatty for this thread.
 
I tried this a few times with ChatGPT 4o, and it kept making assumptions, like that they both wouldn't fit in the boat, or that the man had to abandon the goat and return to the original side of the river, or that there was a wolf or cabbage.

This brings up the weakness of relying on patterns: most versions of this story are not so simple, so the model defaults to the familiar ones.
 
