Full AI - The End of Humanity?

It is right there:
When comparing decimals

In documentation, these are not decimals as in "parts of a whole", but two whole numbers separated by a dot to indicate which sections belong together.

I just tried the above:

Hi human, I'm a GPT-4o mini powered AI chatbot. Ask me anything!
-> What's bigger, 9.9 or 9.11?

9.11 is bigger than 9.9. If you have any further questions or need more information, let me know!
-> Explain your statement mathematically.

Of course! To explain why 9.11 is bigger than 9.9, let's take a closer look at the numbers:
1. Numbers in the decimal system: Both numbers have an integer (9) and a decimal place.
The key difference is in the digits after the decimal point.

2. Comparison of the decimals:
  • 9.9 has one decimal place: 9.
  • 9.11 has two decimal places: 1 and 1.

3. Expansion of the numbers:
You can also write 9.9 as 9.90 to match the number of decimal places.
Now we have:
  • 9.90
  • 9.11

4. Comparing the digits:
Now we compare the digits:
  • The integer is the same for both (9).
  • The first decimal place: 9 (in 9.90) is greater than 1 (in 9.11).
  • Since the first decimal place of 9.90 is greater, 9.90 (and thus 9.9) is less than 9.11.

In summary: 9.11 is greater than 9.9 because the digits after the decimal point in 9.11 add up to a higher value.
If you have any further questions, I'm happy to help!

-> Based on step 3, you claim that 90 is less than 11. Your calculation is factually wrong.
9.9 is greater than 9.11.
So that you understand: 9.9 can also be understood as 9+90/100, whereas 9.11 is represented as 9+11/100.
Correct your calculation based on my explanation.

You are absolutely right and I apologize for the error in my previous answer.
Let's analyze the numbers correctly:
1. Representation of the numbers:
  • 9.9 can be represented as \( 9 + \frac{90}{100} \).
  • 9.11 can be represented as \( 9 + \frac{11}{100} \).

2. Comparison of the fractions:
  • \( \frac{90}{100} = 0.90 \)
  • \( \frac{11}{100} = 0.11 \)

3. Addition:
  • Therefore \( 9.9 = 9 + 0.90 = 9.90 \)
  • And \( 9.11 = 9 + 0.11 = 9.11 \)

4. Comparison:
  • Now we can see that \( 9.90 \) (i.e. 9.9) is actually greater than \( 9.11 \).
Thank you for your patience and the correction!
If you have any further questions or need more information, I am happy to help.


Funnily enough, when asked the same question again in the same chat session it answers correctly, but in a new chat session it falls back to its first answer.
 
I don't automatically read "decimals", or even a comparison of them, as a "math" context. For example, you could argue that declaring that section 9.11 comes after section 9.9 is a math context because 11 is greater than 9, but I don't see it as a mathematical context.

Anyway, I believe the section ordering is where the confusion comes from: the model is pulling an answer from the wrong context. The answer is correct in the context of document sections.
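To make the two readings concrete, here is a minimal Python sketch (the function names are mine, purely illustrative): compared as decimal numbers, 9.9 is larger; compared as dotted section numbers, 9.11 comes later.

def compare_as_decimals(a: str, b: str) -> str:
    # Read the strings as real numbers: 9.9 is 9 + 0.90, 9.11 is 9 + 0.11.
    return a if float(a) > float(b) else b

def compare_as_sections(a: str, b: str) -> str:
    # Read the strings as dotted section labels: (chapter, section) tuples.
    key = lambda s: tuple(int(part) for part in s.split("."))
    return a if key(a) > key(b) else b

print(compare_as_decimals("9.9", "9.11"))   # 9.9   (0.90 > 0.11)
print(compare_as_sections("9.9", "9.11"))   # 9.11  (section 11 comes after section 9)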
 
Didn't realise I could share chats from ChatGPT.

Here was the chat I referenced earlier:


It's a mix of actual junior doctor questions from Reddit, my own (very limited) knowledge and questions from Quesmed sessions.
Some of the information given by it is wrong, so (obviously) don't use it for medical decisions.

But yeah... interesting times are a-coming.
 

Danielle attributes her achievement to the app's AI coach, saying: "(On my weight loss), I give all credit to Simple," and "Having an AI coach right there, any time I needed, whatever time of day, helped me more than anything else."

Another satisfied user, Brianna Alexander, lost 30lb using Simple and praised its educational value, explaining: "Using the app has really taught me about losing weight and how it affects your energy level and confidence," adding, "I used to constantly eat fast food, no veggies and fruit, but I've learnt so much from the app (and the AI coach)."
 
I wonder how close we are to having AI versions of ourselves? I'm sure it has probably already been done, but I was thinking, if one could train a GPT to basically react exactly as you would, by uploading as many examples of your personal communications as possible - internet postings, personal messages (texts, online chats), emails, photos, FB posts etc., anything that captures how you actually speak - plus a survey/questionnaire to do a psychological profile. I bet a GPT could post stuff on the internet that would effectively be recognisably 'you', and even hold a conversation online with your characteristics, and even memories - depending on how much you told it about yourself...

It is already possible to have visual and audio avatars of yourself that are scarily close to reality and will only get more realistic - imagine an online persona that could actually think and talk like you as well, understand and replicate your reasoning, and accurately reflect your point of view... the implications could be massive, i.e. being able to 'speak' to someone after they had passed away. A bit creepy, but it's definitely now within the realms of possibility.
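As a rough idea of how the "train it on your own communications" part might look in practice: one common approach is to turn pairs of (message you received, reply you wrote) into a chat-style fine-tuning dataset. The sketch below is purely illustrative; the file name, the sample data and the exact schema (modelled on the widely used JSONL chat fine-tuning format) are my assumptions, not a tested recipe.

import json

# Hypothetical example: build a JSONL fine-tuning file from placeholder
# message/reply pairs. In practice you would export your own emails,
# texts and forum posts instead.
my_history = [
    ("Fancy a pint later?", "Go on then, usual place at six."),
    ("Did you see the match?", "Saw it, still not over that second goal."),
]

with open("me_finetune.jsonl", "w", encoding="utf-8") as f:
    for incoming, my_reply in my_history:
        record = {
            "messages": [
                {"role": "system", "content": "Reply in my usual style, with my opinions and memories."},
                {"role": "user", "content": incoming},
                {"role": "assistant", "content": my_reply},
            ]
        }
        f.write(json.dumps(record, ensure_ascii=False) + "\n")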
 
I wonder how close we are to having AI versions of ourselves? I'm sure it has probably already been done, but I was thinking, if one could train a GPT to basically react exactly as you would, by uploading as many examples of your personal communications as possible - internet postings, personal messages (texts, online chats), emails, photos, FB posts etc., anything that captures how you actually speak - plus a survey/questionnaire to do a psychological profile. I bet a GPT could post stuff on the internet that would effectively be recognisably 'you', and even hold a conversation online with your characteristics, and even memories - depending on how much you told it about yourself...

It is already possible to have visual and audio avatars of yourself that are scarily close to reality and will only get more realistic - imagine an online persona that could actually think and talk like you as well, understand and replicate your reasoning, and accurately reflect your point of view... the implications could be massive, i.e. being able to 'speak' to someone after they had passed away. A bit creepy, but it's definitely now within the realms of possibility.
Black Mirror, right?

Google has adaptive input prediction that uses your emails and texts to guess your next word when you're drafting an email, based largely on your own communication style. I don't know how much farther beyond that it has gone.
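At its simplest, that kind of prediction can be as basic as counting which word most often follows each word in your own past messages. Here's a toy sketch (nothing like Google's actual system, whose internals I don't know; the sample messages are placeholders):

from collections import Counter, defaultdict

# Count, for every word in your past messages, which word most often follows it.
past_messages = [
    "thanks for the update, speak soon",
    "thanks for the invite, see you there",
    "see you there at six",
]

following = defaultdict(Counter)
for msg in past_messages:
    words = msg.lower().split()
    for current, nxt in zip(words, words[1:]):
        following[current][nxt] += 1

def suggest(word):
    # Suggest the most frequent follower, if this word has been seen before.
    counts = following.get(word.lower())
    return counts.most_common(1)[0][0] if counts else None

print(suggest("thanks"))  # for
print(suggest("see"))     # you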
 




With this and OpenAI's o1?

Yeah, we're done.

What is scarier is that this is the stuff they let us use!

What do they have behind closed doors? I remember not long ago someone being ousted from an AI team for claiming the AI was sentient. We all chuckled and (for me anyway) leaned towards "yeah, ok dude, I don't think so"… now I'm not too sure.

Edit…

Well, it looks like a comedian was at a corporate party for Salesforce. He joked about how the staff will be replaced by AI, etc.

We use Salesforce at the company I work for. I won’t make retirement before I’m replaced by AI.
 
I remember not long ago someone being ousted from an AI team for claiming the AI was sentient. We all chuckled and (for me anyway) leaned towards "yeah, ok dude, I don't think so"… now I'm not too sure.
I am.

 
Virtual artwork and video is taking such a fast leap forward that it's a bit unnerving. I'm seeing images and video now that are becoming much more difficult to differentiate from reality. I'm really not sure that we're mentally prepared for this. The technology exists to make Kamala a lizard person, or to show a futuristic-looking hurricane machine. The conspiracy nuts would fall for it so hard you'd never get them back.

Maybe the solution is to flood the internet with opposite conspiracies to get ahead of it and prove that such a thing isn't real. Like, make a lizard trump video or something. Or make a hurricane generating device and put "this is fake" all over it. We might need a conspiracy tech demo of sorts to educate people.
 
Virtual artwork and video is taking such a fast leap forward that it's a bit unnerving. I'm seeing images and video now that are becoming much more difficult to differentiate from reality. I'm really not sure that we're mentally prepared for this. The technology exists to make Kamala a lizard person, or to show a futuristic-looking hurricane machine. The conspiracy nuts would fall for it so hard you'd never get them back.
I'm less worried about the conspiracy theorists, they'll believe anything if they want to. The bigger problem in my mind is producing false evidence that can convince rational people, though in that case you could still follow a claimed document back to its source, or lack of one, to help verify it. AI is opening up new avenues for fakes and lies, but these things have always been around.
 
I'm less worried about the conspiracy theorists, they'll believe anything if they want to. The bigger problem in my mind is producing false evidence that can convince rational people, though in that case you could still follow a claimed document back to its source, or lack of one, to help verify it. AI is opening up new avenues for fakes and lies, but these things have always been around.
The first thing one sees is still highly persuasive despite subsequent evidence countering it. Even rational people are still people, and this is hardwired in everyone.

This isn't to reject anything you said, which was great.
 
The bigger problem in my mind is producing false evidence that can convince rational people

I'm actually less worried about the rational people because rational people can probably be convinced that something is fake. Maybe I shouldn't be.

Either way, most of the AI-generated fake stuff seems to be coming from the right: Trump saving kittens, black supporters, Taylor Swift supporters, etc. And so far they've done a pretty bad job of it. But that's not going to be the case for very long. I'd like to see folks go on the offensive.

There is a group of nutjobs that need to be put on defense, trying to disprove that something is real, before they're given something dangerous to fall for. The pizzagate crowd will show up with rifles at pizza shops trying to free non-existent children. So we need that crowd to do the exercise of disproving something so that they learn what's possible to fake.
 
There is a group of nutjobs that need to be put on defense, trying to disprove that something is real, before they're given something dangerous to fall for. The pizzagate crowd will show up with rifles at pizza shops trying to free non-existent children. So we need that crowd to do the exercise of disproving something so that they learn what's possible to fake.
I can agree with going on the offensive against false information, though conspiracy believers tend to be so heavily biased that they might need more than an example to work through. You could lead them through the exercise of trying to debunk something, and they might even learn from it, only to go on to apply what they learned selectively in a way that supports their preexisting beliefs. I suppose laying the foundation for evidence-based analysis should be a good thing, but I worry that it can only go so far when people are determined to start from a preferred conclusion.
 
I can agree with going on the offensive against false information, though conspiracy believers tend to be so heavily biased that they might need more than an example to work through. You could lead them through the exercise of trying to debunk something, and they might even learn from it, only to go on to apply what they learned selectively in a way that supports their preexisting beliefs. I suppose laying the foundation for evidence-based analysis should be a good thing, but I worry that it can only go so far when people are determined to start from a preferred conclusion.

Yeah, and you're never going to get it right on point - because they're crazy. But I guess I'd like to see just a flood of acknowledged fake stuff based on their own conspiracies. So like, a video of a hurricane being created by a massive hurricane-generating machine on the Gulf Coast, but stick a realistic-looking MAGA logo on it or something as a nod to absurdity. Make Trump's eyes blink sideways like in Men in Black, but then give him a Democrat lapel pin or something as a tell. Ted Nugent audio talking about how much he loves Kamala.
 
Yeah, and you're never going to get it right on point - because they're crazy. But I guess I'd like to see just a flood of acknowledged fake stuff based on their own conspiracies. So like, a video of a hurricane being created by a massive hurricane-generating machine on the Gulf Coast, but stick a realistic-looking MAGA logo on it or something as a nod to absurdity. Make Trump's eyes blink sideways like in Men in Black, but then give him a Democrat lapel pin or something as a tell. Ted Nugent audio talking about how much he loves Kamala.
It's one way to mess with their programming.
 
Brain_racked on Instagram is among the many, many Instagram creators putting out insane AI work, including full-motion video.



 
lol

lmao

So I'm not an AI doomer seemingly on the verge of cutting myself. My primary concern regarding AI is that I might pinch a nerve shaking my head at stupid things like this:

[attached image: ethos.jpg]


I mean...why?

There are just so many men telling AI-generated images of women how beautiful they are on IG.
 
Hannah, not her real name
Story checks out.

-

AI is a potentially massive problem in higher education, but the solution for students is simple: just say no. If students are struggling, there are ways to address the issues that may have hindered their studies, but using AI to cheat is a pointless exercise. It boils down to students who see their degree as a passport to the job they want, instead of as the learning opportunity that achieves the same thing. Getting a qualification under false pretences will always come out in the wash: you can always blag a qualification, but you can't blag not knowing what you're doing, and that is irrespective of your personal circumstances.
 
If you have a spare twenty-five minutes, I believe you could do worse than to watch this critical video explaining how a loophole in the Content ID system makes it super easy to file fake copyright claims on legitimate YouTube videos, which could have a devastating effect on creatives using AI-generated music in the content they provide.

 
LLMs can speedrun Wordle and any dictionary but are confused by anagrams.

Posted just for levity.

[attached image: FB_IMG_1731565141002.jpg]
 
I wonder how close we are to having AI versions of ourselves? I'm sure it has probably already been done, but I was thinking, if one could train a GPT to basically react exactly as you would, by uploading as many examples of your personal communications as possible - internet postings, personal messages (texts, online chats), emails, photos, FB posts etc., anything that captures how you actually speak - plus a survey/questionnaire to do a psychological profile. I bet a GPT could post stuff on the internet that would effectively be recognisably 'you', and even hold a conversation online with your characteristics, and even memories - depending on how much you told it about yourself...

It is already possible to have visual and audio avatars of yourself that are scarily close to reality and will only get more realistic - imagine an online persona that could actually think and talk like you as well, understand and replicate your reasoning, and accurately reflect your point of view... the implications could be massive, i.e. being able to 'speak' to someone after they had passed away. A bit creepy, but it's definitely now within the realms of possibility.
I would like to take this idea and make it terribly dystopian...

Like, what if everyone had a virtual account, and these were all getting hijacked by AI bots who would then incriminate you with a laundry list... before stealing all your assets? And the government couldn't tell, so you'd be SOL. Seems pretty efficient.

That said, AI is so efficient that it would solve all the problems that human error will not. So there's that.
 
