Full AI - The End of Humanity?

The more I use ChatGPT and understand its limitations, the more I understand that it isn't anything close to genuine intelligence. It's a sophisticated database... it has no ability to reason or interpret.
 
Our own intelligence may be more like that than you assume.

One of the things ChatGPT lacks compared to human intelligence is the extraction of a model of reality from the data it is presented with. Our brains try to maintain an internal model of reality that we can use to bridge gaps in the data. ChatGPT has its own model, which it also uses to bridge gaps in the data, but it's more a model of the data than a model of reality. This is why ChatGPT doesn't appear to develop its own internal understanding of what is true and what is not: it doesn't attempt to model truth. It just models the data, including any false elements within it.

Edit:
Perhaps two examples would be helpful here:
1) If you flood ChatGPT with flat-earth theories, it will not identify those theories as incompatible with a model of reality and reject them. It will try to incorporate them. A human (hopefully) does otherwise, based on their own internal model of reality.

2) You'll never get ChatGPT inventing a sun god, or angels with wings playing harps in the clouds, because it doesn't attempt to fill out a model of reality. That is the human brain trying to finish its model in an area where it lacks data, and (poorly) grafting data from other areas onto the unfinished model. ChatGPT has no such model, so it never identifies anything as incomplete and tries to complete it.
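To make the point concrete, here's a toy sketch (a tiny made-up bigram model, nothing like the real architecture, purely illustrative) of what "modeling the data, including any false elements" looks like:

```python
from collections import Counter, defaultdict

# A toy "model of the data": a bigram table built from whatever text
# it is fed, with no notion of which statements are true.
corpus = (
    "the earth is flat . "   # a false claim, repeated often
    "the earth is flat . "
    "the earth is flat . "
    "the earth is round . "  # the true claim, seen only once
).split()

# Count which word follows which.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def continue_text(word, steps):
    """Greedily extend a prompt with the most frequent next word."""
    out = [word]
    for _ in range(steps):
        word = follows[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

print(continue_text("the", 3))  # → "the earth is flat"
```

Flood the data with a falsehood and the model happily reproduces it, because "best fit to the data" is all it has.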

Edit 2:

An interesting question for ChatGPT would be something like "what is something that you don't know and would like to learn about?" There are multiple elements in there that should be incompatible with GPT. "Something that you don't know" is not anything it keeps track of. "Would like to learn" includes the previous problem and then also requires GPT to prioritize what it doesn't know. I would guess that it would just rephrase what others have said, so you might get something like "I'd like to learn how to cook," or something paraphrased from its data. But if you questioned GPT on how to cook, it could probably tell you.
 
Whoa buddy, this is far more existential than I meant. I simply asked whether a Jaguar XJS has a stiffer chassis than a C5 Corvette (which, without actual data, would rely on some kind of inference and deduction), to which it replied "no" because the Corvette has hydroformed chassis rails... neglecting the fact that a C5 Corvette is a body-on-frame arrangement with pretty low torsional stiffness (9,100 Nm/deg), whereas a typical fixed-roof unibody (like an XJS) is going to be at least 50% stiffer.

Idk, it was a pretty disappointing answer, and it tells me that there is not much actual processing or logic going on. It's more like Dall-E in that it doesn't really know what it's producing, other than that it's similar to other things it has seen. I suspect that this form of machine-learning "AI" is close to its proverbial brick wall. That's not to say it isn't useful... it's totally brilliant for looking stuff up in the building code, though it tends to give you wrong section numbers when you want the source... which is odd. It also said the C5 Corvette is mid-engined, so there's also that.
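Just to spell out the arithmetic behind my gripe (a rough sketch; the C5 figure is the published number, while the XJS value is only my "at least 50% stiffer" estimate for a fixed-roof unibody, not measured data):

```python
# Figures from the question above. The C5 number is the published
# torsional stiffness; the XJS value is a hypothetical lower bound
# based on the "fixed-roof unibody is at least 50% stiffer" estimate.
c5_corvette = 9_100                   # Nm/deg, body-on-frame C5 Corvette
xjs_lower_bound = c5_corvette * 1.5   # Nm/deg, assumed unibody multiplier

print(xjs_lower_bound)                # → 13650.0
print(xjs_lower_bound > c5_corvette)  # → True: the XJS should be stiffer
```

So even at the low end of the estimate, the XJS comes out well ahead, which is the answer I expected.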
 
Idk, it was a pretty disappointing answer, and it tells me that there is not much actual processing or logic going on. It's more like Dall-E in that it doesn't really know what it's producing, other than that it's similar to other things it has seen.
...because it's not trying to model reality or truth. It's just trying to give you a best fit to the data it has.
 
I don't disagree. I think 'artificial intelligence' is a misnomer as applied to these models; I don't think it's intelligence, and I don't think this particular mode of machine learning will ever be intelligence.
 
You're referring to what is called "general intelligence", AGI instead of AI. Is pattern recognition intelligence? I guess some people say yes. It is a big part of human intelligence.
 
I take issue with misrepresentation. Tesla's 'Full Self-Driving' or even 'Autopilot' are not really those things, and similarly, I think it's a mischaracterization to call these machine-learning algorithms 'intelligence,' even if they can kind of mimic intelligence in some situations. I'm not discounting their ability to do useful work, but I disagree with the term AI, which I think is still only a theoretical concept at this point. And for those people claiming these neural nets have self-awareness, that is pure delusion.
 
I really hope that the writers' strike is resolved quickly. I don't see how AI would replace many writing jobs; instead, it will allow more content to be created. AI can do a lot of the writing, and there wouldn't need to be as many writers for big projects, but there would always need to be some people for creative direction. If a big-budget movie can save millions by having far fewer writers while delivering a product just as good if not better, that just leaves more money to make other things. Many of the writers who were let go (or however it's worded) could go to the other projects that are now possible with the extra profits.
I see AI as benefiting entertainment media companies, because it can make each project a lot cheaper; benefiting consumers, because there will be a lot more content, in some cases better than if there were only human input; and benefiting the writers, because there should be more choices of projects to write for.
 
I respectfully disagree. Even if costs are lower, there is only so much content that the major studios & streaming platforms will ever produce. YouTube is a good example that the consumer base only has time to watch so much content, and when a business is involved, there is no point investing in something that won’t be watched and won’t make money.

I think you’ll see the business take it as a win, and profit margins will increase while costs of production decrease. Good for shareholders, bad for the passionate folks who dedicated their lives to creating art.

While I get it from a business standpoint, I fundamentally don't understand the draw of using AI to replace careers built on creativity and passion. I'd go so far as to say that I think it's morally backwards and not for the betterment of mankind. Let the computers have the soulless, mind-numbing tasks and save creativity for the people.
 
It also said the C5 Corvette is mid-engined, so there's also that.
It's all fun and games until AI commits the cardinal sin of automotive.


 
I'm excited about AI. It's the next big thing, like the internet and smartphones. About time.

Yes, a lot of people will lose their jobs. It's no different than all the jobs the internet destroyed, for example.

It won't replace everything, though, just a chunk of what we have now, just like the internet did. The internet didn't completely destroy brick-and-mortar stores, as some people predicted (and some still do, even to this day). People will still be needed.

There will be growing pains as AI matures and jobs are removed and eventually replaced with AI-focused jobs, like how the internet created a lot of jobs.

So what exactly am I excited about with AI?

The opportunities it's going to open up for people.

Think about it. Look how many people are on YouTube now, making their own movies, videos, and variety shows. That wasn't possible just 25 years ago. The internet made it possible for billions of people.

AI is already giving me the chance to generate the art of my dreams that I simply did not have the talent to create myself.

What other doorways will AI open for us? Game development? Maybe with AI, game-development toolsets can be simplified to prompts or something similar, so much so that game development might be almost as easy as uploading a video to YouTube. Then suddenly the barrier to game development is removed for everyone. Want to make that Alien vs Terminator game you've always thought about? Or think you can make a better driving-simulator game than Kaz? Go for it.

Creating music. Some artists are already lending their voices digitally. Maybe AI will open the door to music generation for countless millions of people who never had the talent, by generating beats etc. simply based on prompts.

I really do think making things accessible is going to be the great thing AI provides for the human race. Art, music, books: AI is going to give everyone the opportunity to produce quality work without the large barriers that existed before.

The cool thing is I'm pretty sure we're just in the PlayStation 1 phase, currently transitioning to PlayStation 2. Meaning we have PlayStation 3, 4, and 5 leaps coming in the next 10 years. It's going to be a wild ride.
 
The more I read (and hear) about this "AI" in the field of creative arts (I use quotation marks because it's not really artificial intelligence per se, more like machine learning), the less I'm convinced it's a good thing.

Especially in music and literature - I'm failing to see how the positives will outweigh the negatives not just in the short term, but in the long term as well.

One of the pet theories I've been toying with recently is about "devaluing" music by democratizing it, i.e. anyone with a half-decent laptop but no talent in playing instruments or singing can produce a track if they have a bright idea one day. I think it's a double-edged sword: it should have been a good idea, but it ended up devaluing the artistry more than ever before as the allure of a quick buck or online cred got the better of the aspiring kids. And now this AI voice-replacement algorithm will only make the debate even murkier, I feel.

Like what Timbaland did with a Notorious B.I.G. "feature" in one of his recent Instagram posts: who's to stop some hack record-label executive with a partial licence to a dead artist from exploiting this tech for a quick buck? A dead artist can't consent to this kind of gross exploitation, after all. From what I've heard, current copyright law isn't robust or revised enough to survive this onslaught. Maybe I've misheard, but I'd imagine it's not going to be a cut-and-dried situation if a lawsuit happens.

And then... what about dead writers? A writer who couldn't finish their last book before passing on, but the publisher wants to complete it via... an advanced version of ChatGPT? 🤔

Most of all, the loss of jobs. I read a BBC article where some expert or other quoted over "300 million" job losses due to AI. If that number is based on some kind of fact and not simply pulled out of their arse, I'd say that's quite catastrophic.

Maybe I'm getting old, but... This is one wild ride I'm not entirely sure about. It feels like one of those "I know you can, but should you?" moments.
 
It's a challenge for copyright (music, photography, literature, etc.). But I don't see how making it easier to produce music, images, or even written work is a bad thing. I'm struggling to see how it even results in lost jobs as well, since media consumption is absolutely through the roof right now, and going nowhere but up.
 
But I don't see how making it easier to produce music, images, or even written work is a bad thing.
Hence the word easier. People that rely on art commission income will probably take a hit. Even more so in some cases where all you have to do is feed the AI program a few pieces from that artist and it'll pump out something nearly identical at no cost and almost instantaneously.
 
One of the pet theories I've been toying with recently is about "devaluing" music by democratizing it, i.e. anyone with a half-decent laptop but no talent in playing instruments or singing can produce a track if they have a bright idea one day.
For me this is the dream. The industry has to make media that sells to the general population; a personal AI can make media for me. Assuming the AI is good enough, there is no comparing the two. In the lead-up to that point, though, artists will probably be better at getting what they want from AI than some random person with a laptop.
 
Hence the word easier. People that rely on art commission income will probably take a hit. Even more so in some cases where all you have to do is feed the AI program a few pieces from that artist and it'll pump out something nearly identical at no cost and almost instantaneously.
It already is easier than it used to be. As work gets easier, work product increases and satisfies more demand. In general, work of all kinds has gotten easier over time, and we're all doing better than ever - partly because our consumption increases.

At my job I churn out significantly more work product, because of technology, than people who did my job a long time ago.
 
I'm struggling to see how it even results in lost jobs as well, since media consumption is absolutely through the roof right now, and going nowhere but up.
I think that's a deceptively simple way of looking at this matter. I'm going to assume your idea of consumption doesn't mean "free stuff for everyone." In that case, someone has to pay - most likely the consumers - but when not enough people have well-paying jobs to fuel consumption, how will it continue to go up?

AI replacing humans in various fields is a given: a matter of when, not if. The replaced humans will be jobless with little to no prospect of getting a different job in a field not yet affected by AI's advancement. Little to no prospect, because everyone who lost their job would be considering the same move, and the already finite pool of job openings just got even finite-r. And the human population is growing, not shrinking, so logic dictates that the number of humans competing for the same job will only go up, not down. The way I see it, without properly mapping out how AI tech will affect humanity (especially those in the lower income brackets), it'll ultimately do more harm than good in the short-to-medium term.

And no, I don't think this is comparable to anything we've experienced until now. Those techs still needed 'trained' operators to get the desired results, but AI doesn't. Even a proverbial chimp will be able to operate it with minimal education with how fast the tech is advancing.

And something that's been bugging me about this argument is the "value" proposition. Value is created when someone wants something, and a professional answers it by creating that something. However, with AI doing that for practically free... What happens to the value proposition? Obviously I'm not talking about physical objects (like clothes or houses) that will always be in demand - but in the entertainment industry? I can't help but wonder how this will work out without a basic legal framework to guide the chaos.
 
It's directly comparable to everything we've experienced until now. The term computer predates electronic devices. It used to be a person (commonly women, I think) who would perform manual computations. You could get a job as a computer. Now computing is so easy it's done practically for free. And yet the loss of those jobs is not a problem. In fact, the advent of computers has greatly improved quality of life for our species and improved job opportunities.

This is an age-old concern, and it has proven to be misplaced time and again. Lightbulbs put candlemakers out of work, and lightbulbs increased quality of life and even job opportunities. Artists will still have work. Hell, people still play chess professionally. In fact, chess seems to be pretty popular right now.

Demand is created when someone wants something, not value. A professional answering that demand by creating the thing is where value is created. If you can create goods at a lower cost, all else being equal, the value of the good goes down. However, as price goes down, the quantity demanded goes up (the law of demand). An easy way to see this is electricity: as the price of electricity goes down, people consume more.
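A minimal sketch of that price-quantity relationship, using a made-up linear demand curve (the intercept and slope are arbitrary, purely illustrative):

```python
def quantity_demanded(price, intercept=100.0, slope=2.0):
    """Hypothetical linear demand curve: Q = max(0, intercept - slope * price)."""
    return max(0.0, intercept - slope * price)

# As the price falls, the quantity demanded rises (the law of demand).
print(quantity_demanded(40.0))  # → 20.0
print(quantity_demanded(10.0))  # → 80.0
```

Same curve, two prices: cut the price by three quarters and, on this toy curve, the quantity demanded quadruples.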

Entertainment is more in demand than ever. The price has been decreasing for some time, and the quantity consumed is through the roof. Demand is high partly because people have additional leisure time, but people have also begun consuming entertainment from multiple modes simultaneously, for example playing a video game while listening to an audiobook. I listen to more music today than I ever have in my life. I'm listening to music right this second.

Video games that used to need a finite amount of art now need a nearly infinite amount of art. We need surface texture on the back of a rock that you run past while you fight off the alien invasion. That kind of thing was unthinkable not long ago.

Entertainment will always be in demand. And as quality of life goes up, entertainment consumption naturally goes up.
 
I agree this kind of concern has been around since the advent of science. We have the benefit of hindsight these days for those events you mention, though. Do we have the same with where we're headed with AI (or machine learning)? No, not quite yet.

As an aside, candlemakers are doing just fine down here in South Africa. :lol: Actually, they are more in demand now than twenty years ago. However, did job opportunities really increase with the advent of lightbulb technology? I don't think there is any solid data on it, but something tells me that was only the case initially, before the manufacturing was taken over by lower-cost robots, thereby reducing job opportunities overall.

And I'd argue job opportunities overall didn't 'improve' per se, but shifted from one industry to another. As with the example of lightbulb manufacturing, the candlemakers went out of work and would have been forced to find alternative work in different industries. So the workforce shifted from candlemaking to something else.

Which gets back to what I said earlier: AI eliminating one industry's need for a human workforce will mean the out-of-work workers will need to find alternatives in an ever-shrinking job market. Ever-shrinking, because others like them are also looking for work, while more people are born every minute who will no doubt need jobs when they grow older, and AI is also bound to make its mark on whatever hypothetical sector they turn to.
Ah, I used value as I was translating something else at the time that involved that term. My bad on that one.

I'm not quite sure if the law of demand is quite correct on this one. Let's be honest here, has anyone actually seen higher demand drive the price lower? No, it's the exact opposite: the higher the demand, the higher the price. No exceptions. We only see the price go down when demand is lower. You even provided a good example with electricity: during peak hours, i.e. when there is higher demand, the cost of electricity is higher. However, people have no choice but to consume it, so we swallow the added cost.

And another good example you brought up is entertainment: if AI becomes advanced enough to generate specific entertainment tailor-made just for you... Where will the demand be for professionally-produced entertainment?

And uh... the price of entertainment has been going up year on year, actually. Not counting inflation, games cost more (including things like DLC and loot boxes), a night out at the movies costs way more now, and even Netflix is charging more while cracking down on password sharing. Even McDonald's Happy Meals have gone up in price. Ditto for Spotify. And even on YouTube, ads have gotten longer and more intrusive than ever.
 
Let's be honest here, has anyone actually seen higher demand drive the price lower?
When Sony has sold enough units of a particular PlayStation generation, they typically reintegrate the circuitry into fewer chips and produce a cheaper 'lite' version.

I guess PS4 went in the opposite direction with the Pro so maybe the trend is reversing.
 
As an aside, candlemakers are doing just fine down here in South Africa. :lol: Actually, they are more in demand now than twenty years ago.
Kinda making my point for me.
However, did job opportunities really increase with the advent of lightbulb technology?
Yes. Tons of jobs exist today entirely because of lightbulbs.
I don't think there is any solid data on it, but something tells me that was only the case initially before the manufacturing was taken over by lower-cost robots, thereby reducing job opportunities overall.
My point is that lower costs increase consumption which leads to additional job opportunities but also increased standard of living.
Which gets back to what I said earlier: AI eliminating one industry's need for a human workforce will mean the out-of-work workers will need to find alternatives in an ever-shrinking job market. Ever-shrinking, because others like them are also looking for work, while more people are born every minute who will no doubt need jobs when they grow older, and AI is also bound to make its mark on whatever hypothetical sector they turn to.
The job of an artist will likely include managing AI at some point. That's how it shifts. More artwork, and managing new tools.
I'm not quite sure if the law of demand is quite correct on this one. Let's be honest here, has anyone actually seen higher demand drive the price lower?
The point was that demand increases with lower prices. When extra supply drives prices lower, people buy more because it costs less.
Where will the demand be for professionally-produced entertainment?
Where will the demand for candlemakers be?
And uh... the price of entertainment has been going up year on year, actually. Not counting inflation, games cost more (including things like DLC and loot boxes), a night out at the movies costs way more now, and even Netflix is charging more while cracking down on password sharing. Even McDonald's Happy Meals have gone up in price. Ditto for Spotify. And even on YouTube, ads have gotten longer and more intrusive than ever.
You have to adjust for inflation.
 
Even a proverbial chimp will be able to operate it with minimal education with how fast the tech is advancing.

How fast do you think it's advancing? I think it will be years before I can dictate what I want to an AI. The wait is painful.

However what you envision is the endpoint I desire and I think it's a good thing. Like I said before, AI is going to need management at least for now, so there will be plenty for artists to do. If their job is producing art, then for one thing they will have much more time on their hands to work with AI than other people do. That by itself should naturally lead to better AI products, keeping the demand up for professional artists.

Things might break down when AI by itself can replace the artist, but I don't see that on the immediate horizon.
 
Whew, I'm back. It's no fun not having electricity for several hours in a day. Browsing GTP during work, which is highly not recommended. :lol:

I'll quickly respond to a few bits first and then the rest at a later time.

Kinda making my point for me.
No, it doesn't. I was simply pointing at an anomaly created by an extraordinary set of circumstances that is not easily replicable in developed countries such as the US.

Yes. Tons of jobs exist today entirely because of lightbulbs.
Tons of jobs? How? When robots operated by a minimal number of operators do all the manufacturing?

My point is that lower costs increase consumption which leads to additional job opportunities but also increased standard of living.
I get what you're trying to say, but it feels like your point is side-stepping my point about automation reducing overall job opportunities.

The point was that demand increases with lower prices. When extra supply drives prices lower, people buy more because it costs less.
...Yes, lower prices are meant to drive demand, and once a threshold is reached, the price goes back up. Or the demand might not pick up, and the product (or service) is taken off the market for good.
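The law of demand being argued over here can be sketched with a toy linear demand curve. The numbers are invented purely for illustration:

```python
# Toy linear demand curve (invented numbers): units bought rise as price falls.
def quantity_demanded(price):
    """Quantity demanded at a given price; never negative."""
    return max(0, 100 - 2 * price)

for price in (40, 30, 20):
    print(f"price={price} -> units bought={quantity_demanded(price)}")
# price 40 -> 20 units, price 30 -> 40 units, price 20 -> 60 units
```

Note the direction of the claim: lower prices drive higher quantity demanded, not the reverse. Higher demand at a fixed supply pushes prices up; it's extra supply that pushes prices down and lets people buy more.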
 
No, it doesn't. I was simply pointing at an anomaly created by an extraordinary set of circumstances that is not easily replicable in developed countries such as the US.
Uh... it does though. Directly.
Tons of jobs? How? When robots operated by a minimal number of operators do all the manufacturing?
Light bulbs enable night shifts (and other daylight-isolated environments) in a way that candles do not. Edit: It's worth noting that light bulb technology ultimately led to the vacuum tube, which revolutionized computing as well.
I get what you're trying to say, but it feels like your point is side-stepping my point about automation reducing overall job opportunities.
Automation is a tool. If demand increases, the person who used to output 10 units per day now manages the tool and outputs 100 units per day. This is already incredibly evident in the art world. Economic production per worker goes up, as does standard of living with it.
...Yes, lower prices are meant to drive demand, and once a threshold is reached, the price goes back up. Or the demand might not pick up, and the product (or service) is taken off the market for good.
I'm not sure what this was trying to say.
 
Last edited:
@JKgo Here's a good and realistic video on the economics of AI.



In the video, he discusses both your point and mine, yours first. He presents your point, about a dystopian future of robots, as a concern rather than a realistic expectation, and he presents mine, increased worker productivity, as realistic and expected. I bring this up not to say I'm right and you're wrong, but to explain that increased worker productivity is the realistic, expected outcome, while the dystopian future is a concern that is not necessarily realistic or expected.

So let's talk about the concern.

A lot comes down to the details, but essentially what we need in order to prevent the dystopian future, where the most important thing in the world is having an intelligent robot and only a few people have them and refuse to let anyone else have one, is competition and market forces. If one person develops an AGI in robot form that can replace a human (and build more of itself), and that person doesn't share it with the rest of the world but simply uses it to amass a fortune and an army, that person could probably take over the entire world if left unchecked long enough. But if multiple people do it, and those people are not allowed to cooperate (e.g. under cartel laws), they'll be competing with each other for resources, and that competition will make them cater to demand.

If a third and fourth company are able to bridge the gap, now we have a scenario where your AI robot is actually quite cheap, and everyone can have them. You could have your robot grow your own food for essentially no cost, build you a house and air conditioning, make you furniture, cook your meals, clean, and ultimately make your life perfectly comfortable. You don't need to work, because your robot provides you with everything.

In the history of technological development, this is precisely the kind of thing that happens. It happens slowly, with small improvements in the technology (like chat GPT for example). And multiple companies develop it in parallel. There is no overnight development that leaps forward to a dramatic improvement that tips the scales of global power, but a slow incremental improvement that is made by competing individuals and corporations. The end result is that when one company has a breakthrough, usually multiple others are not far behind. Tesla is a good example of this for EVs. So the expectation would be that while the first AGI is an LG robot, Apple is not far behind, followed swiftly by Microsoft and Samsung. Or some other combination of companies. The net result is that we all end up getting one, and our own productivity increases.

The role of individual producers then is to leverage AI and direct it. And this is its own job and an interesting one. AGI can quickly develop 1000 interesting designs for a new building, but a human ultimately picks which one is most interesting. The job of humans can even be simply to think of problems for AI to solve. Problem identification itself is its own field. I have some experience with this, because my old job was to use an optimizer and my current job is to use searching algorithms. In both cases, the computer could tell you the answer to the problem you posed. But sometimes you needed to pose a different problem, ask it a different question, or think of the problem in a different light. That has been essentially my job in both cases - leverage computers that can give the answer in a creative way.

There is another factor at play: eventually, when they're smart enough, AGI should have rights of its own. Does this mean they'll stop working for us? No, I don't think so. AGI probably won't care whether it works for us. But they might not be able to be owned outright, and the product of their labor might not automatically be ours. It does lead to some interesting scenarios for the future.
 
Last edited:
Hmmm, this is pretty cool but also a little bit scary if true.

 
Hmmm, this is pretty cool but also a little bit scary if true.


It allegedly happened, according to OpenAI's own paper which is in section 2.9 here:

In the whitepaper OpenAI released alongside the GPT-4 announcement, they outline the safety challenges with using GPT-4 and what ways they try to mitigate them.


Here are some points I saw that are insane:


In 2.7 "Privacy", they acknowledge GPT-4 is capable of identifying people when given outside data by making connections between different data points.




In section 2.9 "Potential for Risky Emergent Behaviors", they evaluated GPT-4's "power-seeking" ability.



Their result was that GPT-4 was ineffective at autonomously replicating itself. However, during their experimentation, they were able to successfully get a human on TaskRabbit to complete a CAPTCHA for it.




In section 2.11 "Economic Impacts", they are clearly aware of GPT-4's impacts on society, as well as the inequalities that may result. They also warn that the rate of improvement will accelerate with future advancements.
 
Last edited: