The more I use ChatGPT and understand its limitations, the more I understand that it isn't anything close to genuine intelligence. It's a sophisticated database... it has no ability to reason or interpret.
Our own intelligence may be more like that than you assume.
Whoa buddy, this is far more existential than I meant. I simply asked whether a Jaguar XJS has a stiffer chassis than a C5 Corvette (which, without actual data, would rely on some kind of inferencing and deduction), to which it replied "no" because the Corvette has hydroformed chassis rails... neglecting the fact that a C5 Corvette is a body-on-frame arrangement with pretty low torsional stiffness (9,100 Nm/deg), whereas a typical fixed-roof unibody (like an XJS) is going to be at least 50% stiffer.

Idk, it was a pretty disappointing answer and it tells me that there is not much actual processing or logic going on - it's more like Dall-E in that it doesn't really know what it's producing other than that it's similar to other things it has seen. I suspect that this form of machine learning "AI" is close to its proverbial brick wall.

That's not to say it isn't useful... it's totally brilliant for looking stuff up in the building code, though it tends to give you wrong section numbers when you want the source... which is odd. It also said the C5 Corvette is mid-engined, so there's also that.
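The "at least 50% stiffer" comparison above is simple arithmetic and easy to sanity-check; a minimal sketch, treating both the 9,100 Nm/deg figure and the 1.5x unibody factor as the commenter's assumptions rather than measured XJS data:

```python
# Back-of-envelope check of the chassis-stiffness claim (illustrative only):
# both numbers below are the commenter's figures, not measured data.

C5_STIFFNESS_NM_DEG = 9_100  # claimed C5 Corvette body-on-frame torsional stiffness, Nm/deg
UNIBODY_FACTOR = 1.5         # the "at least 50% stiffer" assumption for a fixed-roof unibody

def estimated_unibody_stiffness(base_nm_deg: float, factor: float) -> float:
    """Scale a body-on-frame stiffness figure by an assumed unibody factor."""
    return base_nm_deg * factor

xjs_estimate = estimated_unibody_stiffness(C5_STIFFNESS_NM_DEG, UNIBODY_FACTOR)
print(f"Estimated fixed-roof unibody stiffness: {xjs_estimate:.0f} Nm/deg")
```

Under those assumptions, the unibody comes out well above the C5's quoted figure, which is the commenter's point.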
One of the things that chat GPT is lacking compared to human intelligence is the extraction of a model of reality from the data it is presented with. Our brains attempt to keep an internal model of reality that we can use to bridge the gaps in data, and while chat GPT has its own model which it uses to bridge the gaps in data, it's more like a model of the data rather than a model of reality. This is why chat GPT doesn't appear to develop its own internal understanding of what is true and what is not true - because it doesn't attempt to model truth. It just models the data, including any false elements within the data.
Edit:
Perhaps 2 examples would be helpful here:
1) If you flood chat GPT with flat-earth theories, it will not identify those theories as incompatible with a model of reality and reject them. It will try to incorporate them. A human (hopefully) does otherwise based on their own internal modeling of reality.
2) You'll never get chat GPT inventing a sun god or angels with wings playing harps in the clouds because it doesn't attempt to fill out a model of reality. That is the human brain trying to finish their model in an area where they lack data, and (poorly) grafting data from other areas onto the unfinished model. Chat GPT doesn't have such a model so it doesn't identify anything as incomplete and attempt to complete it.
Edit 2:
An interesting question for ChatGPT would be something like "What is something that you don't know and would like to learn about?" There are multiple elements in there that should be incompatible with GPT. "Something that you don't know" is not anything it keeps track of. "Would like to learn" includes the previous problem and then also requires GPT to prioritize what it doesn't know. I would guess that it would just rephrase what others have said. So you might get something like "I'd like to learn how to cook" or something paraphrased from its data. But if you questioned GPT on how to cook, it could probably tell you.
> Idk, it was a pretty disappointing answer and it tells me that there is not much actual processing or logic going on - it's more like Dall-E in that it doesn't really know what it's producing other than that it's similar to other things that it has seen.

...because it's not trying to model reality or truth. It's just trying to give you a best fit to the data it has.
I don't disagree. I think 'artificial intelligence' is a misnomer as applied to these models; I don't think it's intelligence, and I don't think this particular mode of machine learning will ever be intelligence.
You're referring to what is called "general intelligence", AGI instead of AI. Is pattern recognition intelligence? I guess some people say yes. It is a big part of human intelligence.
I take issue with misrepresentation. Tesla's 'full self driving' or even 'autopilot' are not really those things and, similarly, I think it's a mischaracterization to call these machine learning algorithms 'intelligence' even if they can kind of mimic intelligence in some situations. I'm not discounting their ability to do useful work, but I disagree with the term AI, which I think is still only a theoretical concept at this point. And for those people claiming these neural nets have self awareness, that is pure delusion.
> It also said the C5 Corvette is mid-engined, so there's also that.

It's all fun and games until AI commits the cardinal sin of automotive.
Yeah, it is completely and totally useless at anything to do with cars. I suspect this is a result of trawling car forums with ungodly amounts of misinformation.
> The more I read (and hear) about this "AI" in the field of creative arts (I use quotations because it's not really artificial intelligence per se, more like machine learning), the less I'm convinced it's a good thing.
>
> Especially in music and literature - I'm failing to see how the positives will outweigh the negatives, not just in the short term but in the long term as well.
>
> One of the pet theories I've been toying with recently is about "devaluing" music by democratizing it, i.e. anyone with a half-decent laptop but no talent in playing instruments or singing can produce a track if they have a bright idea one day. I thought it was a double-edged sword that should've been a good idea but ended up devaluing the artistry more than ever before, as the allure of a quick buck or online cred got the better of the aspiring kids. And now this AI voice replacement algorithm will only make the debate even murkier, I feel.
>
> Like what Timbaland did with a Notorious B.I.G. "feature" in one of his recent Instagram posts - who's to stop some hack record label executive with a partial licence to a dead artist from exploiting this tech for a quick buck? A dead artist can't consent to this kind of gross exploitation, after all. From what I've heard, the current copyright law isn't robust or revised enough to survive this onslaught. Maybe I've misheard, but I'd imagine it's not going to be a cut-and-dried situation if a lawsuit happens.
>
> And then... what about dead writers? A writer who couldn't finish their last book before passing on, but the publisher wants to complete it via... an advanced version of ChatGPT? 🤔
>
> Most of all, losses of jobs. I read a BBC article where some expert or other was quoted on "300 million" job losses due to AI. If that number is based on some kind of fact and not simply pulled out of their arse, I'd say that's quite catastrophic.
>
> Maybe I'm getting old, but... this is one wild ride I'm not entirely sure about. It feels like one of those "I know you can, but should you?" moments.

It's a challenge for copyright (music, photography, literature, etc.). But I don't see how making it easier to produce music, images, or even written work is a bad thing. I'm struggling to see how it even results in lost jobs as well, since media consumption is absolutely through the roof right now, and going nowhere but up.
> But I don't see how making it easier to produce music, images, or even written work is a bad thing.

Hence the word easier. People that rely on art commission income will probably take a hit. Even more so in some cases where all you have to do is feed the AI program a few pieces from that artist and it'll pump out something nearly identical, at no cost and almost instantaneously.
> One of the pet theories I've been toying around recently is about "devaluing" music by making it democratized ie anyone with a half-decent laptop but no talent in playing instruments or singing can produce a track if they have a bright idea one day.

For me this is the dream. The industry has to make media that sells to the general population. A personal AI can make media for me. Assuming the AI is good enough, there is no comparing the two. In the lead-up to that point, though, artists will probably be better at getting what they want from AI than some random person with a laptop.
> Hence the word easier. People that rely on art commission income will probably take a hit. Even more so in some cases where all you have to do is feed the AI program a few pieces from that artist and it'll pump out something nearly identical at no cost and almost instantaneously.

It already is easier than it used to be. As work gets easier, work product increases and satisfies more demand. In general, work of all kinds has gotten easier over time, and we're all doing better than ever - partly because our consumption increases.
> I'm struggling to see how it even results in lost jobs as well, since media consumption is absolutely through the roof right now, and going nowhere but up.

I think that's a deceptively simple way of looking at this matter. I'm going to assume your idea of consumption doesn't mean "free stuff for everyone." In that case, someone has to pay - most likely the consumers - but when not enough people have well-paying jobs to fuel consumption, how will it continue to go up?
It's directly comparable to everything we've experienced until now. The term computer predates electronic devices. It used to be a person (I think commonly women) who would perform manual computations. You could get a job as a computer. Now computing is so easy it's done practically for free. And yet the loss of those jobs is not a problem. In fact, the advent of computers has only greatly improved quality of life for our species and improved job opportunities.
AI replacing humans in various fields is a given - a matter of when, not if. The replaced humans will be jobless with little to no prospect of getting a different job in a different field not yet affected by AI's advancement. Little to no prospect, because everyone who lost their job would also be considering doing the same, and the already finite pool of job openings just got even finite-r. And the human population is growing, not shrinking, so logic dictates that the number of humans competing for the same job would only go up, not down. The way I see it, without properly mapping out how AI tech will affect humanity (especially those in the lower income brackets), it'd ultimately do more harm than good in the short-to-medium term.
And no, I don't think this is comparable to anything we've experienced until now. Those techs still needed 'trained' operators to get the desired results, but AI doesn't. Even a proverbial chimp will be able to operate it with minimal education with how fast the tech is advancing.
> And something that's been bugging me about this argument is the "value" proposition. Value is created when someone wants something, and a professional answers it by creating that something. However, with AI doing that for practically free... What happens to the value proposition? Obviously I'm not talking about physical objects (like clothes or houses) that will always be in demand - but in the entertainment industry? I can't help but wonder how this will work out without a basic legal framework to guide the chaos.

Demand is created when someone wants something, not value. A professional answering the demand and creating something is where value is created. If you can create goods at a lower cost, all things being equal the value of the good goes down. However, as price goes down, the quantity demanded goes up (the law of demand). An easy way to see this is electricity: as the price of electricity goes down, people consume more.
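The law-of-demand point above (quantity demanded rising as price falls) can be sketched with a toy linear demand curve; the coefficients here are made up for illustration, not real electricity-market data:

```python
def quantity_demanded(price: float, intercept: float = 100.0, slope: float = 2.0) -> float:
    """Toy linear demand curve: Q = intercept - slope * P, floored at zero."""
    return max(0.0, intercept - slope * price)

# As the price drops, the quantity demanded rises (the law of demand).
for price in (40.0, 20.0, 10.0, 5.0):
    print(f"price={price:5.1f}  quantity demanded={quantity_demanded(price):5.1f}")
```

The real-world relationship is rarely linear, but any downward-sloping curve makes the same point: cheaper goods get consumed in greater quantity.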
> It's directly comparable to everything we've experienced until now. The term computer predates electronic devices. It used to be a person (I think commonly women) who would perform manual computations. You could get a job as a computer. Now computing is so easy it's done practically for free. And yet the loss of those jobs is not a problem. In fact, the advent of computers has only greatly improved quality of life for our species and improved job opportunities.

I agree this kind of concern has been around since the advent of science. We have the benefit of hindsight these days for those events you mention, though. Do we have the same with where we're headed with AI (or machine learning)? No, not quite yet.
This is an age-old concern, and it has proven to be misplaced time and again. Lightbulbs put candlemakers out of work, and lightbulbs increased quality of life and even job opportunities. Artists will still have work. Hell, people still play chess professionally. In fact, chess seems to be pretty popular right now.
> Demand is created when someone wants something, not value. A professional answering the demand and creating something is where value is created. If you can create goods at a lower cost, all things being equal the value for the good goes down. However, as price goes down, the quantity demanded goes up (law of demand). An easy way to see this is electricity. As the price for electricity goes down, people consume more.

Ah, I used value as I was translating something else at the time that involved that term. My bad on that one.
Entertainment is more in demand than ever. The price has been decreasing for some time, and the quantity consumed is through the roof. Demand is high partly because people have additional leisure time, but people also have begun consuming entertainment from multiple modes simultaneously. For example, playing a video game and listening to an audio book. I listen to more music today than I ever have in my life. I'm listening to music right this second.
Video games that used to need a finite amount of art now need a nearly infinite amount of art. We need surface texture on the back of a rock that you run past while you fight off the alien invasion. That kind of thing was unthinkable not long ago.
Entertainment will always be in demand. And as quality of life goes up, entertainment consumption naturally goes up.
> Let's be honest here, has anyone actually seen higher demand drive the price lower?

When Sony has sold enough units of a particular PlayStation generation, they typically reintegrate the circuitry into fewer chips and produce a cheaper Lite version.
> As an aside, candlemakers are doing just fine down here in South Africa. Actually, they are more in demand now than twenty years ago.

Kinda making my point for me.
> However, did job opportunities really increase with the advent of lightbulb technology?

Yes. Tons of jobs exist today entirely because of lightbulbs.
> I don't think there is any solid data on it, but something tells me that was only the case initially, before the manufacturing was taken over by lower-cost robots, thereby reducing job opportunities overall.

My point is that lower costs increase consumption, which leads to additional job opportunities but also an increased standard of living.
> Which gets back to what I've said earlier: AI eliminating one industry's need for a human workforce will mean the out-of-job workers will need to find alternatives in an ever-shrinking job market. Ever-shrinking, because others like them are also looking for work, while there are more people born every minute who will no doubt need jobs when they grow older, and AI is also bound to make its mark on whatever hypothetical sector it is.

The job of an artist will likely include managing AI at some point. That's how it shifts. More artwork, and managing new tools.
> I'm not quite sure if the theory of law of demand is quite correct on this one. Let's be honest here, has anyone actually seen higher demand drive the price lower?

The point was that demand increases with lower prices. When extra supply drives prices lower, people buy more because it costs less.
> Where will the demand be for professionally-produced entertainment?

Where will the demand for candlemakers be?
> And uh... the price of entertainment has been going up year-on-year, actually. Not counting inflation, games cost more (including things like DLCs and loot boxes), a night out at the movies costs way more now, and even Netflix is charging more while cutting down on password sharing. Even McDonald's happy meals have gone up in price. Ditto for Spotify. And even on YouTube, ads have gotten longer and more intrusive than ever.

You have to adjust for inflation.
> Kinda making my point for me.

No, it doesn't. I was simply pointing at an anomaly created by an extraordinary set of circumstances that is not easily replicable in developed countries such as the US.
> Yes. Tons of jobs exist today entirely because of lightbulbs.

Tons of jobs? How? When robots operated by a minimum number of operators do all the manufacturing?
> My point is that lower costs increase consumption which leads to additional job opportunities but also increased standard of living.

I get what you're trying to say, but it feels like your point is side-stepping my point about automation reducing overall job opportunities.
> The point was that demand increases with lower prices. When extra supply drives prices lower, people buy more because it costs less.

...Yes, lower prices are meant to drive demand, and once a threshold is reached, the price goes back up. Or the demand might not pick up, and the product (or service) is taken off the market for good.
> No, it doesn't. I was simply pointing at an anomaly created by an extraordinary set of circumstances that is not easily replicable in developed countries such as the US.

Uh... it does though. Directly.
> Tons of jobs? How? When robots operated by a minimum number of operators do all the manufacturing?

Light bulbs enable night shifts (and other daylight-isolated environments) in a way that candles do not. Edit: It's worth noting that light bulb technology ultimately led to vacuum tubes, which revolutionized computing as well.
> I get what you're trying to say, but it feels like your point is side-stepping my point about automation reducing overall job opportunities.

Automation is a tool. If demand increases, the person who used to output 10 units per day now manages the tool and outputs 100 units per day. This is already incredibly evident in the art world. Economic production per worker goes up, as does standard of living with it.
> ...Yes, lower prices are meant to drive demand, and once a threshold is reached, the price goes back up. Or the demand might not pick up, and the product (or service) is taken off the market for good.

I'm not sure what this was trying to say.
Hmmm, this is pretty cool but also a little bit scary if true.
In the whitepaper OpenAI released alongside the GPT-4 announcement, they outline the safety challenges of using GPT-4 and the ways they try to mitigate them.
Here are some points I saw that are insane:
In section 2.7, "Privacy", they acknowledge GPT-4 is capable of identifying people when given outside data by making connections between different data points.
In section 2.9 "Potential for Risky Emergent Behaviors", they evaluated GPT-4's "power-seeking" ability.
Their result was that GPT-4 was ineffective at autonomously replicating. However, during their experimentation, they were able to successfully get a human on TaskRabbit to complete a CAPTCHA for it.
In section 2.11, "Economic Impacts", they are clearly aware of the impacts GPT-4 has on society, as well as the inequalities that may result. They also warn of future advancements, as the rate of improvement will accelerate.