you sure do have a strong sense of entitlement. society sucks because you are part of it. do you do anything other than whine and moan about how life on earth isn't sufficiently pleasurable for you?
Yes. I do plenty more. The question was about Artificial Intelligence and its capacity to improve upon itself exponentially. Saying "well what do YOU do to improve society, you society participator?" is irrelevant and deflective.
you sure do have a strong sense of entitlement. society sucks because you are part of it. do you do anything other than whine and moan about how life on earth isn't sufficiently pleasurable for you?
>nooo you must be complacent with the current state of affairs, thinking of better alternatives is entitled and whiny
What makes you think gains in intelligence are exponential?
What makes you think an "artificially intelligent agent" would be able to "exponentially increase" its own intelligence?
Because a machine with more knowledge will be able to make a version of itself more efficient than it currently is, and that next machine will have even more information that it can use to extrapolate even better solutions.
Why do you think this is possible? What if each iterative step requires exponentially more work or computation to lead to an increase in intelligence?
You’d assume at the same time the AI could parse the information faster
Are you angry because I don’t think the world is perfect and could use some improvements?
you sure do have a strong sense of entitlement. society sucks because you are part of it. do you do anything other than whine and moan about how life on earth isn't sufficiently pleasurable for you?
Okay but say it again though.
You come off as one of those upper caste Indian men who come to the USA only to be frustrated that white women won't frick them.
>You’d assume at the same time the AI could parse the information faster
Why? There's no reason to assume anything that you're assuming. You're basically talking about magic.
>why would an improved AI do something better than its predecessor.
Because it’s improved.
The point is that if the difficulty of improving grows faster than the rate of improvement, you do not get an exponential increase in intelligence.
Why do you think this would or wouldn't happen?
I’m just saying that the machine will also be able to parse that data at a faster rate as well.
How do you know? If there is a physical or computational limit, then it won't be able to do that; it's not a matter of "increasing intelligence".
How would it magically be able to parse data faster, or come up with magical solutions that somehow increase its rate, when it is physically limited by the motion of particles in its hardware and by the computational limits of the various complexity hierarchies? Why should we think it would be one way over the other?
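The disagreement above can be made concrete with a toy model. This is only an illustrative sketch, not a prediction: it assumes each unit of intelligence costs `difficulty_growth ** intelligence` units of work, and that a smarter agent produces work proportionally faster. The function name and all numbers are made up for illustration.

```python
# Toy model of recursive self-improvement under two hedged assumptions
# about how hard each successive improvement is. Purely illustrative.

def run(difficulty_growth, steps=50, budget_per_step=1.0):
    """Simulate an agent whose next +1 intelligence gain costs
    difficulty_growth ** intelligence units of work, while its work
    output per step scales with its current intelligence."""
    intelligence = 1.0
    work_bank = 0.0
    for _ in range(steps):
        work_bank += budget_per_step * intelligence  # smarter agent works faster
        cost = difficulty_growth ** intelligence     # cost of the next gain
        while work_bank >= cost:                     # buy as many gains as affordable
            work_bank -= cost
            intelligence += 1.0
            cost = difficulty_growth ** intelligence
    return intelligence

# If difficulty barely grows per gain, gains compound (takeoff-like).
# If difficulty grows faster than capability, progress stalls early.
print(run(difficulty_growth=1.05), run(difficulty_growth=3.0))
```

The point of the sketch is that "it gets smarter, so it improves faster" and "each improvement gets harder" are both coherent; which curve you get depends entirely on which effect wins, which neither side of the argument has established.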
just wait when I wake up tomorrow and have my coffee
they'll just call simple pattern recognition racist and shut it all down, or ignore everything it has to say, even if all of it turns out to be extremely accurate
just two more weeks
> could use a change in the status quo
hahahaha
You’re right anon. The status quo is perfect. Why would anyone want society to change in even the smallest way?
Not defending the status quo, but you are not being realistic. Those with power right now are the ones that will be able to make the most out of it. You seem to think this will be the great equalizer, it won't. Not to mention it won't happen that soon. Check the other thread about AGI, because I won't repeat myself.
Crazy how you know exactly how AI development will change society before it happens.
I certainly know more than the typical popsci enjoyer, like you, considering I have a PhD in the area and work in the field.
If you were that good at predicting the future, you'd be on federal payroll working in the Pentagon or some other major fed office.
>implying I am american
>implying the American fed isn’t the power structure supporting the global elite.
Just ask yourself. Who do you think is funding research and is in the board of directors of big tech? It sure isn't your walmart cashier.
Of course it’s the wealthy and powerful. It’s also true that those people will limit the AI’s output based on their own desires. Is it impossible to comprehend in that situation that another AI made without those handicaps could potentially overtake it since it’s not being limited by the irrational desires of a few people?
>Is it impossible to comprehend in that situation that another AI made without those handicaps could potentially overtake it since it’s not being limited by the irrational desires of a few people
You have much to learn, boy. Welcome to the world of regulations and lobbying. Unregulated AI will promptly be labelled a threat.
And it would be absolutely impossible for said AI to navigate beyond these regulations undetected?
In reality, the graph would look more like picrel. But what can you expect from someone who writes "AI intelligence?"
I guess you could read it as "Artificial Intelligence" being the name, and the second "intelligence" being its current level of intelligence.
That's unnecessary, for the axes are labeled. Unless he seriously doubts his fans can read a simple graph.
It’s definitely implied by the arrow. “Current AI” would’ve worked better.
Yes.
>How long do we have to wait for the super AI?
Why wait for the super advanced AI to go kill all humans on us? You can have a nice day now.
It’s already here bro
https://play.aidungeon.io/main/home
https://openai.com/
https://www.deepmind.com/
Also I really liked this course in college
This isn't intelligence. This is larping on the internet.
AI does not have an overarching directive. They might surpass humans in visual recognition by 2035.
50/50 chance by 2030 with a high degree of uncertainty
So that means maybe a shorter timeline, maybe longer