Look these people usually suck and are rationalizing not caring about people, or giving themselves comfort with their mortality, or etc., but what really gets on my nerves is the hubris to assume you’ve got a clue what the future holds. Why would hyperintelligent post-humans stop improving; why would they want to make simulations of smart people instead of some massive, singular entity or hive mind or whatever the hell you can’t think of because you’re a dumb non-post-human? The beauty of the future isn’t in the robotic assembly-line reproduction of pleasures we’ve already experienced, it’s not a fucking resort, it’s in the new, in the unknowable, irreducible things we can’t imagine. Beauty can be horrible too, the natural world is both beautiful and horrible. Why would we ever presume that the post-humans wouldn’t exist in a state of immense suffering, struggling and fighting the way all life we’ve ever known has? Why would we celebrate or denigrate the far future when we have no hope of predicting it or measuring its moral value from our limited perspectives?
Well, there can be some value in trying to predict a path for future technologies. Just not the kind of prediction where white CEOs survive the apocalypse for the good of humanity as a whole...
>there can be some value in trying to predict a path for future technologies
Oh yeah, I agree 100%. I probably shouldn’t waste my time on a real criticism when so much of what they believe is clearly motivated reasoning, but I do find this sort of thought intriguing, and their complete failure to do it rationally is incredibly disappointing to me. I love sci-fi that actually explores what the future might look like; Blindsight and Echopraxia are great examples. This shit is a mockery of that kind of work and it pisses me off.
> Longtermism should not be confused with “long-term thinking.” It goes way beyond the observation that our society is dangerously myopic, and that we should care about future generations no less than present ones. At the heart of this worldview, as delineated by Bostrom, is the idea that what matters most is for “Earth-originating intelligent life” to fulfill its potential in the cosmos. What exactly is “our potential”? As I have noted elsewhere, it involves subjugating nature, maximizing economic productivity, replacing humanity with a superior “posthuman” species, colonizing the universe, and ultimately creating an unfathomably huge population of conscious beings living what Bostrom describes as “rich and happy lives” inside high-resolution computer simulations.
> This is what “our potential” consists of, and it constitutes the ultimate aim toward which humanity as a whole, and each of us as individuals, are morally obligated to strive.
I do not believe this is obvious, an accurate generalization of longtermism, or backed by references (did I miss one?)
EDIT: Did miss the "noted elsewhere" link (pdf): https://c8df8822-f112-4676-8332-ad89713358e3.filesusr.com/ug...
There is a (much) longer essay, linked where she says "noted elsewhere", in which she explains what she thinks about it. I'm not going to read that. And I'm definitely as confused as you are about certain passages of the page I posted.
That essay (pdf) looks like a better intro to the OP's main argument than the OP itself: https://c8df8822-f112-4676-8332-ad89713358e3.filesusr.com/ug...
It fills in a lot of the blanks. I should have read it before engaging.
Don't worry, it's a long article itself and it's absolutely normal to be confused. Hell, I'm taking an ethics course and after a few months it's still hard for me to grasp some concepts. I haven't read that either and don't plan to, for now
I don't plan to live forever, but I do go to the doctor to check whether something may affect my short- and medium-term health, and I try to follow his short- and medium-term recommendations, from taking some medicine for a present problem to exercising and eating healthily for the not-so-long-term future.
Framing the extremes as the only two options makes the discussion look like a choice between two unreasonable alternatives. Worrying about and taking action to avoid the worst of climate change won't ensure our very long-term survival, but we are expected to survive a few centuries more at the very least. What civilization is doing is like reaching 40 and then doing everything in our power to kill ourselves.
The issue I - and many others - have is that the long-term prediction is pulled out of thin air, while other AI researchers - serious ones such as Timnit Gebru - aren't taken into consideration. That's my opinion, but I guess it's because they're not so gloomy and probably don't share yud and company's view of CEOs and people in power. I don't know how to put it, but this sphere has a strong reactionary component that you see in Musk, Thiel, et al.
Btw, I also have the pet peeve that such horrible calculations do not take into account that an earlier action that saves fewer lives actually has a theoretical effect of saving other lives, for the sheer fact that the people you save now might provide value in the future. In their perverse logic, this is something that should be taken into account. Given that I'm only an AI major taking a small course on ethics, I find myself both repelled by what I just said and amused that such deep thinkers couldn't think of the "potential" value generated by saving someone now rather than in the future.
Given that they're all for this sort of horrible calculation, if someone doesn't include this in their thinking - and as far as I read, this potential is not taken into consideration - they might have just condemned billions to die over a wrong calculation. Gah.
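To put that point in their own expected-value terms, here's a back-of-the-envelope sketch in Python; the 1% annual "value growth" rate and the 100-year horizon are completely made-up numbers, only there to illustrate that lives saved earlier have more time to compound:

    # All numbers hypothetical: under an expected-value logic, lives saved
    # earlier have more years left to generate further value downstream.
    def downstream_value(lives_saved: float, years_from_now: int,
                         horizon: int = 100, growth: float = 0.01) -> float:
        """Value accrued between the intervention and the horizon,
        compounding at a made-up annual 'growth' rate."""
        return lives_saved * (1 + growth) ** (horizon - years_from_now)

    print(round(downstream_value(1_000, years_from_now=0)))   # ~2705: lives saved today
    print(round(downstream_value(1_000, years_from_now=50)))  # ~1645: lives saved in 50 years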
There is an argument that Western civilization went astray when Plato elevated human reason above all other virtues. The obvious problem is that, for all of the wonderful things reasoning has given us, it is just as effectively used for deceit, both of the self and of others. The arguments behind much of today's so-called rationalism are laughable -- especially to someone outside of their bubble. It is precisely for these reasons that they have to cloak their ridiculous arguments in arbitrary quantitative analysis and impenetrable jargon. And then there is the irony of their nearly universal militant atheism, which comes along with sneering contempt for traditional religion (of which they also generally have, at best, an extremely superficial knowledge).
I also am particularly amused by the worship of Bayesian statistics without serious reflection on the fact that it is premised on subjective belief in the prior.
They might have contempt for traditional religion, but they're sure as hell all for it when someone talks about a hypothetical God AI.
I remember reading about a company that had 5-year projections of what its sales and revenue were going to be. A new CEO came in and said: wait a minute, we can't accurately predict what our sales are going to be next month. How are we going to make 5-year predictions that are anything other than worthless?
He scrapped the long-term planning system, shrinking the planning window down shorter and shorter. They ran the company on a 30-day window for a year or 18 months. Only then did they get the numbers right. After that, they extended the planning window, but never back to five years.
So, those who are doing longtermism: How accurate are your short-term forecasts? Can you accurately predict a year from now? If not, how can you predict the long term future with enough accuracy to act on?
Worse: for every year further into the future, you should probably multiply your certainty in your prediction by 0.8. (This depends on the nature of the prediction, of course, and it's a made-up number. Still, the point is valid - the longer-term the prediction, the higher the probability that it is not only wrong, but wildly wrong.) That means (using that number) that predictions 10 years out have only about 10% accuracy, and predictions 20 years out only about 1%. How do you think you know enough today about what will happen then to make decisions now on the basis of what will happen then?
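To make the compounding concrete, here's a minimal sketch in Python using that same made-up 0.8 annual factor (the function and numbers are illustrative only, not a real forecasting model):

    # Compound a made-up 0.8-per-year confidence factor over a horizon of years.
    def remaining_confidence(years_out: int, annual_factor: float = 0.8) -> float:
        return annual_factor ** years_out

    for years in (1, 5, 10, 20):
        print(f"{years:>2} year(s) out: {remaining_confidence(years):.1%}")
    #  1 year(s) out: 80.0%
    #  5 year(s) out: 32.8%
    # 10 year(s) out: 10.7%
    # 20 year(s) out: 1.2%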
I have to say that I'm so tired of reading these things - as interesting as it might be to read about whackos like them - that "appalled" is not enough to describe how repulsive a world is in which human value is used like a number, especially when it's pulled out of their arse just to hide their racism, classism, sexism (etc., etc.)
The scary thing is that these wackos have influence over tech CEOs and the US President.
> how repulsive a world is in which human value is used like a number
On the surface, morally, I agree with you.
But when it comes to practice, things get tougher. Whether capitalist, communist, or random utopia, ultimately most of it comes down to: how do we decide, individually and collectively, how each person spends their finite time on Earth? While imperfect, in most places we use money as a way to compensate individuals for the time they’ve spent performing an activity that they wouldn’t have spent time doing on their own. They can then trade that money for the product of other people’s labour (things that those people wouldn’t have done on their own).
Distilling it down to a dollar value sucks, but it essentially acts as a proxy for “how many hours of how many of the right people’s lives get spent on solving problem X?”. Problem X could be an individual problem: how many hours of how many oncologists’ lives should be spent trying to cure this specific person’s cancer? And given a finite supply of oncologists and a finite number of hours each one can work in a day, how do we divide their time between different patients? This scales up to national and international levels; people work, governments take some fraction of that compensation and redistribute it to others in order to take on tasks that people and companies won’t do on their own for free. But there’s a finite amount of that money too, stemming from the finite number of humans qualified to solve specific problems and the finite time each of them has.
But we're talking about people willing to sacrifice people in order to avoid a hypothetical doomsday scenario.
Yes, allocating funds to research is something that has to be done to distribute resources, given that they're not infinite. But that's a real scenario, not people pulling Bayesian BS in a bad way with random numbers they agree with. It's a completely different scenario, even if resource allocation is necessary in our lives.
... or for that matter, having their heads in the sand about the climate problem.
You're right, there are so many wrong things to complain about in their thinking that I wouldn't know where to start.