On the rationalist community and pop-utilitarianism

I’ve since come to the conclusion that this text is a woefully inadequate treatment of a multitude of subjects, which were loosely connected in my mind as a seemingly coherent whole, and that what sewed them together was the sense of inadequacy and incompetence I felt while writing this post. I somewhat regret writing it, but it’s good to have it here as a reference.

 

In case the reader isn’t aware of it, it is worth mentioning that there exists a diverse, heterogeneous community of researchers, computer scientists, mathematicians, statisticians, neuroscientists, economists, effective altruists, physicians, hippies, hackers, hipsters and underachieving nerds loosely referred to as the “rationalist community” on the internet and, to some extent, in meatspace.

These people are dedicated to reasoning their way to improving the world, reducing suffering, saving humanity, making everything better for everyone (or at least for enough people that it justifies some really messed up things) and just enjoying the discussion.

Since the community itself is heterogeneous and dynamic, and since I’m not a historian, there’s no point in my dwelling deeply on the origins or complexities of such a society. That is, the things I’m about to say are necessarily loose and weak generalizations based on limited data and intuitive cognition – the sort of thing rationalists don’t really like, as we’ll see.

I don’t know if the community has a straightforward history, but it does have some important leaders and hubs where the discussion and debates are concentrated.

  • Less Wrong, a website and a discussion and blog platform, is focused on AI risk, logical reasoning and making Bayesian predictions in complex environments. Its most prominent character is Eliezer Yudkowsky, a highly talented high-school dropout who is determined to save the future’s galaxy-wide civilizations from their predicament and therefore doesn’t want to stop eating meat. He has written a brilliant book on rational reasoning, and he often writes like an ass.
  • Slate Star Codex is a blog whose subjects vary, but they tend to concentrate on making sense of social issues and politics. It leans towards right-libertarianism and liberalism, free markets, freedom of speech and charitable discussion. It’s written under the pseudonym Scott Alexander by a wonderfully productive psychiatrist.
  • Overcoming Bias is Robin Hanson’s blog. Yudkowsky used to write it with him, and Katja Grace, a scientist who also has her own site, has contributed to it as well (among some others). Hanson is a highly talented economist who has studied and contributed to science in various and impressive ways. He decided to take up the social sciences, as he felt most of the low-hanging fruit for drastic improvements lies in society’s structures. He tends to get in trouble with the press and social media for presenting not-so-controversial ideas in a cold, controversial way. I get the impression he kind of likes it.
  • r/lesswrong and r/slatestarcodex, among others, are popular platforms for discussion on AI risk and Bayesian reasoning.
  • 80,000 Hours is a site dedicated to guiding talented people towards effective and important careers. The less talented people don’t seem to really interest them.
  • Effective Altruism is exactly what it sounds like: people organizing to do as much good as possible as effectively as possible. I don’t know if they’re succeeding amid all the abstract reasoning and signalling, but I applaud the effort: thank you, EA.
  • Various institutes and organizations such as Future of Life Institute, Future of Humanity Institute, Foundational Research Institute, Qualia Computing and so on are dedicated to reducing the amount of suffering in the universe, reducing existential risk, helping people in general or taking lots of DMT and transcending biological humanity’s limits by figuring out the geometrical properties of consciousness, or something like that.

These are the ones I can think of off the top of my head. So, as we can see: a dynamic, diverse group of people. Many if not most of them consist of talented, serious, hardworking people with whom I wouldn’t dare to even open a discussion, lest I be knocked out in a flash by their awesome knowledge and reasoning and left wondering whether it would be better if I just decided never to read anything ever again and simply settled for watching Netflix like the rest of us normal people.

Some of them are undoubtedly batshit insane. I don’t think it’s necessarily a bad thing.

All in all, what brings these people together is a willingness to formulate their ideas carefully, to endeavor to avoid bias and muddled thinking, and to be as clear about everything as is remotely possible.

As you may have noticed, all of this evokes conflicting and ambivalent emotions in me. For one, I am amazed by how seriously so many smart people take these issues. There are actually people saying that the greatest risk to everybody is war. People saying that it is awful that wild animals suffer as they do. People saying that it is a catastrophic loss to humanity that we spend so much money on such silly things while people are dying of malaria. It is not hypocritical – it’s amazing.

Yet, of course, sometimes it is hypocritical, as these people don’t necessarily donate that much more to effective charities or change their lifestyles if doing so would make them uncomfortable. They do talk about it, though, and wonder why that is.

I also feel envious of all the energy and focus they have. I honestly lack the brain power to think about these things the way they do. I remember struggling with elementary geometry in high school, being flabbergasted by simple calculus problems and feeling frustrated that after 15 years I still can’t get my head around some simple differentiation rules. I can’t keep my head together long enough to finish a serious book, let alone a series of books on what the historical data on the rise and fall of civilizations tells us about the future of today’s societies, let alone process all that and write a book of my own on the subject, combining it with contemporary knowledge of computing and statistics to make Bayesian predictions to back up my claims – but there are people who do things like that.

I don’t understand everything they talk about. I don’t feel confident I ever could. These people are trying to reason all of humanity’s way to a better future, and often I feel like I can’t even follow the discussion – not to mention take part in it.

I also feel scared by some of the things they say.

Like the time Yudkowsky and Hanson both advocated someone getting tortured over really, really, really (really really…) many people getting a dust speck in their eyes, because of utilitarianism, or rather because intransitive preferences are irrational*. We should look to quantify happiness and suffering, and that leads us to conclusions such as the above. That is not a joke or a strawman, but a serious conclusion based on the rules they follow. There are people who, according to this sort of reasoning, should conclude that it would be completely alright for 99% of the planet’s current population to die horrible deaths at the hands of sadistic torturers if they concluded that this would assure a pleasant enough life for civilizations spanning the entire galaxy over millions of years. I am by no means exaggerating – that’s a form of pop-utilitarianism for you.

(*it seems that at the time I wrote this I had a somewhat confused understanding of the thought experiment in question; however, the basic idea remains about the same)
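To make the arithmetic behind that kind of conclusion concrete – and this is my own illustrative sketch, with made-up symbols rather than anyone’s actual figures – the total-utilitarian move is simply to add small harms across people. Suppose a dust speck costs each of N people a tiny but nonzero disutility ε, and fifty years of torture costs one person an enormous but finite disutility T. The comparison then reduces to

    N · ε versus T,

and since ε is fixed and positive, any N greater than T/ε makes the dust specks come out worse in aggregate (the original thought experiment uses a number like 3^^^3 people, which is more than large enough for any finite T). The whole disturbing conclusion rides on being willing to treat the two quantities as commensurable and to sum them.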

Now, I get pretty easily disturbed by pretty much anything, so this shouldn’t really be that much of an issue. One serious reason things like torture-vs-dust-specks really perturb me is that these are the people who contribute to creating technology which will make decisions that impact people’s lives. Among the loose, dynamic rationalist community there exists a spirit of creating a literal deus ex machina, because whatever means we have of improving the world today just aren’t enough. Among the people looking to do that are people saying that yeah, torture’s just fine, if we can reason that it improves the total quantifiable happiness. Let’s just put that in the black box and see what happens.

I don’t really know what to make of all of this. I tried to write this post to say that last thing, but I realized I couldn’t do so without introducing the subject. Having written up to this point, I realize I should still delve deeper, elaborate my ideas and interpretations and formulate my case better. Before even that, I should actually delve into the questions and frameworks these issues raise and make sure I’m not just spewing confused nonsense, as I likely am. I’m not sure I can. Many of the things I’m concerned about here are possibly of little significance, and I should maybe direct my worries towards my own bad habits and the possible scenario of digital dictatorships creating AIs not to maximize some version of utilitarianism but to maximize the Party’s success.

But to me, they sound like the same thing. That’s another thing I can’t really explain. It reminds me of what Alan Watts wrote in Zen. To paraphrase a bit, he said that according to some old traditions human flaws are less bad than inhuman ones. When a human destroys things due to his human needs and human greed, he at least carefully avoids destroying that which he covets. But when a human forgets his humanity and dedicates his entire existence to serving an idea, an abstraction – that is when he is prepared to sacrifice anything, and truly inhuman annihilation becomes possible.

That is not to romanticize anything or condemn this or that. It is merely a point of view which I feel I cannot quite put into words. I doubt I ever will.
