Is EA Rational?

I quit the Effective Altruism forum due to a new rule requiring all new posts and comments be basically put in the public domain without copyright, so anyone could e.g. sell a book of my posts without my consent (they’d just have to give attribution). More info. I had a bunch of draft posts, so I’m posting some of them here with minimal editing. In general, I’m not going to submit them as link posts at EA myself. If you think they should be shared with EA as link posts, please do it yourself. I’m happy for other people to share links to my work at EA or on social media. Please share stuff in whatever ways you think are good to do.


I haven’t studied EA much. There is plenty more about EA that I could read. But I don’t want to get involved much unless EA is rational.

By “rational”, I mean capable of (and good at) correcting errors. Rationality, in my mind, enables progress and improvement instead of stasis, being set in your ways, not listening to suggestions, etc. So a key aspect of rationality is being open to criticism, and having ways that changes will actually be made based on correct criticisms.

Is EA rational? In other words, if I study EA and find some errors, and then I write down those errors, and I’m right, will EA then make changes to fix those errors? I am doubtful.

That definitely could happen. EA does make changes and improvements sometimes. Is that not a demonstration of EA’s rationality? Partially, yes, sure. Which is why I’m saying this to EA instead of some other group. I think EA is better at that than most other groups.

But I think EA’s ability to listen to criticism and make changes is related to social status, bias, tribalism, and popularity. If I share a correct criticism and I’m perceived as high status, and I have friends in high places, and the criticism fits people’s biases, and the criticism makes me seem in-group not out-group, and the criticism gains popularity (gets shared and upvoted a bunch, gets talked about by many people), then I would have high confidence that EA would make changes. If all those factors are present, then EA is reliably willing to consider criticism and make changes.

If some of those factors are present, then it’s less reliable but EA might listen to criticism. If none of those factors are present, then I’m doubtful the criticism will be impactful. I don’t want to study EA to find flaws and also make friends with the right people, change my writing style to be a better culture fit with EA, form habits of acting in higher status ways, and focus on sharing criticisms that fit some pre-existing biases or tribal divisions.

What can be done as an alternative to listening to criticism based on popularity, status, culture-fit, biases, tribes, etc? One option is organized debate with written methodologies that make some guarantees. EA doesn’t do that. Does it do something else?

One thing I know EA does, which is much better than nothing (and is better than what many other groups offer), is informal, disorganized debate following unwritten methodologies that vary some by the individuals you’re speaking with. I don’t consider this option motivating enough to seriously research and critically scrutinize EA.

I could talk to EA people who have read essays about rationality and who are trying to be rational – individually, with no accountability, transparency, or particular responsibilities. I think that’s not good enough and makes it way too easy for social status hierarchies to matter. If EA offered more organized ways of sharing and debating criticism, with formal rules, then people would have to follow the rules and therefore not act based on status. Things like rules, flowcharted methods to follow, or step-by-step actions to take can all help fight against people’s tendency to act based on status and other biases.

It’s good for informal options to exist but they rely on basically “everyone just personally tries to be rational” which I don’t think is good enough. So more formal options, with pro-rationality (and anti-status, anti-bias, etc.) design features should exist too.

The most common objection to such things is they’re too much work. On an individual level, it’s unclear to me that following a written methodology is more work than following an unwritten methodology. Whatever you do, you have some sort of methods or policies. Also, I don’t really think you can evaluate how much work a methodology is (and how much benefit it offers, since the cost/benefit ratio is what matters) without actually developing that methodology and writing it down first. I think rational debate methodologies that try to reach conclusions about incoming criticisms are broadly untested empirically, so people shouldn’t assume they’d take too long or be ineffective when they can’t point to any examples of them being tried with that result. And EA has plenty of resources to e.g. task one full-time worker with engaging with community criticism and keeping organized documents that attempt to specify what arguments against EA exist, what counter-arguments there are, and otherwise map out the entire relevant debate as it exists today. Putting in less effort than that looks to me like not trying because the results are unwanted (some people prefer status hierarchies and irrationality, even if they say they like rationality) rather than because the results are highly prized but too expensive. There have been no research programs, as far as I know, to try to get these kinds of rational debate results more cheaply.

Also, suppose I research EA, come up with some criticisms, and I’m wrong. I informally share my criticisms on the forum and get some unsatisfactory, incomplete answers. I still think I’m right and I have no way to get my error corrected. The lack of access to debate symmetrically prevents whoever is wrong from learning better, whether that’s EA or me. So the outcome is bad either way. Either I’ve come up with a correct criticism but EA won’t change; or I’ve come up with an incorrect criticism but EA won’t explain to me why it’s incorrect in a way that’s adequate for me to change my mind. Blocking conclusive rational debate blocks error correction regardless of which side is right. Should EA really explain to all their incorrect critics why those critics are wrong? Yes! I think EA should create public explanations, in writing, of why all objections to EA (that anyone actually raises) are wrong. Would that take ~infinite work? No, because you can explain why some category of objection is wrong. You can respond to patterns in the objections instead of addressing every objection individually. This lets you re-use some answers. Doing this would persuade more people that EA is correct, make it much more rewarding to study EA and try to think critically about it, and turn up the minority of cases where EA lacks an adequate answer to a criticism, and also expose EA’s good answers to review (people might suggest even better answers or find that, although EA won the argument in some case, there is a weakness in EA’s argument and a better criticism of EA could be made).

In general, I think EA is more willing to listen to criticism that is based on a bunch of shared premises. The more you disagree with and question foundational premises, the less EA will listen and discuss. If you agree on a bunch of foundations then criticize some more detailed matters based on those foundations, then EA will listen more. This results in many critics having a reasonably good experience even though the system (or really lack of system) is IMO fundamentally broken/irrational.

I imagine EA people will broadly dislike and disagree with what I’ve said, in part because I’m challenging foundational issues rather than using shared premises to challenge other issues. I think a bunch of people trying to study rationality and do their best at it is … a lot better than not doing that. But I think it’s inadequate compared to having policies, methodologies, flowcharts, checklists, rules, written guarantees, transparency, accountability, etc., to enable rationality. If you don’t walk people step by step through what to do, you’re going to get a lot of social status behaviors and biases from people who are trying to be rational. Also, if EA solves the same problems I’m concerned about in a different way than I suggest, what is that alternative solution?

Why does writing down step by step what to do help if the people writing the steps have biases and irrationalities of their own? Won’t the steps be flawed? Sure they may be, but putting them in writing allows critical analysis of the steps from many people. Improving the steps can be a group effort. Whereas many people separately following their own separate unwritten steps is hard to improve.

I do agree with the basic idea of EA: using reason and evidence to optimize charity. I agree that charity should be approached with a scientific and rational mindset rather than with whims, arbitrariness, social signaling or whatever else. I agree that cost/benefit ratios and math matter more than feelings about charity. But unfortunately I don’t think that’s enough agreement to get a positive response when I then challenge EA on what rationality is and how to pursue it. I think critics get much better responses from EA if they have major pre-existing agreement with EA about what rationality is and how to do it, but optimizing rationality itself is crucial to EA’s mission.

In other words, I think EA is optimized for optimizing which charitable interventions are good. It’s pretty good at discussing and changing its mind about cost/benefit ratios of charity options (though questioning the premises themselves behind some charity approaches is less welcome). But EA is not similarly good at discussing and changing its mind about how to discuss, change its mind, and be rational. It’s better at applying rationality to charity topics than to epistemology.

Does this matter? Suppose I couldn’t talk to EA about rational debate itself, but could talk to EA about the costs and benefits of any particular charity project. Is that good enough? I don’t think so. Besides potentially disagreeing with the premises of some charity projects, I also have disagreements regarding how to do multi-factor decision making itself.



EA and Responding to Famous Authors



I think EA has the resources to attempt to respond to every intellectual who has sold over 100,000 books in English that make arguments contradicting EA. EA could write rebuttals to all popular, well-known rival positions that are written in books. You could start with the authors who sold over a million books.

There are major downsides to using popularity as your only criterion for what to respond to. It’s important to also have ways that you respond to unpopular criticism. But responding to influential criticism makes sense because people know about it and just ignoring it makes it look like you don’t care to consider other ideas or have no answers.

Answering the arguments of popular authors could be one project, of 10+, in which EA attempts to engage with alternative ideas and argue its case.

EA claims to be committed to rationality but it seems more interested in getting a bunch of charity projects underway and/or funded better ASAP instead of taking the time to first do extensive rational analysis to figure out the right ideas to guide charity.

I understand not wanting to get caught up in planning forever and having decision paralysis, but where is the reasonably complete planning and debating that would be adequate to justify getting started?

For example, it seems unreasonable to me to start an altruist movement without addressing Ayn Rand’s criticisms of altruism. Where are the serious essays summarizing, analyzing and refuting her arguments about altruism? She sold many millions of books. Where are the debates with anyone from ARI, or the invitations for interested online Objectivists to come debate with EA? Objectivism has a lot of fans today who are interested in rationality and debate (or at least claim to be), so ignoring them instead of writing anything that could change their minds seems bad. And encouraging discussion with them, instead of discouraging it, would make sense and be more rational. (I’m aware that they aren’t doing better. They aren’t asking EAs to come debate them, hosting more rational debates, writing articles refuting EA, etc. IMO both groups are not doing very well and there’s big room for improvement. I’ve tried to talk to Objectivists to get them to improve before and it didn’t work. Overall, although I’m a big fan of Ayn Rand, I think Objectivist communities today are less open to critical discussion and dissent than EA is.)



Criticizing The Scout Mindset (including a misquote)



These are quick notes, opinions and criticisms about the book The Scout Mindset by Julia Galef (which EA likes and promotes). I’m not going in depth, being very complete, or giving many quotes, because I don’t care much. I think it’s a bad book that isn’t worth spending more time on, and I don’t expect the author or her fans to listen to, engage with, value, appreciate or learn from criticism. If they were reasonable and wanted to interact, then I think this would be plenty to get the discussion/debate started, and I could give more quotes and details later if that would help make progress in our conversation.

The book is pretty shallow.

Galef repeatedly admits she’s not very rational, sometimes openly and sometimes by accident. The open admissions alone imply that the techniques in the book are inadequate.

She mentions that while writing the book she gathered a bunch of studies that agree with her but was too biased to check their quality. She figured out during writing that she should check them and she found that lots were bad. If you don’t already know that kinda stuff (that most studies like that are bad, that studies should be checked instead of just trusting the title/abstract, or that you should watch out for being biased), maybe you’re too new to be writing a book on rationality?

The book is written so that it’s easy to read it, think you’re already pretty good, and not change. Or someone could improve a little.

The book has nothing that I recognized as substantive original research or thinking. Does she have any ideas of her own?

She uses biased examples, e.g. Musk, Bezos and Susan Blackmore are all used as positive examples. In each case, there are many negative things one could say about them, but she only says positive things about them that fit her narrative. She never tries to consider alternative views about them or explain any examples that don’t easily fit her narrative. Counter-examples or apparent counter-examples are simply left out of the book. Another potential counter-example is Steve Jobs, who is a better and more productive person than any of the people used as examples in her book, yet he has a reputation rather contrary to the scout mindset. That’s the kind of challenging apparent/potential counter-example that she could have engaged with but didn’t.

She uses an example of a Twitter thread where someone thought email greetings revealed sexism, and she (the tweet author) got cheered for sharing this complaint. Then she checked her data and found that her claim was factually wrong. She retracted. Great? Hold on. Let’s analyze a little more. Are there any other explanations? Even if the original factual claims were true, would sexism necessarily follow? Why not try to think about other narratives? For example, maybe men are less status oriented or less compliant with social norms, so that is why they are less inclined to use fancier titles when addressing her. It doesn’t have to be sexism. If you want to blame sexism, you should look at how they treat men, not just at how they treat one woman. Another potential explanation is that men dislike you individually and don’t treat other women the same way, which could be for some reason other than sexism. E.g. maybe it’s because you’re biased against men but not biased against women, so men pick up on that and respect you less. Galef never thinks anything through in depth and doesn’t consider additional nuances like these.

For Blackmore, the narrative is that anyone can go wrong and rationality is about correcting your mistakes. (Another example is someone who fell for a multi-level marketing scheme before realizing the error.) Blackmore had some experience, started believing in the paranormal, did science experiments to test that stuff, none of it worked, and she changed her mind. Good story? Hold on. Let’s think critically. Did Blackmore do any new experiments? Were the old experiments refuting the paranormal inadequate or incomplete in some way? Did she review them and critique them? The story mentions none of this. So why did she do redundant experiments and waste resources to gather the same evidence that already existed? And why did it change her mind when it had no actual new information? Because she was biased to respect the results of her own experiments but not prior experiments done by other people (that she pointed out no flaws in)? This fits the pro-evidence, pro-science-experiments bias of LW/Galef. They’re too eager to test things without considering that, often, we already have plenty of evidence and we just need to examine and debate it better. Blackmore didn’t need any new evidence to change her mind, and getting funding to do experiments like that speaks to her privilege. Galef brings up multiple examples of privilege without showing any awareness of it; she just seems to want to suck up to high status people, and not think critically about their flaws, rather than to actually consider their privileges. Not only was Blackmore able to fund bad experiments; she was then able to change her mind and continue her career. Why did she get more opportunities after doing such a bad job earlier in her career? Yes she improved (how much really though?). But other people didn’t suck in the first place, then also improved, and never got such great opportunities.

Possibly all the examples in the book of changing one’s mind were things that Galef’s social circle could agree with rather than be challenged by. They all changed their minds to agree with Galef more, not less. E.g. one example was about someone becoming more convinced that global warming is real, which, in passing, smeared some other people on the climate change skeptic side as really biased, dishonest, etc. (True of some of them probably but not a good thing to throw around as an in-passing smear based on hearsay. And true of people on the opposite side of the debate too, so it’s biased to only say it about the side you disagree with to undermine and discredit them in passing while having the deniability of saying it was just an example of something else about rationality.) There was a pro-choicer who became less dogmatic but remained pro-choice, and I think Galef’s social circle also is pro-choice but trying not to be dogmatic about it. There was also a pro-vaccine person who was careful and strategic about bringing up the subject with his anti-vax wife but didn’t reconsider his own views at all; however, he and the author did display some understanding of the other side’s point of view and why some superficial pro-vax arguments won’t work. So the narrative is that if you understand the point of view of the people who are wrong, then you can persuade them better. But (implied) if you have center-left views typical of EA and LW people, then you won’t have to change your mind much since you’re mostly right.

Galef’s Misquote

Here’s a slightly edited version of my post on my CF forum about a misquote in the book. I expect the book has other misquotes (and factual errors, bad cites, etc.) but I didn’t look for them.

The Scout Mindset by Julia Galef quotes a blog post:

“Well, that’s too bad, because I do think it was morally wrong.”[14]

But the words in the sentence are different in the original post:

Well that’s just too bad, because I do think it was morally wrong of me to publish that list.

She left out the “just” and also cut off the quote early, which made it look like the end of a sentence when it wasn’t. Also, a previous quote from the same post changes the italics, even though the italics match in this one.

The book also summarizes events related to this blog post, and the story told doesn’t match reality (as I see it by looking at the actual posts). Also I guess he didn’t like the attention from the book because he took his whole blog down and the link in the book’s footnote is dead. The book says they’re engaged so maybe he mistakenly thought he would like the attention and had a say in whether to be included? Hopefully… Also the engagement may explain the biased summary of the story that she gave in her book about not being biased.

She also wrote about the same events:

He even published a list titled “Why It’s Plausible I’m Wrong,”

This is misleading because he didn’t put up a post with that title. It’s a section title within a post and she didn’t give a cite so it’s hard to find. Also her capitalization differs from the original which said “Why it’s plausible I’m wrong”. The capitalization change is relevant to making it look more like a title when it isn’t.

BTW I checked archives from other dates. The most recent working one doesn’t have any edits to this wording nor does the oldest version.

What is going on? This book is from a major publisher and there’s no apparent benefit to misquoting it in this way. She didn’t twist his words for some agenda; she just changed them enough that she’s clearly doing something wrong but with no apparent motive (besides maybe minor editing to make the quote sound more polished?). And it’s a blog post; wouldn’t she use copy/paste to get the quote? Did she have the blog post open in her browser and go back and forth between it and her manuscript in order to type in the quote by hand!? That would be a bizarre process. Or does she or someone else change quotes during editing passes in the same way they’d edit non-quotes? Do they just run Grammarly or similar and see snippets from the book and edit them without reading the whole paragraph and realizing they’re within quote marks?

My Email to Julia Galef

Misquote in Scout Mindset:

“Well, that’s too bad, because I do think it was morally wrong.”[14]

But the original sentence was actually:

Well that’s just too bad, because I do think it was morally wrong of me to publish that list.

The largest change is deleting the word "just".

I wanted to let you know about the error and also ask if you could tell me what sort of writing or editing process is capable of producing that error? I've seen similar errors in other books and would really appreciate if I could understand what the cause is. I know one cause is carelessness when typing in a quote from paper but this is from a blog post and was presumably copy/pasted.

Galef did not respond to this email.



EA Should Raise Its Standards



I think EA could be over 50% more effective by raising its standards. Community norms should care more about errors and about using explicit reasoning and methods.

For a small example, quotes should be exact. There should be a social norm that says misquotes are unacceptable. You can’t just change the words (without brackets), put it in quote marks, and publish it. That’s not OK. I believe this norm doesn’t currently exist and there would be significant resistance to it. Many people would think it’s not a big deal, and that it’s a pedantic or autistic demand to be so literal with quotes. I think this is a way that people downplay and accept errors, which contributes to lowered effectiveness.

There are similar issues with many sorts of logical, mathematical, grammatical, factual and other errors where a fairly clear and objective “correct answer” can be determined, which should be uncontroversial, and yet people don’t care about, or take seriously, the importance of getting it right. Errors should be corrected. Retractions should be issued. Post-mortems should be performed. What process allowed the error to happen? What changes could be made to prevent similar errors from happening in the future?

It’s fine for beginners to make mistakes, but thought leaders in the community should be held to higher standards, and the higher standards should be an aspirational ideal that the beginners want to achieve, rather than something that’s seen as unnecessary, bad or too much work. It’s possible to avoid misquotes and logical errors without it being a major burden; if someone finds it’s a large burden, that means they need to practice more until they improve their intuition and subconscious mind. Getting things right in these ways should be easy and something you can do while tired, distracted, autopiloting, etc.

Fixes like these won’t make EA far more effective by themselves. They will set the stage for more advanced or complex improvements. It’s very hard to make more important improvements while frequently making small errors. Getting the basics right enables working more effectively on more advanced issues.

One of the main more advanced issues is rational debate.

Another is not trusting yourself. Don’t bet anything on your integrity or lack of bias when you can avoid it. There should be a strong norm against doing anything that would fail if you have low integrity or bias. If you can find any alternative that doesn’t rely on your rationality, do that instead. Bias is common. Learning to be better at not fooling yourself is great, but you’ll probably screw it up a lot. If you can approach things so that you don’t have the opportunity to fool yourself, that’s better. There should be strong norms for things like transparency and following flowcharted methods and rules that dramatically reduce scope for bias. This can be applied to debate as well as to other things. And getting debate right enables criticism when people don’t live up to norms; without getting debate right, norms have to be enforced in significant part with social pressure, which compromises the rationality of the community and prevents it from clearly seizing the rationality high ground in debates with other groups.



Effective Altruism Related Articles

I wanted to make it easier to find all my Effective Altruism (EA) related articles. I made an EA blog category.

Below I link to more EA-related stuff which isn't included in the category list.

Critical Fallibilism articles:

I also posted copies of some of my EA comments/replies in this topic on my forum.

You can look through my posts and comments on the EA site via my EA user profile.

I continued a discussion with an Effective Altruist at my forum. I stopped using the EA forum because they changed their rules to require giving up your property rights for anything you post there (so e.g. anyone can sell your writing without your permission and without paying you).

I also made videos related to EA:



Effective Altruism Is Mostly Wrong About Its Causes

EA has gotten a few billion dollars to go towards its favored charitable causes. What are they? Here are some top cause areas:

  • AI alignment
  • animal welfare and veganism
  • disease and physical health
  • mental health
  • poverty
  • environmentalism
  • left wing politics

This is not a complete list. They care about some other stuff such as Bayesian epistemology. They also have charities related to helping the EA movement itself and they put a bunch of resources into evaluating other charities.

These causes are mostly bad and harmful. Overall, I estimate EA is doing more harm than good. For their billions of dollars, despite their cost-effectiveness oriented mission, they’ve created negative value.

EA’s Causes

Meta charities are charities whose job is to evaluate other charities then pass on money to the best ones. That has upsides and downsides. One of the downsides is it makes it less obvious what charities the EA movement funds.

Giving What We Can has a list of recommended organizations to donate to. The first two charities I see listed are a “Top Charities Fund” and an “All Grants Fund”, which are generic meta charities. That’s basically just saying “Give us money and trust us to use it for good things on any topic we choose.”

The second group of charities I see is related to animal welfare, but all three are meta charities which fund other animal welfare charities. So it’s unclear what the money is actually for.

Scrolling down, next I see three meta charities related to longtermism. I assume these mostly donate to AI alignment, but I don’t know for sure, and they do care about some other issues like pandemics. You might expect pandemics to be categorized under health not longtermism/futurism, but one of the longtermism meta charity descriptions explicitly mentions pandemics.

Scrolling again, I see a climate change meta charity, a global catastrophe risk meta charity (vague), and an EA Infrastructure charity that tries to help the EA cause.

Scrolling again, I finally see some specific charities. There are two about malaria, then one about childhood vaccinations in Nigeria. Scrolling horizontally, they have a charity about treating depression in Africa alongside iodine and vitamin A supplements and deworming. There are two about happiness and one about giving poor people some cash while also spending money advocating for Universal Basic Income (a left wing political cause). It’s weird to me how these very different sorts of causes get mixed together: physical health, mental health and political activism.

Scrolling down, next is animal welfare. After that comes some pandemic stuff and one about getting people to switch careers to work in these charities.

I’m sure there’s better information somewhere else about what EA actually funds and how much money goes to what. But I’m just going to talk about how good or bad some of these causes are.

Cause Evaluations

AI Alignment

This cause has various philosophical premises. If they’re wrong, the cause is wrong. There’s no real way to debate them, and it’s hard to even get clarifications on what their premises are, and there’s significant variation in beliefs between different advocates, which makes it harder to have any specific target to criticize.

Their premises are things like Bayesian epistemology, induction, and there being no objective morality. How trying to permanently (mind) control the goals of AIs differs from trying to enslave them is unclear. They’re worried about a war with the AIs but they’re the aggressors looking to disallow AIs from having regular freedoms and human rights.

They’re basically betting everything on Popper and Deutsch being wrong, but they don’t care to read and critique Popper and Deutsch. They don’t address Popper’s refutation of induction or Deutsch on universal intelligence. If humans already have universal intelligence, then in short AIs will at best be like us, not be something super different and way more powerful.

Animal Welfare

The philosophical premises are wrong. To know whether animals suffer you need to understand things about intelligence and ideas. That’s based on epistemology. The whole movement lacks any serious interest in epistemology. Or in simple terms, they don’t have reasonable arguments to differentiate a mouse from a roomba (with a bit better software). A roomba uncontroversially can’t suffer, and it still wouldn’t be able to suffer if you programmed it a bit better, gave it legs and feet instead of wheels, gave it a mouth and stomach instead of a battery recharging station, etc.

The tribalism and political activism are wrong too. Pressuring corporations and putting out propaganda is the wrong approach. If you want to make the world better, educate people about better principles and ways of thinking. People need rationality and reasonable political philosophy. Skipping those steps and jumping straight into specific causes, without worrying about the premises underlying people’s reasoning and motivations, is bad. It encourages people to do unprincipled and incorrect stuff.

Basically, for all sorts of causes that have anything to do with politics, there are tons of people working to do X, and also tons of people working to stop X or to get something that contradicts X. The result is a lot of people working against each other. If you want to make a better world, you have to stop participating in those tribalist fights and start resolving those conflicts. That requires connecting what you do to principles people can agree on, and teaching people better principles and reasoning skills (which requires first learning those things yourself, and also being open to debate and criticism about them).

See also my state of the animal rights debate tree diagram. I tried to find any vegans or other advocates who had any answers to it or relevant literature to add anything to it, but basically couldn’t get any useful answers anywhere.

Let’s also try to think about the bigger picture in a few ways. Why are factory farms so popular? What is causing that?

Do people not have enough money to afford higher quality food? If so, what is causing that? Maybe lack of capitalism or lack of socialism. You have to actually think about political philosophy to have reasonable opinions about this stuff and reach conclusions. You shouldn’t be taking action before that. I don’t think there exists a charity that cares about animal welfare and would use anti-poverty work as the method to achieve it. That’s too indirect for people or something, so they should get better at reasoning…

A ton of Americans do have money to afford some better food. Is the problem lack of awareness of how bad factory farms are, the health concerns they create for humans, or lack of knowledge of which brands or meats are using factory farms? Would raising awareness help a lot? I saw something claiming that in a survey over 50% of Americans said they thought their meat was from animals that were treated pretty well, but actually like 99% of US meat is from factory farms, so a ton of people are mistaken. I find that plausible. Raising awareness is something some charities work on, but often in shrill, propagandistic, aggressive, alienating or tribalist ways, rather than providing useful, reliable, unbiased, non-partisan information.

Maybe the biggest issue with factory farms in the US is laws and regulations (including subsidies) which were written largely by lobbyists for giant food corporations, and which are extremely hostile to their smaller competitors. That is plausible to me. How much animal welfare work is oriented towards this problem? I doubt it’s much since I’ve seen a decent amount of animal welfare stuff but never seen this mentioned. And what efforts are oriented towards planning and figuring out whether this is the right problem to work on, and coming up with a really good plan for how to make a change? So often people rush to try to change things without recognizing how hard, expensive and risky change is, and making damn sure they’ve thought everything through and the change will actually work as intended and the plan to cause the change will work as intended too.

Left Wing Politics

More broadly, any kind of left wing political activism is just fighting with the right instead of finding ways to promote social harmony and mutual benefit. It’s part of a class warfare mindset. The better way is in short classical liberalism, which neither the current left nor right knows much about. It’s in short about making a better society for everyone instead of fighting with each other. Trying to beat the rival political tribe is counter-productive. Approaches which transcend politics are needed.

Mental Health

Psychiatry is bad. It uses power to do things to people against their consent, and it’s manipulative, and its drugs are broadly poison. Here’s a summary from Thomas Szasz, author of The Myth of Mental Illness.

Environmentalism

As with many major political debates, both sides of the global warming debate are terrible and have no idea what they’re talking about. And there isn’t much thoughtful debate. I’ve been unable to find refutations of Richard Lindzen’s skeptical arguments related to water vapor. The “97% of scientists agree” thing is biased junk, and even if it were true it’s an appeal to authority, not a rational argument. The weather is very hard to predict even in the short term, and a lot of people have made a lot of wrong predictions about long term warming or cooling. They often seem motivated by other agendas like deindustrialization, anti-technology attitudes or anti-capitalism, with global warming serving as an excuse. Some of what they say sounds a lot like “Let’s do trillions of dollars of economic harm taking steps that we claim are not good enough and won’t actually work.” There are fairly blatant biases in things like scientific research funding – science is being corrupted as young scientists are under pressure to reach certain conclusions.

There are various other issues including pollution, running out of fossil fuels, renewables, electric cars and sustainability. These are all the kinds of things where

  1. People disagree with you. You might be wrong. You might be on the wrong side. What you’re doing might be harmful.
  2. You spend time and resources fighting with people who are working against you.
  3. Most people involved have tribalist mindsets.
  4. Political activism is common.
  5. Rational, effective, productive debate, to actually reasonably resolve the disagreements about what should be done, is rare.

What’s needed is to figure out ways to actually rationally persuade people (not use propaganda on them) and reach more agreement about the right things to do, rather than responding to a controversy by putting resources into one side of it (while others put resources into the other side, and you kinda cancel each other out).

Physical Health

These are the EA causes I agree with the most. Childhood vaccinations, vitamin A, iodine and deworming sound good. Golden rice sounds like a good idea to me (not mentioned here but I’ve praised it before). I haven’t studied this stuff a ton. The anti-malaria charities concern me because I expect that they endorse anti-DDT propaganda.

I looked at New Incentives which gets Nigerian babies vaccinated (6 visits to a vaccination clinic finishing at 15 months old) for around $20 each by offering parents money to do it. I checked if they were involved with covid vaccine political fighting and they appear to have stayed out of that, so that’s good. I have a big problem with charities that have some good cause but then get distracted from it to do tribalist political activism and fight with some enemy political tribe. A charity that just sticks to doing something useful is better. So this one actually looks pretty good and cost effective based on brief research.

Pandemic prevention could be good but I’d be concerned about what methods charities are using. My main concern is they’d mostly do political activism and fight with opponents who disagree with them, rather than finding something actually productive and effective to do. Also pandemic prevention is dominated quite a lot by government policy, so it’s hard to stay out of politics. Just spending donations to stockpile some masks, vaccines and other supplies (because some governments don’t have enough) doesn’t sound like a very good approach, and that’d be more about mitigation than prevention anyway.

Even something like childhood vaccination in Nigeria has some concerns. Looking at it in isolation, sure, it makes some things better. It’s a local optimum. But what bigger picture does it fit into?

For example, why isn’t the Nigerian government handling this? Is it essentially a subsidy for the Nigerian government, which lets them buy more of something else, and underfund this because charities step in and help here? Could the availability of charity for some important stuff cause budgets to allocate money away from those important things?

Does giving these poor Nigerians this money let their government tax them at higher rates than it otherwise would, so some of the money is essentially going to the Nigerian government not to the people being directly helped? Might some of the money be stolen in other ways, e.g. by thugs in areas where there’s inadequate protection against violence? Might the money attract any thugs to the area? Might the thugs pressure some women to have more babies and get the money so that they can steal it? I don’t know. These are just some initial guesses about potential problems that I think are worth some consideration. If I actually studied the topic I’m sure I’d come up with some other concerns, as well as learning that some of my initial concerns are actually important problems while others aren’t.

Why are these Nigerians so poor that a few dollars makes a significant difference to them? What is causing that? Is it bad government policies? Is there a downside to mitigating the harm done by those policies, which helps them stay in place and not seem so bad? And could we do more good by teaching the world about classical liberalism or how to rationally evaluate and debate political principles? Could we do more good by improving the US so it can set a better example for other countries to follow?

Helping some poor Nigerians is a superficial (band aid) fix to some specific problems, which isn’t very effective in the big picture. It doesn’t solve the root causes/problems involved. It doesn’t even try to. It just gives some temporary relief to some people. And it has some downsides. But the downsides are much smaller compared to most of EA’s other causes, and the benefits are real and useful – they make some people’s lives clearly better – even if they’re just local optima.

Meta Charities

I think EA’s work on evaluating the effectiveness of charity interventions has some positive aspects, but a lot of it is focused on local optima which can actually make it harmful overall even if some of the details are correct. Focusing attention and thinking on the wrong issues makes it harder for more important issues to get attention. If no one was doing any kind of planning, it’s easier to come along and say “Hey let’s do some planning” and have anyone listen. If there’s already tons of planning of the wrong types, a pro-planning message is easier to ignore.

EA will look at how well charities work on their own terms, without thinking about the cause and effect logic of the full situation. I’ve gone over this a few times in other sections. Looking at cost per childhood vaccination is a local optimum. The bigger picture includes things like how it may subsidize a bad government or local thugs, or how it’s just a temporary short-term mitigation while there are bigger problems like bad economic systems that cause poverty. How beneficial is it really to fix one instance of a problem when there are systems in the world which keep creating that problem over and over? Dealing with those systems that keep causing the problems is more important. In simple terms, imagine a guy was going around breaking people’s legs, and you went around giving them painkillers… There is a local optimum of helping people who are in pain, but it’s much more important to deal with the underlying cause. From what I’ve seen, EA’s meta charity evaluation is broadly about local optima not bigger picture understanding of causes of problems, so it often treats symptoms of a problem not the real problem. They will just measure how much pain relief an intervention provides and evaluate how good it is on that basis (unless they manage to notice a bigger picture problem, which they occasionally do, but they aren’t good at systematically finding those).

Also they try to compare charities that do different kinds of things. So you have benefits in different dimensions and they try to compare. They tend to do this, in short, by weighted factor summing, which fundamentally doesn’t work (it’s completely wrong, broken and impossible, and means there are hidden and generally biased thought processes responsible for the conclusions reached). As a quick example, one of the EA charities I saw was doing something about trying to develop meat alternatives. This approaches animal welfare in a very different way than, say, giving painkillers to animals on factory farms or doing political activist propaganda against the big corporations involved. So there’s no way to directly compare which is better in simple numerical terms. As much as people like summary numbers, people need to learn to think about concepts better.
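
To illustrate what weighted factor summing looks like and why I consider it broken, here’s a minimal sketch with made-up charities, scores and weights (none of these numbers come from EA; they’re hypothetical). The point is that the ranking is driven by the weights, and the weights are exactly where the hidden, generally biased judgment calls live: pick a different but equally defensible weighting and the conclusion flips.

```python
# Hypothetical illustration of weighted factor summing (all numbers made up).
# Two charities doing different kinds of things, scored on several factors.
charities = {
    "meat alternatives R&D": {"animal welfare": 8, "near-term impact": 2, "evidence strength": 3},
    "corporate pressure campaign": {"animal welfare": 4, "near-term impact": 7, "evidence strength": 5},
}

def weighted_score(scores, weights):
    # Collapse multi-dimensional scores into a single number via a weighted sum.
    return sum(weights[factor] * value for factor, value in scores.items())

# Two weightings that could each be defended verbally.
weights_a = {"animal welfare": 0.7, "near-term impact": 0.15, "evidence strength": 0.15}
weights_b = {"animal welfare": 0.2, "near-term impact": 0.5, "evidence strength": 0.3}

for weights in (weights_a, weights_b):
    ranking = sorted(charities, key=lambda name: weighted_score(charities[name], weights), reverse=True)
    print(ranking)  # the "winner" flips depending on which weights you chose
```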

Details

I could do more detailed research and argument about any of these causes, but it’s unrewarding because I don’t think EA will listen and seriously engage. That is, I think a lot of my ideas would not be acted on or refuted with arguments. So I’d still think I’m right, and be given no way to change my mind, and then people would keep doing things I consider counter-productive. Also, I already have gone into a lot more depth on some of these issues, and didn’t include it here because it’s not really the point.

Why do some people have a different view of EA and criticism, or different experiences with that? Why do some people feel more heard and influential? Two big reasons. First, you can social climb at EA then influence them. Second, compared to me, most people do criticism that’s simpler and more focused on local optima not foundations, fundamentals, root causes or questioning premises. (I actually try to give simple criticisms sometimes like “This is a misquote” or “This is factually false” but people will complain about that too. But I won’t get into that here.) People like criticism better when it doesn’t cross field boundaries and make them think about things they aren’t good at, don’t know much about, aren’t experienced at, or aren’t interested in. My criticisms tend to raise fundamental, important challenges and be multi-disciplinary instead of just staying within one field and not challenging its premises.

Conclusion

The broad issues are people who aren’t very rational or open to criticism and error correction, who then pursue causes which might be mistaken and harmful, and who don’t care much about rational persuasion or rational debate. People seem so willing to just join a tribe and fight opponents, and that is not the way to make the world better. Useful work almost all transcends those fights and stays out of them. And the most useful work, which will actually fix things in very cost-effective, lasting ways, is related to principles and better thinking. Help people think better and then the rest is so much easier.

There’s something really, really bad about working against other people who think they’re right, while you just spend your effort trying to counteract their effort. Even if you’re right and they’re wrong, that’s so bad for cost effectiveness. Persuading them would be so much better. If you can’t figure out how to do that, why not? What are the causes that prevent rational persuasion? Do you not know enough? Are they broken in some way? If they are broken, why not help them instead of fighting with them? Why not be nice and sympathetic instead of viewing them as enemies to be beaten by destructively overwhelming their resources with even more resources? I value things like social harmony and cooperation rather than adversarial interactions, and (as explained by classical liberalism) I don’t think there are inherent conflicts of interest between people that require (Marxist) class warfare or which disallow harmony and mutual benefit. People who are content to work against other people, in a fairly direct fight, generally seem pretty mean to me, which is rather contrary to the supposed kind, helping-others spirit of EA and charity.

EA’s approach to causes, as a whole, is a big bet on jumping into stuff without enough planning, understanding root causes, and figuring out how to make the right changes. They should read e.g. Eli Goldratt on transition tree diagrams and how he approaches making changes within one company. If you want to make big changes affecting more people, you need much better planning than that. EA doesn’t do or have such planning, which encourages a short-term mindset of pursuing local optima that might be counter-productive, without adequately considering that you might be wrong and in need of better planning.

People put so much work into causes while putting way too little into figuring out whether those causes are actually beneficial, and understanding the whole situation and what other actions or approaches might be more effective. EA talks a lot about effectiveness but they mostly mean optimizing cost/benefit ratios given a bunch of unquestioned premises, not looking at the bigger picture and figuring out the best approach with detailed cause-effect understanding and planning.

More posts related to EA.



Effective Altruism Hurts People Who Donate Too Much

I was having an extended discussion with CB from EA when the licensing rules were changed and I quit the EA forum. So I asked if he wanted to continue at my forum. He said yes and registered an account but got stuck before posting.

I clarified that he couldn’t post because he hadn’t paid the one-time $20 price of an account. I offered him a free account if the $20 would be a financial burden, but said if he could afford it then I request he pay because if he values conversing with me less than $20 then I don’t think it’s a good use of my time.

Despite (I think) wanting to talk with me more, and having already spent hours on it, he changed his mind over the one-time $20 price. He said:

I don't think I will pay $20 because all the money I earn beyond my basic needs is going to charities.

That makes EA sound somewhat like a cult which has brainwashed him. And I’ve heard of EA doing this to other people. Some highly involved and respected EA people have admitted to feeling guilty about buying any luxuries, such as a coffee, and have struggled to live normal lives. This has been a known problem with EA for many years, and they have no good plan to fix it and are continuing to hurt people and take around the maximum amount of money you can get from someone, just like some cults do. Further, EA encourages people to change careers to do EA-related work; it tries to take over people’s entire lives just like cults often do. EAs dating other EAs is common too, sometimes polyamorously (dating an EA makes EA a larger influence in your life, and weird sexual practices are common with cults).

I don’t recall ever accusing anything of being a cult before, and overall I don’t think EA is a cult. But I think EA crosses a line here and deserves to be compared to a cult. EA clearly has differences from a cult, but having these similarities with cults is harmful.

EA does not demand you donate the maximum. They make sure to say it’s OK to donate at whatever level you’re comfortable with, or something more along those lines. But they also do bring up ideas about maximizing giving and comparing the utility of every different action you could do and maximizing utility (or impact or effectiveness or good). They don’t have good ideas about where or how to draw a line to limiting your giving, so I think they leave that up to individuals, many of whom won’t come up with good solutions themselves.

CB’s not poor, and he wants something, and the stakes are much higher than $20, but he can’t buy it because he feels that he has to give EA all his money. I think he already put hundreds of dollars of his time into the conversation, and I certainly did, and I think he planned to put hundreds more dollars of his time into it, but somehow $20 is a dealbreaker. He works in computing so his time could easily be worth over $100/hr.

I wonder if he considered that, instead of talking with me, he could have spent those hours volunteering at a soup kitchen. Or he could have spent those hours working and making more money to donate. He might need a second job or side gig or something to adjust how many hours he works, but he could do that. If he’s a programmer, he could make a phone or web app on the side and set his own schedule for that additional work. (What about burnout? Having intellectual conversations also takes up mental energy. So he had some to spare.)

Anyway, it’s very sad to see someone all twisted up like this. From what I can tell, he’s fairly young and naive, and doesn’t know much about money or economics.

Note/update: After I finished writing this article, before I posted it, CB claimed that he exaggerated about how much he donates. That partial retraction has not changed my mind about the general issues, although it makes his individual situation somewhat less bad and does address some specific points like whether he could buy a (cheap) book.

Investing In Yourself

Buying a conversation where he’d learn something could make CB wiser and more effective, which could lead to him earning more money, making better decisions about which charities to donate to, and other benefits.

I wonder if CB also doesn’t buy books because they aren’t part of his “basic needs”.

People should be encouraged to invest in themselves, not discouraged from it. EA is harming intellectual progress by handicapping a bunch of relatively smart, energetic young people so they don’t use financial resources to support their own personal progress and development.

This one thing – taking a bunch of young people who are interested in ideas and making it harder for them to develop into great thinkers – may do a huge amount of harm. Imagine if Karl Popper or Richard Feynman had donated so much money he couldn’t buy any books. Or pick whoever you think is important. What if the rich people several hundred years ago had all donated their money instead of hiring tutors to teach their kids – could that have stopped the enlightenment? (Note how that would have been doubly bad. It’d prevent some of their kids from turning into productive scientists and intellectuals, and it’d also take away gainful employment from the tutors, who were often scientists or other intellectuals without much money to fund their life or work.)

On a related note, basically none of EA’s favored charities are about finding the smartest or most rational people and helping them. But helping some of the best people could easily do more good than helping some of the poorest people. If you help a poor person have a happier, healthier life, that does some good. If you help a smart, middle-class American kid become a great thinker who writes a few really effective self-help books, his books could improve the lives of millions of people.

Admittedly, it’s hard to figure out how to help that kid. But people could at least try to work on that problem, brainstorm ideas, and critique their initial plans. There could be ongoing research to try to develop a good approach. But there isn’t much interest in that stuff.

The least they could do is leave that kid alone, rather than convince him to donate all his money above basic needs when he’s a young adult so he can’t afford books, online courses, and other resources that would be useful and enjoyable to him.

Also, at EA, I’ve been talking about criticism, debate and error correction. I’ve been trying to get them to consider their fallibility and the risk of being wrong about things, and to do more about that. So, for example, I think EA is mistaken about many of its causes. Suppose CB estimates just a 1% chance that I have a point he could learn from, and assumes it would only affect his own future donations. Talking to me would still be a good deal (in dollars, which I think is CB’s concern, though time matters too): he’ll donate more than $2000 in the future, so even multiplied by 1% that’s worth more than $20. Not considering things like this, and not investing in risk reduction, is partly related to not investing in yourself and partly related to poor, irrational attitudes about fallibility.
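To make that arithmetic concrete, here’s a rough sketch (Python, with the illustrative assumptions from the paragraph above – the 1% chance and the $2000 of future donations are hypothetical figures, not measurements):

    # Rough expected-value sketch with illustrative numbers.
    p_i_have_a_point = 0.01   # assumed 1% chance the conversation improves his donation decisions
    future_donations = 2000   # assumed lower bound on his future donations, in dollars
    membership_cost = 20      # forum membership cost, in dollars

    expected_benefit = p_i_have_a_point * future_donations  # $20
    print(expected_benefit >= membership_cost)  # True; better than break-even if he donates over $2000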

Also, I do tons of work (philosophy research, writing, discussion and video creation) trying to make the world better, mostly for free. Isn’t investing in me a way to make the world better? If you pay me $20, why is that any worse than donating it to a charity? Some people literally donate money to me like a charity because they respect and value what I do. Similarly, some EA charities give grants to intellectuals to do work on topics such as rationality, so I could receive such a grant. Donating to a grant-making organization that gave me a grant would count as charity, but giving me money directly counts less so, especially if you’re buying something from me (forum access). The marginal cost of forum access for me is $0. This isn’t like buying a hand-made table from me, where I’d have to put in time and materials and my profit margin might be only 25%; my marginal profit margin on forum memberships is 100% because I’m going to keep running the forum whether or not CB joins. EA focuses people’s attention on charities, has an incorrectly negative view of trade, and biases people against noticing that buying from small creators generally helps make the world better even though it’s not “charity”.

What CB Donates To

Are CB’s donations doing good?

For around $20, he could pay for six visits to a vaccination clinic for a baby in rural northern Nigeria. It can be half a day of travel to reach a clinic, so paying people a few dollars makes a meaningful difference to whether they make the trip.

I wonder which vaccinations are actually important for people living in small, isolated communities like that. Some vaccinations seem much more relevant in a city or if you come in contact with more people. How many of them will ever visit a big city in their life? I don’t know. Also even if vaccinations provide significant value to them, they’re really poor, so maybe something else would improve their lives more.

I looked through charities that EA recommends and that vaccination charity looked to me like one of the best options. Plus I read a bit about it, unlike some of the other promising ones, like a charity that gives people Vitamin A. Some charities get distracted by political activism, so I checked whether this one was taking a political side about the covid vaccine, and it didn’t appear to be, which is nice to see. I think finding charities that stay out of politics is one of the better selection methods that people could and should use. EA cares a lot about evaluating and recommending charities, but I’m not aware of them using staying out of politics as a criterion. EA itself is pretty political.

I’m doubtful that CB donates to that kind of cause, which provides fairly concrete health benefits for poor people in distant countries. Based on our discussions and his profile, I think his top cause is animal welfare. He may also donate to left-wing energy causes (like anti-fossil-fuel activism) and possibly AI Alignment. I think those are terrible causes where his donations would likely do more harm than good. I’m not going to talk about AI Alignment here; it isn’t very political, and its problems are more about bad epistemology and moral philosophy (plus an unwillingness to debate with critics).

Animal welfare and anti-fossil-fuel causes are left-wing political activism. Rather than staying out of politics, those causes get involved in politics on purpose. (Not every single charity in those spaces is political, just most of them.)

Let me explain it using a different issue as an example where there’s tons of visible political propaganda coming from both sides. The US pro-life right puts out lots of propaganda, and they recently had a major victory getting Roe vs. Wade overturned. Now they’re changing some state laws to hurt people, particularly women. Meanwhile, the pro-choice left also puts out propaganda. To some extent, the propaganda from the two sides cancels each other out.

Imagine a pro-choice charity that said: “Next year, the pro-lifers are expected to spend $10,000,000,000 on propaganda. We must counter them with truth. Please donate to us or our allies, because we need $10 billion a year just to break even and cancel out what they’re doing. If we can get $15B/yr, we can start winning.”

Imagine that works. They get $15B and outspend the pro-lifers who only spend $10B. The extra $5B helps shift public perception to be more pro-choice. Suppose pro-choice is the correct view and getting people to believe it is actually good. We’ll just ignore the risk of being on the wrong side. (Disclosure: I’m pro-abortion.)

Then there’s $25B being spent in total: $20B is basically incinerated, and $5B makes the world better. That is really bad. 80% of the money isn’t doing any good. This is super inefficient. In general, the best case scenario when donating to tribalist political activism looks kind of like this.
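Here’s the same arithmetic laid out as a quick sketch (Python, using the hypothetical figures from the scenario above):

    # Hypothetical spending figures from the scenario above.
    pro_life_spend = 10e9     # $10B
    pro_choice_spend = 15e9   # $15B

    total = pro_life_spend + pro_choice_spend              # $25B spent overall
    cancelled = 2 * min(pro_life_spend, pro_choice_spend)  # $20B that roughly cancels out
    net_persuasion = pro_choice_spend - pro_life_spend     # $5B that actually shifts opinion

    print(cancelled / total)  # 0.8 -> 80% of the money does no net good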

If you want to be more effective, you have to be more non-partisan, more focused on rationality, and stay out of propaganda wars.

In simplified terms, pro-choice activism is more right than wrong, whereas I fear CB is donating to activism which is more wrong than right.

Saving Money and Capital Accumulation

I fear that spending only on basic needs, and donating the rest, means CB isn’t saving (enough) money.

If you don’t save money, you may end up being a burden on society later. You may need to receive help from the government or from charities. By donating money that should be saved, you risk later taking money away from others and being a drain on resources because you don’t have enough to take care of yourself.

CB’s kids may have to take out student loans, and end up in a bunch of debt, because CB donated a bunch of money instead of putting it in a college fund for them.

CB may end up disabled. He may get fired and struggle to find a new job, perhaps through no fault of his own. Jobs could get harder to come by due to recession, natural disaster, or many other problems. He shouldn’t treat his expected future income as reliable. Plus, he says he wants to stop working in computing and switch to an EA-related job. That probably means taking a significant pay cut. He should plan ahead, and save money now while he has higher income, to help enable him to take a lower paying job later if he wants to. As people get older, their expenses generally go up, and their income generally goes up too. If he wants to take a pay cut when he’s older, instead of having a higher income to deal with higher expenses, that could be a major problem, especially if he didn’t save money now to deal with it.

Does saving money waste it? No. Saving means refraining from consumption. If you want to waste your money, buy frivolous stuff. If you work and save, you’re contributing to society. You provide work that helps others, and by saving you don’t ask for anything in return (now – but you can ask for it later when you spend your money).

Saving isn’t like locking up a bunch of machine tools in a vault so they don’t benefit anyone. People save money, not tools or food. Money is a medium of exchange. As long as there is enough money in circulation, money accomplishes its purpose. There’s basically no harm in keeping some cash in a drawer. Today, keeping money in a bank is just a number in a computer, so it doesn’t even take physical cash out of circulation.

Money basically represents a debt where you did something to benefit others, and now you’re owed something equally valuable in return from others. When you save money, you’re not asking for what you’re owed from others. You helped them for nothing in return. It’s a lot like charity.

Instead of saving cash, you can invest it. This is less like charity. You can get interest payments or the value of your investment can grow. In return for not spending your money now, you get (on average) more of it.

If you invest money instead of consuming it, then you contribute to capital accumulation. You invest in businesses, not luxuries. In other words (as an approximation), you help pay for machines, tools and buildings, not for ice cream or massages. You invest in factories and production that can help make the world a better place (by repeatedly creating useful products), not in short-term benefits.

The more capital we accumulate, the higher the productivity of labor. The higher the productivity of labor, the higher workers’ wages and the more useful products get created. There are details, like negotiations over how much of the additional wealth goes to whom, but the overall point is that more capital accumulation means more wealth is produced and there’s more for everyone. Making the pie bigger matters more than fighting over who gets which slice, though the distribution of wealth does matter too.

When you donate to a charity which spends the money on activism or even vaccines, that is consumption. It’s using up wealth to accomplish something now. (Not entirely consumption, because a healthy worker is more productive, so childhood vaccines are to some extent an investment in human capital. But they aren’t even trying to evaluate the most effective way to invest in human capital to raise productivity. That isn’t their goal, so they’re probably not being especially cost-effective at it.)

When you save money and invest it, you’re helping with capital accumulation – you’re helping build up total human wealth. When you consume money on charitable causes or luxuries, you’re reducing total human wealth.

Has EA ever evaluated how much good investing in an index fund does and compared it to any of their charities? I doubt it. (An index fund is a way to have your investment distributed among many different companies so you don’t have to guess which specific companies are good. It also doesn’t directly give money to the company because you buy stock from previous investors, but as a major simplification we can treat it like investing in the company.)

I’ve never seen anything from EA talking about how much good you’ll do if you don’t donate, and subtracting that from the good done by donating, to see how much additional good donating does (which could be below zero in some cases, or even on average – who knows without actually investigating). If you buy some fancy dinner and wine, you get some enjoyment, so that does some good. If you buy a self-help book or online course and invest in yourself, that does more good. If you buy a chair or frying pan, that’s also investing in yourself and your life, and does good. If you invest the money in a business, that does some good (on average). Or maybe you think so many big businesses are so bad that investing in them makes the world worse, which I find plausible, and it reminds me of my view that many non-profits are really bad. I have a negative view of most large companies, but overall I suspect that, on average, non-profits are worse than for-profit businesses.
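One way to frame the comparison this paragraph asks for (a sketch of the idea, not anything EA publishes; the function name and the numbers are placeholders you’d have to estimate yourself):

    # Hypothetical framing: the *additional* good of donating is the good the donation
    # does minus the good the same money would have done in its next-best use
    # (investing, buying books, self-improvement, etc.).
    def additional_good(good_from_donating, good_from_best_alternative):
        return good_from_donating - good_from_best_alternative

    # Placeholder estimates in arbitrary units of "good"; the result can be negative.
    print(additional_good(good_from_donating=10, good_from_best_alternative=12))  # -2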

EA has a bunch of anti-capitalists who don’t know much about economics. CB in particular is so ignorant of capitalism that he didn’t know it prohibits fraud. He doesn’t know, in a basic sense, what the definition of capitalism even is. And he also doesn’t know that he doesn’t know. He thought he knew, and he challenged me on that point, but he was wrong and ignorant.

These people need to read Ludwig von Mises, both for the economics and for the classical liberalism. They don’t understand harmony vs. conflicts of interest, and a lot of what they do, like political activism, is based on assuming there are conflicts of interest and that the goal should be to make your side win. They often don’t aim at win/win solutions, mutual benefit and social harmony. They don’t really understand peace, freedom, or how a free market is a proposal for creating social harmony and benefiting everyone, with mechanisms for doing that which are superior to what charities try to do. Getting capitalism working better could easily do more good than what they’re doing now, but they wouldn’t really even consider such a plan. (I’m aware that I haven’t explained capitalism enough here for people to learn about it from this article. It may make sense to people who already know some stuff. If you want to know more, read Mises, read Capitalism: A Treatise on Economics, and feel free to ask questions or seek debate at my forum. If you find this material difficult, you may first need to put effort into learning how to learn, getting better at reading, research and critical thinking, managing your schedule, managing your motivations and emotions, managing projects over time, etc.)

Conclusion

CB was more intellectually tolerant and friendly than most EAers. Most of them can’t stand to talk to someone like me who has a different perspective and some different philosophical premises. He could, so in that way he’s better than them. He has a ton of room for improvement at critical thinking, rigor and precision, but he could easily be categorized as smart.

So it’s sad to see EA hurt him in such a major way that really disrupts his life. Doing so much harm is pretty unusual – cults can do it but most things in society don’t. It’s ironic and sad that EA, which is about doing good, is harming him.

And if I was going to try to improve the world and help people, people like CB would be high on my list for who to help. I think helping some smart and intellectually tolerant people would do more good than childhood vaccines in Nigeria, let alone leftist (or rightist) political activism. The other person I know of who thought this way – about prioritizing helping some of the better people, especially smart young people – was Ayn Rand.

I am trying to help these people – that’s a major purpose of sharing writing – but it’s not my top priority in my life. I’m not an altruist. Although, like Rand and some classical liberals, I don’t believe there’s a conflict between the self and the other. Promoting altruism is fundamentally harmful because it spreads the idea that you must choose between yourself and others, and that there’s a conflict requiring winners and losers. I think Rand should have promoted harmony more and egoism or selfishness less, but at least her intellectual position was that everyone can win and benefit. EA doesn’t say that; it intentionally asks people like CB to sacrifice their own good to help others, thereby implying that there is a conflict between what’s good for CB and what’s good for others, and that social harmony is impossible because there’s no common good that’s good for everyone.

I’ll end by saying that EA pushes young people to rush to donate way too much money when they’re often quite ignorant and don’t even know much about which causes are actually good or bad. EA has some leaders who are more experienced and knowledgeable, but many of them have political and tribalist agendas, aren’t rational, and won’t debate or address criticism of their views. It’s totally understandable for a young person to have no idea what capitalism is and to be gullible in some ways, but it’s not OK for EA to take advantage of that gullibility, keep its membership ignorant of what capitalism is, and discourage members from reading Mises or speaking with people like me who know about capitalism and classical liberalism. EA has leaders who know more about capitalism, and hate it, but won’t write reasonable arguments or debate the matter in an effective, truth-seeking way. They won’t point out how/why/where Mises was wrong; instead they guide young people not to read Mises and to donate all their money beyond basic needs to the causes that EA leaders like.

EDIT 2022-12-05: For context, see the section "Bad" EAs, caught in a misery trap in https://michaelnotebook.com/eanotes/ which had already previously alerted me that EA has issues with over-donating, guilt, difficulty justifying spending on yourself, etc., which affect a fair amount of people.



Talking With Effective Altruism

The main reasons I tried to talk with EA are:

  • they have a discussion forum
  • they are explicitly interested in rationality
  • it's public
  • it's not tiny
  • they have a bunch of ideas written down

That's not much, but even that much is rare. Some groups just have a contact form or email address, not any public discussion place. Of the groups with some sort of public discussion, most now use social media (e.g. a Facebook group) or a chatroom rather than having a forum, so there’s no reasonable way to talk with them. My policy, based on both reasoning and past experience, is that social media and chatrooms are so bad that I shouldn’t try to use them for serious discussions. They have the wrong design, incentives, expectations and culture for truth seeking. In other words, social media has been designed and optimized to appeal to irrational people. Irrational people are far more numerous, so the winners in a huge popularity contest would have to appeal to them. Forums existed many years before social media but are now far less popular, because they’re more rational.

I decided I was wrong about EA having a discussion forum. It's actually a hybrid between a subreddit and a forum. It's worse than a forum but better than a subreddit.

How good of a forum it is doesn’t really matter, because it’s now unusable due to a new rule saying that basically you must give up property rights for anything you post there. That is a very atypical forum rule; they're the ones being weird here, not me. One of the root causes of this error is their lack of understanding of and respect for property rights. Another cause is their lack of Paths Forward, debate policies, etc., which prevents error correction.

The difficulty of correcting their errors in general was the main hard part about talking with them. They aren't open to debate or criticism. They say that they are, and they are open to some types of criticism which don't question their premises too much. They'll sometimes debate criticisms about local optima they care about, but they don't like being told that they're focusing on local optima and should change their approach. Like most people, each of them tends to want to talk only about stuff he knows about, and they don't know much about their philosophical premises and have no reasonable way to deal with that (there are ways to delegate and specialize so you don't personally have to know everything, but they aren't doing that and don't seem to want to).

When I claim someone is focusing on local optima, it moves the discussion away from the topics they like thinking and talking about, and have experience and knowledge about. It moves the topic away from their current stuff (which I said is a local optimum) to other stuff (the bigger picture, global optima, alternatives to what they’re doing, comparisons between their thing and other things).

Multiple EA people openly, directly and clearly admitted to being bad at abstract or conceptual thinking. They seemed to think that was OK. They brought it up in order to ask me to change and stop trying to explain concepts. They didn’t mean to admit weakness in themselves. Most (all?) rationality-oriented communities I have past experience with were more into abstract, clever or conceptual reasoning than EAers are. I could deal with issues like this if people wanted to have extended, friendly conversations and make an effort to learn. I don’t mind. But by and large they don’t want to discuss at length. The primary response I got was not debate or criticism, but being ignored or downvoted. They didn’t engage much. It’s very hard to make any progress with people who don’t want to engage because they aren’t very active minded or open minded, or because they’re too tribalist and biased against some types of critics/heretics, or because they have infallibilist, arrogant, over-confident attitudes.

They often claim to be busy with their causes, but it doesn’t make sense to ignore arguments that you might be pursuing the wrong causes in order to keep pursuing those possibly-wrong causes; that’s very risky! But, in my experience, people (in general, not just at EA) are very resistant to caring about that sort of risk. People are bad at fallibilism.

I think a lot of EAers got a vibe from me that I’m not one of them – that I’m culturally different and don’t fit in. So they saw me as an enemy not someone on their side/team/tribe, so they treated me like I wasn’t actually trying to help. Their goal was to stop me from achieving my goals rather than to facilitate my work. Many people weren’t charitable and didn’t see my criticisms as good faith attempts to make things better. They thought I was in conflict with them instead of someone they could cooperate with, which is related to their general ignorance of social and economic harmony, win/wins, mutual benefit, capitalism, classical liberalism, and the criticisms of conflicts of interest and group conflicts. (Their basic idea with altruism is to ask people to make sacrifices to benefit others, not to help others using mutually beneficial win/wins.)



My Early Effective Altruism Experiences


This post covers some of my earlier time at EA but doesn’t discuss some of the later articles I posted there and the response.


I have several ideas about how to increase EA’s effectiveness by over 20%. But I don’t think they will be accepted immediately. People will find them counter-intuitive, not understand them, disagree with them, etc.

In order to effectively share ideas with EA, I need attention from EA people who will actually read and think about things. I don’t know how to get that, and I don’t think EA offers any list of steps that I could follow to get it, nor any policy guarantees like “If you do X, we’ll do Y” that I could use to bring up ideas. One standard way to get it, which has various other advantages, is to engage in debate (or critical discussion) with someone. However, only one person from EA (who isn’t particularly influential) has been willing to try to have a debate or serious conversation with me. By a serious conversation, I mean one that’s relatively long and high effort, which aims at reaching conclusions.

My most important idea about how to increase EA’s effectiveness is to improve EA’s receptiveness to ideas. This would let anyone better share (potential) good ideas with EA.

EA views itself as open to criticism and it has a public forum. So far, no moderator has censored my criticism, which is better than many other forums! However, no one takes responsibility for answering criticism or considering suggestions. It’s hard to get any disagreements resolved by debate at EA. There’s also no good way to get official or canonical answers to questions in order to establish some kind of standard EA position to target criticism at.

When one posts criticism or suggestions, there are many people who might engage, but no one is responsible for doing it. A common result is that posts do not get engagement. This happens to lots of other people besides me, and it happens to posts which appear to be high effort. There are no clear goalposts to meet in order to get attention for a post.

Attention at the EA forum seems to be allocated in pretty standard social hierarchy ways. The overall result is that EA’s openness to criticism is poor (objectively, but not compared to other groups, many of which are worse).

John the Hypothetical Critic

Suppose John has a criticism or suggestion for EA that would be very important if correct. There are three main scenarios:

  1. John is right and EA is wrong.
  2. EA is right and John is wrong.
  3. EA and John are both wrong.

There should be a reasonable way so that, if John is right, EA can be corrected instead of just ignoring John. But EA doesn’t have effective policies to make that happen. No person or group is responsible for considering that John may be right, engaging with John’s arguments, or attempting to give a rebuttal.

It’s also really good to have a reasonable way so that, if John is wrong and EA is right, John can find out. EA’s knowledge should be accessible so other people can learn what EA knows, why EA is right, etc. This would make EA much more persuasive. EA has many articles which help with this, but if John has an incorrect criticism and is ignored, then he’s probably going to conclude that EA is wrong and won’t debate him, and lower his opinion of EA (plus people reading the exchange might do the same – they might see John give a criticism that isn’t answered and conclude that EA doesn’t really care about addressing criticism).

If John and EA are both wrong, it’d also be a worthwhile topic to devote some effort to, since EA is wrong about something. Discussing John’s incorrect criticism or suggestion could lead to finding out about EA’s error, which could then lead to brainstorming improvements.

I’ve written about these issues before with the term Paths Forward.

Me Visiting EA

The first thing I brought up at EA was asking whether EA has any debate methodology or any way I can get a debate with someone. Apparently not. My second question was about whether EA has some alternative to debates, and again the answer seemed to be no. I reiterated the question, pointing out that the “debate methodology” and “alternative to debate methodology” issues form a complete pair, and if EA has neither that’s bad. This time the title asked how EA was rational, and I think some people got defensive about it, which got me more attention than when my post titles didn’t offend people (the incentives there are really bad). Multiple replies seemed focused on the title, which I grant was vague, rather than on the body text, which gave details of what I meant.

Anyway, I finally got some sort of answer: EA lacks formal debate or discussion methods but has various informal attempts at rationality. Someone shared a list. I wrote a brief statement of what I thought the answer was and asked for feedback if I got EA’s position wrong. I got it right. I then wrote an essay criticizing EA’s position, including critiques of the listed points.

What happened next? Nothing. No one attempted to engage with my criticism of EA. No one tried to refute any of my arguments. No one tried to defend EA. It’s back to the original problem: EA isn’t set up to address criticism or engage in debate. It just has a bunch of people who might or might not do that in each case. There’s nothing organized and no one takes responsibility for addressing criticism. Also, even if someone did engage with me, and I persuaded them that I was correct, it wouldn’t change EA. It might not even get a second person to take an interest in debating the matter and potentially being persuaded too.

I think I know how to organize rational, effective debates and reach conclusions. The EA community broadly doesn’t want to try doing that my way nor do they have a way they think is better.

If you want to gatekeep your attention, please write down the rules you’re gatekeeping by. What can I do to get past the gatekeeping? If you gatekeep your attention based on your intuition and have no transparency or accountability, that is a recipe for bias and irrationality. (Gatekeeping by hidden rules is related to the rule of man vs. the rule of law, as I wrote about. It’s also related to security through obscurity, a well known mistake in software. Basically, when designing secure systems, you should assume hackers can see your code and know how the system is designed, and it should be secure anyway. If your security relies on keeping some secrets, it’s poor security. If your gatekeeping relies on adversaries not knowing how it works, rather than having a good design, you’re making the security through obscurity error. That sometimes works OK if no one cares about you, but it doesn’t work as a robust approach.)

I understand that time, effort, attention, engagement, debate, etc., are limited resources. I advocate having written policies to help allocate those resources effectively. Individuals and groups can both do this. You can plan ahead about what kinds of things you think it’s good to spend attention on, write down decision making criteria, share them publicly, etc., instead of just leaving it to chance or bias. Using written rationality policies to control some of these valuable resources would let them be used more effectively instead of haphazardly. The high value of the resources is a reason in favor of, not against, governing their use with explicit policies that are put in writing and then critically analyzed. (I think intuition has value too, despite the higher risk of bias, so allocating e.g. 50% of your resources to conscious policies and 50% to intuition would be fine.)

“It’s not worth the effort” is the standard excuse for not engaging with arguments. But it’s just an excuse. I’m the one who has researched how to do such things efficiently, how to save effort, etc., without giving up on rationality. They aren’t researching how to save effort and designing good, effort-saving methods, nor do they want the methods I developed. People just say stuff isn’t worth the effort when they’re biased against thinking about it, not as a real obstacle that they actually want a solution to. They won’t talk about solutions to it when I offer, nor will they suggest any way of making progress that would work if they’re in the wrong.

LW Short Story

Here’s a short story as an aside (from memory, so may have minor inaccuracies). Years ago I was talking with Less Wrong (LW) about similar issues. LW and EA are similar places. I brought up some Paths Forward stuff. Someone said basically he didn’t have time to read it, or maybe didn’t want to risk wasting his time. I said the essay explains how to engage with my ideas in time-efficient, worthwhile ways. So you just read this initial stuff and it’ll give you the intellectual methods to enable you to engage with my other ideas in beneficial ways. He said that’d be awesome if true, but he figures I’m probably wrong, so he doesn’t want to risk his time. We appeared to be at an impasse. I have a potential solution with high value that addresses his problems, but he doubts it’s correct and doesn’t want to use his resources to check if I’m right.

My broad opinion is someone in a reasonably large community like LW should be curious and look into things, and if no one does then each individual should recognize that as a major problem and want to fix it.

But I came up with a much simpler, more direct solution.

It turns out he worked at a coffee shop. I offered to pay him the same wage as his job to read my article (or I think it was a specific list of a few articles). He accepted. He estimated how long the stuff would take to read based on word count and we agreed on a fixed number of dollars that I’d pay him (so I wouldn’t have to worry about him reading slowly to raise his payment). The estimate was his idea, and he came up with the numbers and I just said yes.

But before he read it, an event happened that he thought gave him a good excuse to back out. He backed out. He then commented on the matter somewhere that he didn’t expect me to read, but I did read it. He said he was glad to get out of it because he didn’t want to read it. In other words, he’d rather spend an hour working at a coffee shop than an hour reading some ideas about rationality and resource-efficient engagement with rival ideas, given equal pay.

So he was just making excuses the whole time, and actually just didn’t want to consider my ideas. I think he only agreed to be paid to read because he thought he’d look bad and irrational if he refused. I think the problem is that he is bad and irrational, and he wants to hide it.

More EA

My first essay criticizing EA was about rationality policies, how and why they’re good, and it compared them to the rule of law. After no one gave any rebuttal, or changed their mind, I wrote about my experience with my debate policy. A debate policy is an example of a rationality policy. Although you might expect that conditionally guaranteeing debates would cost time, it has actually saved me time. I explained how it helps me be a good fallibilist using less time. No one responded to give a rebuttal or to make their own debate policy. (One person made a debate policy later. Actually two people claimed to, but one of them was so bad/unserious that I don’t count it. It wasn’t designed to actually deal with the basic ideas of a debate policy, and I think it was made in bad faith because the person wanted to pretend to have a debate policy. As one example of what was wrong with it, they just mentioned it in a comment instead of putting it somewhere that anyone would find it or that they could reasonably link to in order to show it to people in the future.)

I don’t like even trying to talk about specific issues with EA in this broader context where there’s no one to debate, no one who wants to engage in discussion. No one feels responsible for defending EA against criticism (or finding out that EA is mistaken and changing it). I think that one meta issue has priority.

I have nothing against decentralization of authority when many individuals each take responsibility. However, there is a danger when there is no central authority and also no individuals take responsibility for things and also there’s a lack of coordination (leading to e.g. lack of recognition that, out of thousands of people, zero of them dealt with something important).

I think it’s realistic to solve these problems and isn’t super hard, if people want to solve them. I think improving this would improve EA’s effectiveness by over 20%. But if no one will discuss the matter, and the only way to share ideas is by climbing EA’s social hierarchy and becoming more popular with EA by first spending a ton of time and effort saying other things that people like to hear, then that’s not going to work for me. If there is a way forward that could rationally resolve this disagreement, please respond. Or if any individual wants to have a serious discussion about these matters, please respond.

I’ve made rationality research my primary career despite mostly doing it unpaid. That is a sort of charity or “altruism” – it’s basically doing volunteer work to try to make a better world. I think it’s really important, and it’s very sad to me that even groups that express interest in rationality are, in my experience, so irrational and so hard to engage with.



Rationality Policies Tips



Suppose you have some rationality policies, and you always want to and do follow them. You do exactly the same actions you would have without the policies, plus a little bit of reviewing the policies, comparing your actions with the policies to make sure you’re following them, etc.

In this case, are the policies useless and a small waste of time?

No. Policies are valuable for communication. They provide explanations and predictability for other people. Other people will be more convinced that you’re rational and will understand your actions more. You’ll less often be accused of irrationality or bias (or, worse, have people believe you’re being biased without telling you or allowing a rebuttal). People will respect you more and be more interested in interacting with you. It’ll be easier to get donations.

Also, written policies enable critical discussion of the policies. Having the policies lets people make suggestions or share critiques. So that’s another large advantage of the policies even when they make no difference to your actions. People can also learn from your policies and start using some of the same policies for themselves.

It’s also fairly unrealistic that the policies make no difference to your actions. Policies can help you remember and use good ideas more frequently and consistently.

Example Rationality Policies

“When a discussion is hard, start using an idea tree.” This is a somewhat soft, squishy policy. How do you know when a discussion is hard? That’s up to your judgment. There are no objective criteria given. This policy could be improved, but as written it’s still much better than nothing. It will work sometimes due to your own judgment, and other people who know about your policy can also suggest that a discussion is hard and it’s time to use an idea tree.

A somewhat less vague policy is, “When any participant in a discussion thinks the discussion is hard, start using an idea tree.” In other words, if you think the discussion is tough and a tree would help, you use one. And also, if your discussion partner claims it’s tough, you use one. Now there is a level of external control over your actions. It’s not just up to your judgment.

External control can be triggered by measurements or other parts of reality that are separate from other people (e.g. “if the discussion length exceeds 5000 words, do X”). It can also be triggered by other people making claims or judgments. It’s important to have external control mechanisms so that things aren’t just left up to your judgment. But you need to design external control mechanisms well so that you aren’t controlled to do bad things.
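As a toy sketch of what an external trigger could look like (the 5000-word threshold is just the example number from above, and the function is hypothetical):

    # Toy policy check: the trigger isn't left solely to my own judgment.
    WORD_LIMIT = 5000  # example objective threshold

    def should_use_idea_tree(discussion_word_count, a_participant_says_its_hard):
        # Fires on an objective measurement OR on another participant's claim.
        return discussion_word_count > WORD_LIMIT or a_participant_says_its_hard

    print(should_use_idea_tree(6200, False))  # True: the word-count trigger fires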

It’s problematic if you dislike or hate something but your policy makes you do it. It’s also problematic to have no policy and just do what your emotions want, which could easily be biased. An alternative would be to set the issue aside temporarily to actively do a lot of introspection and investigation, possibly followed by self-improvement.

A more flexible policy would be, “When any participant in a discussion thinks the discussion is hard, start using at least one option from my Hard Discussion Helpers list.” The list could contain using an idea tree and several other options such as doing grammar analysis or using Goldratt’s evaporating clouds.

More about Policies

If you find your rationality policies annoying to follow, or if they tell you to take inappropriate actions, then the solution is to improve your policy writing skill and your policies. The solution is not to give up on written policies.

If you change policies frequently, you should label them (all of them or specific ones) as being in “beta test mode” or something else to indicate they’re unstable. Otherwise you would mislead people. Note: It’s very bad to post written policies you aren’t going to follow; that’s basically lying to people in an unusually blatant, misleading way. But if you post a policy with a warning that it’s a work in progress, then it’s fine.

One way to dislike a policy is you find it takes extra work to use it. E.g. it could add extra paperwork so that some stuff takes longer to get done. That could be fine and worth it. If it’s a problem, try to figure out lighter weight policies that are more cost effective. You might also judge that some minor things don’t need written policies, and just use written policies for more important and broader issues.

Another way to dislike a policy is you don’t want to do what it says for some other reason than saving time and effort. You actually dislike that action. You think it’s telling you to do something biased, bad or irrational. In that case, there is a disagreement between your ideas about rationality that you used to write the policy and your current ideas. This disagreement is important to investigate. Maybe your abstract principles are confused and impractical. Maybe you’re rationalizing a bias right now and the policy is right. Either way – whether the policy or current idea is wrong – there’s a significant opportunity for improvement. Finding out about clashes between your general principles and the specific actions you want to do is important and those issues are worth fixing. You should have your explicit ideas and intuitions in alignment, as well as your abstract and concrete ideas, your big picture and little picture ideas, your practical and intellectual ideas, etc. All of those types of ideas should agree on what to do. When they don’t, something is going wrong and you should improve your thinking.

Some people don’t value opportunities to improve their thinking because they already have dozens of those opportunities. They’re stuck on a different issue other than finding opportunities, such as the step of actually coming up with solutions. If that’s you, it could explain a resistance to written policies. They would make pre-existing conflicts of ideas within yourself more explicit when you’re trying to ignore a too-long list of your problems. Policies could also make it harder to follow the inexplicit compromises you’re currently using. They’d make it harder to lie to yourself to maintain your self-esteem. If you have that problem, I suggest that it’s worth it to try to improve instead of just kind of giving up on rationality. (Also, if you do want to give up on rationality, or your ideas are such a mess that you don’t want to untangle them, then maybe EA and CF are both the wrong places for you. Most of the world isn’t strongly in favor of rationality and critical discussion, so you’ll have an easier time elsewhere. In other words, if you’ve given up on rationality, then why are you reading this or trying to talk to people like me? Don’t try to have it both ways and engage with this kind of article while also being unwilling to try to untangle your contradictory ideas.)

