
Effective Altruism Is Mostly Wrong About Its Causes

EA has gotten a few billion dollars to go towards its favored charitable causes. What are they? Here are some top cause areas:

  • AI alignment
  • animal welfare and veganism
  • disease and physical health
  • mental health
  • poverty
  • environmentalism
  • left wing politics

This is not a complete list. They care about some other stuff such as Bayesian epistemology. They also have charities related to helping the EA movement itself and they put a bunch of resources into evaluating other charities.

These causes are mostly bad and harmful. Overall, I estimate EA is doing more harm than good. For their billions of dollars, despite their cost-effectiveness oriented mission, they’ve created negative value.

EA’s Causes

Meta charities are charities whose job is to evaluate other charities and then pass on money to the best ones. That has upsides and downsides. One of the downsides is that it makes it less obvious which charities the EA movement funds.

Giving What We Can has a list of recommended organizations to donate to. The first two charities I see listed are a “Top Charities Fund” and an “All Grants Fund”, which are generic meta charities. That’s basically just saying “Give us money and trust us to use it for good things on any topic we choose.”

The second group of charities I see is related to animal welfare, but all three are meta charities which fund other animal welfare charities. So it’s unclear what the money is actually for.

Scrolling down, next I see three meta charities related to longtermism. I assume these mostly donate to AI alignment, but I don’t know for sure, and they do care about some other issues like pandemics. You might expect pandemics to be categorized under health not longtermism/futurism, but one of the longtermism meta charity descriptions explicitly mentions pandemics.

Scrolling again, I see a climate change meta charity, a global catastrophe risk meta charity (vague), and an EA Infrastructure charity that tries to help the EA cause.

Scrolling again, I finally see some specific charities. There are two about malaria, then one about childhood vaccinations in Nigeria. Scrolling horizontally, they have a charity about treating depression in Africa alongside iodine and vitamin A supplements and deworming. There are two about happiness and one about giving poor people some cash while also spending money advocating for Universal Basic Income (a left wing political cause). It’s weird to me how these very different sorts of causes get mixed together: physical health, mental health and political activism.

Scrolling down, next is animal welfare. After that comes some pandemic stuff and one about getting people to switch careers to work in these charities.

I’m sure there’s better information somewhere else about what EA actually funds and how much money goes to what. But I’m just going to talk about how good or bad some of these causes are.

Cause Evaluations

AI Alignment

This cause has various philosophical premises. If they’re wrong, the cause is wrong. There’s no real way to debate them, it’s hard even to get clarification on what their premises are, and there’s significant variation in beliefs between different advocates, which makes it harder to have any specific target to criticize.

Their premises are things like Bayesian epistemology, induction, and there being no objective morality. It’s unclear how trying to permanently (mind) control the goals of AIs differs from trying to enslave them. They’re worried about a war with the AIs, but they’re the aggressors looking to disallow AIs from having regular freedoms and human rights.

They’re basically betting everything on Popper and Deutsch being wrong, but they don’t care to read and critique Popper and Deutsch. They don’t address Popper’s refutation of induction or Deutsch on universal intelligence. If humans already have universal intelligence, then, in short, AIs will at best be like us, not something super different and way more powerful.

Animal Welfare

The philosophical premises are wrong. To know whether animals suffer you need to understand things about intelligence and ideas. That’s based on epistemology. The whole movement lacks any serious interest in epistemology. Or in simple terms, they don’t have reasonable arguments to differentiate a mouse from a Roomba (with a bit better software). A Roomba uncontroversially can’t suffer, and it still wouldn’t be able to suffer if you programmed it a bit better, gave it legs and feet instead of wheels, gave it a mouth and stomach instead of a battery recharging station, etc.

The tribalism and political activism are wrong too. Pressuring corporations and putting out propaganda is the wrong approach. If you want to make the world better, educate people about better principles and ways of thinking. People need rationality and reasonable political philosophy. Skipping those steps and jumping straight into specific causes, without worrying about the premises underlying people’s reasoning and motivations, is bad. It encourages people to do unprincipled and incorrect stuff.

Basically, for all sorts of causes that have anything to do with politics, there are tons of people working to do X, and also tons of people working to stop X or to get something that contradicts X. The result is a lot of people working against each other. If you want to make a better world, you have to stop participating in those tribalist fights and start resolving those conflicts. That requires connecting what you do to principles people can agree on, and teaching people better principles and reasoning skills (which requires first learning those things yourself, and also being open to debate and criticism about them).

See also my state of the animal rights debate tree diagram. I tried to find vegans or other advocates who had answers to it, or relevant literature that would add anything to it, but basically couldn’t get any useful answers anywhere.

Let’s also try to think about the bigger picture in a few ways. Why are factory farms so popular? What is causing that?

Do people not have enough money to afford higher quality food? If so, what is causing that? Maybe a lack of capitalism, or maybe a lack of socialism, depending on which political philosophy is right. You have to actually think about political philosophy to have reasonable opinions about this stuff and reach conclusions. You shouldn’t be taking action before that. I don’t think there exists a charity that cares about animal welfare and would use anti-poverty work as the method to achieve it. That’s too indirect for people or something, so they should get better at reasoning…

A ton of Americans do have money to afford some better food. Is the problem lack of awareness of how bad factory farms are, the health concerns they create for humans, or lack of knowledge of which brands or meats are using factory farms? Would raising awareness help a lot? I saw something claiming that in a survey over 50% of Americans said they thought their meat was from animals that were treated pretty well, but actually like 99% of US meat is from factory farms, so a ton of people are mistaken. I find that plausible. Raising awareness is something some charities work on, but often in shrill, propagandistic, aggressive, alienating or tribalist ways, rather than providing useful, reliable, unbiased, non-partisan information.

Maybe the biggest issue with factory farms in the US is laws and regulations (including subsidies) which were written largely by lobbyists for giant food corporations, and which are extremely hostile to their smaller competitors. That is plausible to me. How much animal welfare work is oriented towards this problem? I doubt it’s much, since I’ve seen a decent amount of animal welfare stuff but never seen this mentioned. And what efforts are oriented towards planning and figuring out whether this is the right problem to work on, and coming up with a really good plan for how to make a change? So often people rush to try to change things without recognizing how hard, expensive and risky change is, and without making damn sure they’ve thought everything through, that the change will actually work as intended, and that the plan to cause the change will work as intended too.

Left Wing Politics

More broadly, any kind of left wing political activism is just fighting with the right instead of finding ways to promote social harmony and mutual benefit. It’s part of a class warfare mindset. The better way is in short classical liberalism, which neither the current left nor right knows much about. It’s in short about making a better society for everyone instead of fighting with each other. Trying to beat the rival political tribe is counter-productive. Approaches which transcend politics are needed.

Mental Health

Psychiatry is bad. It uses power to do things to people against their consent, and it’s manipulative, and its drugs are broadly poison. Here’s a summary from Thomas Szasz, author of The Myth of Mental Illness.

Environmentalism

As with many major political debates, both sides of the global warming debate are terrible and have no idea what they’re talking about. And there isn’t much thoughtful debate. I’ve been unable to find refutations of Richard Lindzen’s skeptical arguments related to water vapor. The “97% of scientists agree” thing is biased junk, and even if it were true it’s an appeal to authority, not a rational argument. The weather is very hard to predict even in the short term, and a lot of people have made a lot of wrong predictions about long-term warming or cooling. They often seem motivated by other agendas like deindustrialization, anti-technology attitudes or anti-capitalism, with global warming serving as an excuse. Some of what they say sounds a lot like “Let’s do trillions of dollars of economic harm taking steps that we claim are not good enough and won’t actually work.” There are fairly blatant biases in things like scientific research funding; science is being corrupted as young scientists are under pressure to reach certain conclusions.

There are various other issues including pollution, running out of fossil fuels, renewables, electric cars and sustainability. These are all the kinds of things where

  1. People disagree with you. You might be wrong. You might be on the wrong side. What you’re doing might be harmful.
  2. You spend time and resources fighting with people who are working against you.
  3. Most people involved have tribalist mindsets.
  4. Political activism is common.
  5. Rational, effective, productive debate, to actually reasonably resolve the disagreements about what should be done, is rare.

What’s needed is to figure out ways to actually rationally persuade people (not use propaganda on them) and reach more agreement about the right things to do, rather than responding to a controversy by putting resources into one side of it (while others put resources into the other side, and you kinda cancel each other out).

Physical Health

These are the EA causes I agree with the most. Childhood vaccinations, vitamin A, iodine and deworming sound good. Golden rice sounds like a good idea to me (not mentioned here but I’ve praised it before). I haven’t studied this stuff a ton. The anti-malaria charities concern me because I expect that they endorse anti-DDT propaganda.

I looked at New Incentives, which gets Nigerian babies vaccinated (6 visits to a vaccination clinic, finishing at 15 months old) for around $20 each by offering parents money to do it. I checked whether they were involved with covid vaccine political fighting and they appear to have stayed out of that, so that’s good. I have a big problem with charities that have some good cause but then get distracted from it to do tribalist political activism and fight with some enemy political tribe. A charity that just sticks to doing something useful is better. So this one actually looks pretty good and cost effective based on brief research.
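
For a sense of scale, here’s a minimal back-of-the-envelope sketch of that cost figure. Only the roughly $20-per-child total comes from the paragraph above; the per-visit payment and the overhead split are assumptions invented for illustration, not New Incentives’ actual numbers.

```python
# Back-of-the-envelope cost sketch for a conditional-cash-transfer vaccination
# program. Only the ~$20-per-child total is taken from the text above; the
# per-visit payment and overhead fraction are assumptions for illustration.

visits_per_child = 6          # clinic visits, finishing at 15 months old
payment_per_visit = 2.50      # assumed cash incentive paid to parents per visit (USD)
overhead_fraction = 0.25      # assumed share of total spending going to operations/delivery

incentives_per_child = visits_per_child * payment_per_visit
total_cost_per_child = incentives_per_child / (1 - overhead_fraction)

print(f"incentives per child: ${incentives_per_child:.2f}")   # $15.00
print(f"total cost per child: ${total_cost_per_child:.2f}")   # $20.00, matching the rough figure above
```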

Pandemic prevention could be good but I’d be concerned about what methods charities are using. My main concern is they’d mostly do political activism and fight with opponents who disagree with them, rather than finding something actually productive and effective to do. Also pandemic prevention is dominated quite a lot by government policy, so it’s hard to stay out of politics. Just spending donations to stockpile some masks, vaccines and other supplies (because some governments don’t have enough) doesn’t sound like a very good approach, and that’d be more about mitigation than prevention anyway.

Even something like childhood vaccination in Nigeria has some concerns. Looking at it in isolation, sure, it makes some things better. It’s a local optimum. But what bigger picture does it fit into?

For example, why isn’t the Nigerian government handling this? Is it essentially a subsidy for the Nigerian government, which lets them buy more of something else, and underfund this because charities step in and help here? Could the availability of charity for some important stuff cause budgets to allocate money away from those important things?

Does giving these poor Nigerians this money let their government tax them at higher rates than it otherwise would, so some of the money is essentially going to the Nigerian government not to the people being directly helped? Might some of the money be stolen in other ways, e.g. by thugs in areas where there’s inadequate protection against violence? Might the money attract any thugs to the area? Might the thugs pressure some women to have more babies and get the money so that they can steal it? I don’t know. These are just some initial guesses about potential problems that I think are worth some consideration. If I actually studied the topic I’m sure I’d come up with some other concerns, as well as learning that some of my initial concerns are actually important problems while others aren’t.

Why are these Nigerians so poor that a few dollars makes a significant difference to them? What is causing that? Is it bad government policies? Is there a downside to mitigating the harm done by those policies, which helps them stay in place and not seem so bad? And could we do more good by teaching the world about classical liberalism or how to rationally evaluate and debate political principles? Could we do more good by improving the US so it can set a better example for other countries to follow?

Helping some poor Nigerians is a superficial (band-aid) fix to some specific problems, which isn’t very effective in the big picture. It doesn’t solve the root causes/problems involved. It doesn’t even try to. It just gives some temporary relief to some people. And it has some downsides. But the downsides are much smaller compared to most of EA’s other causes, and the benefits are real and useful – they make some people’s lives clearly better – even if they’re just local optima.

Meta Charities

I think EA’s work on evaluating the effectiveness of charity interventions has some positive aspects, but a lot of it is focused on local optima, which can actually make it harmful overall even if some of the details are correct. Focusing attention and thinking on the wrong issues makes it harder for more important issues to get attention. If no one were doing any kind of planning, it would be easier to come along, say “Hey, let’s do some planning”, and have people listen. If there’s already tons of planning of the wrong types, a pro-planning message is easier to ignore.

EA will look at how well charities work on their own terms, without thinking about the cause and effect logic of the full situation. I’ve gone over this a few times in other sections. Looking at cost per childhood vaccination is a local optimum. The bigger picture includes things like how it may subsidize a bad government or local thugs, or how it’s just a temporary short-term mitigation while there are bigger problems like bad economic systems that cause poverty. How beneficial is it really to fix one instance of a problem when there are systems in the world which keep creating that problem over and over? Dealing with those systems that keep causing the problems is more important. In simple terms, imagine a guy going around breaking people’s legs while you went around giving them painkillers… There is a local optimum of helping people who are in pain, but it’s much more important to deal with the underlying cause. From what I’ve seen, EA’s meta charity evaluation is broadly about local optima, not bigger picture understanding of the causes of problems, so it often treats symptoms of a problem rather than the real problem. They will just measure how much pain relief an intervention provides and evaluate how good it is on that basis (unless they manage to notice a bigger picture problem, which they occasionally do, but they aren’t good at systematically finding those).

Also they try to compare charities that do different kinds of things, so you have benefits in different dimensions which they try to compare. They tend to do this, in short, by weighting factor summing, which fundamentally doesn’t work (it’s completely wrong, broken and impossible, and means there are hidden and generally biased thought processes responsible for the conclusions reached). As a quick example, one of the EA charities I saw was working on developing meat alternatives. This approaches animal welfare in a very different way than, say, giving painkillers to animals on factory farms or doing political activist propaganda against the big corporations involved. So there’s no way to directly compare which is better in simple numerical terms. As much as people like summary numbers, people need to learn to think about concepts better.
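
To make concrete what that kind of weighting factor summing looks like, here’s a minimal sketch. The charities, factor scores and weights are all made up for illustration; they aren’t real data from EA or any charity evaluator. The point is just to show where the hidden judgment calls live.

```python
# Hypothetical sketch of weighting factor summing for comparing charities.
# The charities, factor scores and weights below are invented for illustration;
# they are not real data from EA or any charity evaluator.

# Each charity gets a score (0-10) on several hard-to-compare dimensions.
charities = {
    "meat alternatives R&D":        {"animals_helped": 9, "certainty": 3, "cost_efficiency": 4},
    "painkillers for farm animals": {"animals_helped": 5, "certainty": 7, "cost_efficiency": 6},
    "corporate pressure campaign":  {"animals_helped": 6, "certainty": 4, "cost_efficiency": 5},
}

# The weights encode how much each dimension supposedly "matters". Choosing
# them is the hidden judgment call criticized above: the final ranking is
# driven by these numbers, which have no principled derivation.
weights = {"animals_helped": 0.5, "certainty": 0.3, "cost_efficiency": 0.2}

def weighted_sum(scores, weights):
    """Collapse multi-dimensional scores into a single summary number."""
    return sum(weights[factor] * scores[factor] for factor in weights)

# Rank the charities by their summary number, highest first.
for name, scores in sorted(charities.items(), key=lambda kv: -weighted_sum(kv[1], weights)):
    print(f"{name}: {weighted_sum(scores, weights):.2f}")
```

Change the weights, or the scoring scale for any one factor, and the ranking reorders. The single summary number conceals those judgment calls rather than resolving them.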

Details

I could do more detailed research and argument about any of these causes, but it’s unrewarding because I don’t think EA will listen and seriously engage. That is, I think a lot of my ideas would not be acted on or refuted with arguments. So I’d still think I’m right, and be given no way to change my mind, and then people would keep doing things I consider counter-productive. Also, I already have gone into a lot more depth on some of these issues, and didn’t include it here because it’s not really the point.

Why do some people have a different view of EA and criticism, or different experiences with that? Why do some people feel more heard and influential? Two big reasons. First, you can social climb at EA and then influence them. Second, compared to me, most people do criticism that’s simpler and more focused on local optima, not foundations, fundamentals, root causes or questioning premises. (I actually try to give simple criticisms sometimes, like “This is a misquote” or “This is factually false”, but people will complain about that too. I won’t get into that here.) People like criticism better when it doesn’t cross field boundaries and make them think about things they aren’t good at, don’t know much about, aren’t experienced at, or aren’t interested in. My criticisms tend to raise fundamental, important challenges and to be multi-disciplinary instead of just staying within one field and not challenging its premises.

Conclusion

The broad issues are people who aren’t very rational or open to criticism and error correction, who then pursue causes which might be mistaken and harmful, and who don’t care much about rational persuasion or rational debate. People seem so willing to just join a tribe and fight opponents, and that is not the way to make the world better. Almost all useful work transcends those fights and stays out of them. And the most useful work, which will actually fix things in very cost-effective, lasting ways, is related to principles and better thinking. Help people think better and then the rest is so much easier.

There’s something really, really bad about working against other people who think they’re right, where you just spend your effort trying to counteract their effort. Even if you’re right and they’re wrong, that’s terrible for cost effectiveness. Persuading them would be so much better. If you can’t figure out how to do that, why not? What are the causes that prevent rational persuasion? Do you not know enough? Are they broken in some way? If they are broken, why not help them instead of fighting with them? Why not be nice and sympathetic instead of viewing them as enemies to be beaten by destructively overwhelming their resources with even more resources? I value things like social harmony and cooperation rather than adversarial interactions, and (as explained by classical liberalism) I don’t think there are inherent conflicts of interest between people that require (Marxist) class warfare or which disallow harmony and mutual benefit. People who are content to work against other people, in a fairly direct fight, generally seem pretty mean to me, which is rather contrary to the supposed kind, helping-others spirit of EA and charity.

EA’s approach to causes, as a whole, is a big bet on jumping into stuff without enough planning, without understanding the root causes of things, and without figuring out how to make the right changes. They should read e.g. Eli Goldratt on transition tree diagrams and how he approaches making changes within one company. If you want to make big changes affecting more people, you need much better planning than that. EA doesn’t do or have that kind of planning, which encourages a short-term mindset of pursuing local optima that might be counter-productive, without adequately considering that you might be wrong and in need of better planning.

People put so much work into causes while putting way too little into figuring out whether those causes are actually beneficial, and understanding the whole situation and what other actions or approaches might be more effective. EA talks a lot about effectiveness but they mostly mean optimizing cost/benefit ratios given a bunch of unquestioned premises, not looking at the bigger picture and figuring out the best approach with detailed cause-effect understanding and planning.

More posts related to EA.


Elliot Temple on December 3, 2022


Want to discuss this? Join my forum.

(Due to multi-year, sustained harassment from David Deutsch and his fans, commenting here requires an account. Accounts are not publicly available. Discussion info.)