Effective Altruism Hurts People Who Donate Too Much

I was having an extended discussion with CB from EA when the forum’s licensing rules were changed, so I quit the EA forum. I asked if he wanted to continue at my forum instead. He said yes and registered an account, but got stuck before posting.

I clarified that he couldn’t post because he hadn’t paid the one-time $20 price of an account. I offered him a free account if the $20 would be a financial burden, but said that if he could afford it, I’d request that he pay, because if he values conversing with me at less than $20, then I don’t think it’s a good use of my time.

Despite (I think) wanting to talk with me more, and having already spent hours on it, he changed his mind over the one-time $20 price. He said:

I don't think I will pay $20 because all the money I earn beyond my basic needs is going to charities.

That makes EA sound somewhat like a cult which has brainwashed him. And I’ve heard of EA doing this to other people. Some highly involved and respected EA people have admitted to feeling guilty about buying any luxuries, such as a coffee, and have struggled to live normal lives. This has been a known problem within EA for many years, yet they have no good plan to fix it; they continue to hurt people and to take around the maximum amount of money you can get from someone, just like some cults do. Further, EA encourages people to change careers to do EA-related work; it tries to take over people’s entire lives, just as cults often do. EAs dating other EAs is common too, sometimes polyamorously (dating an EA makes EA a larger influence in your life, and weird sexual practices are common in cults).

I don’t recall ever accusing anything of being a cult before, and overall I don’t think EA is a cult. But I think EA crosses a line here and deserves to be compared to a cult. EA clearly has differences from a cult, but having these similarities with cults is harmful.

EA does not demand that you donate the maximum. They make sure to say it’s OK to donate at whatever level you’re comfortable with, or something along those lines. But they also bring up ideas about maximizing giving, comparing the utility of every action you could take, and maximizing utility (or impact or effectiveness or good). They don’t have good ideas about where or how to draw a line limiting your giving, so I think they leave that up to individuals, many of whom won’t come up with good solutions themselves.

CB’s not poor, and he wants something, and the stakes are much higher than $20, but he can’t buy it because he feels that he has to give EA all his money. I think he already put hundreds of dollars’ worth of his time into the conversation, and I certainly did, and I think he planned to put in hundreds more, but somehow $20 is a dealbreaker. He works in computing, so his time could easily be worth over $100/hr.

I wonder if he considered that, instead of talking with me, he could have spent those hours volunteering at a soup kitchen. Or he could have spent those hours working and making more money to donate. He might need a second job or side gig to adjust how many hours he works, but he could do that. If he’s a programmer, he could make a phone or web app on the side and set his own schedule for that additional work. (What about burnout? Having intellectual conversations also takes mental energy, so he had some to spare.)

Anyway, it’s very sad to see someone all twisted up like this. From what I can tell, he’s fairly young and naive, and doesn’t know much about money or economics.

Note/update: After I finished writing this article, before I posted it, CB claimed that he exaggerated about how much he donates. That partial retraction has not changed my mind about the general issues, although it makes his individual situation somewhat less bad and does address some specific points like whether he could buy a (cheap) book.

Investing In Yourself

Buying a conversation where he’d learn something could make CB wiser and more effective, which could lead to him earning more money, making better decisions about which charities to donate to, and other benefits.

I wonder if CB also doesn’t buy books because they aren’t part of his “basic needs”.

People should be encouraged to invest in themselves, not discouraged from it. EA is harming intellectual progress by handicapping a bunch of relatively smart, energetic young people so they don’t use financial resources to support their own personal progress and development.

This one thing – taking a bunch of young people who are interested in ideas and making it harder for them to develop into great thinkers – may do a huge amount of harm. Imagine if Karl Popper or Richard Feynman had donated so much money that he couldn’t buy any books. Or pick whoever you think is important. What if the rich people several hundred years ago had all donated their money instead of hiring tutors to teach their kids – could that have stopped the Enlightenment? (Note how that would have been doubly bad. It’d prevent some of their kids from turning into productive scientists and intellectuals, and it’d also take away gainful employment from the tutors, who were often scientists or other intellectuals without much money to fund their life or work.)

On a related note, basically none of EA’s favored charities are about finding the smartest or most rational people and helping them. But helping some of the best people could easily do more good than helping some of the poorest people. If you help a poor person have a happier, healthier life, that does some good. If you help a smart, middle-class American kid become a great thinker who writes a few really effective self-help books, his books could improve the lives of millions of people.

Admittedly, it’s hard to figure out how to help that kid. But people could at least try to work on that problem, brainstorm ideas, and critique their initial plans. There could be ongoing research to try to develop a good approach. But there isn’t much interest in that stuff.

The least they could do is leave that kid alone, rather than convince him to donate all his money above basic needs when he’s a young adult so he can’t afford books, online courses, and other resources that would be useful and enjoyable to him.

Also, at EA, I’ve been talking about criticism, debate and error correction. I’ve been trying to get them to consider their fallibility and the risk of being wrong about things, and to do more about that. So, for example, I think EA is mistaken about many of its causes. CB could estimate a 1% chance that I have a point he could learn from, and assume it would only affect his own future donations; talking to me would still be a good deal, because he’ll donate more than $2,000 in the future, so even multiplied by 1% that’s worth more than $20. So talking to me more would be cost-effective (in dollars, which I think is CB’s concern, even though time matters too). Not considering things like this, and not seeking to invest in risk reduction, is partly related to not investing in yourself and partly related to poor, irrational attitudes about fallibility.
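
To make the arithmetic explicit, here’s a minimal sketch (the 1% and $2,000 figures are the rough illustrative estimates from above, not measured values):

```python
# Rough expected-value estimate using the illustrative numbers from above.
p_i_have_a_point = 0.01   # CB's hypothetical estimate that I'm right about something
future_donations = 2000   # dollars; a low estimate of CB's future donations
cost_of_talking = 20      # one-time forum membership price

expected_value = p_i_have_a_point * future_donations  # = $20
# Break-even at exactly $2,000 of future donations; anything above that
# makes the conversation a good deal in donation dollars alone.
print(expected_value >= cost_of_talking)  # True
```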

Also, I do tons of work (philosophy research, writing, discussion and video creation) trying to make the world better, mostly for free. Isn’t investing in me a way to make the world better? If you pay me $20, why is that any worse than donating it to a charity? Some people literally donate money to me like a charity because they respect and value what I do. Similarly, some EA charities give grants to intellectuals to do work on topics such as rationality, so I could receive such a grant. Donating to a grant-making organization that gave me a grant would count as charity, but giving me money directly counts less, especially if you’re buying something from me (forum access). The marginal cost of forum access for me is $0, so this isn’t like buying a hand-made table from me, where I had to put in some time and materials to make it, so my profit margin is only 25%. My marginal profit margin on forum memberships is 100% because I’m going to keep running the forum whether or not CB joins. EA focuses people’s attention on charities, has an incorrectly negative view of trade, and biases people against noticing that buying from small creators generally helps make the world better even though it’s not “charity”.

What CB Donates To

Are CB’s donations doing good?

For around $20, he could pay for six visits to a vaccination clinic for a baby in rural northern Nigeria. It can be half a day of travel to reach a clinic, so paying people a few dollars makes a meaningful difference to whether they make the trip.

I wonder which vaccinations are actually important for people living in small, isolated communities like that. Some vaccinations seem much more relevant in a city or if you come in contact with more people. How many of them will ever visit a big city in their life? I don’t know. Also even if vaccinations provide significant value to them, they’re really poor, so maybe something else would improve their lives more.

I looked through charities that EA recommends, and that vaccination charity looked to me like one of the best options. Plus I read a bit about it, unlike some of the other promising options, like a charity that gives people vitamin A. Some charities get distracted by political activism, so I checked whether they were taking a political side about the covid vaccine, and they didn’t appear to be, which is nice to see. I think finding charities that stay out of politics is one of the better selection methods people could and should use. EA cares a lot about evaluating and recommending charities, but I’m not aware of them using being non-political as a criterion. EA itself is pretty political.

I’m doubtful that CB donates to that kind of cause, which provides fairly concrete health benefits for poor people in distant countries. Based on our discussions and his profile, I think his top cause is animal welfare. He may also donate to left-wing energy causes (like anti-fossil-fuel activism) and possibly AI alignment. I think those are terrible causes, where his donations would likely do more harm than good. I’m not going to talk about AI alignment here; it isn’t very political, and its problems are more about bad epistemology and moral philosophy (plus a lack of willingness to debate with critics).

Animal welfare and anti-fossil-fuel stuff are left wing political activism. Rather than staying out of politics, those causes get involved in politics on purpose. (Not every single charity in those spaces is political, just most of them.)

Let me explain using a different issue as an example, one where there’s tons of visible political propaganda coming from both sides. The US pro-life right puts out lots of propaganda, and they recently had a major victory getting Roe v. Wade overturned. Now they’re changing some state laws to hurt people, particularly women. Meanwhile, the pro-choice left also puts out propaganda. To some extent, the propaganda from the two sides cancels each other out.

Imagine a pro-choice charity that said “Next year, the pro-lifers are expected to spend $10,000,000,000 on propaganda. We must counter them with truth. Please donate to us or our allies, because we need $10 billion a year just to break even and cancel out what they’re doing. If we can get $15B/yr, we can start winning.”

Imagine that works. They get $15B and outspend the pro-lifers who only spend $10B. The extra $5B helps shift public perception to be more pro-choice. Suppose pro-choice is the correct view and getting people to believe it is actually good. We’ll just ignore the risk of being on the wrong side. (Disclosure: I’m pro-abortion.)

Then there’s $25B being spent in total: $20B is basically incinerated, and the remaining $5B makes the world better. That is really bad. 80% of the money isn’t doing any good. This is super inefficient. In general, the best case scenario when donating to tribalist political activism looks kind of like this.
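
As a minimal sketch of that arithmetic (all figures are the hypothetical ones from the example above):

```python
# Hypothetical propaganda spending from the example above.
pro_life_spending = 10e9    # $10B on one side
pro_choice_spending = 15e9  # $15B on the other side

total = pro_life_spending + pro_choice_spending             # $25B spent overall
canceled = 2 * min(pro_life_spending, pro_choice_spending)  # $20B mutually canceled out
net_persuasion = total - canceled                           # $5B that actually shifts opinion

print(canceled / total)  # 0.8 -> 80% of the money accomplishes nothing
```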

If you want to be more effective, you have to be more non-partisan, more focused on rationality, and stay out of propaganda wars.

In simplified terms, pro-choice activism is more right than wrong, whereas I fear CB is donating to activism which is more wrong than right.

Saving Money and Capital Accumulation

I fear that spending only on basic needs, and donating the rest, means CB isn’t saving (enough) money.

If you don’t save money, you may end up being a burden on society later. You may need to receive help from the government or from charities. By donating money that should be saved, you risk later taking money away from others and being a drain on resources because you don’t have enough to take care of yourself.

CB’s kids may have to take out student loans, and end up in a bunch of debt, because CB donated a bunch of money instead of putting it in a college fund for them.

CB may end up disabled. He may get fired and struggle to find a new job, perhaps through no fault of his own. Jobs could get harder to come by due to recession, natural disaster, or many other problems. He shouldn’t treat his expected future income as reliable. Plus, he says he wants to stop working in computing and switch to an EA-related job. That probably means taking a significant pay cut. He should plan ahead, and save money now while he has higher income, to help enable him to take a lower paying job later if he wants to. As people get older, their expenses generally go up, and their income generally goes up too. If he wants to take a pay cut when he’s older, instead of having a higher income to deal with higher expenses, that could be a major problem, especially if he didn’t save money now to deal with it.

Does saving money waste it? No. Saving means refraining from consumption. If you want to waste your money, buy frivolous stuff. If you work and save, you’re contributing to society. You provide work that helps others, and by saving you don’t ask for anything in return (now – but you can ask for it later when you spend your money).

Saving isn’t like locking up a bunch of machine tools in a vault so they don’t benefit anyone. People save money, not tools or food. Money is a medium of exchange. As long as there is enough money in circulation, then money accomplishes its purpose. There’s basically no harm in keeping some cash in a drawer. Today, keeping money in a bank is just a number in a computer, so it doesn’t even take physical cash out of circulation.

Money basically represents a debt where you did something to benefit others, and now you’re owed something equally valuable in return from others. When you save money, you’re not asking for what you’re owed from others. You helped them for nothing in return. It’s a lot like charity.

Instead of saving cash, you can invest it. This is less like charity. You can get interest payments or the value of your investment can grow. In return for not spending your money now, you get (on average) more of it.

If you invest money instead of consuming it, then you contribute to capital accumulation. You invest in businesses not luxuries. In other words, (as an approximation) you help pay for machines, tools and buildings, not for ice cream or massages. You invest in factories and production that can help make the world a better place (by repeatedly creating useful products), not short term benefits.

The more capital we accumulate, the higher the productivity of labor. The higher the productivity of labor, the higher workers’ wages and the more useful products get created. There are details, like negotiations over how much of the additional wealth goes to whom, but the overall point is that more capital accumulation means more wealth is produced and there’s more for everyone. Making the pie bigger matters more than fighting over who gets which slice, though the distribution of wealth does matter too.

When you donate to a charity which spends the money on activism or even vaccines, that is consumption. It’s using up wealth to accomplish something now. (Not entirely: a healthy worker is more productive, so childhood vaccines are to some extent an investment in human capital. But they aren’t even trying to evaluate the most effective way to invest in human capital to raise productivity. That isn’t their goal, so they’re probably not being especially cost-effective at it.)

When you save money and invest it, you’re helping with capital accumulation – you’re helping build up total human wealth. When you consume money on charitable causes or luxuries, you’re reducing total human wealth.

Has EA ever evaluated how much good investing in an index fund does and compared it to any of their charities? I doubt it. (An index fund is a way to have your investment distributed among many different companies so you don’t have to guess which specific companies are good. It also doesn’t directly give money to the company because you buy stock from previous investors, but as a major simplification we can treat it like investing in the company.)

I’ve never seen anything from EA talking about how much good you’ll do if you don’t donate, and subtracting that from the good done by donating, to see how much additional good donating does (which could be below zero in some cases or even on average – who knows without actually investigating). If you buy some fancy dinner and wine, you get some enjoyment, so that does some good. If you buy a self-help book or online course and invest in yourself, that does more good. If you buy a chair or frying pan, that’s also investing in yourself and your life, and does good. If you invest the money in a business, that does some good (on average). Or maybe you think so many big businesses are so bad that you think investing in them makes the world worse, which I find plausible, and it reminds me of my view that many non-profits are really bad… I have a negative view of most large companies, but overall I suspect that, on average, non-profits are worse than for-profit businesses.
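
The general point can be written as a simple subtraction. Here’s a minimal sketch with made-up placeholder values (the point is the formula, not the numbers):

```python
# Made-up placeholder estimates of good done per dollar, in arbitrary units.
good_per_dollar_donated = 1.0    # e.g. a given charity
good_per_dollar_otherwise = 0.7  # e.g. investing, self-improvement, consumption

# The *additional* good from donating is the difference, not the raw number.
marginal_good = good_per_dollar_donated - good_per_dollar_otherwise
# If the alternative use does more good, this is negative:
# donating would make things worse than the alternative use of the money.
print(marginal_good)
```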

EA has a bunch of anti-capitalists who don’t know much about economics. CB in particular is so ignorant of capitalism that he didn’t know it prohibits fraud. He doesn’t know, in a basic sense, what the definition of capitalism even is. And he also doesn’t know that he doesn’t know. He thought he knew, and he challenged me on that point, but he was wrong and ignorant.

These people need to read Ludwig von Mises, both for the economics and for the classical liberalism. They don’t understand harmony vs. conflicts of interest, and a lot of what they do, like political activism, is based on assuming there are conflicts of interest and that the goal should be to make your side win. They often don’t aim at win/win solutions, mutual benefit and social harmony. They don’t really understand peace, freedom, and how a free market is a proposal for creating social harmony and benefiting everyone – with mechanisms that are superior to what charities try to do – so getting capitalism working better could easily do more good than what they’re doing now, but they wouldn’t really even consider such a plan. (I’m aware that I haven’t explained capitalism enough here for people to learn about it from this article. It may make sense to people who already know some stuff. If you want to know more, read Mises, read Capitalism: A Treatise on Economics, and feel free to ask questions or seek debate at my forum. If you find this material difficult, you may first need to put effort into learning how to learn: getting better at reading, research and critical thinking, and managing your schedule, motivations, emotions and projects over time.)


CB was more intellectually tolerant and friendly than most EAers. Most of them can’t stand to talk to someone like me who has a different perspective and some different philosophical premises. He could, so in that way he’s better than them. He has a ton of room for improvement at critical thinking, rigor and precision, but he could easily be categorized as smart.

So it’s sad to see EA hurt him in such a major way that really disrupts his life. Doing so much harm is pretty unusual – cults can do it but most things in society don’t. It’s ironic and sad that EA, which is about doing good, is harming him.

And if I was going to try to improve the world and help people, people like CB would be high on my list for who to help. I think helping some smart and intellectually tolerant people would do more good than childhood vaccines in Nigeria, let alone leftist (or rightist) political activism. The other person I know of who thought this way – about prioritizing helping some of the better people, especially smart young people – was Ayn Rand.

I am trying to help these people – that’s a major purpose of sharing writing – but it’s not my top priority in life. I’m not an altruist. Like Rand and some classical liberals, though, I don’t believe there’s a conflict between the self and the other. Promoting altruism is fundamentally harmful because it spreads the idea that you must choose between yourself and others, and that there’s a conflict requiring winners and losers. I think Rand should have promoted harmony more and egoism or selfishness less, but at least her intellectual position was that everyone can win and benefit. EA doesn’t say that; it intentionally asks people like CB to sacrifice their own good to help others, implying that there’s a conflict between what’s good for CB and what’s good for others – in other words, that social harmony is impossible because there’s no common good that’s good for everyone.

I’ll end by saying that EA pushes young people to rush to donate way too much money when they’re often quite ignorant and don’t even know much about which causes are actually good or bad. EA has some leaders who are more experienced and knowledgeable, but many of them have political and tribalist agendas, aren’t rational, and won’t debate or address criticism of their views. It’s totally understandable for a young person to have no idea what capitalism is and to be gullible in some ways, but it’s not OK for EA to take advantage of that gullibility, keep its membership ignorant of what capitalism is, and discourage its members from reading Mises or speaking with people like me who know about capitalism and classical liberalism. EA has leaders who know more about capitalism, and hate it, but won’t write reasonable arguments or debate the matter in an effective, truth-seeking way. They won’t point out how/why/where Mises was wrong; instead they guide young people not to read Mises and to donate all their money beyond basic needs to the causes that EA leaders like.

EDIT 2022-12-05: For context, see the section "Bad" EAs, caught in a misery trap in https://michaelnotebook.com/eanotes/ which had previously alerted me that EA has issues with over-donating, guilt, difficulty justifying spending on yourself, etc., which affect a fair number of people.


Effective Altruism Is Mostly Wrong About Its Causes

EA has gotten a few billion dollars to go towards its favored charitable causes. What are they? Here are some top cause areas:

  • AI alignment
  • animal welfare and veganism
  • disease and physical health
  • mental health
  • poverty
  • environmentalism
  • left wing politics

This is not a complete list. They care about some other stuff such as Bayesian epistemology. They also have charities related to helping the EA movement itself and they put a bunch of resources into evaluating other charities.

These causes are mostly bad and harmful. Overall, I estimate EA is doing more harm than good. For their billions of dollars, despite their cost-effectiveness oriented mission, they’ve created negative value.

EA’s Causes

Meta charities are charities whose job is to evaluate other charities and then pass money on to the best ones. That has upsides and downsides. One downside is that it makes it less obvious which charities the EA movement funds.

Giving What We Can has a list of recommended organizations to donate to. The first two charities I see listed are a “Top Charities Fund” and an “All Grants Fund”, which are generic meta charities. That’s basically just saying “Give us money and trust us to use it for good things on any topic we choose.”

The second group of charities I see is related to animal welfare, but all three are meta charities which fund other animal welfare charities. So it’s unclear what the money is actually for.

Scrolling down, next I see three meta charities related to longtermism. I assume these mostly donate to AI alignment, but I don’t know for sure, and they do care about some other issues like pandemics. You might expect pandemics to be categorized under health not longtermism/futurism, but one of the longtermism meta charity descriptions explicitly mentions pandemics.

Scrolling again, I see a climate change meta charity, a global catastrophe risk meta charity (vague), and an EA Infrastructure charity that tries to help the EA cause.

Scrolling again, I finally see some specific charities. There are two about malaria, then one about childhood vaccinations in Nigeria. Scrolling horizontally, they have a charity about treating depression in Africa along side iodine and vitamin A supplements and deworming. There are two about happiness and one about giving poor people some cash while also spending money advocating for Universal Basic Income (a left wing political cause). It’s weird to me how these very different sorts of causes get mixed together: physical health, mental health and political activism.

Scrolling down, next is animal welfare. After that comes some pandemic stuff and one about getting people to switch careers to work in these charities.

I’m sure there’s better information somewhere else about what EA actually funds and how much money goes to what. But I’m just going to talk about how good or bad some of these causes are.

Cause Evaluations

AI Alignment

This cause has various philosophical premises. If they’re wrong, the cause is wrong. There’s no real way to debate them, and it’s hard to even get clarifications on what their premises are, and there’s significant variation in beliefs between different advocates which makes it harder to have any specific target to criticize.

Their premises are things like Bayesian epistemology, induction, and there being no objective morality. How trying to permanently (mind) control the goals of AIs differs from trying to enslave them is unclear. They’re worried about a war with the AIs but they’re the aggressors looking to disallow AIs from having regular freedoms and human rights.

They’re basically betting everything on Popper and Deutsch being wrong, but they don’t care to read and critique Popper and Deutsch. They don’t address Popper’s refutation of induction or Deutsch on universal intelligence. If humans already have universal intelligence, then in short AIs will at best be like us, not be something super different and way more powerful.

Animal Welfare

The philosophical premises are wrong. To know whether animals suffer you need to understand things about intelligence and ideas. That’s based on epistemology. The whole movement lacks any serious interest in epistemology. Or in simple terms, they don’t have reasonable arguments to differentiate a mouse from a Roomba (with a bit better software). A Roomba uncontroversially can’t suffer, and it still wouldn’t be able to suffer if you programmed it a bit better, gave it legs and feet instead of wheels, gave it a mouth and stomach instead of a battery recharging station, etc.

The tribalism and political activism are wrong too. Pressuring corporations and putting out propaganda is the wrong approach. If you want to make the world better, educate people about better principles and ways of thinking. People need rationality and reasonable political philosophy. Skipping those steps and jumping straight into specific causes, without worrying about the premises underlying people’s reasoning and motivations, is bad. It encourages people to do unprincipled and incorrect stuff.

Basically, for all sorts of causes that have anything to do with politics, there are tons of people working to do X, and also tons of people working to stop X or to get something that contradicts X. The result is a lot of people working against each other. If you want to make a better world, you have to stop participating in those tribalist fights and start resolving those conflicts. That requires connecting what you do to principles people can agree on, and teaching people better principles and reasoning skills (which requires first learning those things yourself, and also being open to debate and criticism about them).

See also my state of the animal rights debate tree diagram. I tried to find any vegans or other advocates who had any answers to it or relevant literature to add anything to it, but basically couldn’t get any useful answers anywhere.

Let’s also try to think about the bigger picture in a few ways. Why are factory farms so popular? What is causing that?

Do people not have enough money to afford higher quality food? If so, what is causing that? Maybe lack of capitalism or lack of socialism. You have to actually think about political philosophy to have reasonable opinions about this stuff and reach conclusions. You shouldn’t be taking action before that. I don’t think there exists a charity that cares about animal welfare and would use anti-poverty work as the method to achieve it. That’s too indirect for people or something, so they should get better at reasoning…

A ton of Americans do have money to afford some better food. Is the problem lack of awareness of how bad factory farms are, the health concerns they create for humans, or lack of knowledge of which brands or meats are using factory farms? Would raising awareness help a lot? I saw something claiming that in a survey over 50% of Americans said they thought their meat was from animals that were treated pretty well, but actually like 99% of US meat is from factory farms, so a ton of people are mistaken. I find that plausible. Raising awareness is something some charities work on, but often in shrill, propagandistic, aggressive, alienating or tribalist ways, rather than providing useful, reliable, unbiased, non-partisan information.

Maybe the biggest issue with factory farms in the US is laws and regulations (including subsidies) which were written largely by lobbyists for giant food corporations, and which are extremely hostile to their smaller competitors. That is plausible to me. How much animal welfare work is oriented towards this problem? I doubt it’s much since I’ve seen a decent amount of animal welfare stuff but never seen this mentioned. And what efforts are oriented towards planning and figuring out whether this is the right problem to work on, and coming up with a really good plan for how to make a change? So often people rush to try to change things without recognizing how hard, expensive and risky change is, and making damn sure they’ve thought everything through and the change will actually work as intended and the plan to cause the change will work as intended too.

Left Wing Politics

More broadly, any kind of left wing political activism is just fighting with the right instead of finding ways to promote social harmony and mutual benefit. It’s part of a class warfare mindset. The better way is in short classical liberalism, which neither the current left nor right knows much about. It’s in short about making a better society for everyone instead of fighting with each other. Trying to beat the rival political tribe is counter-productive. Approaches which transcend politics are needed.

Mental Health

Psychiatry is bad. It uses power to do things to people against their consent, and it’s manipulative, and its drugs are broadly poison. Here’s a summary from Thomas Szasz, author of The Myth of Mental Illness.


Global Warming

As with many major political debates, both sides of the global warming debate are terrible and have no idea what they’re talking about. And there isn’t much thoughtful debate. I’ve been unable to find refutations of Richard Lindzen’s skeptical arguments related to water vapor. The “97% of scientists agree” thing is biased junk, and even if it were true it’s an appeal to authority, not a rational argument. The weather is very hard to predict even in the short term, and a lot of people have made a lot of wrong predictions about long term warming or cooling. They often seem motivated by other agendas like deindustrialization, anti-technology attitudes or anti-capitalism, with global warming serving as an excuse. Some of what they say sounds a lot like “Let’s do trillions of dollars of economic harm taking steps that we claim are not good enough and won’t actually work.” There are fairly blatant biases in things like scientific research funding – science is being corrupted as young scientists are under pressure to reach certain conclusions.

There are various other issues including pollution, running out of fossil fuels, renewables, electric cars and sustainability. These are all the kinds of things where

  1. People disagree with you. You might be wrong. You might be on the wrong side. What you’re doing might be harmful.
  2. You spend time and resources fighting with people who are working against you.
  3. Most people involved have tribalist mindsets.
  4. Political activism is common.
  5. Rational, effective, productive debate, to actually reasonably resolve the disagreements about what should be done, is rare.

What’s needed is to figure out ways to actually rationally persuade people (not use propaganda on them) and reach more agreement about the right things to do, rather than responding to a controversy by putting resources into one side of it (while others put resources into the other side, and you kinda cancel each other out).

Physical Health

These are the EA causes I agree with the most. Childhood vaccinations, vitamin A, iodine and deworming sound good. Golden rice sounds like a good idea to me (not mentioned here but I’ve praised it before). I haven’t studied this stuff a ton. The anti-malaria charities concern me because I expect that they endorse anti-DDT propaganda.

I looked at New Incentives which gets Nigerian babies vaccinated (6 visits to a vaccination clinic finishing at 15 months old) for around $20 each by offering parents money to do it. I checked if they were involved with covid vaccine political fighting and they appear to have stayed out of that, so that’s good. I have a big problem with charities that have some good cause but then get distracted from it to do tribalist political activism and fight with some enemy political tribe. A charity that just sticks to doing something useful is better. So this one actually looks pretty good and cost effective based on brief research.

Pandemic prevention could be good but I’d be concerned about what methods charities are using. My main concern is they’d mostly do political activism and fight with opponents who disagree with them, rather than finding something actually productive and effective to do. Also pandemic prevention is dominated quite a lot by government policy, so it’s hard to stay out of politics. Just spending donations to stockpile some masks, vaccines and other supplies (because some governments don’t have enough) doesn’t sound like a very good approach, and that’d be more about mitigation than prevention anyway.

Even something like childhood vaccination in Nigeria has some concerns. Looking at it in isolation, sure, it makes some things better. It’s a local optimum. But what bigger picture does it fit into?

For example, why isn’t the Nigerian government handling this? Is it essentially a subsidy for the Nigerian government, which lets them buy more of something else, and underfund this because charities step in and help here? Could the availability of charity for some important stuff cause budgets to allocate money away from those important things?

Does giving these poor Nigerians this money let their government tax them at higher rates than it otherwise would, so some of the money is essentially going to the Nigerian government not to the people being directly helped? Might some of the money be stolen in other ways, e.g. by thugs in areas where there’s inadequate protection against violence? Might the money attract any thugs to the area? Might the thugs pressure some women to have more babies and get the money so that they can steal it? I don’t know. These are just some initial guesses about potential problems that I think are worth some consideration. If I actually studied the topic I’m sure I’d come up with some other concerns, as well as learning that some of my initial concerns are actually important problems while others aren’t.

Why are these Nigerians so poor that a few dollars makes a significant difference to them? What is causing that? Is it bad government policies? Is there a downside to mitigating the harm done by those policies, which helps them stay in place and not seem so bad? And could we do more good by teaching the world about classical liberalism or how to rationally evaluate and debate political principles? Could we do more good by improving the US so it can set a better example for other countries to follow?

Helping some poor Nigerians is a superficial (band-aid) fix to some specific problems, which isn’t very effective in the big picture. It doesn’t solve the root causes/problems involved. It doesn’t even try to. It just gives some temporary relief to some people. And it has some downsides. But the downsides are much smaller compared to most of EA’s other causes, and the benefits are real and useful – they make some people’s lives clearly better – even if they’re just local optima.

Meta Charities

I think EA’s work on evaluating the effectiveness of charity interventions has some positive aspects, but a lot of it is focused on local optima, which can actually make it harmful overall even if some of the details are correct. Focusing attention and thinking on the wrong issues makes it harder for more important issues to get attention. If no one were doing any kind of planning, it’d be easier to come along, say “Hey, let’s do some planning”, and have anyone listen. If there’s already tons of planning of the wrong types, a pro-planning message is easier to ignore.

EA will look at how well charities work on their own terms, without thinking about the cause and effect logic of the full situation. I’ve gone over this a few times in other sections. Looking at cost per childhood vaccination is a local optimum. The bigger picture includes things like how it may subsidize a bad government or local thugs, or how it’s just a temporary short-term mitigation while there are bigger problems like bad economic systems that cause poverty. How beneficial is it really to fix one instance of a problem when there are systems in the world which keep creating that problem over and over? Dealing with those systems that keep causing the problems is more important. In simple terms, imagine a guy was going around breaking people’s legs, and you went around giving them painkillers… There is a local optimum of helping people who are in pain, but it’s much more important to deal with the underlying cause. From what I’ve seen, EA’s meta charity evaluation is broadly about local optima, not bigger picture understanding of the causes of problems, so it often treats symptoms of a problem, not the real problem. They will just measure how much pain relief an intervention provides and evaluate how good it is on that basis (unless they manage to notice a bigger picture problem, which they occasionally do, but they aren’t good at systematically finding those).

Also, they try to compare charities that do different kinds of things, so you have benefits in different dimensions that they try to compare. They tend to do this, in short, by weighted factor summing, which fundamentally doesn’t work (it’s completely wrong, broken and impossible, and means there are hidden and generally biased thought processes responsible for the conclusions reached). As a quick example, one of the EA charities I saw was doing something about trying to develop meat alternatives. This approaches animal welfare in a very different way than, say, giving painkillers to animals on factory farms or doing political activist propaganda against the big corporations involved. So there’s no way to directly compare which is better in simple numerical terms. As much as people like summary numbers, people need to learn to think about concepts better.
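
To illustrate what’s wrong with weighted factor summing, here’s a minimal sketch (all the scores, dimensions and weights are made up, not taken from any real EA evaluation): two plausible-looking weightings flip the ranking, so the summary number mostly reflects the hidden choice of weights.

```python
# Made-up scores for two very different animal welfare approaches,
# on three incommensurable dimensions.
scores = {
    "meat alternatives R&D": {"welfare": 3, "long_term": 9, "evidence": 4},
    "factory farm activism": {"welfare": 8, "long_term": 2, "evidence": 7},
}

def weighted_sum(charity, weights):
    # Collapse multiple dimensions into one number by weighting and summing.
    return sum(scores[charity][dim] * w for dim, w in weights.items())

# Two plausible-seeming weightings produce opposite rankings.
for weights in (
    {"welfare": 0.6, "long_term": 0.2, "evidence": 0.2},
    {"welfare": 0.2, "long_term": 0.6, "evidence": 0.2},
):
    ranking = sorted(scores, key=lambda c: weighted_sum(c, weights), reverse=True)
    print(weights, "->", ranking)
```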


I could do more detailed research and argument about any of these causes, but it’s unrewarding because I don’t think EA will listen and seriously engage. That is, I think a lot of my ideas would not be acted on or refuted with arguments. So I’d still think I’m right, and be given no way to change my mind, and then people would keep doing things I consider counter-productive. Also, I already have gone into a lot more depth on some of these issues, and didn’t include it here because it’s not really the point.

Why do some people have a different view of EA and criticism, or different experiences with that? Why do some people feel more heard and influential? Two big reasons. First, you can social climb at EA then influence them. Second, compared to me, most people do criticism that’s simpler and more focused on local optima not foundations, fundamentals, root causes or questioning premises. (I actually try to give simple criticisms sometimes like “This is a misquote” or “This is factually false” but people will complain about that too. But I won’t get into that here.) People like criticism better when it doesn’t cross field boundaries and make them think about things they aren’t good at, don’t know much about, aren’t experienced at, or aren’t interested in. My criticisms tend to raise fundamental, important challenges and be multi-disciplinary instead of just staying within one field and not challenging its premises.


The broad issues are people who aren’t very rational or open to criticism and error correction, who then pursue causes which might be mistaken and harmful, and who don’t care much about rational persuasion or rational debate. People seem so willing to just join a tribe and fight opponents, and that is not the way to make the world better. Useful work almost all transcends those fights and stays out of them. And the most useful work, which will actually fix things in very cost-effective, lasting ways, is related to principles and better thinking. Help people think better and then the rest is so much easier.

There’s something really really bad about working against other people, who think they’re right, and you just spend your effort to try to counter-act their effort. Even if you’re right and they’re wrong, that’s so bad for cost effectiveness. Persuading them would be so much better. If you can’t figure out how to do that, why not? What are the causes that prevent rational persuasion? Do you not know enough? Are they broken in some way? If they are broken, why not help them instead of fighting with them? Why not be nice and sympathetic instead of viewing them as the enemies to be beaten by destructively overwhelming their resources with even more resources? I value things like social harmony and cooperation rather than adversarial interactions, and (as explained by classical liberalism) I don’t think there are inherent conflicts of interest between people that require (Marxist) class warfare or which disallow harmony and mutual benefit. People who are content to work against other people, in a fairly direct fight, generally seem pretty mean to me, which is rather contrary to the supposed kind, helping-others spirit of EA and charity.

EA’s approach to causes, as a whole, is a big bet on jumping into stuff without enough planning, without understanding the root causes of things, and without figuring out how to make the right changes. They should read e.g. Eli Goldratt on transition tree diagrams and how he approaches making changes within one company. If you want to make big changes affecting more people, you need much better planning than that. EA doesn’t do or have such planning, which encourages a short-term mindset of pursuing local optima that might be counter-productive, without adequately considering that you might be wrong and in need of better planning.

People put so much work into causes while putting way too little into figuring out whether those causes are actually beneficial, and understanding the whole situation and what other actions or approaches might be more effective. EA talks a lot about effectiveness but they mostly mean optimizing cost/benefit ratios given a bunch of unquestioned premises, not looking at the bigger picture and figuring out the best approach with detailed cause-effect understanding and planning.

More posts related to EA.


Effective Altruism Related Articles

I wanted to make it easier to find all my Effective Altruism (EA) related articles. I made an EA blog category.

Below I link to more EA-related stuff which isn't included in the category list.

Critical Fallibilism articles:

I also posted copies of some of my EA comments/replies in this topic on my forum.

You can look through my posts and comments on the EA site via my EA user profile.

I continued a discussion with an Effective Altruist at my forum. I stopped using the EA forum because they changed their rules to require giving up your property rights for anything you post there (so e.g. anyone can sell your writing without your permission and without paying you).

I also made videos related to EA:


EA Should Raise Its Standards

I quit the Effective Altruism forum due to a new rule requiring all new posts and comments be basically put in the public domain without copyright, so anyone could e.g. sell a book of my posts without my consent (they’d just have to give attribution). More info. I had a bunch of draft posts, so I’m posting some of them here with minimal editing. In general, I’m not going to submit them as link posts at EA myself. If you think they should be shared with EA as link posts, please do it yourself. I’m happy for other people to share links to my work at EA or on social media. Please share stuff in whatever ways you think are good to do.

I think EA could be over 50% more effective by raising its standards. Community norms should care more about errors and about using explicit reasoning and methods.

For a small example, quotes should be exact. There should be a social norm that says misquotes are unacceptable. You can’t just change the words (without brackets), put it in quote marks, and publish it. That’s not OK. I believe this norm doesn’t currently exist and there would be significant resistance to it. Many people would think it’s not a big deal, and that it’s a pedantic or autistic demand to be so literal with quotes. I think this is a way that people downplay and accept errors, which contributes to lowered effectiveness.

There are similar issues with many sorts of logical, mathematical, grammatical, factual and other errors where a fairly clear and objective “correct answer” can be determined, which should be uncontroversial, yet people don’t take seriously that getting it right is important. Errors should be corrected. Retractions should be issued. Post-mortems should be performed. What process allowed the error to happen? What changes could be made to prevent similar errors from happening in the future?

It’s fine for beginners to make mistakes, but thought leaders in the community should be held to higher standards, and those higher standards should be an aspirational ideal that the beginners want to achieve, rather than something that’s seen as unnecessary, bad or too much work. It’s possible to avoid misquotes and logical errors without it being a major burden; if someone finds it a large burden, that means they need to practice more until they improve their intuition and subconscious mind. Getting things right in these ways should be easy and something you can do while tired, distracted, autopiloting, etc.

Fixes like these won’t make EA far more effective by themselves. They will set the stage to enable more advanced or complex improvements. It’s very hard to make more important improvements when frequently making small errors. Getting the basics right enables working more effectively on more advanced issues.

One of the main more advanced issues is rational debate.

Another is not trusting yourself. Don’t bet anything on your integrity or lack of bias when you can avoid it. There should be a strong norm against doing anything that would fail if you have low integrity or bias. If you can find any alternative which doesn’t rely on your rationality, do that instead. Bias is common. Learning to be better at not fooling yourself is great, but you’ll probably screw it up a lot. If you can approach things so that you don’t have the opportunity to fool yourself, that’s better. There should be strong norms for things like transparency and following flowcharted methods and rules that dramatically reduce the scope for bias. This can be applied to debate as well as to other things. And getting debate right enables criticism when people don’t live up to norms; without getting debate right, norms have to be enforced in significant part with social pressure, which compromises the rationality of the community and prevents it from clearly seizing the rationality high ground in debates with other groups.


Criticizing The Scout Mindset (including a misquote)

I quit the Effective Altruism forum due to a new rule requiring all new posts and comments be basically put in the public domain without copyright, so anyone could e.g. sell a book of my posts without my consent (they’d just have to give attribution). More info. I had a bunch of draft posts, so I’m posting some of them here with minimal editing. In general, I’m not going to submit them as link posts at EA myself. If you think they should be shared with EA as link posts, please do it yourself. I’m happy for other people to share links to my work at EA or on social media. Please share stuff in whatever ways you think are good to do.

This is quick notes, opinions and criticisms about the book The Scout Mindset by Julia Galef (which EA likes and promotes). I’m not going in depth, being very complete, or giving many quotes, because I don’t care much. I think it’s a bad book that isn’t worth spending more time on, and I don’t expect the author or her fans to listen to, engage with, value, appreciate or learn from criticism. If they were reasonable and wanted to interact, then I think this would be plenty to get the discussion/debate started, and I could give more quotes and details later if that would help make progress in our conversation.

The book is pretty shallow.

Galef repeatedly admits she’s not very rational, sometimes openly and sometimes by accident. The open admissions alone imply that the techniques in the book are inadequate.

She mentions that while writing the book she gathered a bunch of studies that agree with her but was too biased to check their quality. She figured out during writing that she should check them and she found that lots were bad. If you don’t already know that kinda stuff (that most studies like that are bad, that studies should be checked instead of just trusting the title/abstract, or that you should watch out for being biased), maybe you’re too new to be writing a book on rationality?

The book is written so that it’s easy to read it, think you’re already pretty good, and not change. Or a reader might improve a little.

The book has nothing that I recognized as substantive original research or thinking. Does she have any ideas of her own?

She uses biased examples, e.g. Musk, Bezos and Susan Blackmore are all used as positive examples. In each case, there are many negative things one could say about them, but she only says positive things about them which fit her narrative. She never tries to consider alternative views about them or explain any examples that don’t easily fit her narrative. Counter-examples or apparent counter-examples are simply left out of the book. Another potential counter-example is Steve Jobs, who is a better and more productive person than any of the people used as examples in her book, yet he has a reputation rather contrary to the scout mindset. That’s the kind of challenging apparent/potential counter-example that she could have engaged with but didn’t.

She uses an example of a Twitter thread where someone thought email greetings revealed sexism, and she (the tweet author) got cheered for sharing this complaint. Then she checked her data and found that her claim was factually wrong. She retracted. Great? Hold on. Let’s analyze a little more. Are there any other explanations? Even if the original factual claims were true, would sexism necessarily follow? Why not try to think about other narratives? For example, maybe men are less status oriented or less compliant with social norms, so that is why they are less inclined to use fancier titles when addressing her. It doesn’t have to be sexism. If you want to blame sexism, you should look at how they treat men, not just at how they treat one woman. Another potential explanation is that men dislike you individually and don’t treat other women the same way, which could be for some reason other than sexism. E.g. maybe it’s because you’re biased against men but not biased against women, so men pick up on that and respect you less. Galef never thinks anything through in depth and doesn’t consider additional nuances like these.

For Blackmore, the narrative is that anyone can go wrong and rationality is about correcting your mistakes. (Another example is someone who fell for a multi-level marketing scheme before realizing the error.) Blackmore had some experience, then started believing in the paranormal, then did science experiments to test that stuff, and none of it worked, and she changed her mind. Good story? Hold on. Let’s think critically. Did Blackmore do any new experiments? Were the old experiments refuting the paranormal inadequate or incomplete in some way? Did she review them and critique them? The story mentioned none of this. So why did she do redundant experiments and waste resources to gather the same evidence that already existed? And why did it change her mind when it provided no actual new information? Because she was biased to respect the results of her own experiments but not prior experiments done by other people (that she pointed out no flaws in)? This fits the pro-evidence, pro-science-experiments bias of LW/Galef. They’re too eager to test things without considering that, often, we already have plenty of evidence and we just need to examine and debate it better. Blackmore didn’t need any new evidence to change her mind, and getting funding to do experiments like that speaks to her privilege. Galef brings up multiple examples of privilege without showing any awareness of it; she just seems to want to suck up to high status people, and not think critically about their flaws, rather than actually consider their privileges. Not only was Blackmore able to fund bad experiments; she was then able to change her mind and continue her career. Why did she get more opportunities after doing such a bad job earlier in her career? Yes, she improved (how much, really, though?). But other people didn’t suck in the first place, then also improved, and never got such great opportunities.

Possibly all the examples in the book of changing one’s mind were things that Galef’s social circle can agree with instead of be challenged by. They all changed their minds to agree with Galef more, not less. E.g. an example was used of someone becoming more convinced of global warming, which, in passing, smeared some people on the climate change skeptic side as being really biased, dishonest, etc. (That’s probably true of some of them, but it’s not a good thing to throw around as an in-passing smear based on hearsay. And it’s true of people on the opposite side of the debate too, so it’s biased to only say it about the side you disagree with, undermining and discrediting them in passing while having the deniability of saying it was just an example of something else about rationality.) There was a pro-choicer who became less dogmatic but remained pro-choice, and I think Galef’s social circle is also pro-choice but trying not to be dogmatic about it. There was also a pro-vaccine person who was careful and strategic about bringing up the subject with his anti-vax wife but didn’t reconsider his own views at all, though he and the author did display some understanding of the other side’s point of view and why some superficial pro-vax arguments won’t work. So the narrative is: if you understand the point of view of the people who are wrong, then you can persuade them better. But (implied) if you have center-left views typical of EA and LW people, then you won’t have to change your mind much since you’re mostly right.

Galef’s Misquote

Here’s a slightly edited version of my post on my CF forum about a misquote in the book. I expect the book has other misquotes (and factual errors, bad cites, etc.) but I didn’t look for them.

The Scout Mindset by Julia Galef quotes a blog post:

“Well, that’s too bad, because I do think it was morally wrong.”[14]

But the words in the sentence are different in the original post:

Well that’s just too bad, because I do think it was morally wrong of me to publish that list.

She left out the “just” and also cut off the quote early, which made it look like the end of a sentence when it wasn’t. Also, a previous quote from the same post changes the italics, even though the italics match in this one.

The book also summarizes events related to this blog post, and the story told doesn’t match reality (as I see it by looking at the actual posts). Also, I guess he didn’t like the attention from the book, because he took his whole blog down and the link in the book’s footnote is dead. The book says they’re engaged, so maybe he had a say in whether to be included and mistakenly thought he would like the attention? Hopefully… Also, the engagement may explain the biased summary of the story that she gave in her book about not being biased.

She also wrote about the same events:

He even published a list titled “Why It’s Plausible I’m Wrong,”

This is misleading because he didn’t put up a post with that title. It’s a section title within a post, and she didn’t give a cite, so it’s hard to find. Also, her capitalization differs from the original, which said “Why it’s plausible I’m wrong”. The capitalization change matters because it makes the text look more like a title when it isn’t.

BTW I checked archives from other dates. Neither the most recent working one nor the oldest version has any edits to this wording.

What is going on? This book is from a major publisher and there’s no apparent benefit to misquoting the post in this way. She didn’t twist his words to serve some agenda; she just changed them enough that she’s clearly doing something wrong, but with no apparent motive (besides maybe minor editing to make the quote sound more polished?). And it’s a blog post; wouldn’t she use copy/paste to get the quote? Did she have the blog post open in her browser and go back and forth between it and her manuscript in order to type in the quote by hand!? That would be a bizarre process. Or does she or someone else change quotes during editing passes in the same way they’d edit non-quotes? Do they just run Grammarly or similar, see snippets from the book, and edit them without reading the whole paragraph and realizing they’re within quote marks?
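Whatever her process was, this category of error is mechanically checkable. Here’s a minimal sketch in Python of the kind of check an author or publisher could run over a manuscript’s quotations before publication. The normalization choices (e.g. treating curly and straight quote marks as equivalent) are my own assumptions about what shouldn’t count as a misquote:

```python
import unicodedata

def normalize(text):
    # Fold typographic quotes/apostrophes to plain ones and collapse
    # whitespace, so purely cosmetic differences don't count as misquotes.
    text = unicodedata.normalize("NFKC", text)
    for fancy, plain in (("\u201c", '"'), ("\u201d", '"'),
                         ("\u2018", "'"), ("\u2019", "'")):
        text = text.replace(fancy, plain)
    return " ".join(text.split())

def quote_matches_source(quote, source):
    # A quotation should appear verbatim (after normalization) in its source.
    return normalize(quote) in normalize(source)

source = ("Well that's just too bad, because I do think it was "
          "morally wrong of me to publish that list.")
book_quote = "Well, that's too bad, because I do think it was morally wrong."

print(quote_matches_source(book_quote, source))  # False: the wording differs
```

Run against an archive of the blog post, a check like this would have flagged the changed wording automatically.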

My Email to Julia Galef

Misquote in Scout Mindset:

“Well, that’s too bad, because I do think it was morally wrong.”[14]

But the original sentence was actually:

Well that’s just too bad, because I do think it was morally wrong of me to publish that list.

The largest change is deleting the word "just".

I wanted to let you know about the error and also ask if you could tell me what sort of writing or editing process is capable of producing that error. I've seen similar errors in other books and would really appreciate it if I could understand what the cause is. I know one cause is carelessness when typing in a quote from paper, but this is from a blog post and was presumably copy/pasted.

Galef did not respond to this email.


EA and Responding to Famous Authors

I quit the Effective Altruism forum due to a new rule requiring all new posts and comments be basically put in the public domain without copyright, so anyone could e.g. sell a book of my posts without my consent (they’d just have to give attribution). More info. I had a bunch of draft posts, so I’m posting some of them here with minimal editing. In general, I’m not going to submit them as link posts at EA myself. If you think they should be shared with EA as link posts, please do it yourself. I’m happy for other people to share links to my work at EA or on social media. Please share stuff in whatever ways you think are good to do.

I think EA has the resources to attempt to respond to every intellectual who has sold over 100,000 books in English making arguments that contradict EA. EA could write rebuttals to all popular, well-known rival positions that are written in books. You could start with the authors who have sold over a million books.

There are major downsides to using popularity as your only criterion for what to respond to. It’s important to also have ways to respond to unpopular criticism. But responding to influential criticism makes sense because people know about it, and just ignoring it makes it look like you don’t care to consider other ideas or have no answers.

Answering the arguments of popular authors could be one project, out of ten or more, in which EA attempts to engage with alternative ideas and argue its case.

EA claims to be committed to rationality, but it seems more interested in getting a bunch of charity projects underway and/or better funded ASAP than in taking the time to first do extensive rational analysis to figure out the right ideas to guide charity.

I understand not wanting to get caught up in planning forever and having decision paralysis, but where is the reasonably complete planning and debating that would be adequate to justify getting started?

For example, it seems unreasonable to me to start an altruist movement without addressing Ayn Rand’s criticisms of altruism. Where are the serious essays summarizing, analyzing and refuting her arguments about altruism? She sold many millions of books. Where are the debates with anyone from ARI, or the invitations for any online Objectivists who are interested to come debate with EA? Objectivism has a lot of fans today who are interested in rationality and debate (or at least claim to be), so ignoring them instead of writing anything that could change their minds seems bad. And encouraging discussion with them, instead of discouraging it, would make sense and be more rational. (I’m aware that they aren’t doing better. They aren’t asking EAs to come debate them, hosting more rational debates, writing articles refuting EA, etc. IMO both groups are not doing very well and there’s big room for improvement. I’ve tried to talk to Objectivists to get them to improve before and it didn’t work. Overall, although I’m a big fan of Ayn Rand, I think Objectivist communities today are less open to critical discussion and dissent than EA is.)


Is EA Rational?


I haven’t studied EA much. There is plenty more about EA that I could read. But I don’t want to get involved much unless EA is rational.

By “rational”, I mean capable of (and good at) correcting errors. Rationality, in my mind, enables progress and improvement instead of stasis, being set in your ways, not listening to suggestions, etc. So a key aspect of rationality is being open to criticism, and having ways that changes will actually be made based on correct criticisms.

Is EA rational? In other words, if I study EA and find some errors, and then I write down those errors, and I’m right, will EA then make changes to fix those errors? I am doubtful.

That definitely could happen. EA does make changes and improvements sometimes. Is that not a demonstration of EA’s rationality? Partially, yes, sure. Which is why I’m saying this to EA instead of some other group. I think EA is better at that than most other groups.

But I think EA’s ability to listen to criticism and make changes is related to social status, bias, tribalism, and popularity. If I share a correct criticism and I’m perceived as high status, and I have friends in high places, and the criticism fits people’s biases, and the criticism makes me seem in-group not out-group, and the criticism gains popularity (gets shared and upvoted a bunch, gets talked about by many people), then I would have high confidence that EA would make changes. If all those factors are present, then EA is reliably willing to consider criticism and make changes.

If some of those factors are present, then it’s less reliable, but EA might listen to criticism. If none of those factors are present, then I’m doubtful the criticism will be impactful. I don’t want to have to study EA to find flaws and also make friends with the right people, change my writing style to be a better culture fit with EA, form habits of acting in higher-status ways, and focus on sharing criticisms that fit some pre-existing biases or tribal divisions.

What can be done as an alternative to listening to criticism based on popularity, status, culture-fit, biases, tribes, etc? One option is organized debate with written methodologies that make some guarantees. EA doesn’t do that. Does it do something else?

One thing I know EA does, which is much better than nothing (and better than what many other groups offer), is informal, disorganized debate following unwritten methodologies that vary somewhat by the individuals you’re speaking with. I consider this option inadequate motivation for seriously researching and critically scrutinizing EA.

I could talk to EA people who have read essays about rationality and who are trying to be rational – individually, with no accountability, transparency, or particular responsibilities. I think that’s not good enough and makes it way too easy for social status hierarchies to matter. If EA offered more organized ways of sharing and debating criticism, with formal rules, then people would have to follow the rules and therefore not act based on status. Things like rules, flowcharted methods to follow, or step-by-step actions to take can all help fight against people’s tendency to act based on status and other biases.

It’s good for informal options to exist but they rely on basically “everyone just personally tries to be rational” which I don’t think is good enough. So more formal options, with pro-rationality (and anti-status, anti-bias, etc.) design features should exist too.

The most common objection to such things is that they’re too much work. On an individual level, it’s unclear to me that following a written methodology is more work than following an unwritten methodology. Whatever you do, you have some sort of methods or policies. Also, I don’t really think you can evaluate how much work a methodology is (and how much benefit it offers, since the cost/benefit ratio is what matters) without actually developing that methodology and writing it down first. I think rational debate methodologies which try to reach conclusions about incoming criticisms are broadly untested empirically, so people shouldn’t assume they’d take too long or be ineffective when they can’t point to any examples of them being tried with that result.

And EA has plenty of resources to e.g. task one full-time worker with engaging with community criticism and keeping organized documents that attempt to specify what arguments against EA exist, what counter-arguments there are, and otherwise map out the entire relevant debate as it exists today. Putting in less effort than that looks to me like not trying because the results are unwanted (some people prefer status hierarchies and irrationality, even if they say they like rationality) rather than because the results are highly prized but too expensive. There have been no research programs, afaik, to try to get these kinds of rational debate results more cheaply.

Also, suppose I research EA, come up with some criticisms, and I’m wrong. I informally share my criticisms on the forum and get some unsatisfactory, incomplete answers. I still think I’m right and I have no way to get my error corrected. The lack of access to debate symmetrically prevents whoever is wrong from learning better, whether that’s EA or me. So the outcome is bad either way. Either I’ve come up with a correct criticism but EA won’t change; or I’ve come up with an incorrect criticism but EA won’t explain to me why it’s incorrect in a way that’s adequate for me to change. Blocking conclusive rational debate blocks error correction regardless of which side is right.

Should EA really explain to all their incorrect critics why those critics are wrong? Yes! I think EA should create public explanations, in writing, of why all objections to EA (that anyone actually raises) are wrong. Would that take ~infinite work? No, because you can explain why some category of objection is wrong. You can respond to patterns in the objections instead of addressing every objection individually. This lets you re-use some answers. Doing this would persuade more people that EA is correct, make it much more rewarding to study EA and try to think critically about it, and turn up the minority of cases where EA lacks an adequate answer to a criticism. It would also expose EA’s good answers to review (people might suggest even better answers, or find that, although EA won the argument in some case, there is a weakness in EA’s argument and a better criticism of EA could be made).
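To illustrate the re-use idea, here’s a minimal sketch of routing incoming objections to pre-written answer documents by pattern. The patterns and file names are made up for the example; the point is that only unmatched objections need fresh work:

```python
import re

# Hypothetical mapping from objection patterns to reusable answer documents.
CANNED_ANSWERS = {
    r"\boverhead\b|\badmin costs?\b": "answers/overhead-is-the-wrong-metric.md",
    r"\bearning to give\b": "answers/earning-to-give-objections.md",
    r"\bcharity begins at home\b": "answers/local-vs-global-giving.md",
}

def route_objection(text):
    # Match an incoming objection against known patterns.
    for pattern, answer_doc in CANNED_ANSWERS.items():
        if re.search(pattern, text, re.IGNORECASE):
            return answer_doc
    return None  # novel objection: the valuable minority needing a new answer

print(route_objection("Your charities have high admin costs."))
print(route_objection("Ayn Rand refuted altruism."))  # None: needs a new answer
```

The unmatched cases are the interesting ones: each is either a new criticism worth answering in writing or a gap in the existing answer map.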

In general, I think EA is more willing to listen to criticism that is based on a bunch of shared premises. The more you disagree with and question foundational premises, the less EA will listen and discuss. If you agree on a bunch of foundations then criticize some more detailed matters based on those foundations, then EA will listen more. This results in many critics having a reasonably good experience even though the system (or really lack of system) is IMO fundamentally broken/irrational.

I imagine EA people will broadly dislike and disagree with what I’ve said, in part because I’m challenging foundational issues rather than using shared premises to challenge other issues. I think a bunch of people trying to study rationality and do their best at it is … a lot better than not doing that. But I think it’s inadequate compared to having policies, methodologies, flowcharts, checklists, rules, written guarantees, transparency, accountability, etc., to enable rationality. If you don’t walk people step by step through what to do, you’re going to get a lot of social status behaviors and biases from people who are trying to be rational. Also, if EA has something else that solves the same problems I’m concerned about, in a different way than I suggest approaching them, what is that alternative solution?

Why does writing down step by step what to do help if the people writing the steps have biases and irrationalities of their own? Won’t the steps be flawed? Sure, they may be, but putting them in writing allows critical analysis of the steps from many people. Improving the steps can be a group effort. Whereas when many people each follow their own separate, unwritten steps, those steps are hard to improve.

I do agree with the basic idea of EA: using reason and evidence to optimize charity. I agree that charity should be approached with a scientific and rational mindset rather than with whims, arbitrariness, social signaling or whatever else. I agree that cost/benefit ratios and math matter more than feelings about charity. But unfortunately I don’t think that’s enough agreement to get a positive response when I then challenge EA on what rationality is and how to pursue it. I think critics get much better responses from EA if they have major pre-existing agreement with EA about what rationality is and how to do it, but optimizing rationality itself is crucial to EA’s mission.

In other words, I think EA is optimized for optimizing which charitable interventions are good. It’s pretty good at discussing and changing its mind about cost/benefit ratios of charity options (though questioning the premises behind some charity approaches is less welcome). But EA is not similarly good at discussing and changing its mind about how to discuss, change its mind, and be rational. It’s better at applying rationality to charity topics than to epistemology.

Does this matter? Suppose I couldn’t talk to EA about rational debate itself, but could talk to EA about the costs and benefits of any particular charity project. Is that good enough? I don’t think so. Besides potentially disagreeing with the premises of some charity projects, I also have disagreements regarding how to do multi-factor decision making itself.


EA and Paths Forward


Suppose EA is making an important error. John knows a correction and would like to help. What can John do?

Whatever the answer is, this is something EA should put thought into. They should advertise/communicate the best process for John to use, make it easy to understand and use, and intentionally design it with some beneficial features. EA should also consider having several processes so there are backups in case one fails.

Failure is a realistic possibility here. John might try to share a correction but be ignored. People might think John is wrong even though he’s right. People might think John’s comment is unimportant even though it’s actually important. There are lots of ways for people to reject or ignore a good idea. Suppose that happens. Now EA has made two mistakes which John knows are mistakes and would like to correct. There’s the first mistake, whatever it was, and now also this second mistake of not being receptive to the correction of the first mistake.

How can John get the second mistake corrected? There should be some kind of escalation process for when the initial mistake correction process fails. There is a risk that this escalation process would be abused. What if John thinks he’s right but actually he’s wrong? If the escalation process is costly in time and effort for EA people, and is used frequently, that would be bad. So the process should exist but should be designed in some kind of conservative way that limits the effort it will cost EA to deal with incorrect corrections. Similarly, the initial process for correcting EA also needs to be designed to limit the burden it places on EA. Limiting the burden increases the failure rate, making a secondary (and perhaps tertiary) error correction option more important to have.

When John believes he has an important correction for EA, and he shares it, and EA initially disagrees, that is a symmetric situation. Each side thinks the other is wrong. (That EA is multiple people, and John also might actually be multiple people, makes things more complex, but it doesn’t change the key principles.) The rational thing to do with this kind of symmetric dispute is not to say “I think I’m right” and ignore the other side. If you can’t resolve the dispute – if your knowledge is inadequate to conclude that you’re right – then you should be neutral and act accordingly. Or you might think you have crushing arguments which are objectively adequate to resolve the dispute in your favor, and you might even post them publicly, and think John is responding in obviously unreasonable ways. In that case, you might manage to objectively establish some kind of asymmetry. How to objectively establish asymmetries in intellectual disagreements is a hard, important question in epistemology which I don’t think has received appropriate research attention. (Note: it’s also relevant when there’s a disagreement between two ideas within one person.)

Anyway, what can John do? He can write down some criticism and post it on the EA forum. EA has a free, public forum. That is better than many other organizations, which don’t facilitate publicly sharing criticism. Many organizations either have no forum or delete critical discussions while making no real attempt at rationality (e.g. Blizzard has forums related to its games, but they aren’t very rational, don’t really try to be, and delete tons of complaints). Does EA ever delete dissent or ban dissenters? As someone who hasn’t already spent many years paying close attention, I don’t know, and I don’t know how to find out in a way that I would trust. Many forums claim not to delete dissent but actually do; it’s a common thing to lie about. Making a highly credible claim not to delete or punish dissent is important, or else John might not bother trying to share his criticism.

So John can post a criticism on a forum, and then people may or may not read it and may or may not reply. Will anyone with some kind of leadership role at EA read it? Maybe not. This is bad. The naive alternative “guarantee plenty of attention from important people to all criticism” would be even worse. But there are many other possible policy options which are better.

To design a better system, we should consider what might go wrong. How could John’s great, valuable criticism receive a negative reaction on an open forum which is active enough that John gets at least a little attention? And how might things go well? If the initial attention John gets is positive, that will draw some additional attention. If that is positive too, then it will draw more attention. If 100% of the attention John gets results in positive responses, his post will be shared and spread until a large portion of the community sees it, including people with power and influence, who will also view the criticism positively (by premise), and so they’ll listen and act. A 75% positive response rate would probably also be good enough to get a similar outcome.
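To make the snowball idea concrete, here’s a toy model (my construction; the sharing factor of 2 per positive reader is an arbitrary assumption): each reader who responds positively shows the post to a couple more people, and negative responders spread nothing. Attention then grows whenever the positive-response rate times the sharing factor exceeds 1:

```python
import random

def attention_snowball(p_positive, shares_per_positive=2, seed_readers=5,
                       cap=100_000, seed=0):
    # Toy model: each positive reader shows the post to `shares_per_positive`
    # new readers; negative reactions spread nothing. Returns readers reached.
    rng = random.Random(seed)
    total = frontier = seed_readers
    while frontier and total < cap:
        positives = sum(rng.random() < p_positive for _ in range(frontier))
        frontier = positives * shares_per_positive
        total += frontier
    return total

for p in (0.25, 0.5, 0.75, 1.0):
    print(f"positive rate {p:.0%}: roughly {attention_snowball(p)} readers")
```

Under these assumptions, a 25% positive rate should fizzle out quickly while 75% or 100% should snowball to the cap. The exact threshold depends on the sharing factor, which I made up; the point is the multiplicative dynamic, not the specific numbers.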

So how might John’s criticism, which we’re hypothetically supposing is true and important, get a more negative reception so that it can’t snowball to get more attention and influence important decision makers?

John might have low social status, and people might judge more based on status than idea quality.

John’s criticism might offend people.

John’s criticism might threaten people in some way, e.g. implying that some of them shouldn’t have the income and prestige (or merely self-esteem) that they currently enjoy.

John’s criticism might be hard to understand. People might get confused. People might lack some prerequisite knowledge and skills needed to engage with it well.

John’s criticism might be very long, and it might be hard to get value from just the beginning. People might skim but not see the value that they would see if they read the whole thing in a thoughtful, attentive way. Making it long might be an error by John, but it also might be really hard to shorten while keeping a good cost/benefit ratio (i.e. it’s valuable enough to justify the length).

John’s criticism might rely on premises that people disagree with. In other words, EA might be wrong about more than one thing. An interconnected set of mistakes can be much harder to explain than a single mistake, even if the critic understands the entire set of mistakes. People might reject criticism of X due to their own mistake Y, and criticism of Y due to their own mistake X. A similar thing can happen involving many more ideas in a much more complicated structure, so that it’s harder for John to point out what’s going on (even if he knows).

What can be done about all these difficulties? My suggestion, in short, is to develop a rational debate methodology and to hold debates aimed at reaching conclusions about disagreements. The methodology must include features for reducing the role of bias, social status, dishonesty, etc. In particular, it must prevent people from arbitrarily stopping debates whenever they feel like it (people tend to quit shortly before losing, which prevents the debate from being conclusive). The debate methodology must also have features for reducing the cost of debate and ending low value debates, especially since it won’t allow arbitrarily quitting at any moment. A debate methodology is not a perfect, complete solution to all the problems John may face, but it has various merits.

People often assume that rational, conclusive debate is too much work, so the cost/benefit ratio on it is poor. This is typically a general opinion they have rather than an evaluation of any specific debate methodology. I think they should reserve judgment until after they review some written debate methodologies. They should look at some actual methods and see how much work they are, and what benefits they offer, before reaching a conclusion about their cost/benefit ratio. If the cost/benefit ratios turn out poor, people could try to make adjustments to reduce costs and increase benefits before giving up on rational debate.

Can people have rational debate without following any written methodology? Sure, that’s possible. But if that worked well for some people and resulted in good cost/benefit ratios, wouldn’t it make sense to take whatever those successful debate participants are doing and write it down as a method? Even if the method had vague parts, that’d be better than nothing.

Although under-explored, debate methodologies are not a new idea. E.g. Russell L. Ackoff published one in a book in 1978 (pp. 44-47). That’s unfortunately the only very substantive, promising one I’ve found besides developing one of my own. I bet there are more to be found somewhere in existing literature though. The main reasons I thought Ackoff’s was a valuable proposal were that 1) it was based on following specific steps (in other words, you could make a flowchart out of it); and 2) it aimed at completeness, including using recursion to enable it to always succeed instead of getting stuck. Partial methods are common and easy to find, e.g. “don’t straw man” is a partial debate method, but it’s only suitable for being one little part of an overall method (and it lacks specific methods for detecting straw men, handling them when someone thinks one was done, etc. – it’s more of an aspiration than specific actions to achieve that aspiration).

A downside of Ackoff’s method is that it lacks stopping conditions besides success, so it could take an unlimited amount of effort. I think unilateral stopping conditions are one of the key issues for a good debate method: they need to exist (to protect against unreasonable debate partners who won’t agree to end the debate), but they also need to be designed to prevent abuse (e.g. by people quitting debates when they’re losing, in a way designed to obscure what happened). I developed impasse chains as a debate stopping condition which takes a fairly small, bounded amount of effort to end debates unilaterally but adds significant transparency about how and why the debate is ending. Impasse chains only apply when further debate is providing low value, but that’s the only problematic case – otherwise you can either continue or say you want to stop and give a reason (which the other person will consent to; or if they don’t, and you think they’re being unreasonable, now you’ve got an impasse to raise). Impasse chains are in the ballpark of “to end a debate, you must either mutually agree or else go through some required post-mortem steps”, plus they enable multiple chances at problem solving to fix whatever is broken about the debate.

This strikes me as one of the most obvious genres of debate stopping conditions to try, yet I think my proposal is novel. I think that says something really important about the world and its hostility to rational debate methodology. (I don’t think it’s mere disinterest or ignorance; if it were, then the moment I suggested rational debate methods and said why they were important, a lot of people would become excited and want to pursue the matter; but that hasn’t happened.)
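The bookkeeping an impasse chain requires is simple enough to sketch in code. This is only a toy illustration of the structure; the chain length of 3 and the other details are assumptions made for the example, not the actual method:

```python
from dataclasses import dataclass, field

MAX_CHAIN = 3  # assumed bound on chain length; the real rule may differ

@dataclass
class Debate:
    impasses: list = field(default_factory=list)
    ended: bool = False

    def request_stop(self, reason, other_consents):
        # Mutual agreement ends the debate immediately.
        if other_consents:
            self.ended = True
            return "ended by agreement"
        # A disputed stop request becomes an impasse to raise.
        return self.raise_impasse(f"disputed stop request: {reason}")

    def raise_impasse(self, description):
        # Each impasse is recorded (a transparent trail of how/why the debate
        # is ending) and gives both sides another chance at problem solving.
        self.impasses.append(description)
        if len(self.impasses) >= MAX_CHAIN:
            self.ended = True
            return "ended unilaterally after impasse chain"
        return "impasse recorded; problem solving continues"

d = Debate()
print(d.request_stop("further debate seems low value", other_consents=False))
print(d.raise_impasse("no progress on the stop disagreement"))
print(d.raise_impasse("problem solving also stalled"))
print(d.ended)  # True: bounded effort, with a record of what happened
```

The key property is that ending unilaterally costs a small, fixed amount of effort and leaves a public record, rather than being free and opaque.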

Another important and related issue is how you can write, or design and organize a community or movement, so that it’s easier for people to learn and debate with your ideas, and also easier to avoid low value or repetitive discussion. An example design is an FAQ to help reduce repetition. A less typical design would be creating (and sharing and keeping updated) a debate tree document organizing and summarizing the key arguments in the entire field you care about.
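Here’s a minimal sketch of the structure a debate tree document could have (the field names and example claims are my invention, just to show the shape of the thing):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    claim: str
    status: str = "open"  # e.g. "open", "answered", "conceded"
    responses: List["Node"] = field(default_factory=list)

    def reply(self, claim, **kwargs):
        # Attach a counter-argument (or counter-counter-argument) to this node.
        child = Node(claim, **kwargs)
        self.responses.append(child)
        return child

def outline(node, depth=0):
    # Render the tree as an indented summary: a living map of the debate.
    print("  " * depth + f"[{node.status}] {node.claim}")
    for response in node.responses:
        outline(response, depth + 1)

root = Node("EA's methods for handling criticism are adequate")
objection = root.reply("Responses depend on the critic's status, not idea quality")
objection.reply("Counter: the forum is open and anyone can post", status="answered")

outline(root)
```

Keeping such a document updated and public would let anyone see which branches of the debate are open, which are answered, and where a new argument fits.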
