Misquoting Is Conceptually Similar to Deadnaming: A Suggestion to Improve EA Norms

Our society gives people (especially adults) freedom to control many aspects of their lives. People choose what name to go by, what words to say, what to do with their money, what gender to be called, what clothes to wear, and much more.

It violates people’s personal autonomy to try to control these things without their consent. It’s not your place to choose e.g. what to spend someone else’s money on, what clothes they should wear, or what their name is. It’d be extremely rude to call me “Joan” instead of “Elliot”.

Effective Altruism (EA) has written norms related to this:

Misgendering deliberately and/or deadnaming gratuitously is not ok, although mistakes are expected and fine (please accept corrections, though).

I think this norm is good. I think the same norm should be applied to misquoting for the same reasons. It currently isn’t (context).

Article summary: Misquoting is different than sloppiness or imprecision in general. Misquoting puts words in someone else’s mouth without their consent. It takes away their choice of what words to say or not say, just like deadnaming takes away their choice of what name to use.

I’d also suggest applying the deadnaming norm to other forms of misnaming besides deadnaming, though I don’t know if those ever actually come up at EA, whereas misquoting happens regularly. I won’t include examples of misquotes for two reasons. First, I don’t want to name and shame individuals (especially when it’s a widespread problem and it could easily have been some other individuals instead). Second, I don’t want people to respond by trying to debate the degree of importance or inaccuracy of particular misquotes. That would miss the point about people’s right to control their own speech. It’s not your place to speak for other people, without their consent, even a little bit, even in unimportant ways.

I’ll clarify how I think the norm for deadnaming works, which will simultaneously clarify what I think about misquoting. There are some nuances to it. Then I’ll discuss misquoting more and discuss costs and benefits.

Accidents

Accidental deadnaming is OK but non-accidental deadnaming isn’t. If you deadname someone once, and you’re corrected, you should fix it and you shouldn’t do it again. Accidentally deadnaming someone many times is implausible or unreasonable; reasonable people who want to stop having those accidents can stop.

While “mistakes are expected and fine”, EA’s norm is that deadnaming on purpose is neither fine nor expected. Misquotes, like deadnaming, come in accidental and non-accidental categories, and the non-accidental ones shouldn’t be fine.

How can we (charitably) judge what is an accident?

A sign that deadnaming wasn’t accidental is when someone defends, legitimizes or excuses it. If they say, “Sorry, my mistake.” it was probably a genuine accident. If they instead say “Deadnaming is not that bad.” or “It’s not a big deal.” or “Why do you care so much?”, or “I’m just using the name on your birth certificate.” then their deadnaming was partly due to their attitude rather than being an accident. That violates EA norms.

When people resist a correction, or deny the importance of getting it right, then their mistake wasn’t just an accident.

For political reasons, some people resist using other people’s preferred name or pronouns. There’s a current political controversy about it. This makes deadnaming more common than it would otherwise be. Any deadnaming that occurs in part due to political attitudes is not fully accidental. Similarly, there is a current intellectual controversy about whether misquoting is a big deal or whether, instead, complaining about it is annoyingly pedantic and unproductive. This controversy increases the frequency of misquotes.

However, that controversy about misquotes and precision is separate from the issue of people’s right to control their own speech and choose what words to say or not say. Regardless of the outcome of the precision vs. sloppiness debate in general, misquotes are a special case because they non-consensually violate other people’s control over their own speech. It’s a non sequitur to go from thinking that lower effort, less careful writing is good to the conclusion that it’s OK to say that John said words that he did not say or choose.

People who deadname frequently claim it’s accidental when there are strong signs it isn’t accidental, such as resisting correction, making political comments that reveal their agenda, or being unapologetic. If they do that repeatedly, I don’t think EA would put up with it. Misquoting could be treated the same way.

Legitimacy

Sometimes people call me “Elliott” and I usually say nothing about the misspelling. I interpret it as an accident because it doesn’t fit any agenda. I don’t know why they’d do it on purpose. If I expected them to use my name many times in the future, or they were using it in a place that many people would read it, then I’d probably correct them. If I corrected them, they would say “oops sorry” or something like that; as long as they didn’t feel attacked or judged, and they don’t have a guilty conscience, then they wouldn’t resist the correction.

My internet handle is “curi”. Sometimes people call me “Curi”. When we’re having a conversation and they’re using my name repeatedly, I may ask them to use “curi”. A few people have resisted this. Why? Besides feeling hostility towards a debate opponent, I think some were unfamiliar with internet culture, so they don’t regard name capitalization as a valid, legitimate choice. They believe names should be formatted in a standard way. They think I’m in the wrong by wanting to have a name that starts with a lowercase letter. They think, by asking them to start a name with a lowercase letter, I’m the one trying to control them in a weird, inappropriate way.

People resist corrections when they think they’re in the right in some way. In that case, the mistake isn’t accidental. Their belief that it’s good in some way is a causal factor in it happening. If it was just an accident, they wouldn’t resist fixing the mistake. Instead, there is a disagreement; they like something about the alleged mistake. On the EA forum, you’re not allowed to disagree that deadnaming is bad and also act on that disagreement by being resistant to the forum norms. You’re required to go along with and respect the norms. You can get a warning or ban for persistent deadnaming.

People’s belief that they’re in the right usually comes from some kind of social-cultural legitimacy, rather than being their own personal opinion. Deadnaming and misgendering are legitimized by right wing politics and by some traditional views. Capitalizing the first letter of a name, and lowercasing the rest, is a standard English convention/tradition which some internet subcultures decided to violate, perhaps due to their focus on written over spoken communication. I think misquoting is legitimized primarily by anti-pedantry or anti-over-precision ideas (which is actually a nuanced debate where I think both standard sides are wrong). But viewpoints on precision aren’t actually relevant to whether it’s acceptable or violating to put unchosen words in someone else’s mouth. Also, each person has a right to decide how precise to be in their own speech. When you quote, it’s important to understand that that isn’t your speech; you’re using someone else’s speech in a limited way, and it isn’t yours to control.

When someone asks you not to deadname, you may feel that they’re asking you to go against your political beliefs, and therefore want to resist what feels like politicized control over your speech, which asks you to use your own speech contrary to your values. However, a small subset of speech is more about other people than yourself, so others need to have significant control over it. That subset includes names, pronouns and quotes. When asked not to misquote, instead of feeling like your views on precision are being challenged, you should instead recognize that you’re simply being asked to respect other people’s right to choose what words to say or not say. It’s primarily about them, not you. And it’s primarily about their control over their own life and speech, not about how much precision is good or how precisely you should speak.

Control over names and pronouns does have to be within reason. You can’t choose “my master who I worship” as a name or pronoun and demand that others say it. I’m not aware of anyone ever seriously wanting to do that. I don’t think it’s a real problem or what the controversy is actually about (even though it’s a current political talking point).

Our culture has conflicting norms, but it does have a very clear, well known norm in favor of exact quotes. That’s taught in schools and written down in policies at some universities and newspapers. We lack similarly clear or strong norms for many other issues related to precision. Why? Because the norm against misquoting isn’t primarily about precision. Misquoting is treated differently than other issues related to precision because it’s not your place to choose someone else’s words any more than it’s your place to choose their name or gender.

Misquotes Due to Bias

Misquotes usually aren’t random errors.

Sometimes people make a typo. That’s an accident. Typos can be viewed as basically random errors. I bet there are actually patterns regarding which letters or letter combinations get more typos. And people could work to make fewer typos. But there’s no biased agenda there, so in general it’s not a problem.

Most quotes can be done with copy/paste, so typos can be avoided. If someone has a general policy of typing in quotes and keeps making typos within quotes, they should switch to using copy/paste. At my forum, I preemptively ask everyone to use software tools like copy/paste when possible to avoid misquotes. I don’t wait and ask them to switch to less error-prone quoting methods after they make some errors. That’s because, as with deadnaming, those errors mistreat other people, so I’d rather they didn’t happen in the first place.

Except for typos and genuine accidents, misquotes are usually changed in some way that benefits or favors the misquoter, not in random ways.

People often misquote because they want to edit things in their favor, even in very subtle ways. Tiny changes can make a quote seem more or less formal or tweak the connotations. People often edit quotes to remove some ambiguity, so it reads as the author more clearly saying something than he actually did.

Sometimes people want their writing to look good with no errors, so they want to change anything in a quote that they regard as an error, like a comma or lack of comma. Instead of respecting the quote as someone else’s words – their errors are theirs to make (or to disagree are errors) – they want to control it because they’re using it within their own writing, so they want to make it conform to their own writing standards. People should understand that when they quote, they are giving someone else a space within their writing, so they are giving up some control.

People also misquote because they don’t respect the concept of accurate quotations. These misquotes can be careless with no other agenda or bias – they aren’t specifically edited to e.g. help one side of a debate. However, random changes to the wordings your debate partners use tend to be bad for them. Random changes tend to make their wordings less precise rather than more precise. As we know from evolution, random changes are more likely to make something less adapted to a purpose rather than more adapted.

If you deadname people because you don’t respect the concept of people controlling their name, that’s not OK. If you are creating accidents because you don’t care to try to get names right, you’re doing something wrong. Similarly, if you create accidental misquotes because you don’t respect the concept of people controlling their own speech and wordings, you’re doing something wrong.

Also, imprecision in general is an enabler of bias because it gives people extra flexibility. They get more options for what to say, think or do, so they can pick the one that best fits their bias. A standard example is rounding in their favor. If you’re 10 minutes late, you might round that down to 5 minutes in a context where plus or minus five minutes of precision is allowed. On the other hand, if someone else is 40 minutes late, you might round that up to an hour as long as that’s within acceptable boundaries of imprecision. People also do this with money. Many people round their budget up but round their expenses down, and the more imprecise their thinking, the larger the effect. If permissible imprecision gives people multiple different versions of a quote that they can use, they’ll often pick one that is biased in their favor, which is different than a fully accidental misquote.

Misquotes Due to Precise Control or Perfectionism

Some non-accidental misquotes are due not to bias but to people wanting to control all the words in their essay (or book or forum post). They care so much about controlling their speech, in precise detail, that they extend that control to the text within quotes just because it’s within their writing. They’re used to having full control over everything they write and they don’t draw a special boundary for quotations; they just keep being controlling. Then, ironically, when challenged, they may say “Oh who cares; it’s just small changes; you don’t need precise control over your speech.” But they changed the quote because of their extreme desire to exactly control anything even resembling their own speech. If you don’t want to give up control enough to let someone else speak in entirely their own words within your writing, there is a simple solution: don’t quote them. If you want total control of your stuff, and you can’t let a comma be out of place even within a quote, you should respect other people wanting control of their stuff, too. Some people don’t fully grasp that the stuff within quotes is not their stuff even though it’s within their writing. Misquotes of this nature come more from a place of perfectionism and precise control, and lack of empathy, rather than being sloppy accidents. These misquotes involve non-random changes to make the text fit the quoter’s preferences better.

Types of Misquotes

I divide misquotes into two categories. The first type changes a word, letter or punctuation mark. It’s a factual error (the quote is factually wrong about what the person said). It’s inaccurate in a clear, literal way. Computers can pretty easily check for this kind of quotation error without needing any artificial intelligence. Just a simple string comparison algorithm can do it. In this case, there’s generally no debate about whether the quote is accurate or inaccurate. There are also some special rules that allow changing quotes without them being considered inaccurate, e.g. using square brackets to indicate changes or notes, or using ellipses for omitted words.
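To illustrate how mechanical that check is, here’s a minimal sketch in Python (the function name and example strings are mine, purely for illustration; a real checker would also need to handle the square-bracket and ellipsis conventions mentioned above):

    def is_verbatim(quote, source):
        # Type-one misquotes (changed words, letters or punctuation) can be
        # caught by a plain substring comparison; no AI needed.
        return quote in source

    source = "Misquoting is not okay, though accidents happen."
    print(is_verbatim("Misquoting is not okay", source))  # True: exact match
    print(is_verbatim("Misquoting is not OK", source))    # False: the wording was changed

This only settles the first type of misquote; it says nothing about whether a verbatim quote is misleading.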

The second type of misquote is a misleading quote, such as taking words out of context. There is sometimes debate about whether a quote is misleading or not. Many cases are pretty clear, and some cases are harder to judge. In borderline cases, we should be forgiving of the person who did it, but also, in general, they should change it if the person being quoted objects. (Or, for example, if you’re debating someone about Socrates’ ideas, and they’re the one taking Socrates’ side, and they think your Socrates quote is misleading, then you should change it. You may say all sorts of negative things about the other side of the debate, but that’s not what quotation marks are for. Quotations are a form of neutral ground that should be kept objective, not a place to pursue your debating agenda.)

Here’s an example of a misleading quote that doesn’t violate the basic accuracy rules. You say, “I do not think John is great.” but I quote you as saying “John is great.” The context included an important “not” which has been left out. I think we can all agree that this counts as misquoting even though no words, letters or punctuation marks were changed. And, like deadnaming, it’s very rude to do this to someone.

Small Changes

Sometimes people believe it’s OK to misquote as long as the meaning isn’t changed. Isn’t it harmless to replace a word with a synonym? Isn’t it harmless to change a quote if the author agrees with the changed version? Do really small changes matter?

First of all, if the changes are small and don’t really matter, then just don’t do them. If you think there’s no significant difference, that implies there’s no significant upside, so then don’t misquote. It’s not like it takes substantial effort to refrain from editing a quote; it’s less work not to make changes. And copy/pasting is generally less work than typing.

If someone doesn’t mind a change to a quote, there are still concerns about truth and accuracy. Anyone in the audience may not want to read things he believes are exact quotes but which aren’t. He may find that misleading (and EA has a norm against misleading people). Also, if you ever non-accidentally use inaccurate quotes, then reasonable people will doubt that they can trust any of your quotes. They’ll have to check primary sources for any quotes you give, which will significantly raise the cost of reading your writing and reduce engagement with your ideas. But the main issue – putting words in someone’s mouth without their consent – is gone if they consent. Similarly, it isn’t deadnaming to use an old name of someone who consents to be called by either their old or new name.

However, it’s not your place to guess what words someone would consent to say. If they are a close friend, maybe you have a good understanding of what’s OK with them, and I guess you could try to get away with it. I wouldn’t recommend that and I wouldn’t want to be friends with someone who thought they could speak for me and present it as a quote rather than as an informed guess about my beliefs or about what I would say. But if you want to quote your friend (or anyone else) saying something they haven’t said, and you’re pretty sure they’d be happy to say it, there’s a solution: ask them to say it and then quote them if they do choose to say it. On the other hand, if you’re arguing with someone, you’re in a poor position to judge what words they would consent to saying or what kind of wording edits would be meaningful to them. It’s not reasonable to try to guess what wording edits a debate opponent would consent to and then go ahead with them unilaterally.

Inaccurately paraphrasing debate opponents is a problem too, but it’s much harder to avoid than misquoting is. Misquoting, like deadnaming, is something that you can almost entirely avoid if you want to.

The changes you find small and unimportant can matter to other people with different perspectives on the issues. You may think that “idea”, “concept”, “thought” and “theory” are interchangeable words, but someone else may purposefully, non-randomly use each of those words in different contexts. It’s important that people can control the nuances of their wordings when they want to (even if they can’t give explicit arguments for why they use words that way). Even if an author doesn’t (consciously) see any significant difference between his original wording and your misquote, the misquote is still less representative of his thinking (his subconscious or intuition chose to say it the other way, and that could be meaningful even if he doesn’t realize it).

Even if your misquote would be an accurate paraphrase, and won’t do a bunch of harm by spreading severe misinformation, there’s no need to put quote marks around it. If you’re using an edited version of someone else’s words, so leaving out the quote marks would be plagiarism, then use square brackets and ellipses. There’s already a standard solution for how to edit quotes, when appropriate, without misquoting. There’s no good reason to misquote.

Cost and Benefit

How costly is it to avoid misquotes or to avoid deadnaming? The cost is low but there are some reasons people misjudge it.

Being precise has a high cost, at least initially. But misquoting, like misnaming, is a specific case where, with a low effort, people can get things right with high reliability and few accidents. Reducing genuine accidents to zero is unnecessary and isn’t what the controversy is about.

When a mistake is just an accident, correcting it shouldn’t be a big deal. There is no shame in infrequent accidents. Attempts to correct misquotes sometimes turn into a much bigger deal, with each party writing multiple messages. It can even initiate drama. That happens because people oppose the policy of not misquoting, not because of any cost inherent in the policy. It’s the resistance to the policy, not the policy itself, which wastes time and energy and derails conversations.

Most of the observed conversational cost, that goes to talking about misquotes, is due to people’s pro-misquoting attitudes rather than due to any actual difficulty of avoiding misquotes. This misleads people about how large the cost is.

Similarly, if you go to some right wing political forums, getting people to stop deadnaming would be very costly. They’d fight you over it. But if they were happy to just do it, then the costs would be low. It’s not very hard to very infrequently make updates to your memory about the names of a few people. Cost due to opposition to doing something correctly should be clearly differentiated from the cost of doing it correctly.

To avoid misquotes, copy and paste. If you type in a quote from paper, double check it and/or disclaim it as potentially containing a typo. Most books are available electronically so typing quotes in from paper is usually unnecessary and more costly. Most cases of misquoting that I’ve seen, or had a conflict over, involved a quote that could have been copy/pasted. Copy/pasting is easy, not costly.

Avoiding misquotes also involves never adding quotation marks around things which are not quotes but which readers would think were quotes. For example, don’t write “John said” followed by a paraphrase with quote marks around it in order to make it seem more exact, precise, rigorous or official than it is. And don’t put quote marks around a paraphrase because you believe you should use a quote, but you’re too lazy to get the quote, and you want to hide that laziness by pretending you did quote.

Accurate quoting can be more about avoiding bias than about effort or precision. You have to want to do it and then resist the temptation to violate the rules in ways that favor you. For some people, that’s not even tempting. It’s like how some people resist the temptation to steal while others don’t find stealing tempting in the first place. You can get to the point that things aren’t tempting and really don’t take effort to not do. Norms can help with that. Due to better anti-stealing norms, many more people aren’t tempted to steal than aren’t tempted to misquote. Anyway, if someone gives in to temptation and steals, deadnames or misquotes, that is not an accident. It’s a different thing. It’s not permissible at EA to deadname because you gave in to temptation, and I suggest misquoting should work that way too.

What’s the upside of misquoting? Why are many people resistant to making a small effort to change? I think there are two main reasons. First, they confuse the misquoting issue with the general issue of being imprecise. They feel like someone asking them not to misquote is demanding that they be a more precise thinker and writer in general. Actually, people asking not to be misquoted, like people asking not to be deadnamed, don’t want their personal domain violated. Second, people like misquoting because it lets them make biased changes to quotes. People don’t like being controlled by rules that give them less choice of what to do and less opportunity to be flexible in their favor (a.k.a. biased). Many people have a general resistance to creating and following written policies. I’ve written about how that’s related to not understanding or resisting the rule of law.

Another cost of avoiding misquotes is that you should be careful when using software editing tools like spellcheck or Grammarly. They should have automatic quote detection features and warn you before making changes within quotes, but they don’t. These tools encourage people to quickly make many small changes without reading the context, so people may change something without even knowing it’s within a quote. People can also click buttons like “correct all” and end up editing quotes. Or they might decide to replace all instances of “colour” with “color” in their book, do a mass find/replace, and accidentally change a quote. I wonder how many small misquotes in recent books are caused this way, but I don’t think it’s the cause of many misquotes on forums. Again, the occasional accident is OK; perfection is not necessary but people could avoid most errors at a low cost and stop picking fights in defense of misquotes or deadnaming.
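As a rough sketch of the kind of quote detection such tools could offer (this isn’t a real feature of any spellchecker; the function and example are hypothetical, and the heuristic only handles straight double quotes, not nesting or block quotes):

    def replace_outside_quotes(text, old, new):
        # Split on double quote marks: even-numbered chunks are outside
        # quotation marks, odd-numbered chunks are inside them.
        parts = text.split('"')
        for i in range(0, len(parts), 2):
            parts[i] = parts[i].replace(old, new)
        return '"'.join(parts)

    text = 'I love the colour blue. She wrote, "the colour of law".'
    print(replace_outside_quotes(text, "colour", "color"))
    # Output: I love the color blue. She wrote, "the colour of law".

A mass find/replace built like this would leave quoted text alone instead of silently editing it.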

If non-accidental misquoting is prohibited at EA, just like deadnaming, then it will provide a primary benefit by defending people’s control over their own speech. It will also provide a secondary benefit regarding truth, accuracy and precision. It’s debatable how large that accuracy benefit is and how much cost it would be worth. However, in this case, the marginal cost of that benefit would be zero. If you change misquoting norms for another reason which is worth the cost by itself, then the gain in accuracy is a free bonus.

There are some gray areas regarding misquoting, where it’s harder to judge whether it’s an error. Those issues are more costly to police. However, most of the benefit is available just by policing misquotes which are clearly and easily avoidable, which is the large majority of misquotes. Doing that will have a good cost to benefit ratio.

Another cost of misquoting is it can gaslight people, especially with small, subtle changes. It can cause them to doubt themselves or create false memories of their own speech to match the misquote. It takes work to double check what you actually said after reading someone quote you, which is a cost. Many people don’t do that work, which leaves them vulnerable. There’s a downside both to doing and to not doing that work. That’s a cost imposed by allowing misquotes to be common and legitimized.

Tables

Benefits and costs of anti-misquoting norms:

Benefits | Costs
Respect people’s control over their speech | Avoiding carelessness
Accuracy | Resisting temptation
Prevent conflicts about misquotes | Not getting to bias quotes in your favor
No hidden, biased tweaks in quotes you read | Learning to use copy/paste hotkeys
Less time editing quotes | Not getting full control over quoted text like you have over other text in your post
Quotes and paraphrases differentiated | Not getting to put quote marks around whatever you want to
Filter out persistent misquoters | Lose people who insist on misquoting
| Effort to spread and enforce norm

For comparison, here’s a cost/benefit table for anti-deadnaming norms:

Benefits | Costs
Respect people's control over their name | Avoiding carelessness
Accuracy | Resisting temptation
Filter out persistent deadnamers | Lose people who insist on deadnaming
| Not getting to call people whatever you want
| Effort to spread and enforce norm

Potential Objections

If I can’t misquote, how can I tweak a quote wording to fit my sentence? Use square brackets.

If I can’t misquote, how can I supply context for a quote and keep it short? Use square brackets or explain the context before giving the quote.

What if I want to type in a quote by hand and I make a typo? If you’re a good enough typist that you don’t mind typing extra words, I’m sure you can also manage to use copy/paste hotkeys.

What if I’m quoting a paper book? Double check what you typed in and/or put a disclaimer that it’s typed in by hand.

What if an accident happens? As with deadnaming, rare, genuine accidents are OK. Accidents that happen because you don’t really care about deadnaming or misquoting are not fine.

Who cares? People who think about what words to say and not say, and put effort into those decisions. They don’t want someone else to overrule those decisions. Whether you’re one of those people or not, people who think about what to say are people you should want to have on your forum.

Who else cares? People who want to form accurate beliefs about the world and have high standards don’t want to read misquotes and potentially be fooled by them or have to look stuff up in primary sources frequently. It’s much less work for people to not misquote in the first place than for readers (often multiple readers independently) to check sources.

Is it really that big a deal? Quoting accurately isn’t very hard and isn’t that big a deal to do. If this issue doesn’t matter much, just do it in the way that doesn’t cause problems and doesn’t draw attention to quoting. If people would stop misquoting then we could all stop talking about this.

Can’t you just ignore being misquoted? Maybe. You can also ignore being deadnamed, but you shouldn’t have to. It’s also hard enough to have discussions when people subtly reframe the issues, and indirectly reframe what you said (often by replying as if you said something, without claiming you said it), which is very common. Those actions are harder to deal with and counter when they involve misquotes – misquotes escalate a preexisting problem and make it worse. On the other hand, norms in favor of using (accurate) quotes more often would make it harder to be subtly biased and misleading about what discussion partners said.

Epistemic Status

I’ve had strong opinions about misquoting for years and brought these issues up with many people. My experiences with using no-misquoting norms at my own forum have been positive. I still don’t know of any reasonable counter-arguments that favor misquotes.

Conclusion

Repeated deadnaming is due to choice not accident. Even if a repeat offender isn’t directly choosing to deadname on purpose, they’re choosing to be careless about the issue on purpose, or they have a (probably political) bias. They could stop deadnaming if they tried harder. EA norms correctly prohibit deadnaming, except by genuine accident. People are expected to make a reasonable (small) effort to not deadname.

Like deadnaming, misquoting violates someone else’s consent and control over their personal domain. People see misquoting as being about the open debate over how precise people should be, but that is a secondary issue. They should have more empathy for people who want to control their own speech. I propose that EA’s norms should be changed to treat misquoting like deadnaming. Misquoting is a frequent occurrence and the forum would be a better place if moderators put a stop to it, as they stop deadnaming.

Norms that allow non-accidental misquoting alienate some people who might otherwise participate, just like allowing non-accidental deadnaming would alienate some potential participants. Try to visualize in your head what a forum would be like where the moderators refused to do anything about non-accidental deadnaming. Even if you don’t personally have a deadname, it’d still create a bad, disrespectful atmosphere. It’s better to be respectful and inclusive, at a fairly small cost, instead of letting some forum users mistreat others. It’s great for forums to enable free speech and have a ton of tolerance, but that shouldn’t extend to people exercising control over something that someone else has the right to control, such as his name or speech. It’s not much work to get people’s names right nor to copy/paste exact quotes and then leave them alone (and to refrain from adding quotation marks around paraphrases). Please change EA’s norms to be more respectful of people’s control over their speech, as the norms already respect people’s control over their name.



Finding Errors in The Case Against Education by Bryan Caplan

I proposed a game on the Effective Altruism forum where people submit texts to me and I find three errors. I wrote this for that game. This is a duplicate of my EA post.

Introduction

I'm no fan of university or academia, so I do partly agree with The Case Against Education by Bryan Caplan. I do think social climbing is a major aspect of university. (It's not just status signalling. There's also e.g. social networking.)

I'm assuming you can electronically search the book to read additional context for quotes if you want to.

Error One

For a single individual, education pays.

You only need to find one job. Spending even a year on a difficult job search, convincing one employer to give you a chance, can easily beat spending four years at university and paying tuition. If you do well at that job and get a few years of work experience, getting another job in the same industry is usually much easier.

So I disagree that education pays, under the signalling model, for a single individual. I think a difficult job search is typically more efficient than university.

This works in some industries, like software, better than others. Caplan made a universal claim so there's no need to debate how many industries this is viable in.

Another option is starting a company. That's a lot of work, but it can still easily be a better option than going to university just so you can get hired.

Suppose, as a simple model, that 99% of jobs hire based on signalling and 1% don't. If lots of people stop going to university, there's a big problem. But if you individually don't go, you can get one of the 1% of non-signalling jobs. Whereas if 3% of the population skipped university and competed for 1% of the jobs, a lot of those people would have a rough time. (McDonalds doesn't hire cashiers based on signalling – or at least not the same kind of signalling – so imagine we're only considering good jobs in certain industries so the 1% non-signalling jobs model becomes more realistic.)

When they calculate the selfish (or “private”) return to education, they focus on one benefit—the education premium—and two costs—tuition and foregone earnings.[4]

I've been reading chapter 5 trying to figure out if Caplan ever considers alternatives to university besides just entering the job market in the standard way. This is a hint that he doesn't.

Foregone earnings are not a cost of going to university. They are a benefit that should be added on to some, but not all, alternatives to university. Then university should be compared to alternatives for how much benefit it gives. When doing that comparison, you should not subtract income available in some alternatives from the benefit of university. Doing that subtraction only makes sense and works out OK if you're only considering two options: university or getting a job earlier. When there are only two options, taking a benefit from one and instead subtracting it from the other as an opportunity cost doesn't change the mathematical result.
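To spell out the two-option case (the symbols here are mine, just for illustration): let U be the gross benefit of university, T tuition, and W the earnings from taking a job right away. Then the two framings are:

    U - T > W              % direct comparison: choose university
    (U - T) - W > 0        % opportunity-cost framing: choose university

These are the same inequality, so with exactly two options the subtraction changes nothing. Once there's a third alternative worth V (a harder job search, starting a business), you have to compare U - T, W and V directly; subtracting W from university as an "opportunity cost" no longer adds anything and invites confusion about which alternative's earnings to subtract.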

See also Capitalism: A Treatise on Economics by George Reisman (one of the students of Ludwig von Mises) which criticizes opportunity costs:

Contemporary economics, in contrast, continually ignores the vital connection of income and cost with the receipt and outlay of money. It does so insofar as it propounds the doctrines of “imputed income” and “opportunity cost.”[26] The doctrine of imputed income openly and systematically avows that the absence of a cost constitutes income. The doctrine of opportunity cost, on the other hand, holds that the absence of an income constitutes a cost. Contemporary economics thus deals in nonexistent incomes and costs, which it treats as though they existed. Its formula is that money not spent is money earned, and that money not earned is money spent.

That's from the section "Critique of the Concept of Imputed Income" which is followed by the section "Critique of the Opportunity-Cost Doctrine". The book explains its point in more detail than this quote. I highly recommend Reisman's whole book to anyone who cares about economics.

Risk: I looked for discussion of alternatives besides university or entering the job market early, such as a higher effort job search or starting a business. I didn't find it, but I haven't read most of the book so I could have missed it. I primarily looked in chapter 5.

Error Two

The answer would tilt, naturally, if you had to sing Mary Poppins on a full-price Disney cruise. Unless you already planned to take this vacation, you presumably value the cruise less than the fare. Say you value the $2,000 cruise at only $800. Now, to capture the 0.1% premium, you have to fork over three hours of your time plus the $1,200 difference between the cost of the cruise and the value of the vacation.

(Bold added to quote.)

The full cost of the cruise is not just the fare. It's also the time cost of going on the cruise. It's very easy to value the cruise experience at more than the ticket price, but still not go, because you'd rather vacation somewhere else or stay home and write your book.

BTW, Caplan is certainly familiar with time costs in general (see e.g. the last sentence quoted).

Error Three

Laymen cringe when economists use a single metric—rate of return—to evaluate bonds, home insulation, and college. Hasn’t anyone ever told them money isn’t everything! The superficial response: Economists are by no means the only folks who picture education as an investment. Look at students. The Higher Education Research Institute has questioned college freshmen about their goals since the 1970s. The vast majority is openly careerist and materialist. In 2012, almost 90% called “being able to get a better job” a “very important” or “essential” reason to go to college. Being “very well-off financially” (over 80%) and “making more money” (about 75%) are almost as popular. Less than half say the same about “developing a meaningful philosophy of life.”[2] These results are especially striking because humans exaggerate their idealism and downplay their selfishness.[3] Students probably prize worldly success even more than they admit.

(Bold added.)

First, minor point, some economists have that kind of perspective about rate of return. Not all of them.

And I sympathize with the laymen. You should consider whether you want to go to university. Will you enjoy your time there? Future income isn't all that matters. Money is nice but it doesn't really buy happiness. People should think about what they want to do with their lives, in realistic ways that take money into account, but which don't focus exclusively on money. In the final quoted sentence he mentions that students (on average) probably "prize worldly success even more than they admit". I agree, but I think some of those students are making a mistake and will end up unhappy as a result. Lots of people focus their goals too much on money and never figure out how to be happy (also they end up unhappy if they don't get a bunch of money, which is a risk).

But here's the more concrete error: The survey does not actually show that students view education in terms of economic returns only. It doesn't show that students agree with Caplan.

The issue, highlighted in the first sentence, is "economists use a single metric—rate of return". Do students agree with that? In other words, do students use a single metric? A survey where e.g. 90% of them care about that metric does not mean they use it exclusively. They care about many metrics, not a single one. Caplan immediately admits that so I don't even have to look the study up. He says 'Less than half [of students surveyed] say the same [very important or essential reason to go to university] about “developing a meaningful philosophy of life.”' Let's assume less than half means a third. Caplan tries to present this like the study is backing him up and showing how students agree with him. But a third disagreeing with him on a single metric is a ton of disagreement. If they surveyed 50 things, and 40 aren't about money, and just 10% of students thought each of those 40 mattered, then maybe around zero students would agree with Caplan about only the single metric being important (the answers aren't independent so you can't just use math to estimate this scenario btw).
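Just to show where "around zero" comes from as a ballpark, under the unrealistic independence assumption I just cautioned against:

    P(\text{a student rates none of the 40 non-money items as important}) = 0.9^{40} \approx 0.015

So even before accounting for correlations, only on the order of 1.5% of surveyed students would look like they care solely about the money metrics.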

Bonus Error

Self-help gurus tend to take the selfish point of view for granted. Policy wonks tend to take the social point of view for granted. Which viewpoint—selfish or social—is “correct”? Tough question. Instead of taking sides, the next two chapters sift through the evidence from both perspectives—and let the reader pick the right balance between looking out for number one and making the world a better place.

This neglects to consider the classical liberal view (which I believe, and which an economist ought to be familiar with) of the harmony of (rational) interests of society and the individual. There is no necessary conflict or tradeoff here. (I searched the whole book for "conflict", "harmony", "interests" and "classical" but didn't find this covered elsewhere.)

I do think errors of omission are important but I still didn't want to count this as one of my three errors. I was trying to find somewhat more concrete errors than just not talking about something important and relevant.

Bonus Error Two

The deeper response to laymen’s critique, though, is that economists are well aware money isn’t everything—and have an official solution. Namely: count everything people care about. The trick: For every benefit, ponder, “How much would I pay to obtain it?”

This doesn't work because lots of things people care about are incommensurable. They're in different dimensions that you can't convert between. I wrote about the general issue of taking into account multiple dimensions at once at https://forum.effectivealtruism.org/posts/K8Jvw7xjRxQz8jKgE/multi-factor-decision-making-math

A different way to look at it is that the value of X in money is wildly variable by context, not a stable number. Also how much people would pay to obtain something is wildly variable by how much money they have, not a stable number.

Potential Error

If university education correlates with higher income, that doesn't mean it causes higher income. Maybe people who are likely to get high incomes are more likely to go to university. There are also some other correlation isn't causation counter-arguments that could be made. Is this addressed in the book? I didn't find it, but I didn't look nearly enough to know whether it's covered. Actually I barely read anything about his claims that university results in higher income, which I assume are at least partly based on correlation data, but I didn't really check. So I don't know if there's an error here but I wanted to mention it. If I were to read the book more, this is something I'd look into.

Screen Recording

Want to see me look through the book and write this post? I recorded my process with sporadic verbal commentary:

https://www.youtube.com/watch?v=BQ70qzRG61Y



Critiquing an Axiology Article about Repugnant Conclusions

I proposed a game on the Effective Altruism forum where people submit texts to me and I find three errors. I wrote this for that game. This is a duplicate of my EA post. I criticize Minimalist extended very repugnant conclusions are the least repugnant by Teo Ajantaival.

Error One

Archimedean views (“Quantity can always substitute for quality”)

Let us look at comparable XVRCs for Archimedean views. (Archimedean views roughly say that “quantity can always substitute for quality”, such that, for example, a sufficient number of minor pains can always be added up to be worse than a single instance of extreme pain.)

It's ambiguous/confusing whether by "quality" you mean different quantity sizes, as in your example (substitution between small pains and a big pain), or whether you actually mean qualitatively different things (e.g. substitution between pain and the thrill of skydiving).

Is the claim that 3 1lb steaks can always substitute for 1 3lb steak, or that 3 1lb pork chops can always substitute for 1 ~3lb steak? (Maybe more or less if pork is valued less or more than steak.)

The point appears to be about whether multiple things can be added together for a total value or not – can a ton of small wins ever make up for a big win? In that case, don't use the word "quality" to refer to a big win, because it invokes concepts like a qualitative difference rather than a quantitative difference.

I thought it was probably about whether a group of small things could substitute for a bigger thing but then later I read:

Lexical views deny that “quantity can always substitute for quality”; instead, they assign categorical priority to some qualities relative to others.

This seems to be about qualitative differences: some types/kinds/categories have priority over others. Pork is not the same thing as steak. Maybe steak has priority and having no steak can't be made up for with a million pork chops. This is a different issue. Whether qualitative differences exist and matter and are strict is one issue, and whether many small quantities can add together to equal a large quantity is a separate issue (though the issues are related in some ways). So I think there's some confusion or lack of clarity about this.

I didn't read linked material to try to clarify matters, except to notice that this linked paper abstract doesn't use the word "quality". I think, for this issue, the article should stand on its own OK rather than rely on supplemental literature to clarify this.

Actually, I looked again while editing, and I've now noticed that in the full paper (as linked to and hosted by PhilPapers, the same site as before), the abstract text is totally different and does use the word "quality". What is going on!? PhilPapers is broken? Also this paper, despite using the word "quality" in the abstract once (and twice in the references), does not use that word in the body, so I guess it doesn't clarify the ambiguity I was bringing up, at least not directly.

Error Two

This is a strong point in favor of minimalist views over offsetting views in population axiology, regardless of one’s theory of aggregation.

I suspect you're using an offsetting view in epistemology when making this statement concluding against offsetting views in axiology. My guess is you don't know you're doing this or see the connection between the issues.

I take a "strong point in favor" to refer to the following basic model:

We have a bunch of ideas to evaluate, compare, choose between, etc.

Each idea has points in favor and points against.

We weight and sum the points for each idea.

We look at which idea has the highest overall score and favor that.

This is an offsetting model where points in favor of an idea can offset points against that same idea. Also, in some sense, points in favor of an idea offset points in favor of rival ideas.
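As a concrete sketch of that offsetting model (the ideas, points and equal weights are made up for illustration):

    # Hypothetical offsetting evaluation: weighted points for and against each idea
    # (all weights set to 1 here for simplicity).
    ideas = {
        "idea A": {"for": [3, 2], "against": [4]},
        "idea B": {"for": [5], "against": [1, 1]},
    }

    def score(idea):
        # Points in favor offset points against; only the net total matters.
        return sum(idea["for"]) - sum(idea["against"])

    best = max(ideas, key=lambda name: score(ideas[name]))
    print(best, score(ideas[best]))  # idea B 3

A criticism worth 4 points doesn't have to be answered; it just gets outweighed.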

I think offsetting views are wrong, in both epistemology and axiology, and there's overlap in the reasons for why they're wrong, so it's problematic (though not necessarily wrong) to favor them in one field while rejecting them in another field.

Error Three

The article jumps into details without enough framing about why this matters. This is understandable for a part 4, but on the other hand you chose to link me to this rather than to part 1 and you wrote:

Every part of this series builds on the previous parts, but can also be read independently.

Since the article is supposed to be readable independently, then the article should have explained why this matters in order to work well independently.

A related issue is I think the article is mostly discussing details in a specific subfield that is confused and doesn't particularly matter – the field's premises should be challenged instead.

And another related issue is the lack of any consideration of win/win approaches, discussion of whether there are inherent conflicts of interest between rational people, etc. A lot of the article topics are related to political philosophy issues (like classical liberalism's social harmony vs. Marxism's class warfare) that have already been debated a bunch, and it'd make sense to connect claims and viewpoints to that existing knowledge. I think imagining societies with different agents with different amounts of utility or suffering, fully out of context of imagining any particular type of society, or design or organization or guiding principles of society, is not very productive or meaningful, so it's no wonder it's gotten bogged down in abstract concerns like the very repugnant conclusion stuff with no sign of any actually useful conclusions coming up.

This is not the sort of error I primarily wanted to point out. However, the article does a lot of literature summarizing instead of making its own claims. So I noticed some errors in the summarized ideas but that's different than errors in the article itself. To point out errors in an article itself, when it's summarizing other ideas, I'd have to point out that it has inaccurately summarized the ideas. That requires reading the cites and comparing them to the summaries, which I don't think would be especially useful/valuable to do. Sometimes people summarize stuff they agree with, so criticizing the content works OK. But here a lot of it was summarizing stuff the author and I both disagree with, in order to criticize it, which doesn't provide many potential targets for criticism. So that's why I went ahead and made some more indirect criticism (and included more than one point) for the third error.

But I'd suggest that @Teo Ajantaival watch my screen recording (below) which has a bunch of commentary and feedback on the article. I expect some of it will be useful and some of the criticisms I make will be relevant to him. He could maybe pick out some things I said and recognize them as criticisms of ideas he holds, whereas sometimes it was hard for me to tell what he believes because he was just summarizing other people's ideas. (When looking for criticism, consider: if I'm right, does that mean you're wrong? If so, then it's a claim by me about an error, even if I'm actually mistaken.) My guess is I said some things that would work as better error claims than some of the three I actually used, but I don't know which things they are. Also, I think if we were to debate, discussing the underlying premises, and whether this sub-field even matters, would actually be more important than discussing within-field details, so it's a good thing to bring up. I think my disagreement with the niche that the article is working within is actually more important than some of the within-niche issues.

Offsetting and Repugnance

This section is about something @Teo Ajantaival also disagrees with, so it's not an error by him. It could possibly be an error of omission if he sees this as a good point that he didn't know but would have wanted to think of but didn't. To me it looks pretty important and relevant, and problematic to just ignore like there's no issue here.

If offsetting actually works – if you're a true believer in offsetting – then you should not find the very repugnant scenario to be repugnant at all.

I'll illustrate with a comparison. I am, like most people, to a reasonable approximation, a true believer in offsetting for money. That is, $100 in my bank account fully offsets $100 of credit card debt that I will pay off before there are any interest charges. There do exist people who say credit cards are evil and you shouldn't have one even if you pay it off in full every month, but I am not one of those people. I don't think debt is very repugnant when it's offset by assets like cash.

And similarly, spreading out the assets doesn't particularly matter. A billion bank accounts with a dollar each, ignoring some administrative hassle details, are just as good as one bank account with a billion dollars. That money can offset a million dollars of credit card debt just fine despite being spread out.

If you really think offsetting works, then you shouldn't find it repugnant to have some negatives that are offset. If you find it repugnant, you disagree with offsetting in that case.

I disagree with offsetting suffering – one person being happy does not simply cancel out someone else being victimized – and I figure most people also disagree with suffering offsetting. I also disagree with offsetting in epistemology. Money, as a fungible commodity, is something where offsetting works especially well. Similarly, offsetting would work well for barrels of oil of a standard size and quality, although oil is harder to transport than money so location matters more.

Bonus Error by Upvoters

At a glance (I haven't read it yet as I write this section), the article looks high effort. It has ~22 upvoters but no comments, no feedback, no hints about how to get feedback next time, no engagement with its ideas. I think that's really problematic and says something bad about the community and upvoting norms. I talk about this more at the beginning of my screen recording.

Update after reading the article: I can see some more potential reasons the article got no engagement (too specialized, too hard to read if you aren't familiar with the field, not enough introductory framing of why this matters) but someone could have at least said that. Upvoting is actually misleading feedback if you have problems like that with the article.

Bonus Literature on Maximizing or Minimizing Moral Values

https://www.curi.us/1169-morality

This article, by me, is about maximizing squirrels as a moral value, and more generally about there being a lot of actions and values which are largely independent of your goal. So if it was minimizing squirrels or maximizing bison, most of the conclusions are the same.

I commented on this some in my screen recording after the upvoters criticism, maybe 20min in.

Bonus Comments on Offsetting

(This section was written before the three errors, one of which ended up being related to this.)

Offsetting views are problematic in epistemology too, not just morality/axiology. I've been complaining about them for years. There's a huge, widespread issue where people basically ignore criticism – don't engage with it and don't give counter-arguments or solutions to the problems it raises – because it's easier to go get a bunch more positive points elsewhere to offset the criticism. Or if they already think their idea already has a ton of positive points and a significant lead, then they can basically ignore criticism without even doing anything. I commented on this verbally around 25min into the screen recording.

Screen Recording

I recorded my screen and talked while creating this. The recording has a lot of commentary that isn't written down in this post.

https://www.youtube.com/watch?v=d2T2OPSCBi4



Criticizing "Against the singularity hypothesis"

I proposed a game on the Effective Altruism forum where people submit texts to me and I find three errors. I wrote this for that game. This is a duplicate of my EA post. I criticize Against the singularity hypothesis by David Thorstad.

Introduction

FYI, I disagree with the singularity hypothesis, but primarily due to epistemology, which isn't even discussed in this article.

Error One

As low-hanging fruit is plucked, good ideas become harder to find (Bloom et al. 2020; Kortum 1997; Gordon 2016). Research productivity, understood as the amount of research input needed to produce a fixed output, falls with each subsequent discovery.

By way of illustration, the number of FDA-approved drugs per billion dollars of inflation-adjusted research expenditure decreased from over forty drugs per billion in the 1950s to less than one drug per billion in the 2000s (Scannell et al. 2012). And in the twenty years from 1971 to 1991, inflation-adjusted agricultural research expenditures in developed nations rose by over sixty percent, yet growth in crop yields per acre dropped by fifteen percent (Alston et al. 2000). The problem was not that researchers became lazy, poorly educated or overpaid. It was rather that good ideas became harder to find.

There are many other reasons for drug research progress to slow down. The healthcare industry, as well as science in general (see e.g. the replication crisis), are really broken, and some of the problems are newer. Also maybe they're putting a bunch of work into updates to existing drugs instead of new drugs.

Similarly, decreasing crop yield growth (in other words, yields are still increasing but by lower percentages) could have many other causes. Also, decreasing crop yield growth is a different thing than a decrease in the number of new agricultural ideas that researchers come up with – it's not even the right quantity to measure to make his point. It's a proxy for the actual thing his argument relies on, he makes no attempt to consider how good or bad of a proxy it is, and I can easily think of some reasons it wouldn't be a very good proxy.

The comment about researchers not becoming lazy, poorly educated or overpaid is an unargued assertion.

So these are bad arguments which shouldn't convince us of the author's conclusion.

Error Two

Could the problem of improving artificial agents be an exception to the rule of diminishing research productivity? That is unlikely.

Asserting something is unlikely isn't an argument. His followup is to bring up Moore's law potentially ending, not to give an actual argument.

As with the drug and agricultural research, his points are bad because singularity claims are not based on extrapolating patterns from current data, but rather on conceptual reasoning. He didn't even claim his opponents were doing that in the section formulating their position, and my pre-existing understanding of their views is that they use conceptual arguments, not extrapolation from existing data/patterns (there is no existing data about AGI to extrapolate from, so they use speculative arguments, which is OK).

Error Three

one cause of diminishing research productivity is the difficulty of maintaining large knowledge stocks (Jones 2009), a problem at which artificial agents excel.

You can't just assume that AGIs will be anything like current software including "AI" software like AlphaGo. You have to consider what an AGI would be like before you can even know if it'd be especially good at this or not. If the goal with AGI is in some sense to make a machine with human-like thinking, then maybe it will end up with some of the weaknesses of humans too. You can't just assume it won't. You have to envision what an AGI would be like, or what many different things it might be like that would work (narrow it down to various categories and rule some things out) before you consider the traits it'd have.

Put another way, in MIRI's conception, wouldn't mind design space include both AGIs that are good or bad at this particular category of task?

Error Four

It is an unalterable mathematical fact that an algorithm can run no more quickly than its slowest component. If nine-tenths of the component processes can be sped up, but the remaining processes cannot, then the algorithm can only be made ten times faster. This creates the opportunity for bottlenecks unless every single process can be sped up at once.

This is wrong due to "at once" at the end. It'd be fine without that. You could speed up 9 out of 10 parts, then speed up the 10th part a minute later. You don't have to speed everything up at once. I know it's just two extra words but it doesn't make sense when you stop and think about it, so I think it's important. How did it seem to make sense to the author? What was he thinking? What process created this error? This is the kind of error that's good to post mortem. (It doesn't look like any sort of typo; I think it's actually based on some sort of thought process about the topic.)
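To make the arithmetic concrete, here's a minimal sketch (my own illustration, not anything from the paper) using the standard Amdahl-style speedup formula. It shows that the overall speedup depends only on which parts get sped up and by how much, not on whether the speedups happen at once or one after another.

```python
# A minimal sketch (my illustration, not from the paper): overall speedup
# depends only on which parts are sped up, not on the timing of the speedups.

def overall_speedup(fractions, speedups):
    """Amdahl-style speedup.

    fractions: share of the original runtime taken by each part (sums to 1).
    speedups: factor by which each part is sped up (1 = unchanged).
    """
    new_time = sum(f / s for f, s in zip(fractions, speedups))
    return 1 / new_time

# Ten equal parts. Speed up nine of them enormously, leave the tenth alone:
print(overall_speedup([0.1] * 10, [1000] * 9 + [1]))  # ~9.9x, bounded by 10x

# Speed up the tenth part too, "a minute later". The end state is identical
# to having sped up all ten parts at once:
print(overall_speedup([0.1] * 10, [1000] * 10))       # 1000x
```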

Error Five

Section 3.2 doesn't even try to consider any specific type of research an AGI would be doing, or to argue that good ideas would get harder to find for that research and would thereby slow down singularity-relevant progress.

Similarly, section 3.3 doesn't try to propose a specific bottleneck and explain how it'd get in the way of the singularity. He does bring up one specific type of algorithm – search – but doesn't say why search speed would be a constraint on reaching the singularity. Whether exponential search speed progress is needed depends on specific models of how the hardware and/or software are improving and what they're doing.

There's also a general lack of acknowledgement of, or engagement with, counter-arguments that I can easily imagine pro-singularity people making (e.g. responding to the good ideas getting harder to find point by saying some stuff about mind design space containing plenty of minds that are powerful enough for a singularity with a discontinuity, even if progress slows down later as it approaches some fundamental limits). Similarly, maybe there is something super powerful in mind design space that doesn't rely on super fast search. Whether there is, or not, seems hard to analyze, but this paper doesn't even try. (The way I'd approach it myself is indirectly via epistemology first.)

Error Six

Section 2 mixes Formulating the singularity hypothesis (the section title) with other activities. This is confusing and biasing, because we don't get to read about what the singularity hypothesis is without the author's objections and dislikes mixed in. The section is also vague on some key points (mentioned in my screen recording) such as what an order of magnitude of intelligence is.

Examples:

Sustained exponential growth is a very strong growth assumption

Here he's mixing explaining the other side's view with setting it up to attack it (as requiring a super high evidential burden due to such strong claims). He's not talking from the other side's perspective, trying to present it how they would present it (positively); he's instead focusing on highlighting traits he dislikes.

A number of commentators have raised doubts about the cogency of the concept of general intelligence (Nunn 2012; Prinz 2012), or the likelihood of artificial systems acquiring meaningful levels of general intelligence (Dreyfus 2012; Lucas 1964; Plotnitsky 2012). I have some sympathy for these worries.[4]

This isn't formulating the singularity hypothesis. It's about ways of opposing it.

These are strong claims, and they should require a correspondingly strong argument to ground them. In Section 3, I give five reasons to be skeptical of the singularity hypothesis’ growth claims.

Again this doesn't fit the section it's in.

Padding

Section 3 opens with some restatements of material from section 2, some of which was also in the introduction. And look at this repetitiveness (my bolds):

Near the bottom of page 7 begins section 3.2:

3.2 Good ideas become harder to find

Below that we read:

As low-hanging fruit is plucked, good ideas become harder to find

Page 8 near the top:

It was rather that good ideas became harder to find.

Later in that paragraph:

As good ideas became harder to find

Also, page 11:

as time goes on ideas for further improvement will become harder to find.

Page 17:

As time goes on ideas for further improvement will become harder to find.

Amount Read

I read to the end of section 3.3 then briefly skimmed the rest.

Screen Recording

I recorded my screen and made verbal comments while writing this:

https://www.youtube.com/watch?v=T1Wu-086frA


Update: Thorstad replied and I wrote a followup post in response: Credentialed Intellectuals Support Misquoting


Elliot Temple | Permalink | Messages (0)

Organized EA Cause Evaluation

I wrote this for the Effective Altruism forum. Link.


Suppose I have a cause I’m passionate about. For example, we’ll use fluoridated water. It’s poison. It lowers IQs. Changing this one thing is easy (just stop purposefully doing it) and has negative cost (it costs money to fluoridate water; stopping saves money) and huge benefits. That gives it a better cost to benefit ratio than any of EA’s current causes. I come to EA and suggest that fluoridated water should be the highest priority.

Is there any organized process by which EA can evaluate these claims, compare them to other causes, and reach a rational conclusion about resource allocation to this cause? I fear there isn’t.

Do I just try to write some posts rallying people to the cause? And then maybe I’m right but bad at rallying people. Or maybe I’m wrong but good at rallying people. Or maybe I’m right and pretty good at rallying people, but someone else with a somewhat worse cause is somewhat better at rallying. I’m concerned that my ability to rally people to my cause is largely independent of the truth of my cause. Marketing isn’t truth seeking. Energy to keep writing more about the issue, when I already made points (that are compelling if true, and which no one has given a refutation of), is different than truth seeking.

Is there any reasonable on-boarding process to guide me to know how to get my cause taken seriously with specific, actionable steps? I don’t think so.

Is there any list of all evaluated causes, their importance, and the reasons? With ways to update the list based on new arguments or information, and ways to add new causes to the list? I don’t think so. How can I even know how important my cause is compared to others? There’s no reasonable, guided process that EA offers to let me figure that out.

Comparing causes often depends on some controversial ideas, so a good list would take that into account and give alternative cause evaluations based on different premises, or at least clearly specify the controversial premises it uses. Ways those premises can be productively debated are also important.

Note: I’m primarily interested in processes which are available to anyone (you don’t have to be famous or popular first, or have certain credentials given to you by a high status authority) and which can be done in one’s free time without having to get an EA-related job. (Let’s suppose I have 20 hours a week available to volunteer for working on this stuff, but I don’t want to change careers. I think that should be good enough.) Being popular, having credentials, or working at a specific job are all separate issues from being correct.

Also, based on a forum search, stopping water fluoridation has never been proposed as an EA cause, so hopefully it’s a fairly neutral example. But this appears to indicate a failure to do a broad, organized survey of possible causes before spending millions of dollars on some current causes, which seems bad. (It could also be related to the lack of any good way to search EA-related information that isn’t on the forum.)

Do others think these meta issues about EA’s organization (or lack thereof) are important? If not, why? Isn’t it risky and inefficient to lack well-designed processes for doing commonly-needed, important tasks? If you just have a bunch of people doing things their own way, and then a bunch of other people reaching their own evaluations of the subset of information they looked at, that is going to result in a social hierarchy determining outcomes.


Elliot Temple | Permalink | Messages (0)

Misquoting and Scholarship Norms at EA

Link to the EA version of this post.


EA doesn’t have strong norms against misquoting or some other types of errors related to having high intellectual standards (which I claim are important to truth seeking). As I explained, misquoting is especially bad: “Misquoting puts words in someone else’s mouth without their consent. It takes away their choice of what words to say or not say, just like deadnaming takes away their choice of what name to use.”

Despite linking to lizka clarifying the lack of anti-misquoting norms, I got this feedback on my anti-misquoting article:

One of your post spent 22 minutes to say that people shouldn't misquote. It's a rather obvious conclusion that can be exposed in 3 minutes top. I think some people read that as a rant.

So let me try to explain that EA really doesn’t have strong anti-misquoting norms or strong norms for high intellectual standards and scholarship quality. What would such norms look like?

Suppose I posted about a single misquote I found in Rationality: From AI to Zombies. Suppose it was one word added or omitted and it didn’t change the meaning much. Would people care? I doubt it. How many people would want to check other quotes in the book for errors? Few, maybe zero. How many would want to post mortem the cause of the error? Few, maybe zero. So there is no strong norm against misquotes. Am I wrong? Does anyone really think that finding a single misquote in a book this community likes would result in people making large updates to their views (even if the misquote is merely inaccurate, but doesn’t involve a large change in meaning)?

Similarly, I’m confident that there’s no strong norm against incorrect citations. E.g. suppose in RAZ I found one cite to a study with terrible methodology or glaring factual errors. Or suppose I found one cite to a study that says something different than what it’s cited for (e.g. it’s cited as saying 60% X but the study itself actually says 45% X). I don’t think anything significant would change based on pointing out that one cite error. RAZ’s reputation would not go down substantially. There’d be no major investigation into what process created this error and what other errors the same process would create. It probably wouldn’t even spark debates. It certainly wouldn’t result in a community letter to EY, signed by thousands of people with over a million total karma, asking for an explanation. The community simply tolerates such things. This is an example of intellectual standards I consider too low and believe are lowering EA’s effectiveness a large amount.

Even most of RAZ’s biggest fans don’t really expect the book to be correct. They only expect it to be mostly correct. If I find an error, and they agree it’s an error, they’ll still think it’s a great book. Their fandom is immune to correction via pointing out one error.

(Just deciding “RAZ sucks” due to one error would be wrong too. The right reaction is more complicated and nuanced. For some information on the topic, see my Resolving Conflicting Ideas, which links to other articles including We Can Always Act on Non-Criticized Ideas.)

What about two errors? I don’t think that would work either. What about three errors? Four? Five? Nah. What exactly would work?

What about 500 errors? If they’re all basically indisputable, then I’ll be called picky and pedantic, people will say that other books wouldn’t stand up to a similar level of scrutiny either, and people will say that the major conclusions are still valid.

If the 500 errors include more substantive claims that challenge the book’s themes and concepts, then they’ll be more debatable than factual errors, misquotes, wrong cites, simple, localized logic errors, grammar errors, etc. So that won’t work either. People will disagree with my criticism. And then they won’t debate their disagreement persistently and productively until we reach a conclusion. Some people won’t say anything at all. Others will comment 1-5 times expressing their disagreement. Maybe a handful of people will discuss more, and maybe even change their minds, but the community in general won’t change their minds just because a few people did.

There are errors that people will agree are in fact errors, but will dismiss as unimportant. And there are errors which people will deny are errors. So what would actually change many people’s minds?

Becoming a high status, influential thought leader might work. But social climbing is a very different process than truth seeking.

If people liked me (or whoever the critic was) and liked some alternative I was offering, they’d be more willing to change their minds. Anyone who wanted to say “Yeah, Critical Fallibilism is great. RAZ is outdated and flawed.” would be receptive to the errors I pointed out. People with the right biases or agendas would like the criticisms because the criticisms help them with their goals. Other people would interpret the criticism as fighting against their goals, not helping – e.g. AI alignment researchers basing a lot of their work on premises from RAZ would tend to be hostile to the criticism instead of grateful for the opportunity to stop using incorrect premises and thereby wasting their careers.

I’m confident that I could look through RAZ and find an error. If I thought it’d actually be useful, I’d do that. I did recently find two errors in a different book favored by the LW and EA communities (and I wasn’t actually looking for errors, so I expect there are many others – actually there were some other errors I noticed but those were more debatable). The first error I found was a misquote. I consider it basically inexcusable. It’s from a blog post, so it would be copy/pasted not typed in, so why would there be any changes? That’s a clear-cut error which is really hard to deny is an error. I found a second related error which is worse but requires more skill and judgment to evaluate. The book has a bunch of statements summarizing some events and issues. The misquote is about that stuff. And, setting aside the misquote, the summary is wrong too. It gives an inaccurate portrayal of what happened. It’s biased. The misquote error is minor in some sense: it’s not particularly misleading. The misleading, biased summary of events is actually significantly wrong and misleading.

I can imagine writing two different posts about it. One tries to point out how the summary is misleading in a point-by-point way breaking it down into small, simple points that are hard to deny. This post would use quotes from the book, quotes from the source material, and point out specific discrepancies. I think people would find this dry and pedantic, and not care much.

In my other hypothetical post, I would emphasize how wrong and misleading what the book says is. I’d focus more on the error being important. I’d make less clear-cut claims so I’d be met with more denials.

So I don’t see what would actually work well.

That’s why I haven’t posted about the book’s problems previously and haven’t named the guilty book here. RAZ is not the book I found these errors in. I used a different example on purpose (and, on the whole, I like RAZ, so it’s easier for me to avoid a conflict with people who like it). I don’t want to name the book without a good plan for how to make my complaints/criticisms productive, because attacking something that people like, without an achievable, productive purpose, will just pointlessly alienate people.


Elliot Temple | Permalink | Messages (0)

Downvotes Are Evidence

I also posted this on the Effective Altruism forum.


Downvotes are evidence. They provide information. They can be interpreted, especially when they aren’t accompanied by arguments or reasons.

Downvotes can mean I struck a nerve. They can provide evidence of what a community is especially irrational about.

They could also mean I’m wrong. But with no arguments and no links or cites to arguments, there’s no way for me to change my mind. If I was posting some idea I thought of recently, I could take the downvotes as a sign that I should think it over more. However, if it’s something I’ve done high-effort thinking about for years, and written tens of thousands of words about, then “reconsider” is not a useful action with no further information. I already considered it as best I know how to.

People can react in different ways to downvotes. If your initial reaction is to stop writing about whatever gets downvotes, that is evidence that you care a lot about social climbing and what other people think of you (possibly more than you value truth seeking). On the other hand, one can think “strong reactions can indicate something important” and write more about whatever got downvoted. Downvotes can be a sign that a topic is important to discuss further.

Downvotes can also be evidence that something is an outlier, which can be a good thing.

Downvoting Misquoting Criticism

One of the things that seems to have struck a nerve with some people, and has gotten me the most downvotes, is criticizing misquoting (examples one and two both got to around -10). I believe the broader issue is my belief that “small” or “pedantic” errors are (sometimes) important, and that raising intellectual standards would make a large overall difference to EA’s correctness and therefore effectiveness.

I’ll clarify this belief more in future posts despite the cold reception and my expectation of getting negative rewards for my efforts. I think it’s important. It’s also clarified a lot in prior writing on my websites.

There are practical issues regarding how to deal with “small” errors in a time-efficient way. I have some answers to those issues but I don’t think they’re the main problem. In other words, I don’t think many people want to be able to pay attention to small errors, but are limited by time constraints and don’t know practical time-saving solutions. I don’t think it’s a goal they have that is blocked by practicality. I think people like something about being able to ignore “small” or “pedantic” errors, and practicality then serves as a convenient excuse to help hide the actual motivation.

Why do I think there’s any kind of hidden motivation? It’s not just the disinterest in practical solutions to enable raising intellectual standards (which I’ve seen year after year in other communities as well, btw). Nor is it just the downvotes that are broadly not accompanied by explanations or arguments. It’s primarily the chronic ambiguity about whether people already agree with me and think obviously misquotes are bad on the one hand or disagree with me and think I’m horribly wrong on the other hand. Getting a mix of responses including both ~“obviously you’re right and you got a negative reaction because everyone already knows it and doesn’t need to hear it again” and ~“you’re wrong and horrible” is weird and unusual.

People generally seem unwilling to actually clearly state what their misquoting policies/attitudes are, but nevertheless say plenty of things that indicate clear disagreements with me (when they speak about it at all, which they often don’t but sometimes do). And this allows a bunch of other people to think there already are strong anti-misquoting norms, including people who do not actually personally have such a norm. In my experience, this is widespread and EA seems basically the same as most other places about it.

I’m not including examples of misquotes, or ambiguous defenses of misquotes, because I don’t want to make examples of people. If someone wants to claim they’re right and make public statements they stand behind, fine, I can use them as an example. But if someone merely posts on the forum a bit, I don’t think I should interpret that as opting in to being some kind of public intellectual who takes responsibility for what he says, claims what he says is important, and is happy to be quoted and criticized. (People often don’t want to directly admit that they don’t think what they post is important, while also not wanting to claim it’s important. That’s another example of chronic ambiguity that I think is related to irrationality.) If someone says to me “This would convince me if only you had a few examples” I’ll consider how to deal with that, but I don’t expect that reaction (and if you care that much you can find two good examples by reviewing my EA posting history, and many many examples of representative non-EA misquotes on my websites and forum).

Upvoting Downvoted Posts

There’s a pattern on Reddit, which I’ve also observed on EA, where people upvote stuff that’s at negative points which they don’t think deserves to be negative. They wouldn’t upvote it if it had positive votes. You can tell because the upvoting stops when it gets back to neutral karma (actually slightly less on EA due to strong votes – people tend to stop at 1, not at the e.g. 4 karma an EA post might start with).

In a lot of ways I think this is a good norm. Some people are quite discouraged by downvotes and feel bad about being disliked. The lack of reasons to accompany downvotes makes that worse for some types of people (though others would only feel worse if they were told reasons). And some downvotes are unwarranted and unreasonable so counteracting those is a reasonable activity.

However, there’s a downside to upvoting stuff that’s undeservedly downvoted. It hides evidence. It makes it harder for people to know what kinds of things get how many downvotes. Downvotes can actually be important evidence about the community. Reddit is larger and many subreddits have issues with many new posts tending to get a few downvotes that do not reflect the community and might even come from bots. I’m not aware of EA having this problem. It’s stuff that is downvoted more than normal which provides useful evidence. On EA, a lot of posts get no votes, or just a few upvotes. I believe getting to -10 quickly isn’t normal and is useful evidence of something, rather than something that should just be ignored as meaningless. (Also, it only happens to a minority of my posts. The majority get upvotes, not downvotes.)


Elliot Temple | Permalink | Messages (0)

EA Misquoting Discussion Summary

I quit the Effective Altruism forum due to a new rule requiring all new posts and comments be basically put in the public domain without copyright, so anyone could e.g. sell a book of my posts without my consent (they’d just have to give attribution). More info. I had a bunch of draft posts, so I’m posting some of them here with minimal editing. In general, I’m not going to submit them as link posts at EA myself. If you think they should be shared with EA as link posts, please do it yourself. I’m happy for other people to share links to my work at EA or on social media. Please share stuff in whatever ways you think are good to do.


Let me summarize events from my perspective.

I read a book EA likes and found a misquote in it (and other problems).

Someone misquoted me twice in EA forum discussion. They seemed to think that was OK, not a big deal, etc. And no one in the audience took my side or said anything negative about misquotes.

The person who misquoted me (as well as anyone reading) didn’t want to talk about it or debate the matter.

In an open questions thread, I asked about EA’s norms regarding misquotes.

In response, someone misquoted the EA norms to me, which is pretty ironic and silly.

Their claim about EA norms was basically that misquotes aren’t important.

When I pointed out that they had misquoted, they didn’t really seem to care or think that was bad. Again, there were no signs the audience thought misquoting was bad, either.

Lizka, who was the person being misquoted since she wrote the EA norms document, commented on the matter. Lizka’s comment communicated:

  • She agrees with me that the norms were misquoted.
  • But she didn’t really mind or care.
  • EA has no strong norm against misquoting.
  • The attitude to misquotes is basically like typos: mistakes and accidents happen and we should be tolerant and forgiving about that.

Again, no one wanted to talk with me about the matter or debate it.

I wrote an article explaining that misquoting is bad. I compared misquoting to deadnaming because the misquoted norm was actually about deadnaming, and I thought that read as a whole it’s actually a good norm, and the same norm should be used for misquoting.

The EA norm on deadnaming is basically: first, don’t do it, and second, if it’s a genuine accident, that’s alright, but don’t do it again.

Whereas EA’s current misquoting norm is more like: misquotes are technically errors, so that’s bad, but no one particularly cares.

Misquotes are actually like deadnaming. Deadnaming involves exercising control over someone else’s name without their consent, when their name should be within their control. Misquotes involve exercising control over someone else’s words/speech without their consent, when their words/speech should be within their control. Misquotes and deadnaming both violate the personal boundaries of a person and violate consent.

Misquotes are also bad for reasons of scholarship, accuracy and truth seeking. I believe the general attitude of not caring about “small” errors is a big mistake.

Misquotes are accepted at EA due to the combination of not recognizing how they violate consent and victimize someone (like deadnaming), and having a culture tolerant of “small” errors and imprecision.

So, I disagree, and I have two main reasons. And people are not persuaded and don’t want to debate or give any counter-arguments. Which gets into one of the other main topics I’ve posted about at EA, which is debating norms and methodology.

All this so far is … fine. Whatever. The weird part comes next.

The main feedback I’ve gotten regarding misquoting and deadnaming is not disagreement. No one has clearly admitted to disagreeing with me and e.g. claimed that misquoting is not like deadnaming.

Instead, I’ve been told that I’m getting downvoted because people agree with me too much: they think it’s so obvious and uncontroversial that it’s a waste of time to write about.

That is not what’s happening and it’s a very bizarre excuse. People are so eager to avoid a debate that they deny disagreeing with me, even when they could tell from the title that they do disagree with me. None of them has actually claimed that they do think misquoting is like deadnaming, and should be reacted to similarly.

Partly, people are anti-misquoting in some weaker way than I am, just like they are anti-typos but not very strongly. The nuance of “I am more against misquoting than you are, so we disagree” seems too complex for some people. They want to identify as anti-misquoting, so they don’t want to take the pro-misquoting side of a debate. The actual issue is how bad misquoting is (or we could be more specific and specify 20 ways misquoting might be bad, 15 of which I believe, and only 5 of which they believe, and then debate the other 10).

I wrote a second article trying to clarify to people that they disagree with me. I gave some guided thinking so they could see it for themselves. E.g. if I pointed out a misquote in the sequences, would you care? Would it matter much to you? Would you become concerned and start investigating other quotes in the book? I think we all know that if I found a single misquote in that book, it would result in no substantive changes. I think it should; you don’t; we disagree.

After being downvoted without explanation on the second article about misquoting, I wrote an article about downvotes being evidence, in which I considered different interpretations of downvotes and different ways to react to them. This prompted the most mixed voting I’d gotten yet and a response saying people were probably just downvoting me because they didn’t see the point of my anti-misquoting articles because they already agree with me. That guy refused to actually say he agrees with me himself, saying basically (only when pressed) that he’s unsure and neutral and not very interested in thinking or talking about it. If you think it’s a low priority, unimportant issue, then you disagree with me, since I think it’s very important. Does he also think deadnaming is low priority and unimportant? If not, then he clearly disagrees with me.

It’s so weird for people who disagree with me to insist they agree with me. And Lizka already clarified that she disagrees with me, and made a statement about what the EA norms are, and y’all are still telling me that the community in general agrees with me!?

Guys, I’ve struck a nerve. I got downvotes because people didn’t like being challenged in this way, and I’m getting very bizarre excuses to avoid debate because this is a sensitive issue that people don’t want to think or speak clearly about. So it’s important for an additional reason: because people are biased and irrational about it.

My opinions on this matter predate EA (though the specific comparison to deadnaming is a new way of expressing an old point).

I suspect one reason the deadnaming comparison didn’t go over well is that most EAers don’t care much about deadnaming either (and don’t have nuanced, thoughtful opinions about it), although they aren’t going to admit that.

Most deadnaming and most misquoting is not an innocent accident. I think people know that with deadnaming, but deny it with misquoting. But explain to me: how did the wording change in a quote that you could have copy/pasted? That’s generally not an innocent accident. How did you leave out the start of the paragraph and take a quote out of context? That was not a random accident. How did you type in a quote from paper and then forget to triple check it for typos? That is negligence at best, not an accident.

Negligently deadnaming people is not OK. Don’t do it. Negligently misquoting is bad too for more reasons: violates consent and harms scholarship.

This is all related to more complex and more important issues, but if I can’t persuade anyone of this smaller initial point that should be easy, I don’t think trying to say more complex stuff is going to work. If people won’t debate a relatively small, isolated issue, they aren’t going to successfully debate a complex topic involving dozens of issues of similar or higher difficulty as well as many books. One purpose of talking about misquoting is that it’s a test issue to see how people handle debate and criticism, plus it’s an example of one of the main broader themes I’d like to talk about, which is the value of raising intellectual standards. If you can’t win with the small test issue that shouldn’t be very hard, you’ve gotta figure out what is going on. And if the responses to the small test issue are really bizarre and involve things like persistently denying disagreeing while obviously disagreeing … you really gotta figure out what is going on instead of ignoring that evidence. So I’ve written about it again (this post).

If you want to find details of this stuff on the EA forum and see exactly what people said to me, besides what is linked in my two articles about misquoting that I linked above, you can also go to my EA user profile and look through my post and comment history there.


Elliot Temple | Permalink | Messages (0)

Meta Criticism

I quit the Effective Altruism forum due to a new rule requiring all new posts and comments be basically put in the public domain without copyright, so anyone could e.g. sell a book of my posts without my consent (they’d just have to give attribution). More info. I had a bunch of draft posts, so I’m posting some of them here with minimal editing. In general, I’m not going to submit them as link posts at EA myself. If you think they should be shared with EA as link posts, please do it yourself. I’m happy for other people to share links to my work at EA or on social media. Please share stuff in whatever ways you think are good to do.


Meta criticism is potentially more powerful than direct criticism of specific flaws. Meta criticism can talk about methodologies or broad patterns. It’s a way of taking a step back, away from all the details, to look critically at a bigger picture.

Meta criticism isn’t very common. Why? It’s less conventional, normal, mainstream or popular. That makes it harder to get a positive reception for it. It’s less well understood or respected. Also, meta criticism tends to be more abstract, more complicated, harder to get right, and harder to act on. In return for those downsides, it can be more powerful.

On average, or as some kind of general trend, is the cost-to-benefit ratio for meta criticism better or worse than for regular criticism? I don’t really know. I think neither one has a really clear advantage and we should try some of both. Plus, to some extent, they do different things, so again it makes sense to use both.

I think there’s an under-exploited area with high value, which is some of the most simple, basic meta criticisms. These are easier to understand and deal with, yet can still be powerful. I think these initial meta criticisms tend to be more important than concrete criticisms. Also, meta criticisms are more generic so they can be re-used between different discussions or different topics more, and that’s especially true for the more basic meta criticisms that you would start with (whereas more advanced meta criticism might depend on the details of a topic more).

So let’s look at examples of introductory meta criticisms which I think have a great cost-to-benefit ratio (given that people aren’t hostile to them, which is a problem sometimes). These examples will help give a better sense of what meta criticisms are in addition to being useful issues to consider.

Do you act based on methods?

“You” could be a group or individual. If the answer is “no” that’s a major problem. Let’s assume it’s “yes”.

Are the methods written down?

Again, “no” is a major problem. Assuming “yes”:

Do the methods contain explicit features designed to reduce bias?

Again, “no” is a major problem. Examples of anti-bias features include transparency, accountability, anti-bias training or ways of reducing the importance of social status in decision making (such as some decisions being made in random or blinded ways).

Many individuals and organizations in the world have already failed within the first three questions. Others could technically say “yes” but their anti-bias features aren’t very good (e.g. I’m sure every large non-crypto bank has some written methods that employees use for some tasks which contain some anti-bias features of some sort despite not really even aiming at rationality).

But, broadly, those with “no” answers or poor answers don’t want to, and don’t, discuss this and try to improve. Why? There are many reasons but here’s a particularly relevant one: They lack methods of talking about it with transparency, accountability and other anti-bias features. The lack of rational discussion methodology protects all their other errors like lack of methodology for whatever it is that they do.

One of the major complicating factors is how groups work. Some groups have clear leadership and organization structures, with a hierarchical power structure which assigns responsibilities. In that case, it’s relatively easy to blame leadership for big picture problems like lack of rational methods. But other groups are more of a loose association without a clear leadership structure that takes responsibility for considering or addressing criticism, setting policies, etc. Not all groups have anyone who could easily decide on some methods and get others to use them. EA and LW are examples of groups with significant voids in leadership, responsibility and accountability. They claim to have a bunch of ideas, but it’s hard to criticize them because of the lack of official position statements by them (or when there is something kinda official, like The Sequences, the people willing to talk on the forum often disagree with or are ignorant of a lot of that official position – there’s no way to talk with a person who advocates the official position as a whole and will take responsibility for addressing errors in it, or who has the power to fix it). There’s no reasonable, reliable way to ask EA a question like “Do you have an a written methodology for rational debate?” and get an official answer that anyone will take responsibility for.

So one of the more basic, introductory areas for meta criticism/questioning is to ask about rational methodology. And a second is to ask about leadership, responsibility, and organization structure. If there is an error, who can be told who will fix it, and how does one get their attention? If some clarifying questions are needed before sharing the error, how does one get them answered? If the answers are things like “personally contact the right people and become familiar with the high status community members” that is a really problematic answer. There should be publicly accessible and documented options which can be used by people who don’t have social status within the community. Social status is a biasing, irrational approach which blocks valid criticism from leading to change. Also, even if the situation is better than that, many people won’t know it’s better, and won’t try, unless you publicly tell them it’s better in a convincing way. To be convincing, you have to offer specific policies with guarantees and transparency/accountability, rather than saying a variant of “trust us”.

Guarantees can be expensive especially when they’re open to the general public. There are costs/downsides here. Even non-guaranteed options, such as an optional suggestion box for unsolicited advice, even if you never reply to anything, have a cost. If you promised to reply to every suggestion, that would be too expensive. Guarantees need to have conditions placed on them. E.g. “If you make a suggestion and read the following ten books and pay $100, then we guarantee a response (limit: one response per person per year).” That policy would result in a smaller burden than responding to all suggestions, but it still offers a guarantee. Would the burden still be too high? It depends how popular you are. Is a response a very good guarantee? Not really. You might read the ten books, pay the money, and get the response “No.” or “Interesting idea; we’ll consider it.” and nothing more. That could be unsatisfying. Some additional guarantees about the nature of the response could help. There is a ton of room to brainstorm how to do these things well. These kinds of ideas are very under-explored. An example stronger guarantee would be to respond with either a decisive refutation or else to put together an exploratory committee to investigate taking the suggestion. Such committees have a poor reputation and could be replaced with some other method of escalating the idea to get more consideration.

Guarantees should focus on objective criteria. For example, saying you’ll respond to all “good suggestions” would be a poor guarantee to offer. How can someone predictably know in advance whether their suggestion will meet that condition or not? Design policies so they don’t let decision makers use arbitrary judgment, which could easily be biased or wrong. For example, you might judge “good” suggestions using the “I’ll know it when I see it” method. That would be very arbitrary and a bad approach. If you say “good” means “novel, interesting, substantive and high value if correct” that is a little better, but still very bad, because a decision maker can arbitrarily judge whatever he wants as bad and there’s no effective way to hold him accountable, determine his judgment was an error, get that error corrected, etc. There’s also poor predictability for people considering making suggestions.
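As a minimal sketch of the difference (my own illustration, using hypothetical conditions borrowed from the example policy above, not an actual EA policy), here's what a guarantee based on objective criteria looks like: every condition is something the suggester can check in advance, so no decision maker gets to apply an “I’ll know it when I see it” judgment.

```python
# Sketch of a response guarantee built from objective, checkable conditions
# (hypothetical policy for illustration only).

from dataclasses import dataclass

@dataclass
class Suggestion:
    author: str
    read_required_books: bool  # condition from the hypothetical example policy
    paid_fee: bool             # condition from the hypothetical example policy
    responses_used_this_year: int

def response_guaranteed(s: Suggestion, yearly_limit: int = 1) -> bool:
    """Every condition is objective, so the outcome is predictable in advance."""
    return (
        s.read_required_books
        and s.paid_fee
        and s.responses_used_this_year < yearly_limit
    )

print(response_guaranteed(Suggestion("John", True, True, 0)))   # True
print(response_guaranteed(Suggestion("Jane", True, False, 0)))  # False: fee unpaid
```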

From what I can tell, my main disagreement with EA is I think EA should have written, rational debate methods, and EA doesn’t think so. I don’t know how to make effective progress on resolving that disagreement because no one from EA will follow any specific rational debate methods. Also EA offers no alternative solution, that I know of, to the same problem that rational debate methods are meant to solve. Without rational debate methods (or an effective alternative), no other disagreements really matter because there’s nothing good to be done about them.


Elliot Temple | Permalink | Messages (0)

EA and Paths Forward

I quit the Effective Altruism forum due to a new rule requiring all new posts and comments be basically put in the public domain without copyright, so anyone could e.g. sell a book of my posts without my consent (they’d just have to give attribution). More info. I had a bunch of draft posts, so I’m posting some of them here with minimal editing. In general, I’m not going to submit them as link posts at EA myself. If you think they should be shared with EA as link posts, please do it yourself. I’m happy for other people to share links to my work at EA or on social media. Please share stuff in whatever ways you think are good to do.


Suppose EA is making an important error. John knows a correction and would like to help. What can John do?

Whatever the answer is, this is something EA should put thought into. They should advertise/communicate the best process for John to use, make it easy to understand and use, and intentionally design it with some beneficial features. EA should also consider having several processes so there are backups in case one fails.

Failure is a realistic possibility here. John might try to share a correction but be ignored. People might think John is wrong even though he’s right. People might think John’s comment is unimportant even though it’s actually important. There are lots of ways for people to reject or ignore a good idea. Suppose that happens. Now EA has made two mistakes which John knows are mistakes and would like to correct. There’s the first mistake, whatever it was, and now also this second mistake of not being receptive to the correction of the first mistake.

How can John get the second mistake corrected? There should be some kind of escalation process for when the initial mistake correction process fails. There is a risk that this escalation process would be abused. What if John thinks he’s right but actually he’s wrong? If the escalation process is costly in time and effort for EA people, and is used frequently, that would be bad. So the process should exist but should be designed in some kind of conservative way that limits the effort it will cost EA to deal with incorrect corrections. Similarly, the initial process for correcting EA also needs to be designed to limit the burden it places on EA. Limiting the burden increases the failure rate, making a secondary (and perhaps tertiary) error correction option more important to have.

When John believes he has an important correction for EA, and he shares it, and EA initially disagrees, that is a symmetric situation. Each side thinks the other is wrong. (That EA is multiple people, and John also might actually be multiple people, makes things more complex, but without changing some of the key principles.) The rational thing to do with this kind of symmetric dispute is not to say “I think I’m right” and ignore the other side. If you can’t resolve the dispute – if your knowledge is inadequate to conclude that you’re right – then you should be neutral and act accordingly. Or you might think you have crushing arguments which are objectively adequate to resolve the dispute in your favor, and you might even post them publicly, and think John is responding in obviously unreasonable ways. In that case, you might manage to objectively establish some kind of asymmetry. How to objectively establish asymmetries in intellectual disagreements is a hard, important question in epistemology which I don’t think has received appropriate research attention (note: it’s also relevant when there’s a disagreement between two ideas within one person).

Anyway, what can John do? He can write down some criticism and post it on the EA forum. EA has a free, public forum. That is better than many other organizations which don’t facilitate publicly sharing criticism. Many organizations either have no forum or delete critical discussions while making no real attempt at rationality (e.g. Blizzard has forums related to its games, but they aren’t very rational, don’t really try to be, and delete tons of complaints). Does EA ever delete dissent or ban dissenters? As someone who hasn’t already spent many years paying close attention, I don’t know and I don’t know how to find out in a way that I would trust. Many forums claim not to delete dissent but actually do; it’s a common thing to lie about. Making a highly credible claim not to delete or punish dissent is important or else John might not bother trying to share his criticism.

So John can post a criticism on a forum, and then people may or may not read it and may or may not reply. Will anyone with some kind of leadership role at EA read it? Maybe not. This is bad. The naive alternative “guarantee plenty of attention from important people to all criticism” would be even worse. But there are many other possible policy options which are better.

To design a better system, we should consider what might go wrong. How could John’s great, valuable criticism receive a negative reaction on an open forum which is active enough that John gets at least a little attention? And how might things go well? If the initial attention John gets is positive, that will draw some additional attention. If that is positive too, then it will draw more attention. If 100% of the attention John gets results in positive responses, his post will be shared and spread until a large portion of the community sees it including people with power and influence, who will also view the criticism positively (by premise) and so they’ll listen and act. A 75% positive response rate would probably also be good enough to get a similar outcome.
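To illustrate the snowball dynamic, here's a toy simulation (my own illustration with made-up parameters, not a claim about actual forum mechanics): each reader who responds positively draws a couple of new readers, and the post either snowballs or dies out depending on the positive-response rate.

```python
# Toy branching model of attention snowballing (made-up parameters).

import random

def readers_reached(p_positive, initial_readers=5, drawn_per_positive=2,
                    cap=10_000, seed=0):
    """Simulate how far a post spreads for a given positive-response rate."""
    random.seed(seed)
    pending = initial_readers  # readers who will look at the post
    total = 0
    while pending and total < cap:
        pending -= 1
        total += 1
        if random.random() < p_positive:
            pending += drawn_per_positive  # positive reactions share the post further
    return total

for p in (1.0, 0.75, 0.4):
    print(p, readers_reached(p))
# With p at 1.0 or 0.75 the spread almost always hits the cap (it snowballs);
# with p at 0.4 each reader draws less than one new reader on average, so the
# spread dies out quickly.
```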

So how might John’s criticism, which we’re hypothetically supposing is true and important, get a more negative reception so that it can’t snowball to get more attention and influence important decision makers?

John might have low social status, and people might judge more based on status than idea quality.

John’s criticism might offend people.

John’s criticism might threaten people in some way, e.g. implying that some of them shouldn’t have the income and prestige (or merely self-esteem) that they currently enjoy.

John’s criticism might be hard to understand. People might get confused. People might lack some prerequisite knowledge and skills needed to engage with it well.

John’s criticism might be very long and hard to get value from just the beginning. People might skim but not see the value that they would see if they read the whole thing in a thoughtful, attentive way. Making it long might be an error by John, but it also might be really hard to shorten and still have a good cost/benefit ratio (it’s valuable enough to justify the length).

John’s criticism might rely on premises that people disagree with. In other words, EA might be wrong about more than one thing. An interconnected set of mistakes can be much harder to explain than a single mistake, even if the critic understands the entire set of mistakes. People might reject criticism of X due to their own mistake Y, and criticism of Y due to their own mistake X. A similar thing can happen involving many more ideas in a much more complicated structure, so that it’s harder for John to point out what’s going on (even if he knows).

What can be done about all these difficulties? My suggestion, in short, is to develop a rational debate methodology and to hold debates aimed at reaching conclusions about disagreements. The methodology must include features for reducing the role of bias, social status, dishonesty, etc. In particular, it must prevent people from arbitrarily stopping any debates whenever they feel like it (which tends to include shortly before losing, which prevents the debate from being conclusive). The debate methodology must also have features for reducing the cost of debate, and ending low value debates, especially since it won’t allow arbitrarily quitting at any moment. A debate methodology is not a perfect, complete solution to all the problems John may face but it has various merits.

People often assume that rational, conclusive debate is too much work so the cost/benefit ratio on it is poor. This is typically a general opinion they have rather than an evaluation of any specific debate methodology. I think they should reserve judgment until after they review some written debate methodologies. They should look at some actual methods and see how much work they are, and what benefits they offer, before reaching a conclusion about their cost/benefit ratio. If the cost/benefit ratios are poor, people would try to make adjustments to reduce costs and increase benefits before giving up on rational debate.

Can people have rational debate without following any written methodology? Sure that’s possible. But if that worked well for some people and resulted in good cost/benefit ratios, wouldn’t it make sense to take whatever those successful debate participants are doing and write it down as a method? Even if the method had vague parts that’d be better than nothing.

Although under-explored, debate methodologies are not a new idea. E.g. Russell L. Ackoff published one in a book in 1978 (pp. 44-47). That’s unfortunately the only very substantive, promising one I’ve found besides developing one of my own. I bet there are more to be found somewhere in existing literature though. The main reasons I thought Ackoff’s was a valuable proposal were that 1) it was based on following specific steps (in other words, you could make a flowchart out of it); and 2) it aimed at completeness, including using recursion to enable it to always succeed instead of getting stuck. Partial methods are common and easy to find, e.g. “don’t straw man” is a partial debate method, but it’s just suitable for being one little part of an overall method (and it lacks specific methods of detecting straw men, handling them when someone thinks one was done, etc. – it’s more of an aspiration than specific actions to achieve that aspiration).

A downside of Ackoff’s method is that it lacks stopping conditions besides success, so it could take an unlimited amount of effort. I think unilateral stopping conditions are one of the key issues for a good debate method: they need to exist (to prevent abuse by unreasonable debate partners who don’t agree to end the debate) but be designed to prevent abuse (by e.g. people quitting debates when they’re losing and quitting in a way designed to obscure what happened). I developed impasse chains as a debate stopping condition which takes a fairly small, bounded amount of effort to end debates unilaterally but adds significant transparency about how and why the debate is ending. Impasse chains only work when the further debate is providing low value, but that’s the only problematic case – otherwise you can either continue or say you want to stop and give a reason (which the other person will consent to, or if they don’t and you think they’re being unreasonable, now you’ve got an impasse to raise). Impasse chains are in the ballpark of “to end a debate, you must either mutually agree or else go through some required post-mortem steps” plus they enable multiple chances at problem solving to fix whatever is broken about the debate. This strikes me as one of the most obvious genres of debate stopping conditions to try, yet I think my proposal is novel. I think that says something really important about the world and its hostility to rational debate methodology. (I don’t think it’s mere disinterest or ignorance; if it were, the moment I suggested rational debate methods and said why they were important a lot of people would become excited and want to pursue the matter; but that hasn’t happened.)

Another important and related issue is: how can you write, or design and organize a community or movement, so that it’s easier for people to learn and debate with your ideas? And also easier to avoid low value or repetitive discussion. An example design is an FAQ to help reduce repetition. A less typical design would be creating (and sharing and keeping updated) a debate tree document organizing and summarizing the key arguments in the entire field you care about.
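Here's a minimal sketch of what a debate tree could look like as a data structure (my own illustration; the example claims are placeholders, not positions anyone in particular holds). Each node is a claim and its children are responses to it, so it's easy to see which arguments currently have no answer.

```python
# Minimal debate tree sketch: claims with nested responses (illustrative only).

from dataclasses import dataclass, field

@dataclass
class Node:
    claim: str
    responses: list["Node"] = field(default_factory=list)

    def unanswered(self) -> list[str]:
        """Return the leaf claims: arguments that currently have no response."""
        if not self.responses:
            return [self.claim]
        return [c for r in self.responses for c in r.unanswered()]

tree = Node("The community should adopt a written rational debate methodology", [
    Node("Too much work for too little benefit", [
        Node("Cost/benefit can't be judged before a specific method is written down"),
    ]),
    Node("Informal discussion already works well enough"),
])

print(tree.unanswered())
# ["Cost/benefit can't be judged before a specific method is written down",
#  'Informal discussion already works well enough']
```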


Elliot Temple | Permalink | Messages (0)

Is EA Rational?

I quit the Effective Altruism forum due to a new rule requiring all new posts and comments be basically put in the public domain without copyright, so anyone could e.g. sell a book of my posts without my consent (they’d just have to give attribution). More info. I had a bunch of draft posts, so I’m posting some of them here with minimal editing. In general, I’m not going to submit them as link posts at EA myself. If you think they should be shared with EA as link posts, please do it yourself. I’m happy for other people to share links to my work at EA or on social media. Please share stuff in whatever ways you think are good to do.


I haven’t studied EA much. There is plenty more about EA that I could read. But I don’t want to get involved much unless EA is rational.

By “rational”, I mean capable of (and good at) correcting errors. Rationality, in my mind, enables progress and improvement instead of stasis, being set in your ways, not listening to suggestions, etc. So a key aspect of rationality is being open to criticism, and having ways that changes will actually be made based on correct criticisms.

Is EA rational? In other words, if I study EA and find some errors, and then I write down those errors, and I’m right, will EA then make changes to fix those errors? I am doubtful.

That definitely could happen. EA does make changes and improvements sometimes. Is that not a demonstration of EA’s rationality? Partially, yes, sure. Which is why I’m saying this to EA instead of some other group. I think EA is better at that than most other groups.

But I think EA’s ability to listen to criticism and make changes is related to social status, bias, tribalism, and popularity. If I share a correct criticism and I’m perceived as high status, and I have friends in high places, and the criticism fits people’s biases, and the criticism makes me seem in-group not out-group, and the criticism gains popularity (gets shared and upvoted a bunch, gets talked about by many people), then I would have high confidence that EA would make changes. If all those factors are present, then EA is reliably willing to consider criticism and make changes.

If some of those factors are present, then it’s less reliable but EA might listen to criticism. If none of those factors are present, then I’m doubtful the criticism will be impactful. I don’t want to study EA to find flaws and also make friends with the right people, change my writing style to be a better culture fit with EA, form habits of acting in higher status ways, and focus on sharing criticisms that fit some pre-existing biases or tribal divisions.

What can be done as an alternative to listening to criticism based on popularity, status, culture-fit, biases, tribes, etc? One option is organized debate with written methodologies that make some guarantees. EA doesn’t do that. Does it do something else?

One thing I know EA does, which is much better than nothing (and is better than what many other groups offer), is informal, disorganized debate following unwritten methodologies that vary somewhat by the individuals you’re speaking with. I don’t consider this option motivating enough to make it worth seriously researching and critically scrutinizing EA.

I could talk to EA people who have read essays about rationality and who are trying to be rational – individually, with no accountability, transparency, or particular responsibilities. I think that’s not good enough and makes it way too easy for social status hierarchies to matter. If EA offered more organized ways of sharing and debating criticism, with formal rules, then people would have to follow the rules and therefore not act based on status. Things like rules, flowcharted methods to follow, or step-by-step actions to take can all help fight against people’s tendency to act based on status and other biases.

It’s good for informal options to exist but they rely on basically “everyone just personally tries to be rational” which I don’t think is good enough. So more formal options, with pro-rationality (and anti-status, anti-bias, etc.) design features should exist too.

The most common objection to such things is that they’re too much work. On an individual level, it’s unclear to me that following a written methodology is more work than following an unwritten methodology. Whatever you do, you have some sort of methods or policies. Also, I don’t really think you can evaluate how much work a methodology is (and how much benefit it offers, since the cost/benefit ratio is what matters) without actually developing that methodology and writing it down first. I think rational debate methodologies which try to reach conclusions about incoming criticisms are broadly untested empirically, so people shouldn’t assume they’d take too long or be ineffective when they can’t point to any examples of them being tried with that result. And EA has plenty of resources to e.g. task one full-time worker with engaging with community criticism and keeping organized documents that attempt to specify what arguments against EA exist, what counter-arguments there are, and otherwise map out the entire relevant debate as it exists today. Putting in less effort than that looks to me like not trying because the results are unwanted (some people prefer status hierarchies and irrationality, even if they say they like rationality) rather than because the results are highly prized but too expensive. There have been no research programs afaik to try to get these kinds of rational debate results more cheaply.

Also, suppose I research EA, come up with some criticisms, and I’m wrong. I informally share my criticisms on the forum and get some unsatisfactory, incomplete answers. I still think I’m right and I have no way to get my error corrected. The lack of access to debate symmetrically prevents whoever is wrong from learning better, whether that’s EA or me. So the outcome is bad either way. Either I’ve come up with a correct criticism but EA won’t change; or I’ve come up with an incorrect criticism but EA won’t explain to me why it’s incorrect in a way that’s adequate for me to change. Blocking conclusive rational debate blocks error correction regardless of which side is right. Should EA really explain to all their incorrect critics why those critics are wrong? Yes! I think EA should create public explanations, in writing, of why all objections to EA (that anyone actually raises) are wrong. Would that take ~infinite work? No, because you can explain why some category of objection is wrong. You can respond to patterns in the objections instead of addressing every objection individually. This lets you re-use some answers. Doing this would persuade more people that EA is correct, make it much more rewarding to study EA and try to think critically about it, and turn up the minority of cases where EA lacks an adequate answer to a criticism. It would also expose EA’s good answers to review (people might suggest even better answers, or find that, although EA won the argument in some case, there is a weakness in EA’s argument and a better criticism of EA could be made).

In general, I think EA is more willing to listen to criticism that is based on a bunch of shared premises. The more you disagree with and question foundational premises, the less EA will listen and discuss. If you agree on a bunch of foundations then criticize some more detailed matters based on those foundations, then EA will listen more. This results in many critics having a reasonably good experience even though the system (or really lack of system) is IMO fundamentally broken/irrational.

I imagine EA people will broadly dislike and disagree with what I’ve said, in part because I’m challenging foundational issues rather than using shared premises to challenge other issues. I think a bunch of people trying to study rationality and do their best at it is … a lot better than not doing that. But I think it’s inadequate compared to having policies, methodologies, flowcharts, checklists, rules, written guarantees, transparency, accountability, etc., to enable rationality. If you don’t walk people step by step through what to do, you’re going to get a lot of social status behaviors and biases from people who are trying to be rational. Also, if EA has something else to solve the same problems I’m concerned about in a different way than how I suggest approaching them, what is the alternative solution?

Why does writing down step by step what to do help if the people writing the steps have biases and irrationalities of their own? Won’t the steps be flawed? Sure they may be, but putting them in writing allows critical analysis of the steps from many people. Improving the steps can be a group effort. Whereas many people separately following their own separate unwritten steps is hard to improve.

I do agree with the basic idea of EA: using reason and evidence to optimize charity. I agree that charity should be approached with a scientific and rational mindset rather than with whims, arbitrariness, social signaling or whatever else. I agree that cost/benefit ratios and math matter more than feelings about charity. But unfortunately I don’t think that’s enough agreement to get a positive response when I then challenge EA on what rationality is and how to pursue it. I think critics get much better responses from EA if they have major pre-existing agreement with EA about what rationality is and how to do it, but optimizing rationality itself is crucial to EA’s mission.

In other words, I think EA is optimized for optimizing which charitable interventions are good. It’s pretty good at discussing and changing its mind about cost/benefit ratios of charity options (though questioning the premises themselves behind some charity approaches is less welcome). But EA is not similarly good at discussing and changing its mind about how to discuss, change its mind, and be rational. It’s better at applying rationality to charity topics than to epistemology.

Does this matter? Suppose I couldn’t talk to EA about rational debate itself, but could talk to EA about the costs and benefits of any particular charity project. Is that good enough? I don’t think so. Besides potentially disagreeing with the premises of some charity projects, I also have disagreements regarding how to do multi-factor decision making itself.



EA and Responding to Famous Authors

I quit the Effective Altruism forum due to a new rule requiring all new posts and comments be basically put in the public domain without copyright, so anyone could e.g. sell a book of my posts without my consent (they’d just have to give attribution). More info. I had a bunch of draft posts, so I’m posting some of them here with minimal editing. In general, I’m not going to submit them as link posts at EA myself. If you think they should be shared with EA as link posts, please do it yourself. I’m happy for other people to share links to my work at EA or on social media. Please share stuff in whatever ways you think are good to do.


I think EA has the resources to attempt to respond to every intellectual who has sold over 100,000 books in English that make arguments contradicting EA. EA could write rebuttals to all popular, well-known rival positions that are written in books. You could start with the authors who have sold over a million books.

There are major downsides to using popularity as your only criterion for what to respond to. It’s important to also have ways that you respond to unpopular criticism. But responding to influential criticism makes sense because people know about it and just ignoring it makes it look like you don’t care to consider other ideas or have no answers.

Answering the arguments of popular authors could be one project, of 10+, in which EA attempts to engage with alternative ideas and argue its case.

EA claims to be committed to rationality but it seems more interested in getting a bunch of charity projects underway and/or funded better ASAP instead of taking the time to first do extensive rational analysis to figure out the right ideas to guide charity.

I understand not wanting to get caught up in doing planning forever and having decision paralysis, but where is the reasonably complete planning and debating that seems adequate to get started based on?

For example, it seems unreasonable to me to start an altruist movement without addressing Ayn Rand’s criticisms of altruism. Where are the serious essays summarizing, analyzing and refuting her arguments about altruism? She sold many millions of books. Where are the debates with anyone from ARI, or the invitations for any online Objectivists who are interested to come debate with EA? Objectivism has a lot of fans today who are interested in rationality and debate (or at least claim to be), so ignoring them instead of writing anything that could change their minds seems bad. And encouraging discussion with them, instead of discouraging it, would make sense and be more rational. (I’m aware that they aren’t doing better. They aren’t asking EAs to come debate them, hosting more rational debates, writing articles refuting EA, etc. IMO both groups are not doing very well and there’s big room for improvement. I’ve tried to talk to Objectivists to get them to improve before and it didn’t work. Overall, although I’m a big fan of Ayn Rand, I think Objectivist communities today are less open to critical discussion and dissent than EA is.)



Criticizing The Scout Mindset (including a misquote)

I quit the Effective Altruism forum due to a new rule requiring all new posts and comments be basically put in the public domain without copyright, so anyone could e.g. sell a book of my posts without my consent (they’d just have to give attribution). More info. I had a bunch of draft posts, so I’m posting some of them here with minimal editing. In general, I’m not going to submit them as link posts at EA myself. If you think they should be shared with EA as link posts, please do it yourself. I’m happy for other people to share links to my work at EA or on social media. Please share stuff in whatever ways you think are good to do.


These are quick notes, opinions and criticisms about the book The Scout Mindset by Julia Galef (which EA likes and promotes). I’m not going in depth, being very complete, or giving many quotes, because I don’t care much. I think it’s a bad book that isn’t worth spending more time on, and I don’t expect the author or her fans to listen to, engage with, value, appreciate or learn from criticism. If they were reasonable and wanted to interact, then I think this would be plenty to get the discussion/debate started, and I could give more quotes and details later if that would help make progress in our conversation.

The book is pretty shallow.

Galef repeatedly admits she’s not very rational, sometimes openly and sometimes by accident. The open admissions alone imply that the techniques in the book are inadequate.

She mentions that while writing the book she gathered a bunch of studies that agree with her but was too biased to check their quality. She figured out during writing that she should check them and she found that lots were bad. If you don’t already know that kinda stuff (that most studies like that are bad, that studies should be checked instead of just trusting the title/abstract, or that you should watch out for being biased), maybe you’re too new to be writing a book on rationality?

The book is written so that it’s easy to read it, think you’re already pretty good, and not change. Or someone could improve a little.

The book has nothing that I recognized as substantive original research or thinking. Does she have any ideas of her own?

She uses biased examples, e.g. Musk, Bezos and Susan Blackmore are all used as positive examples. In each case, there are many negative things one could say about them, but she only says positive things about them which fit her narrative. She never tries to consider alternative views about them or explain any examples that don’t easily fit her narrative. Counter-examples or apparent counter-examples are simply left out of the book. Another potential counter-example is Steve Jobs, who is a better and more productive person than any of the people used as examples in her book, yet he has a reputation rather contrary to the scout mindset. That’s the kind of challenging apparent/potential counter-example that she could have engaged with but didn’t.

She uses an example of a Twitter thread where someone thought email greetings revealed sexism, and she (the tweet author) got cheered for sharing this complaint. Then she checked her data and found that her claim was factually wrong. She retracted. Great? Hold on. Let’s analyze a little more. Are there any other explanations? Even if the original factual claims were true, would sexism necessarily follow? Why not try to think about other narratives? For example, maybe men are less status oriented or less compliant with social norms, so that is why they are less inclined to use fancier titles when addressing her. It doesn’t have to be sexism. If you want to blame sexism, you should look at how they treat men, not just at how they treat one woman. Another potential explanation is that men dislike you individually and don’t treat other women the same way, which could be for some reason other than sexism. E.g. maybe it’s because you’re biased against men but not biased against women, so men pick up on that and respect you less. Galef never thinks anything through in depth and doesn’t consider additional nuances like these.

For Blackmore, the narrative is that anyone can go wrong and rationality is about correcting your mistakes. (Another example is someone who fell for a multi-level marketing scheme before realizing the error.) Blackmore had some experience and then started believing in the paranormal and then did science experiments to test that stuff, and none of it worked, and she changed her mind. Good story? Hold on. Let’s think critically. Did Blackmore do any new experiments? Were the old experiments refuting the paranormal inadequate or incomplete in some way? Did she review them and critique them? The story mentioned none of this. So why did she do redundant experiments and waste resources to gather the same evidence that already existed? And why did it change her mind when it had no actual new information? Because she was biased to respect the results of her own experiments but not prior experiments done by other people (that she pointed out no flaws in)? This fits the pro-evidence, pro-science-experiments bias of LW/Galef. They’re too eager to test things without considering that, often, we already have plenty of evidence and we just need to examine and debate it better. Blackmore didn’t need any new evidence to change her mind, and getting funding to do experiments like that speaks to her privilege. Galef brings up multiple examples of privilege without showing any awareness of it; she just seems to want to suck up to high status people, and not think critically about their flaws, rather than to actually consider their privileges. Not only was Blackmore able to fund bad experiments; afterwards she was able to change her mind and continue her career. Why did she get more opportunities after doing such a bad job earlier in her career? Yes, she improved (but how much, really?). But other people didn’t suck in the first place, then also improved, and never got such great opportunities.

Possibly all the examples in the book of changing one’s mind were things that Galef’s social circle could agree with rather than be challenged by. They all changed their minds to agree with Galef more, not less. E.g. an example was used of becoming more convinced by global warming which, in passing, smeared some other people on the climate change skeptic side as being really biased, dishonest, etc. (True of some of them, probably, but not a good thing to throw around as an in-passing smear based on hearsay. And true of people on the opposite side of the debate too, so it’s biased to only say it about the side you disagree with, to undermine and discredit them in passing, while having the deniability of saying it was just an example of something else about rationality.) There was a pro-choicer who became less dogmatic but remained pro-choice, and I think Galef’s social circle also is pro-choice but trying not to be dogmatic about it. There was also a pro-vaccine person who was careful and strategic about bringing up the subject with his anti-vax wife but didn’t reconsider his own views at all; still, he and the author did display some understanding of the other side’s point of view and why some superficial pro-vax arguments won’t work. So the narrative is that if you understand the point of view of the people who are wrong, then you can persuade them better. But (implied) if you have center-left views typical of EA and LW people, then you won’t have to change your mind much since you’re mostly right.

Galef’s Misquote

Here’s a slightly edited version of my post on my CF forum about a misquote in the book. I expect the book has other misquotes (and factual errors, bad cites, etc.) but I didn’t look for them.

The Scout Mindset by Julia Galef quotes a blog post:

“Well, that’s too bad, because I do think it was morally wrong.”[14]

But the words in the sentence are different in the original post:

Well that’s just too bad, because I do think it was morally wrong of me to publish that list.

She left out the “just” and also cut off the quote early, which made it look like the end of a sentence when it wasn’t. Also, a previous quote in the book from the same post changes the italics (the italics do match in this one).

The book also summarizes events related to this blog post, and the story told doesn’t match reality (as I see it by looking at the actual posts). Also I guess he didn’t like the attention from the book because he took his whole blog down and the link in the book’s footnote is dead. The book says they’re engaged so maybe he mistakenly thought he would like the attention and had a say in whether to be included? Hopefully… Also the engagement may explain the biased summary of the story that she gave in her book about not being biased.

She also wrote about the same events:

He even published a list titled “Why It’s Plausible I’m Wrong,”

This is misleading because he didn’t put up a post with that title. It’s a section title within a post and she didn’t give a cite so it’s hard to find. Also her capitalization differs from the original which said “Why it’s plausible I’m wrong”. The capitalization change is relevant to making it look more like a title when it isn’t.

BTW I checked archives from other dates. The most recent working one doesn’t have any edits to this wording nor does the oldest version.

What is going on? This book is from a major publisher and there’s no apparent benefit to misquoting it in this way. She didn’t twist his words for some agenda; she just changed them enough that she’s clearly doing something wrong but with no apparent motive (besides maybe minor editing to make the quote sound more polished?). And it’s a blog post; wouldn’t she use copy/paste to get the quote? Did she have the blog post open in her browser and go back and forth between it and her manuscript in order to type in the quote by hand!? That would be a bizarre process. Or does she or someone else change quotes during editing passes in the same way they’d edit non-quotes? Do they just run Grammarly or similar and see snippets from the book and edit them without reading the whole paragraph and realizing they’re within quote marks?

My Email to Julia Galef

Misquote in Scout Mindset:

“Well, that’s too bad, because I do think it was morally wrong.”[14]

But the original sentence was actually:

Well that’s just too bad, because I do think it was morally wrong of me to publish that list.

The largest change is deleting the word "just".

I wanted to let you know about the error and also ask if you could tell me what sort of writing or editing process is capable of producing that error? I've seen similar errors in other books and would really appreciate if I could understand what the cause is. I know one cause is carelessness when typing in a quote from paper but this is from a blog post and was presumably copy/pasted.

Galef did not respond to this email.



EA Should Raise Its Standards

I quit the Effective Altruism forum due to a new rule requiring all new posts and comments be basically put in the public domain without copyright, so anyone could e.g. sell a book of my posts without my consent (they’d just have to give attribution). More info. I had a bunch of draft posts, so I’m posting some of them here with minimal editing. In general, I’m not going to submit them as link posts at EA myself. If you think they should be shared with EA as link posts, please do it yourself. I’m happy for other people to share links to my work at EA or on social media. Please share stuff in whatever ways you think are good to do.


I think EA could be over 50% more effective by raising its standards. Community norms should care more about errors and about using explicit reasoning and methods.

For a small example, quotes should be exact. There should be a social norm that says misquotes are unacceptable. You can’t just change the words (without brackets), put them in quote marks, and publish them. That’s not OK. I believe this norm doesn’t currently exist and there would be significant resistance to it. Many people would think it’s not a big deal, and that it’s a pedantic or autistic demand to be so literal with quotes. I think this is a way that people downplay and accept errors, which contributes to lowered effectiveness.

There are similar issues with many sorts of logical, mathematical, grammatical, factual and other errors where a fairly clear and objective “correct answer” can be determined, which should be uncontroversial, and yet people don’t care about, or take seriously, the importance of getting it right. Errors should be corrected. Retractions should be issued. Post-mortems should be performed. What process allowed the error to happen? What changes could be made to prevent similar errors from happening in the future?

It’s fine for beginners to make mistakes, but thought leaders in the community should be held to higher standards, and the higher standards should be an aspirational ideal that the beginners want to achieve, rather than something that’s seen as unnecessary, bad or too much work. It’s possible to avoid misquotes and logical errors without it being a major burden; if someone finds it a large burden, that means they need to practice more until they improve their intuition and subconscious mind. Getting things right in these ways should be easy and something you can do while tired, distracted, autopiloting, etc.

Fixes like these won’t make EA far more effective by themselves. They will set the stage to enable more advanced or complex improvements. It’s very hard to make more important improvements while frequently making small errors. Getting the basics right enables working more effectively on more advanced issues.

One of the main more advanced issues is rational debate.

Another is not trusting yourself. Don’t bet anything on your integrity or lack of bias when you can avoid it. There should be a strong norm against doing anything that would fail if you have low integrity or bias. If you can find any alternative, which doesn’t rely on your rationality, do that instead. Bias is common. Learning to be better at not fooling yourself is great, but you’ll probably screw it up a lot. If you can approach things so that you don’t have the opportunity to fool yourself, that’s better. There should be strong norms for things like transparency and following flowcharted methods and rules that dramatically reduce scope for bias. This can be applied to debate as well as to other things. And getting debate right enables criticism when people don’t live up to norms; without getting debate right norms have to be enforced in significant part with social pressure which compromises the rationality of the community and prevents it from clearly seizing the rationality high ground in debates with other groups.



Effective Altruism Related Articles

I wanted to make it easier to find all my Effective Altruism (EA) related articles. I made an EA blog category.

Below I link to more EA-related stuff which isn't included in the category list.

Critical Fallibilism articles:

I also posted copies of some of my EA comments/replies in this topic on my forum.

You can look through my posts and comments on the EA site via my EA user profile.

I continued a discussion with an Effective Altruist at my forum. I stopped using the EA forum because they changed their rules to require giving up your property rights for anything you post there (so e.g. anyone can sell your writing without your permission and without paying you).

I also made videos related to EA:



Effective Altruism Is Mostly Wrong About Its Causes

EA has gotten a few billion dollars to go towards its favored charitable causes. What are they? Here are some top cause areas:

  • AI alignment
  • animal welfare and veganism
  • disease and physical health
  • mental health
  • poverty
  • environmentalism
  • left wing politics

This is not a complete list. They care about some other stuff such as Bayesian epistemology. They also have charities related to helping the EA movement itself and they put a bunch of resources into evaluating other charities.

These causes are mostly bad and harmful. Overall, I estimate EA is doing more harm than good. For their billions of dollars, despite their cost-effectiveness oriented mission, they’ve created negative value.

EA’s Causes

Meta charities are charities whose job is to evaluate other charities then pass on money to the best ones. That has upsides and downsides. One of the downsides is it makes it less obvious what charities the EA movement funds.

Giving What We Can has a list of recommended organizations to donate to. The first two charities I see listed are a “Top Charities Fund” and an “All Grants Fund”, which are generic meta charities. That’s basically just saying “Give us money and trust us to use it for good things on any topic we choose.”

The second group of charities I see is related to animal welfare, but all three are meta charities which fund other animal welfare charities. So it’s unclear what the money is actually for.

Scrolling down, next I see three meta charities related to longtermism. I assume these mostly donate to AI alignment, but I don’t know for sure, and they do care about some other issues like pandemics. You might expect pandemics to be categorized under health not longtermism/futurism, but one of the longtermism meta charity descriptions explicitly mentions pandemics.

Scrolling again, I see a climate change meta charity, a global catastrophe risk meta charity (vague), and an EA Infrastructure charity that tries to help the EA cause.

Scrolling again, I finally see some specific charities. There are two about malaria, then one about childhood vaccinations in Nigeria. Scrolling horizontally, they have a charity about treating depression in Africa alongside iodine and vitamin A supplements and deworming. There are two about happiness and one about giving poor people some cash while also spending money advocating for Universal Basic Income (a left wing political cause). It’s weird to me how these very different sorts of causes get mixed together: physical health, mental health and political activism.

Scrolling down, next is animal welfare. After that comes some pandemic stuff and one about getting people to switch careers to work in these charities.

I’m sure there’s better information somewhere else about what EA actually funds and how much money goes to what. But I’m just going to talk about how good or bad some of these causes are.

Cause Evaluations

AI Alignment

This cause has various philosophical premises. If they’re wrong, the cause is wrong. There’s no real way to debate them, and it’s hard to even get clarifications on what their premises are, and there’s significant variation in beliefs between different advocates which makes it harder to have any specific target to criticize.

Their premises are things like Bayesian epistemology, induction, and there being no objective morality. How trying to permanently (mind) control the goals of AIs differs from trying to enslave them is unclear. They’re worried about a war with the AIs but they’re the aggressors looking to disallow AIs from having regular freedoms and human rights.

They’re basically betting everything on Popper and Deutsch being wrong, but they don’t care to read and critique Popper and Deutsch. They don’t address Popper’s refutation of induction or Deutsch on universal intelligence. If humans already have universal intelligence, then in short AIs will at best be like us, not be something super different and way more powerful.

Animal Welfare

The philosophical premises are wrong. To know whether animals suffer you need to understand things about intelligence and ideas. That’s based on epistemology. The whole movement lacks any serious interest in epistemology. Or in simple terms, they don’t have reasonable arguments to differentiate a mouse from a roomba (with a bit better software). A roomba uncontroversially can’t suffer, and it still wouldn’t be able to suffer if you programmed it a bit better, gave it legs and feet instead of wheels, gave it a mouth and stomach instead of a battery recharging station, etc.

The tribalism and political activism are wrong too. Pressuring corporations and putting out propaganda is the wrong approach. If you want to make the world better, educate people about better principles and ways of thinking. People need rationality and reasonable political philosophy. Skipping those steps and jumping straight into specific causes, without worrying about the premises underlying people’s reasoning and motivations, is bad. It encourages people to do unprincipled and incorrect stuff.

Basically, for all sorts of causes that have anything to do with politics, there are tons of people working to do X, and also tons of people working to stop X or to get something that contradicts X. The result is a lot of people working against each other. If you want to make a better world, you have to stop participating in those tribalist fights and start resolving those conflicts. That requires connecting what you do to principles people can agree on, and teaching people better principles and reasoning skills (which requires first learning those things yourself, and also being open to debate and criticism about them).

See also my state of the animal rights debate tree diagram. I tried to find any vegans or other advocates who had any answers to it or relevant literature to add anything to it, but basically couldn’t get any useful answers anywhere.

Let’s also try to think about the bigger picture in a few ways. Why are factory farms so popular? What is causing that?

Do people not have enough money to afford higher quality food? If so, what is causing that? Maybe lack of capitalism or lack of socialism. You have to actually think about political philosophy to have reasonable opinions about this stuff and reach conclusions. You shouldn’t be taking action before that. I don’t think there exists a charity that cares about animal welfare and would use anti-poverty work as the method to achieve it. That’s too indirect for people or something, so they should get better at reasoning…

A ton of Americans do have money to afford some better food. Is the problem lack of awareness of how bad factory farms are, the health concerns they create for humans, or lack of knowledge of which brands or meats are using factory farms? Would raising awareness help a lot? I saw something claiming that in a survey over 50% of Americans said they thought their meat was from animals that were treated pretty well, but actually like 99% of US meat is from factory farms, so a ton of people are mistaken. I find that plausible. Raising awareness is something some charities work on, but often in shrill, propagandistic, aggressive, alienating or tribalist ways, rather than providing useful, reliable, unbiased, non-partisan information.

Maybe the biggest issue with factory farms in the US is laws and regulations (including subsidies) which were written largely by lobbyists for giant food corporations, and which are extremely hostile to their smaller competitors. That is plausible to me. How much animal welfare work is oriented towards this problem? I doubt it’s much since I’ve seen a decent amount of animal welfare stuff but never seen this mentioned. And what efforts are oriented towards planning and figuring out whether this is the right problem to work on, and coming up with a really good plan for how to make a change? So often people rush to try to change things without recognizing how hard, expensive and risky change is, and making damn sure they’ve thought everything through and the change will actually work as intended and the plan to cause the change will work as intended too.

Left Wing Politics

More broadly, any kind of left wing political activism is just fighting with the right instead of finding ways to promote social harmony and mutual benefit. It’s part of a class warfare mindset. The better way is in short classical liberalism, which neither the current left nor right knows much about. It’s in short about making a better society for everyone instead of fighting with each other. Trying to beat the rival political tribe is counter-productive. Approaches which transcend politics are needed.

Mental Health

Psychiatry is bad. It uses power to do things to people against their consent, and it’s manipulative, and its drugs are broadly poison. Here’s a summary from Thomas Szasz, author of The Myth of Mental Illness.

Environmentalism

As with many major political debates, both sides of the global warming debate are terrible and have no idea what they’re talking about. And there isn’t much thoughtful debate. I’ve been unable to find refutations of Richard Lindzen’s skeptical arguments related to water vapor. The “97% of scientists agree” thing is biased junk, and even if it were true it’s an appeal to authority not a rational argument. The weather is very hard to predict even in the short term, and a lot of people have made a lot of wrong predictions about long term warming or cooling. They often seem motivated by other agendas like deindustrialization, anti-technology attitudes or anti-capitalism, with global warming serving as an excuse. Some of what they say sounds a lot like “Let’s do trillions of dollars of economic harm taking steps that we claim are not good enough and won’t actually work.” There are fairly blatant biases in things like scientific research funding – science is being corrupted as young scientists are under pressure to reach certain conclusions.

There are various other issues including pollution, running out of fossil fuels, renewables, electric cars and sustainability. These are all the kinds of things where

  1. People disagree with you. You might be wrong. You might be on the wrong side. What you’re doing might be harmful.
  2. You spend time and resources fighting with people who are working against you.
  3. Most people involved have tribalist mindsets.
  4. Political activism is common.
  5. Rational, effective, productive debate, to actually reasonably resolve the disagreements about what should be done, is rare.

What’s needed is to figure out ways to actually rationally persuade people (not use propaganda on them) and reach more agreement about the right things to do, rather than responding to a controversy by putting resources into one side of it (while others put resources into the other side, and you kinda cancel each other out).

Physical Health

These are the EA causes I agree with the most. Childhood vaccinations, vitamin A, iodine and deworming sound good. Golden rice sounds like a good idea to me (not mentioned here but I’ve praised it before). I haven’t studied this stuff a ton. The anti-malaria charities concern me because I expect that they endorse anti-DDT propaganda.

I looked at New Incentives which gets Nigerian babies vaccinated (6 visits to a vaccination clinic finishing at 15 months old) for around $20 each by offering parents money to do it. I checked if they were involved with covid vaccine political fighting and they appear to have stayed out of that, so that’s good. I have a big problem with charities that have some good cause but then get distracted from it to do tribalist political activism and fight with some enemy political tribe. A charity that just sticks to doing something useful is better. So this one actually looks pretty good and cost effective based on brief research.

Pandemic prevention could be good but I’d be concerned about what methods charities are using. My main concern is they’d mostly do political activism and fight with opponents who disagree with them, rather than finding something actually productive and effective to do. Also pandemic prevention is dominated quite a lot by government policy, so it’s hard to stay out of politics. Just spending donations to stockpile some masks, vaccines and other supplies (because some governments don’t have enough) doesn’t sound like a very good approach, and that’d be more about mitigation than prevention anyway.

Even something like childhood vaccination in Nigeria has some concerns. Looking at it in isolation, sure, it makes some things better. It’s a local optimum. But what bigger picture does it fit into?

For example, why isn’t the Nigerian government handling this? Is it essentially a subsidy for the Nigerian government, which lets them buy more of something else, and underfund this because charities step in and help here? Could the availability of charity for some important stuff cause budgets to allocate money away from those important things?

Does giving these poor Nigerians this money let their government tax them at higher rates than it otherwise would, so some of the money is essentially going to the Nigerian government not to the people being directly helped? Might some of the money be stolen in other ways, e.g. by thugs in areas where there’s inadequate protection against violence? Might the money attract any thugs to the area? Might the thugs pressure some women to have more babies and get the money so that they can steal it? I don’t know. These are just some initial guesses about potential problems that I think are worth some consideration. If I actually studied the topic I’m sure I’d come up with some other concerns, as well as learning that some of my initial concerns are actually important problems while others aren’t.

Why are these Nigerians so poor that a few dollars makes a significant difference to them? What is causing that? Is it bad government policies? Is there a downside to mitigating the harm done by those policies, which helps them stay in place and not seem so bad? And could we do more good by teaching the world about classical liberalism or how to rationally evaluate and debate political principles? Could we do more good by improving the US so it can set a better example for other countries to follow?

Helping some poor Nigerians is a superficial (band aid) fix to some specific problems, which isn’t very effective in the big picture. It doesn’t solve the root causes/problems involved. It doesn’t even try to. It just gives some temporary relief to some people. And it has some downsides. But the downsides are much smaller compared to most of EA’s other causes, and the benefits are real and useful – they make some people’s lives clearly better – even if they’re just local optima.

Meta Charities

I think EA’s work on evaluating the effectiveness of charity interventions has some positive aspects, but a lot of it is focused on local optima, which can actually make it harmful overall even if some of the details are correct. Focusing attention and thinking on the wrong issues makes it harder for more important issues to get attention. If no one were doing any kind of planning, it would be easier to come along and say “Hey, let’s do some planning” and have anyone listen. If there’s already tons of planning of the wrong types, a pro-planning message is easier to ignore.

EA will look at how well charities work on their own terms, without thinking about the cause and effect logic of the full situation. I’ve gone over this a few times in other sections. Looking at cost per childhood vaccination is a local optimum. The bigger picture includes things like how it may subsidize a bad government or local thugs, or how it’s just a temporary short-term mitigation while there are bigger problems like bad economic systems that cause poverty. How beneficial is it really to fix one instance of a problem when there are systems in the world which keep creating that problem over and over? Dealing with those systems that keep causing the problems is more important. In simple terms, imagine a guy was going around breaking people’s legs, and you went around giving them painkillers… There is a local optimum of helping people who are in pain, but it’s much more important to deal with the underlying cause. From what I’ve seen, EA’s meta charity evaluation is broadly about local optima, not bigger-picture understanding of the causes of problems, so it often treats symptoms of a problem rather than the real problem. They will just measure how much pain relief an intervention provides and evaluate how good it is on that basis (unless they manage to notice a bigger picture problem, which they occasionally do, but they aren’t good at systematically finding those).

Also, they try to compare charities that do different kinds of things. So you have benefits in different dimensions and they try to compare them. They tend to do this, in short, by weighted factor summing, which fundamentally doesn’t work (it’s completely wrong, broken and impossible, and means there are hidden and generally biased thought processes responsible for the conclusions reached). As a quick example, one of the EA charities I saw was trying to develop meat alternatives. This approaches animal welfare in a very different way than, say, giving painkillers to animals on factory farms or doing political activist propaganda against the big corporations involved. So there’s no way to directly compare which is better in simple numerical terms. As much as people like summary numbers, people need to learn to think about concepts better.
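To make the objection concrete, here’s a minimal sketch of weighted factor summing with made-up numbers (the options, factors and weights are all illustrative, not real evaluations). The point is that the final ranking is determined by the chosen weights, which is where the hidden, often biased judgment calls actually happen:

```python
# Illustrative sketch of weighted factor summing with made-up numbers.
# Each option gets scores on incommensurable dimensions; a weighted sum
# produces a ranking, but the ranking depends entirely on the weights.

options = {
    "meat alternatives R&D":        {"animals_helped": 3, "certainty": 2, "cost_effectiveness": 4},
    "painkillers on factory farms": {"animals_helped": 6, "certainty": 7, "cost_effectiveness": 3},
    "activist pressure campaigns":  {"animals_helped": 5, "certainty": 3, "cost_effectiveness": 5},
}

def rank(weights):
    scores = {name: sum(weights[k] * v for k, v in factors.items())
              for name, factors in options.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Two different (equally arbitrary) weightings produce different "winners":
print(rank({"animals_helped": 1.0, "certainty": 1.0, "cost_effectiveness": 1.0}))
print(rank({"animals_helped": 0.2, "certainty": 0.2, "cost_effectiveness": 3.0}))
```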

Details

I could do more detailed research and argument about any of these causes, but it’s unrewarding because I don’t think EA will listen and seriously engage. That is, I think a lot of my ideas would not be acted on or refuted with arguments. So I’d still think I’m right, and be given no way to change my mind, and then people would keep doing things I consider counter-productive. Also, I already have gone into a lot more depth on some of these issues, and didn’t include it here because it’s not really the point.

Why do some people have a different view of EA and criticism, or different experiences with that? Why do some people feel more heard and influential? Two big reasons. First, you can social climb at EA then influence them. Second, compared to me, most people do criticism that’s simpler and more focused on local optima not foundations, fundamentals, root causes or questioning premises. (I actually try to give simple criticisms sometimes like “This is a misquote” or “This is factually false” but people will complain about that too. But I won’t get into that here.) People like criticism better when it doesn’t cross field boundaries and make them think about things they aren’t good at, don’t know much about, aren’t experienced at, or aren’t interested in. My criticisms tend to raise fundamental, important challenges and be multi-disciplinary instead of just staying within one field and not challenging its premises.

Conclusion

The broad issues are people who aren’t very rational or open to criticism and error correction, who then pursue causes which might be mistaken and harmful, and who don’t care much about rational persuasion or rational debate. People seem so willing to just join a tribe and fight opponents, and that is not the way to make the world better. Useful work almost all transcends those fights and stays out of them. And the most useful work, which will actually fix things in very cost-effective, lasting ways, is related to principles and better thinking. Help people think better and then the rest is so much easier.

There’s something really really bad about working against other people, who think they’re right, and you just spend your effort to try to counter-act their effort. Even if you’re right and they’re wrong, that’s so bad for cost effectiveness. Persuading them would be so much better. If you can’t figure out how to do that, why not? What are the causes that prevent rational persuasion? Do you not know enough? Are they broken in some way? If they are broken, why not help them instead of fighting with them? Why not be nice and sympathetic instead of viewing them as the enemies to be beaten by destructively overwhelming their resources with even more resources? I value things like social harmony and cooperation rather than adversarial interactions, and (as explained by classical liberalism) I don’t think there are inherent conflicts of interest between people that require (Marxist) class warfare or which disallow harmony and mutual benefit. People who are content to work against other people, in a fairly direct fight, generally seem pretty mean to me, which is rather contrary to the supposed kind, helping-others spirit of EA and charity.

EA’s approach to causes, as a whole, is a big bet on jumping into stuff without enough planning, without understanding the root causes of things, and without figuring out how to make the right changes. They should read e.g. Eli Goldratt on transition tree diagrams and how he approaches making changes within one company. If you want to make big changes affecting more people, you need much better planning than that, which EA doesn’t do or have. That encourages a short-term mindset of pursuing local optima, which might be counter-productive, without adequately considering that you might be wrong and in need of better planning.

People put so much work into causes while putting way too little into figuring out whether those causes are actually beneficial, and understanding the whole situation and what other actions or approaches might be more effective. EA talks a lot about effectiveness but they mostly mean optimizing cost/benefit ratios given a bunch of unquestioned premises, not looking at the bigger picture and figuring out the best approach with detailed cause-effect understanding and planning.

More posts related to EA.



Effective Altruism Hurts People Who Donate Too Much

I was having an extended discussion with CB from EA when the licensing rules were changed so I quit the EA forum. So I asked if he wanted to continue at my forum. He said yes and registered an account but got stuck before posting.

I clarified that he couldn’t post because he hadn’t paid the one-time $20 price of an account. I offered him a free account if the $20 would be a financial burden, but said if he could afford it then I request he pay because if he values conversing with me less than $20 then I don’t think it’s a good use of my time.

Despite (I think) wanting to talk with me more, and having already spent hours on it, he changed his mind over the one-time $20 price. He said:

I don't think I will pay $20 because all the money I earn beyond my basic needs is going to charities.

That makes EA sound somewhat like a cult which has brainwashed him. And I’ve heard of EA doing this to other people. Some highly involved and respected EA people have admitted to feeling guilty about buying any luxuries, such as a coffee, and have struggled to live normal lives. This has been considered a known problem with EA for many years, and they have no good plan to fix it; they are continuing to hurt people and take around the maximum amount of money you can get from someone, just like some cults do. Further, EA encourages people to change careers to do EA-related work; it tries to take over people’s entire lives, just like cults often do. EAs dating other EAs is common too, sometimes polyamorously (dating an EA makes EA a larger influence in your life, and weird sexual practices are common with cults).

I don’t recall ever accusing anything of being a cult before, and overall I don’t think EA is a cult. But I think EA crosses a line here and deserves to be compared to a cult. EA clearly has differences from a cult, but having these similarities with cults is harmful.

EA does not demand that you donate the maximum. They make sure to say it’s OK to donate at whatever level you’re comfortable with, or something along those lines. But they also bring up ideas about maximizing giving, comparing the utility of every different action you could take, and maximizing utility (or impact or effectiveness or good). They don’t have good ideas about where or how to draw a line that limits your giving, so I think they leave that up to individuals, many of whom won’t come up with good solutions themselves.

CB’s not poor, and he wants something, and the stakes are much higher than $20, but he can’t buy it because he feels that he has to give EA all his money. I think he already put hundreds of dollars of his time into the conversation, and I certainly did, and I think he planned to put hundreds more dollars of his time into it, but somehow $20 is a dealbreaker. He works in computing so his time could easily be worth over $100/hr.

I wonder if he considered that, instead of talking with me, he could have spent those hours volunteering at a soup kitchen. Or he could have spent those hours working and making more money to donate. He might need a second job or side gig or something to adjust how many hours he works, but he could do that. If he’s a programmer, he could make a phone or web app on the side and set his own schedule for that additional work. (What about burnout? Having intellectual conversations also takes up mental energy. So he had some to spare.)

Anyway, it’s very sad to see someone all twisted up like this. From what I can tell, he’s fairly young and naive, and doesn’t know much about money or economics.

Note/update: After I finished writing this article, before I posted it, CB claimed that he exaggerated about how much he donates. That partial retraction has not changed my mind about the general issues, although it makes his individual situation somewhat less bad and does address some specific points like whether he could buy a (cheap) book.

Investing In Yourself

Buying a conversation where he’d learn something could make CB wiser and more effective, which could lead to him earning more money, making better decisions about which charities to donate to, and other benefits.

I wonder if CB also doesn’t buy books because they aren’t part of his “basic needs”.

People should be encouraged to invest in themselves, not discouraged from it. EA is harming intellectual progress by handicapping a bunch of relatively smart, energetic young people so they don’t use financial resources to support their own personal progress and development.

This one thing – taking a bunch of young people who are interested in ideas and making it harder for them to develop into great thinkers – may do a huge amount of harm. Imagine if Karl Popper or Richard Feynman had donated so much money he couldn’t buy any books. Or pick whoever you think is important. What if the rich people several hundred years ago had all donated their money instead of hiring tutors to teach their kids – could that have stopped the enlightenment? (Note how that would have been doubly bad. It’d prevent some of their kids from turning into productive scientists and intellectuals, and it’d also take away gainful employment from the tutors, who were often scientists or other intellectuals without much money to fund their life or work.)

On a related note, basically none of EA’s favored charities are about finding the smartest or most rational people and helping them. But helping some of the best people could easily do more good than helping some of the poorest people. If you help a poor person have a happier, healthier life, that does some good. If you help a smart, middle-class American kid become a great thinker who writes a few really effective self-help books, his books could improve the lives of millions of people.

Admittedly, it’s hard to figure out how to help that kid. But people could at least try to work on that problem, brainstorm ideas, and critique their initial plans. There could be ongoing research to try to develop a good approach. But there isn’t much interest in that stuff.

The least they could do is leave that kid alone, rather than convince him to donate all his money above basic needs when he’s a young adult so he can’t afford books, online courses, and other resources that would be useful and enjoyable to him.

Also, at EA, I’ve been talking about criticism, debate and error correction. I’ve been trying to get them to consider their fallibility and the risk of being wrong about things, and do more about that. So, for example, I think EA is mistaken about many of its causes. CB could estimate a 1% chance that I have a point he could learn from, assume that would only affect his own future donations, and talking to me would still be a good deal, because he’ll donate more than $2000 in the future, so even multiplied by 1% it’s better than $20. So talking to me more would be cost-effective (in dollars, which I think is CB’s concern, even though time matters too). Not considering things like this, and not seeking to invest in risk reduction, is partly related to not investing in yourself and partly related to poor, irrational attitudes about fallibility.
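A quick version of that arithmetic, using the 1% chance and a $2,000 lower bound on his future donations as rough, illustrative numbers:

```python
# Rough expected-value arithmetic for the point above; the inputs are
# illustrative estimates, not measured values.
chance_i_have_a_useful_point = 0.01   # the hypothetical 1% estimate
future_donations_affected    = 2000   # low-end figure for his future giving
cost_of_forum_account        = 20

expected_benefit = chance_i_have_a_useful_point * future_donations_affected
print(expected_benefit)                            # 20.0
print(expected_benefit >= cost_of_forum_account)   # True: breaks even even at the low end
```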

Also, I do tons of work (philosophy research, writing, discussion and video creation) trying to make the world better, mostly for free. Isn’t investing in me a way to make the world better? If you pay me $20, why is that any worse than donating it to a charity? Some people literally donate money to me like a charity because they respect and value what I do. Similarly, some EA charities give grants to intellectuals to do work on topics such as rationality, so I could receive such a grant. Donating to a grant-making organization that gave me a grant would count as charity, but giving me money directly counts less, especially if you’re buying something from me (forum access). The marginal cost of forum access to me is $0, so this isn’t like buying a hand-made table from me, where I’d have to put in time and materials, leaving a profit margin of, say, 25%. My marginal profit margin on forum memberships is 100% because I’m going to keep running the forum whether or not CB joins. EA focuses people’s attention on charities, has an incorrectly negative view of trade, and biases people against noticing that buying from small creators generally helps make the world better even though it’s not “charity”.
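
As a rough sketch of that margin comparison: the price and cost numbers for the table below are made up to produce the 25% figure mentioned above, and the forum numbers just reflect the zero marginal cost.

    # Hypothetical numbers illustrating the marginal-margin comparison above.
    def marginal_margin(price, marginal_cost):
        # Fraction of the price that is profit on one additional sale.
        return (price - marginal_cost) / price

    print(marginal_margin(20.0, 0.0))     # forum membership: 1.0 (100%), no added cost per member
    print(marginal_margin(200.0, 150.0))  # hand-made table: 0.25 (25%), time and materials per unit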

What CB Donates To

Are CB’s donations doing good?

For around $20, he could pay for six visits to a vaccination clinic for a baby in rural northern Nigeria. It can be half a day of travel to reach a clinic, so paying people a few dollars makes a meaningful difference to whether they make the trip.

I wonder which vaccinations are actually important for people living in small, isolated communities like that. Some vaccinations seem much more relevant in a city or if you come in contact with more people. How many of them will ever visit a big city in their life? I don’t know. Also even if vaccinations provide significant value to them, they’re really poor, so maybe something else would improve their lives more.

I looked through charities that EA recommends, and that vaccination charity looked to me like one of the best options. Plus, I read a bit about it, unlike some of the other, more promising ones, such as a charity that gives people Vitamin A. Some charities get distracted by political activism, so I checked whether this one was taking a political side about the covid vaccine; it didn’t appear to be, which is nice to see. I think finding charities that stay out of politics is one of the better selection methods that people could and should use. EA cares a lot about evaluating and recommending charities, but I’m not aware of it using non-partisanship as a criterion. EA itself is pretty political.

I’m doubtful that CB donates to that kind of cause, which provides fairly concrete health benefits for poor people in distant countries. Based on our discussions and his profile, I think his top cause is animal welfare. He may also donate to left-wing energy causes (like opposing fossil fuels) and possibly AI Alignment. I think those are terrible causes where his donations would likely do more harm than good. I’m not going to talk about AI Alignment here; it isn’t very political, and its problems are more about bad epistemology and moral philosophy (plus a lack of willingness to debate with critics).

Animal welfare and anti-fossil-fuel causes are left-wing political activism. Rather than staying out of politics, those causes get involved in politics on purpose. (Not every single charity in those spaces is political, just most of them.)

Let me explain it using a different issue as an example where there’s tons of visible political propaganda coming from both sides. The US pro-life right puts out lots of propaganda, and they recently had a major victory getting Roe v. Wade overturned. Now they’re changing some state laws to hurt people, particularly women. Meanwhile, the pro-choice left also puts out propaganda. To some extent, the propaganda from the two sides cancels out.

Imagine a pro-choice charity that said, “Next year, the pro-lifers are expected to spend $10,000,000,000 on propaganda. We must counter them with truth. Please donate to us or our allies, because we need $10 billion a year just to break even and cancel out what they’re doing. If we can get $15B/yr, we can start winning.”

Imagine that works. They get $15B and outspend the pro-lifers who only spend $10B. The extra $5B helps shift public perception to be more pro-choice. Suppose pro-choice is the correct view and getting people to believe it is actually good. We’ll just ignore the risk of being on the wrong side. (Disclosure: I’m pro-abortion.)

Then there’s $25B being spent in total, and $20B is basically incinerated, and then $5B makes the world better. That is really bad. 80% of the money isn’t doing any good. This is super inefficient. In general, the best case scenario when donating to tribalist political activism looks kind of like this.
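
Here’s the same arithmetic written out, using the hypothetical spending figures from the example:

    # Worked version of the propaganda-spending example (amounts in billions of dollars).
    pro_life_spend = 10.0
    pro_choice_spend = 15.0

    total_spend = pro_life_spend + pro_choice_spend            # 25.0
    cancelled_out = 2 * min(pro_life_spend, pro_choice_spend)  # 20.0 -- the two sides offset each other
    net_effect = pro_choice_spend - pro_life_spend             # 5.0 -- the only spending that shifts opinion

    print(cancelled_out / total_spend)  # 0.8 -- 80% of the money does no good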

If you want to be more effective, you have to be more non-partisan, more focused on rationality, and stay out of propaganda wars.

In simplified terms, pro-choice activism is more right than wrong, whereas I fear CB is donating to activism which is more wrong than right.

Saving Money and Capital Accumulation

I fear that spending only on basic needs, and donating the rest, means CB isn’t saving (enough) money.

If you don’t save money, you may end up being a burden on society later. You may need to receive help from the government or from charities. By donating money that should be saved, you risk later becoming a drain on resources because you don’t have enough to take care of yourself.

CB’s kids may have to take out student loans, and end up in a bunch of debt, because CB donated a bunch of money instead of putting it in a college fund for them.

CB may end up disabled. He may get fired and struggle to find a new job, perhaps through no fault of his own. Jobs could get harder to come by due to recession, natural disaster, or many other problems. He shouldn’t treat his expected future income as reliable. Plus, he says he wants to stop working in computing and switch to an EA-related job. That probably means taking a significant pay cut. He should plan ahead, and save money now while he has higher income, to help enable him to take a lower paying job later if he wants to. As people get older, their expenses generally go up, and their income generally goes up too. If he wants to take a pay cut when he’s older, instead of having a higher income to deal with higher expenses, that could be a major problem, especially if he didn’t save money now to deal with it.

Does saving money waste it? No. Saving means refraining from consumption. If you want to waste your money, buy frivolous stuff. If you work and save, you’re contributing to society. You provide work that helps others, and by saving you don’t ask for anything in return (now – but you can ask for it later when you spend your money).

Saving isn’t like locking up a bunch of machine tools in a vault so they don’t benefit anyone. People save money, not tools or food. Money is a medium of exchange. As long as there is enough money in circulation, money accomplishes its purpose. There’s basically no harm in keeping some cash in a drawer. Today, keeping money in a bank is just a number in a computer, so it doesn’t even take physical cash out of circulation.

Money basically represents a debt where you did something to benefit others, and now you’re owed something equally valuable in return from others. When you save money, you’re not asking for what you’re owed from others. You helped them for nothing in return. It’s a lot like charity.

Instead of saving cash, you can invest it. This is less like charity. You can get interest payments or the value of your investment can grow. In return for not spending your money now, you get (on average) more of it.

If you invest money instead of consuming it, then you contribute to capital accumulation. You invest in businesses, not luxuries. In other words, (as an approximation) you help pay for machines, tools and buildings, not for ice cream or massages. You invest in factories and production that can help make the world a better place (by repeatedly creating useful products), not short-term benefits.

The more capital we accumulate, the higher the productivity of labor. The higher the productivity of labor, the higher workers’ wages are and the more useful products get created. There are details, like negotiations over how much of the additional wealth goes to whom, but the overall point is that more capital accumulation means more wealth is produced and there’s more for everyone. Making the pie bigger matters more than fighting over who gets which slice, though the distribution of wealth does matter too.

When you donate to a charity which spends the money on activism or even vaccines, that is consumption. It’s using up wealth to accomplish something now. (Not entirely: a healthy worker is more productive, so childhood vaccines are to some extent an investment in human capital. But those charities aren’t even trying to evaluate the most effective way to invest in human capital to raise productivity. That isn’t their goal, so they’re probably not especially cost-effective at it.)

When you save money and invest it, you’re helping with capital accumulation – you’re helping build up total human wealth. When you consume money on charitable causes or luxuries, you’re reducing total human wealth.

Has EA ever evaluated how much good investing in an index fund does and compared it to any of their charities? I doubt it. (An index fund is a way to have your investment distributed among many different companies so you don’t have to guess which specific companies are good. It also doesn’t directly give money to the company because you buy stock from previous investors, but as a major simplification we can treat it like investing in the company.)

I’ve never seen anything from EA talking about how much good you’ll do if you don’t donate, and subtracting that from the good done by donating, to see how much additional good donating does (which could be below zero in some cases, or even on average – who knows without actually investigating). If you buy some fancy dinner and wine, you get some enjoyment, so that does some good. If you buy a self-help book or online course and invest in yourself, that does more good. If you buy a chair or frying pan, that’s also investing in yourself and your life, and does good. If you invest the money in a business, that does some good (on average). Or maybe you think so many big businesses are so bad that investing in them makes the world worse. I find that plausible, and it reminds me of my view that many non-profits are really bad. I have a negative view of most large companies, but overall I suspect that, on average, non-profits are worse than for-profit businesses.

EA has a bunch of anti-capitalists who don’t know much about economics. CB in particular is so ignorant of capitalism that he didn’t know it prohibits fraud. He doesn’t know, in a basic sense, what the definition of capitalism even is. And he also doesn’t know that he doesn’t know. He thought he knew, and he challenged me on that point, but he was wrong and ignorant.

These people need to read Ludwig von Mises, both for the economics and for the classical liberalism. They don’t understand harmony vs. conflicts of interest, and a lot of what they do, like political activism, is based on assuming there are conflicts of interest and that the goal should be to make your side win. They often don’t aim at win/win solutions, mutual benefit and social harmony. They don’t really understand peace, freedom, or how a free market is a proposal for creating social harmony and benefiting everyone – some of its mechanisms for doing that are superior to what charities try to do. So getting capitalism working better could easily do more good than what they’re doing now, but they wouldn’t even consider such a plan. (I’m aware that I haven’t explained capitalism enough here for people to learn about it from this article. It may make sense to people who already know some stuff. If you want to know more, read Mises, read Capitalism: A Treatise on Economics, and feel free to ask questions or seek debate at my forum. If you find this material difficult, you may first need to put effort into learning how to learn, getting better at reading, research and critical thinking, managing your schedule, managing your motivations and emotions, managing projects over time, etc.)

Conclusion

CB was more intellectually tolerant and friendly than most EAers. Most of them can’t stand to talk to someone like me who has a different perspective and some different philosophical premises. He could, so in that way he’s better than them. He has a ton of room for improvement at critical thinking, rigor and precision, but he could easily be categorized as smart.

So it’s sad to see EA hurt him in such a major way that really disrupts his life. Doing so much harm is pretty unusual – cults can do it but most things in society don’t. It’s ironic and sad that EA, which is about doing good, is harming him.

And if I were going to try to improve the world and help people, people like CB would be high on my list of who to help. I think helping some smart and intellectually tolerant people would do more good than childhood vaccines in Nigeria, let alone leftist (or rightist) political activism. The other person I know of who thought this way – about prioritizing helping some of the better people, especially smart young people – was Ayn Rand.

I am trying to help these people – that’s a major purpose of sharing writing – but it’s not my top priority in life. I’m not an altruist, although, like Rand and some classical liberals, I don’t believe there’s a conflict between the self and the other. Promoting altruism is fundamentally harmful because it spreads the idea that you must choose yourself or others, and that there’s a conflict there requiring winners and losers. I think Rand should have promoted harmony more and egoism or selfishness less, but at least her intellectual position was that everyone can win and benefit. EA doesn’t say that. It intentionally asks people like CB to sacrifice their own good to help others, which implies that there is a conflict between what’s good for CB and what’s good for others – and, basically, that social harmony is impossible because there’s no common good that’s good for everyone.

I’ll end by saying that EA pushes young people to rush to donate way too much money when they’re often quite ignorant and don’t even know much about which causes are actually good or bad. EA has some leaders who are more experienced and knowledgeable, but many of them have political and tribalist agendas, aren’t rational, and won’t debate or address criticism of their views. It’s totally understandable for a young person to have no idea what capitalism is and to be gullible in some ways, but it’s not OK for EA to take advantage of that gullibility, keep its membership ignorant of what capitalism is, and discourage its members from reading Mises or speaking with people like me who know about capitalism and classical liberalism. EA has leaders who know more about capitalism, and hate it, and won’t write reasonable arguments or debate the matter in an effective, truth-seeking way. They won’t point out how/why/where Mises was wrong; instead they guide young people not to read Mises and to donate all their money beyond basic needs to the causes that EA leaders like.

EDIT 2022-12-05: For context, see the section “Bad” EAs, caught in a misery trap in https://michaelnotebook.com/eanotes/ which had already alerted me that EA has issues with over-donating, guilt, difficulty justifying spending on yourself, etc., which affect a fair number of people.



Talking With Effective Altruism

The main reasons I tried to talk with EA are:

  • they have a discussion forum
  • they are explicitly interested in rationality
  • it's public
  • it's not tiny
  • they have a bunch of ideas written down

That's not much, but even that much is rare. Some groups just have a contact form or email address, not any public discussion place. Of the groups with some sort of public discussion, most now use social media (e.g. a Facebook group) or a chatroom rather than having a forum, so there’s no reasonable way to talk with them. My policy, based on both reasoning and also past experience, is that social media and chatrooms are so bad that I shouldn’t try to use them for serious discussions. They have the wrong design, incentives, expectations and culture for truth seeking. In other words, social media has been designed and optimized to appeal to irrational people. Irrational people are far more numerous, so the winners in a huge popularity contest would have to appeal to them. Forums were around many years before social media, but now are far less popular because they’re more rational.

I decided I was wrong about EA having a discussion forum. It's actually a hybrid between a subreddit and a forum. It's worse than a forum but better than a subreddit.

How good of a forum it is doesn’t really matter, because it’s now unusable due to a new rule saying that basically you must give up property rights for anything you post there. That is a very atypical forum rule; they're the ones being weird here, not me. One of the root causes of this error is their lack of understanding of and respect for property rights. Another cause is their lack of Paths Forward, debate policies, etc., which prevents error correction.

The difficulty of correcting their errors in general was the main hard part about talking with them. They aren't open to debate or criticism. They say that they are, and they are open to some types of criticism which don't question their premises too much. They'll sometimes debate criticisms about local optima they care about, but they don't like being told that they're focusing on local optima and should change their approach. Like most people, each of them tends to only want to talk about stuff he knows about, and they don't know much about their philosophical premises and have no reasonable way to deal with that (there are ways to delegate and specialize so you don't personally have to know everything, but they aren't doing that and don't seem to want to).

When I claim someone is focusing on local optima, it moves the discussion away from the topics they like thinking and talking about, and have experience and knowledge about. It moves the topic away from their current stuff (which I said is a local optimum) to other stuff (the bigger picture, global optima, alternatives to what they’re doing, comparisons between their thing and other things).

Multiple EA people openly, directly and clearly admitted to being bad at abstract or conceptual thinking. They seemed to think that was OK. They brought it up in order to ask me to change and stop trying to explain concepts. They didn’t mean to admit weakness in themselves. Most (all?) rationality-oriented communities I have past experience with were more into abstract, clever or conceptual reasoning than EAers are. I could deal with issues like this if people wanted to have extended, friendly conversations and make an effort to learn. I don’t mind. But by and large they don’t want to discuss at length. The primary response I got was not debate or criticism, but being ignored or downvoted. They didn’t engage much. It’s very hard to make any progress with people who don’t want to engage because they aren’t very active minded or open minded, or because they’re too tribalist and biased against some types of critics/heretics, or because they have infallibilist, arrogant, over-confident attitudes.

They often claim to be busy with their causes, but it doesn’t make sense to ignore arguments that you might be pursuing the wrong causes in order to keep pursuing those possibly-wrong causes; that’s very risky! But, in my experience, people (in general, not just at EA) are very resistant to caring about that sort of risk. People are bad at fallibilism.

I think a lot of EAers got a vibe from me that I’m not one of them – that I’m culturally different and don’t fit in. So they saw me as an enemy not someone on their side/team/tribe, so they treated me like I wasn’t actually trying to help. Their goal was to stop me from achieving my goals rather than to facilitate my work. Many people weren’t charitable and didn’t see my criticisms as good faith attempts to make things better. They thought I was in conflict with them instead of someone they could cooperate with, which is related to their general ignorance of social and economic harmony, win/wins, mutual benefit, capitalism, classical liberalism, and the criticisms of conflicts of interest and group conflicts. (Their basic idea with altruism is to ask people to make sacrifices to benefit others, not to help others using mutually beneficial win/wins.)



My Early Effective Altruism Experiences

I quit the Effective Altruism forum due to a new rule requiring all new posts and comments be basically put in the public domain without copyright, so anyone could e.g. sell a book of my posts without my consent (they’d just have to give attribution). More info. I had a bunch of draft posts, so I’m posting some of them here with minimal editing. In general, I’m not going to submit them as link posts at EA myself. If you think they should be shared with EA as link posts, please do it yourself. I’m happy for other people to share links to my work at EA or on social media. Please share stuff in whatever ways you think are good to do.

This post covers some of my earlier time at EA but doesn’t discuss some of the later articles I posted there and the response.


I have several ideas about how to increase EA’s effectiveness by over 20%. But I don’t think they will be accepted immediately. People will find them counter-intuitive, not understand them, disagree with them, etc.

In order to effectively share ideas with EA, I need attention from EA people who will actually read and think about things. I don’t know how to get that and I don’t think EA offers any list of steps that I could follow to get it, nor any policy guarantees like “If you do X, we’ll do Y.” that I could use to bring up ideas. One standard way to get it, which has various other advantages, is to engage in debate (or critical discussion) with someone. However, only one person from EA (who isn’t particularly influential) has been willing to try to have a debate or serious conversation with me. By a serious conversation, I mean one that’s relatively long and high effort, which aims at reaching conclusions.

My most important idea about how to increase EA’s effectiveness is to improve EA’s receptiveness to ideas. This would let anyone better share (potential) good ideas with EA.

EA views itself as open to criticism and it has a public forum. So far, no moderator has censored my criticism, which is better than many other forums! However, no one takes responsibility for answering criticism or considering suggestions. It’s hard to get any disagreements resolved by debate at EA. There’s also no good way to get official or canonical answers to questions in order to establish some kind of standard EA position to target criticism at.

When one posts criticism or suggestions, there are many people who might engage, but no one is responsible for doing it. A common result is that posts do not get engagement. This happens to lots of other people besides me, and it happens to posts which appear to be high effort. There are no clear goalposts to meet in order to get attention for a post.

Attention at the EA forum seems to be allocated in pretty standard social hierarchy ways. The overall result is that EA’s openness to criticism is poor (objectively, but not compared to other groups, many of which are worse).

John the Hypothetical Critic

Suppose John has a criticism or suggestion for EA that would be very important if correct. There are three main scenarios:

  1. John is right and EA is wrong.
  2. EA is right and John is wrong.
  3. EA and John are both wrong.

There should be a reasonable way so that, if John is right, EA can be corrected instead of just ignoring John. But EA doesn’t have effective policies to make that happen. No person or group is responsible for considering that John may be right, engaging with John’s arguments, or attempting to give a rebuttal.

It’s also really good to have a reasonable way so that, if John is wrong and EA is right, John can find out. EA’s knowledge should be accessible so other people can learn what EA knows, why EA is right, etc. This would make EA much more persuasive. EA has many articles which help with this, but if John has an incorrect criticism and is ignored, then he’s probably going to conclude that EA is wrong and won’t debate him, and lower his opinion of EA (plus people reading the exchange might do the same – they might see John give a criticism that isn’t answered and conclude that EA doesn’t really care about addressing criticism).

If John and EA are both wrong, it’d also be a worthwhile topic to devote some effort to, since EA is wrong about something. Discussing John’s incorrect criticism or suggestion could lead to finding out about EA’s error, which could then lead to brainstorming improvements.

I’ve written about these issues before with the term Paths Forward.

Me Visiting EA

The first thing I brought up at EA was asking if EA has any debate methodology or any way I can get a debate with someone. Apparently not. My second question was about whether EA has some alternative to debates, and again the answer seemed to be no. I reiterated the question, pointing out that the “debate methodology” and “alternative to debate methodology” issues form a complete pair, and that if EA has neither, that’s bad. This time, I think some people got defensive about the title, which caused me to get more attention than when my post title didn’t offend people (the incentives there are really bad). The title asked how EA was rational. Multiple replies seemed focused on the title, which I grant was vague, rather than the body text, which gave details of what I meant.

Anyway, I finally got some sort of answer: EA lacks formal debate or discussion methods but has various informal attempts at rationality. Someone shared a list. I wrote a brief statement of what I thought the answer was and asked for feedback if I got EA’s position wrong. I got it right. I then wrote an essay criticizing EA’s position, including critiques of the listed points.

What happened next? Nothing. No one attempted to engage with my criticism of EA. No one tried to refute any of my arguments. No one tried to defend EA. It’s back to the original problem: EA isn’t set up to address criticism or engage in debate. It just has a bunch of people who might or might not do that in each case. There’s nothing organized and no one takes responsibility for addressing criticism. Also, even if someone did engage with me, and I persuaded them that I was correct, it wouldn’t change EA. It might not even get a second person to take an interest in debating the matter and potentially being persuaded too.

I think I know how to organize rational, effective debates and reach conclusions. The EA community broadly doesn’t want to try doing that my way nor do they have a way they think is better.

If you want to gatekeep your attention, please write down the rules you’re gatekeeping by. What can I do to get past the gatekeeping? If you gatekeep your attention based on your intuition and have no transparency or accountability, that is a recipe for bias and irrationality. (Gatekeeping by hidden rules is related to the rule of man vs. the rule of law, as I wrote about. It’s also related to security through obscurity, a well known mistake in software. Basically, when designing secure systems, you should assume hackers can see your code and know how the system is designed, and it should be secure anyway. If your security relies on keeping some secrets, it’s poor security. If your gatekeeping relies on adversaries not knowing how it works, rather than having a good design, you’re making the security through obscurity error. That sometimes works OK if no one cares about you, but it doesn’t work as a robust approach.)

I understand that time, effort, attention, engagement, debate, etc., are limited resources. I advocate having written policies to help allocate those resources effectively. Individuals and groups can both do this. You can plan ahead about what kinds of things you think it’s good to spend attention on, write down decision-making criteria, share them publicly, etc., instead of just leaving it to chance or bias. Using written rationality policies to control some of these valuable resources would let them be used more effectively instead of haphazardly. The high value of the resources is a reason in favor of, not against, governing their use with explicit policies that are put in writing and then critically analyzed. (I think intuition has value too, despite the higher risk of bias, so allocating e.g. 50% of your resources to conscious policies and 50% to intuition would be fine.)

“It’s not worth the effort” is the standard excuse for not engaging with arguments. But it’s just an excuse. I’m the one who has researched how to do such things efficiently, how to save effort, etc., without giving up on rationality. They aren’t researching how to save effort and designing good, effort-saving methods, nor do they want the methods I developed. People just say stuff isn’t worth the effort when they’re biased against thinking about it, not as a real obstacle that they actually want a solution to. They won’t talk about solutions to it when I offer, nor will they suggest any way of making progress that would work if they’re in the wrong.

LW Short Story

Here’s a short story as an aside (from memory, so may have minor inaccuracies). Years ago I was talking with Less Wrong (LW) about similar issues. LW and EA are similar places. I brought up some Paths Forward stuff. Someone said basically he didn’t have time to read it, or maybe didn’t want to risk wasting his time. I said the essay explains how to engage with my ideas in time-efficient, worthwhile ways. So you just read this initial stuff and it’ll give you the intellectual methods to enable you to engage with my other ideas in beneficial ways. He said that’d be awesome if true, but he figures I’m probably wrong, so he doesn’t want to risk his time. We appeared to be at an impasse. I have a potential solution with high value that addresses his problems, but he doubts it’s correct and doesn’t want to use his resources to check if I’m right.

My broad opinion is someone in a reasonably large community like LW should be curious and look into things, and if no one does then each individual should recognize that as a major problem and want to fix it.

But I came up with a much simpler, more direct solution.

It turns out he worked at a coffee shop. I offered to pay him the same wage as his job to read my article (or I think it was a specific list of a few articles). He accepted. He estimated how long the stuff would take to read based on word count and we agreed on a fixed number of dollars that I’d pay him (so I wouldn’t have to worry about him reading slowly to raise his payment). The estimate was his idea, and he came up with the numbers and I just said yes.

But before he read it, an event happened that he thought gave him a good excuse to back out. He backed out. He then commented on the matter somewhere that he didn’t expect me to read, but I did read it. He said he was glad to get out of it because he didn’t want to read it. In other words, he’d rather spend an hour working at a coffee shop than an hour reading some ideas about rationality and resource-efficient engagement with rival ideas, given equal pay.

So he was just making excuses the whole time, and actually just didn’t want to consider my ideas. I think he only agreed to be paid to read because he thought he’d look bad and irrational if he refused. I think the problem is that he is bad and irrational, and he wants to hide it.

More EA

My first essay criticizing EA was about rationality policies, how and why they’re good, and it compared them to the rule of law. After no one gave any rebuttal, or changed their mind, I wrote about my experience with my debate policy. A debate policy is an example of a rationality policy. Although you might expect that conditionally guaranteeing debates would cost time, it has actually saved me time. I explained how it helps me be a good fallibilist using less time. No one responded to give a rebuttal or to make their own debate policy. (One person made a debate policy later. Actually, two people claimed to, but one of them was so bad/unserious that I don’t count it. It wasn’t designed to actually deal with the basic ideas of a debate policy, and I think it was made in bad faith because the person wanted to pretend to have a debate policy. As one example of what was wrong with it, they just mentioned it in a comment instead of putting it somewhere that anyone would find it or that they could reasonably link to in order to show it to people in the future.)

I don’t like even trying to talk about specific issues with EA in this broader context where there’s no one to debate, no one who wants to engage in discussion. No one feels responsible for defending EA against criticism (or finding out that EA is mistaken and changing it). I think that one meta issue has priority.

I have nothing against decentralization of authority when many individuals each take responsibility. However, there is a danger when there is no central authority and also no individuals take responsibility for things and also there’s a lack of coordination (leading to e.g. lack of recognition that, out of thousands of people, zero of them dealt with something important).

I think it’s realistic to solve these problems and isn’t super hard, if people want to solve them. I think improving this would improve EA’s effectiveness by over 20%. But if no one will discuss the matter, and the only way to share ideas is by climbing EA’s social hierarchy and becoming more popular with EA by first spending a ton of time and effort saying other things that people like to hear, then that’s not going to work for me. If there is a way forward that could rationally resolve this disagreement, please respond. Or if any individual wants to have a serious discussion about these matters, please respond.

I’ve made rationality research my primary career despite mostly doing it unpaid. That is a sort of charity or “altruism” – it’s basically doing volunteer work to try to make a better world. I think it’s really important, and it’s very sad to me that even groups that express interest in rationality are, in my experience, so irrational and so hard to engage with.



Rationality Policies Tips

I quit the Effective Altruism forum due to a new rule requiring all new posts and comments be basically put in the public domain without copyright, so anyone could e.g. sell a book of my posts without my consent (they’d just have to give attribution). More info. I had a bunch of draft posts, so I’m posting some of them here with minimal editing. In general, I’m not going to submit them as link posts at EA myself. If you think they should be shared with EA as link posts, please do it yourself. I’m happy for other people to share links to my work at EA or on social media. Please share stuff in whatever ways you think are good to do.


Suppose you have some rationality policies, and you always want to and do follow them. You do exactly the same actions you would have without the policies, plus a little bit of reviewing the policies, comparing your actions with the policies to make sure you’re following them, etc.

In this case, are the policies useless and a small waste of time?

No. Policies are valuable for communication. They provide explanations and predictability for other people. Other people will be more convinced that you’re rational and will understand your actions more. You’ll less often be accused of irrationality or bias (or, worse, have people believe you’re being biased without telling you or allowing a rebuttal). People will respect you more and be more interested in interacting with you. It’ll be easier to get donations.

Also, written policies enable critical discussion of the policies. Having the policies lets people make suggestions or share critiques. So that’s another large advantage of the policies even when they make no difference to your actions. People can also learn from your policies and start using some of the same policies for themselves.

It’s also fairly unrealistic that the policies make no difference to your actions. Policies can help you remember and use good ideas more frequently and consistently.

Example Rationality Policies

“When a discussion is hard, start using an idea tree.” This is a somewhat soft, squishy policy. How do you know when a discussion is hard? That’s up to your judgment. There are no objective criteria given. This policy could be improved but it’s still, as written, much better than nothing. It will work sometimes due to your own judgment and also other people who know about your policy can suggest that a discussion is hard and it’s time to use an idea tree.

A somewhat less vague policy is, “When any participant in a discussion thinks the discussion is hard, start using an idea tree.” In other words, if you think the discussion is tough and a tree would help, you use one. And also, if your discussion partner claims it’s tough, you use one. Now there is a level of external control over your actions. It’s not just up to your judgment.

External control can be triggered by measurements or other parts of reality that are separate from other people (e.g. “if the discussion length exceeds 5000 words, do X”). It can also be triggered by other people making claims or judgments. It’s important to have external control mechanisms so that things aren’t just left up to your judgment. But you need to design external control mechanisms well so that you aren’t controlled to do bad things.
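
As a rough illustration (not a policy I actually use in this form), an externally triggered policy could be written down like this. The 5000-word threshold comes from the example above; the rest is hypothetical.

    # Hypothetical sketch of a policy with external control mechanisms.
    def should_use_idea_tree(discussion_word_count, participant_says_hard):
        # Trigger 1: an objective measurement, independent of anyone's judgment.
        if discussion_word_count > 5000:
            return True
        # Trigger 2: any participant (not just me) judges the discussion to be hard.
        if any(participant_says_hard):
            return True
        return False

    print(should_use_idea_tree(6200, [False, False]))  # True -- length trigger fired
    print(should_use_idea_tree(1200, [False, True]))   # True -- a participant asked for it
    print(should_use_idea_tree(1200, [False, False]))  # False -- left to my own judgment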

It’s problematic if you dislike or hate something but your policy makes you do it. It’s also problematic to have no policy and just do what your emotions want, which could easily be biased. An alternative would be to set the issue aside temporarily and actively do a lot of introspection and investigation, possibly followed by self-improvement.

A more flexible policy would be, “When any participant in a discussion thinks the discussion is hard, start using at least one option from my Hard Discussion Helpers list.” The list could contain using an idea tree and several other options such as doing grammar analysis or using Goldratt’s evaporating clouds.

More about Policies

If you find your rationality policies annoying to follow, or if they tell you to take inappropriate actions, then the solution is to improve your policy writing skill and your policies. The solution is not to give up on written policies.

If you change policies frequently, you should label them (all of them or specific ones) as being in “beta test mode” or something else to indicate they’re unstable. Otherwise you would mislead people. Note: It’s very bad to post written policies you aren’t going to follow; that’s basically lying to people in an unusually blatant, misleading way. But if you post a policy with a warning that it’s a work in progress, then it’s fine.

One way to dislike a policy is you find it takes extra work to use it. E.g. it could add extra paperwork so that some stuff takes longer to get done. That could be fine and worth it. If it’s a problem, try to figure out lighter weight policies that are more cost effective. You might also judge that some minor things don’t need written policies, and just use written policies for more important and broader issues.

Another way to dislike a policy is you don’t want to do what it says for some other reason than saving time and effort. You actually dislike that action. You think it’s telling you to do something biased, bad or irrational. In that case, there is a disagreement between your ideas about rationality that you used to write the policy and your current ideas. This disagreement is important to investigate. Maybe your abstract principles are confused and impractical. Maybe you’re rationalizing a bias right now and the policy is right. Either way – whether the policy or current idea is wrong – there’s a significant opportunity for improvement. Finding out about clashes between your general principles and the specific actions you want to do is important and those issues are worth fixing. You should have your explicit ideas and intuitions in alignment, as well as your abstract and concrete ideas, your big picture and little picture ideas, your practical and intellectual ideas, etc. All of those types of ideas should agree on what to do. When they don’t, something is going wrong and you should improve your thinking.

Some people don’t value opportunities to improve their thinking because they already have dozens of those opportunities. They’re stuck on a different issue other than finding opportunities, such as the step of actually coming up with solutions. If that’s you, it could explain a resistance to written policies. They would make pre-existing conflicts of ideas within yourself more explicit when you’re trying to ignore a too-long list of your problems. Policies could also make it harder to follow the inexplicit compromises you’re currently using. They’d make it harder to lie to yourself to maintain your self-esteem. If you have that problem, I suggest that it’s worth it to try to improve instead of just kind of giving up on rationality. (Also, if you do want to give up on rationality, or your ideas are such a mess that you don’t want to untangle them, then maybe EA and CF are both the wrong places for you. Most of the world isn’t strongly in favor of rationality and critical discussion, so you’ll have an easier time elsewhere. In other words, if you’ve given up on rationality, then why are you reading this or trying to talk to people like me? Don’t try to have it both ways and engage with this kind of article while also being unwilling to try to untangle your contradictory ideas.)



Controversial Activism Is Problematic

EA mostly advocates controversial causes where they know that a lot of people disagree with them. In other words, there exist lots of people who think EA’s cause is bad and wrong.

AI Alignment, animal welfare, global warming, fossil fuels, vaccinations and universal basic income are all examples of controversies. There are many people on each side of the debate. There are also experts on each side of the debate.

Some causes do involve less controversy, such as vitamin A supplements or deworming. I think that, in general, less controversial causes are better, independent of whether they’re correct. It’s better when people broadly agree on what to do, and then do it, instead of trying to proceed with stuff while having a lot of opponents who put effort into working against you. I think EA has far too little respect for getting widespread agreement and cooperation, and for not pushing ahead with action on issues where a lot of people are taking action on the other side and you have to fight against them. This comes up most with political issues but also applies to e.g. AI Alignment.

I’m not saying it’s never worth it to try to proceed despite large disagreements, and win the fight. But it’s something people should be really skeptical of and try to avoid. It has huge downsides. There’s a large risk that you’re in the wrong and are actually doing something bad. And even if you’re right, the efforts of your opponents will cancel out a lot of your effort. Also, proceeding with action when people disagree basically means you’ve given up on persuasion working any time soon. In general, focusing on persuasion and trying to make better more reasonable arguments that can bring people together is much better than giving up on talking it out and just trying to win a fight. EA values persuasion and rational debate too little.


Suppose you want to make the world better in the short term without worrying about a bunch of philosophy. You’d try to understand the situation you’re in, what your goal is, what methods would work well, what’s risky, etc. So how can you analyze the big picture in a fairly short way that doesn’t require advanced skill to make sense of?

We can look at the world and see there are lots of disagreements. If we try to do something that lots of people disagree with, we might be doing something bad. It’s risky. Currently in the world, a ton of people on both sides of many controversies are doing this. Both sides have tons of people who feel super confident that they’re right, and who donate or get involved in activism. This is especially common with political issues.

So if you want to make the world better, two major options are:

  • Avoid controversy
  • Help resolve controversy

There could be exceptions, but these are broadly better options than taking sides and fighting in a controversy. If there are exceptions, correctly knowing about them would probably require a bunch of intellectual skill and study, and wouldn’t be compatible with looking for quicker, more accessible wins. A lot of people think their side of their cause is a special exception when it isn’t.

The overall world situation is there are far too many confident people who are far too eager to fight instead of seeking harmony, cooperation, working together, etc. Persuasion is what enables people to be on the same team instead of working against each other.

Causes related to education and sharing information can help resolve controversy, especially when they’re done in a non-partisan, unbiased way. Some education or information sharing efforts are clearly biased to help one side win, rather than focused on being fair and helpful. Stuff about raising awareness often means raising awareness of your key talking points and why your side is right. Propaganda efforts are very different than being neutral and helping enable people to form better opinions.

Another approach to resolving controversy is to look at intellectual thought leaders, and how they debate and engage with each other (or don’t), and try to figure out what’s going wrong there and what can be done about it.

Another approach is to look at how regular people debate each other and talk about issues, and try to understand why people on both sides aren’t being persuaded and try to come up with some ideas to resolve the issue. That means coming to a conclusion that most people on both sides can be happy with.

Another approach is to study philosophy and rationality.

Avoiding controversy is a valid option too. Helping people avoid blindness by getting enough Vitamin A is a pretty safe thing to work on if you want to do something good with a low risk that you’re actually on the wrong side.

A common approach people try to use is to have some experts figure out which sides of which issues are right. Then they feel safe to know they’re right because they trust that some smart people already looked into the matter really well. This approach doesn’t make much sense in the common case that there are experts on both sides who disagree with each other. Why listen to these experts instead of some other experts who say other things? Often people already like a particular conclusion or cause, then find experts who agree with it. The experts offer justification for a pre-existing opinion rather than actually guiding what those people think. Listening to experts can also run into issues related to irrational, biased gatekeeping about who counts as an “expert”.

In general, people are just way too eager to pick a side and fight for it instead of trying to transcend, avoid or fix such fighting. They don’t see cooperation, persuasion or harmony as powerful or realistic enough tools. They are content to try to beat opponents. And they don’t seem very interested in looking at the symmetry of how they think they’re right and their cause is worth fighting for, but so do many people on the other side.

If your cause is really better, you should be able to find some sort of asymmetric advantage for your side. If it can give you a quick, clean victory that’s a good sign. If it’s a messy, protracted battle, that’s a sign that your asymmetric advantage wasn’t good enough and you shouldn’t be so confident that you know what you’re talking about.



A Non-Status-Based Filter

Asking people if they want to have a serious conversation is a way of filtering, or gatekeeping, which isn’t based on social status. Regardless of one’s status, anyone can opt in. This does require making the offer to large groups, randomized people, or something else that avoids social status. If you just make the offer to people you like, then your choice of who to offer conversations to is probably status based.

This might sound like the most ineffective filter ever. People can just say “yes I want to pass your filter” and then they pass. But in practice, I find it effective – the majority of people decline (or don’t reply, or reply about something else) and are filtered out.

You might think it only filters out people who were not going to have a conversation with you anyway. However, people often converse because they’re baited into it, triggered, defensive, caught up in trying to correct someone they think is wrong, etc. Asking people to make a decision about whether they want to be in a conversation can help them realize that they don’t want to. That’s beneficial for both you and them. However, I’ve never had one of them thank me for it.

A reason people dislike this filter is they associate all filters with status and therefore interpret being filtered out as an attack on their status – a claim they are not good enough in some way. But that’s a pretty weird interpretation with this specific filter.

This filter is, in some sense, the nicest filter ever. No one is ever filtered out who doesn’t want to be filtered out. Only this filter and variants of it have that property. Filtering on anything else, besides whether the person wants to opt in or out, would filter out some people who prefer to opt in. However, no one has ever reacted to me like it’s a nice filter. Many reactions are neutral, and some negative, but no one has praised me for being nice.

Useful non-status-based filters are somewhat difficult to come by and really important/valuable. Most filters people use are some sort of proxy for social status. That’s one of the major sources of bias in the world. What people pay attention to – what gets to them through gatekeeping/filtering – is heavily biased towards status. So it’s hard for them to disagree with high status ideas or learn about low status ideas (such as outliers and innovation).



Hard and Soft Rationality Policies

I have two main rationality policies that are written down:

  1. Debate Policy
  2. Paths Forward Policy

I have many other smaller policies that are written down somewhere in some form, like about not misquoting or giving direct answers to direct questions (e.g. say "yes" or "no" first when answering a yes-or-no question; then write extra stuff if you want, but don't skip the direct answer).

A policy I thought of the other day, and recognized as worth writing down, is my debate policy sharing policy. I've had this policy for a long time. It's important but it isn't written in my debate policy.

If someone seems to want to debate me, but they don't invoke my debate policy, then I should link them to the debate policy so they have the option to use it. I shouldn't get out of the debate based on them not finding my debate policy.

In practice, I link the policy to a lot of people who I doubt want to debate me. I like sharing it. That's part of the point. It’s useful to me. It helps me deal with some situations in an easy way. I get in situations where I want to say or explain something, but writing it every time would be too much work; since some of the same things come up over and over, I can write them once and then share links instead of rewriting the same points. My debate policy says some of the things I frequently want to tell people, and linking it lets me repeat those things with very low effort.

One can imagine someone who put up a debate policy and then didn't mention it to critics who didn't ask for a debate in the right words. One can imagine someone who likes having the policy so they can claim they're rational, but they'd prefer to minimize actually using it. That would be problematic. I wrote my debate policy conditions so that if someone actually meets them, I'd like to debate. I don't dread that or want to avoid it. If you have a debate policy but hope people don't use it, then you have a problem to solve.

If I'm going to ignore a question or criticism from someone I don't know, then I want to link my policy so they have a way to fix things if I was wrong to ignore them. If I don't link it, and they have no idea it exists, then the results are similar to not having the policy. It doesn't function as a failsafe in that case.

Some policies offer hard guarantees and some are softer. What enforces the softer ones so they mean something instead of just being violated as much as one feels like? Generic, hard guarantees like a debate policy which can be used to address doing poorly at any softer guarantee.

For example, I don't have any specific written guarantee for linking people to my debate policy. There's an implicit (and now explicit in this post) soft guarantee that I should make a reasonable effort to share it with people who might want to use it. If I do poorly at that, someone could invoke my debate policy over my behavior. But I don't care much about making a specific, hard guarantee about debate policy link sharing because I have the debate policy itself as a failsafe to keep me honest. I think I do a good job of sharing my debate policy link, and I don't know how to write specific guarantees to make things better. It seems like something where a good faith effort is needed which is hard to define. Which is fine for some issues as long as you also have some clearer, more objective, generic guarantees in case you screw up on the fuzzier stuff.

Besides hard and soft policies, we could also distinguish policies from tools. Like I have a specific method of having a debate where people choose what key points they want to put in the debate tree. I have another debate method where people say two things at a time (it splits the conversation into two halves, one led by each person). I consider those tools. I don't have a policy of always using those things, or using those things in specific conditions. Instead, they're optional ways of debating that I can use when useful. There's a sort of soft policy there: use them when it looks like a good idea. Making a grammar tree is another tool, and I have a related soft policy of using that tool when it seems worthwhile. Having a big toolkit with great intellectual tools, along with actually recognizing situations for using them, is really useful.



Friendliness or Precision

I quit the Effective Altruism forum due to a new rule requiring posts and comments be basically put in the public domain without copyright. More info. I had a bunch of draft posts, so I’m posting some of them here with minimal editing.


In a debate, if you’re unfriendly and you make a lot of little mistakes, you should expect the mistakes to (on average) be biased for your side and against their side. In general, making many small, biased mistakes ruins debates dealing with complex or subtle issues. It’s too hard to fix them all, especially considering you’re the guy who made them (if you had the skill to fix them all, you could have used that same skill to avoid making some of them).

In other words, if you dislike someone, being extremely careful, rigorous and accurate with your reasoning provides a defense against bias. Without that defense, you don’t have much of a chance.

If you have a positive attitude and are happy to hear about their perspective, that helps prevent being biased against them. If you have really high intellectual standards and avoid making small mistakes, that helps prevent bias. If you have neither of those things, conversation doesn’t work well.


Elliot Temple | Permalink | Messages (0)

Attention Filtering and Debate

I quit the Effective Altruism forum due to a new rule requiring posts and comments be basically put in the public domain without copyright. More info. I had a bunch of draft posts, so I’m posting some of them here with minimal editing.


People skim and filter. Gatekeepers and many other types of filters end up being indirect proxies for social status much more than they are about truth seeking.

Filtering isn’t the only problem though. If you have some credentials – awards, a PhD, a popular book, thousands of fans – people often still won’t debate you. Also, I certainly get through initial filtering sometimes. People talk with me some, and a lot more people read some of what I say.

After you get through filters, you run into problems like people still not wanting to debate or not wanting to put in enough effort to understand your point. We could call this secondary filtering. Maybe if you get through five layers of filters, then they’ll debate. Or maybe not. I think some of the filters are generated ad hoc because they don’t want to debate or consider (some types of) ideas that disagree with their current ideas. People can keep making up new excuses as necessary.

Why don’t people want to debate? Often because they’re bad at it.

And they know – even if they don’t consciously admit it – that debating is risky to their social status, and the expectation value for their debating result is to lose status.

And they know that, if they lose the debate, they will then face a problem. They’ll be conflicted. They will partly want to change their mind, but part of them won’t want to change their mind. So they don’t want to face that kind of conflict because they don’t know how to deal with it, so they’d rather avoid getting into that situation.

Also they already have a ton of urgent changes to make in their lives. They already know lots of ways they’re wrong. They already know about many mistakes. So they don’t exactly need new criticism. Adding more issues to the queue isn’t valuable.

All of that is fine but on the other hand anyone who admits that is no thought leader. So people don’t want to admit it. And if an intellectual position has no thought leaders capable of defending it, that’s a major problem. So people make excuses, pretend someone else will debate if debate is merited, shift responsibility to others (usually not to specific people), etc.

Debating is a status risk, a self-esteem risk, and a hard activity. And they may not want to learn about (even more) errors, which would lead to thinking they should change, which is hard and which they may fail at (and that failure may further harm status and self-esteem, and be distracting and unpleasant).


Elliot Temple | Permalink | Messages (0)

Betting Your Career

I quit the Effective Altruism forum due to a new rule requiring posts and comments be basically put in the public domain without copyright. More info. I had a bunch of draft posts, so I’m posting some of them here with minimal editing.


People bet their careers on various premises outside their own expertise. E.g. AGI (alignment) researchers commonly bet on some epistemology without being experts on epistemology who have actually read Popper and concluded, in their own judgment, that he’s wrong.

So you might expect them to be interested in criticism of those premises. Shouldn’t they want to investigate the risk?

But that depends on what you value about your career.

If you want money and status, and not to have to make changes, then maybe it’s safer to ignore critics who don’t seem likely to get much attention.

If you want to do productive work that’s actually useful, then your career is at risk.

People won’t admit it, but many of them don’t actually care that much about whether their career is productive. As long as they get status and money, they’re satisfied.

Also, a lot of people lack confidence that they can do very productive work whether or not their premises are wrong.

Actually, having wrong but normal/understandable/blameless premises has big advantages: you won’t come up with important research results, but it’s not your fault. If it comes out that your premises were wrong, you did the noble work of investigating a lead that many people believed was promising. Science and other types of research always involve investigating many leads that don’t turn out to be important. If you work on a lead people want investigated and produce nothing useful, but the lead turns out to be important, then other investigators outcompeted you, and people could wonder why you didn’t figure out anything about the lead you worked on. But if the lead you work on turns out to be a dead end, the awkward questions go away. So there’s an advantage to working on dead ends, as long as other people think they’re a good thing to work on.


Elliot Temple | Permalink | Messages (0)

AGI Alignment and Karl Popper

I quit the Effective Altruism forum due to a new rule requiring posts and comments be basically put in the public domain without copyright. I had a bunch of draft posts, so I’m posting some of them here with light editing.


On certain premises, which are primarily related to the epistemology of Karl Popper, artificial general intelligences (AGIs) aren’t a major threat. I tell you this as an expert on Popperian epistemology, which is called Critical Rationalism.

Further, approximately all AGI research is based on epistemological premises which contradict Popperian epistemology.

In other words, AGI research and AGI alignment research are both broadly premised on Popper being wrong. Most of the work being done is an implicit bet that Popper is wrong. If Popper is right, many people are wasting their careers, misdirecting a lot of donations, incorrectly scaring people about existential dangers, etc.

You might expect that alignment researchers would have done a literature review, found semi-famous relevant thinkers like Popper, and written refutations of them before being so sure of themselves and betting so much on the particular epistemological premises they favor. I haven’t seen anything of that nature, and I’ve looked a lot. If it exists, please link me to it.

To engage with and refute Popper requires expertise about Popper. He wrote a lot, and it takes a lot of study to understand and digest it. So you have three basic choices:

  • Do the work.
  • Rely on someone else’s expertise who agrees with you.
  • Rely on someone else’s expertise who disagrees with you.

How can you use the expertise of someone who disagrees with you? You can debate with them. You can also ask them clarifying questions, discuss issues with them, etc. Many people are happy to help explain ideas they consider important, even to intellectual opponents.

To rely on the expertise of someone on your side of the debate, you endorse literature they wrote. They study Popper, they write down Popper’s errors, and then you agree with them. Then when a Popperian comes along, you give them a couple citations instead of arguing the points yourself.

There is literature criticizing Popper. I’ve read a lot of it. My judgment is that the quality is terrible. And it’s mostly written by people who are pretty different than the AI alignment crowd.

There’s too much literature on your side to read all of it. What you need (to avoid doing a bunch of work yourself) is someone similar enough to you – someone likely to reach the same conclusions you would reach – to look into each thing. One person is potentially enough. So if someone who thinks similarly to you reads a Popper criticism and thinks it’s good, it’s somewhat reasonable to rely on that instead of investigating the matter yourself.

Keep in mind that the stakes are very high: potentially lots of wasted careers and dollars.

My general take is you shouldn’t trust the judgment of people similar to yourself all that much. Being personally well read regarding diverse viewpoints is worthwhile, especially if you’re trying to do intellectual work like AGI-related research.

And there aren’t a million well known and relevant viewpoints to look into, so I think it’s reasonable to just review them all yourself, at least a bit via secondary literature with summaries.

There are much more obscure viewpoints that are worth at least one person looking into, but most people can’t and shouldn’t try to look into most of those.

Gatekeepers like academic journals or university hiring committees are really problematic, but the least you should do is vet stuff that gets through gatekeeping. Popper was also respected by various smart people, like Richard Feynman.

Mind Design Space

The AI Alignment view claims something like:

Mind design space is large and varied.

Many minds in mind design space can design other, better minds in mind design space. Which can then design better minds. And so on.

So, a huge number of minds in mind design space work as starting points to quickly get to extremely powerful minds.

Many of the powerful minds are also weird, hard to understand, very different than us including regarding moral ideas, possibly very goal directed, and possibly significantly controlled by their original programming (which likely has bugs and literally says different things, including about goals, than the design intent).

So AGI is dangerous.

There is an epistemology which contradicts this, based primarily on Karl Popper and David Deutsch. It says that actually mind design space is like computer design space: sort of small. This shouldn’t be shocking since brains are literally computers, and all minds are software running on literal computers.

In computer design, there is a concept of universality or Turing completeness. In summary, when you start designing a computer and adding features, after very few features you get a universal computer. So there are only two types of computers: extremely limited computers and universal computers. This makes computer design space less interesting or relevant. We just keep building universal computers.

Every computer has a repertoire of computations it can perform. A universal computer has the maximal repertoire: it can perform any computation that any other computer can perform. You might expect universality to be difficult to get and require careful designing, but it’s actually difficult to avoid if you try to make a computer powerful or interesting.
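To make the “very few features” point concrete, here’s a small illustration of my own (not part of the original argument): an interpreter for a Brainfuck-style toy language with only eight single-character instructions. Despite how few features the language has, it is Turing-complete, i.e. universal.

```python
# A sketch of my own: a complete interpreter for an eight-instruction toy language.
# That such a tiny language is Turing-complete illustrates how quickly "adding
# features" reaches universality.

def run(program: str, input_data: str = "") -> str:
    tape, ptr, pc = [0] * 30000, 0, 0   # memory cells, data pointer, program counter
    inp, out = iter(input_data), []

    # Pre-compute matching bracket positions for the two loop instructions.
    stack, match = [], {}
    for i, c in enumerate(program):
        if c == "[":
            stack.append(i)
        elif c == "]":
            j = stack.pop()
            match[i], match[j] = j, i

    while pc < len(program):
        c = program[pc]
        if c == ">":
            ptr += 1                              # move data pointer right
        elif c == "<":
            ptr -= 1                              # move data pointer left
        elif c == "+":
            tape[ptr] = (tape[ptr] + 1) % 256     # increment current cell
        elif c == "-":
            tape[ptr] = (tape[ptr] - 1) % 256     # decrement current cell
        elif c == ".":
            out.append(chr(tape[ptr]))            # output current cell
        elif c == ",":
            tape[ptr] = ord(next(inp, "\0"))      # read one input character
        elif c == "[" and tape[ptr] == 0:
            pc = match[pc]                        # skip the loop body
        elif c == "]" and tape[ptr] != 0:
            pc = match[pc]                        # repeat the loop body
        pc += 1
    return "".join(out)

print(run("+" * 104 + "." + "+."))   # prints "hi"
```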

Universal computers do vary in other design elements, besides what computations they can perform, such as how large they are. This is fundamentally less important than what computations they can do, but does matter in some ways.

There is a similar theory about minds: there are universal minds. (I think this was first proposed by David Deutsch, a Popperian intellectual.) The repertoire of things a universal mind can think (or learn, understand, or explain) includes anything that any other mind can think. There’s no reasoning that some other mind can do which it can’t do. There’s no knowledge that some other mind can create which it can’t create.

Further, human minds are universal. An AGI will, at best, also be universal. It won’t be super powerful. It won’t dramatically outthink us.

There are further details but that’s the gist.

Has anyone on the AI alignment side of the debate studied, understood and refuted this viewpoint? If so, where can I read that (and why did I fail to find it earlier)? If not, isn’t that really bad?


Elliot Temple | Permalink | Messages (0)

Altruism Contradicts Liberalism

I quit the Effective Altruism forum due to a new rule requiring posts and comments be basically put in the public domain without copyright. I had a bunch of draft posts, so I’m posting some of them here with light editing.


Altruism means (New Oxford Dictionary):

the belief in or practice of disinterested and selfless concern for the well-being of others

Discussion about altruism often involves being vague about a specific issue. Is this selfless concern self-sacrificial? Is it bad for the self or merely neutral? This definition doesn’t specify.

The second definition does specify but isn’t for general use:

Zoology behavior of an animal that benefits another at its own expense

Multiple dictionaries fit the pattern of not specifying self-sacrifice (or not) in the main definition, then bringing it up in an animal-focused definition.

New Oxford’s thesaurus is clear. Synonyms for altruism include:

unselfishness, selflessness, self-sacrifice, self-denial

Webster’s Third suggests altruism involves lack of calculation, and doesn’t specify whether it’s self-sacrificial:

uncalculated consideration of, regard for, or devotion to others' interests sometimes in accordance with an ethical principle

EA certainly isn’t uncalculated. EA does stuff like mathematical calculations and cost/benefit analysis. Although the dictionary may have meant something more like shrewd, self-interested, Machiavellian calculation. If so, they really shouldn’t try to put so much meaning into one fairly neutral word like that without explaining what they mean.

Macmillan gives:

a way of thinking or behaving that shows you care about other people and their interests more than you care about yourself

Caring more about their interests than yourself suggests self-sacrifice, a conflict of interest (where decisions favoring you or them must be made), and a lack of win-win solutions or mutual benefit.

Does EA have any standard, widely read and accepted literature which:

  • Clarifies whether it means self-sacrificial altruism or whether it believes its “altruism” is good for the self?
  • Refutes (or accepts!?) the classical liberal theory of the harmony of men’s interests.

Harmony of Interests

Is there any EA literature regarding altruism vs. the (classical) liberal harmony of interests doctrine?

EA believes in conflicts of interest between men (or between individual and total utility). For example, William MacAskill writes in The Definition of Effective Altruism:

Unlike utilitarianism, effective altruism does not claim that one must always sacrifice one’s own interests if one can benefit others to a greater extent.[35] Indeed, on the above definition effective altruism makes no claims about what obligations of benevolence one has.

I understand EA’s viewpoint to include:

  • There are conflicts between individual utility and overall utility (the impartial good).
  • It’s possible to altruistically sacrifice some individual utility in a way that makes overall utility go up. In simple terms, you give up $100 but it provides $200 worth of benefit to others.
  • When people voluntarily sacrifice some individual utility to altruistically improve overall utility, they should do it in (cost) effective ways. They should look at things like lives saved per dollar. Charities vary dramatically in how much overall utility they create per dollar donated.
  • It’d be good if some people did some effective altruism sometimes. EA wants to encourage more of this, although it doesn’t want to be too pressuring, so it does not claim that large amounts of altruism are a moral obligation for everyone. If you want to donate 10% of your income to cost effective charities, EA will say that’s great instead of saying you’re a sinner because you’re still deviating from maximizing overall utility. (EA also has elements which encourage some members to donate a lot more than 10%, but that’s another topic.)

Finally, unlike utilitarianism, effective altruism does not claim that the good equals the sum total of wellbeing. As noted above, it is compatible with egalitarianism, prioritarianism, and, because it does not claim that wellbeing is the only thing of value, with views on which non-welfarist goods are of value.[38]

EA is compatible with many views on how to calculate overall utility, not just the view that you should add every individual utility. In other words, EA is not based on a specific overall/impersonal utility function. EA also is not based on any advocating that individuals have any particular individual utility function or any claim that the world population currently has a certain distribution of individual utility functions.

All of this contradicts the classical liberal theory of the harmony of men’s (long term, rational) interests, and it doesn’t engage with that theory. They just seem unaware of the literature they’re disagreeing with (or they’re aware and refusing to debate it on purpose?), even though some of it is well known and easy to find.

Total Utility Reasoning and Liberalism

I understand EA to care about total utility for everyone, and to advocate people altruistically do things which have lower utility for themselves but which create higher total utility. One potential argument is that if everyone did this then everyone would have higher individual utility.

A different potential approach to maximizing total utility is the classical liberal theory of the harmony of men’s interests. It says, in short, that there is no conflict between following self-interest and maximizing total utility (for rational men in a rational society). When there appears to be a conflict, so that one or the other must be sacrificed, there is some kind of misconception, distortion or irrationality involved. That problem should be addressed rather than accepted as an inherent part of reality that requires sacrificing either individual or total utility.

According to the liberal harmony view, altruism claims there are conflicts between the individual and society which actually don’t exist. Altruism therefore stirs up conflict and makes people worse off, much like the Marxist class warfare ideology (which is one of the standard opponents of the harmony view). Put another way, spreading the idea of conflicts of interest is an error that lowers total utility. The emphasis should be on harmony, mutual benefit and win/win solutions, not on altruism and self-sacrifice.

It’s really bad to ask people to make tough, altruistic choices if such choices are unnecessary mistakes. It’s bad to tell people that getting a good outcome for others requires personal sacrifices if it actually doesn’t.

Is there any well-known, pre-existing EA literature which addresses this, including a presentation of the harmony view that its advocates would find reasonably acceptable? I take it that EA rejects the liberal harmony view for some reason, which ought to be written down somewhere. (Or they’re quite ignorant, which would be very unreasonable for the thought leaders who developed and lead EA.) I searched the EA forum and it looks like the liberal harmony view has never been discussed, which seems concerning. I also did a web search and found nothing regarding EA and the liberal harmony of interests theory. I don’t know where or how else to do an effective EA literature search.


Elliot Temple | Permalink | Messages (0)

Harmony, Capitalism and Altruism

I quit the Effective Altruism forum due to a new rule requiring posts and comments be basically put in the public domain without copyright. I had a bunch of draft posts, so I’m posting some of them here with light editing.


Are there conflicts of interest, or is mutual benefit always possible? This is one of the most important questions in political philosophy.

The belief in conflicts of interest leads to a further question. Who should win when there’s a conflict? One view is that individuals should get worse outcomes for the benefit of the group. Another view is that individuals should be prioritized over the group.

Why would one advocate worse outcomes for the group? That might sound odd initially. One reason is that it seems to be implied by individual freedom and individual rights. If each person has rights and freedom, then he’s free to maximize his interests within that framework. There’s nothing you can do, besides asking nicely and trying to make persuasive arguments (which historically isn’t very effective), to get people to sacrifice their interests for the sake of others.

One consequence is the altruist-collectivist side of the debate has often considered rejecting individual freedom or some individual rights. What if most people won’t voluntarily act for the benefit of the group, to create a paradise society with the highest overall utility (the most good for the most people, or something along those lines)? Then some people will advocate violently forcing them.

Because there appears to be a conflict between the good of the group and the rights and freedoms of the individual, altruists have often advocated restricting the rights and freedoms of the individual. Sometimes they’ve used violence, in the name of the greater good, and killed millions. That kind of massive violence has never led to good results for the group, though it has led to somewhat good results for a few individuals who end up being wealthy rulers. There have always been questions about whether communist revolutionary leaders actually care about the welfare of everyone or are just seeking power so they can be corrupt and get personal luxuries. Historically, collectivist societies tend to be plagued by noticeably more corruption than the more individualist democracies have. Violence and corruption are linked together in some ways. It’s harder to profit from corruption if individuals have rights and society won’t let you get away with violating their rights to take their stuff.

Individualism

A rather different viewpoint is that we’re all fallible, we each individually have limited knowledge, and we can only coordinate with others a limited amount. We shouldn’t try to design paradise by looking at society from the perspective of an omniscient god. We have to consider decision making and action from the point of view of an individual. Instead of trying to have some wise philosopher kings or central planners telling everyone what to do, we need a system where individuals can figure out what to do based on their own situation and the knowledge they have. The central planner approach doesn’t work well because the planners don’t have enough detailed knowledge of each individual’s life circumstances, and can’t do a good enough job of optimizing what’s good for them. To get a good outcome for society, we need to use the brainpower of all its members, not just a few leaders. We have no god with infinite brainpower to lead us.

So maybe the best thing we can do is have each individual pay attention to and try to optimize the good for himself, while following some rules that prevent him from harming or victimizing others.

In total, there is one brain per person. A society with a million members has a million brains. So how much brainpower can be allocated to getting a good outcome for each person? On average, at most, one brain’s worth of brainpower. What’s the best way to assign brainpower to be used to benefit people? Should everyone line up, and then each person looks six people to his right and uses his brainpower to optimize that person’s life? No. It makes much more sense for each brain to be assigned the duty of optimizing the life of the person that brain is physically inside of.

You can get a rough approximation of a good society by having each person make decisions for themselves and run their own lives, while prohibiting violence, theft and fraud.

Perhaps you can get efficiency gains with organized, centralized planning done by specialists – maybe they can make some ideas that are useful to many people. Or maybe people can share ideas in a decentralized way. There are many extra details to consider.

Coordination

Next let’s consider coordination between people. One model for how to do that is called trade. I make shoes, and you make pants. We’d each like to use a mix of shoes and pants, not just the thing we make. So I trade you some of my shoes for some of your pants. That trade makes us both better off. This is the model of voluntary trade for mutual benefit. It’s also the model of specialization and division of labor. And what if you make hats but I don’t want any hats, but you do want some of my shoes? That is the problem money solves. You can sell hats to someone else for money, then trade money for my shoes, and then I can trade that money to someone else for something I do want, e.g. shirts.

The idea here is that each individual makes sure each trade he participates in benefits him. If a trade doesn’t benefit someone, it’s his job to veto the trade and opt out. Trades only happen if everyone involved opts in. In this way, every trade benefits everyone involved (according to their best judgment using their brainpower, which will sometimes be mistaken), or at least is neutral and harmless for them. So voluntary trade raises the overall good in society – each trade raises the total utility score for that society. (So if you want high total utility, maybe you should think about how to increase the amount of trading that happens. Maybe that would do more good than donating to charity. And that’s a “real” maybe – I mean it’s something worth considering and looking into, not that I already reached a conclusion about it. And if EA has not looked into or considered that much, then I think that’s bad and shows a problem with EA, independent of whether increasing trade is a good plan.)

High Total Utility

It’s fairly hard to score higher on total good by doing something else besides individual rights plus voluntary trade and persuasion (meaning sharing ideas on a voluntary basis).

Asking people to sacrifice their self-interest usually results in lower total good, not higher total good. Minor exceptions, like some small voluntary donations to charity, may help raise total good a bit, though they may not. To the extent people donate due to social signaling or social pressure (rather than actually thinking a charity can use the money better than they can), donations are part of some harmful social dynamics that are making society worse.

Donations or Trade

But many people look at this and say “Sometimes Joe could give up a pair of old pants that he doesn’t really need that’s just sitting around taking up space, and give it to Bob, who would benefit from it and actually wear it. The pants only have a small value to Joe, and if he would sacrifice that small value, Bob would get a large value, thus raising overall utility.”

The standard pro-capitalist rebuttal is that there’s scope for a profitable trade here. Also, the scenario was phrased from the perspective of an omniscient god, central planner or philosopher king. Joe needs to actually know that Bob needs a pair of used pants, and Bob needs to know that Joe has an extra pair. And Joe needs to consider the risk that several of the pants he currently wears become damaged in the near future in which case he’d want to wear that old pair again. And Bob needs to consider the risk that he’s about to be gifted a bunch of pairs of new pants from other people so he wouldn’t want Joe’s pants anyway.

But let’s suppose they know about all this stuff and still decide that, on average, taking into account risk and looking at expectation values, it’s beneficial for Bob to have the pants, not Joe. We can put numbers on it. It’s a $2 negative for Joe, but a $10 gain for Bob. That makes a total profit (increase in total utility) of $8 if Joe hands over the pants.

If handing over the pants increases total good by $8, how should that good be divided up? Should $10 of it go to Bob, and -$2 of it go to Joe? That’s hardly fair. Why should Bob get more benefit than the increase in total good? Why should Joe sacrifice and come out behind? It would be better if Bob paid $2 for the pants so Bob benefits by $8 and Joe by $0. That’s fairer. But is it optimal? Why shouldn’t Joe get part of the benefit? As a first approximation, the fairest outcome is that they split the benefit evenly. This requires Bob to pay $6 for the pants. Then Joe and Bob each come out ahead by $4 of value compared to beforehand.
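As a sketch of that arithmetic (my own illustration, using the numbers above): Joe values the pants at $2 and Bob at $10, so any price between those values leaves both better off, the total gain is $8 regardless of the price, and a price of $6 splits the gain evenly.

```python
# Sketch of the pants example: Joe values the pants at $2, Bob values them at $10.
# The price only changes how the fixed $8 gain is divided between them.

def outcome(seller_value, buyer_value, price):
    seller_gain = price - seller_value   # seller gives up the item, receives the price
    buyer_gain = buyer_value - price     # buyer gets the item, pays the price
    return seller_gain, buyer_gain, seller_gain + buyer_gain

even_split_price = (2 + 10) // 2         # $6 splits the gain evenly

for price in (0, 2, even_split_price):
    joe, bob, total = outcome(2, 10, price)
    print(f"price ${price}: Joe {joe:+}, Bob {bob:+}, total {total:+}")

# price $0: Joe -2, Bob +10, total +8   (pure gift)
# price $2: Joe +0, Bob +8, total +8
# price $6: Joe +4, Bob +4, total +8    (even split)
```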

There are objections. How are Joe and Bob going to find out about each other and make this trade happen? Maybe they are friends. But there are a lot more non-friends in society than friends, so if you only trade with your friends then a lot of mutually beneficial trades won’t happen. So maybe a middleman like a used clothing store can help – Joe sells his pants to the used clothing store where Bob later finds and buys them. The benefit is split up between Joe, Bob and the store. As a first approximation, we might want to give a third of the benefit to each party. In practice, used clothing stores often don’t pay very much for clothing and don’t charge very high profit margins, so Bob might get the largest share of the benefit. Also the overall benefit is smaller now because there are new costs like store employees, store lighting, marketing, and the building the store is in. Those costs may be worth it because otherwise Joe and Bob never would have found each other and made a trade, so a smaller benefit is better than no benefit. Those costs are helping deal with the problem of limited knowledge and no omniscient coordinator – coordination and finding beneficial trades actually takes work and has a downside. Some trades that would be beneficial if they took zero effort actually won’t work because the cost of the trading partners finding each other (directly or indirectly through a middle man) costs more than the benefit of the trade.

Not Having Enough Money

What if Bob doesn’t have $6 to spare? One possibility is a loan. A loan would probably be from a bank, not from Joe – this is an example of specialization and division of labor – Joe isn’t good at loans, and a bank that handles hundreds of loans can have more efficient, streamlined processes. (In practice today, our banks have a lot of flaws and it’s more typical to get small loans from credit cards, which also have flaws. I was making a theoretical point.)

If Bob is having a hard time, but it’s only temporary, then a bank can loan him some money and he can pay it back later with interest. That can be mutually beneficial. But not everyone pays their loans back, so the bank will have to use the limited information it has to assess risk.

Long Term Poverty

What if Bob is unlikely to have money to spare in the future either? What if his lack of funds isn’t temporary? That raises the question of why.

Is Bob lazy and unproductive? Does he refuse to work, refuse to contribute to society and create things of value to others, but he wants things that other people worked to create like pants? That anti-social attitude is problematic under both capitalist and altruistic approaches. Altruism says he should sacrifice, by accepting the disutility of working, in order to benefit others. Capitalism gives him options. He can trade the disutility of working to get stuff like pants, if he wants to. Or he can decide the disutility of continuing to wear old pants is preferable to the disutility of working. Capitalism offers an incentive to work then lets people make their own choices.

It’s better (in some ways) if Joe trades pants to someone who works to create wealth that can benefit society, rather than to someone who sits around choosing not to work. Joe should reward and incentivize people who participate in productive labor. That benefits both Joe (because he can be paid for his pants instead of giving them away) and also society (which is better off in aggregate if more people work).

What if Bob is disabled, elderly, or unlucky, rather than lazy? There are many possibilities including insurance, retirement savings, and limited amounts of charitable giving to help out as long as these kinds of problems aren’t too common and there isn’t too much fraud or bad faith (e.g. lying about being disabled or choosing not to save for retirement on purpose because you know people will take pity on you and help you out later, so you can buy more alcohol and lotto tickets now).

Since the central planner approach doesn’t work well, one way to approach altruism is as some modifications on top of a free market. We can have a free market as a primary mechanism, and then encourage significant amounts of charitable sacrifice too. Will that create additional benefit? That is unclear. Why should Joe give his pants to Bob for free instead of selling them for $6 so that Joe and Bob split the benefit evenly? In the general case, he shouldn’t. Splitting the benefit – trade – makes more sense than charity.

Liberalism’s Premise

But pretty much everything I’ve said so far has a hidden premise which is widely disputed. It’s all from a particular perspective. The perspective is sometimes called classical liberalism, individualism, the free market or capitalism.

The hidden premise is that there are no conflicts of interest between people. This is often stated with some qualifiers, like that the people have to be rational, care about the long term not just the short term and live in a free, peaceful society. Sometimes it’s said that there are no innate, inherent or necessary conflicts of interest. The positive way of stating it is the harmony of interests theory.

An inherent conflict would mean Joe has to lose for Bob to win. And the win for Bob might be bigger than the loss for Joe. In other words, for some reason, Bob can’t just pay Joe $6 to split the benefit. Either Joe can get $2 of benefit from keeping that pair of pants, or Bob can get $10 if Joe gives it to him (or perhaps if Bob takes it), and there are no other options, so there’s a conflict. In this viewpoint, there have to be winners and losers. Not everything can be done for mutual benefit using a win/win approach or model. Altruism says Joe probably won’t want to give up the pants for nothing, but he should do it anyway for the greater good.

The hidden premise of altruism is that there are conflicts of interest, while the hidden premise of classical liberalism is that there are no necessary, rational conflicts of interest.

I call these things hidden premises but they aren’t all that hidden. There are books talking about them explicitly and openly. They aren’t well known enough though. The Marxist class warfare theory is a conflicts of interests theory, which has been criticized by the classical liberals who advocated a harmony of interests theory that says social harmony can be created by pursuing mutual benefit with no losers or sacrificial victims (note: it’s later classical liberals who criticized Marxism; classical liberalism is older than Marxism). Altruists sometimes openly state their belief in a conflict of interests viewpoint, but many of them don’t state that or aren’t even aware of it.

Put another way, most people have tribalist viewpoints. The altruists and collectivists think there are conflicts between the individual and group, and they want the group to win the conflict.

People on the capitalist, individualist side of the debate are mostly tribalists too. They mostly agree there are conflicts between the individual and group, and they want the individual to win the conflict.

And then a few people say “Hey, wait, the individual and group, or the individual and other individuals, or the group and the other groups, are not actually in conflict. They can exist harmoniously and even benefit each other.” And then basically everyone dislikes and ignores them, and refuses to read their literature.

The harmony theory of classical liberalism has historical associations with the free market, and my own thinking tends to favor the free market. But you should be able to reason about it from either starting point – individual or group – and reach the same conclusions. Or reason about it in a different way that doesn’t start with a favored group. There are many lines of reasoning that should work fine.

Most pro-business or pro-rich-people type thinking today is just a bunch of tribalism based on thinking there is a conflict and taking sides in the conflict. I don’t like it. I just like capitalism as an abstract economic theory that addresses some problems about coordinating human action given individual actors with limited knowledge. Also I like peace and freedom, but I know most people on most sides do too (or at least they think they do), so that isn’t very differentiating.

I think the most effective way to achieve peace and social harmony is by rejecting the conflicts of interest mindset and explaining stuff about mutual benefit. There is no reason to fight others if one is never victimized or sacrificed. Altruism can encourage people to pick fights because it suggests there are and should be sacrificial victims who lose out for the benefit of others. Tribalist capitalist views also lead to fights because they e.g. legitimize the exploitation of the workers and downplay the reasonable complaints of labor, rather than saying “You’re right. That should not be happening. This must be fixed. We must investigate how that kind of mistreatment by your employers is happening. There are definitely going to be some capitalism-compatible fixes; let’s figure them out.”

You can start with group benefit and think about how to get it given fallible actors with limited knowledge and limited brainpower. We won’t be able to design a societal system that gets a perfect outcome. We need systems that let people do the best with the knowledge they have, and let them coordinate, share knowledge, etc. We’ll want them to be able to trade when one has something that someone else could make better use of and vice versa. We’ll want money to deal with the double coincidence of wants problem. We’ll want stores with used goods functioning as middle men, as well as online marketplaces where individuals can find each other. (By the way, Time Will Run Back by Henry Hazlitt is a great book about a socialist leader who tries to solve some problems his society has and reinvents capitalism. It’s set in a world with no capitalist countries and where knowledge of capitalism had been forgotten.)

More Analysis

Will we want people to give stuff to each other, for nothing in return, when someone else can benefit more from it? Maybe. Let’s consider.

First, it’s hard to tell how much each person can benefit from something. How do I know that Bob values this object more than I do? If we both rate it on a 1-10 scale, how do we know our scales are equivalent? There’s no way to measure value. A common measure we use is comparing something to dollars. How many dollars would I trade for it and be happy? I can figure out some number of dollars I value more than the object, and some number of dollars I value less than the object, and with additional effort I can narrow down the range.

So how can we avoid the problem of mistakenly giving something to someone who actually gets less utility from it than I do? He could pay dollars for it. If he values it more in dollars than I do, then there’s mutual benefit in selling it to him. He could also offer an object in trade for it. What matters then is that we each value what we get more than what we give up. I might actually value the thing I trade away more than the other guy does, and there could still be mutual benefit.

Example:

I have pants that I value at $10 and Bob values at $5. For the pants, Bob offers to trade me artwork which I value at $100 and he values at $1. I value both the pants and artwork more than Bob does, but trading the pants to him still provides mutual benefit.

But would there be more total benefit if Bob simply gave me the artwork and I kept the pants? Sure. And what if I gave Bob $50 for the art? That has the same total benefit. On the assumption that we each value a dollar equally, transfers of dollars never change total benefit. (That’s not a perfect assumption but it’s often a reasonable approximation.) But transfers of dollars are useful, even when they don’t affect total utility, because they make trades mutually beneficial instead of having a winner and a loser. Transferring dollars also helps prevent trades that reduce total utility: if Bob will only offer me like $3 for the pants, which I value at $10, then we’ve figured out that the pants benefit me more than him and I should keep them.
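Here’s a rough sketch (my own, using the numbers from the example) that tallies each person’s gain under the three arrangements just discussed: the straight trade, Bob gifting the artwork, and the gift plus a $50 payment.

```python
# I value the pants at $10 and the artwork at $100; Bob values them at $5 and $1.
# Each scenario tallies what I gain, what Bob gains, and the total.

my_val  = {"pants": 10, "art": 100}
bob_val = {"pants": 5,  "art": 1}

def outcome(bob_gets_pants, i_pay):
    # In every scenario Bob's artwork goes to me; sometimes my pants go to Bob,
    # and sometimes I also pay him some dollars.
    me  = my_val["art"] - (my_val["pants"] if bob_gets_pants else 0) - i_pay
    bob = (bob_val["pants"] if bob_gets_pants else 0) - bob_val["art"] + i_pay
    return me, bob, me + bob

print(outcome(bob_gets_pants=True,  i_pay=0))    # straight trade:            (90, 4, 94)
print(outcome(bob_gets_pants=False, i_pay=0))    # Bob gifts the art:         (100, -1, 99)
print(outcome(bob_gets_pants=False, i_pay=50))   # gifted art, I pay Bob $50: (50, 49, 99)
```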

BTW, if you want to help someone who has no dollars, you should consider giving him dollars, not other goods. Then see if he’ll pay you enough to trade for the other goods. If he won’t, that’s because he thinks he can get even more value by using the dollars in some other way.

Should I do services for Bob whenever the value to him is higher than the disutility to me? What if I have very wonderful services that many people want – like I’m a great programmer or chef – and I end up working all day every day for nothing in return? That would create a disincentive to develop skills. From the perspective of designing a social system or society, it works better to set up good incentives instead of demanding people act contrary to incentives. We don’t want to have a conflict or misalignment between incentives and desired behaviors, or we’ll end up with people doing undesirable but incentivized behavior. We’ll consider doing that if it’s unavoidable, but we should at least minimize it.

Social Credit

There’s a general problem when you don’t do dollar-based trading: what if some people keep giving and giving (to people who get higher utility for goods or services) but don’t get a similar amount of utility or more in return? If people just give stuff away whenever it will benefit others a bunch, wealth and benefit might end up distributed very unequally. How can we make things fairer? (I know many pro-capitalist people defend wealth inequality as part of the current tribalist political battle lines. And I don’t think trying to make sure everyone always has exactly equal amounts of wealth is a good idea. But someone giving a lot and getting little, or just significant inequality in general, is a concern worthy of some analysis.)

We might want to develop a social credit system (but I actually mean this in a positive way, despite various downsides of the Chinese Communist Party’s social credit system). We might want to keep score in some way to see who is contributing the most to society and make sure they get some rewards. That’ll keep incentives aligned well and prevent people from having bad luck and not getting much of the total utility.

So we have this points system where every time you benefit someone you get points based on the total utility created. And people with higher points should be given more stuff and services. Except, first of all, how? Should they be given stuff even if it lowers total utility? If the rule is to always do whatever raises total utility, how can anyone deviate to help out the people with high scores (or high scores relative to personal utility)?

Second, aren’t these points basically just dollars? Dollars are a social credit system which tracks who contributed the most. In the real world, many things go wrong with this and people’s scores sometimes end up wildly inaccurate, just like in China where their social credit system sometimes assigns people inaccurate scores. But if you imagine an ideal free market, then dollars basically track how much people contribute to total utility. And then you spend the dollars – lower your score – to get benefits for yourself. If someone helps you, you give him some of your dollars. He gave a benefit, and you got a benefit, so social credit points should be transferred from you to him. Then if everyone has the same number of dollars, that basically also means everyone got the same amount of personal utility or benefit.

What does it mean if someone has extra dollars? What can we say about rich people? They are the most altruistic. They have all these social credit points but they didn’t ask for enough goods and services in return to use up their credit. They contributed more utility to others than they got for themselves. And that’s why pro-capitalist reasoning sometimes says good things about the rich.

But in the real world today, people get rich in all kinds of horrible ways because no country has a system very similar to the ideal free market. And a ton of pro-capitalist people seem to ignore that. They like and praise the rich people anyway, instead of being suspicious of how they got rich. They do that because they’re pro-rich, pro-greed tribalists or something. Some of them aspire to one day be rich, and want to have a world that benefits the rich so they can keep that dream alive and imagine one day getting all kinds of unfair benefits for themselves. And then the pro-altruism and pro-labor tribalists yell at them, and they yell back, and nothing gets fixed. As long as both sides believe in conflicts of interest, and are fighting over which interest groups should be favored and disfavored in what ways, then I don’t expect political harmony to be achieved.

Free Markets

Anyway, you can see how a free market benefits the individual, benefits the group, solves various real problems about coordination and separate, fallible actors with limited knowledge, and focuses on people interacting only for mutual benefit. Interacting for mutual benefit – in ways with no conflict of interest – safeguards both against disutility for individuals (people being sacrificed for the alleged greater good) and also against disutility for the group (people sacrificing for the group in ineffective, counter-productive ways).

Are there benefits that can’t be achieved via harmony and interaction only for mutual benefit? Are there inherent conflicts where there must be losers in order to create utopia? I don’t think so, and I don’t know of any refutations of the classical liberal harmony view. And if there are such conflicts, what are they? Name one in addition to making some theoretical arguments. Also, if we’re going to create utopia with our altruism … won’t that benefit every individual? Who wouldn’t want to live in utopia? So that sounds compatible with the harmony theory and individual mutual benefit.

More Thoughts

People can disagree about what gives how much utility to Bob.

People can lie about how much utility they get from stuff.

People can have preferences about things other than their own direct benefit. I can say it’s high utility to have a walkable downtown even if I avoid walking. Someone else can disagree about city design. I can say it’s high utility for me if none of my neighbors are Christian (disclaimer: not my actual opinion). Others can disagree about what the right preferences and values are.

When preferences involve other people or public stuff instead of just your own personal stuff, then people will disagree about what’s good.

What can be done about all this? A lot can be solved by: whatever you think is high utility, pay for it. As a first approximation, whoever is willing to pay more is the person who would get the most utility from getting the thing or getting their way on the issue.

Paying social credit points, aka money, for things you value shows you actually value them that much. It prevents fraud and it enables comparison between people’s preferences. If I say “I strongly care” and you say “I care a lot”, then who knows who cares more. Instead, we can bid money/social credit to see who will bid higher.

People often have to estimate how much utility they would get from a good or service, before they have it. These estimates are often inaccurate. Sometimes they’re wildly inaccurate. Often, they’re systematically biased. How can we make the social system resilient to mistakes?

One way is to disincentivize mistakes instead of incentivizing them. Consider a simple, naive system, where people tend to be given more of whatever they value. The higher they value it, the more of it they get. Whoever likes sushi the most will be allocated the most sushi. Whoever likes gold bars the most will be allocated the most gold bars. Whoever is the best at really liking stuff, and getting pleasure, wellbeing or whatever other kind of utility from it, gets the most stuff. There is an incentive here to highly value lots of stuff, even by mistake. When in doubt, just decide that you value it a lot – maybe you’ll like it and there’s no downside to you of making a high bid in terms of how much utility you say it gives you. Your utility estimates are like a bank account with unlimited funds, so you can spend lavishly.

To fix this, we need to disincentivize mistakes. If you overbid for something – if you say it has higher utility for you than it actually does – that should have some kind of downside for you, such as a reduced ability to place high bids in the future.

How can we accomplish this? A simple model: everyone is assigned 1,000,000 utility points at birth. When you want a good or service, you bid utility points (fractions are fine). You can’t bid more than you have. If your bid is accepted, you transfer those utility points to the previous owner or the service provider, and you get the good or service. Now you have fewer utility points to bid in the future. If you are biased and systematically overbid, you’ll run out of points and you’ll get less stuff for your points than you could have.
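A minimal sketch of that model (my own toy code, with made-up names, not a claim about how a real system would work): each member starts with 1,000,000 points, a bid is only accepted if the bidder can cover it, and accepted bids move points to the provider.

```python
# Toy model of the utility-points idea: a fixed starting balance, and bids that
# transfer points from the buyer to the previous owner or service provider.

class Member:
    def __init__(self, name, points=1_000_000):
        self.name = name
        self.points = points

def transfer(buyer, seller, bid):
    """Accept the bid only if the buyer can cover it; then move the points."""
    if bid <= 0 or bid > buyer.points:
        return False                   # you can't bid more points than you have
    buyer.points -= bid
    seller.points += bid
    return True

alice, bob = Member("Alice"), Member("Bob")
transfer(alice, bob, 6)                # Alice buys Bob's old pants for 6 points
print(alice.points, bob.points)        # 999994 1000006
```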

If you’re low on utility points, you can provide goods or services to others to get more. There is an incentive to provide whatever good or services would provide the most utility to others, especially ones that you can provide efficiently or cheaply. Cost/benefit and specialization matter.

There are many ways we could make a more complex system. Do you have to plan way ahead? Maybe people should get 1,000 more utility points every month so they always have a guaranteed minimum income. Maybe inheritance or gifting should be allowed – those have upsides and downsides. If inheritance and gifting are both banned, then there’s an incentive to spend all your utility points before you die – even for little benefit – or else they’re wasted. And there’s less incentive to earn more utility points if you have enough, but would like to get more to help your children or favorite charity, but you can’t do gifting or inheritance. There’d also be people who pay 10,000 points for a marble to circumvent the no gifting rule. Or I might try to hire a tutor to teach my son, and pay him with my utility points rather than my son having to spend his own points.

Anyway, to a reasonable approximation, this is the system we already have, and utility points are called dollars. Dollars, in concept, are a system of social credit that track how much utility you’ve provided to others minus how much you’ve received from others. They keep score so that some people don’t hog a ton of utility.

There are many ways that, in real life, our current social system differs from this ideal. In general, those differences are not aspects of capitalist economic theory nor of dollars. They are deviations from the free market which let people grow rich by government subsidies, fraud, biased lawmaking, violence, and various other problems.

Note: I don’t think a perfect free market would automatically bring with it utopia. I just think it’s a system with some positive features and which is compatible with rationality. It doesn’t actively prevent or suppress people from having good, rational lives and doing problem solving and making progress. Allowing problem solving and helping with some problems (like coordination between people, keeping track of social credit, and allocating goods and services) is a great contribution from an economic system. Many other specific solutions are still needed. I don’t like the people who view capitalism as a panacea instead of a minimal framework and enabler. I also don’t think any of the alternative proposals, besides a free market, are any good.


Elliot Temple | Permalink | Messages (0)

Conflicts of Interest, Poverty and Rationality

I quit the Effective Altruism forum due to a new rule requiring posts and comments be basically put in the public domain without copyright. I had a bunch of draft posts, so I’m posting some of them here with light editing.


Almost everyone believes in conflicts of interest without serious consideration or analysis. It’s not a reasoned opinion based on studying the literature on both sides. They’re almost all ignorant of classical liberal reasoning and could not summarize the other side’s perspective. They also mostly haven’t read e.g. Marx or Keynes. I literally can’t find anyone who has ever attempted to give a rebuttal to Hazlitt’s criticism of Keynes. And I’ve never found an article like “Why Mises, Rand and classical liberalism are wrong and there actually are inherent conflicts of interest”.

(Searching again just now, the closest-to-relevant thing I found was an article attacking Rand re conflicts of interest. Its argument is basically that she’s a naive idiot who is contradicting classical liberalism by saying that whenever there is a conflict, someone is evil/irrational. It shows no awareness that the “no conflicts of interest” view is a classical liberal theory which Rand didn’t invent. It’s an anti-Rand article that claims, without details, that classical liberalism is on its side. It’s a pretty straightforward implication of the liberal harmony view that if there appears to be a conflict of interest or disharmony, someone is making a mistake that could and should be fixed, and that fixing the mistake enough to avoid conflict is possible (in practice now, not just in theory) if no one is being evil, irrational, self-destructive, etc.)

There are some standard concerns about liberalism (which are already addressed in the literature) like: John would get more value from my ball than I would. So there’s a conflict of interest: I want to keep my ball, and John wants to have it.

Even if John would get less value from my ball, there may be a conflict of interest: John would like to have my ball, and I’d like to keep it.

John’s interest in taking my ball, even though it provides more value to me than him, is widely seen as illegitimate. The only principle it seems to follow is “I want the most benefit for me”, which isn’t advocated much, though it’s often said to be human nature and said that people will inevitably follow it.

Wanting to allocate resources where they’ll do the most good – provide the most benefit to the most people – is a reasonable, plausible principle. It has been advocated as a good, rational principle. There are intellectual arguments for it.

EA seems to believe in that principle – allocate resources where they’ll do the most good. But EA also tries not to be too aggressive about it and just wants people to voluntarily reallocate some resources to do more good compared to the status quo. EA doesn’t demand a total reallocation of all resources in the optimal way because that’s unpopular and perhaps unwise (e.g. there are downsides to attempting revolutionary changes to society (especially ones that many people will not voluntarily consent to) rather than incremental, voluntary changes, such as the risk of making costly mistakes while making massive changes).

But EA does ask for people to voluntarily make some sacrifices. That’s what altruism is. EA wants people to give up some benefit for themselves to provide larger benefits for others. E.g. give up some money that has diminishing returns for you, and donate it to help poor people who get more utility per dollar than you do. Or donate to a longtermist cause to help save the world, thus benefitting everyone, even though most people aren’t paying their fair share. In some sense, John is buying some extra beer while you’re donating to pay not only your own share but also John’s share of AGI alignment research. You’re making a sacrifice for the greater good while John isn’t.

This narrative, in terms of sacrifices, is problematic. It isn’t seeking win/win outcomes, mutual benefit or social harmony. It implicitly accepts a political philosophy involving conflicts of interest, and it further asks people to sacrifice their interests. By saying that morality and your interests contradict each other, it creates intellectual confusion and guilt.

Liberal Harmony

Little consideration has been given to the classical liberal harmony of interests view, which says no sacrifices are needed. You can do good without sacrificing your own interests, so it’s all upside with no downside.

How?

A fairly straightforward answer is: if John wants my ball and values it more than I do, he can pay me for it. He can offer a price that is mutually beneficial. If it’s worth $10 to me, and $20 to John, then he can offer me $15 for it and we both get $5 of benefit. On the other hand, if I give it to John for free, then John gets $20 of benefit and I get -$10 of benefit (that’s negative benefit).

If the goal is to maximize total utility, John needs to have that ball. Transferring the ball to John raises total utility. However, the goal of maximizing total utility is indifferent to whether John pays for it. As a first approximation, transferring dollars has no effect on total utility because everyone values dollars equally. That isn’t really true but just assume it for now. I could give John the ball and $100, or the ball and $500, and the effect on total utility would be the same. I lose an extra $100 or $500 worth of utility, and John gains it, which has no effect on total utility. Similarly, John could pay me $500 for the ball and that would increase total utility just as much (by $10) as if I gave him the ball for free.

Since dollar transfers are utility-neutral, they can be used to get mutual benefit and avoid sacrifices. Whenever some physical object is given to a new owner in order to increase utility, some dollars can be transferred in the other direction so that both the old and new owners come out ahead.

There is no need, from the standpoint of total utility, to have any sacrifices.
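To make that arithmetic concrete, here’s a minimal sketch (my own illustration, reusing the ball example’s numbers and the approximation that everyone values a dollar equally):

```python
# Toy model of the ball example above. Utility is measured in dollars and,
# per the first approximation, everyone values a dollar equally, so dollar
# transfers by themselves don't change total utility.

BALL_VALUE_TO_ME = 10
BALL_VALUE_TO_JOHN = 20

def total_utility(my_gain, johns_gain):
    return my_gain + johns_gain

# Option 1: I give John the ball for free.
free = total_utility(my_gain=-BALL_VALUE_TO_ME, johns_gain=BALL_VALUE_TO_JOHN)

# Option 2: John pays me $15 for the ball.
price = 15
sale = total_utility(my_gain=price - BALL_VALUE_TO_ME,
                     johns_gain=BALL_VALUE_TO_JOHN - price)

print(free, sale)                    # 10 10 -- same rise in total utility either way
print(price - BALL_VALUE_TO_ME,      # 5
      BALL_VALUE_TO_JOHN - price)    # 5 -- with the payment, both parties come out ahead
```

Both options raise total utility by the same $10; the payment just splits the gain so that neither party sacrifices.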

And these utility-increasing transfers can be accomplished, to the extent people know they exist, by free trade. Free trade already maximizes total utility, conditional on people finding opportunities and the transaction costs being lower than the available gains. People have limited knowledge, they’re fallible, and trade takes effort, so lots of small opportunities get missed that an omniscient, omnipotent God could capture. If we think of this from the perspective of a central planner or philosopher king with unlimited knowledge who can do anything effortlessly, there’d be a lot of extra opportunities compared to the real situation where people have limited knowledge of who has what, how much utility they’d get from what, etc. This is an important matter, but it isn’t very relevant to the conflicts of interest issue. It basically just explains that some missed opportunities are OK and we shouldn’t expect perfection.

There is a second issue, besides John valuing my physical object more than I do. What if John would value my services more than the disutility of me performing those services? I could clean his bathroom for him, and he’d be really happy. It has more utility for him than I’d lose. So if I clean his bathroom, total utility goes up. Again, the solution is payment. John can give me dollars so that we both benefit, rather than me cleaning his bathroom for free. The goal of raising total utility has no objection to John paying me, and the goal of “no one sacrifices” or “mutual benefit” says it’s better if John pays me.

Valuing Dollars Differently

And there’s a third issue. What if the value of a dollar is different for two people? For a simple approximation, we’ll divide everyone up into three classes: rich, middle class and poor. As long as John and I are in the same class, we value a dollar equally, and my analysis above works. And if John is in a higher class than me, then him paying me for my goods or services will work fine. Possibly he should pay me extra. The potential problems for the earlier analysis come if John is in a lower class than me.

If I’m middle class and John is poor, then dollars are more important to him than to me. So if he gives me $10, that lowers total utility. We’ll treat middle class as the default, so that $10 has $10 of value for me, but for John it has $15 of value. Total utility goes down by $5. Money transfers between economic classes aren’t utility-neutral.

Also, if I simply give John $10, for nothing in return, that’s utility-positive. It increases total utility by $5. I could keep giving John money until we are in the same economic class, or until we have the same amount of money, or until we have similar amounts of money – and total utility would keep going up the whole time. (That’s according to this simple model.)
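Here’s the same kind of sketch for the cross-class case (illustrative numbers only, matching the $10-is-worth-$15 approximation above):

```python
# Simple model from above: a dollar is worth 1.0 units of utility to the
# middle class and 1.5 to the poor. Transfers between classes therefore
# aren't utility-neutral.

UTILITY_PER_DOLLAR = {"middle": 1.0, "poor": 1.5}

def utility_change(amount, from_class, to_class):
    """Change in total utility when `amount` dollars move from one class to another."""
    return amount * (UTILITY_PER_DOLLAR[to_class] - UTILITY_PER_DOLLAR[from_class])

print(utility_change(10, "poor", "middle"))  # -5.0: poor John pays me $10
print(utility_change(10, "middle", "poor"))  # +5.0: I give John $10 for nothing
```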

So should money be divided equally or approximately equally? Would that raise total utility and make a better society?

There are some concerns, e.g. that some people spend money more wastefully than others. Some people spend money on tools that increase the productivity of labor – they forego immediate consumption to invest in the future. Others spend it on alcohol and other luxury consumption. If more money is in the hands of investors rather than consumers, society will be better off after a few years. Similarly, it lowers utility to allocate seed corn to people who’d eat it instead of planting it.

Another concern is that if you equal out the wealth everyone has, it will soon become unequal again as some people consume more than others.

Another concern is incentives. The more you use up, the more you’ll be given by people trying to increase total utility? And the more you save, the more you’ll give away to others? If saving/investing benefits others not yourself, people will do it less. If people do it less, total utility will go down.

One potential solution is loans. If someone temporarily has less money, they can be loaned money. They can then use extra, loaned dollars when they’re low on money, thus getting good utility-per-dollar. Later when they’re middle class again, they can pay the loan back. Moving spending to the time period when they’re poor, and moving saving (loan payback instead of consumption) to the time period when they’re middle class, raises overall utility.
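A rough sketch of the loan idea under that same toy model (hypothetical income numbers, ignoring interest on the loan for simplicity):

```python
# Same toy model: dollars spent while poor yield 1.5 units of utility each,
# dollars spent while middle class yield 1.0 each.

def utility(spend_while_poor, spend_while_middle):
    return 1.5 * spend_while_poor + 1.0 * spend_while_middle

income_while_poor = 0
income_while_middle = 200

no_loan = utility(income_while_poor, income_while_middle)

loan = 100  # borrow $100 while poor, repay it while middle class
with_loan = utility(income_while_poor + loan, income_while_middle - loan)

print(no_loan, with_loan)  # 200.0 vs 250.0: shifting spending to the poor period raises utility
```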

Poverty

But what if being poor isn’t temporary? Then I’d want to consider what is the cause of persistent poverty.

If the cause is buying lots of luxuries, then I don’t think cash transfers to that person are a good idea. Among other things, it won’t raise the total utility of society to increase consumption of luxuries instead of capital accumulation, so enabling them to buy even more luxuries isn’t actually good for total utility.

If the cause is being wasteful with money, again giving the person more money won’t raise total utility.

If the cause is bad government policies, then perhaps fixing the government policies would be more efficient than transferring money. Giving money could be seen as subsidizing the bad government policies. It’d be a cost-ineffective way to reduce the harm of the policies, thus reducing the incentive to change the policies, thus making the harmful policies last longer.

If the person is poor because of violence and lack of secure property, then they need rule of law, not cash. If you give them cash, it’ll just get taken.

Can things like rule of law and good governance be provided with mutual benefit? Yes. They increase total wealth so much that everyone could come out ahead. Or put another way, it’s pretty easy to imagine good police and courts, which do a good job, which I’d be happy to voluntarily pay for, just like I currently voluntarily subscribe to various services like Netflix and renting web servers.

Wealth Equality

Would it still be important to even out wealth in that kind of better world where there are no external forces keeping people persistently poor? In general, I don’t think so. If there are no more poor people, that seems good enough. I don’t think the marginal utility of another dollar changes that much once you’re comfortable. People with plenty don’t need to be jealous of people with a bit more. I understand poor people complaining, but people who are upper middle class by today’s standards are fine and don’t need to be mad if some other people have more.

Look at it like this. If I have 10 million dollars and you have 20 million dollars, would it offer any kind of significant increase in total utility to even that out to 15 million each? Nah. We both can afford plenty of stuff – basically anything we want which is mass produced. The marginal differences come in two main forms:

1: Customized goods and services. E.g. you could hire more cooks, cleaners, personal drivers, private jet flights, etc.
2: Control over the economy, e.g. with your extra $10 million you could gain ownership of more businesses than I own.

I don’t care much about the allocation of customized goods and services, except to suggest that total utility may go up with somewhat fewer of them. Mass production and scalable services are way more efficient.

And I see no particular reason that total utility will go up if we even out the amount of control over businesses that everyone has. Why should wealth be transferred to me so that I can own a business and make a bunch of decisions? Maybe I’ll be a terrible owner. Who knows. How businesses are owned and controlled is an important issue, but I basically don’t think that evening out ownership is the answer that will maximize total utility. Put another way, the diminishing returns on extra dollars are so small in this dollar range that personal preferences probably matter more. In other words, how much I like running businesses is a bigger factor than my net worth only being $10 million rather than $15 million. How good I am at running a business is also really important since it’ll affect how much utility the business creates or destroys. If you want to optimize utility more, you’ll have to start allocating specific things to the right people, which is hard, rather than simply trying to give more wealth to whoever has less. Giving more to whoever has less works pretty well at lower amounts but not once everyone is well off.

What about the ultra rich who can waste $44 billion on a weed joke? Should anyone be a trillionaire? I’m not sure it’d matter in a better world where all that wealth was earned by providing real value to others. People that rich usually don’t spend it all anyway. Usually, they barely spend any of it, unless you count giving it to charity as spending. To the extent they keep it, they mostly invest it (in other words, basically, loan it out and let others use it). Having a ton of wealth safeguarded by people who will invest rather than consume it is beneficial for everyone. But mostly I don’t care much and certainly don’t want to defend current billionaires, many of whom are awful and don’t deserve their money, and some of whom do a ton of harm by e.g. buying and then destroying a large business.

My basic claim here is that if everyone were well off, wealth disparities wouldn’t matter so much – we’d all be able to buy plenty of mass produced and scalable stuff, and so the benefit of a marginal dollar would be reasonably similar between people. It’s the existence of poverty that makes a dollar having different utility for different people a big issue.

The Causes of Poverty

If you give cash to poor people, you aren’t solving the causes of poverty. You’re just reducing some of the harm done (hopefully – you could potentially be fueling a drug addiction, or getting a thug to come by and steal it, or reducing popular resentment of a bad law and thus keeping it in place longer). It’s a superficial (band-aid) solution, not a root cause solution. If people want to do some of that voluntarily, I don’t mind. But I don’t place primary importance on that stuff. I’m more interested in how to fix the system and whether that can be done with mutual benefit.

From a conflicts of interest perspective, it’s certainly in my interest that human wealth goes up enough for everyone to have a lot. That world sounds way better for me to live in. I think the vast majority will agree. So there’s no large conflict of interest here. Maybe a few current elites would prefer to be a big fish in a smaller pond rather than live in that better world. But I think they’re wrong and that it isn’t in their interest. Ideas like that will literally get them killed: anti-aging research would be going so much better if humanity was so much richer that there were no poor people.

What about people who are really stupid, or disabled, or chronically fatigued, or something else so they can’t get a good job even in a much better world? Their families can help them. Or their neighbors, church group, online rationality forum, whatever. Failing that, some charity seems fine to fill in a few gaps here and there – and it won’t be a sacrifice because people will be happy to help and will still have plenty for themselves and nothing in particular they want to buy but have to give up. And with much better automation, people will be able to work much shorter hours and one worker will be able to easily support many people. BTW, we may run into trouble with some dirty or unpleasant jobs that are still needed but aren’t automated: how do we incentivize anyone to do them when paying higher wages won’t attract much interest because everyone already has plenty?

So why don’t we fix the underlying causes of poverty? People disagree about what those are. People try things that don’t work. There are many conflicting plans with many mistakes. But there’s no conflict of interest here, even between people who disagree on the right plan. It’s in both people’s interests to figure out a plan that will actually work and do that.

Trying to help poor people right now is a local optimum that doesn’t invest in the future. As long as the amount of wealth being used on it is relatively small, it doesn’t matter much. It has upsides so I’m pretty indifferent. But we shouldn’t get distracted from the global optimum of actually fixing the root problems.

Conclusion

I have ideas about how to fix the root causes of poverty, but I find people broadly are unwilling to learn about or debate my ideas (some of which are unoriginal and can be read in fairly well known books by other people). So even if I’m right, there’s still no way to make progress. So the deeper root cause of poverty is irrationality, poor debate methods, disinterest in debate, etc. Those things are why no one is working out a good plan or, if anyone does have a good plan, why it isn’t getting attention and acceptance.

Basically, the world is full of social hierarchies instead of truth-seeking, so great ideas and solutions often get ignored without rebuttal, and popular ideas (e.g. variants of Keynesian economics) often go ahead despite known refutations and don’t get refined with tweaks to fix all the known ways they’ll fail.

Fix the rationality problem, and get a few thousand people who are actually trying to be rational instead of following social status, and you could change the world and start fixing other problems like poverty. But EA isn’t that. When you post on EA, you’re often ignored. Attention is allocated by virality, popularity, whatever biased people feel like (which is usually related to status), etc. There’s no organized effort to e.g. point out one error in every proposal and not let any ideas get ignored with no counter-argument (and also not proceed with and spend money implementing any ideas with known refutations). There’s no one who takes responsibility for addressing criticism of EA ideas and there’s no particular mechanism for changing EA ideas when they’re wrong – the suggestion just has to gain popularity and be shared in the right social circles. To change EA, you have to market your ideas, impress people, befriend people, establish rapport with people, and otherwise do standard social climbing. Merely being right and sharing ideas, while doing nothing aimed at influence, wouldn’t work (or at least is unlikely to work and shouldn’t be expected to work). And many other places have these same irrationalities that EA has, which overall makes it really hard to improve the world much.



Animal Welfare Overview

Is animal welfare a key issue that we should work on? If so, what are productive things to do about it?

This article is a fairly high level overview of some issues, which doesn’t attempt to explain e.g. the details of Popperian epistemology.

Human Suffering

Humans suffer and die, today, a lot. Look at what’s going on in Iran, Ukraine, Yemen, North Korea, Venezuela and elsewhere. This massive human suffering should, in general, be our priority before worrying about animals much.

People lived in terrible conditions, and died, building stadiums for the World Cup in Qatar. Here’s a John Oliver video about it. They were lied to, exploited, defrauded, and basically (temporarily) enslaved etc. People sometimes die in football (soccer) riots too. I saw a headline recently that a second journalist died in Qatar for the World Cup. FIFA is a corrupt organization that likes dictators. Many people regard human death as an acceptable price for sports entertainment, and many more don’t care to know the price.

There are garment workers in Los Angeles (USA) working in terrible conditions for illegally low wages. There are problems in other countries too. Rayon manufacturing apparently poisons nearby children enough to damage their intelligence due to workers washing off toxic chemicals in local rivers. (I just read that one article; I haven’t really researched this but it seems plausible and I think many industries do a lot of bad things. There are so many huge problems in human civilization that even reading one article per issue would take a significant amount of time and effort. I don’t have time to do in-depth research on most of the issues. Similarly, I have not done in-depth research on the Qatar World Cup issues.)

India has major problems with orphans. Chinese people live under a tyrannical government. Human trafficking continues today. Drug cartels exist. Millions of people live in prisons. Russia uses forced conscription for its war of aggression in Ukraine.

These large-scale, widespread problems causing human suffering seem more important than animal suffering. Even if you hate how factory farms treat animals, you should probably care more that a lot of humans live in terrible conditions including lacking their freedom in major ways.

Intelligence

Humans have general, universal intelligence. They can do philosophy and science.

Animals don’t. All the knowledge involved in animal behavior comes from genetic evolution. They’re like robots created by their genes and controlled by software written by their genes.

Humans can do evolution of ideas in their minds to create new, non-genetic knowledge. Animals can’t.

Evolution is the only known way of creating knowledge. It involves replication with variation and selection.

Whenever there is an appearance of design (e.g. a wing or a hunting behavior), knowledge is present.

People have been interested in the sources of knowledge for a long time, but it’s a hard problem and there have been few proposals. Proposals include evolution, intelligent design, creationism, induction, deduction and abduction.

If non-evolutionary approaches to knowledge creation actually worked, it would still seem that humans can do them and animals can’t – because there are human scientists and philosophers but no animal scientists or philosophers.

Human learning involves guessing or brainstorming (replication with variation) plus criticism and rejecting refuted ideas (selection). Learning by evolution means learning by error correction, which we do by creating many candidate ideas (like a gene pool) and rejecting ideas that don’t work well (like animals with bad mutations being less likely to have offspring).

Also, since people very commonly get this wrong: Popperian epistemology says we literally learn by evolution. It is not a metaphor or analogy. Evolution literally applies to both genes and memes. It’s the same process (replication with variation and selection). Evolution could also work with other types of replicators. For general knowledge creation, the replicator has to be reasonably complex, interesting, flexible or something (the exact requirements aren’t known).
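As a concrete, deliberately toy illustration of replication with variation and selection, here’s a sketch. The goal string and the criticize function are hypothetical stand-ins for a real problem and real criticism, not anything from Popper:

```python
import random
import string

# Deliberately toy illustration: learning by evolution means replication with
# variation (mutating candidate ideas) and selection (criticism rejects the
# worse candidates). GOAL and criticize() are hypothetical stand-ins for a
# real problem and real criticism.

GOAL = "some useful idea"

def criticize(idea):
    """Count criticisms: characters that don't match the goal. Lower is better."""
    return sum(1 for a, b in zip(idea, GOAL) if a != b)

def vary(idea):
    """Replicate an idea with one small random variation."""
    chars = list(idea)
    chars[random.randrange(len(chars))] = random.choice(string.ascii_lowercase + " ")
    return "".join(chars)

def evolve():
    pool = ["x" * len(GOAL) for _ in range(20)]  # initial wild guesses
    while criticize(pool[0]) > 0:
        children = [vary(random.choice(pool)) for _ in range(100)]  # replication + variation
        pool = sorted(pool + children, key=criticize)[:20]          # selection by criticism
    return pool[0]

print(evolve())  # eventually prints "some useful idea"
```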

Types of Algorithms

All living creatures with brains have Turing-complete computers for brains. A squirrel is a reasonable example animal. Let’s not worry about bacteria or worms. (Earthworms apparently have some sort of brain with only around 300 neurons. I haven’t researched it.)

Humans have more neurons, but the key difference between humans and squirrels is the software our brains run.

We can look at software algorithms in three big categories.

  1. Fixed, innate algorithm
  2. “Learning” algorithms which read and write data in long-term memory
  3. Knowledge-creation algorithm (evolution, AGI)

Fixed algorithms are inborn. The knowledge comes from genes. They’re complete and functional with no practice or experience.

If you keep a squirrel in a lab and never let it interact with dirt, and it still does behaviors that seem designed for burying nuts in dirt, that indicates a fixed, innate algorithm. These algorithms can lead to nonsensical behavior when taken out of context.

There are butterflies which do multi-generation migrations. How do they know where to go? It’s in their genes.

Why do animals “play”? To “learn” hunting, fighting, movement, etc. During play, they try out different motions and record data about the results. Later, their behavioral algorithms read that data. Their behavior depends partly on what that data says, not just on inborn, genetic information.

Many animals record data for navigation purposes. They look around, then can find their way back to the same spot (long-term memory). They can also look around, then avoid walking into obstacles (short-term memory).

Chess-playing software can use fixed, innate algorithms. A programmer can specify rules which the software follows.

Chess-playing software can also involve “learning”. Some software plays many practice games against itself, records a bunch of data, and uses that data in order to make better moves in the future. The chess-playing algorithm takes into account data that was created after birth (after the programmer was done).
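As a toy contrast (not real chess software, just an illustration of the two categories), the sketch below has one bot that plays from fixed, inborn rules and another that records the opponent’s past moves in long-term memory and feeds that data into a built-in rule. The game, opponent and numbers are all made up for the example:

```python
import random
from collections import Counter

# Toy contrast (not real chess software): a fixed, innate algorithm vs a
# "learning" algorithm that writes data to long-term memory and uses it later.
# Neither one creates knowledge; the second just feeds stored data into an
# inborn rule.

MOVES = ["rock", "paper", "scissors"]
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}  # key beats value

def fixed_bot(_history):
    # All the knowledge is inborn: always play rock.
    return "rock"

def learning_bot(history):
    # Long-term memory: count the opponent's past moves, then apply a
    # built-in rule (play whatever beats the opponent's most common move).
    if not history:
        return random.choice(MOVES)
    most_common = Counter(history).most_common(1)[0][0]
    return next(m for m in MOVES if BEATS[m] == most_common)

def biased_opponent():
    return random.choices(MOVES, weights=[6, 2, 2])[0]  # mostly plays rock

history = []
wins = {"fixed": 0, "learning": 0}
for _ in range(1000):
    opp = biased_opponent()
    if BEATS[fixed_bot(history)] == opp:
        wins["fixed"] += 1
    if BEATS[learning_bot(history)] == opp:
        wins["learning"] += 1
    history.append(opp)  # data recorded after "birth", used by learning_bot

print(wins)  # the "learning" bot wins more, but only by applying its innate rule to stored data
```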

I put “learning” in scare quotes because the term often refers to knowledge creation (evolution) which is different than an algorithm that writes data to long-term data storage then uses it later. When humans learn at school, it’s not the same thing as e.g. a “reinforcement learning” AI algorithm or what animals do.

People often confuse algorithms involving long-term memory, which use information not available at birth, with knowledge creation. They call both “learning” and “intelligent”.

They can be distinguished in several ways. Is there replication with variation and selection, or not? If you think there’s evolution, can it create a variety of types of knowledge, or is it limited to one tiny niche? If you believe a different epistemology, you might look for the presence of inductive thinking (but Popper and others have refuted induction). There are other tests and methods that can be used to identify new knowledge as opposed to the downstream consequences of existing knowledge created by genetic evolution, by a programmer, or by some other sort of designer.

Knowledge

What is knowledge? It’s information which is adapted to a purpose. When you see the appearance of design, knowledge is present. Understanding the source of that knowledge is often important. Knowledge is one of the more important and powerful things in the universe.

Binary Intelligence or Degrees?

The word “intelligence” is commonly used with two different meanings.

One is a binary distinction. I’m intelligent but a rock or tree isn’t.

The other meaning is a difference in degree or amount of intelligence: Alice is smarter than Joe but dumber than Feynman.

Degrees of intelligence can refer to a variety of different things that we might call logical skill, wisdom, cleverness, math ability, knowledge, being well spoken, scoring well on tests (especially IQ tests, but others too), getting high grades, having a large vocabulary, being good at reading, being good at scientific research or being creative.

There are many different ways to use your intelligence. Some are more effective than others. Using your intelligence effectively is often called being highly intelligent.

Speaking very roughly, many people believe a chimpanzee or dog is kind of like a 50 IQ person – intelligent, but much less intelligent than almost all humans. They think a squirrel passes the binary intelligence distinction to be like a human not a rock, but just has less intelligence. However, they usually don’t think a self-driving car, chat bot, chess software or video game enemy is intelligent at all – that’s just an algorithm which has a lot of advantages compared to a rock but isn’t intelligent. Some other people do think that present-day “AI” software is intelligent, just with a low degree of intelligence.

My position is that squirrels are like self-driving cars: they aren’t intelligent but the software algorithm can do things that a rock can’t. A well designed software algorithm can mimic intelligence without actually having it.

The reason algorithms are cleverer than rocks is they have knowledge in them. Creating knowledge is the key thing intelligence does that makes it seem intelligent. An algorithm uses built-in knowledge, while intelligences can create their own knowledge.

Basically, anything with knowledge seems either intelligent or intelligently-designed to us (speaking loosely and counting evolution as an intelligent designer). People tend to assume animals are intelligent rather than intelligently-designed because they don’t understand evolution or computation very well, and because the animals seem to act autonomously, and because of the similarities between humans and many animals.

Where does knowledge come from? Evolution. To get knowledge, algorithms need to either evolve or have an intelligent designer. An intelligent designer, such as a human software developer, creates the knowledge by evolving ideas about the algorithm within his brain. So the knowledge always comes from evolution. Evolution is the only known, unrefuted answer to how new knowledge can be created.

(General intelligence may be an “algorithm” in the same kind of sense that e.g. “it’s all just math”. If you want to call it an algorithm, then whenever I write “algorithm” you can read it as e.g. “algorithm other than general intelligence”.)

Universality

There are philosophical reasons to believe that humans are universal knowledge creators – meaning they can create any knowledge that any knowledge creator can create. The Popperian David Deutsch has written about this.

This parallels how the computer I’m typing on can compute anything that any computer can compute. It’s Turing-complete, a.k.a. universal. (Except quantum computers have extra abilities, so actually my computer is a universal classical computer.)

This implies a fundamental similarity between everything intelligent (they all have the same repertoire of things they can learn). There is no big, bizarre, interesting mind design space like many AGI researchers believe. Instead, there are universally intelligent minds and not much else of note, just like there are universal computers and little else of interest. If you believe in mind design space like Eliezer Yudkowsky does, it’s easy to imagine animals are in it somewhere. But if the only options for intelligence are basically universality or nothing, then animals have to be either like humans or else unintelligent – there’s nowhere else in mind design space for them to be. And if the only two options are that animals are intelligent in the same way as humans (universal intelligence) or aren’t intelligent at all, then most people will agree that animals aren’t intelligent.

This also has a lot of relevance to concerns about super-powerful, super-intelligent AGIs turning us all into paperclips. There’s actually nothing in mind design space that’s better than human intelligence, because human intelligence is already universal. Just like how there’s nothing in classical computer design space that’s better than a universal computer or Turing machine.

A “general intelligence” is a universal intelligence. A non-general “intelligence” is basically not an intelligence, like a non-universal or non-Turing-complete “computer” basically isn’t a computer.

Pain

Squirrels have nerves, “pain” receptors, and behavioral changes when “feeling pain”.

Robots can have sensors which identify damage and software which outputs different behaviors when the robot is damaged.

Information about damage travels to a squirrel’s brain where some behavior algorithms use it as input. It affects behavior. But that doesn’t mean the squirrel “feels pain” any more than the robot does.

Similarly, information travels from a squirrel’s eyes to its brain where behavioral algorithms take it into account. A squirrel moves around differently depending on what it sees.

Unconscious robots can do that too. Self-driving car prototypes today use cameras to send visual information to a computer which makes the car behave differently based on what the camera sees.

Having sensors which transmit information to the brain (CPU), where it is used by behavior-control software algorithms, doesn’t differentiate animals from present-day robots.
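Here’s a minimal sketch of the kind of architecture I’m describing (purely illustrative, not any real robotics API): a sensor reading is passed to a behavior-control algorithm, which changes outputs without anything resembling interpretation or suffering:

```python
# Purely illustrative: a damage ("pain") signal is just an input that a
# behavior-control algorithm maps to different outputs. Nothing here
# interprets the signal as good or bad, or suffers.

def read_damage_sensor(raw_value):
    """Normalize a raw sensor reading into a 0-1 damage level."""
    return max(0.0, min(1.0, raw_value))

def choose_behavior(damage_level, obstacle_ahead):
    """Fixed behavior-control rules that take sensor data as input."""
    if damage_level > 0.7:
        return "retreat and power down non-essential systems"
    if obstacle_ahead:
        return "steer around the obstacle"
    return "continue current task"

print(choose_behavior(read_damage_sensor(0.9), obstacle_ahead=False))
print(choose_behavior(read_damage_sensor(0.1), obstacle_ahead=True))
```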

Suffering

Humans interpret information. We can form opinions about what is good or bad. We have preferences, values, likes and dislikes.

Sometimes humans like pain. Pain does not automatically equate to suffering. Whether we suffer due to pain, or due to anything else, depends on our interpretation, values, preferences, etc.

Sometimes humans dislike information that isn’t pain. Although many people like it, the taste of pizza can result in suffering for someone.

Pain and suffering are significantly different concepts.

Pain is merely a type of information sent from sensors to the CPU. This is true for humans and animals both. And it’d be true for robots too if anyone called their self-damage related sensors “pain” sensors.

It’s suffering that is important and bad, not pain. Actually, being born without the ability to feel pain is dangerous. Pain provides useful information. Being able to feel pain is a feature, not a bug, glitch or handicap.

If you could disable your ability to feel pain temporarily, that’d be nice sometimes if used wisely, but permanently disabling it would be a bad idea. Similarly, being able to temporarily disable your senses (smell, touch, taste, sight or hearing) is useful, but permanently disabling them is a bad idea. We invent things like ear and nose plugs to temporarily disable senses, and we have built-in eyelids for temporarily disabling our sight (and, probably more importantly, for eye protection).

Suffering involves wanting something and getting something else. Reality violates what you want. E.g. you feel pain that you don’t want to feel. Or you taste a food that you don’t want to taste. Or your spouse dies when you don’t want them to. (People, occasionally, do want their spouse to die – as always, interpretation determines whether one suffers or not).

Karl Popper emphasized that all observation is theory-laden, meaning that all our scientific evidence has to be interpreted and if we get the interpretation wrong then our scientific conclusions will be wrong. Science doesn’t operate on raw data.

Suffering involves something happening and you interpreting it negatively. That’s another way to look at wanting something (that you would interpret positively or neutrally) but getting something else (that you interpret negatively).

Animals can’t interpret like this. They can’t create opinions of what is good and bad. This kind of thinking involves knowledge creation.

Animals do not form preferences. They don’t do abstract thinking to decide what to value, compare different potential values, and decide what they like. Just like self-driving cars have no interpretation of crashing and don’t feel bad when they crash. They don’t want to avoid crashing; their programmers want them to avoid crashing. Evolution doesn’t want things like people do, but it does design animals to (mostly) avoid dying. That involves various more specific designs, like behavior algorithms designed to prevent an animal from starving to death. (Those algorithms are pretty effective but not perfect.)

Genetic evolution is the programmer and designer for animals. Does genetic evolution have values or preferences? No. It has no mind.

Genetic evolution also created humans. What’s different is it gave them the ability to do their own evolution of ideas, thus creating evolved knowledge that wasn’t in their genes, including knowledge about interpretations, preferences, opinions and values.

Animal Appearances

People often assume animals have certain mental states due to superficial appearance. They see facial expressions on animals and think those animals have corresponding emotions, like a human would. They see animals “play” and think it’s the same thing as human play. They see an animal “whimper in pain” and think it’s the same as a human doing that.

People often think their cats or dogs have complex personalities, like an adult human. They also commonly think that about their infants. And they also sometimes think that about chatbots. Many people are fooled pretty easily.

It’s really easy to project your experiences and values onto other entities. But there’s no evidence that animals do anything other than follow their genetic code, which includes sometimes doing genetically-programmed information-gathering behaviors, then writing that information into long-term memory, then using that information in behavior algorithms later in exactly the way the genes say to. (People also get confused by indirection. Genes don’t directly tell animals what to do like slave-drivers. They’re more like blueprints for the physical structure and built-in software of animals.)

Uncertainty

Should we treat animals partially or entirely like humans just in case they can suffer?

Let’s first consider a related question. Should we treat trees and 3-week-old human embryos partially or entirely like humans just in case they can suffer? I say no. If you agree with me, perhaps that will help answer the question about animals.

In short, we have to live by our best understanding of reality. You’re welcome to be unsure, but I have studied stuff, debated and reached conclusions. I have conclusions both about my personal debates and also the state of the debate involving all expert literature.

Also, we’ve been eating animals for thousands of years. It’s an old part of human life, not a risky new invention. Similarly, the mainstream view of human intellectuals, for thousands of years, has been to view animals as incapable of reason or irrational, and as very different than humans. (You can reason with other humans and form e.g. peace treaties or social contracts. You can resolve conflicts with persuasion. You can’t do that with animals.)

But factory farms are not a traditional part of human life. If you just hate factory farms but don’t mind people eating wild animals or raising animals on non-factory farms, then … I don’t care that much. I don’t like factory farms either because I think they harm human health (but so do a lot of other things, including vegetable oil and bad political ideas, so I don’t view factory farms as an especially high priority – the world has a ton of huge problems). I’m a philosopher who mostly cares about the in-principle issue of whether or not animals suffer, which is intellectually interesting and related to epistemology. It’s also relevant to issues like whether or not we should urgently try to push everyone to be vegan, which I think would be a harmful mistake.

Activism

Briefly, most activism related to animal welfare is tribalist, politicized fighting related to local optima. It’s inadequately intellectual, inadequately interested in research and debate about the nature of animals or intelligence, and has inadequate big picture planning about the current world situation and what plan would be very effective and high leverage for improving things. There’s inadequate interest in persuading other humans and reaching agreement and harmony, rather than trying to impose one’s values (like treating animals in particular ways) on others.

Before trying to make big changes, you need e.g. a cause-and-effect diagram about how society works and what all the relevant issues are. And you need to understand the global and local optima well. See Eli Goldratt for more information on project planning.

Also, as is common with causes, activists tend to be biased about their issue. Many people who care about the (alleged) suffering of animals do not care much about the suffering of human children, and vice versa. And many advocates for animals or children don’t care much about the problems facing elderly people in old folks homes, and vice versa. It’s bad to have biased pressure groups competing for attention. That situation makes the world worse. We need truth seeking and reasonable organization, not competitions for attention and popularity. A propaganda and popularity contest isn’t a rational, truth seeking way to organize human effort to make things better.



Don’t Legalize Animal Abuse

This article discusses why mistreating animals is bad even if they’re incapable of suffering.

I don’t think animals can suffer, but I’m not an activist about it. I’m not trying to change how the world treats animals. I’m not asking for different laws. I don’t emphasize this issue. I don’t generally bring it up. I care more about epistemology. Ideas about how minds work are an application of some of my philosophy.

On the whole, I think people should treat animals better, not worse. People should also treat keyboards, phones, buildings and their own bodies better. It’s (usually) bad to smash keyboards, throw phones, cause and/or ignore maintenance problems in buildings, or ingest harmful substances like caffeine, alcohol or smoke.

Pets

Legalizing animal abuse would have a variety of negative consequences if nothing else about the world changed. People would do it more because legalizing it would make it more socially legitimate. I don’t see much upside. Maybe more freedom to do scientific testing on animals would be good but, if so, that could be accomplished with a more targeted change that only applies to science – and lab animals should be treated well in ways compatible with the experiment to avoid introducing extra variables like physiological stress.

On the other hand, legalizing animal abuse would actually get human children killed: abused dogs are more likely to bite both humans and dogs.

When spouses fight, one can vandalize the other’s car because it’s shared property. Vandalizing a spouse’s dog would be worse. A dog isn’t replaceable like a car. If vandalizing a dog was treated like regular property damage, the current legal system wouldn’t protect against it well enough.

Why aren’t dogs replaceable? Because they have long-term memory which can’t be backed up and put into a new dog (compare with copying all your data to a new phone). If you’ve had a dog for five years and spent hundreds of hours around it, that’s a huge investment, but society doesn’t see that dog as being worth tens of thousands of dollars. If you were going to get rid of animal abuse laws, you’d have to dramatically raise people’s perception of the monetary value of pets, which is currently way too low.

Dogs, unlike robots we build, cannot have their memory reset. If a dog starts glitching out (e.g. becomes more aggressive) because someone kicks it, you can’t just reinstall and start over, and you wouldn’t want to because the dog has valuable data in it. Restoring a backup from a few days before the abuse would work pretty well but isn’t an option.

You’d be more careful with how you use your Mac or phone if you had no backups of your data and no way to undo changes. You’d be more careful with what software you installed if you couldn’t uninstall it or turn it off. Dogs are like that. And people can screw up your dog’s software by hitting your dog.

People commonly see cars and homes as by far the most valuable things that they own (way ahead of e.g. jewelry, watches and computers for most people). They care about their pets but they don’t put them on that list. They can buy a new dog for a few hundred dollars and they know that (many of them wouldn’t sell their dog for $10,000, but they haven’t all considered that). They don’t want to replace their dog, but many people don’t calculate monetary value correctly. The reason they don’t want to replace their dog is that their current dog has far more value than a new one would. People get confused about it because they can’t sell their dog for anywhere near its value to them. Pricing unique items with no market for them is problematic. Also, if they get a new dog, it will predictably gain value over time.

It’s like how it’s hard to put a price on your diary or journal. If someone burned it, that would be really bad. But how many dollars of damages is that worth? It’s hard to say. A diary has unique, irreplaceable data in it, but there’s no accurate market price because it’s worth far more to the owner than to anyone else.

Similarly, if someone smashes your computer and you lose a bunch of data, you will have a hard time getting appropriate compensation in court today. Being paid the price of a new computer is something the courts understand. And courts will take into account emotional suffering and trauma. They’re worse at taking into account the hassle and time cost of dealing with the whole problem, including going to court. And courts are bad at taking into account the data loss for personal files with no particular commercial value. A dog is like that – it contains a literal computer that literally contains personal data files with no backups. But we also have laws against animal abuse which help protect pet owners, because we recognize in some ways that pets are important. Getting rid of those laws without changing a bunch of other things would make things worse.

Why would you want to abuse a pet anyway? People generally abuse pets because they see the animals as proxies for humans (it can bleed and yelp) or they want to harm the pet’s owner. So that’s really bad. They usually aren’t hitting a dog in the same way they punch a hole in their wall. They know the dog matters more than the wall or their keyboard.

Note: I am not attempting to give a complete list of reasons that animal abuse is bad even if animals are incapable of suffering. I’m just making a few points. There are other issues too.

Factory Farms

Factory farms abuse animals for different reasons. They’re trying to make money. They’re mostly callous not malicious. Let’s consider some downsides of factory farms that apply even if animals cannot suffer.

When animals are sick or have stress hormones, they’re worse for people to eat. This is a real issue involving e.g. cortisol. Tuna fishing reality shows talk about it affecting the quality and therefore price of their catch, and the fishermen go out of their way to reduce fish’s physiological stress.

When animals eat something – like corn or soy – some of it may stay in their body and affect humans who later eat that animal. It can e.g. change the fatty acid profile of the meat.

I don’t like to eat at restaurants with dirty kitchens. I don’t want to eat from dirty farms either. Some people are poor enough for that risk to potentially be worth it, but many aren’t. And in regions where people are poorer, labor is generally cheaper too, so keeping things clean and well-maintained is cheaper there, so reasonably clean kitchens and factories broadly make sense everywhere. (You run into problems in e.g. poorer areas of the U.S. that are stuck with costly laws designed for richer areas of the U.S. They can have a bunch of labor-cost-increasing laws without enough wealth to reduce the impact of the laws.)

I don’t want cars, clothes, books, computers or furniture from dirty factories. Factories should generally be kept neat, tidy and clean even if they make machine tools, let alone consumer products, let alone food. Reasonable standards for cleanliness differ by industry and practicality, but some factory farms are like poorly-kept factories. And they produce food, which is one of the products where hygiene matters most.

On a related note, E. coli is a problem mainly because producers mix together large amounts of e.g. lettuce or beef. One infected head of lettuce can contaminate hundreds of other heads due to mixing (for pre-made salad mixes rather than whole heads of lettuce). Producers generally don’t figure out which farm had the problem and make it clean up its act. And the government spends money to trace E. coli problems in order to protect public health. This subsidizes having dirtier farms and then mixing lettuce together in processing. The money the government spends on public health, along with the lack of accountability, helps enable farms to get away with being dirtier.

These are just a sample of the problems with factory farms that are separate issues from animal suffering. I’d suggest that animal activists should emphasize benefits for humans more. Explain to people how changes can be good for them, instead of being a sacrifice for the sake of animals. And actually focus reforms on pro-human changes. Even if animals can suffer, there are lots of changes that could be made which would be better for both humans and animals; reformers should start with those.



Activists Shouldn’t Fight with Anyone

This article is about how all activists, including animal activists, should stop fighting with people in polarizing ways. Instead, they should take a more intellectual approach to planning better strategies and avoiding fights.

Effective Activism

In a world with so many huge problems, there are two basic strategies for reforming things which make sense. Animal activism doesn’t fit either one. It’s a typical example, like many other types of activism, of what not to do (even if your cause is good).

Good Strategy 1: Fix things unopposed.

Work on projects where people aren’t fighting to stop you. Avoid controversy. This can be hard. You might think that helping people eat enough Vitamin A to avoid going blind would be uncontroversial, but if you try to solve the problem with golden rice then you’ll get a lot of opposition (because of GMOs). If it seems to you like a cause should be uncontroversial, but it’s not, you need to recognize and accept that in reality it’s controversial, no matter how dumb that is. Animal activism is controversial, whether or not it should be. So is abortion, global warming, immigration, economics, and anything else that sounds like a live political issue.

A simple rule of thumb is that 80% of people who care, or who will care before you’re done, need to agree with you. 80% majorities are readily available on many issues, but extremely unrealistic in the foreseeable future on many other issues like veganism or ending factory farming.

In the U.S. and every other reasonably democratic country, if you have an 80% majority on an issue, it’s pretty easy to get your way. If you live in a less democratic country, like Iran, you may have to consider a revolution if you have an 80% majority but are being oppressed by violent rulers. A revolution with a 52% majority is a really bad idea. Pushing hard for a controversial cause in the U.S. with a 52% majority is a bad idea too, though not as bad as a revolution. People should prioritize not fighting with other people a lot more than they do.

Put another way: Was there election fraud in the 2020 U.S. Presidential election? Certainly. There is every year. Did the fraud cost Trump the election? Maybe. Did Trump have an 80% majority supporting him? Absolutely not. There is nowhere near enough fraud to beat an 80% majority. If fraud swung the election, it was close anyway, so it doesn’t matter much who won. You need accurate enough elections that 80% majorities basically always win. The ability for the 80% to get their way is really important for enabling reform. But you shouldn’t care very much whether 52% majorities get their way. (For simplicity, my numbers are for popular vote elections with two candidates, which is not actually how U.S. Presidential elections work. Details vary a bit for other types of elections.)

Why an 80% majority instead of 100%? It’s more practical and realistic. Some people are unreasonable. Some people believe in UFOs or flat Earth. The point is to pick a high number that is realistically achievable when you persuade most people. We really do have 80% majorities on tons of issues, like whether women should be allowed to vote (which used to be controversial, but isn’t today). Should murder be illegal? That has well over an 80% majority. Are seatbelts good? Should alcohol be legal? Should we have some international trade instead of fully closing our borders? Should parents keep their own kids instead of having the government raise all the children? These things have over 80% majorities. The point is to get way more than 50% agreement but without the difficulties of trying to approach 100%.

Good Strategy 2: Do something really, really, really important.

Suppose you can’t get a clean, neat, tidy, easy or unopposed victory for your cause. And you’re not willing to go do something else instead. Suppose you’re actually going to fight with a lot of people who oppose you and work against them. Is that ever worth it or appropriate? Rarely, but yes, it could be. Fighting with people is massively overrated but I wouldn’t say to never do it. Even literal wars can be worth it, though they usually aren’t.

If you’re going to fight with people, and do activism for an opposed cause, it better be really really worth it. That means it needs high leverage. It can’t just be one cause because you like that cause. The cause needs to be a meta cause that will help with many future reforms. For example, if Iran has a revolution and changes to a democratic government, that will help them fix many, many issues going forward, such as laws about homosexuality, dancing, or female head coverings. It would be problematic to pick a single issue, like music, and have a huge fight in Iran to get change for just that one thing. If you’re going to have a big fight, you need bigger rewards than one single reform.

If you get some Polish grocery stores to stop selling live fish, that activism is low leverage. Even if it works exactly as intended, and it’s an improvement, it isn’t going to lead to a bunch of other wonderful results. It’s focused on a single issue, not a root cause.

If you get factory farms to change, it doesn’t suddenly get way easier to do abortion-related reforms. You aren’t getting at root causes, such as economic illiteracy or corrupt, lobbyist-influenced lawmakers, which are contributing to dozens of huge problems. You aren’t figuring out why so many large corporations are so awful and fixing that, which would improve factory farms and dozens of other things too. You aren’t making the world significantly more rational, nor making the government significantly better at implementing reforms.

If you don’t know how to do a better, more powerful reform, stop and plan more. Don’t just assume it’s impossible. The world desperately needs more people willing to be good intellectuals who study how things work and create good plans. Help with that. Please. We don’t need more front-line activists working on ineffective causes, fighting with people, and being led by poor leaders. There are so many front-line activists who are on the wrong side of things, fighting others who merely got lucky to be on the right side (but tribalist fighting is generally counter-productive even if you’re on the right side).

Fighting with people tends to be polarizing and divisive. It can make it harder to persuade people. If the people who disagree with you feel threatened, and think you might get the government to force your way of life on them, they will dig in and fight hard. They’ll stop listening to your reasoning. If you want a future society with social harmony, agreement and rational persuasion, you’re not helping by fighting with people and trying to get laws made that many other people don’t want, which reduces their interest in constructive debate.

Root Causes

The two strategies I brought up are doing things with little opposition or doing things that are really super important and can fix many, many things. Be very skeptical of that second one. It’s a form of the “greater good” argument. Although fighting with people is bad, it could be worth it for the greater good if it fixes some root causes. But doing something with clear, immediate negatives in the name of the greater good rarely actually works out well.

There are other important ways to strategize besides looking for lack of opposition or importance. You can look for root causes instead of symptoms. Factory farms are downstream of various other problems. Approximately all the big corporations suck, not just the big farms. Why is that? What is going on there? What is causing that? People who want to fix factory farms should look into this and get a better understanding of the big picture.

I do believe that there are many practical, reasonable ways that factory farms could be improved, just like I think keeping factories clean in general is good for everyone from customers to workers to owners. What stops the companies from acting more reasonably? Why are they doing things that are worse for everyone, including themselves? What is broken there, and how is it related to many other types of big companies also being broken and irrational in many ways?

I have some answers but I won’t go into them now. I’ll just say that if you want to fix any of this stuff, you need a sophisticated view on the cause-and-effect relationships involved. You need to look at the full picture not just the picture in one industry before you can actually make a good plan. You also should find win/win solutions and present them in terms of mutual benefit instead of approaching it as activists fighting against enemies. You should do your best to proceed in a way where you don’t have enemies. Enemy-based activism is mostly counter-productive – it mostly makes the world worse by increasing fighting.

There are so many people who are so sure they’re right and so sure their cause is so important … and many of them are on opposite sides of the same cause. Don’t be one of those people. Stop rushing, stop fighting, read Eli Goldratt, look at the big picture, make cause-and-effect trees, make transition trees, etc., plan it all out, aim for mutual benefit, and reform things in a way with little to no fighting. If you can’t find ways to make progress without fighting, that usually means you don’t know what you’re doing and are making things worse not better (assuming a reasonably free, democratic country – this is less applicable in North Korea).



Food Industry Problems Aren’t Special

Many factory farms are dirty and problematic, but so are the workplaces for many (illegally underpaid) people sewing garments in Los Angeles.

Factory farms make less healthy meat, but the way a lot of meat is processed also makes it less healthy. And vegetable oil may be doing more harm to people’s health than meat. Adding artificial colorings and sweeteners to food does harm too, or piling in extra sugar. The world is full of problems.

You may disagree with me about some of these specific problems. My general point stands even if, actually, vegetable oil is fine. If you actually think the world is not full of huge problems, then we have a more relevant disagreement. In that case, you may wish to read some of my other writing about problems in the world or debate me.

The food industry, to some extent, is arrogantly trying to play God. They want to break food down into components (like salt, fat, sugar, protein, color and nutrients) and then recombine the components to build up ideal, cheap foods however they want. But they don’t know what they’re doing. They will remove a bunch of vitamins, then add back in a few that they know are important, while failing to add others back in. They have repeatedly hurt people by doing stuff like this. It’s especially dangerous when they think they know all the components needed to make baby formula, but they don’t – e.g. they will just get fat from soy and think they replaced fat with fat so it’s fine. But soy has clear differences from natural breast milk, such as a different fatty acid profile. They also have known for decades that the ratio of omega 3 and 6 fatty acids you consume is important, but then they put soy oil in many things and give people unhealthy ratios with too much omega 6. Then they also put out public health advice saying to eat more omega 3’s but not to eat fewer omega 6’s, even though they know it’s the ratio that matters and they know tons of people get way too much omega 6 (including infants).

Mixing huge batches of lettuce or ground beef (so if any of it has E. coli, the whole batch is contaminated) is similar to how Amazon commingles inventory from different sellers (including themselves), doesn’t track what came from whom, and thereby encourages fraud because when fraud happens they have no idea which seller sent in the fraudulent item. That doesn’t stop Amazon from blaming and punishing whichever seller was getting paid for the particular sale when the fraud was noticed, even though that seller is probably innocent. Due to policies like these, Amazon has a large amount of fraud on its platforms. Not all industries, in all areas, have these problems. E.g. I have heard of milk samples being tested from dairy farms, before mixing the milk together from many farms, so that responsibility for problems can be placed on the correct people. Similarly, if they wanted to, Amazon could keep track of which products are sent in from which sellers in order to figure out which sellers send in fraudulent items.
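To make the tracking idea concrete, here’s a minimal sketch in Python (my own illustration; the names and structure are hypothetical, not anything Amazon or a dairy actually uses). It just records which supplier sent each unit at intake, before anything is commingled, so a bad item can later be traced to its source:

```python
# Hypothetical sketch: record which supplier sent each unit at intake,
# before anything is commingled, so a bad item traces to the right party.

class ProvenanceLedger:
    def __init__(self):
        self.source_of = {}  # item_id -> supplier_id

    def receive(self, item_id, supplier_id):
        # Record provenance when the item arrives, before commingling.
        self.source_of[item_id] = supplier_id

    def trace(self, item_id):
        # A contaminated or fraudulent item traces back to whoever
        # actually supplied it, not whoever happened to make the last sale.
        return self.source_of.get(item_id, "unknown (commingled, untracked)")

ledger = ProvenanceLedger()
ledger.receive("unit-001", "farm-A")
ledger.receive("unit-002", "farm-B")
print(ledger.trace("unit-002"))  # farm-B
print(ledger.trace("unit-999"))  # unknown (commingled, untracked)
```

The design point is simply that responsibility requires keeping the source information at the moment of intake; once units are mixed without a ledger, blame can only be guessed at.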

Activists who care to consider the big picture should wonder why there are problems in many industries, and wonder what can be done about them.

For example, enforcing fraud laws better would affect basically all industries at once, so that is a candidate idea that could have much higher leverage.

Getting people to stop picking fights without 80% majorities could affect fighting, activism, tribalism and other problems across all topics, so it potentially has very high leverage.

Limiting the government’s powers to favor companies would apply to all industries.

Basic economics education could help people make better decisions about any industry and better judge what sort of policies are reasonable for any industry.

Dozens more high-leverage ideas could be brainstormed. The main obstacle to finding them is that people generally aren’t actually trying. Activists tend to be motivated by some concrete issue in one area (like helping animals or the environment, or preventing cancer, or helping kids or the elderly or battered women), not by abstract issues like higher leverage or fight-avoiding reforms. If a proposal is low-leverage and involves fighting with a lot of people (or powerful people) who oppose it, then in general it’s a really bad idea. Women’s shelters or soup kitchens are low leverage but largely unopposed, so that’s a lot better than a low leverage cause which many people think is bad. But it’s high leverage causes that have the potential to dramatically improve the world. Many low-leverage, unopposed causes can add up and make a big difference too. High leverage but opposed causes can easily be worse than low leverage unopposed causes. If you’re going to oppose people you really ought to aim for high leverage to try to make it worth it. Sometimes there’s an approach to a controversial cause that dramatically reduces opposition, and people should be more interested in seeking those out.


Elliot Temple | Permalink | Messages (0)

What Is Intelligence?

Intelligence is a universal knowledge creation system. It uses the only known method of knowledge creation: evolution. Knowledge is information adapted to a purpose.

The replicators that evolve are called ideas. They are varied and selected too.

How are they selected? By criticism. Criticisms are themselves ideas which can be criticized.

To evaluate an idea and a criticism of it, you also need context including goals (or purposes or values or something else about what is good or bad). Context and goals are ideas too. They too can be replicated, varied, selected/criticized, improved, thought about, etc.

A criticism explains that an idea fails at a goal. An idea, in isolation from any purpose, cannot be evaluated effectively. There need to be success and failure criteria (the goal, and other results that aren’t the goal) in order to evaluate and criticize an idea. The same idea can work for one goal and fail for another goal. (Actually, approximately all ideas work for some goals and fail for others.)

How do I know this? I regard it as the best available theory given current knowledge. I don’t know a refutation of it and I don’t know of any viable alternative.

Interpretations of information (such as observations or sense data) are ideas too.

Emotions are ideas too but they’re often connected to preceding physiological states which can make things more complicated. They can also precede changes in physiological states.

We mostly use our ideas in a subconscious way. You can think of your brain like a huge factory and your conscious mind like one person who can go around and do jobs, inspect workstations, monitor employees, etc. But at any time, there’s a lot of work going on which he isn’t seeing. The conscious mind has limited attention and needs to delegate a ton of stuff to the subconscious after figuring out how to do it. This is done with e.g. practice to form habits – a habit means your subconscious is doing at least part of the work so it can seem (partly) automatic from the perspective of your consciousness.

Your conscious mind could also be thought of as a small group of people at the factory who often stick together but can split up. There are claims that we can think about up to roughly seven things at once, or keep up to seven separate things in active memory, or that we can do some genuine multi-tasking (meaning doing multiple things at once, instead of doing only one at a time but switching frequently).

Due to limited conscious attention and limited short-term memory, one of the main things we do is take a few ideas and combine them into one new idea. That’s called integration. We take what used to be several mental units and create a new, single mental unit which has a lot of the value of its components. But it’s just one thing, so it costs less attention than the previous multiple things. By repeating this process, we can get advanced ideas.

If you combine four basic ideas, we can call the new idea a level 1 idea. Combine four level 1 ideas and you get a level 2 idea. Keep going and you can get a level 50 idea eventually. You can also combine ideas that aren’t from the same level, and you can combine different numbers of ideas.

This may create a pyramid structure with many more low level ideas than high level ideas. But it doesn’t necessarily have to. Say you have 10 level 0 ideas at the foundation. You can make 210 different combinations of four ideas from those original 10. You can also make 45 groups of two ideas, 120 groups of three, and 252 groups of five. (This assumes the order of the ideas doesn’t matter, and each idea can only be used once in a combination, or else you’d get even more combinations.)
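The counts above are just binomial coefficients, and you can check them quickly (a small Python sketch added here for illustration):

```python
# Check the combination counts for 10 foundational ideas:
# groups of 2, 3, 4 and 5 chosen from 10, order ignored, no repeats.
from math import comb

for group_size in (2, 3, 4, 5):
    print(f"groups of {group_size}: {comb(10, group_size)}")
# groups of 2: 45, groups of 3: 120, groups of 4: 210, groups of 5: 252
```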


Elliot Temple | Permalink | Messages (0)

EA Judo and Economic Calculation

Effective Altruism (EA) claims to like criticism. They have a common tactic: they say thanks for this criticism. Our critics make us stronger. We have used this information to fix the weakness. So don’t lower your opinion of EA due to the criticism; raise it due to one more weakness being fixed.

The term “EA Judo” comes from Notes on Effective Altruism by Michael Nielsen which defines EA Judo as:

strong critique of any particular "most good" strategy improves EA, it doesn't discredit it

This approach doesn’t always work. Some criticisms are hard to fix. Some criticisms require large changes that EAs don’t want to make, such as changing which causes they put money towards.

EA could sometimes be criticized for not having figured out a criticism themselves sooner, which shows a lack of intellectual rigor, leadership, organization, effort, something … which is much harder to fix than addressing one concrete weakness.

They like criticisms like “X is 10% more important, relative to Y, than you realized” at which point they can advise people to donate slightly more to X which is an easy fix. But they don’t like criticism of their methodology or criticism of how actually one of their major causes is counter-productive and should be discontinued.

The same pretending-to-like-criticism technique was the response Ludwig von Mises got from the socialists 100 years ago.

Mises told them a flaw in socialism (the economic calculation problem). At first they thought they could fix it. They thanked him for helping make socialism better.

Their fixes were superficial and wrong. He explained that their fixes didn’t work, and also why his criticism was deeper and more fundamental than they had recognized.

So then, with no fixes in sight, they stopped speaking with him. Which brings us to today when socialists, progressives and others still don’t really engage with Mises or classical liberalism.

EA behaves the same way. When you make criticisms that are harder to deal with, you get ignored. Or you get thanked and engaged with in non-impactful ways, then nothing much changes.


Elliot Temple | Permalink | Messages (0)

“Small” Errors, Frauds and Violences

People often don’t want to fix “small” problems. They commonly don’t believe that consistently getting “small” stuff right would lead to better outcomes for big problems.

“Small” Intellectual Errors

For example, people generally don’t think avoiding misquotes, incorrect cites, factual errors, math errors, grammar errors, ambiguous sentences and vague references would dramatically improve discussions.

They’ve never tried it, don’t have the skill to try it even if they wanted to, and have no experience with what it’d be like.

But they believe it’d be too much work because they aren’t imagining practicing these skills until they’re largely automated. If you do all the work with conscious effort, that would indeed be too much work, as it would be for most things.

You automatically use many words for their correct meanings, like “cat” or “table”. What you automatically and reliably get right, with ease, can be expanded with study and practice. What you find intuitive or second-nature can be expanded. Reliably getting it right without it taking significant conscious effort is called mastery.

But you can basically only expand your mastery to “small” issues. You can’t just take some big, hard, complex thing with 50 parts and master it as a single, whole unit. You have to break it into those 50 parts and master them individually. You can only realistically work on a few at a time.

So if you don’t want to work on small things, you’ll be stuck. And most people are pretty stuck on most topics, so that makes sense. The theory fits with my observations.

Also, in general, you can’t know how “small” an error is until after you fix it. Sometimes what appears to be a “small” error turns out very important and requires large changes to fix. And sometimes what appears to be a “big” error can be fixed with one small change. After you understand the error and its solution, you can judge its size. But when there are still significant unknowns, you’re just guessing. So if you refuse to try to fix “small” errors, you will inevitably guess that some “big” errors are small and then refuse to try to fix them.

Factory Farm Fraud

Similarly, animal welfare activists generally don’t believe that policing fraud is a good approach to factory farms. Fraud is too “small” of an issue which doesn’t directly do what they want, just like how avoiding misquotes is too “small” of an issue which doesn’t directly make conversations productive.

Activists tend to want to help the animals directly. They want better living conditions for animals. They broadly aren’t concerned with companies putting untrue statements on their websites which mislead the public. Big lies like “our chickens spend their whole lives in pasture” when they’re actually kept locked in indoor cages would draw attention. Meat companies generally don’t lie that egregiously, but they do make many untrue and misleading statements which contribute to the public having the wrong idea about what farms are like.

Fraud is uncontroversially illegal. But many people wouldn’t really care if a company used a misquote in an ad. That would be “small” fraud. Basically, I think companies should communicate with the public using similar minimal standards to what rational philosophy discussions should use. They don’t have to be super smart or wise, but they should at least get basics right. By basics I mean things where there’s no real controversy about what the correct answer is. They should quote accurately, cite accurately, get math right, get facts right, avoid statements that are both ambiguous and misleading, and get all the other “small” issues right. Not all of these are fraud issues. If a person in a discussion makes an ad hominem attack instead of an argument, that’s bad. If a company does it on their website, that’s bad too, but it’s not fraud, it’s just dumb. But many types of “small” errors, like wrong quotes or facts in marketing materials, can be fraud.

What Is Fraud?

Legally, fraud involves communicating something false or misleading about something where there is an objective, right answer (not something which is a matter of opinion). Fraud has to be knowing or reckless, not an innocent accident. If they lied on purpose, or they chose not to take reasonable steps to find out if what they said is true or false, then it can be fraud. Fraud also requires harm – e.g. consumers who made purchasing decisions partly based on fraudulent information. And all of this has to be judged according to current, standard ideas in our society, not by using any advanced but unpopular philosophy analysis.
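To restate those elements compactly (this is just my paraphrase of the paragraph above as a checklist, not legal advice or an official test):

```python
# My paraphrase of the fraud elements described above, as a checklist.
# Not legal advice; just the paragraph restated in code form.

def looks_like_fraud(false_or_misleading: bool,   # statement is untrue or misleading
                     objective_matter: bool,      # about fact, not a matter of opinion
                     knowing_or_reckless: bool,   # not an innocent accident
                     caused_harm: bool) -> bool:  # e.g. purchases were influenced
    # All elements must hold, judged by current, standard ideas in society.
    return all([false_or_misleading, objective_matter,
                knowing_or_reckless, caused_harm])

# An honest, innocent mistake (not knowing or reckless) wouldn't qualify:
print(looks_like_fraud(True, True, False, True))  # False
```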

Does “Small” Fraud Matter?

There’s widespread agreement that it’s important to police a “big” fraud like FTX, Enron, Theranos, Bernie Madoff’s Ponzi scheme, or Wells Fargo creating millions of accounts that people didn’t sign up for.

Do large corporations commit “small” frauds that qualify according to the legal meaning of fraud that I explained? I believe they do that routinely. It isn’t policed well. It could be.

If smaller frauds were policed well, would that help much? I think so. I think the effectiveness would be similar to the effectiveness of policing “small” errors in intellectual discussions. I think even many people who think it’d be ineffective can agree with me that it’d be similarly effective in the two different cases. There’s a parallel there.

Disallowing fraud is one of the basics of law and order, after disallowing violence. It’s important to classical liberalism and capitalism. It’s widely accepted by other schools of thought too, e.g. socialists and Keynesians also oppose fraud. But people from all those schools of thought tend not to care about “small” fraud like I do.

Fraud is closely related to breach of contract and theft. Suppose I read your marketing materials, reasonably conclude that your mattress doesn’t contain fiberglass, and buy it. The implied contract is that I trade $1000 for a mattress with various characteristics like being new, clean, a specific size, fiberglass-free and shipped to my home. If the mattress provided doesn’t satisfy the (implied) contract terms, then the company has not fulfilled their side of the contract. They are guilty of breach of contract. They therefore, in short, have no right to receive money from me as specified in the contract that they didn’t follow. If they keep my money anyway then, from a theoretical perspective, that’s theft because they have my property and won’t give it back. I could sue (and that’s at least somewhat realistic). Many people would see the connection to breach of contract and theft if the company purposefully shipped an empty box with no mattress in it, but fewer people seem to see it if they send a mattress which doesn’t match what was advertised in a “smaller” way.

“Small” Violence

Disallowing “small” instances of violence is much more popular than disallowing “small” frauds, but not everyone cares about it. Some people think pushing someone at a bar, or even getting in a fist fight, is acceptable behavior. I think it’s absolutely unacceptable to get into someone’s personal space in an intimidating manner, so that they reasonably fear that you might touch them in any way without consent. Worse is actually doing any intentional, aggressive, non-consensual touch. I wish this was enforced better. I think many people do have anti-violence opinions similar to mine. There are people who find even “small” violence horrifying and don’t want it to happen to anyone. That viewpoint exists for violence in a way that I don’t think it does for fraud or misquotes.

Note that I was discussing the enforcement of “small” violence to strangers. Unfortunately, people’s attitudes tend to be worse when it’s done to your wife, your girlfriend, your own child, etc. Police usually treat “domestic” violence differently than other violence and do less to stop it. However, again, lots of people besides me do want better protection for victims.

Maybe after “small” violence is more thoroughly rejected by almost everyone, society will start taking “small” fraud and “small” breach of contract more seriously.

Why To Improve “Small” Problems

Smaller problems tend to provide opportunities for improvement that are easier to get right, easier to implement, simpler, and less controversial about what the right answer is.

Basically, fix the easy stuff first and then you’ll get into a better situation and can reevaluate what problems are left. Fixing a bunch of small problems will usually help with some but not all of the bigger or harder problems.

Also, the best way to solve hard problems is often to break them down into small parts. So then you end up solving a bunch of small problems. This is just like learning a big, hard subject by breaking it down into many parts.

People often resist this, but not because they disagree that the small problem is bad or that your fix will work. There are a few reasons they resist it:

  • They are in a big hurry to work directly on the big problem that they care about
  • They are skeptical that the small fixes will add up to much or make much difference or be important enough
  • They think the small fixes will take too much work

Why would small fixes take a lot of work? Because people don’t respect them, sabotage them, complain about them, etc., instead of doing them. People make it harder than it has to be, then say it’s too hard.

Small fixes also seem like too much work if problem solving is broken to the point that fixing anything is nearly impossible. People in that situation often don’t even want to try to solve a problem unless the rewards are really big (or the issue is so small that they don’t recognize what they’re doing as problem solving – people who don’t want to solve “small” problems often do solve hundreds of tiny problems every day).

If you’re really stuck on problem solving and can barely solve anything, working on smaller problems can help you get unstuck. If you try to work on big problems, it’s more overwhelming and gives you more hard stuff to deal with at once. The big problem is hard and getting unstuck is hard, so that’s at least two things. It’d be better to get unstuck with a small, easy problem that is unimportant (so the stakes are low and you don’t feel much pressure), so the only hard part is whatever you’re stuck on, and everything else provides minimal distraction. Though I think many of these people want to be distracted from the (irrational) reasons they’re stuck and failing to solve problems, rather than wanting to face and try to solve what’s going on there.

Small fixes also seem too hard if you imagine doing many small things using conscious effort and attention. To get lots of small things right, you must practice, automatize, and use your subconscious. If you aren’t doing that, you aren’t going to be very effective at small or big things. Most of your brainpower is in your subconscious.


See also my articles Ignoring “Small” Errors and “Small” Fraud by Tyson and Food Safety Net Services.


Elliot Temple | Permalink | Messages (0)

“Small” Fraud by Tyson and Food Safety Net Services

This is a followup for my article “Small” Errors, Frauds and Violences. It discusses a specific example of “small” fraud.


Tyson is a large meat processing company that gets meat from factory farms. Tyson’s website advertises that their meat passes objective inspections and audits (mirror) from unbiased third parties.

Tyson makes these claims because these issues matter to consumers and affect purchasing. For example, a 2015 survey found that “56 percent of US consumers stop buying from companies they believe are unethical” and 35% would stop buying even if there is no substitute available. So if Tyson is lying to seem more ethical, there is actual harm to consumers who bought products they wouldn’t have bought without being lied to, so it’d qualify legally as fraud.

So if Tyson says (mirror) “The [third party] audits give us rigorous feedback to help fine tune our food safety practices.”, that better be true. They better actually have internal documents containing text which a reasonable person could interpret as “rigorous feedback”. And if Tyson puts up a website section about animal welfare on their whole website about sustainability, their claims better be true.

I don’t think this stuff is false in a “big” way. E.g., they say they audited 50 facilities in 2021 just for their “Social Compliance Auditing program”. Did they actually audit 0 facilities? Are they just lying and making stuff up? I really doubt it.

But is it “small” fraud? Is it actually true that the audits give them rigorous feedback? Are consumers being misled?

I am suspicious because they get third party audits from Food Safety Net Services, an allegedly independent company that posts partisan meat propaganda (mirror) on their own public website.

How rigorous or independent are the audits from a company that markets (mirror) “Establishing Credibility” as a service they provide while talking about how you need a “non-biased, third-party testing facility” (themselves) and saying they’ll help you gain the “trust” of consumers? They obviously aren’t actually non-biased since they somehow think posting partisan meat propaganda on their website is fine while trying to claim non-bias.

Food Safety Net Services don’t even have a Wikipedia page or other basic information about them available, but they do say (mirror) that their auditing:

started as a subset of FSNS Laboratories in 1998. The primary focus of the auditing group was product and customer-specific audits for laboratory customers. With a large customer base in the meat industry, our auditing business started by offering services specific to meat production and processing. … While still heavily involved in the meat industry, our focus in 2008 broadened to include all food manufacturing sites.

The auditing started with a pre-existing customer base in the meat industry, and a decade later expanded to cover other types of food. It sounds independent like how Uber drivers are independent contractors or how many Amazon delivery drivers work for independent companies. This is the meat industry auditing itself, displaying their partisan biases in public, and then claiming they have non-biased, independent auditing. How can you do a non-biased audit when you have no other income and must please your meat customers? How can you do a non-biased meat audit when you literally post meat-related propaganda articles on your website?

How can you do independent, non-biased audits when your meat auditing team is run by meat industry veterans? Isn’t it suspicious that your “Senior Vice President of Audit Services” “spent 20 years in meat processing facilities, a majority of the time in operational management. Operational experience included steak cutting, marinating, fully cooked meat products, par fry meat and vegetables, batter and breaded meat and vegetables, beef slaughter and fabrication, ground beef, and beef trimmings.” (source)? Why exactly is she qualified to be in charge of non-biased audits? Did she undergo anti-bias training? What has she done to become unbiased about meat after her time in the industry? None of her listed credentials actually say anything about her ability to be unbiased about meat auditing. Instead of trying to establish her objectivity in any way, they brag about someone with “a strong background in the meat industry” performing over 300 audits.

Their Impartiality Statement is one paragraph long and says “Team members … have agreed to operate in an ethical manner with no conflict or perceived conflict of interest.” and employees have to sign an ethics document promising to disclose conflicts of interest. That’s it. Their strategy for providing non-biased audits is to make low-level employees promise in writing to be non-biased; that way, if anything goes wrong, management can put all the blame on the workers and claim the workers defrauded them by falsely signing the contracts they were required to sign to be hired.

Is this a ridiculous joke, lawbreaking, or a “small” fraud that doesn’t really matter, or a “small” fraud that actually does matter? Would ending practices like this make the industry better and lead to more sanitary conditions for farm animals, or would it be irrelevant?

I think ending fraud would indirectly result in better conditions for animals and reduce their suffering (on the premise that animals can suffer). Companies would have to make changes, like using more effective audits, so that their policies are followed more. And they’d have to change their practices to better match what the public thinks is OK.

This stuff isn’t very hard to find, but in a world where even some anti-factory-farm activists don’t care (and actually express high confidence about the legal innocence of the factory farm companies), it’s hard to fix.

Though some activists actually have done some better and more useful work. For example, The Humane League has a 2021 report about slaughterhouses not following the law. Despite bias, current auditing practices already show many violations. That’s not primarily about fraud, but it implies fraud because the companies tell the public that their meat was produced in compliance with the law.


Elliot Temple | Permalink | Messages (0)

Capitalism or Charity

In an ideal capitalist society, a pretty straightforward hypothesis for how to do the most good is: make the most money you can. (If this doesn’t make sense to you, you aren’t familiar with the basic pro-capitalist claims. If you’re interested, Time Will Run Back is a good book to start with. It’s a novel about the leaders of a socialist dystopia trying to solve their problems and thereby reinventing capitalism.)

Instead of earn to give, the advice could just be earn.

What’s best to do with extra money? The first hypothesis to consider, from a capitalist perspective, is invest it. Helping with capital accumulation will do good.

I don’t think Effective Altruism (EA) has any analysis or refutation of these hypotheses. I’ve seen nothing indicating they understand the basic claims and reasoning of the capitalist viewpoint. They seem to just ignore thinkers like Ludwig von Mises.

We (in USA and many other places) do not live in an ideal capitalist society, but we live in a society with significant capitalist elements. So the actions we’d take in a fully capitalist society should be considered as possibilities that may work well in our society, or which might work well with some modifications.

One cause that might do a lot of good is making society more capitalist. This merits analysis and consideration which I don’t think EA has done.

What are some of the objections to making money as a way to do good?

  • Disagreement about how economics works.
  • Loopholes – a society not being fully capitalist means it isn’t doing a full job of making sure satisfying consumers is the only way to make much money. E.g. it may be possible to get rich by fraud or by forcible suppression of competition (with your own force or the help of government force).
  • This only focuses on good that people are willing to pay for. People might not pay to benefit cats, and cats don’t have money to pay for their own benefit.
  • The general public could be shortsighted, have bad taste, etc. So giving them what they want most might not do the most good. (Some alternatives, like having a society ruled by philosopher kings, are probably worse.)

What are some advantages of the making money approach? Figuring out what will do good is really hard, but market economies provide prices that give guidance about how much people value goods or services. Higher prices indicate something does more good. Higher profits indicate something is more cost effective. (Profits are the selling price minus the costs of creating the product or providing the service. To be efficient, we need to consider expenses, not just revenue.)
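As a tiny illustration of why profit rather than revenue signals cost effectiveness (the numbers are made up):

```python
# Made-up numbers: the option with higher revenue can still be the worse
# use of resources once you subtract the cost of providing it.

options = {
    "A": {"revenue": 100.0, "cost": 90.0},  # profit 10
    "B": {"revenue": 60.0, "cost": 20.0},   # profit 40
}

for name, o in options.items():
    print(name, "profit:", o["revenue"] - o["cost"])
# B earns less revenue but more profit, so it creates more value
# relative to the resources it consumes.
```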

Measuring Value

Lots of charities don’t know how to measure how much good they’re doing. EA tries to help with that problem. EA does analysis of how effective different charities are. But EA’s methods, like those of socialist central planners, aren’t very good. The market mechanism is much better at pricing things than EA is at assigning effectiveness scores to charities.

One of the main issues, which makes EA’s analysis job hard, is that different charities do qualitatively different things. EA has to compare unlike things. EA has to combine factors from different dimensions. E.g. EA tries to determine whether a childhood vaccines charity does more or less good than an AI Alignment charity.

If EA did a good job with their analysis, they could make a reasonable comparison of one childhood vaccine charity with another. But comparing different types of charities is like comparing apples to oranges. This is fundamentally problematic. One of the most impressive things about the market price system is it takes products which are totally different – e.g. food, clothes, tools, luxuries, TVs, furniture, cars – and puts them all on a common scale (dollars or more generally money). The free market is able to validly get comparable numbers for qualitatively different things. That’s an extremely hard problem in general, for complex scenarios, so basically neither EA nor central planners can do it well. (That partly isn’t their fault. It doesn’t mean they aren’t clever enough. I would fail at it too. The only way to win is stop trying to do that and find a different approach. The fault is in using that approach, not in failing to get good answers with the approach. More thoughtful or diligent analysis won’t fix this.)
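One way to see the apples-to-oranges problem (my own illustration, not EA’s actual method): if you combine factors from different dimensions with a weighted sum, the ranking depends entirely on weights that nothing pins down, whereas market prices already express everything in one common unit.

```python
# Illustration: with no principled exchange rate between unlike dimensions,
# arbitrary weights determine which charity "wins". The names and numbers
# here are hypothetical.

charities = {
    "childhood_vaccines": {"near_term_benefit": 1000, "long_term_benefit": 1},
    "ai_alignment":       {"near_term_benefit": 1,    "long_term_benefit": 1000},
}

def score(charity, w_near, w_long):
    return w_near * charity["near_term_benefit"] + w_long * charity["long_term_benefit"]

for weights in [(1.0, 0.001), (0.001, 1.0)]:
    ranking = sorted(charities,
                     key=lambda name: score(charities[name], *weights),
                     reverse=True)
    print(weights, "->", ranking)
# Flipping the arbitrary weights flips the ranking. Market prices avoid
# this by putting qualitatively different things on one scale (money).
```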

See Multi-Factor Decision Making Math for more information about the problems with comparing unlike things.


Elliot Temple | Permalink | Messages (0)