Betting Your Career

I quit the Effective Altruism forum due to a new rule requiring posts and comments be basically put in the public domain without copyright. More info. I had a bunch of draft posts, so I’m posting some of them here with minimal editing.


People bet their careers on various premises outside their own expertise, e.g. AGI (alignment) researchers commonly bet on some epistemology without being experts on epistemology who have actually read Popper and concluded, in their own judgment, that he's wrong.

So you might expect them to be interested in criticism of those premises. Shouldn’t they want to investigate the risk?

But that depends on what you value about your career.

If you want money and status, and not to have to make changes, then maybe it’s safer to ignore critics who don’t seem likely to get much attention.

If you want to do productive work that's actually useful, then wrong premises put your career at risk.

People won’t admit it, but many of them don’t actually care that much about whether their career is productive. As long as they get status and money, they’re satisfied.

Also, a lot of people lack confidence that they can do very productive work regardless of whether their premises are right or wrong.

Actually, having wrong but normal/understandable/blameless premises has big advantages: you won't come up with important research results, but it's not your fault. If it comes out that your premises were wrong, you did the noble work of investigating a lead that many people believed was promising. Science and other types of research always involve investigating many leads that don't turn out to be important. By contrast, if you work on a lead people want investigated, do nothing useful, and the lead turns out to be important, then other investigators outcompeted you, and people could wonder why you didn't figure out anything about the lead you worked on. But if the lead you work on turns out to be a dead end, then the awkward questions go away. So there's an advantage to working on dead ends, as long as other people think they're a good thing to work on.



Attention Filtering and Debate

I quit the Effective Altruism forum due to a new rule requiring posts and comments be basically put in the public domain without copyright. More info. I had a bunch of draft posts, so I’m posting some of them here with minimal editing.


People skim and filter. Gatekeepers, and many other types of filters, end up acting as indirect proxies for social status much more than as tools for truth seeking.

Filtering isn’t the only problem though. If you have some credentials – awards, a PhD, a popular book, thousands of fans – people often still won’t debate you. Also, I certainly get through initial filtering sometimes. People talk with me some, and a lot more people read some of what I say.

After you get through filters, you run into problems like people still not wanting to debate or not wanting to put in enough effort to understand your point. We could call this secondary filtering. Maybe if you get through five layers of filters, then they’ll debate. Or maybe not. I think some of the filters are generated ad hoc because they don’t want to debate or consider (some types of) ideas that disagree with their current ideas. People can keep making up new excuses as necessary.

Why don’t people want to debate? Often because they’re bad at it.

And they know, even if they don't consciously admit it, that debating is risky to their social status, and that the expected outcome of debating, for them, is a loss of status.

And they know that, if they lose the debate, they will then face a problem. They'll be conflicted: part of them will want to change their mind, and part of them won't. They don't know how to deal with that kind of conflict, so they'd rather avoid getting into the situation at all.

Also they already have a ton of urgent changes to make in their lives. They already know lots of ways they’re wrong. They already know about many mistakes. So they don’t exactly need new criticism. Adding more issues to the queue isn’t valuable.

All of that is fine, but anyone who admits it is no thought leader. So people don't want to admit it. And if an intellectual position has no thought leaders capable of defending it, that's a major problem. So people make excuses, pretend someone else will debate if debate is merited, shift responsibility to others (usually not to specific people), and so on.

Debating is a status risk, a self-esteem risk, and a hard activity. And people may not want to learn about (even more) errors, which would lead to thinking they should change, which is hard and something they may fail at (and which may further harm status and self-esteem, and be distracting and unpleasant).



Friendliness or Precision

I quit the Effective Altruism forum due to a new rule requiring posts and comments be basically put in the public domain without copyright. More info. I had a bunch of draft posts, so I’m posting some of them here with minimal editing.


In a debate, if you’re unfriendly and you make a lot of little mistakes, you should expect the mistakes to (on average) be biased for your side and against their side. In general, making many small, biased mistakes ruins debates dealing with complex or subtle issues. It’s too hard to fix them all, especially considering you’re the guy who made them (if you had the skill to fix them all, you could have used that same skill to avoid making some of them).

In other words, if you dislike someone, being extremely careful, rigorous, and accurate with your reasoning provides a defense against bias. Without that defense, you don't have much of a chance of debating them productively.

If you have a positive attitude and are happy to hear about their perspective, that helps prevent being biased against them. If you have really high intellectual standards and avoid making small mistakes, that helps prevent bias. If you have neither of those things, conversation doesn’t work well.



Hard and Soft Rationality Policies

I have two main rationality policies that are written down:

  1. Debate Policy
  2. Paths Forward Policy

I have many other, smaller policies that are written down somewhere in some form, like policies about not misquoting or about giving direct answers to direct questions (e.g., say "yes" or "no" first when answering a yes-or-no question; then write extra stuff if you want, but don't skip the direct answer).

A policy I recognized the other day as worth writing down is my debate policy sharing policy. I've had this policy (informally) for a long time. It's important, but it isn't written in my debate policy itself.

If someone seems to want to debate me, but they don't invoke my debate policy, then I should link them to the debate policy so they have the option to use it. I shouldn't get out of the debate based on them not finding my debate policy.

In practice, I link the policy to a lot of people who I doubt want to debate me. I like sharing it; that's part of the point. It's useful to me and helps me deal with some situations in an easy way. I get into situations where I want to say or explain something, but writing it out every time would be too much work. Since some of the same things come up over and over, I can write them once and then share links instead of rewriting the same points. My debate policy says some of the things I frequently want to tell people, and linking it lets me repeat those things with very low effort.

One can imagine someone who put up a debate policy and then didn't mention it to critics who didn't ask for a debate in the right words. One can imagine someone who likes having the policy so they can claim they're rational, but they'd prefer to minimize actually using it. That would be problematic. I wrote my debate policy conditions so that if someone actually meets them, I'd like to debate. I don't dread that or want to avoid it. If you have a debate policy but hope people don't use it, then you have a problem to solve.

If I'm going to ignore a question or criticism from someone I don't know, then I want to link my policy so they have a way to fix things if I was wrong to ignore them. If I don't link it, and they have no idea it exists, then the results are similar to not having the policy. It doesn't function as a failsafe in that case.

Some policies offer hard guarantees and some are softer. What enforces the softer ones, so that they mean something instead of being violated whenever one feels like it? Generic, hard guarantees, like a debate policy, which can be used to address doing poorly at any of the softer guarantees.

For example, I don't have any specific written guarantee about linking people to my debate policy. There's an implicit (and now explicit in this post) soft guarantee that I should make a reasonable effort to share it with people who might want to use it. If I do poorly at that, someone could invoke my debate policy over my behavior. But I don't care much about making a specific, hard guarantee about debate policy link sharing, because I have the debate policy itself as a failsafe to keep me honest. I think I do a good job of sharing my debate policy link, and I don't know how to write specific guarantees to make things better. It seems like something that needs a good faith effort, which is hard to define. That's fine for some issues as long as you also have some clearer, more objective, generic guarantees in case you screw up on the fuzzier stuff.

Besides hard and soft policies, we could also distinguish policies from tools. Like I have a specific method of having a debate where people choose what key points they want to put in the debate tree. I have another debate method where people say two things at a time (it splits the conversation into two halves, one led by each person). I consider those tools. I don't have a policy of always using those things, or using those things in specific conditions. Instead, they're optional ways of debating that I can use when useful. There's a sort of soft policy there: use them when it looks like a good idea. Making a grammar tree is another tool, and I have a related soft policy of using that tool when it seems worthwhile. Having a big toolkit with great intellectual tools, along with actually recognizing situations for using them, is really useful.
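
As a rough illustration of the tree tools (a minimal sketch; the Python class, method names, and example points below are hypothetical, not a description of any actual tool), a debate or idea tree can be modeled as nodes where each participant adds the key points they want tracked:

```python
# Minimal sketch of a debate/idea tree: each node holds one key point,
# tagged with who contributed it. Hypothetical structure for illustration only.

class Node:
    def __init__(self, point, author):
        self.point = point      # the key claim, question, or counterargument
        self.author = author    # which participant added it
        self.children = []      # replies and sub-points

    def add_child(self, point, author):
        child = Node(point, author)
        self.children.append(child)
        return child

    def show(self, depth=0):
        # Print the tree with indentation showing the reply structure.
        print("  " * depth + f"{self.author}: {self.point}")
        for child in self.children:
            child.show(depth + 1)

# Example: two participants each add the points they consider key.
root = Node("Should we use written debate policies?", "Alice")
pro = root.add_child("Written policies reduce bias in allocating attention.", "Alice")
con = root.add_child("Policies add overhead and could be gamed.", "Bob")
pro.add_child("Overhead stays low if soft policies have a hard failsafe.", "Alice")
root.show()
```

The design idea is that each claim, reply, or counterargument gets its own explicit node, so it's easy to see what has and hasn't been addressed.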



A Non-Status-Based Filter

Asking people if they want to have a serious conversation is a way of filtering, or gatekeeping, which isn’t based on social status. Regardless of one’s status, anyone can opt in. This does require making the offer to large groups, randomized people, or something else that avoids social status. If you just make the offer to people you like, then your choice of who to offer conversations to is probably status based.

This might sound like the most ineffective filter ever. People can just say “yes I want to pass your filter” and then they pass. But in practice, I find it effective – the majority of people decline (or don’t reply, or reply about something else) and are filtered out.

You might think it only filters out people who were not going to have a conversation with you anyway. However, people often converse because they're baited into it, triggered, defensive, caught up in trying to correct someone they think is wrong, etc. Asking people to make a decision about whether they want to be in a conversation can help them realize that they don't want to. That's beneficial for both you and them. However, I've never had one of them thank me for it.

A reason people dislike this filter is they associate all filters with status and therefore interpret being filtered out as an attack on their status – a claim they are not good enough in some way. But that’s a pretty weird interpretation with this specific filter.

This filter is, in some sense, the nicest filter ever. No one is ever filtered out who doesn’t want to be filtered out. Only this filter and variants of it have that property. Filtering on anything else, besides whether the person wants to opt in or out, would filter out some people who prefer to opt in. However, no one has ever reacted to me like it’s a nice filter. Many reactions are neutral, and some negative, but no one has praised me for being nice.

Useful non-status-based filters are somewhat difficult to come by and really important/valuable. Most filters people use are some sort of proxy for social status. That’s one of the major sources of bias in the world. What people pay attention to – what gets to them through gatekeeping/filtering – is heavily biased towards status. So it’s hard for them to disagree with high status ideas or learn about low status ideas (such as outliers and innovation).



Controversial Activism Is Problematic

EA mostly advocates controversial causes, meaning ones where they know that a lot of people disagree with them. In other words, there exist lots of people who think EA's causes are bad and wrong.

AI Alignment, animal welfare, global warming, fossil fuels, vaccinations and universal basic income are all examples of controversies. There are many people on each side of the debate. There are also experts on each side of the debate.

Some causes do involve less controversy, such as vitamin A supplements or deworming. I think that, in general, less controversial causes are better independent of whether they're correct. It's better when people broadly agree on what to do, and then do it, instead of trying to proceed with stuff while having a lot of opponents who put effort into working against you. I think EA has far too little respect for getting more widespread agreement and cooperation, and for not proceeding with action on issues where there are a lot of people taking action on the other side whom you have to fight against. This comes up most with political issues but also applies to e.g. AI Alignment.

I’m not saying it’s never worth it to try to proceed despite large disagreements, and win the fight. But it’s something people should be really skeptical of and try to avoid. It has huge downsides. There’s a large risk that you’re in the wrong and are actually doing something bad. And even if you’re right, the efforts of your opponents will cancel out a lot of your effort. Also, proceeding with action when people disagree basically means you’ve given up on persuasion working any time soon. In general, focusing on persuasion and trying to make better more reasonable arguments that can bring people together is much better than giving up on talking it out and just trying to win a fight. EA values persuasion and rational debate too little.


Suppose you want to make the world better in the short term, without worrying about a bunch of philosophy. You try to understand the situation you're in, what your goal is, what methods would work well, what is risky, etc. So how can you analyze the big picture in a fairly short way that doesn't require advanced skill to make sense of?

We can look at the world and see there are lots of disagreements. If we try to do something that lots of people disagree with, we might be doing something bad. It’s risky. Currently in the world, a ton of people on both sides of many controversies are doing this. Both sides have tons of people who feel super confident that they’re right, and who donate or get involved in activism. This is especially common with political issues.

So if you want to make the world better, two major options are:

  • Avoid controversy
  • Help resolve controversy

There could be exceptions, but these are broadly better options than taking sides and fighting in a controversy. If there are exceptions, correctly knowing about them would probably require a bunch of intellectual skill and study, and wouldn’t be compatible with looking for quicker, more accessible wins. A lot of people think their side of their cause is a special exception when it isn’t.

The overall world situation is there are far too many confident people who are far too eager to fight instead of seeking harmony, cooperation, working together, etc. Persuasion is what enables people to be on the same team instead of working against each other.

Causes related to education and sharing information can help resolve controversy, especially when they’re done in a non-partisan, unbiased way. Some education or information sharing efforts are clearly biased to help one side win, rather than focused on being fair and helpful. Stuff about raising awareness often means raising awareness of your key talking points and why your side is right. Propaganda efforts are very different than being neutral and helping enable people to form better opinions.

Another approach to resolving controversy is to look at intellectual thought leaders, and how they debate and engage with each other (or don’t), and try to figure out what’s going wrong there and what can be done about it.

Another approach is to look at how regular people debate each other and talk about issues, and try to understand why people on both sides aren’t being persuaded and try to come up with some ideas to resolve the issue. That means coming to a conclusion that most people on both sides can be happy with.

Another approach is to study philosophy and rationality.

Avoiding controversy is a valid option too. Helping people avoid blindness by getting enough Vitamin A is a pretty safe thing to work on if you want to do something good with a low risk that you’re actually on the wrong side.

A common approach people try to use is to have some experts figure out which sides of which issues are right. Then they feel safe in knowing they're right, because they trust that some smart people already looked into the matter really well. This approach doesn't make much sense in the common case that there are experts on both sides who disagree with each other. Why listen to these experts instead of some other experts who say other things? Often people already like a particular conclusion or cause and then find experts who agree with it. The experts offer justification for a pre-existing opinion rather than actually guiding what people think. Listening to experts can also run into issues related to irrational, biased gatekeeping about who counts as an "expert".

In general, people are just way too eager to pick a side and fight for it instead of trying to transcend, avoid or fix such fighting. They don't see cooperation, persuasion or harmony as powerful or realistic enough tools. They are content to try to beat opponents. And they don't seem very interested in looking at the symmetry: they think they're right and their cause is worth fighting for, but so do many people on the other side.

If your cause is really better, you should be able to find some sort of asymmetric advantage for your side. If it can give you a quick, clean victory, that's a good sign. If it's a messy, protracted battle, that's a sign that your asymmetric advantage wasn't good enough and you shouldn't be so confident that you know what you're talking about.



Rationality Policies Tips

I quit the Effective Altruism forum due to a new rule requiring all new posts and comments be basically put in the public domain without copyright, so anyone could e.g. sell a book of my posts without my consent (they’d just have to give attribution). More info. I had a bunch of draft posts, so I’m posting some of them here with minimal editing. In general, I’m not going to submit them as link posts at EA myself. If you think they should be shared with EA as link posts, please do it yourself. I’m happy for other people to share links to my work at EA or on social media. Please share stuff in whatever ways you think are good to do.


Suppose you have some rationality policies, and you always want to and do follow them. You do exactly the same actions you would have without the policies, plus a little bit of reviewing the policies, comparing your actions with the policies to make sure you’re following them, etc.

In this case, are the policies useless and a small waste of time?

No. Policies are valuable for communication. They provide explanations and predictability for other people. Other people will be more convinced that you’re rational and will understand your actions more. You’ll less often be accused of irrationality or bias (or, worse, have people believe you’re being biased without telling you or allowing a rebuttal). People will respect you more and be more interested in interacting with you. It’ll be easier to get donations.

Also, written policies enable critical discussion of the policies. Having the policies lets people make suggestions or share critiques. So that's another large advantage of the policies, even when they make no difference to your actions. People can also learn from your policies and start using some of the same policies for themselves.

It’s also fairly unrealistic that the policies make no difference to your actions. Policies can help you remember and use good ideas more frequently and consistently.

Example Rationality Policies

“When a discussion is hard, start using an idea tree.” This is a somewhat soft, squishy policy. How do you know when a discussion is hard? That's up to your judgment; there are no objective criteria given. This policy could be improved, but as written it's still much better than nothing. It will work sometimes due to your own judgment, and other people who know about your policy can also suggest that a discussion is hard and that it's time to use an idea tree.

A somewhat less vague policy is, “When any participant in a discussion thinks the discussion is hard, start using an idea tree.” In other words, if you think the discussion is tough and a tree would help, you use one. And also, if your discussion partner claims it’s tough, you use one. Now there is a level of external control over your actions. It’s not just up to your judgment.

External control can be triggered by measurements or other parts of reality that are separate from other people (e.g. “if the discussion length exceeds 5000 words, do X”). It can also be triggered by other people making claims or judgments. It’s important to have external control mechanisms so that things aren’t just left up to your judgment. But you need to design external control mechanisms well so that you aren’t controlled to do bad things.
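
As a concrete sketch of the two trigger types just described (the function name, inputs, and threshold are hypothetical illustrations, not an actual policy):

```python
# Sketch of the two external-control triggers described above.
# The threshold and names are hypothetical, not an actual policy.

WORD_LIMIT = 5000  # an objective, measurable trigger (like the example above)

def should_use_idea_tree(discussion_word_count, participants_who_say_hard):
    """Return True if the policy says it's time to start an idea tree.

    Trigger 1: a measurement separate from anyone's judgment (word count).
    Trigger 2: any participant claiming the discussion is hard.
    """
    if discussion_word_count > WORD_LIMIT:
        return True
    if len(participants_who_say_hard) > 0:
        return True
    return False

print(should_use_idea_tree(6200, []))        # True: word count exceeded
print(should_use_idea_tree(1200, ["Bob"]))   # True: a participant says it's hard
print(should_use_idea_tree(1200, []))        # False: neither trigger fired
```

Either trigger alone activates the policy, so the decision isn't left purely to your own judgment.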

It’s also problematic if you dislike or hate something but your policy makes you do it. It’s also problematic to have no policy and just do what your emotions want, which could easily be biased. An alternative would be to set the issue aside temporarily to actively do a lot of introspection and investigation, possibly followed by self-improvement.

A more flexible policy would be, “When any participant in a discussion thinks the discussion is hard, start using at least one option from my Hard Discussion Helpers list.” The list could contain using an idea tree and several other options such as doing grammar analysis or using Goldratt’s evaporating clouds.
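
Continuing the earlier sketch (again with hypothetical names; the list contents just repeat the examples above), the more flexible policy only requires using at least one item from the list:

```python
# Sketch of the flexible policy: when triggered, use at least one option
# from a "Hard Discussion Helpers" list. The list repeats the examples from
# the text above; it's illustrative, not a real, complete list.

HARD_DISCUSSION_HELPERS = {"idea tree", "grammar analysis", "evaporating cloud"}

def flexible_policy_satisfied(helpers_used):
    """Return True if at least one listed helper was actually used."""
    return len(HARD_DISCUSSION_HELPERS & set(helpers_used)) >= 1

print(flexible_policy_satisfied(["grammar analysis"]))     # True
print(flexible_policy_satisfied(["taking a long break"]))  # False: not on the list
```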

More about Policies

If you find your rationality policies annoying to follow, or if they tell you to take inappropriate actions, then the solution is to improve your policy writing skill and your policies. The solution is not to give up on written policies.

If you change policies frequently, you should label them (all of them or specific ones) as being in “beta test mode” or something else to indicate they’re unstable. Otherwise you would mislead people. Note: It’s very bad to post written policies you aren’t going to follow; that’s basically lying to people in an unusually blatant, misleading way. But if you post a policy with a warning that it’s a work in progress, then it’s fine.

One way to dislike a policy is you find it takes extra work to use it. E.g. it could add extra paperwork so that some stuff takes longer to get done. That could be fine and worth it. If it’s a problem, try to figure out lighter weight policies that are more cost effective. You might also judge that some minor things don’t need written policies, and just use written policies for more important and broader issues.

Another way to dislike a policy is that you don't want to do what it says for some reason other than saving time and effort. You actually dislike that action. You think it's telling you to do something biased, bad or irrational. In that case, there is a disagreement between the ideas about rationality that you used to write the policy and your current ideas. This disagreement is important to investigate. Maybe your abstract principles are confused and impractical. Maybe you're rationalizing a bias right now and the policy is right. Either way – whether the policy or the current idea is wrong – there's a significant opportunity for improvement. Finding out about clashes between your general principles and the specific actions you want to take is important, and those issues are worth fixing. You should have your explicit ideas and intuitions in alignment, as well as your abstract and concrete ideas, your big picture and little picture ideas, your practical and intellectual ideas, etc. All of those types of ideas should agree on what to do. When they don't, something is going wrong and you should improve your thinking.

Some people don't value opportunities to improve their thinking because they already have dozens of those opportunities. They're stuck on a different issue than finding opportunities, such as the step of actually coming up with solutions. If that's you, it could explain a resistance to written policies. They would make pre-existing conflicts of ideas within yourself more explicit when you're trying to ignore a too-long list of your problems. Policies could also make it harder to follow the inexplicit compromises you're currently using. They'd make it harder to lie to yourself to maintain your self-esteem. If you have that problem, I suggest that it's worth trying to improve instead of just kind of giving up on rationality. (Also, if you do want to give up on rationality, or your ideas are such a mess that you don't want to untangle them, then maybe EA and CF are both the wrong places for you. Most of the world isn't strongly in favor of rationality and critical discussion, so you'll have an easier time elsewhere. In other words, if you've given up on rationality, then why are you reading this or trying to talk to people like me? Don't try to have it both ways and engage with this kind of article while also being unwilling to try to untangle your contradictory ideas.)



My Early Effective Altruism Experiences

I quit the Effective Altruism forum due to a new rule requiring all new posts and comments be basically put in the public domain without copyright, so anyone could e.g. sell a book of my posts without my consent (they’d just have to give attribution). More info. I had a bunch of draft posts, so I’m posting some of them here with minimal editing. In general, I’m not going to submit them as link posts at EA myself. If you think they should be shared with EA as link posts, please do it yourself. I’m happy for other people to share links to my work at EA or on social media. Please share stuff in whatever ways you think are good to do.

This post covers some of my earlier time at EA, but doesn't discuss some of the later articles I posted there or the responses to them.


I have several ideas about how to increase EA's effectiveness by over 20%. But I don't think they will be accepted immediately. People will find them counter-intuitive, not understand them, disagree with them, etc.

In order to effectively share ideas with EA, I need attention from EA people who will actually read and think about things. I don't know how to get that, and I don't think EA offers any list of steps that I could follow to get it, nor any policy guarantees like "if you do X, we'll do Y" that I could use to bring up ideas. One standard way to get it, which has various other advantages, is to engage in debate (or critical discussion) with someone. However, only one person from EA (who isn't particularly influential) has been willing to try to have a debate or serious conversation with me. By a serious conversation, I mean one that's relatively long and high effort, and which aims at reaching conclusions.

My most important idea about how to increase EA’s effectiveness is to improve EA’s receptiveness to ideas. This would let anyone better share (potential) good ideas with EA.

EA views itself as open to criticism and it has a public forum. So far, no moderator has censored my criticism, which is better than many other forums! However, no one takes responsibility for answering criticism or considering suggestions. It’s hard to get any disagreements resolved by debate at EA. There’s also no good way to get official or canonical answers to questions in order to establish some kind of standard EA position to target criticism at.

When one posts criticism or suggestions, there are many people who might engage, but no one is responsible for doing it. A common result is that posts do not get engagement. This happens to lots of other people besides me, and it happens to posts which appear to be high effort. There are no clear goalposts to meet in order to get attention for a post.

Attention at the EA forum seems to be allocated in pretty standard social hierarchy ways. The overall result is that EA’s openness to criticism is poor (objectively, but not compared to other groups, many of which are worse).

John the Hypothetical Critic

Suppose John has a criticism or suggestion for EA that would be very important if correct. There are three main scenarios:

  1. John is right and EA is wrong.
  2. EA is right and John is wrong.
  3. EA and John are both wrong.

There should be a reasonable way so that, if John is right, EA can be corrected instead of just ignoring John. But EA doesn’t have effective policies to make that happen. No person or group is responsible for considering that John may be right, engaging with John’s arguments, or attempting to give a rebuttal.

It's also really good to have a reasonable way so that, if John is wrong and EA is right, John can find out. EA's knowledge should be accessible so other people can learn what EA knows, why EA is right, etc. This would make EA much more persuasive. EA has many articles which help with this, but if John has an incorrect criticism and is ignored, then he's probably going to conclude that EA is wrong and won't debate him, and lower his opinion of EA (plus people reading the exchange might do the same – they might see John give a criticism that isn't answered and conclude that EA doesn't really care about addressing criticism).

If John and EA are both wrong, it’d also be a worthwhile topic to devote some effort to, since EA is wrong about something. Discussing John’s incorrect criticism or suggestion could lead to finding out about EA’s error, which could then lead to brainstorming improvements.

I’ve written about these issues before with the term Paths Forward.

Me Visiting EA

The first thing I brought up at EA was asking if EA has any debate methodology or any way I could get a debate with someone. Apparently not. My second question was about whether EA has some alternative to debates, and again the answer seemed to be no. I raised the question again, pointing out that the "debate methodology" and "alternative to debate methodology" issues form a complete pair, and that if EA has neither, that's bad. This time, I think some people got defensive about the title, which caused me to get more attention than when my post titles didn't offend people (the incentives there are really bad). The title asked how EA was rational. Multiple replies seemed focused on the title, which I grant was vague, rather than on the body text, which gave details of what I meant.

Anyway, I finally got some sort of answer: EA lacks formal debate or discussion methods but has various informal attempts at rationality. Someone shared a list. I wrote a brief statement of what I thought the answer was and asked for feedback if I got EA’s position wrong. I got it right. I then wrote an essay criticizing EA’s position, including critiques of the listed points.

What happened next? Nothing. No one attempted to engage with my criticism of EA. No one tried to refute any of my arguments. No one tried to defend EA. It’s back to the original problem: EA isn’t set up to address criticism or engage in debate. It just has a bunch of people who might or might not do that in each case. There’s nothing organized and no one takes responsibility for addressing criticism. Also, even if someone did engage with me, and I persuaded them that I was correct, it wouldn’t change EA. It might not even get a second person to take an interest in debating the matter and potentially being persuaded too.

I think I know how to organize rational, effective debates and reach conclusions. The EA community broadly doesn’t want to try doing that my way nor do they have a way they think is better.

If you want to gatekeep your attention, please write down the rules you’re gatekeeping by. What can I do to get past the gatekeeping? If you gatekeep your attention based on your intuition and have no transparency or accountability, that is a recipe for bias and irrationality. (Gatekeeping by hidden rules is related to the rule of man vs. the rule of law, as I wrote about. It’s also related to security through obscurity, a well known mistake in software. Basically, when designing secure systems, you should assume hackers can see your code and know how the system is designed, and it should be secure anyway. If your security relies on keeping some secrets, it’s poor security. If your gatekeeping relies on adversaries not knowing how it works, rather than having a good design, you’re making the security through obscurity error. That sometimes works OK if no one cares about you, but it doesn’t work as a robust approach.)
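
To illustrate the software analogy (a minimal sketch with made-up criteria, not anyone's real gatekeeping rules): a gate whose criteria are objective and published keeps working even when everyone can read them, just as a well-designed secure system stays secure when its design is known.

```python
# Sketch: gatekeeping by objective, published criteria (the opposite of
# security through obscurity). The specific criteria are made up for
# illustration; the point is the gate works even when the rules are public.

PUBLISHED_CRITERIA = {
    "min_effort_words": 200,           # message shows nontrivial effort
    "requires_direct_question": True,  # message asks a direct, answerable question
}

def passes_gate(word_count, asks_direct_question):
    """Return True if a message meets the published criteria.

    Because the criteria are objective and public, knowing them doesn't let
    anyone slip past without actually meeting them, just as a secure system
    shouldn't depend on keeping its design secret.
    """
    if word_count < PUBLISHED_CRITERIA["min_effort_words"]:
        return False
    if PUBLISHED_CRITERIA["requires_direct_question"] and not asks_direct_question:
        return False
    return True

print(passes_gate(450, True))   # True: meets both criteria
print(passes_gate(450, False))  # False: no direct question
print(passes_gate(50, True))    # False: too little effort
```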

I understand that time, effort, attention, engagement, debate, etc., are limited resources. I advocate having written policies to help allocate those resources effectively. Individuals and groups can both do this. You can plan ahead about what kinds of things you think it's good to spend attention on, write down decision making criteria, share them publicly, etc., instead of just leaving it to chance or bias. Using written rationality policies to control some of these valuable resources would let them be used more effectively instead of haphazardly. The high value of the resources is a reason in favor of, not against, governing their use with explicit policies that are put in writing and then critically analyzed. (I think intuition has value too, despite the higher risk of bias, so allocating e.g. 50% of your resources to conscious policies and 50% to intuition would be fine.)

“It’s not worth the effort” is the standard excuse for not engaging with arguments. But it’s just an excuse. I’m the one who has researched how to do such things efficiently, how to save effort, etc., without giving up on rationality. They aren’t researching how to save effort and designing good, effort-saving methods, nor do they want the methods I developed. People just say stuff isn’t worth the effort when they’re biased against thinking about it, not as a real obstacle that they actually want a solution to. They won’t talk about solutions to it when I offer, nor will they suggest any way of making progress that would work if they’re in the wrong.

LW Short Story

Here's a short story as an aside (from memory, so it may have minor inaccuracies). Years ago I was talking with Less Wrong (LW) about similar issues. LW and EA are similar places. I brought up some Paths Forward stuff. Someone said, basically, that he didn't have time to read it, or maybe didn't want to risk wasting his time. I said the essay explains how to engage with my ideas in time-efficient, worthwhile ways: just read this initial material and it'll give you the intellectual methods that enable you to engage with my other ideas in beneficial ways. He said that'd be awesome if true, but he figured I was probably wrong, so he didn't want to risk his time. We appeared to be at an impasse. I had a potential solution with high value that addressed his problems, but he doubted it was correct and didn't want to use his resources to check whether I was right.

My broad opinion is someone in a reasonably large community like LW should be curious and look into things, and if no one does then each individual should recognize that as a major problem and want to fix it.

But I came up with a much simpler, more direct solution.

It turns out he worked at a coffee shop. I offered to pay him the same wage as his job to read my article (or I think it was a specific list of a few articles). He accepted. He estimated how long the stuff would take to read based on word count and we agreed on a fixed number of dollars that I’d pay him (so I wouldn’t have to worry about him reading slowly to raise his payment). The estimate was his idea, and he came up with the numbers and I just said yes.

But before he read it, an event happened that he thought gave him a good excuse to back out. He backed out. He then commented on the matter somewhere that he didn’t expect me to read, but I did read it. He said he was glad to get out of it because he didn’t want to read it. In other words, he’d rather spend an hour working at a coffee shop than an hour reading some ideas about rationality and resource-efficient engagement with rival ideas, given equal pay.

So he was just making excuses the whole time, and actually just didn’t want to consider my ideas. I think he only agreed to be paid to read because he thought he’d look bad and irrational if he refused. I think the problem is that he is bad and irrational, and he wants to hide it.

More EA

My first essay criticizing EA was about rationality policies, how and why they're good, and how they compare to the rule of law. After no one gave any rebuttal or changed their mind, I wrote about my experience with my debate policy. A debate policy is an example of a rationality policy. Although you might expect that conditionally guaranteeing debates would cost time, it has actually saved me time. I explained how it helps me be a good fallibilist using less time. No one responded to give a rebuttal or to make their own debate policy. (One person made a debate policy later. Actually, two people claimed to, but one of them was so bad/unserious that I don't count it. It wasn't designed to actually deal with the basic ideas of a debate policy, and I think it was made in bad faith because the person wanted to pretend to have a debate policy. As one example of what was wrong with it, they just mentioned it in a comment instead of putting it somewhere that anyone would find it or that they could reasonably link to in order to show it to people in the future.)

I don't like even trying to talk about specific issues with EA in this broader context where there's no one to debate, no one who wants to engage in discussion. No one feels responsible for defending EA against criticism (or finding out that EA is mistaken and changing it). I think that one meta issue has priority.

I have nothing against decentralization of authority when many individuals each take responsibility. However, there is a danger when there is no central authority, no individuals take responsibility for things, and there's also a lack of coordination (leading to e.g. a lack of recognition that, out of thousands of people, zero of them dealt with something important).

I think it’s realistic to solve these problems and isn’t super hard, if people want to solve them. I think improving this would improve EA’s effectiveness by over 20%. But if no one will discuss the matter, and the only way to share ideas is by climbing EA’s social hierarchy and becoming more popular with EA by first spending a ton of time and effort saying other things that people like to hear, then that’s not going to work for me. If there is a way forward that could rationally resolve this disagreement, please respond. Or if any individual wants to have a serious discussion about these matters, please respond.

I’ve made rationality research my primary career despite mostly doing it unpaid. That is a sort of charity or “altruism” – it’s basically doing volunteer work to try to make a better world. I think it’s really important, and it’s very sad to me that even groups that express interest in rationality are, in my experience, so irrational and so hard to engage with.

