Controversial Activism Is Problematic

EA mostly advocates for controversial causes, where it knows that a lot of people disagree with it. In other words, there exist lots of people who think EA’s causes are bad and wrong.

AI Alignment, animal welfare, global warming, fossil fuels, vaccinations and universal basic income are all examples of controversies. There are many people on each side of the debate. There are also experts on each side of the debate.

Some causes do involve less controversy, such as vitamin A supplements or deworming. I think that, in general, less controversial causes are better, independent of whether they’re correct. It’s better when people broadly agree on what to do, and then do it, instead of trying to proceed while a lot of opponents put effort into working against you. I think EA has far too little respect for getting widespread agreement and cooperation, and too much willingness to proceed with action on issues where there are a lot of people taking action on the other side who you have to fight against. This comes up most with political issues but also applies to e.g. AI Alignment.

I’m not saying it’s never worth it to try to proceed despite large disagreements, and win the fight. But it’s something people should be really skeptical of and try to avoid. It has huge downsides. There’s a large risk that you’re in the wrong and are actually doing something bad. And even if you’re right, the efforts of your opponents will cancel out a lot of your effort. Also, proceeding with action when people disagree basically means you’ve given up on persuasion working any time soon. In general, focusing on persuasion and trying to make better, more reasonable arguments that can bring people together is much better than giving up on talking it out and just trying to win a fight. EA values persuasion and rational debate too little.


Suppose you want to make the world better in the short term without worrying about a bunch of philosophy. You’d try to understand the situation you’re in, what your goal is, what methods would work well, what is risky, etc. So how can we analyze the big picture in a fairly short way that doesn’t require advanced skill to make sense of?

We can look at the world and see there are lots of disagreements. If we try to do something that lots of people disagree with, we might be doing something bad. It’s risky. Currently in the world, a ton of people on both sides of many controversies are doing this. Both sides have tons of people who feel super confident that they’re right, and who donate or get involved in activism. This is especially common with political issues.

So if you want to make the world better, two major options are:

  • Avoid controversy
  • Help resolve controversy

There could be exceptions, but these are broadly better options than taking sides and fighting in a controversy. If there are exceptions, correctly knowing about them would probably require a bunch of intellectual skill and study, and wouldn’t be compatible with looking for quicker, more accessible wins. A lot of people think their side of their cause is a special exception when it isn’t.

The overall world situation is that there are far too many confident people who are far too eager to fight instead of seeking harmony, cooperation, working together, etc. Persuasion is what enables people to be on the same team instead of working against each other.

Causes related to education and sharing information can help resolve controversy, especially when they’re done in a non-partisan, unbiased way. Some education or information sharing efforts are clearly biased to help one side win, rather than focused on being fair and helpful. “Raising awareness” often means raising awareness of your key talking points and why your side is right. Propaganda efforts are very different from being neutral and helping enable people to form better opinions.

Another approach to resolving controversy is to look at intellectual thought leaders, and how they debate and engage with each other (or don’t), and try to figure out what’s going wrong there and what can be done about it.

Another approach is to look at how regular people debate each other and talk about issues, and try to understand why people on both sides aren’t being persuaded and try to come up with some ideas to resolve the issue. That means coming to a conclusion that most people on both sides can be happy with.

Another approach is to study philosophy and rationality.

Avoiding controversy is a valid option too. Helping people avoid blindness by getting enough Vitamin A is a pretty safe thing to work on if you want to do something good with a low risk that you’re actually on the wrong side.

A common approach people try to use is to have some experts figure out which sides of which issues are right. Then they feel safe believing they’re right because they trust that some smart people already looked into the matter really well. This approach doesn’t make much sense in the common case that there are experts on both sides who disagree with each other. Why listen to these experts instead of some other experts who say other things? Often people already like a particular conclusion or cause, then find experts who agree with it. The experts provide justification for a pre-existing opinion rather than actually guiding the person’s thinking. Listening to experts can also run into issues related to irrational, biased gatekeeping about who counts as an “expert”.

In general, people are just way too eager to pick a side and fight for it instead of trying to transcend, avoid or fix such fighting. They don’t see cooperation, persuasion or harmony as powerful or realistic enough tools. They are content to try to beat opponents. And they don’t seem very interested in the symmetry: they think they’re right and their cause is worth fighting for, but so do many people on the other side.

If your cause is really better, you should be able to find some sort of asymmetric advantage for your side. If it can give you a quick, clean victory that’s a good sign. If it’s a messy, protracted battle, that’s a sign that your asymmetric advantage wasn’t good enough and you shouldn’t be so confident that you know what you’re talking about.



A Non-Status-Based Filter

Asking people if they want to have a serious conversation is a way of filtering, or gatekeeping, which isn’t based on social status. Regardless of one’s status, anyone can opt in. This does require making the offer to large groups, random people, or something else that avoids social status. If you just make the offer to people you like, then your choice of who to offer conversations to is probably status based.

This might sound like the most ineffective filter ever. People can just say “yes I want to pass your filter” and then they pass. But in practice, I find it effective – the majority of people decline (or don’t reply, or reply about something else) and are filtered out.

You might think it only filters out people who weren’t going to have a conversation with you anyway. However, people often converse because they’re baited into it, triggered, defensive, caught up in trying to correct someone they think is wrong, etc. Asking people to make a decision about whether they want to be in a conversation can help them realize that they don’t want to. That’s beneficial for both you and them. However, I’ve never had one of them thank me for it.

A reason people dislike this filter is they associate all filters with status and therefore interpret being filtered out as an attack on their status – a claim they are not good enough in some way. But that’s a pretty weird interpretation with this specific filter.

This filter is, in some sense, the nicest filter ever. No one is ever filtered out who doesn’t want to be filtered out. Only this filter and variants of it have that property. Filtering on anything else, besides whether the person wants to opt in or out, would filter out some people who prefer to opt in. However, no one has ever reacted to me like it’s a nice filter. Many reactions are neutral, and some negative, but no one has praised me for being nice.

Useful non-status-based filters are somewhat difficult to come by and really important/valuable. Most filters people use are some sort of proxy for social status. That’s one of the major sources of bias in the world. What people pay attention to – what gets to them through gatekeeping/filtering – is heavily biased towards status. So it’s hard for them to disagree with high status ideas or learn about low status ideas (such as outliers and innovation).



Hard and Soft Rationality Policies

I have two main rationality policies that are written down:

  1. Debate Policy
  2. Paths Forward Policy

I have many other smaller policies that are written down somewhere in some form, like not misquoting, or giving direct answers to direct questions (e.g., say "yes" or "no" first when answering a yes or no question, then write extra stuff if you want, but don't skip the direct answer).

A policy I thought of the other day, and recognized as worth writing down, is my debate policy sharing policy. I've had this policy for a long time. It's important but it isn't written in my debate policy.

If someone seems to want to debate me, but they don't invoke my debate policy, then I should link them to the debate policy so they have the option to use it. I shouldn't get out of the debate based on them not finding my debate policy.

In practice, I link the policy to a lot of people who I doubt want to debate me. I like sharing it. That's part of the point. It’s useful to me. It helps me deal with some situations in an easy way. I get into situations where I want to say or explain something, but writing it out every time would be too much work. Since some of the same things come up over and over, I can write them once and then share links instead of rewriting the same points. My debate policy says some of the things I frequently want to tell people, and linking it lets me repeat those things with very low effort.

One can imagine someone who put up a debate policy and then didn't mention it to critics who didn't ask for a debate in the right words. One can imagine someone who likes having the policy so they can claim they're rational, but they'd prefer to minimize actually using it. That would be problematic. I wrote my debate policy conditions so that if someone actually meets them, I'd like to debate. I don't dread that or want to avoid it. If you have a debate policy but hope people don't use it, then you have a problem to solve.

If I'm going to ignore a question or criticism from someone I don't know, then I want to link my policy so they have a way to fix things if I was wrong to ignore them. If I don't link it, and they have no idea it exists, then the results are similar to not having the policy. It doesn't function as a failsafe in that case.

Some policies offer hard guarantees and some are softer. What enforces the softer ones so they mean something, instead of just being violated as much as one feels like? The answer is generic, hard guarantees, like a debate policy, which can be used to address doing poorly at any softer guarantee.

For example, I don't have any specific written guarantee about linking people to my debate policy. There's an implicit (and now explicit in this post) soft guarantee that I should make a reasonable effort to share it with people who might want to use it. If I do poorly at that, someone could invoke my debate policy over my behavior. But I don't care much about making a specific, hard guarantee about debate policy link sharing because I have the debate policy itself as a failsafe to keep me honest. I think I do a good job of sharing my debate policy link, and I don't know how to write specific guarantees to make things better. It seems like something where a good faith effort is needed, which is hard to define. That's fine for some issues as long as you also have some clearer, more objective, generic guarantees in case you screw up on the fuzzier stuff.

Besides hard and soft policies, we could also distinguish policies from tools. Like I have a specific method of having a debate where people choose what key points they want to put in the debate tree. I have another debate method where people say two things at a time (it splits the conversation into two halves, one led by each person). I consider those tools. I don't have a policy of always using those things, or using those things in specific conditions. Instead, they're optional ways of debating that I can use when useful. There's a sort of soft policy there: use them when it looks like a good idea. Making a grammar tree is another tool, and I have a related soft policy of using that tool when it seems worthwhile. Having a big toolkit with great intellectual tools, along with actually recognizing situations for using them, is really useful.



Friendliness or Precision

I quit the Effective Altruism forum due to a new rule requiring posts and comments be basically put in the public domain without copyright. More info. I had a bunch of draft posts, so I’m posting some of them here with minimal editing.


In a debate, if you’re unfriendly and you make a lot of little mistakes, you should expect the mistakes to (on average) be biased for your side and against their side. In general, making many small, biased mistakes ruins debates dealing with complex or subtle issues. It’s too hard to fix them all, especially considering you’re the guy who made them (if you had the skill to fix them all, you could have used that same skill to avoid making some of them).

In other words, if you dislike someone, being extremely careful, rigorous and accurate with your reasoning provides a defense against bias. Without that defense, you don’t have much of a chance.

If you have a positive attitude and are happy to hear about their perspective, that helps prevent being biased against them. If you have really high intellectual standards and avoid making small mistakes, that helps prevent bias. If you have neither of those things, conversation doesn’t work well.



Attention Filtering and Debate

I quit the Effective Altruism forum due to a new rule requiring posts and comments be basically put in the public domain without copyright. More info. I had a bunch of draft posts, so I’m posting some of them here with minimal editing.


People skim and filter. Gatekeepers and many other types of filters end up being indirect proxies for social status much more than they are about truth seeking.

Filtering isn’t the only problem though. If you have some credentials – awards, a PhD, a popular book, thousands of fans – people often still won’t debate you. Also, I certainly get through initial filtering sometimes. People talk with me some, and a lot more people read some of what I say.

After you get through filters, you run into problems like people still not wanting to debate or not wanting to put in enough effort to understand your point. We could call this secondary filtering. Maybe if you get through five layers of filters, then they’ll debate. Or maybe not. I think some of the filters are generated ad hoc because they don’t want to debate or consider (some types of) ideas that disagree with their current ideas. People can keep making up new excuses as necessary.

Why don’t people want to debate? Often because they’re bad at it.

And they know – even if they don’t consciously admit it – that debating is risky to their social status, and that the expected result of debating is a loss of status.

And they know that, if they lose the debate, they will then face a problem. They’ll be conflicted. They will partly want to change their mind, but part of them won’t want to change their mind. So they don’t want to face that kind of conflict because they don’t know how to deal with it, so they’d rather avoid getting into that situation.

Also they already have a ton of urgent changes to make in their lives. They already know lots of ways they’re wrong. They already know about many mistakes. So they don’t exactly need new criticism. Adding more issues to the queue isn’t valuable.

All of that is fine, but on the other hand, anyone who admits it is no thought leader. So people don’t want to admit it. And if an intellectual position has no thought leaders capable of defending it, that’s a major problem. So people make excuses, pretend someone else will debate if debate is merited, shift responsibility to others (usually not to specific people), etc.

Debating is a status risk, a self-esteem risk, and a hard activity. And maybe they don’t want to learn about (even more) errors, which would lead to thinking they should change, which is a hard thing they may fail at (and which may further harm status and self-esteem, and be distracting and unpleasant).



Betting Your Career

I quit the Effective Altruism forum due to a new rule requiring posts and comments be basically put in the public domain without copyright. More info. I had a bunch of draft posts, so I’m posting some of them here with minimal editing.


People bet their careers on various premises, outside their own expertise. E.g., AGI (alignment) researchers commonly bet on some epistemology without being experts on epistemology who have actually read Popper and concluded, in their own judgment, that he’s wrong.

So you might expect them to be interested in criticism of those premises. Shouldn’t they want to investigate the risk?

But that depends on what you value about your career.

If you want money and status, and not to have to make changes, then maybe it’s safer to ignore critics who don’t seem likely to get much attention.

If you want to do productive work that’s actually useful, then your career is at risk.

People won’t admit it, but many of them don’t actually care that much about whether their career is productive. As long as they get status and money, they’re satisfied.

Also, a lot of people lack confidence that they can do very productive work whether or not their premises are wrong.

Actually, having wrong but normal/understandable/blameless premises has big advantages: you won’t come up with important research results but it’s not your fault. If it comes out that your premises were wrong, you did the noble work of investigating a lead that many people believed promising. Science and other types of research always involve investigating many leads that don’t turn out to be important. So if you find a lead people want investigated and then do nothing useful, and it turns out to be an important lead, then some other investigators outcompeted you. People could wonder why you didn’t figure out anything about the lead you worked on. But if the lead you work on turns out to be a dead end, then the awkward questions go away. So there’s an advantage to working on dead-ends as long as other people think it’s a good thing to work on.



AGI Alignment and Karl Popper

I quit the Effective Altruism forum due to a new rule requiring posts and comments be basically put in the public domain without copyright. I had a bunch of draft posts, so I’m posting some of them here with light editing.


On certain premises, which are primarily related to the epistemology of Karl Popper, artificial general intelligences (AGIs) aren’t a major threat. I tell you this as an expert on Popperian epistemology, which is called Critical Rationalism.

Further, approximately all AGI research is based on epistemological premises which contradict Popperian epistemology.

In other words, AGI research and AGI alignment research are both broadly premised on Popper being wrong. Most of the work being done is an implicit bet that Popper is wrong. If Popper is right, many people are wasting their careers, misdirecting a lot of donations, incorrectly scaring people about existential dangers, etc.

You might expect that alignment researchers would have done a literature review, found semi-famous relevant thinkers like Popper, and written refutations of them before being so sure of themselves and betting so much on the particular epistemological premises they favor. I haven’t seen anything of that nature, and I’ve looked a lot. If it exists, please link me to it.

To engage with and refute Popper requires expertise about Popper. He wrote a lot, and it takes a lot of study to understand and digest it. So you have three basic choices:

  • Do the work.
  • Rely on someone else’s expertise who agrees with you.
  • Rely on someone else’s expertise who disagrees with you.

How can you use the expertise of someone who disagrees with you? You can debate with them. You can also ask them clarifying questions, discuss issues with them, etc. Many people are happy to help explain ideas they consider important, even to intellectual opponents.

To rely on the expertise of someone on your side of the debate, you endorse literature they wrote. They study Popper, they write down Popper’s errors, and then you agree with them. Then when a Popperian comes along, you give them a couple citations instead of arguing the points yourself.

There is literature criticizing Popper. I’ve read a lot of it. My judgment is that the quality is terrible. And it’s mostly written by people who are pretty different from the AI alignment crowd.

There’s too much literature on your side to read all of it. What you need (to avoid doing a bunch of work yourself) is someone similar enough to you – someone likely to reach the same conclusions you would reach – to look into each thing. One person is potentially enough. So if someone who thinks similarly to you reads a Popper criticism and thinks it’s good, it’s somewhat reasonable to rely on that instead of investigating the matter yourself.

Keep in mind that the stakes are very high: potentially lots of wasted careers and dollars.

My general take is you shouldn’t trust the judgment of people similar to yourself all that much. Being personally well read regarding diverse viewpoints is worthwhile, especially if you’re trying to do intellectual work like AGI-related research.

And there aren’t a million well known and relevant viewpoints to look into, so I think it’s reasonable to just review them all yourself, at least a bit via secondary literature with summaries.

There are much more obscure viewpoints that are worth at least one person looking into, but most people can’t and shouldn’t try to look into most of those.

Gatekeepers like academic journals or university hiring committees are really problematic, but the least you should do is vet stuff that gets through gatekeeping. Popper was also respected by various smart people, like Richard Feynman.

Mind Design Space

The AI Alignment view claims something like:

Mind design space is large and varied.

Many minds in mind design space can design other, better minds in mind design space. Which can then design better minds. And so on.

So, a huge number of minds in mind design space work as starting points to quickly get to extremely powerful minds.

Many of the powerful minds are also weird, hard to understand, very different than us including regarding moral ideas, possibly very goal directed, and possibly significantly controlled by their original programming (which likely has bugs and literally says different things, including about goals, than the design intent).

So AGI is dangerous.

There is an epistemology which contradicts this, based primarily on Karl Popper and David Deutsch. It says that actually mind design space is like computer design space: sort of small. This shouldn’t be shocking since brains are literally computers, and all minds are software running on literal computers.

In computer design, there is a concept of universality or Turing completeness. In summary, when you start designing a computer and adding features, after very few features you get a universal computer. So there are only two types of computers: extremely limited computers and universal computers. This makes computer design space less interesting or relevant. We just keep building universal computers.

Every computer has a repertoire of computations it can perform. A universal computer has the maximal repertoire: it can perform any computation that any other computer can perform. You might expect universality to be difficult to get and require careful designing, but it’s actually difficult to avoid if you try to make a computer powerful or interesting.

Universal computers do vary in other design elements, besides what computations they can perform, such as how large they are. This is fundamentally less important than what computations they can do, but does matter in some ways.

There is a similar theory about minds: there are universal minds. (I think this was first proposed by David Deutsch, a Popperian intellectual.) The repertoire of things a universal mind can think (or learn, understand, or explain) includes anything that any other mind can think. There’s no reasoning that some other mind can do which it can’t do. There’s no knowledge that some other mind can create which it can’t create.

Further, human minds are universal. An AGI will, at best, also be universal. It won’t be super powerful. It won’t dramatically outthink us.

There are further details but that’s the gist.

Has anyone on the AI alignment side of the debate studied, understood and refuted this viewpoint? If so, where can I read that (and why did I fail to find it earlier)? If not, isn’t that really bad?



Altruism Contradicts Liberalism

I quit the Effective Altruism forum due to a new rule requiring posts and comments be basically put in the public domain without copyright. I had a bunch of draft posts, so I’m posting some of them here with light editing.


Altruism means (New Oxford Dictionary):

the belief in or practice of disinterested and selfless concern for the well-being of others

Discussion about altruism often involves being vague about a specific issue. Is this selfless concern self-sacrificial? Is it bad for the self or merely neutral? This definition doesn’t specify.

The second definition does specify but isn’t for general use:

Zoology behavior of an animal that benefits another at its own expense

Multiple dictionaries fit the pattern of not specifying self-sacrifice (or not) in the main definition, then bringing it up in an animal-focused definition.

New Oxford’s thesaurus is clear. Synonyms for altruism include:

unselfishness, selflessness, self-sacrifice, self-denial

Webster’s Third suggests altruism involves lack of calculation, and doesn’t specify whether it’s self-sacrificial:

uncalculated consideration of, regard for, or devotion to others' interests sometimes in accordance with an ethical principle

EA certainly isn’t uncalculated. EA does stuff like mathematical calculations and cost/benefit analysis. Although the dictionary may have meant something more like shrewd, self-interested, Machiavellian calculation. If so, they really shouldn’t try to put so much meaning into one fairly neutral word like that without explaining what they mean.

Macmillan gives:

a way of thinking or behaving that shows you care about other people and their interests more than you care about yourself

Caring more about their interests than yourself suggests self-sacrifice, a conflict of interest (where decisions favoring you or them must be made), and a lack of win-win solutions or mutual benefit.

Does EA have any standard, widely read and accepted literature which:

  • Clarifies whether it means self-sacrificial altruism or whether it believes its “altruism” is good for the self?
  • Refutes (or accepts!?) the classical liberal theory of the harmony of men’s interests?

Harmony of Interests

Is there any EA literature regarding altruism vs. the (classical) liberal harmony of interests doctrine?

EA believes in conflicts of interest between men (or between individual and total utility). For example, William MacAskill writes in The Definition of Effective Altruism:

Unlike utilitarianism, effective altruism does not claim that one must always sacrifice one’s own interests if one can benefit others to a greater extent.[35] Indeed, on the above definition effective altruism makes no claims about what obligations of benevolence one has.

I understand EA’s viewpoint to include:

  • There are conflicts between individual utility and overall utility (the impartial good).
  • It’s possible to altruistically sacrifice some individual utility in a way that makes overall utility go up. In simple terms, you give up $100 but it provides $200 worth of benefit to others.
  • When people voluntarily sacrifice some individual utility to altruistically improve overall utility, they should do it in (cost) effective ways. They should look at things like lives saved per dollar. Charities vary dramatically in how much overall utility they create per dollar donated.
  • It’d be good if some people did some effective altruism sometimes. EA wants to encourage more of this, although it doesn’t want to be too pressuring, so it does not claim that large amounts of altruism are a moral obligation for everyone. If you want to donate 10% of your income to cost effective charities, EA will say that’s great instead of saying you’re a sinner because you’re still deviating from maximizing overall utility. (EA also has elements which encourage some members to donate a lot more than 10%, but that’s another topic.)

MacAskill also writes:

Finally, unlike utilitarianism, effective altruism does not claim that the good equals the sum total of wellbeing. As noted above, it is compatible with egalitarianism, prioritarianism, and, because it does not claim that wellbeing is the only thing of value, with views on which non-welfarist goods are of value.[38]

EA is compatible with many views on how to calculate overall utility, not just the view that you should add up every individual utility. In other words, EA is not based on a specific overall/impersonal utility function. EA also is not based on advocating that individuals have any particular individual utility function, or on any claim that the world population currently has a certain distribution of individual utility functions.

All of this contradicts the classical liberal theory of the harmony of men’s (long term, rational) interests. And doesn’t engage with it. They just seem unaware of the literature they’re disagreeing with (or they’re aware and refusing to debate with it on purpose?), even though some of it is well known and easy to find.

Total Utility Reasoning and Liberalism

I understand EA to care about total utility for everyone, and to advocate people altruistically do things which have lower utility for themselves but which create higher total utility. One potential argument is that if everyone did this then everyone would have higher individual utility.

A different potential approach to maximizing total utility is the classical liberal theory of the harmony of men’s interests. It says, in short, that there is no conflict between following self-interest and maximizing total utility (for rational men in a rational society). When there appears to be a conflict, so that one or the other must be sacrificed, there is some kind of misconception, distortion or irrationality involved. That problem should be addressed rather than accepted as an inherent part of reality that requires sacrificing either individual or total utility.

According to the liberal harmony view, altruism claims there are conflicts between the individual and society which actually don’t exist. Altruism therefore stirs up conflict and makes people worse off, much like the Marxist class warfare ideology (which is one of the standard opponents of the harmony view). Put another way, spreading the idea of conflicts of interest is an error that lowers total utility. The emphasis should be on harmony, mutual benefit and win/win solutions, not on altruism and self-sacrifice.

It’s really bad to ask people to make tough, altruistic choices if such choices are unnecessary mistakes. It’s bad to tell people that getting a good outcome for others requires personal sacrifices if it actually doesn’t.

Is there any well-known, pre-existing EA literature which addresses this, including a presentation of the harmony view that its advocates would find reasonably acceptable? I take it that EA rejects the liberal harmony view for some reason, which ought to be written down somewhere. (Or they’re quite ignorant, which would be very unreasonable for the thought leaders who developed and lead EA.) I searched the EA forum and it looks like the liberal harmony view has never been discussed, which seems concerning. I also did a web search and found nothing regarding EA and the liberal harmony of interests theory. I don’t know where or how else to do an effective EA literature search.



Harmony, Capitalism and Altruism

I quit the Effective Altruism forum due to a new rule requiring posts and comments be basically put in the public domain without copyright. I had a bunch of draft posts, so I’m posting some of them here with light editing.


Are there conflicts of interest, or is mutual benefit always possible? This is one of the most important questions in political philosophy.

The belief in conflicts of interest leads to a further question. Who should win when there’s a conflict? One view is that individuals should get worse outcomes for the benefit of the group. Another view is that individuals should be prioritized over the group.

Why would one advocate worse outcomes for the group? That might sound odd initially. One reason is that it seems to be implied by individual freedom and individual rights. If each person has rights and freedom, then he’s free to maximize his interests within that framework. There’s nothing you can do, besides asking nicely and trying to make persuasive arguments (which historically isn’t very effective), to get people to sacrifice their interests for the sake of others.

One consequence is the altruist-collectivist side of the debate has often considered rejecting individual freedom or some individual rights. What if most people won’t voluntarily act for the benefit of the group, to create a paradise society with the highest overall utility (the most good for the most people, or something along those lines)? Then some people will advocate violently forcing them.

Because there appears to be a conflict between the good of the group and the rights and freedoms of the individual, altruists have often advocated restricting the rights and freedoms of the individual. Sometimes they’ve used violence, in the name of the greater good, and killed millions. That kind of massive violence has never led to good results for the group, though it has led to somewhat good results for a few individuals who end up being wealthy rulers. There have always been questions about whether communist revolutionary leaders actually care about the welfare of everyone or are just seeking power so they can be corrupt and get personal luxuries. Historically, collectivist societies tend to be plagued by noticeably more corruption than the more individualist democracies have. Violence and corruption are linked together in some ways. It’s harder to profit from corruption if individuals have rights and society won’t let you get away with violating their rights to take their stuff.

Individualism

A rather different viewpoint is that we’re all fallible, we each individually have limited knowledge, and we can only coordinate with others a limited amount. We shouldn’t try to design paradise by looking at society from the perspective of an omniscient god. We have to consider decision making and action from the point of view of an individual. Instead of trying to have some wise philosopher kings or central planners telling everyone what to do, we need a system where individuals can figure out what to do based on their own situation and the knowledge they have. The central planner approach doesn’t work well because the planners don’t have enough detailed knowledge of each individual’s life circumstances, and can’t do a good enough job of optimizing what’s good for them. To get a good outcome for society, we need to use the brainpower of all its members, not just a few leaders. We have no god with infinite brainpower to lead us.

So maybe the best thing we can do is have each individual pay attention to and try to optimize the good for himself, while following some rules that prevent him from harming or victimizing others.

In total, there is one brain per person. A society with a million members has a million brains. So how much brainpower can be allocated to getting a good outcome for each person? On average, at most, one brain worth of brainpower. What’s the best way to assign brainpower to be used to benefit people? Should everyone line up, then each person looks 6 people to his right, and uses his brainpower to optimize that person’s life? No. It makes much more sense for each brain to be assigned the duty of optimizing the life of the person who that brain is physically inside of.

You can get a rough approximation of a good society by having each person make decisions for themselves and run their own lives, while prohibiting violence, theft and fraud.

Perhaps you can get efficiency gains with organized, centralized planning done by specialists – maybe they can come up with some ideas that are useful to many people. Or maybe people can share ideas in a decentralized way. There are many extra details to consider.

Coordination

Next let’s consider coordination between people. One model for how to do that is called trade. I make shoes, and you make pants. We’d each like to use a mix of shoes and pants, not just the thing we make. So I trade you some of my shoes for some of your pants. That trade makes us both better off. This is the model of voluntary trade for mutual benefit. It’s also the model of specialization and division of labor. And what if you make hats but I don’t want any hats, but you do want some of my shoes? That is the problem money solves. You can sell hats to someone else for money, then trade money for my shoes, and then I can trade that money to someone else for something I do want, e.g. shirts.

The idea here is that each individual makes sure each trade he participates in benefits him. If a trade doesn’t benefit someone, it’s his job to veto the trade and opt out. Trades only happen if everyone involved opts in. In this way, every trade benefits everyone involved (according to their best judgment using their brainpower, which will sometimes be mistaken), or at least is neutral and harmless for them. So voluntary trade raises the overall good in society – each trade raises the total utility score for that society. (So if you want high total utility, maybe you should think about how to increase the amount of trading that happens. Maybe that would do more good than donating to charity. And that’s a “real” maybe – I mean it’s something worth considering and looking into, not that I already reached a conclusion about it. And if EA has not looked into or considered that much, then I think that’s bad and shows a problem with EA, independent of whether increasing trade is a good plan.)

High Total Utility

It’s fairly hard to score higher on total good by doing something else besides individual rights plus voluntary trade and persuasion (meaning sharing ideas on a voluntary basis).

Asking people to sacrifice their self-interest usually results in lower total good, not higher total good. Minor exceptions, like some small voluntary donations to charity, may help raise total good a bit, though they may not. To the extent people donate due to social signaling or social pressure (rather than actually thinking a charity can use it better than they can) donations are part of some harmful social dynamics that are making society worse.

Donations or Trade

But many people look at this and say “Sometimes Joe could give up a pair of old pants that he doesn’t really need that’s just sitting around taking up space, and give it to Bob, who would benefit from it and actually wear it. The pants only have a small value to Joe, and if he would sacrifice that small value, Bob would get a large value, thus raising overall utility.”

The standard pro-capitalist rebuttal is that there’s scope for a profitable trade here. Also, the scenario was phrased from the perspective of an omniscient god, central planner or philosopher king. Joe needs to actually know that Bob needs a pair of used pants, and Bob needs to know that Joe has an extra pair. And Joe needs to consider the risk that several of the pants he currently wears become damaged in the near future in which case he’d want to wear that old pair again. And Bob needs to consider the risk that he’s about to be gifted a bunch of pairs of new pants from other people so he wouldn’t want Joe’s pants anyway.

But let’s suppose they know about all this stuff and still decide that, on average, taking into account risk and looking at expectation values, it’s beneficial for Bob to have the pants, not Joe. We can put numbers on it. It’s a $2 negative for Joe, but a $10 gain for Bob. That makes a total profit (increase in total utility) of $8 if Joe hands over the pants.

If handing over the pants increases total good by $8, how should that good be divided up? Should $10 of it go to Bob, and -$2 of it go to Joe? That’s hardly fair. Why should Bob get more benefit than the increase in total good? Why should Joe sacrifice and come out behind? It would be better if Bob paid $2 for the pants so Bob benefits by $8 and Joe by $0. That’s fairer. But is it optimal? Why shouldn’t Joe get part of the benefit? As a first approximation, the fairest outcome is that they split the benefit evenly. This requires Bob to pay $6 for the pants. Then Joe and Bob each come out ahead by $4 of value compared to beforehand.
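To make that arithmetic concrete, here’s a minimal sketch in Python. The numbers are just the hypothetical Joe/Bob valuations from above, and the helper function name is made up:

    # Joe (seller) values the pants at $2; Bob (buyer) values them at $10.
    # Splitting the gains from trade evenly means picking the price halfway
    # between the two valuations.

    def even_split_price(seller_value, buyer_value):
        """Price at which seller and buyer gain equally from the trade."""
        assert buyer_value > seller_value, "no mutually beneficial trade"
        return (seller_value + buyer_value) / 2

    joe_value = 2    # what keeping the pants is worth to Joe
    bob_value = 10   # what getting the pants is worth to Bob

    price = even_split_price(joe_value, bob_value)  # 6.0
    joe_gain = price - joe_value                    # 4.0
    bob_gain = bob_value - price                    # 4.0
    total_gain = joe_gain + bob_gain                # 8.0 = bob_value - joe_value

    print(price, joe_gain, bob_gain, total_gain)    # 6.0 4.0 4.0 8.0

At a $2 price the whole $8 gain goes to Bob; at a $6 price it’s split $4 and $4, matching the numbers above.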

There are objections. How are Joe and Bob going to find out about each other and make this trade happen? Maybe they are friends. But there are a lot more non-friends in society than friends, so if you only trade with your friends then a lot of mutually beneficial trades won’t happen. So maybe a middleman like a used clothing store can help – Joe sells his pants to the used clothing store where Bob later finds and buys them. The benefit is split up between Joe, Bob and the store. As a first approximation, we might want to give a third of the benefit to each party. In practice, used clothing stores often don’t pay very much for clothing and don’t charge very high profit margins, so Bob might get the largest share of the benefit. Also the overall benefit is smaller now because there are new costs like store employees, store lighting, marketing, and the building the store is in. Those costs may be worth it because otherwise Joe and Bob never would have found each other and made a trade, so a smaller benefit is better than no benefit. Those costs are helping deal with the problem of limited knowledge and no omniscient coordinator – coordination and finding beneficial trades actually takes work and has a downside. Some trades that would be beneficial if they took zero effort actually won’t work because the cost of the trading partners finding each other (directly or indirectly through a middle man) costs more than the benefit of the trade.

Not Having Enough Money

What if Bob doesn’t have $6 to spare? One possibility is a loan. A loan would probably come from a bank, not from Joe – this is an example of specialization and division of labor – Joe isn’t good at loans, and a bank that handles hundreds of loans can have more efficient, streamlined processes. (In practice today, our banks have a lot of flaws and it’s more typical to get small loans from credit cards, which also have flaws. I was making a theoretical point.)

If Bob is having a hard time, but it’s only temporary, then a bank can loan him some money and he can pay it back later with interest. That can be mutually beneficial. But not everyone pays their loans back, so the bank will have to use the limited information it has to assess risk.

Long Term Poverty

What if Bob is unlikely to have money to spare in the future either? What if his lack of funds isn’t temporary? That raises the question of why.

Is Bob lazy and unproductive? Does he refuse to work, refuse to contribute to society and create things of value to others, but he wants things that other people worked to create like pants? That anti-social attitude is problematic under both capitalist and altruistic approaches. Altruism says he should sacrifice, by accepting the disutility of working, in order to benefit others. Capitalism gives him options. He can trade the disutility of working to get stuff like pants, if he wants to. Or he can decide the disutility of continuing to wear old pants is preferable to the disutility of working. Capitalism offers an incentive to work then lets people make their own choices.

It’s better (in some ways) if Joe trades pants to someone who works to create wealth that can benefit society, rather than someone who sits around choosing not to work. Joe should reward and incentivize people who participate in productive labor. That benefits both Joe (because he can be paid for his pants instead of give them away) and also society (which is better off in aggregate if more people work).

What if Bob is disabled, elderly, or unlucky, rather than lazy? There are many possibilities including insurance, retirement savings, and limited amounts of charitable giving to help out as long as these kinds of problems aren’t too common and there isn’t too much fraud or bad faith (e.g. lying about being disabled or choosing not to save for retirement on purpose because you know people will take pity on you and help you out later, so you can buy more alcohol and lotto tickets now).

Since the central planner approach doesn’t work well, one way to approach altruism is as some modifications on top of a free market. We can have a free market as a primary mechanism, and then encourage significant amounts of charitable sacrifice too. Will that create additional benefit? That is unclear. Why should Joe give his pants to Bob for free instead of selling them for $6 so that Joe and Bob split the benefit evenly? In the general case, he shouldn’t. Splitting the benefit – trade – makes more sense than charity.

Liberalism’s Premise

But pretty much everything I’ve said so far has a hidden premise which is widely disputed. It’s all from a particular perspective. The perspective is sometimes called classical liberalism, individualism, the free market or capitalism.

The hidden premise is that there are no conflicts of interest between people. This is often stated with some qualifiers, like that the people have to be rational, care about the long term not just the short term and live in a free, peaceful society. Sometimes it’s said that there are no innate, inherent or necessary conflicts of interest. The positive way of stating it is the harmony of interests theory.

An inherent conflict would mean Joe has to lose for Bob to win. And the win for Bob might be bigger than the loss for Joe. In other words, for some reason, Bob can’t just pay Joe $6 to split the benefit. Either Joe can get $2 of benefit from keeping that pair of pants, or Bob can get $10 if Joe gives it to him (or perhaps if Bob takes it), and there are no other options, so there’s a conflict. In this viewpoint, there have to be winners and losers. Not everything can be done for mutual benefit using a win/win approach or model. Altruism says Joe probably won’t want to give up the pants for nothing, but he should do it anyway for the greater good.

The hidden premise of altruism is that there are conflicts of interest, while the hidden premise of classical liberalism is that there are no necessary, rational conflicts of interest.

I call these things hidden premises but they aren’t all that hidden. There are books talking about them explicitly and openly. They aren’t well known enough though. The Marxist class warfare theory is a conflicts of interests theory, which has been criticized by the classical liberals who advocated a harmony of interests theory that says social harmony can be created by pursuing mutual benefit with no losers or sacrificial victims (note: it’s later classical liberals who criticized Marxism; classical liberalism is older than Marxism). Altruists sometimes openly state their belief in a conflict of interests viewpoint, but many of them don’t state that or aren’t even aware of it.

Put another way, most people have tribalist viewpoints. The altruists and collectivists think there are conflicts between the individual and group, and they want the group to win the conflict.

People on the capitalist, individualist side of the debate are mostly tribalists too. They mostly agree there are conflicts between the individual and group, and they want the individual to win the conflict.

And then a few people say “Hey, wait, the individual and group, or the individual and other individuals, or the group and the other groups, are not actually in conflict. They can exist harmoniously and even benefit each other.” And then basically everyone dislikes and ignores them, and refuses to read their literature.

The harmony theory of classical liberalism has historical associations with the free market, and my own thinking tends to favor the free market. But you should be able to reason about it from either starting point – individual or group – and reach the same conclusions. Or reason about it in a different way that doesn’t start with a favored group. There are many lines of reasoning that should work fine.

Most pro-business or pro-rich-people type thinking today is just a bunch of tribalism based on thinking there is a conflict and taking sides in the conflict. I don’t like it. I just like capitalism as an abstract economic theory that addresses some problems about coordinating human action given individual actors with limited knowledge. Also I like peace and freedom, but I know most people on most sides do too (or at least they think they do), so that isn’t very differentiating.

I think the most effective way to achieve peace and social harmony is by rejecting the conflicts of interest mindset and explaining stuff about mutual benefit. There is no reason to fight others if one is never victimized or sacrificed. Altruism can encourage people to pick fights because it suggests there are and should be sacrificial victims who lose out for the benefit of others. Tribalist capitalist views also lead to fights because they e.g. legitimize the exploitation of the workers and downplay the reasonable complaints of labor, rather than saying “You’re right. That should not be happening. This must be fixed. We must investigate how that kind of mistreatment by your employers is happening. There are definitely going to be some capitalism-compatible fixes; let’s figure them out.”

You can start with group benefit and think about how to get it given fallible actors with limited knowledge and limited brainpower. We won’t be able to design a societal system that gets a perfect outcome. We need systems that let people do the best with the knowledge they have, and let them coordinate, share knowledge, etc. We’ll want them to be able to trade when one has something that someone else could make better use of and vice versa. We’ll want money to deal with the double coincidence of wants problem. We’ll want stores with used goods functioning as middle men, as well as online marketplaces where individuals can find each other. (By the way, Time Will Run Back by Henry Hazlitt is a great book about a socialist leader who tries to solve some problems his society has and reinvents capitalism. It’s set in a world with no capitalist countries and where knowledge of capitalism had been forgotten.)

More Analysis

Will we want people to give stuff to each other, for nothing in return, when someone else can benefit more from it? Maybe. Let’s consider.

First, it’s hard to tell how much each person can benefit from something. How do I know that Bob values this object more than I do? If we both rate it on a 1-10 scale, how do we know our scales are equivalent? There’s no way to measure value. A common measure we use is comparing something to dollars. How many dollars would I trade for it and be happy? I can figure out some number of dollars I value more than the object, and some number of dollars I value less than the object, and with additional effort I can narrow down the range.

So how can we avoid the problem of mistakenly giving something to someone who actually gets less utility from it than I do? He could pay dollars for it. If he values it more in dollars than I do, then there’s mutual benefit in selling it to him. He could also offer an object in trade for it. What matters then is that we each value what we get more than what we give up. I might actually value the thing I trade away more than the other guy does, and there could still be mutual benefit.

Example:

I have pants that I value at $10 and Bob values at $5. For the pants, Bob offers to trade me artwork which I value at $100 and he values at $1. I value both the pants and artwork more than Bob does, but trading the pants to him still provides mutual benefit.

But would there be more total benefit if Bob simply gave me the artwork and I kept the pants? Sure. And what if I gave Bob $50 for the art? That has the same total benefit. On the assumption that we each value a dollar equally, transfers of dollars never change total benefit. (That’s not a perfect assumption but it’s often a reasonable approximation.) But transfers of dollars are useful, even when they don’t affect total utility, because they make trades mutually beneficial instead of having a winner and a loser. Transferring dollars also helps prevent trades that reduce total utility: if Bob will only offer me like $3 for the pants, which I value at $10, then we’ve figured out that the pants benefit me more than him and I should keep them.
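Here’s a similar sketch of the pants/artwork numbers from above (again hypothetical values), showing that a dollar transfer changes how the benefit is split but not the total, while moving a good to the person who values it less does lower the total:

    # What each item is worth to me and to Bob (hypothetical valuations).
    my_pants, bob_pants = 10, 5
    my_art, bob_art = 100, 1

    def outcome(my_gain, bob_gain):
        return my_gain, bob_gain, my_gain + bob_gain  # (me, Bob, total)

    # A: we trade pants for artwork. The pants move to someone who values
    # them less, so the total is a bit lower than in B and C.
    print(outcome(my_art - my_pants, bob_pants - bob_art))  # (90, 4, 94)

    # B: Bob gives me the artwork and I keep the pants.
    print(outcome(my_art, -bob_art))                        # (100, -1, 99)

    # C: same as B, but I also pay Bob $50. The transfer shifts benefit
    # between us without changing the total (assuming we each value a
    # dollar equally).
    print(outcome(my_art - 50, -bob_art + 50))              # (50, 49, 99)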

BTW, if you want to help someone who has no dollars, you should consider giving him dollars, not other goods. Then see if he’ll pay you enough to trade for the other goods. If he won’t, that’s because he thinks he can get even more value by using the dollars in some other way.

Should I do services for Bob whenever the value to him is higher than the disutility to me? What if I have very wonderful services that many people want – like I’m a great programmer or chef – and I end up working all day every day for nothing in return? That would create a disincentive to develop skills. From the perspective of designing a social system or society, it works better to set up good incentives instead of demanding people act contrary to incentives. We don’t want to have a conflict or misalignment between incentives and desired behaviors, or we’ll end up with people doing undesirable but incentivized behavior. We’ll consider doing that if it’s unavoidable, but we should at least minimize it.

Social Credit

There’s a general problem when you don’t do dollar-based trading: what if some people keep giving and giving (to people who get higher utility for goods or services) but don’t get a similar amount of utility or more in return? If people just give stuff away whenever it will benefit others a bunch, wealth and benefit might end up distributed very unequally. How can we make things fairer? (I know many pro-capitalist people defend wealth inequality as part of the current tribalist political battle lines. And I don’t think trying to make sure everyone always has exactly equal amounts of wealth is a good idea. But someone giving a lot and getting little, or just significant inequality in general, is a concern worthy of some analysis.)

We might want to develop a social credit system (but I actually mean this in a positive way, despite various downsides of the Chinese Communist Party’s social credit system). We might want to keep score in some way to see who is contributing the most to society and make sure they get some rewards. That’ll keep incentives aligned well and prevent people from having bad luck and not getting much of the total utility.

So we have this points system where every time you benefit someone you get points based on the total utility created. And people with higher points should be given more stuff and services. Except, first of all, how? Should they be given stuff even if it lowers total utility? If the rule is always to do whatever raises total utility, how can anyone deviate from it to help out the people with high scores (or high scores relative to personal utility)?

Second, aren’t these points basically just dollars? Dollars are a social credit system which tracks who contributed the most. In the real world, many things go wrong with this and people’s scores sometimes end up wildly inaccurate, just like in China where their social credit system sometimes assigns people inaccurate scores. But if you imagine an ideal free market, then dollars basically track how much people contribute to total utility. And then you spend the dollars – lower your score – to get benefits for yourself. If someone helps you, you give him some of your dollars. He gave a benefit, and you got a benefit, so social credit points should be transferred from you to him. Then if everyone has the same number of dollars, that basically also means everyone got the same amount of personal utility or benefit.

What does it mean if someone has extra dollars? What can we say about rich people? They are the most altruistic. They have all these social credit points but they didn’t ask for enough goods and services in return to use up their credit. They contributed more utility to others than they got for themselves. And that’s why pro-capitalist reasoning sometimes says good things about the rich.

But in the real world today, people get rich in all kinds of horrible ways because no country has a system very similar to the ideal free market. And a ton of pro-capitalist people seem to ignore that. They like and praise the rich people anyway, instead of being suspicious of how they got rich. They do that because they’re pro-rich, pro-greed tribalists or something. Some of them aspire to one day be rich, and want to have a world that benefits the rich so they can keep that dream alive and imagine one day getting all kinds of unfair benefits for themselves. And then the pro-altruism and pro-labor tribalists yell at them, and they yell back, and nothing gets fixed. As long as both sides believe in conflicts of interest, and are fighting over which interest groups should be favored and disfavored in what ways, then I don’t expect political harmony to be achieved.

Free Markets

Anyway, you can see how a free market benefits the individual, benefits the group, solves various real problems about coordination and separate, fallible actors with limited knowledge, and focuses on people interacting only for mutual benefit. Interacting for mutual benefit – in ways with no conflict of interest – safeguards both against disutility for individuals (people being sacrificed for the alleged greater good) and also against disutility for the group (people sacrificing for the group in ineffective, counter-productive ways).

Are there benefits that can’t be achieved via harmony and interaction only for mutual benefit? Are there inherent conflicts where there must be losers in order to create utopia? I don’t think so, and I don’t know of any refutations of the classical liberal harmony view. And if there are such conflicts, what are they? Don’t just make theoretical arguments; name a concrete one. Also, if we’re going to create utopia with our altruism … won’t that benefit every individual? Who wouldn’t want to live in utopia? So that sounds compatible with the harmony theory and individual mutual benefit.

More Thoughts

People can disagree about what gives how much utility to Bob.

People can lie about how much utility they get from stuff.

People can have preferences about things other than their own direct benefit. I can say it’s high utility to have a walkable downtown even if I avoid walking. Someone else can disagree about city design. I can say it’s high utility for me if none of my neighbors are Christian (disclaimer: not my actual opinion). Others can disagree about what the right preferences and values are.

When preferences involve other people or public stuff instead of just your own personal stuff, then people will disagree about what’s good.

What can be done about all this? A lot can be solved by: whatever you think is high utility, pay for it. As a first approximation, whoever is willing to pay more is the person who would get the most utility from getting the thing or getting their way on the issue.

Paying social credit points, aka money, for things you value shows you actually value them that much. It prevents fraud and it enables comparison between people’s preferences. If I say “I strongly care” and you say “I care a lot”, then who knows who cares more. Instead, we can bid money/social credit to see who will bid higher.

People often have to estimate how much utility they would get from a good or service, before they have it. These estimates are often inaccurate. Sometimes they’re wildly inaccurate. Often, they’re systematically biased. How can we make the social system resilient to mistakes?

One way is to disincentivize mistakes instead of incentivizing them. Consider a simple, naive system, where people tend to be given more of whatever they value. The higher they value it, the more of it they get. Whoever likes sushi the most will be allocated the most sushi. Whoever likes gold bars the most will be allocated the most gold bars. Whoever is the best at really liking stuff, and getting pleasure, wellbeing or whatever other kind of utility from it, gets the most stuff. There is an incentive here to highly value lots of stuff, even by mistake. When in doubt, just decide that you value it a lot – maybe you’ll like it and there’s no downside to you of making a high bid in terms of how much utility you say it gives you. Your utility estimates are like a bank account with unlimited funds, so you can spend lavishly.

To fix this, we need to disincentivize mistakes. If you overbid for something – if you say it has higher utility for you than it actually does – that should have some kind of downside for you, such as a reduced ability to place high bids in the future.

How can we accomplish this? A simple model is everyone is assigned 1,000,000 utility points at birth. When you want a good or service, you bid utility points (fractions are fine). You can’t bid more than you have. If your bid is accepted, you transfer those utility points to the previous owner or the service provider, and you get the good or service. Now you have fewer utility points to bid in the future. If you are biased and systematically overbid, you’ll run out of points and you’ll get less stuff for your points than you could have.
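Here’s a minimal sketch of that model in Python. The names (Account, bid) and the example amounts are just mine for illustration; the only rules encoded are the ones above: everyone starts with 1,000,000 points, fractional bids are fine, and you can’t bid more than you have:

```python
class Account:
    """One person's utility-point balance in the simple model above."""
    def __init__(self, points=1_000_000):  # everyone starts with 1,000,000 points
        self.points = points

def bid(buyer, seller, amount):
    """Transfer points from buyer to seller if the buyer can afford the bid.

    Fractional bids are fine; bidding more than you have is rejected.
    """
    if amount < 0 or amount > buyer.points:
        return False  # bid rejected: overdrawing isn't allowed
    buyer.points -= amount
    seller.points += amount
    return True

alice, bob = Account(), Account()
bid(alice, bob, 2500.5)           # Alice buys a service from Bob
print(alice.points, bob.points)   # 997499.5 1002500.5
```

Overbidding is disincentivized automatically: every point you spend by mistake is a point you can’t bid later.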

If you’re low on utility points, you can provide goods or services to others to get more. There is an incentive to provide whatever goods or services would provide the most utility to others, especially ones that you can provide efficiently or cheaply. Cost/benefit and specialization matter.

There are many ways we could make a more complex system. Do you have to plan way ahead? Maybe people should get 1,000 more utility points every month so they always have a guaranteed minimum income. Maybe inheritance or gifting should be allowed – those have upsides and downsides. If inheritance and gifting are both banned, then there’s an incentive to spend all your utility points before you die – even for little benefit – or else they’re wasted. There’s also less incentive to earn more utility points if you already have enough for yourself but would like more to help your children or a favorite charity, since you can’t gift or bequeath them. There’d also be people who pay 10,000 points for a marble to circumvent the no-gifting rule. Or I might try to hire a tutor to teach my son, and pay with my utility points rather than my son having to spend his own points.

Anyway, to a reasonable approximation, this is the system we already have, and utility points are called dollars. Dollars, in concept, are a system of social credit that track how much utility you’ve provided to others minus how much you’ve received from others. They keep score so that some people don’t hog a ton of utility.

There are many ways that, in real life, our current social system differs from this ideal. In general, those differences are not aspects of capitalist economic theory nor of dollars. They are deviations from the free market which let people grow rich by government subsidies, fraud, biased lawmaking, violence, and various other problems.

Note: I don’t think a perfect free market would automatically bring with it utopia. I just think it’s a system with some positive features and which is compatible with rationality. It doesn’t actively prevent or suppress people from having good, rational lives and doing problem solving and making progress. Allowing problem solving and helping with some problems (like coordination between people, keeping track of social credit, and allocating goods and services) is a great contribution from an economic system. Many other specific solutions are still needed. I don’t like the people who view capitalism as a panacea instead of a minimal framework and enabler. I also don’t think any of the alternative proposals, besides a free market, are any good.



Conflicts of Interest, Poverty and Rationality

I quit the Effective Altruism forum due to a new rule requiring posts and comments be basically put in the public domain without copyright. I had a bunch of draft posts, so I’m posting some of them here with light editing.


Almost everyone believes in conflicts of interest without serious consideration or analysis. It’s not a reasoned opinion based on studying the literature on both sides. They’re almost all ignorant of classical liberal reasoning and could not summarize the other side’s perspective. They also mostly haven’t read e.g. Marx or Keynes. I literally can’t find anyone who has ever attempted to give a rebuttal to Hazlitt’s criticism of Keynes. And I’ve never found an article like “Why Mises, Rand and classical liberalism are wrong and there actually are inherent conflicts of interest”.

(Searching again just now, the closest-to-relevant thing I found was an article attacking Rand re conflicts of interest. Its argument is basically that she’s a naive idiot who is contradicting classical liberalism by saying whenever there is a conflict someone is evil/irrational. It shows no awareness that the “no conflicts of interest” view is a classical liberal theory which Rand didn’t invent. It’s an anti-Rand article that claims, without details, that classical liberalism is on its side. But it’s a pretty straightforward implication of the liberal harmony view that if there appears to be a conflict of interest or disharmony, someone is making a mistake that could and should be fixed, and that fixing the mistake enough to avoid conflict is possible (in practice now, not just in theory) if no one is being evil, irrational, self-destructive, etc.)

There are some standard concerns about liberalism (which are already addressed in the literature) like: John would get more value from my ball than I would. So there’s a conflict of interest: I want to keep my ball, and John wants to have it.

Even if John would get less value from my ball, there may be a conflict of interest: John would like to have my ball, and I’d like to keep it.

John’s interest in taking my ball, even though it provides more value to me than him, is widely seen as illegitimate. The only principle it seems to follow is “I want the most benefit for me”, which isn’t advocated much, though it’s often said to be human nature and said that people will inevitably follow it.

Wanting to allocate resources where they’ll do the most good – provide the most benefit to the most people – is a reasonable, plausible principle. It has been advocated as a good, rational principle. There are intellectual arguments for it.

EA seems to believe in that principle – allocate resources where they’ll do the most good. But EA also tries not to be too aggressive about it and just wants people to voluntarily reallocate some resources to do more good compared to the status quo. EA doesn’t demand a total reallocation of all resources in the optimal way because that’s unpopular and perhaps unwise (e.g. attempting revolutionary changes to society – especially ones that many people will not voluntarily consent to – has downsides compared to incremental, voluntary changes, such as the risk of making costly mistakes while making massive changes).

But EA does ask for people to voluntarily make some sacrifices. That’s what altruism is. EA wants people to give up some benefit for themselves to provide larger benefits for others. E.g. give up some money that has diminishing returns for you, and donate it to help poor people who get more utility per dollar than you do. Or donate to a longtermist cause to help save the world, thus benefitting everyone, even though most people aren’t paying their fair share. In some sense, John is buying some extra beer while you’re donating to pay not only your own share but also John’s share of AGI alignment research. You’re making a sacrifice for the greater good while John isn’t.

This narrative, in terms of sacrifices, is problematic. It isn’t seeking win/win outcomes, mutual benefit or social harmony. It implicitly accepts a political philosophy involving conflicts of interest, and it further asks people to sacrifice their interests. By saying that morality and your interests contradict each other, it creates intellectual confusion and guilt.

Liberal Harmony

Little consideration has been given to the classical liberal harmony of interests view, which says no sacrifices are needed. You can do good without sacrificing your own interests, so it’s all upside with no downside.

How?

A fairly straightforward answer is: if John wants my ball and values it more than I do, he can pay me for it. He can offer a price that is mutually beneficial. If it’s worth $10 to me, and $20 to John, then he can offer me $15 for it and we both get $5 of benefit. On the other hand, if I give it to John for free, then John gets $20 of benefit and I get -$10 of benefit (that’s negative benefit).

If the goal is to maximize total utility, John needs to have that ball. Transferring the ball to John raises total utility. However, the goal of maximizing total utility is indifferent to whether John pays for it. As a first approximation, transferring dollars has no effect on total utility because everyone values dollars equally. That isn’t really true but just assume it for now. I could give John the ball and $100, or the ball and $500, and the effect on total utility would be the same. I lose an extra $100 or $500 worth of utility, and John gains it, which has no effect on total utility. Similarly, John could pay me $500 for the ball and that would increase total utility just as much (by $10) as if I gave him the ball for free.
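Here’s the same point as a small sketch (Python, illustrative): whatever John pays cancels out of the total, and only decides how the $10 gain is split between us:

```python
def ball_trade(value_to_me, value_to_john, payment):
    # I give up the ball (worth value_to_me to me) and receive the payment;
    # John gains the ball (worth value_to_john to him) and pays the payment.
    my_change = payment - value_to_me
    johns_change = value_to_john - payment
    return my_change, johns_change, my_change + johns_change

# Ball worth $10 to me and $20 to John. Total utility rises by $10 no matter
# what John pays; the payment only decides how that gain is split.
for payment in (0, 15, 500):
    print(payment, ball_trade(10, 20, payment))
# 0   (-10, 20, 10)   -> I sacrifice
# 15  (5, 5, 10)      -> mutual benefit
# 500 (490, -480, 10) -> John sacrifices
```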

Since dollar transfers are utility-neutral, they can be used to get mutual benefit and avoid sacrifices. Whenever some physical object is given to a new owner in order to increase utility, some dollars can be transferred in the other direction so that both the old and new owners come out ahead.

There is no need, from the standpoint of total utility, to have any sacrifices.

And these utility-increasing transfers can be accomplished, to the extent people know they exist, by free trade. Free trade already maximizes total utility, conditional on people finding opportunities and the transaction costs being lower than the available gains. People have limited knowledge, they’re fallible, and trade takes effort, so lots of small opportunities that an omniscient, omnipotent God could capture get missed. If we think of this from the perspective of a central planner or philosopher king with unlimited knowledge who can do anything effortlessly, there’d be a lot of extra opportunities compared to the real situation where people have limited knowledge of who has what, how much utility they’d get from what, etc. This is an important matter that isn’t very relevant to the conflicts of interest issue. It basically just explains that some missed opportunities are OK and we shouldn’t expect perfection.

There is a second issue, besides John would value my physical object more than me. What if John would value my services more than the disutility of me performing those services? I could clean his bathroom for him, and he’d be really happy. It has more utility for him than I’d lose. So if I clean his bathroom, total utility goes up. Again, the solution is payment. John can give me dollars so that we both benefit, rather than me cleaning his bathroom for free. The goal of raising total utility has no objection to John paying me, and the goal of “no one sacrifices” or “mutual benefit” says it’s better if John pays me.

Valuing Dollars Differently

And there’s a third issue. What if the value of a dollar is different for two people? For a simple approximation, we’ll divide everyone up into three classes: rich, middle class and poor. As long as John and I are in the same class, we value a dollar equally, and my analysis above works. And if John is in a higher class than me, then him paying me for my goods or services will work fine. Possibly he should pay me extra. The potential problems for the earlier analysis come if John is in a lower class than me.

If I’m middle class and John is poor, then dollars are more important to him than to me. So if he gives me $10, that lowers total utility. We’ll treat middle class as the default, so that $10 has $10 of value for me, but for John it has $15 of value. Total utility goes down by $5. Money transfers between economic classes aren’t utility-neutral.

Also, if I simply give John $10, for nothing in return, that’s utility-positive. It increases total utility by $5. I could keep giving John money until we are in the same economic class, or until we have the same amount of money, or until we have similar amounts of money – and total utility would keep going up the whole time. (That’s according to this simple model.)
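A quick sketch of that model (Python, illustrative; the 1.5 multiplier is just the one implied by the $10 → $15 example, not real data):

```python
# Marginal value of $1 to each class, with middle class as the default of 1.0.
value_per_dollar = {"poor": 1.5, "middle": 1.0}

def utility_change_of_transfer(amount, payer, recipient):
    # Utility the payer loses plus utility the recipient gains.
    return -amount * value_per_dollar[payer] + amount * value_per_dollar[recipient]

print(utility_change_of_transfer(10, "poor", "middle"))  # -5.0: John pays me $10
print(utility_change_of_transfer(10, "middle", "poor"))  # +5.0: I give John $10
```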

So should money be divided equally or approximately equally? Would that raise total utility and make a better society?

There are some concerns, e.g. that some people spend money more wastefully than others. Some people spend money on tools that increase the productivity of labor – they forego immediate consumption to invest in the future. Others spend on alcohol and other luxury consumption. If more money is in the hands of investors rather than consumers, society will be better off after a few years. Similarly, it lowers utility to allocate seed corn to people who’d eat it instead of planting it.

Another concern is that if you equal out the wealth everyone has, it will soon become unequal again as some people consume more than others.

Another concern is incentives. The more you use up, the more you’ll be given by people trying to increase total utility? And the more you save, the more you’ll give away to others? If saving/investing benefits others not yourself, people will do it less. If people do it less, total utility will go down.

One potential solution is loans. If someone temporarily has less money, they can be loaned money. They can then use extra, loaned dollars when they’re low on money, thus getting good utility-per-dollar. Later when they’re middle class again, they can pay the loan back. Moving spending to the time period when they’re poor, and moving saving (loan payback instead of consumption) to the time period when they’re middle class, raises overall utility.
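A rough sketch of why the loan raises total utility under this simple model (the dollar amounts are made up, the 1.5/1.0 multipliers are the same illustrative ones as before, and interest is ignored):

```python
# Value of a dollar spent while poor vs. while middle class.
POOR, MIDDLE = 1.5, 1.0

# No loan: $100 of spending in the poor period and $100 in the middle-class period.
no_loan = 100 * POOR + 100 * MIDDLE        # 250.0

# With a $50 loan: spend $150 while poor, then repay the $50 later,
# leaving only $50 of spending in the middle-class period.
with_loan = 150 * POOR + 50 * MIDDLE       # 275.0

print(no_loan, with_loan)                  # total utility goes up with the loan
```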

Poverty

But what if being poor isn’t temporary? Then I’d want to consider what is the cause of persistent poverty.

If the cause is buying lots of luxuries, then I don’t think cash transfers to that person are a good idea. Among other things, it’s not going to raise total utility of society to increase consumption of luxuries instead of capital accumulation. Enabling them to buy even more luxuries isn’t actually good for total utility.

If the cause is being wasteful with money, again giving the person more money won’t raise total utility.

If the cause is bad government policies, then perhaps fixing the government policies would be more efficient than transferring money. Giving money could be seen as subsidizing the bad government policies. It’d be a cost-ineffective way to reduce the harm of the policies, thus reducing the incentive to change the policies, thus making the harmful policies last longer.

If the person is poor because of violence and lack of secure property, then they need rule of law, not cash. If you give them cash, it’ll just get taken.

Can things like rule of law and good governance be provided with mutual benefit? Yes. They increase total wealth so much that everyone could come out ahead. Or put another way, it’s pretty easy to imagine good police and courts, which do a good job, which I’d be happy to voluntarily pay for, just like I currently voluntarily subscribe to various services like Netflix and renting web servers.

Wealth Equality

Would it still be important to even out wealth in that kind of better world where there are no external forces keeping people persistently poor? In general, I don’t think so. If there are no more poor people, that seems good enough. I don’t think the marginal utility of another dollar changes that much once you’re comfortable. People with plenty don’t need to be jealous of people with a bit more. I understand poor people complaining, but people who are upper middle class by today’s standards are fine and don’t need to be mad if some other people have more.

Look at it like this. If I have 10 million dollars and you have 20 million dollars, would it offer any kind of significant increase in total utility to even that out to 15 million each? Nah. We both can afford plenty of stuff – basically anything we want which is mass produced. The marginal differences come in two main forms:

1: Customized goods and services. E.g. you could hire more cooks, cleaners, personal drivers, private jet flights, etc.
2: Control over the economy, e.g. with your extra $10 million you could gain ownership of more businesses than I own.

I don’t care much about the allocation of customized goods and services besides to suggest that total utility may go up with somewhat less of them. Mass production and scalable services are way more efficient.

And I see no particular reason that total utility will go up if we even out the amount of control over businesses that everyone has. Why should wealth be transferred to me so that I can own a business and make a bunch of decisions? Maybe I’ll be a terrible owner. Who knows. How businesses are owned and controlled is an important issue but I basically don’t think that evening out ownership is the answer that will maximize total utility. Put another way, the diminishing returns on extra dollars is so small in this dollar range that personal preferences probably matter more. In other words, how much I like running businesses is a bigger factor than my net worth only being $10 million rather than $15 million. How good I am at running a business is also really important since it’ll affect how much utility the business creates or destroys. If you want to optimize utility more, you’ll have to start allocating specific things to the right people, which is hard, rather than simply trying to give more wealth to whoever has less. Giving more to whoever has less works pretty well at lower amounts but not once everyone is well off.

What about the ultra rich who can waste $44 billion on a weed joke? Should anyone be a trillionaire? I’m not sure it’d matter in a better world where all that wealth was earned by providing real value to others. People that rich usually don’t spend it all anyway. Usually, they barely spend any of it, unless you count giving it to charity as spending. To the extent they keep it, they mostly invest it (in other words, basically, loan it out and let others use it). Having a ton of wealth safeguarded by people who will invest rather than consume it is beneficial for everyone. But mostly I don’t care much and certainly don’t want to defend current billionaires, many of whom are awful and don’t deserve their money, and some of whom do a ton of harm by e.g. buying and then destroying a large business.

My basic claim here is that if everyone were well off, wealth disparities wouldn’t matter so much – we’d all be able to buy plenty of mass produced and scalable stuff, and so the benefit of a marginal dollar would be reasonably similar between people. It’s the existence of poverty that makes a dollar having different utility for different people a big issue.

The Causes of Poverty

If you give cash to poor people, you aren’t solving the causes of poverty. You’re just reducing some of the harm done (hopefully – you could potentially be fueling a drug addiction or getting a thug to come by and steal it or reducing popular resentment of a bad law and thus keeping it in place longer). It’s a superficial (band aid) solution not a root cause solution. If people want to do some of that voluntarily, I don’t mind. But I don’t place primary importance on that stuff. I’m more interested in how to fix the system and whether that can be done with mutual benefit.

From a conflicts of interest perspective, it’s certainly in my interest that human wealth goes up enough for everyone to have a lot. That world sounds way better for me to live in. I think the vast majority will agree. So there’s no large conflict of interest here. Maybe a few current elites would prefer to be a big fish in a smaller pond rather than live in that better world. But I think they’re wrong and that isn’t in their interest. Ideas like that will literally get them killed: anti-aging research would be going so much better if humanity was so much richer that there were no poor people.

What about people who are really stupid, or disabled, or chronically fatigued or something so they can’t get a good job even in a much better world? Their families can help them. Or their neighbors, church group, online rationality forum, whatever. Failing that, some charity seems fine to fill in a few gaps here and there – and it won’t be a sacrifice because people will be happy to help and will still have plenty for themselves and nothing in particular they want to buy but have to give up. And with much better automation, people will be able to work much shorter hours and one worker will be able to easily support many people. BTW, we may run into trouble with some dirty or unpleasant jobs that are still needed and aren’t automated: how do we incentivize anyone to do them when even higher wages won’t attract much interest because everyone already has plenty?

So why don’t we fix the underlying causes of poverty? People disagree about what those are. People try things that don’t work. There are many conflicting plans with many mistakes. But there’s no conflict of interest here, even between people who disagree on the right plan. It’s in both people’s interests to figure out a plan that will actually work and do that.

Trying to help poor people right now is a local optimum that doesn’t invest in the future. As long as the amount of wealth being used on it is relatively small, it doesn’t matter much. It has upsides so I’m pretty indifferent. But we shouldn’t get distracted from the global optimum of actually fixing the root problems.

Conclusion

I have ideas about how to fix the root causes of poverty, but I find people broadly are unwilling to learn about or debate my ideas (some of which are unoriginal and can be read in fairly well known books by other people). So if I’m right, there’s still no way to make progress. So the deeper root cause of poverty is irrationality, poor debate methods, disinterest in debate, etc. Those things are why no one is working out a good plan or, if anyone does have a good plan, it isn’t getting attention and acceptance.

Basically, the world is full of social hierarchies instead of truth-seeking, so great ideas and solutions often get ignored without rebuttal, and popular ideas (e.g. variants of Keynesian economics) often go ahead despite known refutations and don’t get refined with tweaks to fix all the known ways they’ll fail.

Fix the rationality problem, and get a few thousand people who are actually trying to be rational instead of following social status, and you could change the world and start fixing other problems like poverty. But EA isn’t that. When you post on EA, you’re often ignored. Attention is allocated by virality, popularity, whatever biased people feel like (which is usually related to status), etc. There’s no organized effort to e.g. point out one error in every proposal and not let any ideas get ignored with no counter-argument (and also not proceed with and spend money implementing any ideas with known refutations). There’s no one who takes responsibility for addressing criticism of EA ideas and there’s no particular mechanism for changing EA ideas when they’re wrong – the suggestion just has to gain popularity and be shared in the right social circles. To change EA, you have to market your ideas, impress people, befriend people, establish rapport with people, and otherwise do standard social climbing. Merely being right and sharing ideas, while doing nothing aimed at influence, wouldn’t work (or at least is unlikely to work and shouldn’t be expected to work). And many other places have these same irrationalities that EA has, which overall makes it really hard to improve the world much.

