Food Industry Problems Aren’t Special

Many factory farms are dirty and problematic, but so are the workplaces for many (illegally underpaid) people sewing garments in Los Angeles.

Factory farms make less healthy meat, but the way a lot of meat is processed also makes it less healthy. And vegetable oil may be doing more harm to people’s health than meat. Adding artificial colorings, sweeteners and piles of extra sugar to food does harm too. The world is full of problems.

You may disagree with me about some of these specific problems. My general point stands even if, actually, vegetable oil is fine. If you actually think the world is not full of huge problems, then we have a more relevant disagreement. In that case, you may wish to read some of my other writing about problems in the world or debate me.

The food industry, to some extent, is arrogantly trying to play God. They want to break food down into components (like salt, fat, sugar, protein, color and nutrients) and then recombine the components to build up ideal, cheap foods however they want. But they don’t know what they’re doing. They will remove a bunch of vitamins, then add back in a few that they know are important, while failing to add others back in. They have repeatedly hurt people by doing stuff like this.

It’s especially dangerous when they think they know all the components needed to make baby formula, but they don’t. E.g. they will just get fat from soy and think they replaced fat with fat, so it’s fine. But soy has clear differences from natural breast milk, such as a different fatty acid profile.

They have also known for decades that the ratio of omega-3 and omega-6 fatty acids you consume is important, but they put soy oil in many things and give people unhealthy ratios with too much omega-6. Then they put out public health advice saying to eat more omega-3s but not to eat fewer omega-6s, even though they know it’s the ratio that matters and they know tons of people (including infants) get way too much omega-6.

Mixing huge batches of lettuce or ground beef (so if any of it has E. coli, the whole batch is contaminated) is similar to how Amazon commingles inventory from different sellers (including itself), doesn’t track what came from whom, and thereby encourages fraud: when fraud happens, Amazon has no idea which seller sent in the fraudulent item. That doesn’t stop Amazon from blaming and punishing whichever seller was getting paid for the particular sale when the fraud was noticed, even though he’s probably innocent. Due to policies like these, Amazon has a large amount of fraud on its platform. Not all industries, in all areas, have these problems. E.g. I have heard of milk samples being tested from individual dairy farms, before the milk from many farms is mixed together, so that responsibility for problems can be placed on the correct people. Similarly, if it wanted to, Amazon could keep track of which products are sent in by which sellers in order to figure out which sellers send in fraudulent items.
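The difference between commingling and tracking can be sketched as a tiny data-structure choice. This is a hypothetical illustration (the class and farm names are made up, not Amazon’s or any dairy’s actual systems): the only change is whether the pool stores the source alongside the item, and that one extra field is what makes assigning blame possible.

```python
# Toy model of inventory commingling vs. per-source tracking.
# Illustrates why commingling destroys the information needed
# to hold the right party responsible.

class CommingledPool:
    def __init__(self):
        self.items = []              # provenance discarded

    def add(self, source, item):
        self.items.append(item)      # note: source is thrown away

    def take(self):
        return self.items.pop()      # no way to know who sent this


class TrackedPool:
    def __init__(self):
        self.items = []              # provenance kept

    def add(self, source, item):
        self.items.append((source, item))

    def take(self):
        source, item = self.items.pop()
        return source, item          # blame can go to the right source


tracked = TrackedPool()
tracked.add("farm_a", "clean milk")
tracked.add("farm_b", "contaminated milk")
source, item = tracked.take()
print(source)  # → farm_b: the source of the bad item is recoverable
```

With `CommingledPool`, `take()` returns only the item; once a contaminated item surfaces, there is nothing left in the data to say which source it came from, which mirrors both the mixed lettuce batches and Amazon’s commingled bins.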

Activists who care to consider the big picture should wonder why there are problems in many industries, and wonder what can be done about them.

For example, enforcing fraud laws better would affect basically all industries at once, so that is a candidate idea that could have much higher leverage.

Getting people to stop picking fights without 80% majorities could affect fighting, activism, tribalism and other problems across all topics, so it potentially has very high leverage.

Limiting the government’s powers to favor companies would apply to all industries.

Basic economics education could help people make better decisions about any industry and better judge what sort of policies are reasonable for any industry.

Dozens more high-leverage ideas could be brainstormed. The main obstacle to finding them is that people generally aren’t actually trying. Activists tend to be motivated by some concrete issue in one area (like helping animals or the environment, preventing cancer, or helping kids, the elderly or battered women), not by abstract issues like higher leverage or fight-avoiding reforms.

If a proposal is low-leverage and involves fighting with a lot of people (or powerful people) who oppose it, then in general it’s a really bad idea. Women’s shelters or soup kitchens are low leverage but largely unopposed, so they’re a lot better than a low-leverage cause which many people think is bad. Many low-leverage, unopposed causes can add up and make a big difference too.

But it’s high-leverage causes that have the potential to dramatically improve the world. High-leverage but opposed causes can easily be worse than low-leverage unopposed causes; if you’re going to oppose people, you really ought to aim for high leverage to try to make it worth it. Sometimes there’s an approach to a controversial cause that dramatically reduces opposition, and people should be more interested in seeking those out.


Elliot Temple | Permalink | Messages (0)

Activists Shouldn’t Fight with Anyone

This article is about how all activists, including animal activists, should stop fighting with people in polarizing ways. Instead, they should take a more intellectual approach to planning better strategies and avoiding fights.

Effective Activism

In a world with so many huge problems, there are two basic strategies for reforming things which make sense. Animal activism doesn’t fit either one. It’s a typical example, like many other types of activism, of what not to do (even if your cause is good).

Good Strategy 1: Fix things unopposed.

Work on projects where people aren’t fighting to stop you. Avoid controversy. This can be hard. You might think that helping people eat enough Vitamin A to avoid going blind would be uncontroversial, but if you try to solve the problem with golden rice then you’ll get a lot of opposition (because of GMOs). If it seems to you like a cause should be uncontroversial, but it’s not, you need to recognize and accept that in reality it’s controversial, no matter how dumb that is. Animal activism is controversial, whether or not it should be. So are abortion, global warming, immigration, economics, and anything else that sounds like a live political issue.

A simple rule of thumb is that 80% of people who care, or who will care before you’re done, need to agree with you. 80% majorities are readily available on many issues, but extremely unrealistic in the foreseeable future on many other issues like veganism or ending factory farming.

In the U.S. and every other reasonably democratic country, if you have an 80% majority on an issue, it’s pretty easy to get your way. If you live in a less democratic country, like Iran, you may have to consider a revolution if you have an 80% majority but are being oppressed by violent rulers. A revolution with a 52% majority is a really bad idea. Pushing hard for a controversial cause in the U.S. with a 52% majority is a bad idea too, though not as bad as a revolution. People should prioritize not fighting with other people a lot more than they do.

Put another way: Was there election fraud in the 2020 U.S. Presidential election? Certainly; there is some every year. Did the fraud cost Trump the election? Maybe. Did Trump have an 80% majority supporting him? Absolutely not. There is nowhere near enough fraud to beat an 80% majority. If fraud swung the election, it was close anyway, so it doesn’t matter much who won. You need accurate enough elections that 80% majorities basically always win. The ability for the 80% to get their way is really important for enabling reform. But you shouldn’t care very much whether 52% majorities get their way. (For simplicity, my numbers are for popular vote elections with two candidates, which is not actually how U.S. Presidential elections work. Details vary a bit for other types of elections.)

Why an 80% majority instead of 100%? It’s more practical and realistic. Some people are unreasonable. Some people believe in UFOs or flat Earth. The point is to pick a high number that is realistically achievable when you persuade most people. We really do have 80% majorities on tons of issues, like whether women should be allowed to vote (which used to be controversial, but isn’t today). Should murder be illegal? That has well over an 80% majority. Are seatbelts good? Should alcohol be legal? Should we have some international trade instead of fully closing our borders? Should parents keep their own kids instead of having the government raise all the children? These things have over 80% majorities. The point is to get way more than 50% agreement but without the difficulties of trying to approach 100%.

Good Strategy 2: Do something really, really, really important.

Suppose you can’t get a clean, neat, tidy, easy or unopposed victory for your cause. And you’re not willing to go do something else instead. Suppose you’re actually going to fight with a lot of people who oppose you and work against them. Is that ever worth it or appropriate? Rarely, but yes, it could be. Fighting with people is massively overrated but I wouldn’t say to never do it. Even literal wars can be worth it, though they usually aren’t.

If you’re going to fight with people, and do activism for an opposed cause, it better be really really worth it. That means it needs high leverage. It can’t just be one cause because you like that cause. The cause needs to be a meta cause that will help with many future reforms. For example, if Iran has a revolution and changes to a democratic government, that will help them fix many, many issues going forward, such as laws about homosexuality, dancing, or female head coverings. It would be problematic to pick a single issue, like music, and have a huge fight in Iran to get change for just that one thing. If you’re going to have a big fight, you need bigger rewards than one single reform.

If you get some Polish grocery stores to stop selling live fish, that activism is low leverage. Even if it works exactly as intended, and it’s an improvement, it isn’t going to lead to a bunch of other wonderful results. It’s focused on a single issue, not a root cause.

If you get factory farms to change, it doesn’t suddenly get way easier to do abortion-related reforms. You aren’t getting at root causes, such as economic illiteracy or corrupt, lobbyist-influenced lawmakers, which are contributing to dozens of huge problems. You aren’t figuring out why so many large corporations are so awful and fixing that, which would improve factory farms and dozens of other things too. You aren’t making the world significantly more rational, nor making the government significantly better at implementing reforms.

If you don’t know how to do a better, more powerful reform, stop and plan more. Don’t just assume it’s impossible. The world desperately needs more people willing to be good intellectuals who study how things work and create good plans. Help with that. Please. We don’t need more front-line activists working on ineffective causes, fighting with people, and being led by poor leaders. There are so many front-line activists who are on the wrong side of things, fighting others who merely got lucky to be on the right side (but tribalist fighting is generally counter-productive even if you’re on the right side).

Fighting with people tends to be polarizing and divisive. It can make it harder to persuade people. If the people who disagree with you feel threatened, and think you might get the government to force your way of life on them, they will dig in and fight hard. They’ll stop listening to your reasoning. If you want a future society with social harmony, agreement and rational persuasion, you’re not helping by fighting with people and trying to get laws made that many other people don’t want, which reduces their interest in constructive debate.

Root Causes

The two strategies I brought up are fixing things unopposed and doing things that are really super important and can fix many, many things. Be very skeptical of the second one. It’s a form of the “greater good” argument. Although fighting with people is bad, it could be worth it for the greater good if it fixes some root causes. But doing something with clear, immediate negatives in the name of the greater good rarely actually works out well.

There are other important ways to strategize besides looking for lack of opposition or importance. You can look for root causes instead of symptoms. Factory farms are downstream of various other problems. Approximately all the big corporations suck, not just the big farms. Why is that? What is going on there? What is causing that? People who want to fix factory farms should look into this and get a better understanding of the big picture.

I do believe that there are many practical, reasonable ways that factory farms could be improved, just like I think keeping factories clean in general is good for everyone from customers to workers to owners. What stops the companies from acting more reasonably? Why are they doing things that are worse for everyone, including themselves? What is broken there, and how is it related to many other types of big companies also being broken and irrational in many ways?

I have some answers but I won’t go into them now. I’ll just say that if you want to fix any of this stuff, you need a sophisticated view on the cause-and-effect relationships involved. You need to look at the full picture not just the picture in one industry before you can actually make a good plan. You also should find win/win solutions and present them in terms of mutual benefit instead of approaching it as activists fighting against enemies. You should do your best to proceed in a way where you don’t have enemies. Enemy-based activism is mostly counter-productive – it mostly makes the world worse by increasing fighting.

There are so many people who are so sure they’re right and so sure their cause is so important … and many of them are on opposite sides of the same cause. Don’t be one of those people. Stop rushing, stop fighting, read Eli Goldratt, look at the big picture, make cause-and-effect trees, make transition trees, etc., plan it all out, aim for mutual benefit, and reform things in a way with little to no fighting. If you can’t find ways to make progress without fighting, that usually means you don’t know what you’re doing and are making things worse not better (assuming a reasonably free, democratic country – this is less applicable in North Korea).



Don’t Legalize Animal Abuse

This article discusses why mistreating animals is bad even if they’re incapable of suffering.

I don’t think animals can suffer, but I’m not an activist about it. I’m not trying to change how the world treats animals. I’m not asking for different laws. I don’t emphasize this issue. I don’t generally bring it up. I care more about epistemology. Ideas about how minds work are an application of some of my philosophy.

On the whole, I think people should treat animals better, not worse. People should also treat keyboards, phones, buildings and their own bodies better. It’s (usually) bad to smash keyboards, throw phones, cause and/or ignore maintenance problems in buildings, or ingest harmful substances like caffeine, alcohol or smoke.

Pets

Legalizing animal abuse would have a variety of negative consequences if nothing else about the world changed. People would abuse animals more because legalizing it would make it more socially legitimate. I don’t see much upside. Maybe more freedom to do scientific testing on animals would be good, but if so, that could be accomplished with a more targeted change that only applies to science. And lab animals should be treated well anyway, in ways compatible with the experiment, to avoid introducing extra variables like physiological stress.

On the other hand, legalizing animal abuse would actually get human children killed: abused dogs are more likely to bite both humans and dogs, and dog bites sometimes kill children.

When spouses fight, one may vandalize the other’s car, which the legal system handles as ordinary (shared) property damage. Vandalizing a spouse’s dog would be worse, because a dog isn’t replaceable like a car. If vandalizing a dog were treated like regular property damage, the current legal system wouldn’t protect against it well enough.

Why aren’t dogs replaceable? Because they have long-term memory which can’t be backed up and put into a new dog (compare with copying all your data to a new phone). If you’ve had a dog for five years and spent hundreds of hours around it, that’s a huge investment, but society doesn’t see that dog as being worth tens of thousands of dollars. If you were going to get rid of animal abuse laws, you’d have to dramatically raise people’s perception of the monetary value of pets, which is currently way too low.

Dogs, unlike robots we build, cannot have their memory reset. If a dog starts glitching out (e.g. becomes more aggressive) because someone kicks it, you can’t just reinstall and start over, and you wouldn’t want to because the dog has valuable data in it. Restoring a backup from a few days before the abuse would work pretty well but isn’t an option.

You’d be more careful with how you use your Mac or phone if you had no backups of your data and no way to undo changes. You’d be more careful with what software you installed if you couldn’t uninstall it or turn it off. Dogs are like that. And people can screw up your dog’s software by hitting your dog.

People commonly see cars and homes as by far the most valuable things they own (way ahead of e.g. jewelry, watches and computers for most people). They care about their pets but don’t put them on that list. They know they can buy a new dog for a few hundred dollars (many of them wouldn’t sell their current dog for $10,000, but they haven’t all considered that). They don’t want to replace their dog, but many people don’t calculate monetary value correctly. The reason they don’t want to replace their dog is that their current dog has far more value to them than a new one would. People get confused about this because they can’t sell their dog for anywhere near its value to them; pricing unique items with no market for them is problematic. Also, if they get a new dog, it will predictably gain value over time.

It’s like how it’s hard to put a price on your diary or journal. If someone burned it, that would be really bad. But how many dollars of damages is that worth? It’s hard to say. A diary has unique, irreplaceable data in it, but there’s no accurate market price because it’s worth far more to the owner than to anyone else.

Similarly, if someone smashes your computer and you lose a bunch of data, you will have a hard time getting appropriate compensation in court today. Being paid the price of a new computer is something the courts understand. And courts will take into account emotional suffering and trauma. They’re worse at taking into account the hassle and time cost of dealing with the whole problem, including going to court. And courts are bad at taking into account the data loss for personal files with no particular commercial value. A dog is like that – it contains a literal computer that literally contains personal data files with no backups. But we also have laws against animal abuse which help protect pet owners, because we recognize in some ways that pets are important. Getting rid of those laws without changing a bunch of other things would make things worse.

Why would you want to abuse a pet anyway? People generally abuse pets because they see the animals as proxies for humans (it can bleed and yelp) or they want to harm the pet’s owner. So that’s really bad. They usually aren’t hitting a dog in the same way they punch a hole in their wall. They know the dog matters more than the wall or their keyboard.

Note: I am not attempting to give a complete list of reasons that animal abuse is bad even if animals are incapable of suffering. I’m just making a few points. There are other issues too.

Factory Farms

Factory farms abuse animals for different reasons. They’re trying to make money. They’re mostly callous not malicious. Let’s consider some downsides of factory farms that apply even if animals cannot suffer.

When animals are sick or have stress hormones, they’re worse for people to eat. This is a real issue involving e.g. cortisol. Tuna fishing reality shows talk about stress affecting the quality, and therefore the price, of the catch, and the fishermen go out of their way to reduce the fish’s physiological stress.

When animals eat something – like corn or soy – some of it may stay in their body and affect humans who later eat that animal. It can e.g. change the fatty acid profile of the meat.

I don’t like to eat at restaurants with dirty kitchens. I don’t want to eat from dirty farms either. Some people are poor enough for that risk to potentially be worth it, but many aren’t. And in regions where people are poorer, labor is generally cheaper too, so keeping things clean and well-maintained is cheaper there, so reasonably clean kitchens and factories broadly make sense everywhere. (You run into problems in e.g. poorer areas of the U.S. that are stuck with costly laws designed for richer areas of the U.S. They can have a bunch of labor-cost-increasing laws without enough wealth to reduce the impact of the laws.)

I don’t want cars, clothes, books, computers or furniture from dirty factories. Factories should generally be kept neat, tidy and clean even if they make machine tools, let alone consumer products, let alone food. Reasonable standards for cleanliness differ by industry and practicality, but some factory farms are like poorly-kept factories. And they produce food, which is one of the products where hygiene matters most.

On a related note, E. coli is a problem mainly because processors mix together large amounts of e.g. lettuce or beef. One infected head of lettuce can contaminate hundreds of others due to mixing (for pre-made salad mixes rather than whole heads of lettuce). Processors generally don’t figure out which farm had the problem and make it clean up its act. And the government spends money to trace E. coli outbreaks in order to protect public health. This subsidizes having dirtier farms and then mixing lettuce together in processing: the money the government spends on public health, along with the lack of accountability, helps enable farms to get away with being dirtier.

These are just a sample of the problems with factory farms that are separate issues from animal suffering. I’d suggest that animal activists should emphasize benefits for humans more. Explain to people how changes can be good for them, instead of being a sacrifice for the sake of animals. And actually focus reforms on pro-human changes. Even if animals can suffer, there are lots of changes that could be made which would be better for both humans and animals; reformers should start with those.



Animal Welfare Overview

Is animal welfare a key issue that we should work on? If so, what are productive things to do about it?

This article is a fairly high level overview of some issues, which doesn’t attempt to explain e.g. the details of Popperian epistemology.

Human Suffering

Humans suffer and die, today, a lot. Look at what’s going on in Iran, Ukraine, Yemen, North Korea, Venezuela and elsewhere. This massive human suffering should, in general, be our priority before worrying about animals much.

People lived in terrible conditions, and died, building stadiums for the World Cup in Qatar. Here’s a John Oliver video about it. They were lied to, exploited, defrauded, and basically (temporarily) enslaved etc. People sometimes die in football (soccer) riots too. I saw a headline recently that a second journalist died in Qatar for the World Cup. FIFA is a corrupt organization that likes dictators. Many people regard human death as an acceptable price for sports entertainment, and many more don’t care to know the price.

There are garment workers in Los Angeles (USA) working in terrible conditions for illegally low wages. There are problems in other countries too. Rayon manufacturing apparently poisons nearby children enough to damage their intelligence due to workers washing off toxic chemicals in local rivers. (I just read that one article; I haven’t really researched this but it seems plausible and I think many industries do a lot of bad things. There are so many huge problems in human civilization that even reading one article per issue would take a significant amount of time and effort. I don’t have time to do in-depth research on most of the issues. Similarly, I have not done in-depth research on the Qatar World Cup issues.)

India has major problems with orphans. Chinese people live under a tyrannical government. Human trafficking continues today. Drug cartels exist. Millions of people live in prisons. Russia uses forced conscription for its war of aggression in Ukraine.

These large-scale, widespread problems causing human suffering seem more important than animal suffering. Even if you hate how factory farms treat animals, you should probably care more that a lot of humans live in terrible conditions including lacking their freedom in major ways.

Intelligence

Humans have general, universal intelligence. They can do philosophy and science.

Animals don’t. All the knowledge involved in animal behavior comes from genetic evolution. They’re like robots created by their genes and controlled by software written by their genes.

Humans can do evolution of ideas in their minds to create new, non-genetic knowledge. Animals can’t.

Evolution is the only known way of creating knowledge. It involves replication with variation and selection.

Whenever there is an appearance of design (e.g. a wing or a hunting behavior), knowledge is present.

People have been interested in the sources of knowledge for a long time, but it’s a hard problem and there have been few proposals. Proposals include evolution, intelligent design, creationism, induction, deduction and abduction.

If non-evolutionary approaches to knowledge creation actually worked, it would still seem that humans can do them and animals can’t – because there are human scientists and philosophers but no animal scientists or philosophers.

Human learning involves guessing or brainstorming (replication with variation) plus criticism and rejecting refuted ideas (selection). Learning by evolution means learning by error correction, which we do by creating many candidate ideas (like a gene pool) and rejecting ideas that don’t work well (like animals with bad mutations being less likely to have offspring).

Also, since people very commonly get this wrong: Popperian epistemology says we literally learn by evolution. It is not a metaphor or analogy. Evolution literally applies to both genes and memes. It’s the same process (replication with variation and selection). Evolution could also work with other types of replicators. For general knowledge creation, the replicator has to be reasonably complex, interesting, flexible or something (the exact requirements aren’t known).
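Since the claim is that evolution literally means replication with variation and selection, that process can be sketched directly as code. This is a toy hill-climbing genetic algorithm I made up for illustration (the target string, alphabet and parameters are arbitrary assumptions, not from the article): copying a candidate is replication, changing one character is variation, and keeping the best-matching candidates is selection.

```python
import random

TARGET = "knowledge"
ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def fitness(candidate):
    # Selection criterion: how many characters match the target.
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(parent):
    # Replication with variation: copy the parent, change one character.
    i = random.randrange(len(parent))
    return parent[:i] + random.choice(ALPHABET) + parent[i + 1:]

def evolve(pool_size=20, generations=1000):
    # Start from a pool of random candidates (like a gene pool).
    pool = ["".join(random.choice(ALPHABET) for _ in TARGET)
            for _ in range(pool_size)]
    for _ in range(generations):
        best = max(pool, key=fitness)  # selection
        if best == TARGET:
            break
        # The best candidate replicates (with variation) into the next
        # pool; keeping one unmutated copy means fitness never regresses.
        pool = [best] + [mutate(best) for _ in range(pool_size - 1)]
    return max(pool, key=fitness)

print(evolve())
```

Nothing here is specific to genes: the replicator is a string, and the same replicate-vary-select loop works on any replicator, which parallels the point that evolution applies to both genes and memes.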

Types of Algorithms

All living creatures with brains have Turing-complete computers for brains. A squirrel is a reasonable example animal. Let’s not worry about bacteria or worms. (Earthworms apparently have some sort of brain with only around 300 neurons. I haven’t researched it.)

Humans have more neurons, but the key difference between humans and squirrels is the software our brains run.

We can look at software algorithms in three big categories.

  1. Fixed, innate algorithm
  2. “Learning” algorithms which read and write data in long-term memory
  3. Knowledge-creation algorithm (evolution, AGI)

Fixed algorithms are inborn. The knowledge comes from genes. They’re complete and functional with no practice or experience.

If you keep a squirrel in a lab and never let it interact with dirt, and it still does behaviors that seem designed for burying nuts in dirt, that indicates a fixed, innate algorithm. These algorithms can lead to nonsensical behavior when taken out of context.

There are butterflies which do multi-generation migrations. How do they know where to go? It’s in their genes.

Why do animals “play”? To “learn” hunting, fighting, movement, etc. During play, they try out different motions and record data about the results. Later, their behavioral algorithms read that data. Their behavior depends partly on what that data says, not just on inborn, genetic information.

Many animals record data for navigation purposes. They look around, then can find their way back to the same spot (long-term memory). They can also look around, then avoid walking into obstacles (short-term memory).

Chess-playing software can use fixed, innate algorithms. A programmer can specify rules which the software follows.

Chess-playing software can also involve “learning”. Some software plays many practice games against itself, records a bunch of data, and uses that data in order to make better moves in the future. The chess-playing algorithm takes into account data that was created after birth (after the programmer was done).
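The fixed-algorithm vs. “learning”-algorithm distinction can be sketched with something much simpler than chess. The following toy is my own illustration (not real chess software): a category 1 fixed-rule player faces a category 2 “learning” player at rock-paper-scissors. The learner writes opponent moves to long-term memory and reads them back later, but all the strategic knowledge was supplied by the programmer; no new knowledge is created.

```python
import random
from collections import Counter

MOVES = ["rock", "paper", "scissors"]
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

def fixed_player(history):
    # Category 1: a fixed, innate algorithm. The rule below was written
    # by the programmer and never changes, regardless of experience.
    return "rock"

def learning_player(history):
    # Category 2: a "learning" algorithm. It reads opponent moves back
    # from long-term memory and plays the counter to the most common one.
    # The strategy itself is inborn; only the data is acquired later.
    if not history:
        return random.choice(MOVES)
    predicted = Counter(history).most_common(1)[0][0]
    counter = {loser: winner for winner, loser in BEATS.items()}
    return counter[predicted]

history = []  # the learner's long-term memory
wins = 0
for _ in range(100):
    opp = fixed_player([])           # always plays rock
    move = learning_player(history)
    history.append(opp)              # the write to long-term memory
    if BEATS[move] == opp:
        wins += 1
print(wins)  # 99 or 100: after round one it counters rock every time
```

The learner’s behavior depends on data created after “birth,” so it looks smarter than the fixed player, yet it would be helpless against any opponent pattern its programmer didn’t anticipate; it can’t create knowledge of new strategies.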

I put “learning” in scare quotes because the term often refers to knowledge creation (evolution), which is different from an algorithm that writes data to long-term storage and then uses it later. When humans learn at school, it’s not the same thing as e.g. a “reinforcement learning” AI algorithm or what animals do.

People often confuse algorithms involving long-term memory, which use information not available at birth, with knowledge creation. They call both “learning” and “intelligent”.

They can be distinguished in several ways. Is there replication with variation and selection, or not? If you think there’s evolution, can it create a variety of types of knowledge, or is it limited to one tiny niche? If you believe a different epistemology, you might look for the presence of inductive thinking (but Popper and others have refuted induction). There are other tests and methods that can be used to identify new knowledge as opposed to the downstream consequences of existing knowledge created by genetic evolution, by a programmer, or by some other sort of designer.

Knowledge

What is knowledge? It’s information which is adapted to a purpose. When you see the appearance of design, knowledge is present. Understanding the source of that knowledge is often important. Knowledge is one of the more important and powerful things in the universe.

Binary Intelligence or Degrees?

The word “intelligence” is commonly used with two different meanings.

One is a binary distinction. I’m intelligent but a rock or tree isn’t.

The other meaning is a difference in degree or amount of intelligence: Alice is smarter than Joe but dumber than Feynman.

Degrees of intelligence can refer to a variety of different things that we might call logical skill, wisdom, cleverness, math ability, knowledge, being well spoken, scoring well on tests (especially IQ tests, but others too), getting high grades, having a large vocabulary, being good at reading, being good at scientific research or being creative.

There are many different ways to use your intelligence. Some are more effective than others. Using your intelligence effectively is often called being highly intelligent.

Speaking very roughly, many people believe a chimpanzee or dog is kind of like a 50 IQ person – intelligent, but much less intelligent than almost all humans. They think a squirrel passes the binary intelligence distinction to be like a human not a rock, but just has less intelligence. However, they usually don’t think a self-driving car, chat bot, chess software or video game enemy is intelligent at all – that’s just an algorithm which has a lot of advantages compared to a rock but isn’t intelligent. Some other people do think that present-day “AI” software is intelligent, just with a low degree of intelligence.

My position is that squirrels are like self-driving cars: they aren’t intelligent but the software algorithm can do things that a rock can’t. A well designed software algorithm can mimic intelligence without actually having it.

The reason algorithms are cleverer than rocks is they have knowledge in them. Creating knowledge is the key thing intelligence does that makes it seem intelligent. An algorithm uses built-in knowledge, while intelligences can create their own knowledge.

Basically, anything with knowledge seems either intelligent or intelligently-designed to us (speaking loosely and counting evolution as an intelligent designer). People tend to assume animals are intelligent rather than intelligently-designed because they don’t understand evolution or computation very well, and because the animals seem to act autonomously, and because of the similarities between humans and many animals.

Where does knowledge come from? Evolution. To get knowledge, algorithms need to either evolve or have an intelligent designer. An intelligent designer, such as a human software developer, creates the knowledge by evolving ideas about the algorithm within his brain. So the knowledge always comes from evolution. Evolution is the only known, unrefuted explanation of how new knowledge can be created.

(General intelligence may be an “algorithm” in the same kind of sense that e.g. “it’s all just math”. If you want to call it an algorithm, then whenever I write “algorithm” you can read it as e.g. “algorithm other than general intelligence”.)

Universality

There are philosophical reasons to believe that humans are universal knowledge creators – meaning they can create any knowledge that any knowledge creator can create. The Popperian David Deutsch has written about this.

This parallels how the computer I’m typing on can compute anything that any computer can compute. It’s Turing-complete, a.k.a. universal. (Except quantum computers have extra abilities, so actually my computer is a universal classical computer.)

This implies a fundamental similarity between everything intelligent (they all have the same repertoire of things they can learn). There is no big, bizarre, interesting mind design space like many AGI researchers believe. Instead, there are universally intelligent minds and not much else of note, just like there are universal computers and little else of interest. If you believe in mind design space like Eliezer Yudkowsky does, it’s easy to imagine animals are in it somewhere. But if the only options for intelligence are basically universality or nothing, then animals have to be like humans or else unintelligent – there’s nowhere else in mind design space for them to be. Given only those two options – animals are intelligent in the same way as humans (universal intelligence), or aren’t intelligent at all – most people will agree that animals aren’t intelligent.

This also has a lot of relevance to concerns about super-powerful, super-intelligent AGIs turning us all into paperclips. There’s actually nothing in mind design space that’s better than human intelligence, because human intelligence is already universal. Just like how there’s nothing in classical computer design space that’s better than a universal computer or Turing machine.

A “general intelligence” is a universal intelligence. A non-general “intelligence” is basically not an intelligence, like a non-universal or non-Turing-complete “computer” basically isn’t a computer.

Pain

Squirrels have nerves, “pain” receptors, and behavioral changes when “feeling pain”.

Robots can have sensors which identify damage and software which outputs different behaviors when the robot is damaged.

Information about damage travels to a squirrel’s brain where some behavior algorithms use it as input. It affects behavior. But that doesn’t mean the squirrel “feels pain” any more than the robot does.

Similarly, information travels from a squirrel’s eyes to its brain where behavioral algorithms take it into account. A squirrel moves around differently depending on what it sees.

Unconscious robots can do that too. Self-driving car prototypes today use cameras to send visual information to a computer which makes the car behave differently based on what the camera sees.

Having sensors which transmit information to the brain (CPU), where it is used by behavior-control software algorithms, doesn’t differentiate animals from present-day robots.

Suffering

Humans interpret information. We can form opinions about what is good or bad. We have preferences, values, likes and dislikes.

Sometimes humans like pain. Pain does not automatically equate to suffering. Whether we suffer due to pain, or due to anything else, depends on our interpretation, values, preferences, etc.

Sometimes humans dislike information that isn’t pain. Although many people like it, the taste of pizza can result in suffering for someone.

Pain and suffering are significantly different concepts.

Pain is merely a type of information sent from sensors to the CPU. This is true for humans and animals both. And it’d be true for robots too if anyone called their self-damage related sensors “pain” sensors.

It’s suffering that is important and bad, not pain. Actually, being born without the ability to feel pain is dangerous. Pain provides useful information. Being able to feel pain is a feature, not a bug, glitch or handicap.

If you could disable your ability to feel pain temporarily, that’d be nice sometimes if used wisely, but permanently disabling it would be a bad idea. Similarly, being able to temporarily disable your senses (smell, touch, taste, sight or hearing) is useful, but permanently disabling them is a bad idea. We invent things like ear and nose plugs to temporarily disable senses, and we have built-in eyelids for temporarily disabling our sight (and, probably more importantly, for eye protection).

Suffering involves wanting something and getting something else. Reality violates what you want. E.g. you feel pain that you don’t want to feel. Or you taste a food that you don’t want to taste. Or your spouse dies when you don’t want them to. (People, occasionally, do want their spouse to die – as always, interpretation determines whether one suffers or not).

Karl Popper emphasized that all observation is theory-laden, meaning that all our scientific evidence has to be interpreted and if we get the interpretation wrong then our scientific conclusions will be wrong. Science doesn’t operate on raw data.

Suffering involves something happening and you interpreting it negatively. That’s another way to look at wanting something (that you would interpret positively or neutrally) but getting something else (that you interpret negatively).

Animals can’t interpret like this. They can’t create opinions of what is good and bad. This kind of thinking involves knowledge creation.

Animals do not form preferences. They don’t do abstract thinking to decide what to value, compare different potential values, and decide what they like. Just like self-driving cars have no interpretation of crashing and do not feel bad about it when they crash. They don’t want to avoid crashing. Their programmers want them to avoid crashing. Evolution doesn’t want things like people do, but it does design animals to (mostly) minimize dying. That involves various more specific designs, like behavior algorithms designed to prevent an animal from starving to death. (Those algorithms are pretty effective but not perfect.)

Genetic evolution is the programmer and designer for animals. Does genetic evolution have values or preferences? No. It has no mind.

Genetic evolution also created humans. What’s different is it gave them the ability to do their own evolution of ideas, thus creating evolved knowledge that wasn’t in their genes, including knowledge about interpretations, preferences, opinions and values.

Animal Appearances

People often assume animals have certain mental states due to superficial appearance. They see facial expressions on animals and think those animals have corresponding emotions, like a human would. They see animals “play” and think it’s the same thing as human play. They see an animal “whimper in pain” and think it’s the same as a human doing that.

People often think their cats or dogs have complex personalities, like an adult human. They also commonly think that about their infants. And they also sometimes think that about chatbots. Many people are fooled pretty easily.

It’s really easy to project your experiences and values onto other entities. But there’s no evidence that animals do anything other than follow their genetic code, which includes sometimes doing genetically-programmed information-gathering behaviors, then writing that information into long-term memory, then using that information in behavior algorithms later in exactly the way the genes say to. (People also get confused by indirection. Genes don’t directly tell animals what to do like slave-drivers. They’re more like blueprints for the physical structure and built-in software of animals.)

Uncertainty

Should we treat animals partially or entirely like humans just in case they can suffer?

Let’s first consider a related question. Should we treat trees and 3-week-old human embryos partially or entirely like humans just in case they can suffer? I say no. If you agree with me, perhaps that will help answer the question about animals.

In short, we have to live by our best understanding of reality. You’re welcome to be unsure, but I have studied stuff, debated and reached conclusions. I have conclusions both about my personal debates and also the state of the debate involving all expert literature.

Also, we’ve been eating animals for thousands of years. It’s an old part of human life, not a risky new invention. Similarly, the mainstream view of human intellectuals, for thousands of years, has been to view animals as incapable of reason or irrational, and as very different than humans. (You can reason with other humans and form e.g. peace treaties or social contracts. You can resolve conflicts with persuasion. You can’t do that with animals.)

But factory farms are not a traditional part of human life. If you just hate factory farms but don’t mind people eating wild animals or raising animals on non-factory farms, then … I don’t care that much. I don’t like factory farms either because I think they harm human health (but so do a lot of other things, including vegetable oil and bad political ideas, so I don’t view factory farms as an especially high priority – the world has a ton of huge problems). I’m a philosopher who mostly cares about the in-principle issue of whether or not animals suffer, which is intellectually interesting and related to epistemology. It’s also relevant to issues like whether or not we should urgently try to push everyone to be vegan, which I think would be a harmful mistake.

Activism

Briefly, most activism related to animal welfare is tribalist, politicized fighting related to local optima. It’s inadequately intellectual, inadequately interested in research and debate about the nature of animals or intelligence, and has inadequate big picture planning about the current world situation and what plan would be very effective and high leverage for improving things. There’s inadequate interest in persuading other humans and reaching agreement and harmony, rather than trying to impose one’s values (like treating animals in particular ways) on others.

Before trying to make big changes, you need e.g. a cause-and-effect diagram about how society works and what all the relevant issues are. And you need to understand the global and local optima well. See Eli Goldratt for more information on project planning.

Also, as is common with causes, activists tend to be biased about their issue. Many people who care about the (alleged) suffering of animals do not care much about the suffering of human children, and vice versa. And many advocates for animals or children don’t care much about the problems facing elderly people in old folks homes, and vice versa. It’s bad to have biased pressure groups competing for attention. That situation makes the world worse. We need truth seeking and reasonable organization, not competitions for attention and popularity. A propaganda and popularity contest isn’t a rational, truth seeking way to organize human effort to make things better.


Elliot Temple | Permalink | Messages (0)

Conflicts of Interest, Poverty and Rationality

I quit the Effective Altruism forum due to a new rule requiring posts and comments be basically put in the public domain without copyright. I had a bunch of draft posts, so I’m posting some of them here with light editing.


Almost everyone believes in conflicts of interest without serious consideration or analysis. It’s not a reasoned opinion based on studying the literature on both sides. They’re almost all ignorant of classical liberal reasoning and could not summarize the other side’s perspective. They also mostly haven’t read e.g. Marx or Keynes. I literally can’t find anyone who has ever attempted to give a rebuttal to Hazlitt’s criticism of Keynes. And I’ve never found an article like “Why Mises, Rand and classical liberalism are wrong and there actually are inherent conflicts of interest”.

(Searching again just now the closest-to-relevant thing I found was an article attacking Rand re conflicts of interest. Its argument is basically that she’s a naive idiot who is contradicting classical liberalism by saying whenever there is a conflict someone is evil/irrational. It shows no awareness that the “no conflicts of interest” view is a classical liberal theory which Rand didn’t invent. It’s an anti-Rand article that claims, without details, that classical liberalism is on its side. It’s a pretty straightforward implication of the liberal harmony view that if there appears to be a conflict of interest or disharmony, someone is making a mistake that could and should be fixed, and fixing the mistake enough to avoid conflict is possible (in practice now, not just in theory) if no one is being evil, irrational, self-destructive, etc.)

There are some standard concerns about liberalism (which are already addressed in the literature) like: John would get more value from my ball than I would. So there’s a conflict of interest: I want to keep my ball, and John wants to have it.

Even if John would get less value from my ball, there may be a conflict of interest: John would like to have my ball, and I’d like to keep it.

John’s interest in taking my ball, even though it provides more value to me than him, is widely seen as illegitimate. The only principle it seems to follow is “I want the most benefit for me”, which isn’t advocated much, though it’s often said to be human nature and said that people will inevitably follow it.

Wanting to allocate resources where they’ll do the most good – provide the most benefit to the most people – is a reasonable, plausible principle. It has been advocated as a good, rational principle. There are intellectual arguments for it.

EA seems to believe in that principle – allocate resources where they’ll do the most good. But EA also tries not to be too aggressive about it and just wants people to voluntarily reallocate some resources to do more good compared to the status quo. EA doesn’t demand a total reallocation of all resources in the optimal way because that’s unpopular and perhaps unwise (e.g. there are downsides to attempting revolutionary changes to society (especially ones that many people will not voluntarily consent to) rather than incremental, voluntary changes, such as the risk of making costly mistakes while making massive changes).

But EA does ask for people to voluntarily make some sacrifices. That’s what altruism is. EA wants people to give up some benefit for themselves to provide larger benefits for others. E.g. give up some money that has diminishing returns for you, and donate it to help poor people who get more utility per dollar than you do. Or donate to a longtermist cause to help save the world, thus benefitting everyone, even though most people aren’t paying their fair share. In some sense, John is buying some extra beer while you’re donating to pay not only your own share but also John’s share of AGI alignment research. You’re making a sacrifice for the greater good while John isn’t.

This narrative, in terms of sacrifices, is problematic. It isn’t seeking win/win outcomes, mutual benefit or social harmony. It implicitly accepts a political philosophy involving conflicts of interest, and it further asks people to sacrifice their interests. By saying that morality and your interests contradict each other, it creates intellectual confusion and guilt.

Liberal Harmony

Little consideration has been given to the classical liberal harmony of interests view, which says no sacrifices are needed. You can do good without sacrificing your own interests, so it’s all upside with no downside.

How?

A fairly straightforward answer is: if John wants my ball and values it more than I do, he can pay me for it. He can offer a price that is mutually beneficial. If it’s worth $10 to me, and $20 to John, then he can offer me $15 for it and we both get $5 of benefit. On the other hand, if I give it to John for free, then John gets $20 of benefit and I get -$10 of benefit (that’s negative benefit).

If the goal is to maximize total utility, John needs to have that ball. Transferring the ball to John raises total utility. However, the goal of maximizing total utility is indifferent to whether John pays for it. As a first approximation, transferring dollars has no effect on total utility because everyone values dollars equally. That isn’t really true but just assume it for now. I could give John the ball and $100, or the ball and $500, and the effect on total utility would be the same. I lose an extra $100 or $500 worth of utility, and John gains it, which has no effect on total utility. Similarly, John could pay me $500 for the ball and that would increase total utility just as much (by $10) as if I gave him the ball for free.
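The arithmetic above can be sketched in a few lines of code. This is a toy model using the made-up numbers from the text (ball worth $10 to me, $20 to John), under the stated first approximation that a transferred dollar is worth exactly $1 of utility to both parties:

```python
# Toy model of the ball example: the total utility change is the same
# whether or not John pays, but a payment makes the trade mutually beneficial.

def trade(my_value, johns_value, price):
    """Return (my_benefit, johns_benefit, total_utility_change).

    Assumes dollars are utility-neutral: a transferred dollar is worth
    exactly $1 of utility to both parties (the first approximation above).
    """
    my_benefit = price - my_value        # I lose the ball, gain the payment
    johns_benefit = johns_value - price  # John gains the ball, pays the price
    return my_benefit, johns_benefit, my_benefit + johns_benefit

# Ball worth $10 to me, $20 to John.
print(trade(10, 20, 15))  # (5, 5, 10) – mutual benefit, +$10 total
print(trade(10, 20, 0))   # (-10, 20, 10) – same +$10 total, but I sacrifice
```

Any price between $10 and $20 splits the $10 gain so both parties come out ahead; the price only changes the split, never the total.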

Since dollar transfers are utility-neutral, they can be used to get mutual benefit and avoid sacrifices. Whenever some physical object is given to a new owner in order to increase utility, some dollars can be transferred in the other direction so that both the old and new owners come out ahead.

There is no need, from the standpoint of total utility, to have any sacrifices.

And these utility-increasing transfers can be accomplished, to the extent people know they exist, by free trade. Free trade already maximizes total utility, conditional on people finding opportunities and the transaction costs being lower than the available gains. People have limited knowledge, they’re fallible, and trade takes effort, so lots of small opportunities that an omniscient, omnipotent God could capture are missed. If we think of this from the perspective of a central planner or philosopher king with unlimited knowledge who can do anything effortlessly, there’d be a lot of extra opportunities compared to the real situation, where people have limited knowledge of who has what, how much utility they’d get from what, etc. This is an important matter, but it isn’t very relevant to the conflicts of interest issue. It basically just explains that some missed opportunities are OK and we shouldn’t expect perfection.

There is a second issue, besides John valuing my physical object more than I do. What if John would value my services more than the disutility of me performing those services? I could clean his bathroom for him, and he’d be really happy. It has more utility for him than I’d lose. So if I clean his bathroom, total utility goes up. Again, the solution is payment. John can give me dollars so that we both benefit, rather than me cleaning his bathroom for free. The goal of raising total utility has no objection to John paying me, and the goal of “no one sacrifices” or “mutual benefit” says it’s better if John pays me.

Valuing Dollars Differently

And there’s a third issue. What if the value of a dollar is different for two people? For a simple approximation, we’ll divide everyone up into three classes: rich, middle class and poor. As long as John and I are in the same class, we value a dollar equally, and my analysis above works. And if John is in a higher class than me, then him paying me for my goods or services will work fine. Possibly he should pay me extra. The potential problems for the earlier analysis come if John is in a lower class than me.

If I’m middle class and John is poor, then dollars are more important to him than to me. So if he gives me $10, that lowers total utility. We’ll treat middle class as the default, so that $10 has $10 of value for me, but for John it has $15 of value. Total utility goes down by $5. Money transfers between economic classes aren’t utility-neutral.

Also, if I simply give John $10, for nothing in return, that’s utility-positive. It increases total utility by $5. I could keep giving John money until we are in the same economic class, or until we have the same amount of money, or until we have similar amounts of money – and total utility would keep going up the whole time. (That’s according to this simple model.)
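This simple model can also be written out as code. The per-class utility weights here are made up to match the text ($1 is worth $1.50 of utility to a poor person, $1 to a middle-class person; the rich weight is an extra assumption for illustration):

```python
# Toy model: dollars have different marginal utility per economic class.
# The weights are hypothetical, chosen to match the example in the text.

UTILITY_PER_DOLLAR = {"poor": 1.5, "middle": 1.0, "rich": 0.5}

def transfer_utility(amount, from_class, to_class):
    """Total utility change when `amount` dollars move between classes."""
    return amount * (UTILITY_PER_DOLLAR[to_class] - UTILITY_PER_DOLLAR[from_class])

print(transfer_utility(10, "poor", "middle"))  # -5.0: John pays me, total utility falls
print(transfer_utility(10, "middle", "poor"))  # +5.0: I give John $10, total utility rises
```

Within one class the difference is zero, which is why the earlier same-class analysis treated dollar transfers as utility-neutral.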

So should money be divided equally or approximately equally? Would that raise total utility and make a better society?

There are some concerns, e.g. that some people spend money more wastefully than others. Some people spend money on tools that increase the productivity of labor – they forego immediate consumption to invest in the future. Others spend it on alcohol and other luxury consumption. If more money is in the hands of investors rather than consumers, society will be better off after a few years. Similarly, it lowers utility to allocate seed corn to people who’d eat it instead of planting it.

Another concern is that if you equal out the wealth everyone has, it will soon become unequal again as some people consume more than others.

Another concern is incentives. The more you use up, the more you’ll be given by people trying to increase total utility? And the more you save, the more you’ll give away to others? If saving/investing benefits others not yourself, people will do it less. If people do it less, total utility will go down.

One potential solution is loans. If someone temporarily has less money, they can be loaned money. They can then use extra, loaned dollars when they’re low on money, thus getting good utility-per-dollar. Later when they’re middle class again, they can pay the loan back. Moving spending to the time period when they’re poor, and moving saving (loan payback instead of consumption) to the time period when they’re middle class, raises overall utility.
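The consumption-smoothing logic behind loans can be illustrated with a toy model. The square-root utility function and the income figures here are hypothetical, chosen only to show diminishing returns per dollar within a period:

```python
import math

# Toy model: utility of consuming c dollars in a period is sqrt(c)
# (diminishing returns). Someone earns $100 in period 1 (poor) and
# $900 in period 2 (middle class again).

def total_utility(consumption):
    return sum(math.sqrt(c) for c in consumption)

no_loan = total_utility([100, 900])    # spend income as it arrives
with_loan = total_utility([500, 500])  # borrow $400 now, repay it later

print(no_loan, with_loan)  # 40.0 vs ~44.72: smoothing consumption raises total utility
```

Because each period has diminishing returns, moving dollars from a flush period to a lean period raises total utility even though total spending is unchanged, which is the point of the loan.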

Poverty

But what if being poor isn’t temporary? Then I’d want to consider what is the cause of persistent poverty.

If the cause is buying lots of luxuries, then I don’t think cash transfers to that person are a good idea. Among other things, it’s not going to raise total utility of society to increase consumption of luxuries instead of capital accumulation. Enabling them to buy even more luxuries isn’t actually good for total utility.

If the cause is being wasteful with money, again giving the person more money won’t raise total utility.

If the cause is bad government policies, then perhaps fixing the government policies would be more efficient than transferring money. Giving money could be seen as subsidizing the bad government policies. It’d be a cost-ineffective way to reduce the harm of the policies, thus reducing the incentive to change the policies, thus making the harmful policies last longer.

If the person is poor because of violence and lack of secure property, then they need rule of law, not cash. If you give them cash, it’ll just get taken.

Can things like rule of law and good governance be provided with mutual benefit? Yes. They increase total wealth so much that everyone could come out ahead. Or put another way, it’s pretty easy to imagine good police and courts, which do a good job, which I’d be happy to voluntarily pay for, just like I currently voluntarily subscribe to various services like Netflix and renting web servers.

Wealth Equality

Would it still be important to even out wealth in that kind of better world where there are no external forces keeping people persistently poor? In general, I don’t think so. If there are no more poor people, that seems good enough. I don’t think the marginal utility of another dollar changes that much once you’re comfortable. People with plenty don’t need to be jealous of people with a bit more. I understand poor people complaining, but people who are upper middle class by today’s standards are fine and don’t need to be mad if some other people have more.

Look at it like this. If I have 10 million dollars and you have 20 million dollars, would it offer any kind of significant increase in total utility to even that out to 15 million each? Nah. We both can afford plenty of stuff – basically anything we want which is mass produced. The marginal differences come in two main forms:

1: Customized goods and services. E.g. you could hire more cooks, cleaners, personal drivers, private jet flights, etc.
2: Control over the economy, e.g. with your extra $10 million you could gain ownership of more businesses than I own.

I don’t care much about the allocation of customized goods and services besides to suggest that total utility may go up with somewhat less of them. Mass production and scalable services are way more efficient.

And I see no particular reason that total utility will go up if we even out the amount of control over businesses that everyone has. Why should wealth be transferred to me so that I can own a business and make a bunch of decisions? Maybe I’ll be a terrible owner. Who knows. How businesses are owned and controlled is an important issue but I basically don’t think that evening out ownership is the answer that will maximize total utility. Put another way, the diminishing returns on extra dollars is so small in this dollar range that personal preferences probably matter more. In other words, how much I like running businesses is a bigger factor than my net worth only being $10 million rather than $15 million. How good I am at running a business is also really important since it’ll affect how much utility the business creates or destroys. If you want to optimize utility more, you’ll have to start allocating specific things to the right people, which is hard, rather than simply trying to give more wealth to whoever has less. Giving more to whoever has less works pretty well at lower amounts but not once everyone is well off.

What about the ultra rich who can waste $44 billion on a weed joke? Should anyone be a trillionaire? I’m not sure it’d matter in a better world where all that wealth was earned by providing real value to others. People that rich usually don’t spend it all anyway. Usually, they barely spend any of it, unless you count giving it to charity as spending. To the extent they keep it, they mostly invest it (in other words, basically, loan it out and let others use it). Having a ton of wealth safeguarded by people who will invest rather than consume it is beneficial for everyone. But mostly I don’t care much and certainly don’t want to defend current billionaires, many of whom are awful and don’t deserve their money, and some of whom do a ton of harm by e.g. buying and then destroying a large business.

My basic claim here is that if everyone were well off, wealth disparities wouldn’t matter so much – we’d all be able to buy plenty of mass produced and scalable stuff, and so the benefit of a marginal dollar would be reasonably similar between people. It’s the existence of poverty that makes a dollar having different utility for different people a big issue.

The Causes of Poverty

If you give cash to poor people, you aren’t solving the causes of poverty. You’re just reducing some of the harm done (hopefully – you could potentially be fueling a drug addiction or getting a thug to come by and steal it or reducing popular resentment of a bad law and thus keeping it in place longer). It’s a superficial (band aid) solution not a root cause solution. If people want to do some of that voluntarily, I don’t mind. But I don’t place primary importance on that stuff. I’m more interested in how to fix the system and whether that can be done with mutual benefit.

From a conflicts of interest perspective, it’s certainly in my interest that human wealth goes up enough for everyone to have a lot. That world sounds way better for me to live in. I think the vast majority will agree. So there’s no large conflict of interest here. Maybe a few current elites would prefer to be a big fish in a smaller pond rather than live in that better world. But I think they’re wrong and that isn’t in their interest. Ideas like that will literally get them killed. Anti-aging research would be going so much better if humanity was so much richer that there were no poor people.

What about people who are really stupid, or disabled, or chronically fatigued or something so they can’t get a good job even in a much better world? Their families can help them. Or their neighbors, church group, online rationality forum, whatever. Failing that, some charity seems fine to fill in a few gaps here and there – and it won’t be a sacrifice because people will be happy to help and will still have plenty for themselves and nothing in particular they want to buy but have to give up. And with much better automation, people will be able to work much shorter hours and one worker will be able to easily support many people. BTW, we may run into trouble with some dirty/unpleasant jobs still being needed, that aren’t automated, and it may be hard to incentivize anyone to do them, since just paying higher wages for those jobs won’t attract much interest when everyone already has plenty.

So why don’t we fix the underlying causes of poverty? People disagree about what those are. People try things that don’t work. There are many conflicting plans with many mistakes. But there’s no conflict of interest here, even between people who disagree on the right plan. It’s in both people’s interests to figure out a plan that will actually work and do that.

Trying to help poor people right now is a local optimum that doesn’t invest in the future. As long as the amount of wealth being used on it is relatively small, it doesn’t matter much. It has upsides so I’m pretty indifferent. But we shouldn’t get distracted from the global optimum of actually fixing the root problems.

Conclusion

I have ideas about how to fix the root causes of poverty, but I find people broadly are unwilling to learn about or debate my ideas (some of which are unoriginal and can be read in fairly well known books by other people). So if I’m right, there’s still no way to make progress. So the deeper root cause of poverty is irrationality, poor debate methods, disinterest in debate, etc. Those things are why no one is working out a good plan or, if anyone does have a good plan, it isn’t getting attention and acceptance.

Basically, the world is full of social hierarchies instead of truth-seeking, so great ideas and solutions often get ignored without rebuttal, and popular ideas (e.g. variants of Keynesian economics) often go ahead despite known refutations and don’t get refined with tweaks to fix all the known ways they’ll fail.

Fix the rationality problem, and get a few thousand people who are actually trying to be rational instead of following social status, and you could change the world and start fixing other problems like poverty. But EA isn’t that. When you post on EA, you’re often ignored. Attention is allocated by virality, popularity, whatever biased people feel like (which is usually related to status), etc. There’s no organized effort to e.g. point out one error in every proposal and not let any ideas get ignored with no counter-argument (and also not proceed with and spend money implementing any ideas with known refutations). There’s no one who takes responsibility for addressing criticism of EA ideas and there’s no particular mechanism for changing EA ideas when they’re wrong – the suggestion just has to gain popularity and be shared in the right social circles. To change EA, you have to market your ideas, impress people, befriend people, establish rapport with people, and otherwise do standard social climbing. Merely being right and sharing ideas, while doing nothing aimed at influence, wouldn’t work (or at least is unlikely to work and shouldn’t be expected to work). And many other places have these same irrationalities that EA has, which overall makes it really hard to improve the world much.


Elliot Temple | Permalink | Messages (0)

Harmony, Capitalism and Altruism

I quit the Effective Altruism forum due to a new rule requiring posts and comments be basically put in the public domain without copyright. I had a bunch of draft posts, so I’m posting some of them here with light editing.


Are there conflicts of interest, or is mutual benefit always possible? This is one of the most important questions in political philosophy.

The belief in conflicts of interest leads to a further question. Who should win when there’s a conflict? One view is that individuals should get worse outcomes for the benefit of the group. Another view is that individuals should be prioritized over the group.

Why would one advocate worse outcomes for the group? That might sound odd initially. One reason is that it seems to be implied by individual freedom and individual rights. If each person has rights and freedom, then he’s free to maximize his interests within that framework. Besides asking nicely and making persuasive arguments (which historically isn’t very effective), there’s nothing you can do to get people to sacrifice their interests for the sake of others.

One consequence is that the altruist-collectivist side of the debate has often considered rejecting individual freedom or some individual rights. What if most people won’t voluntarily act for the benefit of the group, to create a paradise society with the highest overall utility (the most good for the most people, or something along those lines)? Then some people will advocate violently forcing them.

Because there appears to be a conflict between the good of the group and the rights and freedoms of the individual, altruists have often advocated restricting the rights and freedoms of the individual. Sometimes they’ve used violence, in the name of the greater good, and killed millions. That kind of massive violence has never led to good results for the group, though it has led to somewhat good results for a few individuals who end up being wealthy rulers. There have always been questions about whether communist revolutionary leaders actually care about the welfare of everyone or are just seeking power so they can be corrupt and get personal luxuries. Historically, collectivist societies tend to be plagued by noticeably more corruption than the more individualist democracies. Violence and corruption are linked together in some ways. It’s harder to profit from corruption if individuals have rights and society won’t let you get away with violating their rights to take their stuff.

Individualism

A rather different viewpoint is that we’re all fallible, we each individually have limited knowledge, and we can only coordinate with others a limited amount. We shouldn’t try to design paradise by looking at society from the perspective of an omniscient god. We have to consider decision making and action from the point of view of an individual. Instead of trying to have some wise philosopher kings or central planners telling everyone what to do, we need a system where individuals can figure out what to do based on their own situation and the knowledge they have. The central planner approach doesn’t work well because the planners don’t have enough detailed knowledge of each individual’s life circumstances, and can’t do a good enough job of optimizing what’s good for them. To get a good outcome for society, we need to use the brainpower of all its members, not just a few leaders. We have no god with infinite brainpower to lead us.

So maybe the best thing we can do is have each individual pay attention to and try to optimize the good for himself, while following some rules that prevent him from harming or victimizing others.

In total, there is one brain per person. A society with a million members has a million brains. So how much brainpower can be allocated to getting a good outcome for each person? On average, at most, one brain worth of brainpower. What’s the best way to assign brainpower to be used to benefit people? Should everyone line up, then each person looks 6 people to his right, and uses his brainpower to optimize that person’s life? No. It makes much more sense for each brain to be assigned the duty of optimizing the life of the person who that brain is physically inside of.

You can get a rough approximation of a good society by having each person make decisions for themselves and run their own lives, while prohibiting violence, theft and fraud.

Perhaps you can get efficiency gains with organized, centralized planning done by specialists – maybe they can develop ideas that are useful to many people. Or maybe people can share ideas in a decentralized way. There are many extra details to consider.

Coordination

Next let’s consider coordination between people. One model for how to do that is called trade. I make shoes, and you make pants. We’d each like to use a mix of shoes and pants, not just the thing we make. So I trade you some of my shoes for some of your pants. That trade makes us both better off. This is the model of voluntary trade for mutual benefit. It’s also the model of specialization and division of labor. And what if you make hats but I don’t want any hats, but you do want some of my shoes? That is the problem money solves. You can sell hats to someone else for money, then trade money for my shoes, and then I can trade that money to someone else for something I do want, e.g. shirts.

The idea here is that each individual makes sure each trade he participates in benefits him. If a trade doesn’t benefit someone, it’s his job to veto the trade and opt out. Trades only happen if everyone involved opts in. In this way, every trade benefits everyone involved (according to their best judgment using their brainpower, which will sometimes be mistaken), or at least is neutral and harmless for them. So voluntary trade raises the overall good in society – each trade raises the total utility score for that society. (So if you want high total utility, maybe you should think about how to increase the amount of trading that happens. Maybe that would do more good than donating to charity. And that’s a “real” maybe – I mean it’s something worth considering and looking into, not that I already reached a conclusion about it. And if EA has not looked into or considered that much, then I think that’s bad and shows a problem with EA, independent of whether increasing trade is a good plan.)

High Total Utility

It’s fairly hard to score higher on total good by doing something else besides individual rights plus voluntary trade and persuasion (meaning sharing ideas on a voluntary basis).

Asking people to sacrifice their self-interest usually results in lower total good, not higher total good. Minor exceptions, like some small voluntary donations to charity, may help raise total good a bit, though they may not. To the extent people donate due to social signaling or social pressure (rather than actually thinking a charity can use it better than they can), donations are part of some harmful social dynamics that are making society worse.

Donations or Trade

But many people look at this and say “Sometimes Joe could give up a pair of old pants that he doesn’t really need that’s just sitting around taking up space, and give it to Bob, who would benefit from it and actually wear it. The pants only have a small value to Joe, and if he would sacrifice that small value, Bob would get a large value, thus raising overall utility.”

The standard pro-capitalist rebuttal is that there’s scope for a profitable trade here. Also, the scenario was phrased from the perspective of an omniscient god, central planner or philosopher king. Joe needs to actually know that Bob needs a pair of used pants, and Bob needs to know that Joe has an extra pair. And Joe needs to consider the risk that several of the pants he currently wears become damaged in the near future in which case he’d want to wear that old pair again. And Bob needs to consider the risk that he’s about to be gifted a bunch of pairs of new pants from other people so he wouldn’t want Joe’s pants anyway.

But let’s suppose they know about all this stuff and still decide that, on average, taking into account risk and looking at expectation values, it’s beneficial for Bob to have the pants, not Joe. We can put numbers on it. It’s a $2 negative for Joe, but a $10 gain for Bob. That makes a total profit (increase in total utility) of $8 if Joe hands over the pants.

If handing over the pants increases total good by $8, how should that good be divided up? Should $10 of it go to Bob, and -$2 of it go to Joe? That’s hardly fair. Why should Bob get more benefit than the increase in total good? Why should Joe sacrifice and come out behind? It would be better if Bob paid $2 for the pants so Bob benefits by $8 and Joe by $0. That’s fairer. But is it optimal? Why shouldn’t Joe get part of the benefit? As a first approximation, the fairest outcome is that they split the benefit evenly. This requires Bob to pay $6 for the pants. Then Joe and Bob each come out ahead by $4 of value compared to beforehand.
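The arithmetic can be sketched in code. This is just a toy illustration using the example’s numbers; the even-split pricing rule (a price halfway between the two valuations) is the first approximation described above:

```python
# Toy model of splitting the gains from trade, using the pants example.
# Joe values the pants at $2; Bob values them at $10.
joe_value = 2.0   # what keeping the pants is worth to Joe
bob_value = 10.0  # what getting the pants is worth to Bob

surplus = bob_value - joe_value  # total gain from trade: $8

# An even split: price the pants halfway between the two valuations.
price = (joe_value + bob_value) / 2  # $6

joe_gain = price - joe_value   # $4
bob_gain = bob_value - price   # $4

print(surplus, price, joe_gain, bob_gain)  # 8.0 6.0 4.0 4.0
```

At a price of $0 (charity), Joe’s gain would be -$2 and Bob’s $10; at $2, Joe breaks even; the midpoint is what divides the $8 surplus evenly.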

There are objections. How are Joe and Bob going to find out about each other and make this trade happen? Maybe they are friends. But there are a lot more non-friends in society than friends, so if you only trade with your friends then a lot of mutually beneficial trades won’t happen. So maybe a middleman like a used clothing store can help – Joe sells his pants to the used clothing store where Bob later finds and buys them. The benefit is split up between Joe, Bob and the store. As a first approximation, we might want to give a third of the benefit to each party. In practice, used clothing stores often don’t pay very much for clothing and don’t charge very high profit margins, so Bob might get the largest share of the benefit. Also the overall benefit is smaller now because there are new costs like store employees, store lighting, marketing, and the building the store is in. Those costs may be worth it because otherwise Joe and Bob never would have found each other and made a trade, so a smaller benefit is better than no benefit. Those costs are helping deal with the problem of limited knowledge and no omniscient coordinator – coordination and finding beneficial trades actually takes work and has a downside. Some trades that would be beneficial if they took zero effort actually won’t happen, because the cost of the trading partners finding each other (directly or indirectly through a middleman) exceeds the benefit of the trade.

Not Having Enough Money

What if Bob doesn’t have $6 to spare? One possibility is a loan. A loan would probably be from a bank, not from Joe – this is an example of specialization and division of labor – Joe isn’t good at loans and a bank that handles hundreds of loans can have more efficient, streamlined processes. (In practice today, our banks have a lot of flaws and it’s more typical to get small loans from credit cards, which also have flaws. I was making a theoretical point.)

If Bob is having a hard time, but it’s only temporary, then a bank can loan him some money and he can pay it back later with interest. That can be mutually beneficial. But not everyone pays their loans back, so the bank will have to use the limited information it has to assess risk.

Long Term Poverty

What if Bob is unlikely to have money to spare in the future either? What if his lack of funds isn’t temporary? That raises the question of why.

Is Bob lazy and unproductive? Does he refuse to work, refuse to contribute to society and create things of value to others, but he wants things that other people worked to create like pants? That anti-social attitude is problematic under both capitalist and altruistic approaches. Altruism says he should sacrifice, by accepting the disutility of working, in order to benefit others. Capitalism gives him options. He can trade the disutility of working to get stuff like pants, if he wants to. Or he can decide the disutility of continuing to wear old pants is preferable to the disutility of working. Capitalism offers an incentive to work then lets people make their own choices.

It’s better (in some ways) if Joe trades pants to someone who works to create wealth that can benefit society, rather than someone who sits around choosing not to work. Joe should reward and incentivize people who participate in productive labor. That benefits both Joe (because he can be paid for his pants instead of giving them away) and also society (which is better off in aggregate if more people work).

What if Bob is disabled, elderly, or unlucky, rather than lazy? There are many possibilities including insurance, retirement savings, and limited amounts of charitable giving to help out as long as these kinds of problems aren’t too common and there isn’t too much fraud or bad faith (e.g. lying about being disabled or choosing not to save for retirement on purpose because you know people will take pity on you and help you out later, so you can buy more alcohol and lotto tickets now).

Since the central planner approach doesn’t work well, one way to approach altruism is as some modifications on top of a free market. We can have a free market as a primary mechanism, and then encourage significant amounts of charitable sacrifice too. Will that create additional benefit? That is unclear. Why should Joe give his pants to Bob for free instead of selling them for $6 so that Joe and Bob split the benefit evenly? In the general case, he shouldn’t. Splitting the benefit – trade – makes more sense than charity.

Liberalism’s Premise

But pretty much everything I’ve said so far has a hidden premise which is widely disputed. It’s all from a particular perspective. The perspective is sometimes called classical liberalism, individualism, the free market or capitalism.

The hidden premise is that there are no conflicts of interest between people. This is often stated with some qualifiers, like that the people have to be rational, care about the long term not just the short term and live in a free, peaceful society. Sometimes it’s said that there are no innate, inherent or necessary conflicts of interest. The positive way of stating it is the harmony of interests theory.

An inherent conflict would mean Joe has to lose for Bob to win. And the win for Bob might be bigger than the loss for Joe. In other words, for some reason, Bob can’t just pay Joe $6 to split the benefit. Either Joe can get $2 of benefit from keeping that pair of pants, or Bob can get $10 if Joe gives it to him (or perhaps if Bob takes it), and there are no other options, so there’s a conflict. In this viewpoint, there have to be winners and losers. Not everything can be done for mutual benefit using a win/win approach or model. Altruism says Joe probably won’t want to give up the pants for nothing, but he should do it anyway for the greater good.

The hidden premise of altruism is that there are conflicts of interest, while the hidden premise of classical liberalism is that there are no necessary, rational conflicts of interest.

I call these things hidden premises but they aren’t all that hidden. There are books talking about them explicitly and openly. They aren’t well known enough though. The Marxist class warfare theory is a conflicts of interests theory, which has been criticized by the classical liberals who advocated a harmony of interests theory that says social harmony can be created by pursuing mutual benefit with no losers or sacrificial victims (note: it’s later classical liberals who criticized Marxism; classical liberalism is older than Marxism). Altruists sometimes openly state their belief in a conflict of interests viewpoint, but many of them don’t state that or aren’t even aware of it.

Put another way, most people have tribalist viewpoints. The altruists and collectivists think there are conflicts between the individual and group, and they want the group to win the conflict.

People on the capitalist, individualist side of the debate are mostly tribalists too. They mostly agree there are conflicts between the individual and group, and they want the individual to win the conflict.

And then a few people say “Hey, wait, the individual and group, or the individual and other individuals, or the group and the other groups, are not actually in conflict. They can exist harmoniously and even benefit each other.” And then basically everyone dislikes and ignores them, and refuses to read their literature.

The harmony theory of classical liberalism has historical associations with the free market, and my own thinking tends to favor the free market. But you should be able to reason about it from either starting point – individual or group – and reach the same conclusions. Or reason about it in a different way that doesn’t start with a favored group. There are many lines of reasoning that should work fine.

Most pro-business or pro-rich-people type thinking today is just a bunch of tribalism based on thinking there is a conflict and taking sides in the conflict. I don’t like it. I just like capitalism as an abstract economic theory that addresses some problems about coordinating human action given individual actors with limited knowledge. Also I like peace and freedom, but I know most people on most sides do too (or at least they think they do), so that isn’t very differentiating.

I think the most effective way to achieve peace and social harmony is by rejecting the conflicts of interest mindset and explaining stuff about mutual benefit. There is no reason to fight others if one is never victimized or sacrificed. Altruism can encourage people to pick fights because it suggests there are and should be sacrificial victims who lose out for the benefit of others. Tribalist capitalist views also lead to fights because they e.g. legitimize the exploitation of the workers and downplay the reasonable complaints of labor, rather than saying “You’re right. That should not be happening. This must be fixed. We must investigate how that kind of mistreatment by your employers is happening. There are definitely going to be some capitalism-compatible fixes; let’s figure them out.”

You can start with group benefit and think about how to get it given fallible actors with limited knowledge and limited brainpower. We won’t be able to design a societal system that gets a perfect outcome. We need systems that let people do the best with the knowledge they have, and let them coordinate, share knowledge, etc. We’ll want them to be able to trade when one has something that someone else could make better use of and vice versa. We’ll want money to deal with the double coincidence of wants problem. We’ll want stores with used goods functioning as middle men, as well as online marketplaces where individuals can find each other. (By the way, Time Will Run Back by Henry Hazlitt is a great book about a socialist leader who tries to solve some problems his society has and reinvents capitalism. It’s set in a world with no capitalist countries and where knowledge of capitalism had been forgotten.)

More Analysis

Will we want people to give stuff to each other, for nothing in return, when someone else can benefit more from it? Maybe. Let’s consider.

First, it’s hard to tell how much each person can benefit from something. How do I know that Bob values this object more than I do? If we both rate it on a 1-10 scale, how do we know our scales are equivalent? There’s no way to measure value. A common measure we use is comparing something to dollars. How many dollars would I trade for it and be happy? I can figure out some number of dollars I value more than the object, and some number of dollars I value less than the object, and with additional effort I can narrow down the range.

So how can we avoid the problem of mistakenly giving something to someone who actually gets less utility from it than I do? He could pay dollars for it. If he values it more in dollars than I do, then there’s mutual benefit in selling it to him. He could also offer an object in trade for it. What matters then is that we each value what we get more than what we give up. I might actually value the thing I trade away more than the other guy does, and there could still be mutual benefit.

Example:

I have pants that I value at $10 and Bob values at $5. For the pants, Bob offers to trade me artwork which I value at $100 and he values at $1. I value both the pants and artwork more than Bob does, but trading the pants to him still provides mutual benefit.

But would there be more total benefit if Bob simply gave me the artwork and I kept the pants? Sure. And what if I gave Bob $50 for the art? That has the same total benefit. On the assumption that we each value a dollar equally, transfers of dollars never change total benefit. (That’s not a perfect assumption but it’s often a reasonable approximation.) But transfers of dollars are useful, even when they don’t affect total utility, because they make trades mutually beneficial instead of having a winner and a loser. Transferring dollars also helps prevent trades that reduce total utility: if Bob will only offer me like $3 for the pants, which I value at $10, then we’ve figured out that the pants benefit me more than him and I should keep them.
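As a sketch of the example’s arithmetic (a toy illustration, assuming we each value a dollar equally): a dollar transfer shifts how the benefit is divided between us, but leaves the total unchanged.

```python
# Pants/artwork trade from the example. Positive numbers are value gained.
# I value: pants $10, artwork $100. Bob values: pants $5, artwork $1.
my_pants_value, my_art_value = 10, 100
bob_pants_value, bob_art_value = 5, 1

# The trade: I give Bob the pants, he gives me the artwork.
my_gain = my_art_value - my_pants_value     # +90 for me
bob_gain = bob_pants_value - bob_art_value  # +4 for Bob
total = my_gain + bob_gain                  # 94 total benefit

# Add a $50 payment from me to Bob: individual gains shift,
# but total benefit is unchanged.
payment = 50
my_gain_paid = my_gain - payment    # 40
bob_gain_paid = bob_gain + payment  # 54
assert my_gain_paid + bob_gain_paid == total
```

Note both of my gains are positive here even though I value both goods more than Bob does – mutual benefit depends on each party valuing what they get more than what they give up, not on who values each item most.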

BTW, if you want to help someone who has no dollars, you should consider giving him dollars, not other goods. Then see if he’ll pay you enough to trade for the other goods. If he won’t, that’s because he thinks he can get even more value by using the dollars in some other way.

Should I do services for Bob whenever the value to him is higher than the disutility to me? What if I have very wonderful services that many people want – like I’m a great programmer or chef – and I end up working all day every day for nothing in return? That would create a disincentive to develop skills. From the perspective of designing a social system or society, it works better to set up good incentives instead of demanding people act contrary to incentives. We don’t want a conflict or misalignment between incentives and desired behaviors, or we’ll end up with people doing undesirable but incentivized behavior. We’ll consider doing that if it’s unavoidable, but we should at least minimize it.

Social Credit

There’s a general problem when you don’t do dollar-based trading: what if some people keep giving and giving (to people who get higher utility for goods or services) but don’t get a similar amount of utility or more in return? If people just give stuff away whenever it will benefit others a bunch, wealth and benefit might end up distributed very unequally. How can we make things fairer? (I know many pro-capitalist people defend wealth inequality as part of the current tribalist political battle lines. And I don’t think trying to make sure everyone always has exactly equal amounts of wealth is a good idea. But someone giving a lot and getting little, or just significant inequality in general, is a concern worthy of some analysis.)

We might want to develop a social credit system (but I actually mean this in a positive way, despite various downsides of the Chinese Communist Party’s social credit system). We might want to keep score in some way to see who is contributing the most to society and make sure they get some rewards. That’ll keep incentives aligned well and prevent people from having bad luck and not getting much of the total utility.

So we have this points system where every time you benefit someone you get points based on the total utility created. And people with higher points should be given more stuff and services. Except, first of all, how? Should they be given stuff even if it lowers total utility? If the rule is always do whatever raises total utility, how can anyone deviate to help out the people with high scores (or with high scores relative to their personal utility)?

Second, aren’t these points basically just dollars? Dollars are a social credit system which tracks who contributed the most. In the real world, many things go wrong with this and people’s scores sometimes end up wildly inaccurate, just like in China where their social credit system sometimes assigns people inaccurate scores. But if you imagine an ideal free market, then dollars basically track how much people contribute to total utility. And then you spend the dollars – lower your score – to get benefits for yourself. If someone helps you, you give him some of your dollars. He gave a benefit, and you got a benefit, so social credit points should be transferred from you to him. Then if everyone has the same number of dollars, that basically also means everyone got the same amount of personal utility or benefit.

What does it mean if someone has extra dollars? What can we say about rich people? They are the most altruistic. They have all these social credit points but they didn’t ask for enough goods and services in return to use up their credit. They contributed more utility to others than they got for themselves. And that’s why pro-capitalist reasoning sometimes says good things about the rich.

But in the real world today, people get rich in all kinds of horrible ways because no country has a system very similar to the ideal free market. And a ton of pro-capitalist people seem to ignore that. They like and praise the rich people anyway, instead of being suspicious of how they got rich. They do that because they’re pro-rich, pro-greed tribalists or something. Some of them aspire to one day be rich, and want to have a world that benefits the rich so they can keep that dream alive and imagine one day getting all kinds of unfair benefits for themselves. And then the pro-altruism and pro-labor tribalists yell at them, and they yell back, and nothing gets fixed. As long as both sides believe in conflicts of interest, and are fighting over which interest groups should be favored and disfavored in what ways, then I don’t expect political harmony to be achieved.

Free Markets

Anyway, you can see how a free market benefits the individual, benefits the group, solves various real problems about coordination and separate, fallible actors with limited knowledge, and focuses on people interacting only for mutual benefit. Interacting for mutual benefit – in ways with no conflict of interest – safeguards both against disutility for individuals (people being sacrificed for the alleged greater good) and also against disutility for the group (people sacrificing for the group in ineffective, counter-productive ways).

Are there benefits that can’t be achieved via harmony and interaction only for mutual benefit? Are there inherent conflicts where there must be losers in order to create utopia? I don’t think so, and I don’t know of any refutations of the classical liberal harmony view. And if there are such conflicts, what are they? Name one in addition to making some theoretical arguments. Also, if we’re going to create utopia with our altruism … won’t that benefit every individual? Who wouldn’t want to live in utopia? So that sounds compatible with the harmony theory and individual mutual benefit.

More Thoughts

People can disagree about what gives how much utility to Bob.

People can lie about how much utility they get from stuff.

People can have preferences about things other than their own direct benefit. I can say it’s high utility to have a walkable downtown even if I avoid walking. Someone else can disagree about city design. I can say it’s high utility for me if none of my neighbors are Christian (disclaimer: not my actual opinion). Others can disagree about what the right preferences and values are.

When preferences involve other people or public stuff instead of just your own personal stuff, then people will disagree about what’s good.

What can be done about all this? A lot can be solved by: whatever you think is high utility, pay for it. As a first approximation, whoever is willing to pay more is the person who would get the most utility from getting the thing or getting their way on the issue.

Paying social credit points, aka money, for things you value shows you actually value them that much. It prevents fraud and it enables comparison between people’s preferences. If I say “I strongly care” and you say “I care a lot”, then who knows who cares more. Instead, we can bid money/social credit to see who will bid higher.

People often have to estimate how much utility they would get from a good or service, before they have it. These estimates are often inaccurate. Sometimes they’re wildly inaccurate. Often, they’re systematically biased. How can we make the social system resilient to mistakes?

One way is to disincentivize mistakes instead of incentivizing them. Consider a simple, naive system, where people tend to be given more of whatever they value. The higher they value it, the more of it they get. Whoever likes sushi the most will be allocated the most sushi. Whoever likes gold bars the most will be allocated the most gold bars. Whoever is the best at really liking stuff, and getting pleasure, wellbeing or whatever other kind of utility from it, gets the most stuff. There is an incentive here to highly value lots of stuff, even by mistake. When in doubt, just decide that you value it a lot – maybe you’ll like it, and there’s no downside for you in making a high bid in terms of how much utility you say it gives you. Your utility estimates are like a bank account with unlimited funds, so you can spend lavishly.

To fix this, we need to disincentivize mistakes. If you overbid for something – if you say it has higher utility for you than it actually does – that should have some kind of downside for you, such as a reduced ability to place high bids in the future.

How can we accomplish this? A simple model is everyone is assigned 1,000,000 utility points at birth. When you want a good or service, you bid utility points (fractions are fine). You can’t bid more than you have. If your bid is accepted, you transfer those utility points to the previous owner or the service provider, and you get the good or service. Now you have fewer utility points to bid in the future. If you are biased and systematically overbid, you’ll run out of points and you’ll get less stuff for your points than you could have.
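The simple model above can be sketched in a few lines of code (the class and member names here are my own illustration):

```python
# Minimal model of the utility-points system: everyone starts with
# 1,000,000 points and transfers points when a bid is accepted.

class Member:
    def __init__(self, name, points=1_000_000):
        self.name = name
        self.points = points

    def bid(self, seller, amount):
        """Transfer points to the seller if the bid is affordable.
        Returns True if the trade happened, False otherwise."""
        if amount > self.points:
            return False  # you can't bid more than you have
        self.points -= amount
        seller.points += amount
        return True

alice = Member("Alice")
bob = Member("Bob")

# Bob bids 1,500 points for a service Alice provides.
assert bob.bid(alice, 1_500)
print(alice.points, bob.points)  # 1001500 998500

# Systematic overbidding is self-limiting: the ledger enforces the budget.
assert not bob.bid(alice, 10_000_000)
```

The key design feature is that bids spend down a finite balance, so overestimating your utility has a real cost to you in future purchasing power.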

If you’re low on utility points, you can provide goods or services to others to get more. There is an incentive to provide whatever good or services would provide the most utility to others, especially ones that you can provide efficiently or cheaply. Cost/benefit and specialization matter.

There are many ways we could make a more complex system. Do you have to plan way ahead? Maybe people should get 1,000 more utility points every month so they always have a guaranteed minimum income. Maybe inheritance or gifting should be allowed – those have upsides and downsides. If inheritance and gifting are both banned, then there’s an incentive to spend all your utility points before you die – even for little benefit – or else they’re wasted. And there’s less incentive to earn more utility points if you have enough, but would like to get more to help your children or favorite charity, but you can’t do gifting or inheritance. There’d also be people who pay 10,000 points for a marble to circumvent the no gifting rule. Or I might try to hire a tutor to teach my son, and pay him with my utility points rather than my son having to spend his own points.

Anyway, to a reasonable approximation, this is the system we already have, and utility points are called dollars. Dollars, in concept, are a system of social credit that track how much utility you’ve provided to others minus how much you’ve received from others. They keep score so that some people don’t hog a ton of utility.

There are many ways that, in real life, our current social system differs from this ideal. In general, those differences are not aspects of capitalist economic theory nor of dollars. They are deviations from the free market which let people grow rich by government subsidies, fraud, biased lawmaking, violence, and various other problems.

Note: I don’t think a perfect free market would automatically bring with it utopia. I just think it’s a system with some positive features and which is compatible with rationality. It doesn’t actively prevent people from living good, rational lives, solving problems, and making progress. Allowing problem solving and helping with some problems (like coordination between people, keeping track of social credit, and allocating goods and services) is a great contribution from an economic system. Many other specific solutions are still needed. I don’t like the people who view capitalism as a panacea instead of a minimal framework and enabler. I also don’t think any of the alternative proposals, besides a free market, are any good.


Elliot Temple | Permalink | Messages (0)

Altruism Contradicts Liberalism

I quit the Effective Altruism forum due to a new rule requiring posts and comments be basically put in the public domain without copyright. I had a bunch of draft posts, so I’m posting some of them here with light editing.


Altruism means (New Oxford Dictionary):

the belief in or practice of disinterested and selfless concern for the well-being of others

Discussion about altruism often involves being vague about a specific issue. Is this selfless concern self-sacrificial? Is it bad for the self or merely neutral? This definition doesn’t specify.

The second definition does specify but isn’t for general use:

Zoology behavior of an animal that benefits another at its own expense

Multiple dictionaries fit the pattern of not specifying self-sacrifice (or not) in the main definition, then bringing it up in an animal-focused definition.

New Oxford’s thesaurus is clear. Synonyms for altruism include:

unselfishness, selflessness, self-sacrifice, self-denial

Webster’s Third suggests altruism involves lack of calculation, and doesn’t specify whether it’s self-sacrificial:

uncalculated consideration of, regard for, or devotion to others' interests sometimes in accordance with an ethical principle

EA certainly isn’t uncalculated. EA does stuff like mathematical calculations and cost/benefit analysis. Although the dictionary may have meant something more like shrewd, self-interested, Machiavellian calculation. If so, they really shouldn’t try to put so much meaning into one fairly neutral word like that without explaining what they mean.

Macmillan gives:

a way of thinking or behaving that shows you care about other people and their interests more than you care about yourself

Caring more about their interests than yourself suggests self-sacrifice, a conflict of interest (where decisions favoring you or them must be made), and a lack of win-win solutions or mutual benefit.

Does EA have any standard, widely read and accepted literature which:

  • Clarifies whether it means self-sacrificial altruism or whether it believes its “altruism” is good for the self?
  • Refutes (or accepts!?) the classical liberal theory of the harmony of men’s interests.

Harmony of Interests

Is there any EA literature regarding altruism vs. the (classical) liberal harmony of interests doctrine?

EA believes in conflicts of interest between men (or between individual and total utility). For example, William MacAskill writes in The Definition of Effective Altruism:

Unlike utilitarianism, effective altruism does not claim that one must always sacrifice one’s own interests if one can benefit others to a greater extent.[35] Indeed, on the above definition effective altruism makes no claims about what obligations of benevolence one has.

I understand EA’s viewpoint to include:

  • There are conflicts between individual utility and overall utility (the impartial good).
  • It’s possible to altruistically sacrifice some individual utility in a way that makes overall utility go up. In simple terms, you give up $100 but it provides $200 worth of benefit to others.
  • When people voluntarily sacrifice some individual utility to altruistically improve overall utility, they should do it in (cost) effective ways. They should look at things like lives saved per dollar. Charities vary dramatically in how much overall utility they create per dollar donated.
  • It’d be good if some people did some effective altruism sometimes. EA wants to encourage more of this, although it doesn’t want to be too pressuring, so it does not claim that large amounts of altruism are a moral obligation for everyone. If you want to donate 10% of your income to cost effective charities, EA will say that’s great instead of saying you’re a sinner because you’re still deviating from maximizing overall utility. (EA also has elements which encourage some members to donate a lot more than 10%, but that’s another topic.)
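The “lives saved per dollar” comparison in the third point can be sketched with made-up numbers. The charities and figures here are hypothetical, chosen only to illustrate the kind of calculation EA does, not taken from any EA source:

```python
# Hypothetical charities with made-up cost-per-life-saved figures,
# illustrating how "lives saved per dollar" comparisons work.
charities = {
    "charity_a": 5_000,    # dollars per life saved (hypothetical)
    "charity_b": 50_000,
    "charity_c": 500_000,
}

def lives_saved(charity, donation):
    # Simple linear model: donation divided by cost per life.
    return donation / charities[charity]

donation = 100_000
for name in charities:
    print(name, lives_saved(name, donation))
# With these numbers, the same donation saves 100x more lives
# at charity_a than at charity_c.
```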

Finally, unlike utilitarianism, effective altruism does not claim that the good equals the sum total of wellbeing. As noted above, it is compatible with egalitarianism, prioritarianism, and, because it does not claim that wellbeing is the only thing of value, with views on which non-welfarist goods are of value.[38]

EA is compatible with many views on how to calculate overall utility, not just the view that you should add up every individual utility. In other words, EA is not based on a specific overall/impersonal utility function. EA also doesn’t advocate that individuals have any particular individual utility function, nor claim that the world population currently has a certain distribution of individual utility functions.

All of this contradicts the classical liberal theory of the harmony of men’s (long term, rational) interests, and doesn’t engage with it. They just seem unaware of the literature they’re disagreeing with (or they’re aware and refusing to debate it on purpose?), even though some of it is well known and easy to find.

Total Utility Reasoning and Liberalism

I understand EA to care about total utility for everyone, and to advocate that people altruistically do things which have lower utility for themselves but create higher total utility. One potential argument is that if everyone did this then everyone would have higher individual utility.

A different potential approach to maximizing total utility is the classical liberal theory of the harmony of men’s interests. It says, in short, that there is no conflict between following self-interest and maximizing total utility (for rational men in a rational society). When there appears to be a conflict, so that one or the other must be sacrificed, there is some kind of misconception, distortion or irrationality involved. That problem should be addressed rather than accepted as an inherent part of reality that requires sacrificing either individual or total utility.

According to the liberal harmony view, altruism claims there are conflicts between the individual and society which actually don’t exist. Altruism therefore stirs up conflict and makes people worse off, much like the Marxist class warfare ideology (which is one of the standard opponents of the harmony view). Put another way, spreading the idea of conflicts of interest is an error that lowers total utility. The emphasis should be on harmony, mutual benefit and win/win solutions, not on altruism and self-sacrifice.

It’s really bad to ask people to make tough, altruistic choices if such choices are unnecessary mistakes. It’s bad to tell people that getting a good outcome for others requires personal sacrifices if it actually doesn’t.

Is there any well-known, pre-existing EA literature which addresses this, including a presentation of the harmony view that its advocates would find reasonably acceptable? I take it that EA rejects the liberal harmony view for some reason, which ought to be written down somewhere. (Or they’re quite ignorant, which would be very unreasonable for the thought leaders who developed and lead EA.) I searched the EA forum and it looks like the liberal harmony view has never been discussed, which seems concerning. I also did a web search and found nothing regarding EA and the liberal harmony of interests theory. I don’t know where or how else to do an effective EA literature search.


Elliot Temple | Permalink | Messages (0)

AGI Alignment and Karl Popper

I quit the Effective Altruism forum due to a new rule requiring posts and comments be basically put in the public domain without copyright. I had a bunch of draft posts, so I’m posting some of them here with light editing.


On certain premises, which are primarily related to the epistemology of Karl Popper, artificial general intelligences (AGIs) aren’t a major threat. I tell you this as an expert on Popperian epistemology, which is called Critical Rationalism.

Further, approximately all AGI research is based on epistemological premises which contradict Popperian epistemology.

In other words, AGI research and AGI alignment research are both broadly premised on Popper being wrong. Most of the work being done is an implicit bet that Popper is wrong. If Popper is right, many people are wasting their careers, misdirecting a lot of donations, incorrectly scaring people about existential dangers, etc.

You might expect that alignment researchers would have done a literature review, found semi-famous relevant thinkers like Popper, and written refutations of them before being so sure of themselves and betting so much on the particular epistemological premises they favor. I haven’t seen anything of that nature, and I’ve looked a lot. If it exists, please link me to it.

To engage with and refute Popper requires expertise about Popper. He wrote a lot, and it takes a lot of study to understand and digest it. So you have three basic choices:

  • Do the work.
  • Rely on someone else’s expertise who agrees with you.
  • Rely on someone else’s expertise who disagrees with you.

How can you use the expertise of someone who disagrees with you? You can debate with them. You can also ask them clarifying questions, discuss issues with them, etc. Many people are happy to help explain ideas they consider important, even to intellectual opponents.

To rely on the expertise of someone on your side of the debate, you endorse literature they wrote. They study Popper, they write down Popper’s errors, and then you agree with them. Then when a Popperian comes along, you give them a couple citations instead of arguing the points yourself.

There is literature criticizing Popper. I’ve read a lot of it. My judgment is that the quality is terrible. And it’s mostly written by people who are pretty different than the AI alignment crowd.

There’s too much literature on your side to read all of it. What you need (to avoid doing a bunch of work yourself) is someone similar enough to you – someone likely to reach the same conclusions you would reach – to look into each thing. One person is potentially enough. So if someone who thinks similarly to you reads a Popper criticism and thinks it’s good, it’s somewhat reasonable to rely on that instead of investigating the matter yourself.

Keep in mind that the stakes are very high: potentially lots of wasted careers and dollars.

My general take is you shouldn’t trust the judgment of people similar to yourself all that much. Being personally well read regarding diverse viewpoints is worthwhile, especially if you’re trying to do intellectual work like AGI-related research.

And there aren’t a million well known and relevant viewpoints to look into, so I think it’s reasonable to just review them all yourself, at least a bit via secondary literature with summaries.

There are much more obscure viewpoints that are worth at least one person looking into, but most people can’t and shouldn’t try to look into most of those.

Gatekeepers like academic journals and university hiring committees are really problematic, but the least you should do is vet ideas that made it through the gatekeeping. Popper’s did, and he was also respected by various smart people, like Richard Feynman.

Mind Design Space

The AI Alignment view claims something like:

Mind design space is large and varied.

Many minds in mind design space can design other, better minds in mind design space. Which can then design better minds. And so on.

So, a huge number of minds in mind design space work as starting points to quickly get to extremely powerful minds.

Many of the powerful minds are also weird, hard to understand, very different than us including regarding moral ideas, possibly very goal directed, and possibly significantly controlled by their original programming (which likely has bugs and literally specifies different things, including different goals, than the designers intended).

So AGI is dangerous.

There is an epistemology which contradicts this, based primarily on Karl Popper and David Deutsch. It says that actually mind design space is like computer design space: sort of small. This shouldn’t be shocking since brains are literally computers, and all minds are software running on literal computers.

In computer design, there is a concept of universality or Turing completeness. In summary, when you start designing a computer and adding features, after very few features you get a universal computer. So there are only two types of computers: extremely limited computers and universal computers. This makes computer design space less interesting or relevant. We just keep building universal computers.

Every computer has a repertoire of computations it can perform. A universal computer has the maximal repertoire: it can perform any computation that any other computer can perform. You might expect universality to be difficult to get and require careful designing, but it’s actually difficult to avoid if you try to make a computer powerful or interesting.
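As an illustration of how few features suffice for universality (my example, not from the text): Brainfuck is a famous language with only eight commands, yet it is Turing-complete given an unbounded tape. A minimal interpreter fits in a few dozen lines:

```python
def run_bf(program, input_bytes=b""):
    """Interpret Brainfuck: 8 commands, yet Turing-complete (given an
    unbounded tape), showing how quickly a design crosses the
    universality threshold."""
    tape = [0] * 30_000  # finite approximation of the unbounded tape
    ptr = 0              # data pointer
    pc = 0               # program counter
    inp = iter(input_bytes)
    out = []
    # Precompute matching brackets for the loop commands.
    stack, jumps = [], {}
    for i, c in enumerate(program):
        if c == "[":
            stack.append(i)
        elif c == "]":
            j = stack.pop()
            jumps[i], jumps[j] = j, i
    while pc < len(program):
        c = program[pc]
        if c == ">": ptr += 1
        elif c == "<": ptr -= 1
        elif c == "+": tape[ptr] = (tape[ptr] + 1) % 256
        elif c == "-": tape[ptr] = (tape[ptr] - 1) % 256
        elif c == ".": out.append(tape[ptr])
        elif c == ",": tape[ptr] = next(inp, 0)
        elif c == "[" and tape[ptr] == 0: pc = jumps[pc]
        elif c == "]" and tape[ptr] != 0: pc = jumps[pc]
        pc += 1
    return bytes(out)

# A loop that computes 8 * 8 + 1 = 65 and prints "A" (ASCII 65).
print(run_bf("++++++++[>++++++++<-]>+."))  # b'A'
```

The point isn’t that Brainfuck is practical; it’s that universality is hard to avoid once a design has loops and memory, which is the analogy the mind-design-space argument draws on.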

Universal computers do vary in other design elements, besides what computations they can perform, such as how large they are. This is fundamentally less important than what computations they can do, but does matter in some ways.

There is a similar theory about minds: there are universal minds. (I think this was first proposed by David Deutsch, a Popperian intellectual.) The repertoire of things a universal mind can think (or learn, understand, or explain) includes anything that any other mind can think. There’s no reasoning that some other mind can do which it can’t do. There’s no knowledge that some other mind can create which it can’t create.

Further, human minds are universal. An AGI will, at best, also be universal. It won’t be super powerful. It won’t dramatically outthink us.

There are further details but that’s the gist.

Has anyone on the AI alignment side of the debate studied, understood and refuted this viewpoint? If so, where can I read that (and why did I fail to find it earlier)? If not, isn’t that really bad?


Elliot Temple | Permalink | Messages (0)