Animal Welfare Overview

Is animal welfare a key issue that we should work on? If so, what are productive things to do about it?

This article is a fairly high level overview of some issues, which doesn’t attempt to explain e.g. the details of Popperian epistemology.

Human Suffering

Humans suffer and die, today, a lot. Look at what’s going on in Iran, Ukraine, Yemen, North Korea, Venezuela and elsewhere. This massive human suffering should, in general, be our priority before worrying about animals much.

People lived in terrible conditions, and died, building stadiums for the World Cup in Qatar. Here’s a John Oliver video about it. They were lied to, exploited, defrauded, and basically (temporarily) enslaved etc. People sometimes die in football (soccer) riots too. I saw a headline recently that a second journalist died in Qatar for the World Cup. FIFA is a corrupt organization that likes dictators. Many people regard human death as an acceptable price for sports entertainment, and many more don’t care to know the price.

There are garment workers in Los Angeles (USA) working in terrible conditions for illegally low wages. There are problems in other countries too. Rayon manufacturing apparently poisons nearby children enough to damage their intelligence due to workers washing off toxic chemicals in local rivers. (I just read that one article; I haven’t really researched this but it seems plausible and I think many industries do a lot of bad things. There are so many huge problems in human civilization that even reading one article per issue would take a significant amount of time and effort. I don’t have time to do in-depth research on most of the issues. Similarly, I have not done in-depth research on the Qatar World Cup issues.)

India has major problems with orphans. Chinese people live under a tyrannical government. Human trafficking continues today. Drug cartels exist. Millions of people live in prisons. Russia uses forced conscription for its war of aggression in Ukraine.

These large-scale, widespread problems causing human suffering seem more important than animal suffering. Even if you hate how factory farms treat animals, you should probably care more that a lot of humans live in terrible conditions including lacking their freedom in major ways.

Intelligence

Humans have general, universal intelligence. They can do philosophy and science.

Animals don’t. All the knowledge involved in animal behavior comes from genetic evolution. They’re like robots created by their genes and controlled by software written by their genes.

Humans can do evolution of ideas in their minds to create new, non-genetic knowledge. Animals can’t.

Evolution is the only known way of creating knowledge. It involves replication with variation and selection.

Whenever there is an appearance of design (e.g. a wing or a hunting behavior), knowledge is present.

People have been interested in the sources of knowledge for a long time, but it’s a hard problem and there have been few proposals. Proposals include evolution, intelligent design, creationism, induction, deduction and abduction.

If non-evolutionary approaches to knowledge creation actually worked, it would still seem that humans can do them and animals can’t – because there are human scientists and philosophers but no animal scientists or philosophers.

Human learning involves guessing or brainstorming (replication with variation) plus criticism and rejecting refuted ideas (selection). Learning by evolution means learning by error correction, which we do by creating many candidate ideas (like a gene pool) and rejecting ideas that don’t work well (like animals with bad mutations being less likely to have offspring).

Also, since people very commonly get this wrong: Popperian epistemology says we literally learn by evolution. It is not a metaphor or analogy. Evolution literally applies to both genes and memes. It’s the same process (replication with variation and selection). Evolution could also work with other types of replicators. For general knowledge creation, the replicator has to be reasonably complex, interesting, flexible or something (the exact requirements aren’t known).
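To make that structure concrete, here is a minimal sketch in Python. The "ideas" are just numbers and the scoring function standing in for criticism is invented for this example, so it only illustrates the shape of replication, variation and selection, not genuine knowledge creation.

```python
import random

def score(idea):
    # Hypothetical selection criterion standing in for criticism:
    # ideas closer to 42 survive, others are rejected.
    return -abs(idea - 42)

def evolve(population, generations=100):
    for _ in range(generations):
        # Replication with variation: each idea produces a slightly mutated copy.
        variants = [idea + random.uniform(-1, 1) for idea in population]
        # Selection: keep the best-scoring ideas, discard the rest.
        population = sorted(population + variants, key=score, reverse=True)[:len(population)]
    return population

start = [random.uniform(0, 100) for _ in range(10)]
print(evolve(start)[:3])  # the surviving "ideas" end up near 42
```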

Types of Algorithms

All living creatures with brains have Turing-complete computers for brains. A squirrel is a reasonable example animal. Let’s not worry about bacteria or worms. (Earthworms apparently have some sort of brain with only around 300 neurons. I haven’t researched it.)

Humans have more neurons, but the key difference between humans and squirrels is the software our brains run.

We can look at software algorithms in three big categories.

  1. Fixed, innate algorithm
  2. “Learning” algorithms which read and write data in long-term memory
  3. Knowledge-creation algorithm (evolution, AGI)

Fixed algorithms are inborn. The knowledge comes from genes. They’re complete and functional with no practice or experience.

If you keep a squirrel in a lab and never let it interact with dirt, and it still does behaviors that seem designed for burying nuts in dirt, that indicates a fixed, innate algorithm. These algorithms can lead to nonsensical behavior when taken out of context.

There are butterflies which do multi-generation migrations. How do they know where to go? It’s in their genes.

Why do animals “play”? To “learn” hunting, fighting, movement, etc. During play, they try out different motions and record data about the results. Later, their behavioral algorithms read that data. Their behavior depends partly on what that data says, not just on inborn, genetic information.

Many animals record data for navigation purposes. They look around, then can find their way back to the same spot (long-term memory). They can also look around, then avoid walking into obstacles (short-term memory).

Chess-playing software can use fixed, innate algorithms. A programmer can specify rules which the software follows.

Chess-playing software can also involve “learning”. Some software plays many practice games against itself, records a bunch of data, and uses that data in order to make better moves in the future. The chess-playing algorithm takes into account data that was created after birth (after the programmer was done).
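As a rough illustration of the difference, here is a small, self-contained Python sketch. A made-up number-guessing game stands in for chess; the point is only that the fixed algorithm's behavior is fully specified in advance, while the "learning" algorithm's behavior depends on data it recorded after "birth".

```python
import random

HIDDEN_TARGET = 7  # the "environment" both algorithms interact with

def fixed_algorithm():
    # Fixed, innate algorithm: always plays the move its "genes" specify.
    return 5

class LearningAlgorithm:
    def __init__(self):
        self.memory = []  # long-term data store, empty at "birth"

    def practice(self, rounds=50):
        for _ in range(rounds):
            guess = random.randint(0, 10)
            # Record data about each attempt and its result.
            self.memory.append((guess, abs(guess - HIDDEN_TARGET)))

    def play(self):
        # Later behavior reads the recorded data: pick the guess with the smallest error.
        return min(self.memory, key=lambda record: record[1])[0]

learner = LearningAlgorithm()
learner.practice()
print(fixed_algorithm(), learner.play())
```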

I put “learning” in scare quotes because the term often refers to knowledge creation (evolution) which is different than an algorithm that writes data to long-term data storage then uses it later. When humans learn at school, it’s not the same thing as e.g. a “reinforcement learning” AI algorithm or what animals do.

People often confuse algorithms involving long-term memory, which use information not available at birth, with knowledge creation. They call both “learning” and “intelligent”.

They can be distinguished in several ways. Is there replication with variation and selection, or not? If you think there’s evolution, can it create a variety of types of knowledge, or is it limited to one tiny niche? If you believe a different epistemology, you might look for the presence of inductive thinking (but Popper and others have refuted induction). There are other tests and methods that can be used to identify new knowledge as opposed to the downstream consequences of existing knowledge created by genetic evolution, by a programmer, or by some other sort of designer.

Knowledge

What is knowledge? It’s information which is adapted to a purpose. When you see the appearance of design, knowledge is present. Understanding the source of that knowledge is often important. Knowledge is one of the more important and powerful things in the universe.

Binary Intelligence or Degrees?

The word “intelligence” is commonly used with two different meanings.

One is a binary distinction. I’m intelligent but a rock or tree isn’t.

The other meaning is a difference in degree or amount of intelligence: Alice is smarter than Joe but dumber than Feynman.

Degrees of intelligence can refer to a variety of different things that we might call logical skill, wisdom, cleverness, math ability, knowledge, being well spoken, scoring well on tests (especially IQ tests, but others too), getting high grades, having a large vocabulary, being good at reading, being good at scientific research or being creative.

There are many different ways to use your intelligence. Some are more effective than others. Using your intelligence effectively is often called being highly intelligent.

Speaking very roughly, many people believe a chimpanzee or dog is kind of like a 50 IQ person – intelligent, but much less intelligent than almost all humans. They think a squirrel passes the binary intelligence distinction to be like a human not a rock, but just has less intelligence. However, they usually don’t think a self-driving car, chat bot, chess software or video game enemy is intelligent at all – that’s just an algorithm which has a lot of advantages compared to a rock but isn’t intelligent. Some other people do think that present-day “AI” software is intelligent, just with a low degree of intelligence.

My position is that squirrels are like self-driving cars: they aren’t intelligent but the software algorithm can do things that a rock can’t. A well designed software algorithm can mimic intelligence without actually having it.

The reason algorithms are cleverer than rocks is they have knowledge in them. Creating knowledge is the key thing intelligence does that makes it seem intelligent. An algorithm uses built-in knowledge, while intelligences can create their own knowledge.

Basically, anything with knowledge seems either intelligent or intelligently-designed to us (speaking loosely and counting evolution as an intelligent designer). People tend to assume animals are intelligent rather than intelligently-designed because they don’t understand evolution or computation very well, and because the animals seem to act autonomously, and because of the similarities between humans and many animals.

Where does knowledge come from? Evolution. To get knowledge, algorithms need to either evolve or have an intelligent designer. An intelligent designer, such as a human software developer, creates the knowledge by evolving ideas about the algorithm within his brain. So the knowledge always comes from evolution. Evolution is the only known, unrefuted solution to how new knowledge can be created.

(General intelligence may be an “algorithm” in the same kind of sense that e.g. “it’s all just math”. If you want to call it an algorithm, then whenever I write “algorithm” you can read it as e.g. “algorithm other than general intelligence”.)

Universality

There are philosophical reasons to believe that humans are universal knowledge creators – meaning they can create any knowledge that any knowledge creator can create. The Popperian David Deutsch has written about this.

This parallels how the computer I’m typing on can compute anything that any computer can compute. It’s Turing-complete, a.k.a. universal. (Except quantum computers have extra abilities, so actually my computer is a universal classical computer.)

This implies a fundamental similarity between everything intelligent (they all have the same repertoire of things they can learn). There is no big, bizarre, interesting mind design space like many AGI researchers believe. Instead, there are universally intelligent minds and not much else of note, just like there are universal computers and little else of interest. If you believe in mind design space like Eliezer Yudkowsky does, it’s easy to imagine animals are in it somewhere. But if the only options for intelligence are basically universality or nothing, then animals have to be like humans or else unintelligent – there’s nowhere else in mind design space for them to be. If the only two options are basically that animals are intelligent in the same way as humans (universal intelligence), or aren’t intelligent, then most people will agree that animals aren’t intelligent.

This also has a lot of relevance to concerns about super-powerful, super-intelligent AGIs turning us all into paperclips. There’s actually nothing in mind design space that’s better than human intelligence, because human intelligence is already universal. Just like how there’s nothing in classical computer design space that’s better than a universal computer or Turing machine.

A “general intelligence” is a universal intelligence. A non-general “intelligence” is basically not an intelligence, like a non-universal or non-Turing-complete “computer” basically isn’t a computer.

Pain

Squirrels have nerves, “pain” receptors, and behavioral changes when “feeling pain”.

Robots can have sensors which identify damage and software which outputs different behaviors when the robot is damaged.

Information about damage travels to a squirrel’s brain where some behavior algorithms use it as input. It affects behavior. But that doesn’t mean the squirrel “feels pain” any more than the robot does.

Similarly, information travels from a squirrel’s eyes to its brain where behavioral algorithms take it into account. A squirrel moves around differently depending on what it sees.

Unconscious robots can do that too. Self-driving car prototypes today use cameras to send visual information to a computer which makes the car behave differently based on what the camera sees.

Having sensors which transmit information to the brain (CPU), where it is used by behavior-control software algorithms, doesn’t differentiate animals from present-day robots.
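For example, a present-day robot’s damage handling can be sketched in a few lines of Python. The sensor value and the behaviors are invented for illustration; the point is that damage information is just another input to a behavior-control algorithm.

```python
def read_damage_sensor():
    # Placeholder for polling real hardware; returns a damage level from 0 to 1.
    return 0.8

def choose_behavior(damage_level):
    # Behavior-control algorithm: damage information changes the output behavior.
    if damage_level > 0.5:
        return "retreat and favor the damaged limb"
    return "continue normal movement"

print(choose_behavior(read_damage_sensor()))
```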

Suffering

Humans interpret information. We can form opinions about what is good or bad. We have preferences, values, likes and dislikes.

Sometimes humans like pain. Pain does not automatically equate to suffering. Whether we suffer due to pain, or due to anything else, depends on our interpretation, values, preferences, etc.

Sometimes humans dislike information that isn’t pain. Although many people like it, the taste of pizza can result in suffering for someone.

Pain and suffering are significantly different concepts.

Pain is merely a type of information sent from sensors to the CPU. This is true for humans and animals both. And it’d be true for robots too if anyone called their self-damage related sensors “pain” sensors.

It’s suffering that is important and bad, not pain. Actually, being born without the ability to feel pain is dangerous. Pain provides useful information. Being able to feel pain is a feature, not a bug, glitch or handicap.

If you could disable your ability to feel pain temporarily, that’d be nice sometimes if used wisely, but permanently disabling it would be a bad idea. Similarly, being able to temporarily disable your senses (smell, touch, taste, sight or hearing) is useful, but permanently disabling them is a bad idea. We invent things like ear and nose plugs to temporarily disable senses, and we have built-in eyelids for temporarily disabling our sight (and, probably more importantly, for eye protection).

Suffering involves wanting something and getting something else. Reality violates what you want. E.g. you feel pain that you don’t want to feel. Or you taste a food that you don’t want to taste. Or your spouse dies when you don’t want them to. (People, occasionally, do want their spouse to die – as always, interpretation determines whether one suffers or not).

Karl Popper emphasized that all observation is theory-laden, meaning that all our scientific evidence has to be interpreted and if we get the interpretation wrong then our scientific conclusions will be wrong. Science doesn’t operate on raw data.

Suffering involves something happening and you interpreting it negatively. That’s another way to look at wanting something (that you would interpret positively or neutrally) but getting something else (that you interpret negatively).

Animals can’t interpret like this. They can’t create opinions of what is good and bad. This kind of thinking involves knowledge creation.

Animals do not form preferences. They don’t do abstract thinking to decide what to value, compare different potential values, and decide what they like. Just like self-driving cars have no interpretation of crashing and do not feel bad about it when they crash. They don’t want to avoid crashing. Their programmers want them to avoid crashing. Evolution doesn’t want things like people do, but it does design animals to (mostly) minimize dying. That involves various more specific designs, like behavior algorithms designed to prevent an animal from starving to death. (Those algorithms are pretty effective but not perfect.)

Genetic evolution is the programmer and designer for animals. Does genetic evolution have values or preferences? No. It has no mind.

Genetic evolution also created humans. What’s different is it gave them the ability to do their own evolution of ideas, thus creating evolved knowledge that wasn’t in their genes, including knowledge about interpretations, preferences, opinions and values.

Animal Appearances

People often assume animals have certain mental states due to superficial appearance. They see facial expressions on animals and think those animals have corresponding emotions, like a human would. They see animals “play” and think it’s the same thing as human play. They see an animal “whimper in pain” and think it’s the same as a human doing that.

People often think their cats or dogs have complex personalities, like an adult human. They also commonly think that about their infants. And they also sometimes think that about chatbots. Many people are fooled pretty easily.

It’s really easy to project your experiences and values onto other entities. But there’s no evidence that animals do anything other than follow their genetic code, which includes sometimes doing genetically-programmed information-gathering behaviors, then writing that information into long-term memory, then using that information in behavior algorithms later in exactly the way the genes say to. (People also get confused by indirection. Genes don’t directly tell animals what to do like slave-drivers. They’re more like blueprints for the physical structure and built-in software of animals.)

Uncertainty

Should we treat animals partially or entirely like humans just in case they can suffer?

Let’s first consider a related question. Should we treat trees and 3-week-old human embryos partially or entirely like humans just in case they can suffer? I say no. If you agree with me, perhaps that will help answer the question about animals.

In short, we have to live by our best understanding of reality. You’re welcome to be unsure, but I have studied stuff, debated and reached conclusions. I have conclusions both about my personal debates and also the state of the debate involving all expert literature.

Also, we’ve been eating animals for thousands of years. It’s an old part of human life, not a risky new invention. Similarly, the mainstream view of human intellectuals, for thousands of years, has been to view animals as irrational or incapable of reason, and as very different from humans. (You can reason with other humans and form e.g. peace treaties or social contracts. You can resolve conflicts with persuasion. You can’t do that with animals.)

But factory farms are not a traditional part of human life. If you just hate factory farms but don’t mind people eating wild animals or raising animals on non-factory farms, then … I don’t care that much. I don’t like factory farms either because I think they harm human health (but so do a lot of other things, including vegetable oil and bad political ideas, so I don’t view factory farms as an especially high priority – the world has a ton of huge problems). I’m a philosopher who mostly cares about the in-principle issue of whether or not animals suffer, which is intellectually interesting and related to epistemology. It’s also relevant to issues like whether or not we should urgently try to push everyone to be vegan, which I think would be a harmful mistake.

Activism

Briefly, most activism related to animal welfare is tribalist, politicized fighting related to local optima. It’s inadequately intellectual, inadequately interested in research and debate about the nature of animals or intelligence, and has inadequate big picture planning about the current world situation and what plan would be very effective and high leverage for improving things. There’s inadequate interest in persuading other humans and reaching agreement and harmony, rather than trying to impose one’s values (like treating animals in particular ways) on others.

Before trying to make big changes, you need e.g. a cause-and-effect diagram about how society works and what all the relevant issues are. And you need to understand the global and local optima well. See Eli Goldratt for more information on project planning.

Also, as is common with causes, activists tend to be biased about their issue. Many people who care about the (alleged) suffering of animals do not care much about the suffering of human children, and vice versa. And many advocates for animals or children don’t care much about the problems facing elderly people in old folks homes, and vice versa. It’s bad to have biased pressure groups competing for attention. That situation makes the world worse. We need truth seeking and reasonable organization, not competitions for attention and popularity. A propaganda and popularity contest isn’t a rational, truth seeking way to organize human effort to make things better.



Don’t Legalize Animal Abuse

This article discusses why mistreating animals is bad even if they’re incapable of suffering.

I don’t think animals can suffer, but I’m not an activist about it. I’m not trying to change how the world treats animals. I’m not asking for different laws. I don’t emphasize this issue. I don’t generally bring it up. I care more about epistemology. Ideas about how minds work are an application of some of my philosophy.

On the whole, I think people should treat animals better, not worse. People should also treat keyboards, phones, buildings and their own bodies better. It’s (usually) bad to smash keyboards, throw phones, cause and/or ignore maintenance problems in buildings, or ingest harmful substances like caffeine, alcohol or smoke.

Pets

Legalizing animal abuse would have a variety of negative consequences if nothing else about the world changed. People would do it more because legalizing it would make it more socially legitimate. I don’t see much upside. Maybe more freedom to do scientific testing on animals would be good but, if so, that could be accomplished with a more targeted change that only applies to science – and lab animals should be treated well in ways compatible with the experiment to avoid introducing extra variables like physiological stress.

On the other hand, legalizing animal abuse would actually kill human children. Abused dogs are more likely to bite both humans and dogs.

When spouses fight, one can vandalize the other’s car because it’s shared property. Vandalizing a spouse’s dog would be worse. A dog isn’t replaceable like a car. If vandalizing a dog was treated like regular property damage, the current legal system wouldn’t protect against it well enough.

Why aren’t dogs replaceable? Because they have long-term memory which can’t be backed up and put into a new dog (compare with copying all your data to a new phone). If you’ve had a dog for five years and spent hundreds of hours around it, that’s a huge investment, but society doesn’t see that dog as being worth tens of thousands of dollars. If you were going to get rid of animal abuse laws, you’d have to dramatically raise people’s perception of the monetary value of pets, which is currently way too low.

Dogs, unlike robots we build, cannot have their memory reset. If a dog starts glitching out (e.g. becomes more aggressive) because someone kicks it, you can’t just reinstall and start over, and you wouldn’t want to because the dog has valuable data in it. Restoring a backup from a few days before the abuse would work pretty well but isn’t an option.

You’d be more careful with how you use your Mac or phone if you had no backups of your data and no way to undo changes. You’d be more careful with what software you installed if you couldn’t uninstall it or turn it off. Dogs are like that. And people can screw up your dog’s software by hitting your dog.

People commonly see cars and homes as by far the most valuable things that they own (way ahead of e.g. jewelry, watches and computers for most people). They care about their pets but they don’t put them on that list. They can buy a new dog for a few hundred dollars and they know that (many of them wouldn’t sell their dog for $10,000, but they haven’t all considered that). They don’t want to replace their dog, but many people don’t calculate monetary value correctly. The reason they don’t want to replace their dog is that their current dog has far more value than a new one would. People get confused about it because they can’t sell their dog for anywhere near its value to them. Pricing unique items with no market for them is problematic. Also, if they get a new dog, it will predictably gain value over time.

It’s like how it’s hard to put a price on your diary or journal. If someone burned it, that would be really bad. But how many dollars of damages is that worth? It’s hard to say. A diary has unique, irreplaceable data in it, but there’s no accurate market price because it’s worth far more to the owner than to anyone else.

Similarly, if someone smashes your computer and you lose a bunch of data, you will have a hard time getting appropriate compensation in court today. Being paid the price of a new computer is something the courts understand. And courts will take into account emotional suffering and trauma. They’re worse at taking into account the hassle and time cost of dealing with the whole problem, including going to court. And courts are bad at taking into account the data loss for personal files with no particular commercial value. A dog is like that – it contains a literal computer that literally contains personal data files with no backups. But we also have laws against animal abuse which help protect pet owners, because we recognize in some ways that pets are important. Getting rid of those laws without changing a bunch of other things would make things worse.

Why would you want to abuse a pet anyway? People generally abuse pets because they see the animals as proxies for humans (it can bleed and yelp) or they want to harm the pet’s owner. So that’s really bad. They usually aren’t hitting a dog in the same way they punch a hole in their wall. They know the dog matters more than the wall or their keyboard.

Note: I am not attempting to give a complete list of reasons that animal abuse is bad even if animals are incapable of suffering. I’m just making a few points. There are other issues too.

Factory Farms

Factory farms abuse animals for different reasons. They’re trying to make money. They’re mostly callous not malicious. Let’s consider some downsides of factory farms that apply even if animals cannot suffer.

When animals are sick or have stress hormones, they’re worse for people to eat. This is a real issue involving e.g. cortisol. Tuna fishing reality shows talk about it affecting the quality and therefore price of their catch, and the fishermen go out of their way to reduce fish’s physiological stress.

When animals eat something – like corn or soy – some of it may stay in their body and affect humans who later eat that animal. It can e.g. change the fatty acid profile of the meat.

I don’t like to eat at restaurants with dirty kitchens. I don’t want to eat from dirty farms either. Some people are poor enough for that risk to potentially be worth it, but many aren’t. And in regions where people are poorer, labor is generally cheaper too, so keeping things clean and well-maintained is cheaper there, so reasonably clean kitchens and factories broadly make sense everywhere. (You run into problems in e.g. poorer areas of the U.S. that are stuck with costly laws designed for richer areas of the U.S. They can have a bunch of labor-cost-increasing laws without enough wealth to reduce the impact of the laws.)

I don’t want cars, clothes, books, computers or furniture from dirty factories. Factories should generally be kept neat, tidy and clean even if they make machine tools, let alone consumer products, let alone food. Reasonable standards for cleanliness differ by industry and practicality, but some factory farms are like poorly-kept factories. And they produce food, which is one of the products where hygiene matters most.

On a related note, E. coli is a problem mainly because the industry mixes together large amounts of e.g. lettuce or beef. One infected head of lettuce can contaminate hundreds of other heads of lettuce due to mixing (for pre-made salad mixes rather than buying whole heads of lettuce). They generally don’t figure out which farm had the problem and make it clean up its act. And the government spends money to trace E. coli problems in order to protect public health. This subsidizes having dirtier farms and then mixing lettuce together in processing. The money the government spends on public health, along with the lack of accountability, helps enable farms to get away with being dirtier.

These are just a sample of the problems with factory farms that are separate issues from animal suffering. I’d suggest that animal activists should emphasize benefits for humans more. Explain to people how changes can be good for them, instead of being a sacrifice for the sake of animals. And actually focus reforms on pro-human changes. Even if animals can suffer, there are lots of changes that could be made which would be better for both humans and animals; reformers should start with those.



Activists Shouldn’t Fight with Anyone

This article is about how all activists, including animal activists, should stop fighting with people in polarizing ways. Instead, they should take a more intellectual approach to planning better strategies and avoiding fights.

Effective Activism

In a world with so many huge problems, there are two basic strategies for reforming things which make sense. Animal activism doesn’t fit either one. It’s a typical example, like many other types of activism, of what not to do (even if your cause is good).

Good Strategy 1: Fix things unopposed.

Work on projects where people aren’t fighting to stop you. Avoid controversy. This can be hard. You might think that helping people eat enough Vitamin A to avoid going blind would be uncontroversial, but if you try to solve the problem with golden rice then you’ll get a lot of opposition (because of GMOs). If it seems to you like a cause should be uncontroversial, but it’s not, you need to recognize and accept that in reality it’s controversial, no matter how dumb that is. Animal activism is controversial, whether or not it should be. So is abortion, global warming, immigration, economics, and anything else that sounds like a live political issue.

A simple rule of thumb is that 80% of people who care, or who will care before you’re done, need to agree with you. 80% majorities are readily available on many issues, but extremely unrealistic in the foreseeable future on many other issues like veganism or ending factory farming.

In the U.S. and every other reasonably democratic country, if you have an 80% majority on an issue, it’s pretty easy to get your way. If you live in a less democratic country, like Iran, you may have to consider a revolution if you have an 80% majority but are being oppressed by violent rulers. A revolution with a 52% majority is a really bad idea. Pushing hard for a controversial cause in the U.S. with a 52% majority is a bad idea too, though not as bad as a revolution. People should prioritize not fighting with other people a lot more than they do.

Put another way: Was there election fraud in the 2020 U.S. Presidential election? Certainly. There is every year. Did the fraud cost Trump the election? Maybe. Did Trump have an 80% majority supporting him? Absolutely not. There is nowhere near enough fraud to beat an 80% majority. If fraud swung the election, it was close anyway, so it doesn’t matter much who won. You need accurate enough elections that 80% majorities basically always win. The ability for the 80% to get their way is really important for enabling reform. But you shouldn’t care very much whether 52% majorities get their way. (For simplicity, my numbers are for popular vote elections with two candidates, which is not actually how U.S. Presidential elections work. Details vary a bit for other types of elections.)

Why an 80% majority instead of 100%? It’s more practical and realistic. Some people are unreasonable. Some people believe in UFOs or flat Earth. The point is to pick a high number that is realistically achievable when you persuade most people. We really do have 80% majorities on tons of issues, like whether women should be allowed to vote (which used to be controversial, but isn’t today). Should murder be illegal? That has well over an 80% majority. Are seatbelts good? Should alcohol be legal? Should we have some international trade instead of fully closing our borders? Should parents keep their own kids instead of having the government raise all the children? These things have over 80% majorities. The point is to get way more than 50% agreement but without the difficulties of trying to approach 100%.

Good Strategy 2: Do something really, really, really important.

Suppose you can’t get a clean, neat, tidy, easy or unopposed victory for your cause. And you’re not willing to go do something else instead. Suppose you’re actually going to fight with a lot of people who oppose you and work against them. Is that ever worth it or appropriate? Rarely, but yes, it could be. Fighting with people is massively overrated but I wouldn’t say to never do it. Even literal wars can be worth it, though they usually aren’t.

If you’re going to fight with people, and do activism for an opposed cause, it better be really really worth it. That means it needs high leverage. It can’t just be one cause because you like that cause. The cause needs to be a meta cause that will help with many future reforms. For example, if Iran has a revolution and changes to a democratic government, that will help them fix many, many issues going forward, such as laws about homosexuality, dancing, or female head coverings. It would be problematic to pick a single issue, like music, and have a huge fight in Iran to get change for just that one thing. If you’re going to have a big fight, you need bigger rewards than one single reform.

If you get some Polish grocery stores to stop selling live fish, that activism is low leverage. Even if it works exactly as intended, and it’s an improvement, it isn’t going to lead to a bunch of other wonderful results. It’s focused on a single issue, not a root cause.

If you get factory farms to change, it doesn’t suddenly get way easier to do abortion-related reforms. You aren’t getting at root causes, such as economic illiteracy or corrupt, lobbyist-influenced lawmakers, which are contributing to dozens of huge problems. You aren’t figuring out why so many large corporations are so awful and fixing that, which would improve factory farms and dozens of other things too. You aren’t making the world significantly more rational, nor making the government significantly better at implementing reforms.

If you don’t know how to do a better, more powerful reform, stop and plan more. Don’t just assume it’s impossible. The world desperately needs more people willing to be good intellectuals who study how things work and create good plans. Help with that. Please. We don’t need more front-line activists working on ineffective causes, fighting with people, and being led by poor leaders. There are so many front-line activists who are on the wrong side of things, fighting others who merely got lucky to be on the right side (but tribalist fighting is generally counter-productive even if you’re on the right side).

Fighting with people tends to be polarizing and divisive. It can make it harder to persuade people. If the people who disagree with you feel threatened, and think you might get the government to force your way of life on them, they will dig in and fight hard. They’ll stop listening to your reasoning. If you want a future society with social harmony, agreement and rational persuasion, you’re not helping by fighting with people and trying to get laws made that many other people don’t want, which reduces their interest in constructive debate.

Root Causes

The two strategies I brought up are doing things with little opposition or doing things that are really super important and can fix many, many things. Be very skeptical of that second one. It’s a form of the “greater good” argument. Although fighting with people is bad, it could be worth it for the greater good if it fixes some root causes. But doing something with clear, immediate negatives in the name of the greater good rarely actually works out well.

There are other important ways to strategize besides looking for lack of opposition or importance. You can look for root causes instead of symptoms. Factory farms are downstream of various other problems. Approximately all the big corporations suck, not just the big farms. Why is that? What is going on there? What is causing that? People who want to fix factory farms should look into this and get a better understanding of the big picture.

I do believe that there are many practical, reasonable ways that factory farms could be improved, just like I think keeping factories clean in general is good for everyone from customers to workers to owners. What stops the companies from acting more reasonably? Why are they doing things that are worse for everyone, including themselves? What is broken there, and how is it related to many other types of big companies also being broken and irrational in many ways?

I have some answers but I won’t go into them now. I’ll just say that if you want to fix any of this stuff, you need a sophisticated view on the cause-and-effect relationships involved. You need to look at the full picture not just the picture in one industry before you can actually make a good plan. You also should find win/win solutions and present them in terms of mutual benefit instead of approaching it as activists fighting against enemies. You should do your best to proceed in a way where you don’t have enemies. Enemy-based activism is mostly counter-productive – it mostly makes the world worse by increasing fighting.

There are so many people who are so sure they’re right and so sure their cause is so important … and many of them are on opposite sides of the same cause. Don’t be one of those people. Stop rushing, stop fighting, read Eli Goldratt, look at the big picture, make cause-and-effect trees, make transition trees, etc., plan it all out, aim for mutual benefit, and reform things in a way with little to no fighting. If you can’t find ways to make progress without fighting, that usually means you don’t know what you’re doing and are making things worse not better (assuming a reasonably free, democratic country – this is less applicable in North Korea).



Food Industry Problems Aren’t Special

Many factory farms are dirty and problematic, but so are the workplaces for many (illegally underpaid) people sewing garments in Los Angeles.

Factory farms make less healthy meat, but the way a lot of meat is processed also makes it less healthy. And vegetable oil may be doing more harm to people’s health than meat. Adding artificial colorings and sweeteners to food does harm too, or piling in extra sugar. The world is full of problems.

You may disagree with me about some of these specific problems. My general point stands even if, actually, vegetable oil is fine. If you actually think the world is not full of huge problems, then we have a more relevant disagreement. In that case, you may wish to read some of my other writing about problems in the world or debate me.

The food industry, to some extent, is arrogantly trying to play God. They want to break food down into components (like salt, fat, sugar, protein, color and nutrients) and then recombine the components to build up ideal, cheap foods however they want. But they don’t know what they’re doing. They will remove a bunch of vitamins, then add back in a few that they know are important, while failing to add others back in. They have repeatedly hurt people by doing stuff like this. It’s especially dangerous when they think they know all the components needed to make baby formula, but they don’t – e.g. they will just get fat from soy and think they replaced fat with fat so it’s fine. But soy has clear differences from natural breast milk, such as a different fatty acid profile. They also have known for decades that the ratio of omega 3 and 6 fatty acids you consume is important, but then they put soy oil in many things and give people unhealthy ratios with too much omega 6. Then they also put out public health advice saying to eat more omega 3’s but not to eat fewer omega 6’s, even though they know it’s the ratio that matters and they know tons of people get way too much omega 6 (including infants).

Mixing huge batches of lettuce or ground beef (so if any of it has E. coli, the whole batch is contaminated) is similar to how Amazon commingles inventory from different sellers (including themselves), doesn’t track what came from who, and thereby encourages fraud because when fraud happens they have no idea which seller sent in the fraudulent item. That doesn’t stop Amazon from blaming and punishing whichever seller was getting paid for the particular sale when the fraud was noticed, even though he’s probably innocent. Due to policies like these, Amazon has a large amount of fraud on its platforms. Not all industries, in all areas, have these problems. E.g. I have heard of milk samples being tested from dairy farms, before mixing the milk together from many farms, so that responsibility for problems can be placed on the correct people. Similarly, if they wanted to, Amazon could keep track of which products are sent in from which sellers in order to figure out which sellers send in fraudulent items.

Activists who care to consider the big picture should wonder why there are problems in many industries, and wonder what can be done about them.

For example, enforcing fraud laws better would affect basically all industries at once, so that is a candidate idea that could have much higher leverage.

Getting people to stop picking fights without 80% majorities could affect fighting, activism, tribalism and other problems across all topics, so it potentially has very high leverage.

Limiting the government’s powers to favor companies would apply to all industries.

Basic economics education could help people make better decisions about any industry and better judge what sort of policies are reasonable for any industry.

Dozens more high-leverage ideas could be brainstormed. The main obstacle to finding them is that people generally aren’t actually trying. Activists tend to be motivated by some concrete issue in one area (like helping animals or the environment, or preventing cancer, or helping kids or the elderly or battered women), not by abstract issues like higher leverage or fight-avoiding reforms. If a proposal is low-leverage and involves fighting with a lot of people (or powerful people) who oppose it, then in general it’s a really bad idea. Women’s shelters or soup kitchens are low leverage but largely unopposed, so that’s a lot better than a low leverage cause which many people think is bad. But it’s high leverage causes that have the potential to dramatically improve the world. Many low-leverage, unopposed causes can add up and make a big difference too. High leverage but opposed causes can easily be worse than low leverage unopposed causes. If you’re going to oppose people you really ought to aim for high leverage to try to make it worth it. Sometimes there’s an approach to a controversial cause that dramatically reduces opposition, and people should be more interested in seeking those out.



What Is Intelligence?

Intelligence is a universal knowledge creation system. It uses the only known method of knowledge creation: evolution. Knowledge is information adapted to a purpose.

The replicators that evolve are called ideas. They are varied and selected too.

How are they selected? By criticism. Criticisms are themselves ideas which can be criticized.

To evaluate an idea and a criticism of it, you also need context including goals (or purposes or values or something else about what is good or bad). Context and goals are ideas too. They too can be replicated, varied, selected/criticized, improved, thought about, etc.

A criticism explains that an idea fails at a goal. An idea, in isolation from any purpose, cannot be evaluated effectively. There need to be success and failure criteria (the goal, and other results that aren’t the goal) in order to evaluate and criticize an idea. The same idea can work for one goal and fail for another goal. (Actually, approximately all ideas work for some goals and fail for others.)

How do I know this? I regard it as the best available theory given current knowledge. I don’t know a refutation of it and I don’t know of any viable alternative.

Interpretations of information (such as observations or sense data) are ideas too.

Emotions are ideas too but they’re often connected to preceding physiological states which can make things more complicated. They can also precede changes in physiological states.

We mostly use our ideas in a subconscious way. You can think of your brain like a huge factory and your conscious mind like one person who can go around and do jobs, inspect workstations, monitor employees, etc. But at any time, there’s a lot of work going on which he isn’t seeing. The conscious mind has limited attention and needs to delegate a ton of stuff to the subconscious after figuring out how to do it. This is done with e.g. practice to form habits – a habit means your subconscious is doing at least part of the work so it can seem (partly) automatic from the perspective of your consciousness.

Your conscious mind could also be thought of as a small group of people at the factory who often stick together but can split up. There are claims that we can think about up to roughly seven things at once, or keep up to seven separate things in active memory, or that we can do some genuine multi-tasking (meaning doing multiple things at once, instead of doing only one at a time but switching frequently).

Due to limited conscious attention and limited short-term memory, one of the main things we do is take a few ideas and combine them into one new idea. That’s called integration. We take what used to be several mental units and create a new, single mental unit which has a lot of the value of its components. But it’s just one thing so it costs less attention than the previous multiple things. By repeating this process, we can get advanced ideas.

If you combine four basic ideas, we can call the new idea a level 1 idea. Combine four level 1 ideas and you get a level 2 idea. Keep going and you can get a level 50 idea eventually. You can also combine ideas that aren’t from the same level, and you can combine different numbers of ideas.

This may create a pyramid structure with many more low level ideas than high level ideas. But it doesn’t necessarily have to. Say you have 10 level 0 ideas at the foundation. You can make 210 different combinations of four ideas from those original 10. You can also make 45 groups of two ideas, 120 groups of three, and 252 groups of five. (This assumes the order of the ideas doesn’t matter, and each idea can only be used once in a combination, or else you’d get even more combinations.)
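Those counts are just binomial coefficients (the number of ways to choose k ideas from 10 when order doesn’t matter and there are no repeats), which can be checked directly:

```python
from math import comb

# Number of ways to choose k ideas from 10 level-0 ideas.
for k in (2, 3, 4, 5):
    print(k, comb(10, k))  # prints 45, 120, 210, 252
```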



EA Judo and Economic Calculation

Effective Altruism (EA) claims to like criticism. They have a common tactic: they say thanks for this criticism. Our critics make us stronger. We have used this information to fix the weakness. So don’t lower your opinion of EA due to the criticism; raise it due to one more weakness being fixed.

The term “EA Judo” comes from Notes on Effective Altruism by Michael Nielsen which defines EA Judo as:

strong critique of any particular "most good" strategy improves EA, it doesn't discredit it

This approach doesn’t always work. Some criticisms are hard to fix. Some criticisms require changes that EAs don’t want to make, such as large changes to which causes they put money towards.

EA could sometimes be criticized for not having figured out a criticism themselves sooner, which shows a lack of intellectual rigor, leadership, organization, effort, something … which is much harder to fix than addressing one concrete weakness.

They like criticisms like “X is 10% more important, relative to Y, than you realized” at which point they can advise people to donate slightly more to X which is an easy fix. But they don’t like criticism of their methodology or criticism of how actually one of their major causes is counter-productive and should be discontinued.

The same pretending-to-like-criticism technique was the response Ludwig von Mises got from the socialists 100 years ago.

Mises told them a flaw in socialism (the economic calculation problem). At first they thought they could fix it. They thanked him for helping make socialism better.

Their fixes were superficial and wrong. He explained that their fixes didn’t work, and also why his criticism was deeper and more fundamental than they had recognized.

So then, with no fixes in sight, they stopped speaking with him. Which brings us to today when socialists, progressives and others still don’t really engage with Mises or classical liberalism.

EA behaves the same way. When you make criticisms that are harder to deal with, you get ignored. Or you get thanked and engaged with in non-impactful ways, then nothing much changes.



“Small” Errors, Frauds and Violences

People often don’t want to fix “small” problems. They commonly don’t believe that consistently getting “small” stuff right would lead to better outcomes for big problems.

“Small” Intellectual Errors

For example, people generally don’t think avoiding misquotes, incorrect cites, factual errors, math errors, grammar errors, ambiguous sentences and vague references would dramatically improve discussions.

They’ve never tried it, don’t have the skill to try it even if they wanted to, and have no experience with what it’d be like.

But they believe it’d be too much work because they aren’t imagining practicing these skills until they’re largely automated. If you do all the work with conscious effort, that would indeed be too much work, as it would be for most things.

You automatically use many words for their correct meanings, like “cat” or “table”. What you automatically and reliably get right, with ease, can be expanded with study and practice. What you find intuitive or second-nature can be expanded. Reliably getting it right without significant conscious effort is called mastery.

But you can basically only expand your mastery to “small” issues. You can’t just take some big, hard, complex thing with 50 parts and master it as a single, whole unit. You have to break it into those 50 parts and master them individually. You can only realistically work on a few at a time.

So if you don’t want to work on small things, you’ll be stuck. And most people are pretty stuck on most topics, so that makes sense. The theory fits with my observations.

Also, in general, you can’t know how “small” an error is until after you fix it. Sometimes what appears to be a “small” error turns out very important and requires large changes to fix. And sometimes what appears to be a “big” error can be fixed with one small change. After you understand the error and its solution, you can judge its size. But when there are still significant unknowns, you’re just guessing. So if you refuse to try to fix “small” errors, you will inevitably guess that some “big” errors are small and then refuse to try to fix them.

Factory Farm Fraud

Similarly, animal welfare activists generally don’t believe that policing fraud is a good approach to factory farms. Fraud is too “small” of an issue which doesn’t directly do what they want, just like how avoiding misquotes is too “small” of an issue which doesn’t directly make conversations productive.

Activists tend to want to help the animals directly. They want better living conditions for animals. They broadly aren’t concerned with companies putting untrue statements on their websites which mislead the public. Big lies like “our chickens spend their whole lives in pasture” when they’re actually kept locked in indoor cages would draw attention. Meat companies generally don’t lie that egregiously, but they do make many untrue and misleading statements which contribute to the public having the wrong idea about what farms are like.

Fraud is uncontroversially illegal. But many people wouldn’t really care if a company used a misquote in an ad. That would be “small” fraud. Basically, I think companies should communicate with the public using similar minimal standards to what rational philosophy discussions should use. They don’t have to be super smart or wise, but they should at least get basics right. By basics I mean things where there’s no real controversy about what the correct answer is. They should quote accurately, cite accurately, get math right, get facts right, avoid statements that are both ambiguous and misleading, and get all the other “small” issues right. Not all of these are fraud issues. If a person in a discussion makes an ad hominem attack instead of an argument, that’s bad. If a company does it on their website, that’s bad too, but it’s not fraud, it’s just dumb. But many types of “small” errors, like wrong quotes or facts in marketing materials, can be fraud.

What Is Fraud?

Legally, fraud involves communicating something false or misleading about something where there is an objective, right answer (not something which is a matter of opinion). Fraud has to be knowing or reckless, not an innocent accident. If they lied on purpose, or they chose not to take reasonable steps to find out if what they said is true or false, then it can be fraud. Fraud also requires harm – e.g. consumers who made purchasing decisions partly based on fraudulent information. And all of this has to be judged according to current, standard ideas in our society, not by using any advanced but unpopular philosophy analysis.

Does “Small” Fraud Matter?

There’s widespread agreement that it’s important to police a “big” fraud like FTX, Enron, Theranos, Bernie Madoff’s ponzi scheme, or Wells Fargo creating millions of accounts that people didn’t sign up for.

Do large corporations commit “small” frauds that qualify according to the legal meaning of fraud that I explained? I believe they do that routinely. It isn’t policed well. It could be.

If smaller frauds were policed well, would that help much? I think so. I think the effectiveness would be similar to the effectiveness of policing “small” errors in intellectual discussions. I think even many people who think it’d be ineffective can agree with me that it’d be similarly effective in the two different cases. There’s a parallel there.

Disallowing fraud is one of the basics of law and order, after disallowing violence. It’s important to classical liberalism and capitalism. It’s widely accepted by other schools of thought too, e.g. socialists and Keynesians also oppose fraud. But people from all those schools of thought tend not to care about “small” fraud like I do.

Fraud is closely related to breach of contract and theft. Suppose I read your marketing materials, reasonably conclude that your mattress doesn’t contain fiberglass, and buy it. The implied contract is that I trade $1000 for a mattress with various characteristics like being new, clean, a specific size, fiberglass-free and shipped to my home. If the mattress provided doesn’t satisfy the (implied) contract terms, then the company has not fulfilled their side of the contract. They are guilty of breach of contract. They therefore, in short, have no right to receive money from me as specified in the contract that they didn’t follow. If they keep my money anyway then, from a theoretical perspective, that’s theft because they have my property and won’t give it back. I could sue (and that’s at least somewhat realistic). Many people would see the connection to breach of contract and theft if the company purposefully shipped an empty box with no mattress in it, but fewer people seem to see it if they send a mattress which doesn’t match what was advertised in a “smaller” way.

“Small” Violence

Disallowing “small” instances of violence is much more popular than disallowing “small” frauds, but not everyone cares about it. Some people think pushing someone at a bar, or even getting in a fist fight, is acceptable behavior. I think it’s absolutely unacceptable to get into someone’s personal space in an intimidating manner, so that they reasonably fear that you might touch them in any way without consent. Worse is actually doing any intentional, aggressive, non-consensual touch. I wish this were enforced better. I think many people do have anti-violence opinions similar to mine. There are people who find even “small” violence horrifying and don’t want it to happen to anyone. That viewpoint exists for violence in a way that I don’t think it does for fraud or misquotes.

Note that I was discussing enforcement against “small” violence toward strangers. Unfortunately, people’s attitudes tend to be worse when it’s done to their wife, their girlfriend, their own child, etc. Police usually treat “domestic” violence differently than other violence and do less to stop it. However, again, lots of people besides me do want better protection for victims.

Maybe after “small” violence is more thoroughly rejected by almost everyone, society will start taking “small” fraud and “small” breach of contract more seriously.

Why To Improve “Small” Problems

Smaller problems tend to provide opportunities for improvement that are easier to get right, easier to implement, simpler, and less controversial about what the right answer is.

Basically, fix the easy stuff first and then you’ll get into a better situation and can reevaluate what problems are left. Fixing a bunch of small problems will usually help with some but not all of the bigger or harder problems.

Also, the best way to solve hard problems is often to break them down into small parts. So then you end up solving a bunch of small problems. This is just like learning a big, hard subject by breaking it down into many parts.

People often resist this, but not because they disagree that the small problem is bad or that your fix will work. There are a few reasons they resist it:

  • They are in a big hurry to work directly on the big problem that they care about
  • They are skeptical that the small fixes will add up to much or make much difference or be important enough
  • They think the small fixes will take too much work

Why would small fixes take a lot of work? Because people don’t respect them, sabotage them, complain about them, etc., instead of doing them. People make it harder than it has to be, then say it’s too hard.

Small fixes also seem like too much work if problem solving is broken to the point that fixing anything is nearly impossible. People in that situation often don’t even want to try to solve a problem unless the rewards are really big (or the issue is so small that they don’t recognize what they’re doing as problem solving; people who don’t want to solve “small” problems often do solve hundreds of tiny problems every day).

If you’re really stuck on problem solving and can barely solve anything, working on smaller problems can help you get unstuck. If you try to work on big problems, it’s more overwhelming and gives you more hard stuff to deal with at once. The big problem is hard and getting unstuck is hard, so that’s at least two things. It’d be better to get unstuck with a small, easy problem that is unimportant (so the stakes are low and you don’t feel much pressure), so the only hard part is whatever you’re stuck on, and everything else provides minimal distraction. Though I think many of these people want to be distracted from the (irrational) reasons they’re stuck and failing to solve problems, rather than wanting to face and try to solve what’s going on there.

Small fixes also seem too hard if you imagine doing many small things using conscious effort and attention. To get lots of small things right, you must practice, automatize, and use your subconscious. If you aren’t doing that, you aren’t going to be very effective at small or big things. Most of your brainpower is in your subconscious.


See also my articles Ignoring “Small” Errors and “Small” Fraud by Tyson and Food Safety Net Services.



“Small” Fraud by Tyson and Food Safety Net Services

This is a followup for my article “Small” Errors, Frauds and Violences. It discusses a specific example of “small” fraud.


Tyson is a large meat processing company that gets meat from factory farms. Tyson’s website advertises that their meat passes objective inspections and audits (mirror) from unbiased third parties.

Tyson makes these claims because these issues matter to consumers and affect purchasing. For example, a 2015 survey found that “56 percent of US consumers stop buying from companies they believe are unethical” and 35% would stop buying even if there is no substitute available. So if Tyson is lying to seem more ethical, there is actual harm to consumers who bought products they wouldn’t have bought without being lied to, so it’d qualify legally as fraud.

So if Tyson says (mirror) “The [third party] audits give us rigorous feedback to help fine tune our food safety practices.”, that better be true. They better actually have internal documents containing text which a reasonable person could interpret as “rigorous feedback”. And if Tyson puts up a website section about animal welfare on their whole website about sustainability, their claims better be true.

I don’t think this stuff is false in a “big” way. E.g., they say they audited 50 facilities in 2021 just for their “Social Compliance Auditing program”. Did they actually audit 0 facilities? Are they just lying and making stuff up? I really doubt it.

But is it “small” fraud? Is it actually true that the audits give them rigorous feedback? Are consumers being misled?

I am suspicious because they get third party audits from Food Safety Net Services, an allegedly independent company that posts partisan meat propaganda (mirror) on their own public website.

How rigorous or independent are the audits from a company that markets (mirror) “Establishing Credibility” as a service they provide while talking about how you need a “non-biased, third-party testing facility” (themselves) and saying they’ll help you gain the “trust” of consumers? They obviously aren’t actually non-biased since they somehow think posting partisan meat propaganda on their website is fine while trying to claim non-bias.

Food Safety Net Services doesn’t even have a Wikipedia page or other basic public information readily available, but they do say (mirror) that their auditing:

started as a subset of FSNS Laboratories in 1998. The primary focus of the auditing group was product and customer-specific audits for laboratory customers. With a large customer base in the meat industry, our auditing business started by offering services specific to meat production and processing. … While still heavily involved in the meat industry, our focus in 2008 broadened to include all food manufacturing sites.

The auditing started with a pre-existing customer base in the meat industry, and a decade later expanded to cover other types of food. It’s “independent” in the same way Uber drivers are independent contractors or many Amazon delivery drivers work for independent companies. This is the meat industry auditing itself, displaying their partisan biases in public, and then claiming they have non-biased, independent auditing. How can you do a non-biased audit when you have no other income and must please your meat customers? How can you do a non-biased meat audit when you literally post meat-related propaganda articles on your website?

How can you do independent, non-biased audits when your meat auditing team is run by meat industry veterans? Isn’t it suspicious that your “Senior Vice President of Audit Services” “spent 20 years in meat processing facilities, a majority of the time in operational management. Operational experience included steak cutting, marinating, fully cooked meat products, par fry meat and vegetables, batter and breaded meat and vegetables, beef slaughter and fabrication, ground beef, and beef trimmings.” (source)? Why exactly is she qualified to be in charge of non-biased audits? Did she undergo anti-bias training? What has she done to become unbiased about meat after her time in the industry? None of her listed credentials actually say anything about her ability to be unbiased about meat auditing. Instead of trying to establish her objectivity in any way, they brag about someone with “a strong background in the meat industry” performing over 300 audits.

Their Impartiality Statement is one paragraph long and says “Team members … have agreed to operate in an ethical manner with no conflict or perceived conflict of interest.”, and employees have to sign an ethics document promising to disclose conflicts of interest. That’s it. Their strategy for providing non-biased audits is to make low-level employees promise, in writing, to be non-biased, so that if anything goes wrong management can put all the blame on the workers and claim the workers defrauded them by falsely signing the contracts they were required to sign to be hired.

Is this a ridiculous joke, lawbreaking, or a “small” fraud that doesn’t really matter, or a “small” fraud that actually does matter? Would ending practices like this make the industry better and lead to more sanitary conditions for farm animals, or would it be irrelevant?

I think ending fraud would indirectly result in better conditions for animals and reduce their suffering (on the premise that animals can suffer). Companies would have to make changes, like using more effective audits, so that their policies are followed more. And they’d have to change their practices to better match what the public thinks is OK.

This stuff isn’t very hard to find, but in a world where even some anti-factory-farm activists don’t care (and actually express high confidence about the legal innocence of the factory farm companies), it’s hard to fix.

Though some activists actually have done some better and more useful work. For example, The Humane League has a 2021 report about slaughterhouses not following the law. Despite bias, current auditing practices already show many violations. That’s not primarily about fraud, but it implies fraud because the companies tell the public that their meat was produced in compliance with the law.



Capitalism or Charity

In an ideal capitalist society, a pretty straightforward hypothesis for how to do the most good is: make the most money you can. (If this doesn’t make sense to you, you aren’t familiar with the basic pro-capitalist claims. If you’re interested, Time Will Run Back is a good book to start with. It’s a novel about the leaders of a socialist dystopia trying to solve their problems and thereby reinventing capitalism.)

Instead of earn to give, the advice could just be earn.

What’s best to do with extra money? The first hypothesis to consider, from a capitalist perspective, is invest it. Helping with capital accumulation will do good.

I don’t think Effective Altruism (EA) has any analysis or refutation of these hypotheses. I’ve seen nothing indicating they understand the basic claims and reasoning of the capitalist viewpoint. They seem to just ignore thinkers like Ludwig von Mises.

We (in USA and many other places) do not live in an ideal capitalist society, but we live in a society with significant capitalist elements. So the actions we’d take in a fully capitalist society should be considered as possibilities that may work well in our society, or which might work well with some modifications.

One cause that might do a lot of good is making society more capitalist. This merits analysis and consideration which I don’t think EA has done.

What are some of the objections to making money as a way to do good?

  • Disagreement about how economics works.
  • Loopholes – a society not being fully capitalist means it isn’t doing a full job of making sure satisfying consumers is the only way to make much money. E.g. it may be possible to get rich by fraud or by forcible suppression of competition (with your own force or the help of government force).
  • This only focuses on good that people are willing to pay for. People might not pay to benefit cats, and cats don’t have money to pay for their own benefit.
  • The general public could be shortsighted, have bad taste, etc. So giving them what they want most might not do the most good. (Some alternatives, like having a society ruled by philosopher kings, are probably worse.)

What are some advantages of the making money approach? Figuring out what will do good is really hard, but market economies provide prices that give guidance about how much people value goods or services. Higher prices indicate something does more good. Higher profits indicate something is more cost effective. (Profits are the selling price minus the costs of creating the product or providing the service. To be efficient, we need to consider expenses not just revenue.)
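To make that concrete, here’s a tiny sketch with made-up numbers (the ventures and figures are hypothetical, not taken from any source): revenue alone doesn’t show cost effectiveness, but profit, which subtracts costs, does.

```python
# Hypothetical numbers for illustration only.
ventures = {
    "venture_a": {"revenue": 1_000_000, "costs": 950_000},
    "venture_b": {"revenue": 400_000, "costs": 100_000},
}

for name, v in ventures.items():
    profit = v["revenue"] - v["costs"]        # selling price minus costs
    return_on_costs = profit / v["costs"]     # value created per dollar of resources used
    print(f"{name}: revenue={v['revenue']:,}, profit={profit:,}, "
          f"return on costs={return_on_costs:.0%}")

# venture_a has higher revenue, but venture_b has higher profit and a far
# better return on the resources it consumes, so it's the more cost
# effective use of society's resources.
```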

Measuring Value

Lots of charities don’t know how to measure how much good they’re doing. EA tries to help with that problem. EA does analysis of how effective different charities are. But EA’s methods, like those of socialist central planners, aren’t very good. The market mechanism is much better at pricing things than EA is at assigning effectiveness scores to charities.

One of the main issues, which makes EA’s analysis job hard, is that different charities do qualitatively different things. EA has to compare unlike things. EA has to combine factors from different dimensions. E.g. EA tries to determine whether a childhood vaccines charity does more or less good than an AI Alignment charity.

If EA did a good job with their analysis, they could make a reasonable comparison of one childhood vaccine charity with another. But comparing different types of charities is like comparing apples to oranges. This is fundamentally problematic. One of the most impressive things about the market price system is that it takes products which are totally different (e.g. food, clothes, tools, luxuries, TVs, furniture, cars) and puts them all on a common scale (dollars or, more generally, money). The free market is able to validly get comparable numbers for qualitatively different things. That’s an extremely hard problem in general, for complex scenarios, so basically neither EA nor central planners can do it well. (That partly isn’t their fault. It doesn’t mean they aren’t clever enough. I would fail at it too. The only way to win is to stop trying to do that and find a different approach. The fault is in using that approach, not in failing to get good answers with the approach. More thoughtful or diligent analysis won’t fix this.)
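Here’s a hypothetical sketch of why this is hard (the charities, scores, and weights below are invented for illustration and aren’t EA’s actual numbers or method): once you combine factors from different dimensions with a weighted sum, the ranking depends on your choice of weights, and nothing in the data tells you which weights are right.

```python
# Invented scores on two incommensurable dimensions (illustration only).
charities = {
    "vaccine_charity": {"lives_improved": 9, "long_term_risk_reduction": 2},
    "ai_alignment_charity": {"lives_improved": 1, "long_term_risk_reduction": 8},
}

def weighted_score(scores, weights):
    # A weighted sum treats different dimensions as if they shared a common unit.
    return sum(weights[dim] * value for dim, value in scores.items())

weightings = {
    "weighting_1": {"lives_improved": 0.8, "long_term_risk_reduction": 0.2},
    "weighting_2": {"lives_improved": 0.2, "long_term_risk_reduction": 0.8},
}

for name, weights in weightings.items():
    ranked = sorted(charities, key=lambda c: weighted_score(charities[c], weights),
                    reverse=True)
    print(f"{name}: best={ranked[0]}")

# The top-ranked charity flips between the two weightings. The data doesn't
# say which weighting is correct; market prices sidestep this by letting
# actual trades put unlike goods on one scale (money).
```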

See Multi-Factor Decision Making Math for more information about the problems with comparing unlike things.

