
Animal Welfare Overview

Is animal welfare a key issue that we should work on? If so, what are productive things to do about it?

This article is a fairly high level overview of some issues, which doesn’t attempt to explain e.g. the details of Popperian epistemology.

Human Suffering

Humans suffer and die, today, a lot. Look at what’s going on in Iran, Ukraine, Yemen, North Korea, Venezuela and elsewhere. This massive human suffering should, in general, be our priority before worrying about animals much.

People lived in terrible conditions, and died, building stadiums for the World Cup in Qatar. Here’s a John Oliver video about it. They were lied to, exploited, defrauded, and basically (temporarily) enslaved etc. People sometimes die in football (soccer) riots too. I saw a headline recently that a second journalist died in Qatar for the World Cup. FIFA is a corrupt organization that likes dictators. Many people regard human death as an acceptable price for sports entertainment, and many more don’t care to know the price.

There are garment workers in Los Angeles (USA) working in terrible conditions for illegally low wages. There are problems in other countries too. Rayon manufacturing apparently poisons nearby children enough to damage their intelligence, because workers wash off toxic chemicals in local rivers. (I just read that one article; I haven’t really researched this, but it seems plausible and I think many industries do a lot of bad things. There are so many huge problems in human civilization that even reading one article per issue would take a significant amount of time and effort. I don’t have time to do in-depth research on most of the issues. Similarly, I have not done in-depth research on the Qatar World Cup issues.)

India has major problems with orphans. Chinese people live under a tyrannical government. Human trafficking continues today. Drug cartels exist. Millions of people live in prisons. Russia uses forced conscription for its war of aggression in Ukraine.

These large-scale, widespread problems causing human suffering seem more important than animal suffering. Even if you hate how factory farms treat animals, you should probably care more that a lot of humans live in terrible conditions including lacking their freedom in major ways.

Intelligence

Humans have general, universal intelligence. They can do philosophy and science.

Animals don’t. All the knowledge involved in animal behavior comes from genetic evolution. They’re like robots created by their genes and controlled by software written by their genes.

Humans can do evolution of ideas in their minds to create new, non-genetic knowledge. Animals can’t.

Evolution is the only known way of creating knowledge. It involves replication with variation and selection.

Whenever there is an appearance of design (e.g. a wing or a hunting behavior), knowledge is present.

People have been interested in the sources of knowledge for a long time, but it’s a hard problem and there have been few proposals. Proposals include evolution, intelligent design, creationism, induction, deduction and abduction.

If non-evolutionary approaches to knowledge creation actually worked, it would still seem that humans can do them and animals can’t – because there are human scientists and philosophers but no animal scientists or philosophers.

Human learning involves guessing or brainstorming (replication with variation) plus criticism and rejecting refuted ideas (selection). Learning by evolution means learning by error correction, which we do by creating many candidate ideas (like a gene pool) and rejecting ideas that don’t work well (like animals with bad mutations being less likely to have offspring).

Also, since people very commonly get this wrong: Popperian epistemology says we literally learn by evolution. It is not a metaphor or analogy. Evolution literally applies to both genes and memes. It’s the same process (replication with variation and selection). Evolution could also work with other types of replicators. For general knowledge creation, the replicator has to be reasonably complex, interesting, flexible or something (the exact requirements aren’t known).
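
To make that concrete, here’s a toy sketch in Python (my own illustration with made-up numbers, not anyone’s real system). The “ideas” are just numbers guessing the square root of 2; variation mutates them and criticism rejects the worst:

```python
import random

# Toy illustration of knowledge creation by evolution: replication with
# variation and selection. The "ideas" are numbers guessing sqrt(2).

def criticism(idea):
    # Lower error means the idea survives criticism better.
    return abs(idea * idea - 2.0)

population = [random.uniform(0.0, 2.0) for _ in range(20)]  # initial guesses

for generation in range(100):
    # Replication with variation: each idea spawns three mutated copies.
    variants = [idea + random.gauss(0.0, 0.1)
                for idea in population for _ in range(3)]
    # Selection: keep the 20 candidates that best withstand criticism.
    population = sorted(population + variants, key=criticism)[:20]

print(population[0])  # converges toward 1.414...
```

The loop starts with no knowledge of the answer and ends with a good approximation: new knowledge created by variation and selection, not by anything resembling induction.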

Types of Algorithms

All living creatures with brains have Turing-complete computers for brains. A squirrel is a reasonable example animal. Let’s not worry about bacteria or worms. (Earthworms apparently have some sort of brain with only around 300 neurons. I haven’t researched it.)

Humans have more neurons, but the key difference between humans and squirrels is the software our brains run.

We can look at software algorithms in three big categories.

  1. Fixed, innate algorithm
  2. “Learning” algorithms which read and write data in long-term memory
  3. Knowledge-creation algorithm (evolution, AGI)

Fixed algorithms are inborn. The knowledge comes from genes. They’re complete and functional with no practice or experience.

If you keep a squirrel in a lab and never let it interact with dirt, and it still does behaviors that seem designed for burying nuts in dirt, that indicates a fixed, innate algorithm. These algorithms can lead to nonsensical behavior when taken out of context.

There are butterflies which do multi-generation migrations. How do they know where to go? It’s in their genes.

Why do animals “play”? To “learn” hunting, fighting, movement, etc. During play, they try out different motions and record data about the results. Later, their behavioral algorithms read that data. Their behavior depends partly on what that data says, not just on inborn, genetic information.

Many animals record data for navigation purposes. They look around, then can find their way back to the same spot (long-term memory). They can also look around, then avoid walking into obstacles (short-term memory).

Chess-playing software can use fixed, innate algorithms. A programmer can specify rules which the software follows.

Chess-playing software can also involve “learning”. Some software plays many practice games against itself, records a bunch of data, and uses that data in order to make better moves in the future. The chess-playing algorithm takes into account data that was created after birth (after the programmer was done).

I put “learning” in scare quotes because the term often refers to knowledge creation (evolution), which is different from an algorithm that writes data to long-term storage and then uses it later. When humans learn at school, it’s not the same thing as e.g. a “reinforcement learning” AI algorithm or what animals do.

People often confuse algorithms involving long-term memory, which use information not available at birth, with knowledge creation. They call both “learning” and “intelligent”.

They can be distinguished in several ways. Is there replication with variation and selection, or not? If you think there’s evolution, can it create a variety of types of knowledge, or is it limited to one tiny niche? If you believe a different epistemology, you might look for the presence of inductive thinking (but Popper and others have refuted induction). There are other tests and methods that can be used to identify new knowledge as opposed to the downstream consequences of existing knowledge created by genetic evolution, by a programmer, or by some other sort of designer.

Knowledge

What is knowledge? It’s information which is adapted to a purpose. When you see the appearance of design, knowledge is present. Understanding the source of that knowledge is often important. Knowledge is one of the more important and powerful things in the universe.

Binary Intelligence or Degrees?

The word “intelligence” is commonly used with two different meanings.

One is a binary distinction. I’m intelligent but a rock or tree isn’t.

The other meaning is a difference in degree or amount of intelligence: Alice is smarter than Joe but dumber than Feynman.

Degrees of intelligence can refer to a variety of different things that we might call logical skill, wisdom, cleverness, math ability, knowledge, being well spoken, scoring well on tests (especially IQ tests, but others too), getting high grades, having a large vocabulary, being good at reading, being good at scientific research or being creative.

There are many different ways to use your intelligence. Some are more effective than others. Using your intelligence effectively is often called being highly intelligent.

Speaking very roughly, many people believe a chimpanzee or dog is kind of like a 50 IQ person – intelligent, but much less intelligent than almost all humans. They think a squirrel passes the binary intelligence distinction to be like a human not a rock, but just has less intelligence. However, they usually don’t think a self-driving car, chat bot, chess software or video game enemy is intelligent at all – that’s just an algorithm which has a lot of advantages compared to a rock but isn’t intelligent. Some other people do think that present-day “AI” software is intelligent, just with a low degree of intelligence.

My position is that squirrels are like self-driving cars: they aren’t intelligent but the software algorithm can do things that a rock can’t. A well designed software algorithm can mimic intelligence without actually having it.

The reason algorithms are cleverer than rocks is that they have knowledge in them. Creating knowledge is the key thing intelligence does that makes it seem intelligent. An algorithm uses built-in knowledge, while intelligences can create their own knowledge.

Basically, anything with knowledge seems either intelligent or intelligently-designed to us (speaking loosely and counting evolution as an intelligent designer). People tend to assume animals are intelligent rather than intelligently-designed because they don’t understand evolution or computation very well, and because the animals seem to act autonomously, and because of the similarities between humans and many animals.

Where does knowledge come from? Evolution. To get knowledge, algorithms need to either evolve or have an intelligent designer. An intelligent designer, such as a human software developer, creates the knowledge by evolving ideas about the algorithm within his brain. So the knowledge always comes from evolution. Evolution is the only known, unrefuted answer to how new knowledge can be created.

(General intelligence may be an “algorithm” in the same kind of sense that e.g. “it’s all just math”. If you want to call it an algorithm, then whenever I write “algorithm” you can read it as e.g. “algorithm other than general intelligence”.)

Universality

There are philosophical reasons to believe that humans are universal knowledge creators – meaning they can create any knowledge that any knowledge creator can create. The Popperian David Deutsch has written about this.

This parallels how the computer I’m typing on can compute anything that any computer can compute. It’s Turing-complete, a.k.a. universal. (Except quantum computers have extra abilities, so actually my computer is a universal classical computer.)
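
To make the computing side concrete, here’s a minimal sketch (my own toy example) of what universality means in practice: an ordinary programmable computer can simulate any Turing machine you hand it a rule table for.

```python
# A toy universal simulator: given any Turing machine's rule table, an
# ordinary computer can run it. This example machine (my invention)
# flips every bit on the tape and halts at the first blank.

def run_turing_machine(rules, tape, state="start"):
    """rules: (state, symbol) -> (new_symbol, move, new_state)."""
    tape = dict(enumerate(tape))
    head = 0
    while state != "halt":
        symbol = tape.get(head, "_")  # "_" is the blank symbol
        new_symbol, move, state = rules[(state, symbol)]
        tape[head] = new_symbol
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape))

flip_bits = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run_turing_machine(flip_bits, "1011"))  # prints 0100_
```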

This implies a fundamental similarity between everything intelligent (they all have the same repertoire of things they can learn). There is no big, bizarre, interesting mind design space like many AGI researchers believe. Instead, there are universally intelligent minds and not much else of note, just like there are universal computers and little else of interest. If you believe in mind design space like Eliezer Yudkowsky does, it’s easy to imagine animals are in it somewhere. But if the only options for intelligence are basically universality or nothing, then animals have to be like humans or else unintelligent – there’s nowhere else in mind design space for them to be. If the only two options are basically that animals are intelligent in the same way as humans (universal intelligence) or aren’t intelligent, then most people will agree that animals aren’t intelligent.

This also has a lot of relevance to concerns about super-powerful, super-intelligent AGIs turning us all into paperclips. There’s actually nothing in mind design space that’s better than human intelligence, because human intelligence is already universal. Just like how there’s nothing in classical computer design space that’s better than a universal computer or Turing machine.

A “general intelligence” is a universal intelligence. A non-general “intelligence” is basically not an intelligence, like a non-universal or non-Turing-complete “computer” basically isn’t a computer.

Pain

Squirrels have nerves, “pain” receptors, and behavioral changes when “feeling pain”.

Robots can have sensors which identify damage and software which outputs different behaviors when the robot is damaged.

Information about damage travels to a squirrel’s brain, where some behavior algorithms use it as input. It affects behavior. But that doesn’t mean the squirrel “feels pain” any more than the robot does.

Similarly, information travels from a squirrel’s eyes to its brain where behavioral algorithms take it into account. A squirrel moves around differently depending on what it sees.

Unconscious robots can do that too. Self-driving car prototypes today use cameras to send visual information to a computer which makes the car behave differently based on what the camera sees.

Having sensors which transmit information to the brain (CPU), where it is used by behavior-control software algorithms, doesn’t differentiate animals from present-day robots.
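
For example, here’s a minimal sketch of a hypothetical robot’s behavior-control code (my own illustration, not any real robot’s software). Damage information is just another input that changes the output behavior:

```python
# Hypothetical robot control logic: a damage signal is just another
# sensor input to a behavior-selection algorithm. Nothing here
# interprets, prefers, or suffers.

def choose_behavior(damage_level, obstacle_ahead):
    if damage_level > 0.7:
        return "retreat to charging station"
    if obstacle_ahead:
        return "steer around obstacle"
    return "continue route"

print(choose_behavior(damage_level=0.9, obstacle_ahead=False))
```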

Suffering

Humans interpret information. We can form opinions about what is good or bad. We have preferences, values, likes and dislikes.

Sometimes humans like pain. Pain does not automatically equate to suffering. Whether we suffer due to pain, or due to anything else, depends on our interpretation, values, preferences, etc.

Sometimes humans dislike information that isn’t pain. Although many people like it, the taste of pizza can result in suffering for someone.

Pain and suffering are significantly different concepts.

Pain is merely a type of information sent from sensors to the CPU. This is true for humans and animals both. And it’d be true for robots too if anyone called their self-damage related sensors “pain” sensors.

It’s suffering that is important and bad, not pain. Actually, being born without the ability to feel pain is dangerous. Pain provides useful information. Being able to feel pain is a feature, not a bug, glitch or handicap.

If you could disable your ability to feel pain temporarily, that’d be nice sometimes if used wisely, but permanently disabling it would be a bad idea. Similarly, being able to temporarily disable your senses (smell, touch, taste, sight or hearing) is useful, but permanently disabling them is a bad idea. We invent things like ear and nose plugs to temporarily disable senses, and we have built-in eyelids for temporarily disabling our sight (and, probably more importantly, for eye protection).

Suffering involves wanting something and getting something else. Reality violates what you want. E.g. you feel pain that you don’t want to feel. Or you taste a food that you don’t want to taste. Or your spouse dies when you don’t want them to. (People, occasionally, do want their spouse to die – as always, interpretation determines whether one suffers or not).

Karl Popper emphasized that all observation is theory-laden, meaning that all our scientific evidence has to be interpreted and if we get the interpretation wrong then our scientific conclusions will be wrong. Science doesn’t operate on raw data.

Suffering involves something happening and you interpreting it negatively. That’s another way to look at wanting something (that you would interpret positively or neutrally) but getting something else (that you interpret negatively).

Animals can’t interpret like this. They can’t create opinions of what is good and bad. This kind of thinking involves knowledge creation.

Animals do not form preferences. They don’t do abstract thinking to decide what to value, compare different potential values, and decide what they like. It’s just like how self-driving cars have no interpretation of crashing and don’t feel bad when they crash. They don’t want to avoid crashing. Their programmers want them to avoid crashing. Evolution doesn’t want things like people do, but it does design animals to (mostly) minimize dying. That involves various more specific designs, like behavior algorithms designed to prevent an animal from starving to death. (Those algorithms are pretty effective but not perfect.)

Genetic evolution is the programmer and designer for animals. Does genetic evolution have values or preferences? No. It has no mind.

Genetic evolution also created humans. What’s different is that it gave humans the ability to do their own evolution of ideas, thus creating evolved knowledge that wasn’t in their genes, including knowledge about interpretations, preferences, opinions and values.

Animal Appearances

People often assume animals have certain mental states due to superficial appearance. They see facial expressions on animals and think those animals have corresponding emotions, like a human would. They see animals “play” and think it’s the same thing as human play. They see an animal “whimper in pain” and think it’s the same as a human doing that.

People often think their cats or dogs have complex personalities, like an adult human. They also commonly think that about their infants. And they also sometimes think that about chatbots. Many people are fooled pretty easily.

It’s really easy to project your experiences and values onto other entities. But there’s no evidence that animals do anything other than follow their genetic code, which includes sometimes doing genetically-programmed information-gathering behaviors, then writing that information into long-term memory, then using that information in behavior algorithms later in exactly the way the genes say to. (People also get confused by indirection. Genes don’t directly tell animals what to do like slave-drivers. They’re more like blueprints for the physical structure and built-in software of animals.)

Uncertainty

Should we treat animals partially or entirely like humans just in case they can suffer?

Let’s first consider a related question. Should we treat trees and 3-week-old human embryos partially or entirely like humans just in case they can suffer? I say no. If you agree with me, perhaps that will help answer the question about animals.

In short, we have to live by our best understanding of reality. You’re welcome to be unsure, but I have studied stuff, debated and reached conclusions. I have conclusions both about my personal debates and about the state of the debate in the expert literature.

Also, we’ve been eating animals for thousands of years. It’s an old part of human life, not a risky new invention. Similarly, the mainstream view of human intellectuals, for thousands of years, has been to view animals as irrational or incapable of reason, and as very different from humans. (You can reason with other humans and form e.g. peace treaties or social contracts. You can resolve conflicts with persuasion. You can’t do that with animals.)

But factory farms are not a traditional part of human life. If you just hate factory farms but don’t mind people eating wild animals or raising animals on non-factory farms, then … I don’t care that much. I don’t like factory farms either because I think they harm human health (but so do a lot of other things, including vegetable oil and bad political ideas, so I don’t view factory farms as an especially high priority – the world has a ton of huge problems). I’m a philosopher who mostly cares about the in-principle issue of whether or not animals suffer, which is intellectually interesting and related to epistemology. It’s also relevant to issues like whether or not we should urgently try to push everyone to be vegan, which I think would be a harmful mistake.

Activism

Briefly, most activism related to animal welfare is tribalist, politicized fighting over local optima. It’s inadequately intellectual, inadequately interested in research and debate about the nature of animals or intelligence, and lacks big picture planning about the current world situation and what plan would be very effective and high leverage for improving things. There’s inadequate interest in persuading other humans and reaching agreement and harmony, rather than trying to impose one’s values (like treating animals in particular ways) on others.

Before trying to make big changes, you need e.g. a cause-and-effect diagram about how society works and what all the relevant issues are. And you need to understand the global and local optima well. See Eli Goldratt for more information on project planning.

Also, as is common with causes, activists tend to be biased about their issue. Many people who care about the (alleged) suffering of animals do not care much about the suffering of human children, and vice versa. And many advocates for animals or children don’t care much about the problems facing elderly people in old folks homes, and vice versa. It’s bad to have biased pressure groups competing for attention. That situation makes the world worse. We need truth seeking and reasonable organization, not competitions for attention and popularity. A propaganda and popularity contest isn’t a rational, truth seeking way to organize human effort to make things better.


Elliot Temple on December 14, 2022
