
Discussing Animal Intelligence

This post replies to pdxthehunted from Reddit (everything he said there is included in quotes below). There is also previous discussion before this exchange; see here. This post will somewhat stand on its own without reading the context, but not 100%. Topics include whether animals can suffer, the nature of intelligence, and the flaws of academia.

[While writing this response, the original post was removed. I think that’s unfortunate, but what’s done is done. I’d still love a quick response—just to see if I understand you correctly.]

Hi, Elliot. Thanks for your response. I want to say off the bat that I don’t think I’m equipped to debate the issue at hand with you past this point. (Mostly based off your sibling post; I’m not claiming you’re wrong, but just that I think I—finally—realize that I don’t understand where you’re coming from, entirely (or possibly at all).) I’m willing to concede that—if you’re right about everything—you probably do need to have this conversation with programmers or physicists. If the general intelligence on display in the article I cited is categorically different from what you’re talking about when you talk about G.I. then I’m out of my depth.

Yes, what that article is studying is different and I don't think it should be called "general intelligence". General means general purpose, but the kind of "intelligence" in the article can't build a spaceship or write a philosophy treatise, so it's limited to only some cases. They are vague about this matter. They suggest they are studying general intelligence because their five learning tasks are "diverse". Being able to do 5 different learning tasks is a great sign if they are diverse enough, but I don't think they're diverse with respect to the set of all possible learning tasks; I think they're actually all pretty similar.

This is all more complicated because they think intelligence comes in degrees, so they maybe believe a mouse has the right type of intelligence to build a spaceship, just not enough of it. But their research is not about whether that premise (intelligence comes in degrees) is true, nor do they write philosophical arguments about it.

That being said, I’d love to continue the conversation for a little while, if you’re up for it, either here or possibly on your blog if that works better for you. I have some questions and would like to try and understand your perspective.

If I'm right about ~everything, that includes my views of the broad irrationality of academia and the negative value of current published research in many of the fields in question.

For example, David Deutsch's static meme idea, available in BoI, was rejected for academic publication ~20 years earlier. Academia gatekeeps to keep out ideas they don't want to hear, and they don't really debate what's true much in journals. It's like a highly moderated forum with biased moderators following unwritten and inconsistent rules (like reddit but stricter!).

My arguments re animals are largely Deutsch's. He taught me his worldview. The reason he doesn't write it up and publish it in a journal is because (he believes that) it either wouldn't be published or wouldn't be listened to (and it would alienate people who will listen to his physics papers). The same goes for many other important ideas he has. Being in the Royal Society, etc., is inadequate to effectively get past the academic gatekeeping (to get both published and seriously, productively engaged with). I don't think a PhD and 20 published papers would help either (especially not with issues involving many fields at once).

For what it’s worth, I think this is a fair criticism and concern, especially for someone—like you—who is trying to distill specific truths out of many fields at once. If your (and Deutsch’s) worldview conflicts with the prevailing academic worldview, I concede that publishing might be difficult or impossible and not the best use of your energy.

I asked for a solution but I'm happy with that response. I find it a very hard problem.

Sadly, Deutsch has given up on the problem to the point that he's focusing on physics (Constructor Theory) not philosophy now. Physics is one of the best academic fields to interact with, and one of the most productive and rational, while philosophy is one of the worst. Deutsch used to e.g. write about the implications of Critical Rationalism for parenting and education. The applications are pretty direct from philosophy of knowledge to how people learn, but the conclusions are extremely offensive to ~everyone because, basically, ~all parents and teachers are doing a bad job and destroying children's minds (which is one of the main underlying reasons for why academia and many other intellectual things are broken). Very important issues but people shoot messengers... The messenger shooting is bad enough that Deutsch refused me permission to post archived copies of hundreds of things he wrote publicly online but which are no longer available at their original locations. A few years earlier he had said he would like the archives posted. He changed his mind because he became more pessimistic about people's reactions to ideas.

I, by contrast, am pursuing a different strategy of speaking truth to power without regard for offending people. I don't want to hold back, but I also don't have a very large fanbase because even if someone agrees with me about many issues, I have like two dozen different ideas that would alienate many people, so pretty much everyone can find something to hate.

I don't think people would, at that point, start considering and learning different ideas than what they already have, e.g. learning Critical Rationalism so they could apply that framework to animal rights to reach a conclusion like "If Critical Rationalism is true, then animal rights is wrong." (And CR is not the only controversial premise I use that people are broadly ignorant of, so it's harder than that.) People commonly dismiss others, despite many credentials, if they don't like the message. I don't think playing the game of authority and credentials – an irrational game – will solve the problem of people's disinterest in truth-seeking. This view of academia is, again, a view Deutsch taught me.

Karl Popper published a ton but was largely ignored. Thomas Szasz too. There are many other examples. Even if I got published, I could easily be treated like e.g. Richard Lindzen who has published articles doubting some claims about global warming.

Fair enough.

I’m not going to respond to the rest of your posts line-by-line because I think most of what you’re saying is uncontroversial or is not relevant to the OP (it was relevant to my posts; thank you for the substantial, patient responses).

I think most people would deny most of it. I wasn’t expecting a lot of agreement. But OK, great.

For any bystanders who are interested and have made it this far, I think that this conversation between OP and Elliot is helpful in understanding their argument (at least it was for me).

Without the relevant CS or critical rationality background, I can attempt to restate their argument in a way that seems coherent (to me). Elliot or OP can correct me if I’m way off base.

The capacity for an organism to suffer may be binary; essentially, at a certain level of general intelligence, the capacity to suffer may turn on.

I don’t think there are levels of general intelligence; I think it’s present or not present. This is analogous to there not being levels of computers: it’s either a universal classical computer or it’s not a computer and can compute ~nothing. The jump from ~nothing to universality is discussed in BoI.

Otherwise, close enough.

(I imagine suffering to exist on a spectrum; a human’s suffering may be “worse” than a cow’s or a chicken’s because we have the ability to reflect on our suffering and amplify it by imagining better outcomes, but I’m not convinced that—if I experienced life from the perspective of a cow—that I wouldn’t recognize the negative hallmarks of suffering, and prefer it to end. My thinking is that a sow in a gestation crate could never articulate to herself “I’m uncomfortable and in pain; I wish I were comfortable and pain-free,” but that doesn’t preclude a conscious preference for circumstances to be otherwise, accompanied by suffering or its nonhuman analog.)

I think suffering comes in degrees if it’s present at all. Some injuries hurt more than others. Some bad news is more upsetting than other bad news.

Similarly, how smart people are comes in degrees when intelligence is present. They have the same basic capacity but vary in thinking quality due to having e.g. different ideas and different thinking methods (e.g. critical rationalist thinking is more effective than superstition).

Roughly there are three levels like this:

  1. Computer (brain)
  2. Intelligent Mind (roughly: an operating system (OS) for the computer with the feature that it allows creating and thinking about ideas)
  3. Ideas within the mind.

Each level requires the previous level.

Sand fails to match humans at level 1. No brain.

Apes fail to match humans at level 2. They run a different operating system with features more similar to Windows or Mac than to intelligence. It doesn’t have support for ideas.

Self-driving cars have brains (CPUs) which are adequately comparable to an ape or human, but like apes they differ from humans at level 2.

When Sue is cleverer than Joe, that’s a level 3 difference. She doesn’t have a better brain (level 1), nor a better operating system (level 2), she has better ideas. She has some knowledge he doesn’t. That includes not just knowledge of facts but also knowledge about rationality, about how to think effectively. E.g. she knows some stuff about how to avoid bias, how to find and correct errors effectively, how to learn from criticism instead of getting angry, or how to interpret disagreements as disagreements instead of as other things like heresy, bad faith, or “not listening”.

Small hardware differences between people are possible. Sue’s brain might be a 5% faster computer than Joe’s. But this difference is unimportant relative to the impact of culture, ideas, rationality, bias, education, etc. Similarly, small OS differences are possible but they wouldn’t matter much either.

There are some complications. E.g. imagine a society which extensively tested children on speed of doing addition problems in their head. They care a ton about this. The best performers get educated to be scientists and lower performers do unskilled labor. Someone with a slightly faster brain or slightly different OS might do better on those tests. Those tests limit the role of ideas. So, in this culture, a small hardware speed advantage could make a huge difference in life outcome including how clever the person is as an adult (due to huge educational differences which were caused by differences in arithmetic speed). But the same hardware difference could have totally different results in a different culture, and in a rational culture it wouldn’t matter much. What differentiates knowledge workers IRL, including scientists and philosophers, is absolutely nothing like the 99th percentile successful guys being able to get equal quality work done 5% faster than the 20th percentile guys.

Our actual culture has some stuff kinda like this hypothetical culture, but much more accidental and with less control over your life (there are many different paths to success, so even if a few get blocked, you don’t have to do unskilled labor). It also has similar kinda things based on non-mental attributes like skin color, height, hair color, etc, though again with considerably smaller consequences than the hypothetical where your whole fate is determined just by addition tests.

Back to my interpretation of the argument: Beneath a certain threshold of general intelligence, pain—or the experience of having any genetically preprogrammed preference frustrated—may not be interpreted as suffering in the way humans understand it and may not constitute suffering in any meaningful or morally relevant way (even if you otherwise think we have a moral obligation to prevent suffering where we can).

It’s possible that suffering requires uniquely human metacognition; without the ability to think about pain and preference frustration abstractly, animals might not suffer in any meaningful sense.

This is a reasonable approximation except that I think preferences are ideas and I don’t think animals have them at all (not even preprogrammed).

So far (I hope) all I’ve done is restate what’s already been claimed by Elliot in his original post. Whether I’ve helped make it any clearer is probably an open question. Hopefully, Elliot can correct me if I’ve misinterpreted anything or if I’ve dumbed it down to a level where it’s fundamentally different from the original argument.

This is where I think it gets tricky and where a lot of miscommunication and misunderstanding has been going on. Here is a snippet of the conversation I linked earlier:

curi: my position on animals is awkward to use in debates because it's over 80% background knowledge rather than topical stuff.

curi: that's part of why i wanted to question their position and ask for literature that i could respond to and criticize, rather than focusing on trying to lay out my position which would require e.g. explaining KP and DD which is hard and indirect.

curi: if they'll admit they have no literature which addresses even basic non-CR issues about computer stuff, i'd at that point be more interested in trying to explain CR to them.

I’m willing to accept that Elliot is here in good faith; nothing I’ve read on their blog thus far looks like an attempt to “own the soyboys” or “DESTROY vegan arguments.” They’re reading Singer (and Korsgaard) and are legitimately looking for literature that compares or contrasts nonhuman animals with AI.

The problem is—whether they’re right or not—it seems like the foundation of their argument requires a background in CR and theoretical computer science.

Yes.

My view: if you want to figure out what’s true, a lot of ideas are relevant. Gotta learn it yourself and/or find a way to outsource some of the work. So e.g. Singer needs to read Popper and Deutsch or contact some people competent to discuss whether CR is correct and its implications. And Singer also needs to contact some computer people and ask them and try to meet them in the middle by explaining some of what he does to them so they understand the problems he’s working on, and then they explain some CS principles to him and how they apply to his problems. Something like that.

That is not happening.

It ought to actually be easier than that. Instead of contacting people Singer or anyone else could look at the literature. What criticisms of CR have been written? What counter-arguments to those criticisms have CR advocates written? How did those discussions end? You can look at the literature and get a picture of the state of the debate and draw some conclusions from that.

I find people don’t do this much or well. It often falls apart in a specific way. Instead of evaluating the pro-CR and anti-CR arguments – seeing what answers what, what’s unanswered, etc. – they give up on understanding the issues and just decide to assume the correctness of whichever side has a significant lead in popularity and prestige.

The result is, whenever some bad ideas and irrational thinkers become prestigious in a field, it’s quite hard to fix because people outside the field largely refuse to examine the field and see if a minority view’s arguments are actually superior.

Also, often people just use common sense about what they assume would be true of other fields instead of consulting literature. So e.g. rather than reading actual inductivist literature (induction is mainstream and is one of the main things CR rejects), most animal researchers and others rely on what they’ve picked up about induction, here and there, just from being part of an intellectual subculture. Hence there exist e.g. academic papers studying animal intelligence that don’t cite even mainstream epistemology books or papers.

The current state of the CR vs. induction debate, in my considered and researched opinion, is there don’t actually exist criticisms of CR from anyone who has understood it, and there’s very little willingness to engage in debate by any inductivists. Inductivists are broadly uninterested in learning about a rival idea which they have not understood or refuted. I think ignoring ideas that no one has criticized is something of a maximum for a type of irrationality. And people outside the field (and in the field too) mostly assume that some inductivists somewhere did learn and criticize CR, though people usually don’t have links to specific criticisms, which is a problem. I think it’s important to have sources in other fields that aren’t your own so that if your sources are incorrect they can be criticized and corrected and you can change your mind, whereas if you just say “people in the field generally conclude X” without citing any particular arguments then it’s very hard to continue the discussion and correct you about X from there.

From my POV, (a) the argument that suffering may be binary vs. occurring on a spectrum is possible but far from settled and might be unfalsifiable. From my POV, it’s far more likely that animals do suffer in a way that is very different from human suffering but still ethically and categorically relevant.

That’s a reasonable place to start. What I can say is that if you investigate the details, I think they come out a particular way rather conclusively. (Actually the nature of arguments, and what is conclusive vs. unsettled – how to evaluate and think about that – is a part of epistemology; it’s one of the issues I think mainstream epistemology is wrong about. That’s actually the issue where I made my largest personal contribution to CR.)

If you don’t want to investigate the details, has anyone else done so as your proxy or representative? Has Singer or any other person or group done that work for you? Who has investigated, reached a conclusion, written it up, and you’re happy with what they did? If no one has done that, that suggests something is broken with all the intellectuals on your side – there may be a lot of them, but between all of them they aren’t doing much relevant thinking.

In some ways, the more people believe something and still no one writes detailed arguments and addresses rival ideas well, the more damning it is. In other words, CR has the excuse of not having essays to cover every little detail of every mainstream view because there aren’t many of us to write all that and we have ~no funding. The other side has no such excuse, yet their side, between all those people, has no representatives who will debate! They have plenty of people to field some specialists in refuting CR, but they don’t have any.

Sadly, the same pattern repeats in other areas, e.g. The Failure of the 'New Economics’ by Henry Hazlitt is a point-by-point book-length refutation of Keynes’ main book. It uses tons of quotes from Keynes, similar to how I’m replying to this comment using quotes from pdxthehunted. As far as I know, Hazlitt’s criticisms went unanswered. Note: I think Hazlitt’s level of fame/prestige was loosely comparable to Popper and more than Deutsch; it’s not like he was ignored for being a nobody (which I’d object to too, but that isn’t what happened).

Large groups of people ignore critical arguments. What does it mean for intellectuals to rationally engage with critics and how can we get people to actually do that? I think it’s one of the world’s larger problems.

new_grass made a few posts that more eloquently describe that perspective; humans, yelping dogs, and so on evolved from a common ancestor and it seems unlikely that suffering is a uniquely human feature when so many of our other cognitive skills seem to be continuous with other animals.

New_grass says:

link

But this isn't the relevant proposition, unless you think the probability that general intelligence (however you are defining it) is required for the ability to suffer or be conscious is one. And that is absurd, given our current meager understanding of consciousness.

The relevant question is what the probability is that other animals are conscious, or, if you are a welfarist, whether they can suffer. And that probability is way higher than zero, for the naturalistic reasons I have cited.

But according to Elliot, our judgment of the conservatism argument hinges on our understanding of CR and Turing computability.

Does the following sound fair?

Yeah, I have arguments here covering other cases (the cases of the main issue being suffering or consciousness rather than intelligence) and linking the other cases to the intelligence issue. I think it’s linked.

If pdxthehunted had an adequate understanding of the Turing principle and CR and their implications on intelligence and suffering, their opinion on (a) would change; they would understand why suffering certainly does occur as a binary off/on feature of sufficiently intelligent life.

In short, yes. Might have to add a few more pieces of background knowledge.

Please let me know if I’ve managed to at least get a clearer view of the state of the debate and where communication issues are popping up.

Frankly, I’ve enjoyed this thread. I’ve learned a lot. I bought DD’s BOI a couple of years ago after listening to his two podcasts with Sam Harris, but never got around to reading it. I’ve bumped it up to next on my reading list and am hoping that I’m in a better position to understand your argument afterward.

Yeah, comprehensive understanding of DD’s two books covers most of the main issues. That’s hard though. I run the forums where people reading those books (or Popper) can ask questions (it’s this website and an email group with a 25 year history, where DD used to write thousands of posts, but he doesn’t post anymore).

Finally--if capacity for suffering hinges on general intelligence, is consciousness relevant to the argument at all?

To a significant extent, I leave claims about consciousness out of my arguments. I think consciousness is relevant but isn’t necessary to say much about to reach a conclusion. I do have to make some claims about consciousness, which some people find pretty easy to accept, but others do deny. These claims include:

  1. Dualism is false.
  2. People don’t have souls and there’s no magic involved with minds.
  3. Consciousness is an emergent property of some computations.
  4. Computation is a purely physical process that is part of physics and obeys the laws of physics. Computers are regular matter like rocks.
  5. Computation takes information as input and outputs information. Information is a physical quantity. It’s part of the physical world.
  6. Some additional details about computation, along similar lines, to further rule out views of consciousness that are incompatible with my position. Like I don’t think consciousness can be a property of particular hardware (like organic molecules – molecules with carbon instead of silicon) because of the hardware independence of computation.
  7. I believe that consciousness is an emergent property of (general) intelligence. That claim makes things more convenient, but I don’t think it’s necessary. It’s a stronger claim than necessary. But it’s hard to explain or discuss a weaker and adequate claim. There aren’t currently any known alternative claims which make sense given my other premises including CR.

One more thing. The “general intelligence” terminology comes from the AI field which calls a Roomba’s algorithms AI and then differentiates human-type intelligence from that by calling it AGI. The concept is that a Roomba is intelligent regarding a few specific tasks while a human is able to think intelligently about anything. I’d prefer to say humans are intelligent and a Roomba or mouse is not intelligent. This corresponds to how I don’t call my text editor intelligent even though, e.g., it “intelligently” renumbered the items in the above list when I moved dualism to the top. In my view, there’s quite a stark contrast between humans – which can learn, can have ideas, can think about ideas, etc. – and everything else which can’t do that at all and has nothing worthy of the name “intelligence”. The starkness of this contrast helps explain why I reach a conclusion rather than wanting to err on the side of caution re animal welfare. A different and more CR-oriented explanation of the difference is that all knowledge creation functions via evolution (not induction) and only humans have the (software) capacity to do evolution of ideas within their brains. (Evolution = replication with variation and selection.)
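To make the "replication with variation and selection" definition concrete, here is a minimal toy sketch of evolution as an algorithm. The fitness function and all parameters are arbitrary illustrative choices, not a model of biological (or memetic) evolution:

```python
import random

# Evolution as an algorithm: replication with variation, then selection.
def evolve(population, fitness, generations=500):
    for _ in range(generations):
        # replication with variation: each member leaves two mutated copies
        offspring = [x + random.gauss(0, 1) for x in population for _ in range(2)]
        # selection: only the fittest half of the offspring survive
        offspring.sort(key=fitness, reverse=True)
        population = offspring[:len(population)]
    return population

# Example: selection pressure toward the value 42, starting from zeros.
result = evolve([0.0] * 10, fitness=lambda x: -abs(x - 42))
```

The same loop structure applies whether the replicators are genes or, as CR says, ideas varied and selected within a mind.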

That’s just the current situation. I do think we can program an AGI which will be just like us, a full person. And yes I do care about AGI welfare and think AGIs should have full rights, freedoms, citizenship, etc. (I’m also, similarly, a big advocate of children’s rights/welfare and I think there’s something wrong with many animal rights/welfare advocates in general that they are more concerned about animal suffering than the suffering of human children. This is something I learned from DD.) I think it’s appalling that in the name of safety (maybe AGIs will want to turn us into paperclips for some reason, and will be able to kill us all due to being super-intelligent) many AGI researchers advocate working on “friendly AI” which is an attempt to design an AGI with built-in mind control so that, essentially, it’s our slave and is incapable of disagreeing with us. I also think these efforts are bound to fail on technical grounds – AGI researchers don’t understand BoI either, neither its implications for mind control (which is an attempt to take a universal system and limit it with no workarounds, which is basically a lost cause unless you’re willing to lose virtually all functionality) nor its implications for super intelligent AGIs (they’ll just be universal knowledge creators like us, and if you give one a CPU that is 1000x as powerful as a human brain, that’ll be very roughly as good as having 1000 people work on something, since it’s the same compute power). This, btw, speaks to the importance of some interdisciplinary knowledge. If they understood classical liberalism better, that would help them recognize slavery and refrain from advocating it.


Elliot Temple on December 3, 2019

Messages (28)

The (lack of) interest of animal rights advocates in human rights has been brought up by pro-lifers too. If one believes a squirrel (maybe even a snail, salmon, insect or plankton) is human-like enough to have some rights and to be concerned with its welfare, then shouldn't one also want second and third trimester abortions banned? And seriously consider banning more or all abortions, to be safe.

WebMD:

https://www.webmd.com/baby/1to3-months

> By the end of the third month of pregnancy, your baby is fully formed. Your baby has arms, hands, fingers, feet, and toes and can open and close its fists and mouth.


Anonymous at 5:44 PM on December 3, 2019 | #14677

> Bacteria fail to match humans at level 1. No brain.

Maybe a nitpick but aren't bacteria computers?


Anonymous at 6:37 PM on December 3, 2019 | #14680

#14680 Not an expert on bacteria. What do you mean? I thought, roughly, that there's no computer there in any normal sense; they are just structured in such a way as to respond to certain stimuli and for certain chemical reactions to take place in the right environments and so on. So they're kinda like the cells which make my leg kick out if you hit my knee with a hammer rather than like the brain.

Is this a broader point about how any object will compute if you shine light on it because the laws of motion mean interacting objects are doing information processing of some sort at all times?


curi at 6:49 PM on December 3, 2019 | #14681

Bacteria and cells as computers

Not making a broader point that any object will compute.

It's a consequence of von Neumann's work on self-reproducing automata. Von Neumann showed that any self-reproducing machine of sufficient complexity must have some kind of program separate from the machine which is also passed on. The information in the program is interpreted by the machine and used as instructions to build a copy of the machine and then the program is copied into the new machine. Von Neumann's result was obtained before the discovery of DNA and predicted its existence.
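A toy sketch may help (this is only an illustration of the program/machine separation, nothing like von Neumann's actual cellular-automaton construction). The "tape" gets used in two different ways: interpreted as build instructions, and blindly copied into the offspring, like DNA:

```python
def make_membrane(): return "membrane"
def make_flagellum(): return "flagellum"

class Machine:
    def __init__(self, tape):
        self.tape = tape   # the program, stored separately from the machinery
        self.parts = []

    def construct(self):
        child = Machine(list(self.tape))                # copy the tape, uninterpreted
        child.parts = [build() for build in self.tape]  # interpret the tape as build instructions
        return child

parent = Machine([make_membrane, make_flagellum])
child = parent.construct()
grandchild = child.construct()  # replication can continue indefinitely
```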

People seem reluctant to think of bacteria and cells as computers. But bacteria and cells are not like crystals that replicate, they contain a program and machinery for interpreting that program. As well as reproduction, that program is used for the maintenance of the bacterium or cell. Reprogramming is also possible. For example, bacteria can take in small fragments of DNA from other organisms and from the environment and acquire new characteristics. And humans have been able to make bacteria produce human proteins by transplanting bits of DNA.

Looking at more complicated eukaryotic cells, it seems like computing is going on all over the place. Check out, for example, the Golgi Apparatus, which is like a kind of CPU. And microtubules, which are like some kind of weird cellular automata.


Anonymous at 11:31 PM on December 3, 2019 | #14682

#14682 OK thanks for the info. I edited the non-computer example from bacteria to sand.


curi at 11:36 PM on December 3, 2019 | #14683

I imagine there's some sort of substantial distinction a brain makes, but I don't know precisely what it is.


curi at 11:42 PM on December 3, 2019 | #14684

#14682 Where'd you learn that, including the Von Neumann part? Is there a particular book you can recommend?


curi at 11:48 PM on December 3, 2019 | #14685

> I imagine there's some sort of substantial distinction a brain makes, but I don't know precisely what it is.

Don't know. Maybe a division of labor thing?


Anonymous at 2:24 AM on December 4, 2019 | #14687

> #14682 Where'd you learn that, including the Von Neumann part? Is there a particular book you can recommend?

Just stuff I've picked up. No expert on cell biology or von Neumann. A lot of progress has been made in cell biology in recent years and I find it interesting. Like, using CRISPR, we can now create bespoke computers inside human cells. Nothing I can recommend. In addition to what Not Anon said, maybe have a look at von Neumann's "The Computer and the Brain". I haven't read it.


Anonymous at 2:38 AM on December 4, 2019 | #14689

the place for emotions

Hello, I am very interested in your research, as I myself just started looking at the animal rights/veganism movement with a critical view.

I have a question for you. Have you ever considered emotions as a difference between robots and animals?

Thanks for your answer


Polvonauta at 4:26 AM on December 4, 2019 | #14690

#14690 Emotions are a type of idea, so they don't change anything.

What if emotions weren't a type of idea? Then they'd be a type of non-idea, outside the mind, just like the number 7, so that wouldn't change anything either. That'd just be information, like a robot has, which someone labelled an "emotion".

This applies to everything. It's inside the mind or outside the mind. Pick one. If it's inside the mind, a mind is required. If it's outside the mind, the mind can't suffer over it unless the creature has a mind which finds out about it and then forms an opinion (an idea).


curi at 12:41 PM on December 4, 2019 | #14691

#14691

There is more to reality than idea/non-idea. Emotion does not only happen in the mind, like an abstract concept of "happy".

Emotions are also physical: serotonin, cortisol, dopamine, adrenaline, heart rate, blood pressure, muscular tension, body temperature, body language/posture... And humans share those structures with an incredibly high number of species.

You could program a robot to simulate emotions but it won't FEEL them unless it has that body complexity.

Just to make my point... Actually, it is very hard to have ideas when you are in pain. Imagine having a spear through your leg or someone kicking your genitalia. There is no space for ideas, only pain (which is very concrete).


Anonymous at 1:41 AM on December 6, 2019 | #14720

It is still me, forgot to sign :)


Polvonauta at 1:42 AM on December 6, 2019 | #14721

#14725 What ideas in which books (quotes so I can find the right spots?) are relevant to what nodes?

And he can post anonymously himself here.


curi at 12:59 PM on December 6, 2019 | #14726

#14725

Interesting. Thanks for sharing these. BTW, I'm sorry that your post got deleted on the debate a vegan subreddit. I was there--arguing for the case of animal rights--and I'm very disappointed in the subreddit.

Elliot's argument has completely gripped me for the last three or four days. I am a student with finals coming up and I've found myself just thinking about the claim (which, as far as I know, could be legitimate) that animals don't experience suffering in a meaningful way.

I keep asking myself--if I were convinced of this, would I abandon veganism? I like to think that I would, but I also know that it would be paradigm-shifting in some unpleasant ways.

In my humble opinion, Elliot's argument is--at the very least--by far the most logically consistent argument I've seen, or can even imagine, against veganism.

Which brings me to a few questions that I've had. These aren't intended as refutations or an intention to debate. These are honest questions looking for clarifications. Direct response is great, but links to other writings are also fine.

1) The biggest emotional response I've had while considering the argument has been, unsurprisingly, around my pets--who I love and openly admit to anthropomorphizing. I don't think it's harmful to anthropomorphize animals *if* their conscious experience is significantly similar enough to humans' to warrant ethical consideration. In that case, it might even be helpful.

However, if animals cannot suffer in a way that is at all meaningful--or if they're not meaningfully conscious at all--this goes out the window.

Sorry--the question: Is the logical conclusion that we have *no* obligations to any nonhuman animals?

There should be no torture that is impeachable, at least regarding one human and one animal. There should be no reason that we can't lock an animal in a cage and watch it die of thirst or hunger; "torture" isn't even a coherent term to use regarding the treatment of animals. We can't torture self-driving cars. Nothing constitutes animal abuse.

(There could still be issues with abusing an animal if it traumatized a human who was watching, if it belonged to someone else, etc.)

Like I said, this isn't a refutation; it's just where the original claim takes me, and I wanted a second opinion. Intuitively, it's *extremely* troubling to me, but that certainly doesn't make it false.

2) This question might just expose my ignorance around computer science or adjacent/relative fields.

My understanding of part of the original argument is:

Nonhuman animals have no behaviors that cannot be replicated by software algorithms. Humans do, because we have general intelligence.

I've been wondering why a set of behaviors is necessary to differentiate nonhuman animals vs. software. I probably won't explain this well, in part due to my own incomplete understanding, but can't *different* algorithms result in identical behaviors/outputs?

A piece of software could have a much lower algorithmic complexity but still produce an identical output to a much more complex (if redundant) bio algorithm.

Could an experience *analogous* to suffering emerge from the millions of lines of redundant code that dictates animal behavior? Intelligent design can create an algorithm with indistinguishable output for a fraction of the complexity, but could evolution have resulted in (strictly) unnecessary emergent features?

Circling back to the original argument, I'm uncertain whether the claim lives or dies on my above question, anyway. I can imagine the pain of, say, getting a filling without anesthetic as a form of excruciating pain that could be meaningless without a conscious preference for it to end.

At this point, I'm struggling with my own bias and overwhelming incredulity. I'm not to a point where I can make sense of the claim that animals cannot have conscious preferences for one state over another and experience pleasant or unpleasant states. The similarity of their wetware to ours doesn't *prove* that they have proportionally similar conscious experience, but to my mind Occam's razor suggests that they do, barring better evidence.

Obviously, an argument from incredulity is in no way a point against Elliot's original claim. I'm just trying to get to a point where I can engage with the idea on its own foundation. I've got some final exams this week, but plan on beginning Deutsch over the holiday. Maybe things will become clearer.

Thanks,

pdxthehunted


pdxthehunted at 1:27 PM on December 6, 2019 | #14728

#14728 I'm a bystander but just a note to say thanks for seriously trying to understand Elliot's position. I'm enjoying the debate.

One question: What do vegans make of the process of biological evolution? This regularly causes animals to be maimed, starved, poisoned, eaten, etc. If animals can suffer then evolution is a nasty, horrible process. I don't think vegans are trying to stop evolution from happening though. So what gives? Shouldn't we be trying to ameliorate the nastiness of evolution as much as possible if animals can suffer?


Bystander at 4:35 PM on December 6, 2019 | #14731

#14731

Thanks. I can't speak for all vegans, obviously. I'm also enjoying the conversation (at this point, I don't think it's a debate because some of Elliot's claims seem to require background knowledge that I don't possess).

If animals can suffer, then they almost certainly suffer in nature. There are some philosophers who do think we have a moral imperative to eliminate nonhuman animal suffering in nature--e.g., David Pearce. However, once we're in that kind of situation, we're talking about genetically engineering entire populations of animals (obviously without their consent). If an animal has evolved as a predator, what percentage of their genetic heritage can be changed before we're effectively destroying a species vs. altering it?

I *think* the mainstream vegan view would be that animals suffer in nature (nasty, brutish, short) but our obligation to them is less relevant because we're not inflicting that suffering. Animals in commercial meat operations ("factory farms") are suffering (if they suffer at all) at our hands, and on a massive scale. If suffering occurs in these operations, the value of the coefficient is growing in proportion to our appetite for animal products.

It might be an issue of scope; convincing people to voluntarily give up animal products is already enormously difficult. Insisting that anyone who does *also* commit to trying to change the fundamental dynamics of life eating life would possibly do more harm than good to the campaign.

A definition I hear a lot for veganism is to limit the infliction of suffering to animals by humans "as far as practicable and possible." That's a little vague and looks different for everyone, but my guess is that most vegans don't consider global-scale genetic engineering programs possible and that none consider it practicable.

Your question does make me wonder how I would feel if (assuming animals can suffer) we could genetically engineer a species of animal that wouldn't have that capacity. If we could create an animal that was nutritious, tasty, and didn't require tons of antibiotics to remain healthy--and remove their ability to experience pain or fear--could a vegan eat them? Similar to lab-grown meat, I guess, but a little muddier.

My intuition is that they would be, if not a technically vegan product, a huge improvement. Especially if we were somehow certain that they did not suffer (that we hadn't engineered them to have some sort of nightmarish conscious experience). I think that even if they did still suffer to some extent, it would be a huge improvement over the current situation. I wonder if there's any kind of project like that going on?


pdxthehunted at 6:09 PM on December 6, 2019 | #14734

reply part 1

> Nothing constitutes animal abuse[?]

Short answer, yeah there's no such thing as animal abuse in the conventional sense. It's kinda like throwing and breaking your phone or kicking your TV. Dumb, pointless, destructive. Though doing lots of things to animals can be legitimate scientific research, there are reasons in some cases.

But.

Plants are a good comparison. No one abuses (or "abuses") corn. What the hell for?

If someone is actively "abusing" an animal, presumably it's because they believe the animal can suffer or something along those lines. Why else would you do it? So that's bad, something bad is going on there. (Neglect is different than active abuse.)

In general, ~everyone thinks animals can suffer and that they should make some reasonable efforts to minimize that suffering. So when people don't, it really can be a sign they're cruel, nasty, callous, etc. Especially with pets. It's much less meaningful with e.g. a farmer who has become used to animal deaths and is less "nice" to an animal while trying to save time while doing his job.

And if you abuse your dog, it's more likely to bite you. Or bite another person or dog. That's dangerous.

If you don't want to take care of a pet and keep it in good condition, just don't get one. In general there's no reason to do stuff to pets which is conventionally considered bad. The main useful things to do, which some animal welfare advocates object to, are farming related and scientific research related.

Of course some creep somewhere will get a dog and beat it up at home (even if it's illegal). There are a lot of people. People also beat up their own kids, their wives, etc. We can't fully protect human beings from violence today. I wouldn't expect my worldview to contribute to those problems. It'd probably ruin the fun if he thought the dog was like a robot or tree. He doesn't punch trees. And if he thought like me, god he'd sure be a different person. You have to learn stuff about reason to understand my perspective on animals.

I don't see anything rational about dog fighting or cock fighting. How is that entertaining? Where is the human skill? I'm not a big fan of sports but at least that has consent, some competition involving human skill and knowledge, some thinking is involved, etc. Human beings figuring out how to be good at something, even if the goal itself is kinda pointless, often turns out interesting anyway because of the process of trying to figure out how to do it better.

And it's rude to do things to a pet that bother other people.

And I think one should broadly follow mainstream ideas when it doesn't matter. Pick your battles, etc. So while I won't eat vegan (not mainstream + quite inconvenient), I can go along with my culture on e.g. not punching dogs. No problem. Other people can and should do that too.

If dog fights were mainstream, my opinion of the matter would depend on the motivation. It's hard to come up with a rational motive but I'm sure one could make something up. But the mainstream says they're bad; while I don't agree with all the reasoning, no big deal, that's OK, I'll leave that issue alone and go do something else. The status quo for this stuff is mostly OK with me. I just think people are confused about intellectual issues, that some people are making their own lives worse with vegan diets, and that there is related political activism which is trying to make things worse for non-vegans and which pressures more people to be vegan. The activism may result in e.g. farmers having to put unnecessary extra effort into producing some foods, which raises their price. And it's an allied cause to some other activism and political agendas that I also disagree with and consider dangerous (broadly: environmentalism, socialism, and undermining the civilization we have with its capitalist science, industry, mass production, etc.).


curi at 9:32 PM on December 6, 2019 | #14736

> If animals can suffer, then they almost certainly suffer in nature. There are some philosophers who do think we have a moral imperative to eliminate nonhuman animal suffering in nature--e.g., David Pearce. However, once we're in that kind of situation, we're talking about genetically engineering entire populations of animals (obviously without their consent). If an animal has evolved as a predator, what percentage of their genetic heritage can be changed before we're effectively destroying a species vs. altering it?

There are alternatives to genetic engineering. We could nuke large areas of wilderness. I know, I know, lots of downsides. But think about how many fewer animals will be born – and therefore how many fewer will suffer and die. And tons of animals will die in an instant, painless way from the blast, so that's less suffering than if we didn't nuke them. So it helps lots of currently living animals too.

So: if you think animals can suffer, I suggest you also take the position that they can experience pleasure and that their life has some positive value to them (enough to more than make up for the part where they die, at least for the average natural death, so being born is a good thing for them not a bad thing). Which means leaving nature alone is OK on balance. It also makes at least some farming and lab animal stuff OK. Otherwise, if animals importantly suffer but get no benefits from life, it's time to seriously consider methods of wiping out most animals *to help them, for their sake*!


curi at 9:47 PM on December 6, 2019 | #14737

> Nonhuman animals have no behaviors that cannot be replicated by software algorithms. Humans do, because we have general intelligence.

Should say can't be replicated by *non-GI* software algorithms. GI is a different type of algorithm. Algorithm is a generic word.

> I've been wondering why a set of behaviors is necessary to differentiate nonhuman animals vs. software.

The behaviors are what we've observed and are trying to explain and understand the causes of.

> can't *different* algorithms result in identical behaviors/outputs?

Yes. DD told me about that and I've written about it:

http://fallibleideas.com/knowledge-structure

https://curi.us/988-structural-epistemology-introduction-part-1

http://curi.us/991-structural-epistemology-introduction-part-2
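A trivial standalone illustration of the same point (my own example, not from those essays): two algorithms with identical input/output behavior but different internal structure, so observing only outputs can't distinguish them:

```python
def sum_loop(n):
    # O(n): add the numbers one at a time
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

def sum_formula(n):
    # O(1): Gauss's closed-form formula
    return n * (n + 1) // 2

# identical behavior, different algorithms
assert all(sum_loop(n) == sum_formula(n) for n in range(1000))
```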

> could evolution have resulted in (strictly) unnecessary emergent features?

In general that doesn't happen because there isn't selection pressure for it. Even if it was functional in the past and evolved, when it became unnecessary it'd get lost over time to random changes because there's no selection pressure keeping it around.

> The similarity of their wetware to ours

The broad irrelevance of hardware details to software behavior is somewhat uncontroversial, or at least ought to be, in computer science theory.


curi at 10:12 PM on December 6, 2019 | #14738

> Algorithm is a generic word.

It's also a somewhat confusing term.

In computer science, an algorithm is a finite sequence of instructions that starts from an initial state, produces an output, and then *halts*. So macOS and other operating systems are not algorithms because they do not halt. But they are composed of lots of algorithms for accomplishing tasks. Similarly web servers, programs running in animal brains, and GI are not algorithms although they are composed of algorithms. These things are computer programs.
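A minimal sketch of the distinction (illustrative only):

```python
# An algorithm in the strict sense: takes input, produces output, halts.
def largest(xs):
    best = xs[0]
    for x in xs[1:]:
        if x > best:
            best = x
    return best  # always terminates

# A program that isn't an algorithm in that sense: an event loop meant
# never to halt (like an OS or web server), though each request it
# handles is processed by halting algorithms such as largest().
def serve_forever(next_request, handle):
    while True:  # no termination condition
        handle(next_request())
```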


Anonymous at 2:00 AM on December 7, 2019 | #14741

>> Nonhuman animals have no behaviors that cannot be replicated by software algorithms. Humans do, because we have general intelligence.

> Should say can't be replicated by *non-GI* software algorithms. GI is a different type of algorithm. Algorithm is a generic word.

>> I've been wondering why a set of behaviors is necessary to differentiate nonhuman animals vs. software.

> The behaviors are what we've observed and are trying to explain and understand the causes of.

Right--but as you write in the essays linked to in the quoted text below, identical denotation/output doesn't imply an identical or even (necessarily) similar knowledge structure.

The implications of this seem to be that we can't draw conclusions about animal cognition by comparing animal behavior with what output we can tease out of our software.

At the very least, why should we assume that something non-general AI can do implies that an animal brain isn't using general intelligence to get to the same place?

To me, this seems like it might be more relevant than it appears on first blush. I don't imagine you're a behaviorist since you don't deny that human beings have internal states. I've also gathered that CR denies the validity of induction as a truth-finding mechanism. I can see where that is technically true, but I can't imagine a better method for making inferences about the internal states of nonhuman animals.

We can't even *know* the internal states of other humans, can't prove whether or not we're speaking with a conscious agent. Inductively, we reason that we aren't the only conscious agent in the universe.

>> can't *different* algorithms result in identical behaviors/outputs?

> Yes. DD told me about that and I've written about it:

> http://fallibleideas.com/knowledge-structure

> https://curi.us/988-structural-epistemology-introduction-part-1

> http://curi.us/991-structural-epistemology-introduction-part-2

>> could evolution have resulted in (strictly) unnecessary emergent features?

> In general that doesn't happen because there isn't selection pressure for it. Even if it was functional in the past and evolved, when it became unnecessary it'd get lost over time to random changes because there's no selection pressure keeping it around.

I'm not an expert here, but I think I disagree with this statement. Senescence and cancers are both features that aren't selected for but are near-universal in our species. This is due to the fact that mutations don't always have an effect that either enhances fitness or detracts from it.

Instead, a mutation can have an effect that increases fitness early on--it increases the chance that an organism will reproduce--but have deleterious effects later on. In fact, if the mutation increases fitness enough early on, eventually it might be universal through a population, despite its "negative" effects on an individual in the population.

Senescence is just an example though; it isn't relevant to consciousness or suffering. I'm just arguing that evolution is a process that can leave an organism with a host of traits with effects that aren't fitness-enhancing.

Natural selection doesn't create an optimized suite of maximally efficient survival/reproductive skills. It creates a "good enough to reproduce in a particular environment" suite, which may include redundancies or inefficiencies that are useless/detrimental but built into the foundation.

>> The similarity of their wetware to ours

> The broad irrelevance of hardware details to software behavior is somewhat uncontroversial, or at least ought to be, in computer science theory.

Right. I'm not arguing against hardware-independent computability, or at least didn't mean to be.

But it's not just our hardware--our software has also evolved *from* those of non-human animals.

I guess this circles back around to claims about suffering, though. To me, it seems like we would require an evolutionary *miracle* to go from automata-like chimpanzees to conscious agents with general intelligence. You would argue that it's no miracle, and what's more--we've already seen evidence of it, in humanity's ability to create new knowledge from scratch, independent of our genetic "built-in" knowledge. (Correct me if I'm misstating your position.)

I'm still not convinced that GI is an all-or-nothing phenomenon, but I'm also not convinced that I never will be (for instance, after reading and discussing DD). So we can leave that aside for right now.

What about pain? As has been stated previously, pain is meaningless by itself. To constitute suffering, it requires an agent's negative judgment of it.

This might be a naive interpretation on my part, but what good is pain *without* the ability to interpret it negatively (or otherwise)? The ability to interpret pain as damage to or malfunction within an organism *is* the fitness-selecting aspect of the pain experience. Plants don't feel pain (what good would it do them?) and presumably neither do self-driving cars.

So far I don't think you've denied that animals experience pain, just that they're incapable of interpreting it as good or bad. But their behaviors--in which they seem to try to avoid pain--would suggest that there must be some sort of value judgment taking place. I know, I know--different epistemologies. Humor me.

When a human avoids pain it's (usually) because we don't want to experience it. We wince or yelp without any conscious volition--but only do so when there is a *conscious experience* of pain (someone under anesthesia doesn't wince at a scalpel). With the potential for identical outputs from different algorithmic processes, why assume animal software is *more similar* to a self-driving car (created by humans) than a human (whose software is an updated version of the animal's)?

I don't know if this is something we can get to the bottom of or not. I can imagine, even though I can't articulate, that there is "something it is like to be a bat" experiencing pain. This is absent any form of language or discrete, abstract thought. The experience of fear, joy, hunger, pain, or contentment all have components (even in humans) that appear in our conscious awareness without having to be teased out with abstract reasoning. Abstract reasoning can certainly allow us to change our experience of pain (or any of the emotional experiences listed) but I don't believe it's a necessary condition for the phenomenological experience itself.

General intelligence or abstract reasoning can be human-specific traits that come online "on top" of the qualitative foundation shared by some (if not all) animals. Animals don't even need free will to experience the physiological components that occur as their software executes.

Imagine a dog given a choice between a piece of liver and a piece of celery. The dog may exercise no volition when choosing the liver; he's hardwired to pick the meat. He doesn't need to be aware in any sense of why he wants the liver or be able to reason that he'll experience pleasure from eating it. His behavior can be completely predetermined and still include the experience of reward and hedonistic pleasure from eating it. He may be hardwired to *always* seek reward and avoid pain. But what he's hardwired to do *is* his conscious experience, *is* a matter of valuing certain states over others.

Thoughts?


Anonymous at 3:27 PM on December 7, 2019 | #14747

The case for animal (general) intelligence is basically "They do behaviors that look (generally) intelligent. How else could you explain their observed behavior?" I'm countering that case by providing an alternative and simpler explanation, as well as asking "If animals are generally intelligent, why don't they ever do any behaviors that indicate general intelligence?".

I don't think there's a different case for animal intelligence. The things you bring up are ad hoc attempts to reconcile some new intellectual tools with a pre-existing conclusion. That's a good thing to try doing, consider and learn about. But so far you haven't raised a significant new challenge.

> I'm not an expert here, but I think I disagree with this statement. Senescence and cancers are both features that aren't selected for but are near-universal in our species. This is due to the fact that mutations don't always have an effect that either enhance fitness or detract from it.

Those are basically bugs, not complex, sophisticated, knowledge-heavy features. Animals don't evolve e.g. eyes without selection pressure. Intelligence is like eyes (but more complicated); cancer isn't.

> This might be a naive interpretation on my part, but what good is pain *without* the ability to interpret it negatively (or otherwise)?

Like touch or vision information, pain information is used as inputs to animal behavior algorithms.

No intellectual interpretation is necessary to use the data. Code can just say like "if data == 1, do X; if data == 2; do Y" and more complicated versions of that.
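For instance, here's a minimal sketch of that kind of dispatch (the signal values and responses are made up for illustration, not a claim about any real animal's code):

```python
# The pain signal selects a behavior directly; nothing in the code
# interprets the state as bad or forms a judgment about it.
def react(pain_signal):
    if pain_signal == 0:
        return "continue current behavior"
    elif pain_signal < 5:
        return "withdraw limb"
    else:
        return "flee and vocalize"  # yelping without any suffering

react(7)  # -> "flee and vocalize"
```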

> So far I don't think you've denied that animals experience pain, just that they're incapable of interpreting it as good or bad. But their behaviors--in which they seem to try to avoid pain--would suggest that there must be some sort of value judgment taking place. I know, I know--different epistemologies. Humor me.

The behavior algorithms are evolved to avoid pain because pain correlates with negative survival and replication value (because pain correlates with actual or potential bodily damage).

> I can imagine, even though I can't articulate, that there is "something it is like to be a bat" experiencing pain.

There exists no theory detailing how that would or could work in computational terms, initial attempts to create such a model have failed so far, and there is no need for such a theory because there's no problem with the animals-as-robots explanation. There's nothing wrong with the simpler view, nothing it leaves unexplained, no known evidence that contradicts it.

One can always seek out new ideas about anything, e.g. maybe all U.S. presidents were aliens, but the theoretical potential to investigate new viewpoints (with no promising leads, and no known flaws in the current view) which could potentially lead to new discoveries is no reason to reject or doubt current knowledge.


curi at 3:50 PM on December 7, 2019 | #14748

#14747

> I can imagine, even though I can't articulate, that there is "something it is like to be a bat" experiencing pain.

Is that something like being a video game boss losing a fight?

> Imagine a dog given a choice between a piece of liver and a piece of celery. The dog may exercise no volition when choosing the liver; he's hardwired to pick the meat. He doesn't need to be aware in any sense of why he wants the liver or be able to reason that he'll experience pleasure from eating it. His behavior can be completely predetermined and still include the experience of reward and hedonistic pleasure from eating it. He may be hardwired to *always* seek reward and avoid pain. But what he's hardwired to do *is* his conscious experience, *is* a matter of valuing certain states over others.

IIUC, that line of thought also leads to the conclusion that Roombas are conscious:

Imagine a Roomba given a choice between vacuuming the living room floor or the kitchen floor. The Roomba may exercise no volition when choosing the living room; it's hardwired to start with the room that's closest to its home base. It doesn't need to be aware in any sense of why it wants the living room or be able to reason that it'll experience pleasure from vacuuming it. Its behavior can be completely predetermined and still include the experience of reward and hedonistic pleasure from vacuuming it. It may be hardwired to *always* seek reward and avoid pain. But what it's hardwired to do *is* its conscious experience, *is* a matter of valuing certain states over others.


Alisa at 4:05 PM on December 7, 2019 | #14749

Animals aren't intelligent, but they are conscious (unlike an algorithm or computer). They can't calculate things intelligently like we do, but they can feel pain, and inflicting that on them is bad regardless of the fact that they are unintelligent.

You are right to say that animals are fundamentally different from human beings, but it is equally obvious that they are also fundamentally different from machines.


Anonymous at 12:51 PM on December 8, 2019 | #14754

#14754 Are you a dualist, mystic or what? You haven't explained your concept of non-computational consciousness. Or do you think animals have a different algorithm which is conscious but unintelligent? You haven't really engaged with the issues productively.


Anonymous at 12:53 PM on December 8, 2019 | #14755
