Psychology Studies Mostly Suck

for most psychology issues, research is the wrong approach

ppl need to think, debate, explain, criticize. not measure empirical data

observe carefully, document some human behavior. get some examples. but don't just get a low-resolution, imprecise look at mass data and then do statistics

they are trying to copy the methods of the empirical sciences, which were quite successful, but it's inappropriate for their subject matter

so the field basically stopped making progress

there's certainly some overlap in methods btwn physics and psych but they are copying physics or even medical studies in bad ways. they copy too much of the format and details despite substantive differences. there's some cargo culting going on

psychology research uses too many analogies, poor proxies, and bad measures to try to get data to do statistics with. whereas physics data isn't based on analogies. you can't replace rulers and telescopes with questionnaires and think ur doing the same sort of science.


Elliot Temple | Permalink | Comments (0)

Voluntarist Left Anarchism Criticisms

Some people imagine a peaceful society, with no government, as an alternative to capitalism.

The idea of non-violent, voluntary anarchism presupposes capitalist premises. Voluntary communal sharing implies I may keep the products of my labor for myself. What I produce is, therefore, my property which no one may take. It’s mine. I can share it, keep it, or trade it. Those are the fundamental rules of laissez-faire capitalist society (aka classical liberal or minarchist society).

If what I produce goes straight to the community when I’d prefer not to share, then that isn’t voluntary.

But, you protest, you imagine a society full of sharing. It may have the same political rules as capitalism, but people will have different ideas and behave differently. The basic rules of capitalism are OK, but people need to be educated about the virtues of sharing, charity and community. Then they’ll make better choices and stop having jobs, bosses, wages, etc. All work will be volunteer work, everything important to someone’s life will be charitably given to them by someone, and we’ll all be happier and quite possibly richer too.

These proposals run into the standard problems of socialism and more.

What is the incentive to work hard, well, or at all? I can choose not to work and I’ll still be provided with plenty by the community. What is the incentive to do dirty jobs? Who will take out the garbage?

When you get rid of the profit motive, won’t people be wasteful, or economize less? How many logs should I use in my fire today? How much should I turn up the heat? How much do I have to want a book or anything else before I should have it? If I can just have as much as I want of whatever I want, I’ll get lots of things I only want a little bit instead of only getting the things which are most important to me. And I’ll ask for cleaning and cooking services to be shared with me instead of doing those things myself, or I’ll let my home become dirty and ask for a new home. I’m not responsible for shared property and the community will give me plenty more, right? Your answer is to rely on the new socialist man who makes altruistic sacrifices to benefit his comrades, right? That approach has never worked in the real world because it has theoretical flaws. Who should sacrifice how much for whom? How are any decisions made? How are disagreements resolved?

How will economic calculation be done? What quantities of what goods should be produced by what methods? How do people know if they’re producing efficiently without profit and loss to guide them? How do they know if a particular use of a good is economically efficient without a price of the good to tell them its value? And how should capital be allocated? What industries should expand? What new inventions should have how much effort put into inventing them?

How is anything organized? There are no more stores? If I want something, I just go around to my neighbors and ask for it until someone has one that they aren’t using? What happens when a factory produces 100,000 shoes? How do they get distributed around the country to the right 100,000 people? And how does the factory know how many of what size to make, or what colors to make, or what materials to use?

What’s the point anyway? The point of trade is to exchange some goods I value less for some goods I value more. What’s the point of shuffling all the wealth around via charity? Is it so some central planner can decide who gets what? If not, won’t it be chaotic?

And this utopia fails to consider scarce resources. There won’t be plenty of everything to go around. People want more wealth than exists and they always will. There’s always scope to have more and better goods and services.

What happens to the method of voluntary sharing when people have disagreements about the allocation of resources?

When people disagree, given the voluntary nature of society, won’t people keep what they have for themselves instead of sharing it? Won’t disagreements cause reversion to capitalist trade where I share for mutual benefit but don’t give my stuff away? If I think I’m giving away more than I get, I’ll prefer trade. And if anyone is receiving more than they share, then others must be sharing more than they receive. Why is it moral that they have less than they produced for themselves? What’s rational about that? And this way I’m self-reliant and can plan for my future instead of relying on the less predictable production and gifting of others.

So suppose I keep everything I produce. That’s the first thing I’d consider doing. I don’t see how charity is economically efficient to make society richer in general, nor do I see how essentially giving away a bunch of gifts, and receiving a bunch of gifts will do a better job of getting me the right goods and services than if I simply traded for what I want. So anyway, I keep all my property. I receive gifts from generous people and trade with the less generous people. Will the community do anything to stop this? Let’s consider both alternatives.

If the community does nothing to stop me, I simply have more at the expense of others. I have what I produce and what others give me. I expect this to quickly lead to a system of trade with little charity outside the family, similar to what we have today. And I don’t see anything wrong with that. What problem will less voluntary trade, and more voluntary gifts, solve? What will that make better for society in general? If the goal is just to help crippled people who can’t work, or something like that, you can ask for generosity for that specific purpose instead of trying to change the basic economic system for everyone.

If the community stops me, they’ll either use violence (violating the concept of voluntarist anarchism) or they’ll use non-violent methods. The non-violent methods would be e.g. people stop sharing food with me and refuse to trade with me. So essentially society is my boss and I have to please others by working hard enough, and sharing enough, or else they’ll starve me. This is much worse than a boss today because I can’t just switch jobs and get a new boss. And it’s like having many bosses at once – all of society – so there isn’t much consistency about what will please my boss. To prosper I’ll have to make friends in high places. I’ll have to please the leaders and influential people. In short, it’s a status society where I must do politics and social climbing instead of production and trade.


Elliot Temple | Permalink | Comments (2)

Binswanger Misquotes Popper

Objectivist philosopher Harry Binswanger hates Popper. He replied to a question about Popper on his HBL forum, 2011-03-22:

HB: Popper has fooled you. He's one of the most notorious skeptics and positivists in 20th Century philosophy. He is the author of the “falsifiability” doctrine, which is an updated version of the positivist Verification Principle. It holds that (vs. the older positivists) nothing can be verified, we can hold onto to the distinction between what's scientific and what's meaningless by reference to what can be falsified. Here are a couple of juicy quotes from Popper's main work, Objective Knowledge (the term “Objective” here means what Kant meant by it—collective subjectivism—not what we mean by it).

This [realist] doctrine founders in my opinion on the problems of induction and of universals. For we can utter no scientific statement that does not go far beyond what can be known with certainty 'on the basis of immediate experience'. (This fact may be referred to as the 'transcendence inherent in any description'.) Every description uses universal names (or symbols, or ideas); every statement has the character of a theory, of a hypothesis. The statement, 'Here is a glass of water' cannot be verified by any observational experience. The reason is that the universals which appear in it cannot be correlated with any specific sense-experience. (An 'immediate experience' is only once 'immediately given': it is unique.) . . . Universals cannot be reduced to classes of experiences . . . (pp. 94-95)

HB: So, all statements are subjective, because statements use concepts and concepts are not logically derivable from perception.

Popper wasn't a positivist. Popper refuted positivism. But anyway, let's focus on the quote.

That quote is not from Objective Knowledge. It's from a different book, The Logic of Scientific Discovery. It's also on different page numbers. Has Binswanger read any Popper or does he just copy/paste quotes from the internet that someone else mis-sourced?

Not reading Popper would explain why Binswanger doesn't know the context of the quote. He used square brackets to insert the word "realist" as a paraphrase. He presents Popper as attacking a realist doctrine. That's false.

The doctrine which Popper believes founders is not realism. Here is some preceding text describing the doctrine Popper opposes:

Science is merely an attempt to classify and describe this perceptual knowledge, these immediate experiences whose truth we cannot doubt;

Popper is criticizing a doctrine which treats percepts as infallible and never bothers with concepts because it believes that thinking cannot add anything useful to sense perception.

Popper's main point here is that strong forms of empiricism (like positivism) are mistaken. Our knowledge is not merely a collection of observations and their deductive consequences. Rather, men do more than perceive: they think. By using our minds we go beyond 'immediate experience'.

Popper's position here is actually in agreement with Objectivism. Objectivism, too, says we need to apply conceptual thinking to our observations.


Binswanger didn't care about his quoting error – both getting the book and content wrong – and didn't allow further discussion on his forum. His last word was:

Objectivism holds that perception is infallible and that all science is, ultimately, the unpacking of what's implicit in perception. There's no question that Popper is completely wrong and is the philosophical father of people like Feyerabend. On the latter, see the article, "The Anti-Philosophy of Science," by James G. Lennox (U. Pittsburgh), in The Objectivist Forum.

Popper criticized people who don't unpack perception. The lack of unpacking was part of what he was criticizing. The view he was criticizing believes, in Popper's words and italics, "[science] is the systematic presentation of our immediate convictions". What Binswanger says here is actually still compatible with Popper.

Binswanger's comment on infallible perception is misleading. Objectivism holds that error is an attribute of conceptual thought. People make errors. Rocks don't make errors, they just follow their nature or identity (or in other words, follow the laws of physics). Similarly, eyes are outside the human mind. Seen as tools, eyes are like microscopes or cameras. They just obey the laws of physics. If a person has blurry vision because he isn't wearing glasses, that isn't an error, that is what eyes with that physical form see in those circumstances. The error would be if the person thought reality was blurry when it's not. I don't recall Rand saying that herself, but I've talked with a bunch of Objectivists about it and that's my understanding of the matter. That view is reasonable, true IMO, compatible with CR, and not infallibilist.

Smearing Popper with Feyerabend's ideas is also unfair. The issue should be whether CR is true. Here's what Popper had to say:

As far as my former pupil Feyerabend is concerned, I cannot recall any writing of mine in which I took notice of any writing of his.

That's from p 1069 of The Philosophy of Karl Popper, Volume 2, edited by Paul Arthur Schilpp.


Elliot Temple | Permalink | Comments (2)

Edwin Locke vs. Popper

This is a repost from The Beginning of Infinity google group, 2013-01-06, by me.


Harry Binswanger wrote:
Date: Fri, Mar 25, 2011 at 10:14 PM

Objectivism holds that perception is infallible and that all science is, ultimately, the unpacking of what's implicit in perception. There's no question that Popper is completely wrong and is the philosophical father of people like Feyerabend. On the latter, see the article, "The Anti-Philosophy of Science," by James G. Lennox (U. Pittsburgh), in The Objectivist Forum.

I think this is important for how extremely wrong it is, coming from a respected Objectivist leader. How can he be this wrong and still be respected?

It also gives you some sense of how misguided (at least some) Objectivist criticism of Popper is.

I suspect few if any Objectivists have a clue on this issue. Otherwise wouldn't someone have corrected Binswanger? How could he maintain errors like this if many Objectivists understood this stuff?

Note that this blatantly and directly contradicts Ayn Rand, who was a fallibilist. And blaming Feyerabend on Popper is dumb and also a ridiculous way of attacking Popper's ideas (pretty much ad hominem on the wrong person... lol).

Stuff like this is why I haven't had much interest in thoroughly checking out more Objectivist epistemology papers. I don't expect them to be any good.

I also previously looked at:

The Case for Inductive Theory Building
Edwin A Locke

Which was terrible. Sample quotes:

The proper epistemological standard to use in judging scientific discoveries is not omniscient certainty but contextual certainty. One attains contextual certainty when there is an accumulation of great deal of positive evidence supporting a conclusion and no contradictory evidence (Peikoff, 1991, see ch. 5 ).

Ayn Rand was a fallibilist but most of her followers lust after certainty.

In sum, Popper (2003) rejected not only induction but everything that makes induction possible: reality (specifically, the ability to know it), causality and objective concept formation.

Popper rejected reality? Umm, no.

Popper’s (2003) replacement for induction was deduction.

No. Not even close.

I'm also confused because Popper (who died in 1994) did not publish anything in 2003.

Axioms are self-evident and cannot be contradicted without accepting them in the process (Peikoff, 1991). They are grasped inductively; they are implicit in one’s first perceptions of reality. They are both true and non-falsifiable,

More anti-fallibilism.

This paper cites a bunch of others but I'm not really interested in going through them. I don't see any reason to expect them to be better.

Perhaps this is revealing: the only Popper book in the bibliography is LScD. He ignores all of Popper's later work.


When I tried to debate Popper on HBL, more respectfully than this with more helpful explanations, Edwin Locke emailed me to say "I am not interested in your views." He gave no reason why. He didn't try to objectively win the debate. He didn't want to address criticisms relating to his views on Popper.

Locke's paper is full of additional flaws (it's pretty easy to find a bunch if you're familiar with Popper's writing) but Locke expressed disinterest in discussion, and his arguments were low quality, so I and others didn’t see a reason to write more.


Elliot Temple | Permalink | Comments (0)

Dumb, Dishonest Memers

Many people really like internet "memes", reaction gifs, emojis, funny photos, short videos, etc.

Themes here include: few words, usually not even one complete sentence, more seeing something and less reading.

Why do people like this?

They don't like to think. Reading is hard. Creating or understanding sentences is thinking. A sentence expresses a thought. Less than a sentence means not thinking through a whole thought enough for words.

It's very primitive. It's also a sort of reversion to oral culture from written culture. Not a complete reversion. Books aren't disappearing. It's just that most people don't like books. And maybe it's not a reversion since I guess most people never liked books. The majority who didn't like words didn't write down their opinions in books much. But now the internet has gone super mainstream, so they are communicating in public a lot. It's more that a problem is being revealed than that anything is getting worse.

I thought of a different reason. It's not just that thinking takes effort, that sentences are thoughts, and that people often don't consciously know what they mean or think, or why, and don't want to figure it out.

It's also that sentences are connected with honesty. Saying what you mean is clearer to both yourself and others. A dishonest person prefers to live with a mental fog that helps hide the dishonesty.

Ambiguity and leaving a lot unstated gives room for all sorts of dishonesty and bias. Formulating thoughts in sentences is part of facing reality. (Dishonesty is a rebellion against reality. It thrives among those who spend their time dealing with people instead of things. Dishonesty is encouraged in many ways in the social world but is discouraged in and by the natural world.)


Elliot Temple | Permalink | Comments (6)

Discussing Animal Intelligence

This post replies to pdxthehunted from Reddit (everything he said there is included in quotes below). There is also previous discussion before this exchange, see here. This post will somewhat stand on its own without reading the context, but not 100%. Topics include whether animals can suffer, the nature of intelligence, and the flaws of academia.

[While writing this response, the original post was removed. I think that’s unfortunate, but what’s done is done. I’d still love a quick response—just to see if I understand you correctly.]

Hi, Elliot. Thanks for your response. I want to say off the bat that I don’t think I’m equipped to debate the issue at hand with you past this point. (Mostly based off your sibling post; I’m not claiming you’re wrong, but just that I think I—finally—realize that I don’t understand where you’re coming from, entirely (or possibly at all). I’m willing to concede that—if you’re right about everything—you probably do need to have this conversation with programmers or physicists. If the general intelligence on display in the article I cited is categorically different from what you’re talking about when you talk about G.I. than I’m out of my depth.

Yes, what that article is studying is different and I don't think it should be called "general intelligence". General means general purpose, but the kind of "intelligence" in the article can't build a spaceship or write a philosophy treatise, so it's limited to only some cases. They are vague about this matter. They suggest they are studying general intelligence because their five learning tasks are "diverse". Being able to do 5 different learning tasks is a great sign if they are diverse enough, but I don't think they're diverse with respect to the set of all possible learning tasks, I think they're actually all pretty similar.

This is all more complicated because they think intelligence comes in degrees, so they maybe believe a mouse has the right type of intelligence to build a spaceship, just not enough of it. But their research is not about whether that premise (intelligence comes in degrees) is true, nor do they write philosophical arguments about it.

That being said, I’d love to continue the conversation for a little while, if you’re up for it, either here or possibly on your blog if that works better for you. I have some questions and would like to try and understand your perspective.

If I'm right about ~everything, that includes my views of the broad irrationality of academia and the negative value of current published research in many of the fields in question.

For example, David Deutsch's static meme idea, available in BoI, was rejected for academic publication ~20 years earlier. Academia gatekeeps to keep out ideas they don't want to hear, and they don't really debate what's true much in journals. It's like a highly moderated forum with biased moderators following unwritten and inconsistent rules (like reddit but stricter!).

My arguments re animals are largely Deutsch's. He taught me his worldview. The reason he doesn't write it up and publish it in a journal is because (he believes that) it either wouldn't be published or wouldn't be listened to (and it would alienate people who will listen to his physics papers). The same goes for many other important ideas he has. Being in the Royal Society, etc., is inadequate to effectively get past the academic gatekeeping (to get both published and seriously, productively engaged with). I don't think a PhD and 20 published papers would help either (especially not with issues involving many fields at once).

For what it’s worth, I think this is a fair criticism and concern, especially for someone—like you—who is trying to distill specific truths out of many fields at once. If your (and Deutsch’s) worldview conflicts with the prevailing academic worldview, I concede that publishing might be difficult or impossible and not the best use of your energy.

I asked for a solution but I'm happy with that response. I find it a very hard problem.

Sadly, Deutsch has given up on the problem to the point that he's focusing on physics (Constructor Theory) not philosophy now. Physics is one of the best academic fields to interact with, and one of the most productive and rational, while philosophy is one of the worst. Deutsch used to e.g. write about the implications of Critical Rationalism for parenting and education. The applications are pretty direct from philosophy of knowledge to how people learn, but the conclusions are extremely offensive to ~everyone because, basically, ~all parents and teachers are doing a bad job and destroying children's minds (which is one of the main underlying reasons why academia and many other intellectual things are broken). Very important issues but people shoot messengers... The messenger shooting is bad enough that Deutsch refused me permission to post archived copies of hundreds of things he wrote publicly online but which are no longer available at their original locations. A few years earlier he had said he would like the archives posted. He changed his mind because he became more pessimistic about people's reactions to ideas.

I, by contrast, am pursuing a different strategy of speaking truth to power without regard for offending people. I don't want to hold back, but I also don't have a very large fanbase because even if someone agrees with me about many issues, I have like two dozen different ideas that would alienate many people, so pretty much everyone can find something to hate.

I don't think people would, at that point, start considering and learning different ideas than what they already have, e.g. learning Critical Rationalism so they could apply that framework to animal rights to reach a conclusion like "If Critical Rationalism is true, then animal rights is wrong." (And CR is not the only controversial premise I use that people are broadly ignorant of, so it's harder than that.) People commonly dismiss others, despite many credentials, if they don't like the message. I don't think playing the game of authority and credentials – an irrational game – will solve the problem of people's disinterest in truth-seeking. This view of academia is, again, one Deutsch taught me.

Karl Popper published a ton but was largely ignored. Thomas Szasz too. There are many other examples. Even if I got published, I could easily be treated like e.g. Richard Lindzen who has published articles doubting some claims about global warming.

Fair enough.

I’m not going to respond to the rest of your posts line-by-line because I think most of what you’re saying is uncontroversial or is not relevant to the OP (it was relevant to my posts; thank you for the substantial, patient responses).

I think most people would deny most of it. I wasn’t expecting a lot of agreement. But OK, great.

For any bystanders who are interested and have made it this far, I think that this conversation between OP and Elliot is helpful in understanding their argument (at least it was for me).

Without the relevant CS or critical rationality background, I can attempt to restate their argument in a way that seems coherent (to me). Elliot or OP can correct me if I’m way off base.

The capacity for an organism to suffer may be binary; essentially, at a certain level of general intelligence, the capacity to suffer may turn on.

I don’t think there are levels of general intelligence, I think it’s present or not present. This is analogous to there not being levels of computers: it’s either a universal classical computer or it’s not a computer and can compute ~nothing. The jump from ~nothing to universality is discussed in BoI.

Otherwise, close enough.
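The jump to universality can be illustrated with a toy sketch (the instruction set and names here are my own invention for illustration, not from the post): a tiny register machine with a single conditional-jump instruction can loop, and so can express unbounded computation; take that one instruction away and every program is straight-line code whose behavior is fixed in advance, computing ~nothing.

```python
# Toy register machine. The presence or absence of one feature --
# conditional jump ("jnz") -- is the difference between a machine
# that can loop (and so do open-ended computation) and one that can
# only run a fixed, bounded sequence of steps.

def run(program, regs):
    """Interpret a list of instruction tuples over a dict of registers."""
    pc = 0
    while pc < len(program):
        op = program[pc]
        if op[0] == "inc":       # ("inc", r): add 1 to register r
            regs[op[1]] += 1
            pc += 1
        elif op[0] == "dec":     # ("dec", r): subtract 1 from register r
            regs[op[1]] -= 1
            pc += 1
        elif op[0] == "jnz":     # ("jnz", r, target): jump if regs[r] != 0
            pc = op[2] if regs[op[1]] != 0 else pc + 1
    return regs

# With "jnz" the machine can loop: move a into b, for any positive a.
add = [
    ("dec", "a"),
    ("inc", "b"),
    ("jnz", "a", 0),
]
print(run(add, {"a": 5, "b": 7})["b"])  # prints 12
```

Without "jnz", a program of n instructions always halts after at most n steps, so the set of functions it can compute is trivially bounded; adding that one instruction is a discontinuous jump, not a matter of degree, which is the analogy being drawn to general intelligence.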

(I imagine suffering to exist on a spectrum; a human’s suffering may be “worse” than a cow’s or a chicken’s because we have the ability to reflect on our suffering and amplify it by imagining better outcomes, but I’m not convinced that—if I experienced life from the perspective of a cow—that I wouldn’t recognize the negative hallmarks of suffering, and prefer it to end. My thinking is that a sow in a gestation crate could never articulate to herself “I’m uncomfortable and in pain; I wish I were comfortable and pain-free,” but that doesn’t preclude a conscious preference for circumstances to be otherwise, accompanied by suffering or its nonhuman analog.)

I think suffering comes in degrees if it’s present at all. Some injuries hurt more than others. Some bad news is more upsetting than other bad news.

Similarly, how smart people are comes in degrees when intelligence is present. They have the same basic capacity but vary in thinking quality due to having e.g. different ideas and different thinking methods (e.g. critical rationalist thinking is more effective than superstition).

Roughly there are three levels like this:

  1. Computer (brain)
  2. Intelligent Mind (roughly: an operating system (OS) for the computer with the feature that it allows creating and thinking about ideas)
  3. Ideas within the mind.

Each level requires the previous level.

Sand fails to match humans at level 1. No brain.

Apes fail to match humans at level 2. They run a different operating system with features more similar to Windows or Mac than to intelligence. It doesn’t have support for ideas.

Self-driving cars have brains (CPUs) which are adequately comparable to an ape or human, but like apes they differ from humans at level 2.

When Sue is cleverer than Joe, that’s a level 3 difference. She doesn’t have a better brain (level 1), nor a better operating system (level 2), she has better ideas. She has some knowledge he doesn’t. That includes not just knowledge of facts but also knowledge about rationality, about how to think effectively. E.g. she knows some stuff about how to avoid bias, how to find and correct errors effectively, how to learn from criticism instead of getting angry, or how to interpret disagreements as disagreements instead of as other things like heresy, bad faith, or “not listening”.

Small hardware differences between people are possible. Sue’s brain might be a 5% faster computer than Joe’s. But this difference is unimportant relative to the impact of culture, ideas, rationality, bias, education, etc. Similarly, small OS differences are possible but they wouldn’t matter much either.

There are some complications. E.g. imagine a society which extensively tested children on speed of doing addition problems in their head. They care a ton about this. The best performers get educated to be scientists and lower performers do unskilled labor. Someone with a slightly faster brain or slightly different OS might do better on those tests. Those tests limit the role of ideas. So, in this culture, a small hardware speed advantage could make a huge difference in life outcome including how clever the person is as an adult (due to huge educational differences which were caused by differences in arithmetic speed). But the same hardware difference could have totally different results in a different culture, and in a rational culture it wouldn’t matter much. What differentiates knowledge workers IRL, including scientists and philosophers, is absolutely nothing like the 99th percentile guys being able to get equal quality work done 5% faster than the 20th percentile guys.

Our actual culture has some stuff kinda like this hypothetical culture, but much more accidental and with less control over your life (there are many different paths to success, so even if a few get blocked, you don’t have to do unskilled labor). It also has similar kinda things based on non-mental attributes like skin color, height, hair color, etc, though again with considerably smaller consequences than the hypothetical where your whole fate is determined just by addition tests.

Back to my interpretation of the argument: Beneath a certain threshold of general intelligence, pain—or the experience of having any genetically preprogrammed preference frustrated—may not be interpreted as suffering in the way humans understand it and may not constitute suffering in any meaningful or morally relevant way (even if you otherwise think we have a moral obligation to prevent suffering where we can).

It’s possible that suffering requires uniquely human metacognition; without the ability to think about pain and preference frustration abstractly, animals might not suffer in any meaningful sense.

This is a reasonable approximation except that I think preferences are ideas and I don’t think animals have them at all (not even preprogrammed).

So far (I hope) all I’ve done is restate what’s already been claimed by Elliot in his original post. Whether I’ve helped make it any clearer is probably an open question. Hopefully, Elliot can correct me if I’ve misinterpreted anything or if I’ve dumbed it down to a level where it’s fundamentally different from the original argument.

This is where I think it gets tricky and where a lot of miscommunication and misunderstanding has been going on. Here is a snippet of the conversation I linked earlier:

curi: my position on animals is awkward to use in debates because it's over 80% background knowledge rather than topical stuff.

curi: that's part of why i wanted to question their position and ask for literature that i could respond to and criticize, rather than focusing on trying to lay out my position which would require e.g. explaining KP and DD which is hard and indirect.

curi: if they'll admit they have no literature which addresses even basic non-CR issues about computer stuff, i'd at that point be more interested in trying to explain CR to them.

I’m willing to accept that Elliot is here in good faith; nothing I’ve read on their blog thus far looks like an attempt to “own the soyboys” or “DESTROY vegan arguments.” They’re reading Singer (and Korsgaard) and are legitimately looking for literature that compares or contrasts nonhuman animals with AI.

The problem is—whether they’re right or not—it seems like the foundation of their argument requires a background in CR and theoretical computer science.

Yes.

My view: if you want to figure out what’s true, a lot of ideas are relevant. Gotta learn it yourself and/or find a way to outsource some of the work. So e.g. Singer needs to read Popper and Deutsch or contact some people competent to discuss whether CR is correct and its implications. And Singer also needs to contact some computer people and ask them and try to meet them in the middle by explaining some of what he does to them so they understand the problems he’s working on, and then they explain some CS principles to him and how they apply to his problems. Something like that.

That is not happening.

It ought to actually be easier than that. Instead of contacting people Singer or anyone else could look at the literature. What criticisms of CR have been written? What counter-arguments to those criticisms have CR advocates written? How did those discussions end? You can look at the literature and get a picture of the state of the debate and draw some conclusions from that.

I find people don’t do this much or well. It often falls apart in a specific way. Instead of evaluating the pro-CR and anti-CR arguments – seeing what answers what, what’s unanswered, etc. – they give up on understanding the issues and just decide to assume the correctness of whichever side has a significant lead in popularity and prestige.

The result is, whenever some bad ideas and irrational thinkers become prestigious in a field, it’s quite hard to fix because people outside the field largely refuse to examine the field and see if a minority view’s arguments are actually superior.

Also, often people just use common sense about what they assume would be true of other fields instead of consulting literature. So e.g. rather than reading actual inductivist literature (induction is mainstream and is one of the main things CR rejects), most animal researchers and others rely on what they’ve picked up about induction, here and there, just from being part of an intellectual subculture. Hence there exist e.g. academic papers studying animal intelligence that don’t cite even mainstream epistemology books or papers.

The current state of the CR vs. induction debate, in my considered and researched opinion, is there don’t actually exist criticisms of CR from anyone who has understood it, and there’s very little willingness to engage in debate by any inductivists. Inductivists are broadly uninterested in learning about a rival idea which they have not understood or refuted. I think ignoring ideas that no one has criticized is something of a maximum for a type of irrationality. And people outside the field (and in the field too) mostly assume that some inductivists somewhere did learn and criticize CR, though people usually don’t have links to specific criticisms, which is a problem. I think it’s important to have sources in other fields that aren’t your own so that if your sources are incorrect they can be criticized and corrected and you can change your mind, whereas if you just say “people in the field generally conclude X” without citing any particular arguments then it’s very hard to continue the discussion and correct you about X from there.

From my POV, (a) the argument that suffering may be binary vs. occurring on a spectrum is possible but far from settled and might be unfalsifiable. From my POV, it’s far more likely that animals do suffer in a way that is very different from human suffering but still ethically and categorically relevant.

That’s a reasonable place to start. What I can say is that if you investigate the details, I think they come out a particular way rather conclusively. (Actually, the nature of arguments, and what is conclusive vs. unsettled – how to evaluate and think about that – is a part of epistemology; it’s one of the issues I think mainstream epistemology is wrong about. That’s actually the issue where I made my largest personal contribution to CR.)

If you don’t want to investigate the details, has anyone else done so as your proxy or representative? Has Singer or any other person or group done that work for you? Who has investigated, reached a conclusion, written it up, and you’re happy with what they did? If no one has done that, that suggests something is broken with all the intellectuals on your side – there may be a lot of them, but between all of them they aren’t doing much relevant thinking.

In some ways, the more people believe something while no one writes detailed arguments or addresses rival ideas well, the more damning it is. In other words, CR has the excuse of not having essays covering every little detail of every mainstream view: there aren’t many of us to write all that and we have ~no funding. The other side has no such excuse, yet they’re the side that, between all those people, has no representatives who will debate! They have plenty of people to field some specialists in refuting CR, but they don’t have any.

Sadly, the same pattern repeats in other areas, e.g. The Failure of the 'New Economics’ by Henry Hazlitt is a point-by-point, book-length refutation of Keynes’ main book. It uses tons of quotes from Keynes, similar to how I’m replying to this comment using quotes from pdxthehunted. As far as I know, Hazlitt’s criticisms went unanswered. Note: I think Hazlitt’s level of fame/prestige was loosely comparable to Popper’s and greater than Deutsch’s; it’s not like he was ignored for being a nobody (which I’d object to too, but that isn’t what happened).

Large groups of people ignore critical arguments. What does it mean for intellectuals to rationally engage with critics and how can we get people to actually do that? I think it’s one of the world’s larger problems.

new_grass made a few posts that more eloquently describe that perspective; humans, yelping dogs, and so on evolved from a common ancestor and it seems unlikely that suffering is a uniquely human feature when so many of our other cognitive skills seem to be continuous with other animals.

New_grass says:

link

But this isn't the relevant proposition, unless you think the probability that general intelligence (however you are defining it) is required for the ability to suffer or be conscious is one. And that is absurd, given our current meager understanding of consciousness.

The relevant question is what the probability is that other animals are conscious, or, if you are a welfarist, whether they can suffer. And that probability is way higher than zero, for the naturalistic reasons I have cited.

But according to Elliot, our judgment of the conservatism argument hinges on our understanding of CR and Turing computability.

Does the following sound fair?

Yeah, I have arguments here covering other cases (the cases of the main issue being suffering or consciousness rather than intelligence) and linking the other cases to the intelligence issue. I think it’s linked.

*If pdxthehunted had an adequate understanding of the Turing principle and CR and their implications on intelligence and suffering, their opinion on (a) would change; they would understand why suffering certainly does occur as a binary off/on feature of sufficiently intelligent life.*

In short, yes. Might have to add a few more pieces of background knowledge.

Please let me know if I’ve managed to at least get a clearer view of the state of the debate and where communication issues are popping up.

Frankly, I’ve enjoyed this thread. I’ve learned a lot. I bought DD’s BOI a couple of years ago after listening to his two podcasts with Sam Harris, but never got around to reading it. I’ve bumped it up to next on my reading list and am hoping that I’m in a better position to understand your argument afterward.

Yeah, comprehensive understanding of DD’s two books covers most of the main issues. That’s hard though. I run the forums where people reading those books (or Popper) can ask questions (it’s this website and an email group with a 25 year history, where DD used to write thousands of posts, but he doesn’t post anymore).

Finally--if capacity for suffering hinges on general intelligence, is consciousness relevant to the argument at all?

To a significant extent, I leave claims about consciousness out of my arguments. I think consciousness is relevant but isn’t necessary to say much about to reach a conclusion. I do have to make some claims about consciousness, which some people find pretty easy to accept, but others do deny. These claims include:

  1. Dualism is false.
  2. People don’t have souls and there’s no magic involved with minds.
  3. Consciousness is an emergent property of some computations.
  4. Computation is a purely physical process that is part of physics and obeys the laws of physics. Computers are regular matter like rocks.
  5. Computation takes information as input and outputs information. Information is a physical quantity. It’s part of the physical world.
  6. Some additional details about computation, along similar lines, to further rule out views of consciousness that are incompatible with my position. Like I don’t think consciousness can be a property of particular hardware (like organic molecules – molecules with carbon instead of silicon) because of the hardware independence of computation.
  7. I believe that consciousness is an emergent property of (general) intelligence. That claim makes things more convenient, but I don’t think it’s necessary. It’s a stronger claim than necessary. But it’s hard to explain or discuss a weaker and adequate claim. There aren’t currently any known alternative claims which make sense given my other premises including CR.

One more thing. The “general intelligence” terminology comes from the AI field which calls a Roomba’s algorithms AI and then differentiates human-type intelligence from that by calling it AGI. The concept is that a Roomba is intelligent regarding a few specific tasks while a human is able to think intelligently about anything. I’d prefer to say humans are intelligent and a Roomba or mouse is not intelligent. This corresponds to how I don’t call my text editor intelligent even though, e.g., it “intelligently” renumbered the items in the above list when I moved dualism to the top. In my view, there’s quite a stark contrast between humans – which can learn, can have ideas, can think about ideas, etc. – and everything else which can’t do that at all and has nothing worthy of the name “intelligence”. The starkness of this contrast helps explain why I reach a conclusion rather than wanting to err on the side of caution re animal welfare. A different and more CR-oriented explanation of the difference is that all knowledge creation functions via evolution (not induction) and only humans have the (software) capacity to do evolution of ideas within their brains. (Evolution = replication with variation and selection.)
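That closing parenthetical definition (evolution = replication with variation and selection) can be illustrated with a toy loop. This is only a sketch of the bare algorithm; the target string, mutation rate, and population size are arbitrary illustrative choices, and nothing here is meant as a model of brains or of how ideas evolve in them:

```python
import random

def evolve(target="knowledge", pop_size=50, mutation_rate=0.1, seed=0):
    """Toy evolution: replication with variation (random character
    mutations) and selection (keep the candidate closest to the target)."""
    rng = random.Random(seed)
    alphabet = "abcdefghijklmnopqrstuvwxyz"
    score = lambda s: sum(a == b for a, b in zip(s, target))
    # Start with a population of random strings the same length as the target.
    pop = ["".join(rng.choice(alphabet) for _ in target) for _ in range(pop_size)]
    for generation in range(10_000):
        best = max(pop, key=score)  # selection
        if best == target:
            return generation
        # Replication with variation: copy the winner, mutating each
        # character with some probability (keep one unmutated copy).
        pop = [best] + [
            "".join(rng.choice(alphabet) if rng.random() < mutation_rate else c
                    for c in best)
            for _ in range(pop_size - 1)
        ]
    return None  # didn't converge within the generation budget
```

Each generation replicates the current best candidate with variation and selects among the results; knowledge of the target accumulates through that repeated cycle rather than being derived from data by induction.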

That’s just the current situation. I do think we can program an AGI which will be just like us, a full person. And yes I do care about AGI welfare and think AGIs should have full rights, freedoms, citizenship, etc. (I’m also, similarly, a big advocate of children’s rights/welfare and I think there’s something wrong with many animal rights/welfare advocates in general that they are more concerned about animal suffering than the suffering of human children. This is something I learned from DD.) I think it’s appalling that in the name of safety (maybe AGIs will want to turn us into paperclips for some reason, and will be able to kill us all due to being super-intelligent) many AGI researchers advocate working on “friendly AI” which is an attempt to design an AGI with built-in mind control so that, essentially, it’s our slave and is incapable of disagreeing with us. I also think these efforts are bound to fail on technical grounds – AGI researchers don’t understand BoI either, neither its implications for mind control (which is an attempt to take a universal system and limit it with no workarounds, which is basically a lost cause unless you’re willing to lose virtually all functionality) nor its implications for super intelligent AGIs (they’ll just be universal knowledge creators like us, and if you give one a CPU that is 1000x as powerful as a human brain then that’ll be very roughly as good as having 1000 people work on something which is the same compute power.). This, btw, speaks to the importance of some interdisciplinary knowledge. If they understood classical liberalism better, that would help them recognize slavery and refrain from advocating it.


Elliot Temple | Permalink | Comments (26)

Intelligence Isn't Speed

I explained that intelligence isn't a matter of computing hardware speed.

Sounds like the IQ vs Universality thing is just two camps talking past each other.

Suppose we do believe in the basic premise of universality, that all computers are equally "powerful" in a specific way, namely that there's no problem a sophisticated computer can solve that a simple computer cannot, provided we just give the simple computer a long enough time frame to solve it in.

Fair enough. But surely we're also interested in how fast the computer can solve the problems. That's not a trivial factor, especially when we consider that human computers are prone to getting bored, frustrated, confused, or forgetful.

So maybe when we talk about IQ we're not talking about computational power, but maybe something like computational speed. Or, more likely, computational speed combined with some other personality traits.

I think computational universality helps change the primary point of interest (re intelligence) to software that is created and modified after birth. You think maybe it makes hardware speed the key place to look re intelligence. FYI, your view is something I've already considered and taken into account.

You also think some other (genetic) personality traits may be important to intelligence. I don't think so partly because of a different type of universality: universal intelligence (or universal learning, universal knowledge creating, universal problem solving, same things). Universalities are discussed in The Beginning of Infinity by David Deutsch. It's important, in these discussions, to keep the two types of universalities separate (universal computer; universal learning/thinking software). I won't go into this point further right now. I'm going to talk about the hardware speed issue.

Suppose my brain is 100% faster than yours (which sounds like an unrealistically high difference). You will still outperform me, by far, if you use a better algorithm than I do. E.g. if you use an O(N) algorithm to think about something while I'm using O(N^2).

That's Big O notation, which basically means how many steps it takes to complete the algorithm. N is the number of data points. In this example, you need time proportional to the amount of data. I need time proportional to the square of the amount of data. So for decent sized data sets, you win even if my hardware is twice as fast. E.g. with 10 data points, you win by a factor of 5. Taking 2 seconds per step, you need 10 * 2 = 20 seconds. I, doing steps in 1 second, need 10^2 = 100 seconds. How does it scale? With 100 data points, you need 200 seconds and I need 100^2 = 10,000 seconds. Now you won by a factor of 50. That factor will go up if there's more data. And the world has a lot of data.

Exponential differences in Big O complexity between algorithms are common and routinely make a huge difference in processing time – far more than CPU speed. In software we write, lots of work goes into using algorithms that are only sub-optimal by a linear or constant amount.
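The arithmetic above can be sketched in a few lines of Python. The step times and data sizes are the hypothetical numbers from the example (2 seconds per step for the good algorithm on slow hardware, 1 second per step for the bad algorithm on hardware twice as fast), not measurements of anything:

```python
def linear_time(n, seconds_per_step=2):
    # O(N): steps proportional to the amount of data (slower hardware)
    return n * seconds_per_step

def quadratic_time(n, seconds_per_step=1):
    # O(N^2): steps proportional to the square of the data (faster hardware)
    return n ** 2 * seconds_per_step

for n in (10, 100, 1000):
    good_algo = linear_time(n)      # good algorithm, slow brain
    bad_algo = quadratic_time(n)    # bad algorithm, fast brain
    print(f"N={n}: {good_algo}s vs {bad_algo}s "
          f"(factor {bad_algo // good_algo})")
```

The printed factors (5, 50, 500) show the algorithm's advantage growing with the data size, swamping the 2x hardware handicap.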

If people think at different speeds, you should probably blame their thinking method (software) rather than their hardware for well over 99% of the difference. Especially because hardware variation between humans is pretty small.

But most differences in intelligence are not speed differences anyway. For example, often one human solves a problem and another doesn't solve it at all. The second guy doesn't solve it slower, he fails. He gets stuck and gives up, or won't even begin because he knows he doesn't understand how to do it. This is partly because of what knowledge people have or lack (learned information that wasn't inborn), and partly because of thinking methods (e.g. algorithms which could be fast or exponentially slow depending on how well they're designed). With bad algorithms, the time to finish can be a million years while a good algorithm can do the same task in minutes on a slower CPU.

There are other crucial non-hardware issues too, e.g. error correction. If you make a thinking mistake, can you recover from that, identify that something has gone wrong, find the problem, and fix it? Some ways of thinking can accomplish that pretty reliably for a wide variety of errors. But some ways of thinking are quite fragile to error. This leads to wildly different thinking results that aren't due to hardware speed.

I'll close with an explanation of these issues from David Deutsch, from my interview with him:

David: As to innate intelligence: I don't think that can possibly exist because of the universality of computation. Basically, intelligence or any kind of measure of quality of thinking is a measure of quality of software, not hardware. People might say, "Well, what hardware you have might affect how well your software can address problems." But because of universality, that isn't so: we know that hardware can at most affect the speed of computation. The thing that people call intelligence in everyday life — like the ability of some people like Einstein or Feynman to see their way through to a solution to a problem while other people can't — simply doesn't take the form that the person you regard as 'unintelligent' would take a year to do something that Einstein could do in a week; it's not a matter of speed. What we really mean is the person can't understand at all what Einstein can understand. And that cannot be a matter of (inborn) hardware, it is a matter of (learned) software.


Elliot Temple | Permalink | Comment (1)

Error Identifying Superpower

Suppose, hypothetically, that you had a superpower. It's the ability to find errors in academic papers (it works on books, blog posts, tweets, etc., too).

You can do it for most fields.

So I pick a field, e.g. animal intelligence, and you say "sure I can do that field", and I select 10 papers in the field. Then you find five errors in every paper. And not typos, significant errors. Mostly conceptual errors.

What would you do with your superpower? How could it be used to accomplish much of anything?

It may sound kind of amazing. But I claim it'd be hard to get much value out of it. People, broadly, don't want to hear about errors. And they will say you don't have any credentials and assume you're wrong without listening even though, hypothetically, you're right about 100% of the errors you point out.

Comment below with your plan to use this superpower.


Elliot Temple | Permalink | Comments (3)

Academia's Inadequacy

TheCriticalRat posted my article Animal Rights Issues Regarding Software and AGI on the Debate A Vegan SubReddit.

This post shares a discussion highlight where I wrote something I consider interesting and important. The main issue is about the inadequacy of academia.

The two large blockquotes are messages from pdxthehunted and the non-quotes are me. I made judgment calls about newline spacing for reproducing pdxthehunted's messages and I changed the format for the two footnotes.

This came up a few weeks ago, when u/curi was posing questions on this subreddit. I looked through some of Elliot's work then and did so again just now. I'm not accusing them of being here in bad faith--they seem like they are legitimately interested in thinking about this topic and are asking interesting questions/making interesting claims.

That being said, they also seem to have little or no formal education in philosophy of mind or AGI. All of their links to who they are circle back to their own commercial website/blog, where they sell their services as a rationalist philosopher/consultant. It appears that they are (mostly) self-taught. Their (supposed)[1] connection to David Deutsch is why I bothered even to look further.

I don't think you need to have a degree to understand even advanced tenets in philosophy of mind or artificial intelligence. The problem here is that Elliot seems to have written an enormous amount--possibly thousands of pages--but has never been published in any peer-reviewed journal (at least none that I have access to through my community college) and so their credibility is questionable. Judging from their previous interactions on this sub, Elliot seems to have created their own curriculum and field of expertise.

I was impressed by the scope and seriousness of their work (the little I took the time to read). Still, it's very problematic for debate: they seem to be looking for someone who has the exact same intellectual background as they do--but without any kind of standardization, it's very hard to know what that is without investing possibly hundreds of hours into reading his corpus. This is the benefit of academic credentials; we can engage with someone under the assumption that they know what they're talking about. Most of Elliot's citations and links are to their own blog--not to any peer-reviewed, actual science. I suspect that's why they've left the caveat that "blog posts are okay."

A very quick browse through Academic Search Premier found over 100 published peer-reviewed journal articles on nonhuman animals and general intelligence. I browsed the abstracts of the first three, all of which discuss general intelligence in nonhuman animals. General intelligence is hard to define--especially in a way that doesn't immediately bias it in favor of humans--but even looking at the usual suspects in cognition demonstrate that many animals possess it unless we move the goalposts to human-specific achievements like writing symphonies or building spacecraft (which of course leaves the vast majority of all humans who've ever existed in the cold).

In short--not to be rude or dismissive--but the reason that animal rights activists aren't concerned about the "algorithms" that animals have that "give them the capacity to suffer" (forgive me if I'm misquoting) is that it is a non-issue. No serious biologists doubt that nonhuman animals (at least mammals and birds) can have preferences for or against different mental states and that those preferences can be frustrated or thwarted. Pain and suffering are fitness-selecting traits that allowed animals to avoid danger and seek nourishment and mates. I'm not an expert in any of your claimed domains; that being said, to believe that consciousness and the capacity to suffer evolved only in one species of primate demonstrates a shockingly naive understanding of evolution, philosophy of mind, cognitive science/neuroscience, and biology.

Similar questions can be asked about general intelligence. My answer to that is we don’t entirely know. We haven’t yet written an AGI. So what should we think in the meantime? We can look at whether all animal behavior is consistent with non-AGI, non-conscious, non-suffering robots with the same sorts of features and design as present day software and robots that we have created and do understand. Is there any evidence to differentiate an animal from non-AGI software? I’m not aware of any, although I’ve had many people point me to examples of animal behavior that are blatantly compatible with non-AGI programming algorithms.

There is no "scoop" here. There are a few serious philosophers I've read--Daniel Dennett, for instance--who I think make similar arguments as you're making here, which we can call the "animals as automata" meme. The very fact that you believe that cows show no more intelligence than a self-driving car makes me feel very suspicious that you don't know what you're talking about. Nick Bostrum basically states in his AI opus Superintelligence that if humans managed to emulate a rodent mind, we would have mostly solved human-level AGI.

To claim that there are "no examples" of an animal doing something that a non-AGI robot couldn't[2] do discredits your entire thesis--you're either woefully misinformed, or disingenuous. Again, I'm very impressed by your (Elliot's) obvious dedication to learning and thinking. Still, I don't think this argument is even to the point where it's refined enough to take seriously. There's so much wrong with it that betrays not just a lack of competence in adjacent disciplines but also an arrogance around the author's imagined brilliance that it feels awkward and unrewarding to engage with.

EDIT 12/2: [1] Connection to Deutsch--though not necessarily relevant to this argument--is not overstated.

[2] Changed would to couldn't

Suppose I'm right about ~everything. What should I do that would fix these problems?

Thanks for the response. Also, I checked the Beginning of Infinity and saw that you don't seem to be exaggerating your claim (obviously you know this--I'm mentioning it for any skeptics). Elliot Temple is not only listed in the acknowledgments of BOI, but they are given special thanks from the author. That's very cool, regardless of anything else. Congratulations. I'm hesitant to do too much cognitive work for you on how to fix your problems--it sounds like you're used to charging people a fair amount of money to do the same. Still, I engaged with you here, so I'll let you know what I think.

Read More

You need to become better read in adjacent fields--cognitive neuroscience, ethology, evolutionary biology, ethics--these are just a few that come up off the top of my head. If you're right about more or less everything, peer-reviewed research done by actual scientists in most of these fields should agree with your thesis. If it doesn't, make a weaker claim.

Publish

Right now, your argument is formatted as a blog post. Anyone with access to a computer is technically capable of self-publishing thousands of pages of their thoughts. Write an article and submit it to an academic journal for peer review. Any publication that survives the peer-review process will give you more credibility. I'm not saying that's fair, but it is a useful heuristic for nonexperts to decide whether or not you are worth their time. An alternative would be to see your blog posts cited in books by experts (for instance, Eliezer Yudkowsky has no formal secondary education, but his ideas are good enough that he is credited by other experts in his field).

Empiricism/Falsifiability

As it currently stands, you're essentially making a claim and insisting that others disprove it. This, of course, is acceptable as a Reddit discussion or a blog post--but is not suitable for uncovering the truth. I can insist that my pet rock has a subjective experience and refuse to believe otherwise unless someone can prove it to me, but I won't be taken seriously (nor should I be). Could you design an experiment that tests a falsifiable claim about nonhuman animal general intelligence? (Or, alternatively, find one that has already been published demonstrating that only humans possess it?) What would it look like?

What computations, what information processing, what inputs or outputs to what algorithms, what physical states of computer systems like brains indicates or is consciousness? I have the same question for suffering too.

We don't know the answer to these questions. Staking your thesis on possible answers to open questions might be a way to stalemate internet debates, but won't deepen your or anyone else's understanding.

Gatekeeping

You're widely read and the depth of your knowledge/understanding in some areas is significant. You need to recognize that some people will have different foundations than yours--they might be very well-read on evolutionary biology--but have less of an understanding of Turing computability. Instead of rudely dismissing arguments that are outside of the disciplines you're most comfortable with, try to meet these people on their level. What do they have to teach you? What thinkers can they expose you to? Your self-curated curriculum is impressive but uneven and far from comprehensive. Try a little humility. Assuming you're right about everything, you should be able to communicate it to experts outside of your field.

Closing

I think that advice is good whether or not you're correct; if you are, people far more intelligent than I should start to recognize it. If you aren't, you might be able to clarify where you went wrong and either abandon your claim or reformulate it to make a weaker--but possibly true--version.

Lastly, I encourage anyone observing from the sidelines to use Google Scholar or similar if you have an interest in animal general intelligence. I linked an article above; here it is again. The article references 60 others and has been cited in 14. This does not mean that the authors' findings are replicable or ironclad, but again--it is a useful heuristic in deciding what kind of probability we want to assign to the likelihood it is on the right track, especially when the alternative is trying to read through hundreds of pages of random blog posts so that we can meet an interlocutor on their level.

To find that article, I searched for "general intelligence in animals" using Academic Search Premier. Pubmed and Google Scholar might find similar results. I filtered out all articles that were not subject to peer review or were published before 2012. It was the 4th search result out of over 50 published in the last seven years. Science may never be finished or solvable, but nonhuman animal's capacity to learn, have intentional states, preferences, and experience pain are not really still open questions in relevant disciplines.

If I'm right about ~everything, that includes my views of the broad irrationality of academia and the negative value of current published research in many of the fields in question.

For example, David Deutsch's static meme idea, available in BoI, was rejected for academic publication ~20 years earlier. Academia gatekeeps to keep out ideas they don't want to hear, and they don't really debate what's true much in journals. It's like a highly moderated forum with biased moderators following unwritten and inconsistent rules (like reddit but stricter!).

My arguments re animals are largely Deutsch’s. He taught me his worldview. The reason he doesn’t write it up and publish it in a journal is because (he believes that) it either wouldn’t be published or wouldn’t be listened to (and it would alienate people who will listen to his physics papers). The same goes for many other important ideas he has. Being in the Royal Society, etc., is inadequate to effectively get past the academic gatekeeping (to get both published and seriously, productively engaged with). I don’t think a PhD and 20 published papers would help either (especially not with issues involving many fields at once). I don’t think people would, at that point, start considering and learning different ideas than what they already have, e.g. learning Critical Rationalism so they could apply that framework to animal rights to reach a conclusion like “If Critical Rationalism is true, then animal rights is wrong.” (And CR is not the only controversial premise I use that people are broadly ignorant of, so it’s harder than that.) People commonly dismiss others, despite many credentials, if they don’t like the message. I don’t think playing the game of authority and credentials – an irrational game – will solve the problem of people’s disinterest in truth-seeking. This view of academia is, again, something Deutsch taught me.

Karl Popper published a ton but was largely ignored. Thomas Szasz too. There are many other examples. Even if I got published, I could easily be treated like e.g. Richard Lindzen who has published articles doubting some claims about global warming.

Instead of rudely dismissing arguments that are outside of the disciplines you're most comfortable with, try to meet these people on their level.

If I'm right about ~everything (premise), that includes that I'm right about my understanding of evolutionary biology, which is an area I've studied a lot (as has Deutsch). That's not outside my comfort zone.

I think that advice is good whether or not you're correct; if you are, people far more intelligent than I should start to recognize it.

We disagree about the current state of the world. How many smart people exist, how many competent people exist in what fields, how reasonable are intellectuals, what sort of things do they do, etc. You mention Eliezer Yudkowsky, who, FYI, agrees with me about something like this particular issue, e.g. he denies "civilizational adequacy" and says the world is on fire in Hero Licensing. OTOH, he's also the same guy who took moderator action to suppress discussion of Critical Rationalism on his site because – according to him – it was downvoted a lot (factually there were lots of downvotes, but I mean he actually said that was his reason for taking moderator action – so basically just suppressing unpopular ideas on the basis that they are unpopular). He has publicly claimed Critical Rationalism is crap but has never written anything substantive about that and won't debate, answer counter-arguments, or endorse any criticism of Critical Rationalism written by someone else (and I'm pretty confident there is no public evidence that he knows much about CR).

The reason I asked about how to fix this is I think your side of the debate, including academic institutions and their alleged adequacy, is blocking error correction. It doesn't allow any reasonable or realistic way that, if I'm right, it gets fixed. FYI I've written about the general topic of how intellectuals are closed to ideas and what rational methods of truth seeking look like, e.g. Paths Forward. The basic theme of that article is about doing intellectual activities in such a way that, if you're wrong, and someone knows you're wrong, and they're willing to tell you, you don't prevent them from correcting you. Currently ~everyone is doing that wrong. (Of course there are difficulties like how to do this in a time-efficient manner, which I go into. It's not an easy problem to solve but I think it is solvable.)

Lastly, I encourage anyone observing from the sidelines to use Google Scholar or similar if you have an interest in animal general intelligence. I linked an article above; here it is again.

PS, FYI it's readily apparent from the first sentence of the abstract of that article that it's based on an intellectual framework which contradicts the one in The Beginning of Infinity. It views intelligence in a different way than we do, which must be partly due to some epistemology ideas which are not stated or cited in the paper. And it doesn't contain the string "compu" so it isn't engaging with our framework re computation either (instead it's apparently making unstated, uncited background assumptions again, which I fear may not even be thought through).

I guess you'll think that, in that case, I should debate epistemologists, not animal rights advocates. Approach one of the biggest points of disagreement more directly. I don't object to that. I do focus a lot on epistemology and issues closer to it. The animal welfare thing is a side project. But the situation in academic epistemology has the same problems I talked about in my sibling post and is, overall, IMO, worse. Also, even if I convinced many epistemologists, that might not help much, considering lots of what I was saying about computation is already a standard (sorta, see quote) view among experts. Deutsch actually complains about that last issue in The Fabric of Reality (bold text emphasized by me):

The Turing principle, for instance, has hardly ever been seriously doubted as a pragmatic truth, at least in its weak forms (for example, that a universal computer could render any physically possible environment). Roger Penrose's criticisms are a rare exception, for he understands that contradicting the Turing principle involves contemplating radically new theories in both physics and epistemology, and some interesting new assumptions about biology too. Neither Penrose nor anyone else has yet actually proposed any viable rival to the Turing principle, so it remains the prevailing fundamental theory of computation. Yet the proposition that artificial intelligence is possible in principle, which follows by simple logic from this prevailing theory, is by no means taken for granted. (An artificial intelligence is a computer program that possesses properties of the human mind including intelligence, consciousness, free will and emotions, but runs on hardware other than the human brain.) The possibility of artificial intelligence is bitterly contested by eminent philosophers (including, alas, Popper), scientists and mathematicians, and by at least one prominent computer scientist. But few of these opponents seem to understand that they are contradicting the acknowledged fundamental principle of a fundamental discipline. They contemplate no alternative foundations for the discipline, as Penrose does. It is as if they were denying the possibility that we could travel to Mars, without noticing that our best theories of engineering and physics say that we can. Thus they violate a basic tenet of rationality — that good explanations are not to be discarded lightly.

But it is not only the opponents of artificial intelligence who have failed to incorporate the Turing principle into their paradigm. Very few others have done so either. The fact that four decades passed after the principle was proposed before anyone investigated its implications for physics, and a further decade passed before quantum computation was discovered, bears witness to this. People were accepting and using the principle pragmatically within computer science, but it was not integrated with their overall world-view.

I think we live in a world where you can be as famous as Turing, have ~everyone agree you're right, and still have many implications of your main idea substantively ignored for decades (or forever. Applying Turing to physics is a better result than has happened with many other ideas, and Turing still isn't being applied to AI adequately). As Yudkowsky says, it's not an adequate world.


Update: Read more of this discussion at Discussing Animal Intelligence


Elliot Temple | Permalink | Comments (9)

Division of Labor and Experts: Generally Great but Sometimes Overrated

Our society has had great success due to the division of labor. People economically specialize. We have farmers, lawyers, barbers, bakers, security guards, inventors, novelists, architects, engineers, programmers, managers, marketers, etc.

Division of labor is far more efficient than everyone living independently and doing a little of each job. It lets people focus a large portion of their time, attention and learning on one area. As a result, they get better at it.

Trade is what makes division of labor work. I don't farm or bake but I can trade for corn and bread. Trade is how I benefit from other people doing something.

This system plays a big role in our lives. We're all familiar with it even if we don't really think about it or study economics. Besides providing material prosperity, it has led to certain psychological attitudes.

I don't know how most things work and I get them from specialists. I don't know how to repair a car so I rely on a mechanic. I don't know how to write a good poem so I get poems from poets. I don't know how to paint so I get my paintings from people who do. I do know how to create software, but I still get most of my software from other people who specialize in that particular type of software.

People have developed an attitude of not knowing how most things work and not needing to. Someone else will do it better than I would, anyway. For almost everything, there is a specialist who's better at it than me. Anything I do outside my career is just a hobby for fun.

This attitude is partly reasonable but partly dangerous. People can overestimate experts.

There don't actually exist good specialists for every specialty. Some areas have too few people working in them, e.g. life extension, AGI or epistemology. In some areas, tons of experts are wrong, e.g. Keynesian economists and Kantian philosophers.

People overestimate medicine's ability to fix their problems. Many surgeries and medications are cruder and less effective than people think. It's good that they exist. They're good options to have. But they aren't just safe, perfectly effective and wonderful. They're risky and doctors downplay the risks and side effects. Doctors can't fix everything and lots of the fixes have a meaningful chance of breaking something.

Worse are experts for mental, not physical, problems. Sad? Go talk to a "professional" to get help for your "depression". Marriage problems? There's an expert for that. Kid doesn't listen in school? There are experts for that. But these people don't know much about ideas. They are neither philosophers nor scientists. They can give some basic self-help advice and they can use social pressure to manipulate people. The whole field isn't merely largely ineffective, it's dangerous with its brain-disabling drugs, its imprisonment without trial ("involuntary commitment"), and how it misdirects people away from solving their own problems with self-improvement, studying better ideas, and other productive activities.

Experts encourage people to be irresponsible. Don't worry about it, the expert is responsible for getting a good outcome. But people are often disappointed by the outcome the expert provides. It's your life. You have to live with the outcome. You need to judge which experts are effective enough and when you need to take matters into your own hands.

Many types of experts are fine. People who produce material goods for sale are broadly OK. People who provide relatively simple or easily evaluated services are broadly OK. The longer term the issue is, and the more ongoing interaction with the specialist is needed, the more careful you should be.

The most dangerous experts that people consult directly are "mental health" experts. That whole industry is poison.

The capabilities of physical doctors are overestimated but they're basically on your side, try to help, and mostly make things better. Mental doctors make a lot of things worse. Many people regret interacting with them, and others are brainwashed/indoctrinated/pressured to the point they have trouble thinking critically about it and forming an independent opinion of their psychology or psychiatry experiences.

The most dangerous experts that people deal with indirectly are philosophers. Most people don't read philosophy books but they pick up ideas, here and there, about learning, knowledge, critical thinking, reason, morality, political principles, the metaphysical nature of reality, etc. Many of those ideas are badly wrong. They lead to people being irrational, unreasonable, bad at learning, biased, etc., which makes things worse throughout their lives. Economists also spread a ton of really bad ideas to people indirectly.


Elliot Temple | Permalink | Comments (9)

Childing a Child

Many people think "gendering" a child might be bad.

That means: Teaching the boy role to a boy might be bad. Teaching the girl role to a girl might be bad.

No one considers that teaching the child role to a child might be bad.

I think the child role is more impactful (for good or bad) than a gender role. There's a much larger difference between young children (e.g. a 5 year old) and adults than between men and women. I'd estimate that under 20% of the adult/child difference is due to the child's learned social role, but the social role is still a big factor.

The child social role has a lasting impact after childhood. People don't just forget being children. And people transition in stages where they e.g. put effort into differentiating themselves from a child. When reacting to a former role is a significant part of one's life, that role is still impactful.

Adults put substantial ongoing effort into avoiding being childlike. It prevents them from doing much learning. They don't want to be beginners or learners because that's for children. It also prevents adults from having some types of fun. Curiosity is another childish trait that adults suppress a lot.

Most people are mostly respectful of both genders. They don't have a significant grudge against either gender. By contrast, people are frequently hateful of the child role. They dislike and mistreat children routinely. Teaching your offspring a social role that you don't respect, then disrespecting them for years, is a nasty system. People think children are stupid and use the fact that they do child social role behaviors – while adults don't – as proof.


Elliot Temple | Permalink | Comments (2)

Research and Discussions About Animal Rights and Welfare

I made Discussion Tree: State of Animal Rights Debate. My tree diagram summarizes pro-animal-rights arguments from Peter Singer and asks some questions about major issues he didn’t cover. It reveals that his arguments were incomplete. The incompleteness I’ve focused on is that they don’t address issues related to computers and software. Maybe animals are like self-driving cars with some extra features, not like humans. Self-driving cars aren’t intelligent, conscious or capable of suffering. Singer doesn’t try to address that issue.

I did additional research to find arguments to add to my discussion tree. I found no answers to basic computer science questions from the animal welfare advocates.

I posted to five pro animal rights forums asking for links to written material (like books, articles, or blog posts) making arguments that Singer didn’t make, so I could read about why they’re right. I received no relevant responses and almost zero interest.

Later, I and others posted to eleven more places. Although this resulted in a bunch of discussion, I was not referred to a single piece of relevant literature. No one had a single piece of evidence to differentiate animals from fancy self-driving cars, nor any substantive argument. Many people insulted me. None had a scientific, materialist worldview, incorporating computer science principles, and could give any argument against my position which is compatible with that type of worldview. Nor did they give arguments that that kind of worldview is false. No one said anything that could plausibly have changed my mind. And people didn’t quote from my discussion tree and respond, nor suggest text for a new node. I linked and documented lots of the discussion on this page.

I was referred to dozens of pieces of literature, but none were relevant. In general, searching for terms like “software”, “hardware”, “algorithm” and “compu” immediately showed the source was irrelevant.

I also went to a vegan Discord for a YouTube debater to ask if they could help me improve my discussion tree diagram. I streamed what happened. Summary: They laughed at my view, then asked me to debate in voice chat (instead of giving literature), then banned me for not responding in 30 seconds while they knew I was busy fixing an audio issue.

This illustrates several things. First, my discussion tree shows how you can begin researching a topic in an organized way. You can pick a topic and create something similar. If you want to learn, it’s a great approach.

Second, there’s a serious lack of interest in discussion or debate in the world, and most people are quite ignorant and don’t even know of sources which argue why their beliefs are correct. They have some sources for why they’re right and rival views X and Y are wrong, but no answer to view Z, and will just keep giving you their answers to X and Y. Are you better or do you know of anyone who is better? Speak up.

Third, animal rights advocates broadly don’t know anything about computers and software and haven’t tried to update their thinking to take that stuff into account. Sad!

I encourage people to try creating a discussion tree on a topic that interests them, then ask for help finding sources and adding arguments to it. See what people, with what conclusions, have anything they’re willing to contribute, or not. You’ll learn a lot about the topic and about the rationality of the advocates of each viewpoint. It’ll help you judge issues yourself instead of deferring to the conclusions of experts (rather than their arguments). Even if you were happy to defer to expert opinions, it’s hard because experts disagree with each other; a discussion tree can help you organize those expert arguments.

You can also use discussion trees to organize and keep track of debates/discussions you have – as the conversation goes along, keep notes in a tree diagram.
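
As a rough illustration of what such a tree looks like as a data structure, here's a minimal sketch in Python. The class, field names, and example claims below are my own invention for this sketch, not taken from any particular diagramming tool:

```python
# Minimal sketch of a discussion tree: each node holds a claim and
# the replies (children) made to it. All names here are illustrative.

class Node:
    def __init__(self, claim, source=None):
        self.claim = claim      # the argument or question text
        self.source = source    # optional citation (book, post, person)
        self.replies = []       # child nodes responding to this claim

    def reply(self, claim, source=None):
        child = Node(claim, source)
        self.replies.append(child)
        return child

    def outline(self, depth=0):
        """Render the tree as an indented text outline."""
        lines = ["  " * depth + "- " + self.claim]
        for r in self.replies:
            lines.extend(r.outline(depth + 1))
        return lines

root = Node("Animals can suffer", source="Singer")
q = root.reply("Are animals like self-driving cars with extra features?")
q.reply("No answer found in the pro-animal-rights literature so far")
print("\n".join(root.outline()))
```

Keeping notes this way makes it easy to see which claims have replies and which branches of the debate are still unanswered.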

I made a video covering these events and more. It’s from when I’d gotten almost no answers, rather than a bunch of bad answers. And I streamed a bunch of my discussions when I got bad answers.

While discussing, I wrote several additional blog posts, including a second discussion tree.


This content was borrowed from my free email newsletter. Sign up here.


Elliot Temple | Permalink | Comments (16)

Human and Animal Differences

In the comments below, reply saying which is the first sentence you disagree with, and why you disagree.

Minds are software. Suffering is a state of mind. Physical information signals, whether from the eyes or from pain nerves, have to be processed by the software before they can cause suffering, be liked or disliked, etc. Before that they're just raw data and no meaning has been determined yet by the conscious mind.

Brains (both human and animal) are universal classical computers. The hardware between humans and some animals is similar. Hardware similarity doesn't tell you about software similarity. Computation is hardware independent. Similar or even identical hardware can run totally different computations. Studying hardware and comparing hardware similarities is a red herring.
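
The hardware-independence point can be made concrete with a toy sketch. Below, one fixed `machine` function plays the role of the hardware, and two different instruction lists play the role of the software; the instruction set is invented purely for illustration:

```python
# Toy illustration of hardware independence: one fixed "machine"
# (the interpreter below) runs totally different computations
# depending on the software it's given.

def machine(program, x):
    """Fixed 'hardware': applies each instruction in order."""
    for op, arg in program:
        if op == "add":
            x += arg
        elif op == "mul":
            x *= arg
    return x

doubler = [("mul", 2)]             # software #1: double the input
affine = [("mul", 3), ("add", 1)]  # software #2: triple it, then add 1

print(machine(doubler, 5))  # same hardware, software #1 -> 10
print(machine(affine, 5))   # same hardware, software #2 -> 16
```

The hardware is identical in both runs; only the software differs, and so does the computation performed.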

All animal behavior follows algorithms specified by their genes. Human genes specify a different type of algorithm – general intelligence – which involves the ability to create/design new knowledge, just as biological evolution created/designed the knowledge of optics in our eyes, the knowledge for how to build a computer out of neurons, the knowledge for what situations a rabbit should run away in, etc. General intelligence is the ability to evolve new knowledge. It’s the ability to replicate ideas with variation and selection, just as biological evolution proceeds by replicating genes with variation and selection. With animals, all the knowledge comes from biological evolution. Humans can do evolution of ideas inside their brains to create new knowledge, animals can’t.
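
As a loose analogy (not a model of minds), here's what replication with variation and selection looks like as an algorithm. The numbers standing in for "ideas", the fitness function, and all parameters are invented for this sketch:

```python
# Toy sketch of evolution: candidate "ideas" (here just numbers
# guessing a hidden target) are replicated with random variation,
# and selection keeps the best copies. Purely illustrative.

import random

random.seed(0)
TARGET = 42

def fitness(idea):
    return -abs(idea - TARGET)  # closer to the target = better

population = [0]
for generation in range(100):
    # replication with variation: each idea spawns 4 mutated copies
    variants = [idea + random.choice([-3, -1, 1, 3])
                for idea in population for _ in range(4)]
    # selection: keep only the best few, including the old best
    population = sorted(population + variants, key=fitness, reverse=True)[:5]

print(population[0])  # the best "idea" found
```

Because selection always retains the best candidate found so far, the population's best guess never gets worse, and over many generations it climbs toward the target.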


Elliot Temple | Permalink | Comments (0)

The Cambridge Declaration on Consciousness

The Cambridge Declaration on Consciousness (2012):

The field of Consciousness research is rapidly evolving. Abundant new techniques and strategies for human and non-human animal research have been developed. Consequently, more data is becoming readily available, and this calls for a periodic reevaluation of previously held preconceptions in this field.

ok

Studies of non-human animals have shown that homologous brain circuits correlated with conscious experience and perception can be selectively facilitated and disrupted to assess whether they are in fact necessary for those experiences. Moreover, in humans, new non-invasive techniques are readily available to survey the correlates of consciousness.

No. Wrong just in this summary, unsourced, and focusing on correlation instead of causation.

You can’t tell what is “necessary” by turning some things on and off. You turn off X and then Y doesn’t happen. Does that mean X is necessary to Y? No, some Z you didn’t consider could cause or allow Y. So they’re making a basic logic error.
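
The logic error can be shown with a toy example. In the invented setup below, Y occurs if either X or some unconsidered Z is active, so the turn-X-off experiment misleads:

```python
# Sketch of the logical point: observing that disabling X stops Y
# doesn't show X is *necessary* for Y, because some unconsidered Z
# might also produce Y. This setup is invented for illustration.

def y_occurs(x_on, z_on):
    # In this toy world, Y happens if either X or Z is active.
    return x_on or z_on

# The experiment as run: Z happens to be off the whole time.
assert y_occurs(x_on=True, z_on=False) is True    # X on  -> Y
assert y_occurs(x_on=False, z_on=False) is False  # X off -> no Y

# The conclusion "X is necessary for Y" is refuted by the case
# the experiment never considered:
assert y_occurs(x_on=False, z_on=True) is True    # Y without X
```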

And how can you do a correlation study involving “conscious experience” in non-human animals? How do you know if or when they have any conscious experience at all?

The neural substrates of emotions do not appear to be confined to cortical structures.

These people don’t seem to understand the hardware independence of computation. Or they think emotions are non-computational or something. But they don’t explain what they think and address the computer science issues.

In fact, subcortical neural networks aroused during affective states in humans are also critically important for generating emotional behaviors in animals.

Wait lol, after they brought up emotions the next sentence (this one) switches from emotions to “emotional behaviors”. Emotional behaviors are behaviors which look emotional according to some cultural intuitions of some researchers. This ain’t science.

The rest is more of the same crap that doesn’t address the issues or give sources, so I’m stopping now.


Elliot Temple | Permalink | Comment (1)