Open Letter to Machine Intelligence Research Institute

I emailed this to some MIRI people and others related to Less Wrong.

I believe I know some important things you don't, such as that induction is impossible, and that your approach to AGI is incorrect due to epistemological issues which were explained decades ago by Karl Popper. How do you propose to resolve that, if at all?

I think methodology for how to handle disagreements comes prior to the content of the disagreements. I have writing about my proposed methodology, Paths Forward, and about how Less Wrong doesn't work because of the lack of Paths Forward.

Can anyone tell me that I'm mistaken about any of this? Do you have a criticism of Paths Forward? Will any of you take responsibility for doing Paths Forward?

Have any of you written a serious answer to Karl Popper (the philosopher who refuted induction)? That's important to address, not ignore, since if he's correct then lots of your research approaches are mistakes.

In general, if someone knows a mistake you're making, what are the mechanisms for telling you and having someone take responsibility for addressing the matter well and addressing followup points? Or if someone has comments/questions/criticism, what are the mechanisms available for getting those addressed? Preferably this should be done in public with permalinks at a venue which supports nested quoting. And whatever your answer to this, is it written down in public somewhere?

Do you have public writing detailing your ideas whose correctness someone takes responsibility for? People at Less Wrong often say "read the sequences" but none of them take responsibility for addressing issues with the sequences, including answering questions or publishing fixes if there are problems. Nor do they want to address existing writing (e.g. by David Deutsch) which contains arguments refuting major aspects of the sequences.

Your forum says it's topic-limited to AGI math, so it's not appropriate for discussing criticism of the philosophical assumptions behind your approach (which, if correct, imply the AGI math you're doing is a mistake). And it states:

It’s important for us to keep the forum focused, though; there are other good places to talk about subjects that are more indirectly related to MIRI’s research, and the moderators here may close down discussions on subjects that aren’t a good fit for this forum.

But you do not link those other good places. Can you tell me any Paths-Forward-compatible other places to use, particularly ones where discussion could reasonably result in MIRI changing?

If you disagree with Paths Forward, will you say why? And do you have some alternative approach written in public?

Also, more broadly, whether you will address these issues or not, do you know of anyone that will?

If the answers to these matters are basically "no", then if you're mistaken, won't you stay that way, despite some better ideas being known and people being willing to tell you?

The (Popperian) Fallible Ideas philosophy community is set up to facilitate Paths Forward (our forum does this), and has knowledge of epistemology which implies you're making big mistakes. We address all known criticisms of our positions (which is achievable without using too many resources, like time and attention, as Paths Forward explains); do you?

Elliot Temple | Permalink | Comments (161)

Less Wrong Lacks Representatives and Paths Forward

In my understanding, there’s no one who speaks for Less Wrong (LW), as its representative, and is responsible for addressing questions and criticisms. LW, as a school of thought, has no agents, no representatives – or at least none who are open to discussion.

The people I’ve found interested in discussion on the website and slack have diverse views which disagree with LW on various points. None claim LW is true. They all admit it has some weaknesses, some unanswered criticisms. They have their own personal views which aren’t written down, and which they don’t claim to be correct anyway.

This is problematic. Suppose I wrote some criticisms of the sequences, or some Bayesian book. Who will answer me? Who will fix the mistakes I point out, or canonically address my criticisms with counter-arguments? No one. This makes it hard to learn LW’s ideas in addition to making it hard to improve them.

My school of thought (Fallible Ideas – FI) has representatives and claims to be correct as far as is known (like LW, it’s fallibilist, so of course we may discover flaws and improve it in the future). It claims to be the best current knowledge, which is currently non-refuted, and has refutations of its rivals. There are other schools of thought which say the same thing – they actually think they’re right and have people who will address challenges. But LW just has individuals who individually chat about whatever interests them without there being any organized school of thought to engage with. No one is responsible for defining an LW school of thought and dealing with intellectual challenges.

So how is progress to be made? Suppose LW, vaguely defined as it may be, is mistaken on some major points. E.g. Karl Popper refuted induction. How will LW find out about its mistake and change? FI has a forum where its representatives take responsibility for seeing challenges addressed, and have done so continuously for over 20 years (as some representatives stopped being available, others stepped up).

Which challenges are addressed? All of them. You can’t just ignore a challenge because it could be correct. If you misjudge something and then ignore it, you will stay wrong. Silence doesn’t facilitate error correction. For information on this methodology, see Paths Forward. BTW if you want to take this challenge seriously, you’ll need to click the link; I don’t repeat all of it. In general, having much knowledge is incompatible with saying all of it (even on one topic) upfront in forum posts without using references.

My criticism of LW as a whole is that it lacks Paths Forward (and lacks some alternative of its own to fulfill the same purpose). In that context, my criticisms regarding specific points don’t really matter (or aren’t yet ready to be discussed) because there’s no mechanism for them to be rationally resolved.

One thing FI has done, which is part of Paths Forward, is it has surveyed and addressed other schools of thought. LW hasn’t done this comparably – LW has no answer to Critical Rationalism (CR). People who chat at LW have individually made some non-canonical arguments on the matter that LW doesn’t take responsibility for (and which often involve conceding LW is wrong on some points). And they have told me that CR has critics – true. But which criticism(s) of CR does LW claim are correct and take responsibility for the correctness of? (Taking responsibility for something involves doing some major rethinking if it’s refuted – addressing criticism of it and fixing your beliefs if you can’t. Which criticisms of CR would LW be shocked to discover are mistaken, and then be eager to reevaluate the whole matter?) There is no answer to this, and there’s no way for it to be answered because LW has no representatives who can speak for it and who are participating in discussion and who consider it their responsibility to see that issues like this are addressed. CR is well known, relevant, and makes some clear LW-contradicting claims like that induction doesn’t work, so if LW had representatives surveying and responding to rival ideas, they would have addressed CR.

BTW I’m not asking for all this stuff to be perfectly organized. I’m just asking for it to exist at all so that progress can be made.

Anecdotally, I’ve found substantial opposition to discussing/considering methodology from LW people so far. I think that’s a mistake because we use methods when discussing or doing other activities. I’ve also found substantial resistance to the use of references (including to my own material) – but why should I rewrite a new version of something that’s already written? Text is text and should be treated the same whether it was written in the past or today, and whether it was written by someone else or by me (either way, I’m taking responsibility for it). I think people don’t understand that; they’re used to references being thrown around vaguely and irresponsibly – but they haven’t pointed out any instance where I made that mistake. Ideas should be judged by the idea, not by attributes of the source (reference or non-reference).

The Paths Forward methodology is also what I think individuals should personally do – it works the same for a school of thought or an individual. Figure out what you think is true and take responsibility for it. For parts that are already written down, endorse that and take responsibility for it. If you use something to speak for you, then if it’s mistaken you are mistaken – you need to treat that the same as your own writing being refuted. For stuff that isn’t written down adequately by anyone (in your opinion), it’s your responsibility to write it (either from scratch or using existing material plus your commentary/improvements). This writing needs to be put in public and exposed to criticism, and the criticism needs to actually get addressed (not silently ignored) so there are good Paths Forward. I hoped to find a person using this method, or interested in it, at LW; so far I haven’t. Nor have I found someone who suggested a superior method (or even any alternative method to address the same issues) or pointed out a reason Paths Forward doesn’t work.

Some people I talked with at LW seem to still be developing as intellectuals. For lots of issues, they just haven’t thought about it yet. That’s totally understandable. However I was hoping to find some developed thought which could point out any mistakes in FI or change its mind. I’m seeking primarily peer discussion. (If anyone wants to learn from me, btw, they are welcome to come to my forum. It can also be used to criticize FI.) Some people also indicated they thought it’d be too much effort to learn about and address rival ideas like CR. But if no one has done that (so there’s no answer to CR they can endorse), then how do they know CR is mistaken? If CR is correct, it’s worth the effort to study! If CR is incorrect, someone better write that down in public (so CR people can learn about their errors and reform; and so perhaps they could improve CR to no longer be mistaken or point out errors in the criticism of CR.)

One of the issues related to this dispute is I believe we can always proceed with non-refuted ideas (there is a long answer for how this works, but I don’t know how to give a short answer that I expect LW people to understand – especially in the context of the currently-unresolved methodology dispute about Paths Forward). In contrast, LW people typically seem to accept mistakes as just something to put up with, rather than something to try to always fix. So I disagree with ignoring some known mistakes, whereas LW people seem to take it for granted that they’re mistaken in known ways. Part of the point of Paths Forward is not to be mistaken in known ways.

Paths Forward is a methodology for organizing schools of thought, ideas, discussion, etc, to allow for unbounded error correction (as opposed to typical things people do like putting bounds on discussions, with discussion of the bounds themselves being out of bounds). I believe the lack of Paths Forward at LW is preventing the resolution of other issues like about the correctness of induction, the right approach to AGI, and the solution to the fundamental problem of epistemology (how new knowledge can be created).

Elliot Temple | Permalink | Comments (7)

Criticism of Eliezer Yudkowsky on Karl Popper

I wrote this in Feb 2009. There was no reply.

Dear Eliezer Yudkowsky,

I am writing to criticize some of your statements regarding Karl Popper. I hope this will be of interest.

Previously, the most popular philosophy of science was probably Karl Popper's falsificationism - this is the old philosophy that the Bayesian revolution is currently dethroning. Karl Popper's idea that theories can be definitely falsified, but never definitely confirmed, is yet another special case of the Bayesian rules

That isn't Popper's idea because he doesn't believe in definite falsifications. Falsifications are themselves tentative conjectures which must be held open to criticism and reconsideration.

Popper also doesn't assert that confirmations are never definite, rather he denies there is confirmation at all. The reason is that any given confirming evidence for theory T is logically consistent with T being false.

More generally, Popper's philosophy is not about what we can do definitely. He does not address himself to the traditional philosophical problem of what we can and can't be certain of, or what is and isn't a justified, true belief. While he did comment on those issues, his epistemic philosophy is not an alternative answer to those questions. Rather, his positive contributions focus on a more fruitful issue: conjectural knowledge. How do people acquire conjectural knowledge? What is its nature? And so on.

BTW, conjectural knowledge does not mean the probabilistic knowledge that Bayesians are fond of. Probabilistic knowledge is just as much anathema to Popper as certain knowledge, because the same criticisms (for example, that attempting justification leads to regress or circularity) apply equally well to both.

Your claim at the end of the quote that Popperian epistemology is a special case of Bayesian epistemology is especially striking. Popper considered the Bayesian approach and told us where he stands on it. On page 141 of Objective Knowledge he states, "I have combated [Bayesian epistemology] for thirty-three years."

To say that something which Popper combated for over three decades is a more general version of his own work is an extraordinary claim. It should be accompanied by extraordinary substantiation, and some account of where Popper's arguments on the subject go wrong, but it is not.

Popper was a hardworking, academic person who read and thought about philosophy extensively, including ideas he disagreed with. He would often try to present the best possible version of an idea, as well as a history of the problem in question, before offering his criticism of it. I would ask that a similar approach be taken in criticizing Popper. Both as a matter of respect, and because it improves discussion.

Elliot Temple | Permalink | Comments (2)


If I Were President...

If I were president I'd cancel most of the meetings, travel, etc, etc, and make some forums which are publicly readable.

There'd be a forum where all the countries have an account with write access. And one where all US politicians have write access. And one with a lot of media and intellectuals.

I think forum discussions are actually the best thing the US president could do.

Imagine if all the politicians, media personalities, etc., with bad ideas had to actually write about them on the record on a daily basis. Imagine if you just kept following up on discussions. What would they do? What most people currently do with me is just stop responding, which they can get away with socially because I have low prestige. But refusing to answer forever wouldn't be a viable response to the president's forum, and to arguments/questions from the president and his staff. That'd look really bad to the world: Nancy Pelosi has been asked the same question for 5 days in a row and just won't answer at all.

But if they did answer they'd get pinned down.

They'd have to do evasive tactics: missing the point, playing dumb, trying to create confusion, saying unclear things, trying to make the discussion go in circles, etc. All that stuff can be called out, pointed out, and basically made to look as foolish as it is. People get away with that stuff in verbal formats with little followup, and behind closed doors, but not against the best debaters over a period of weeks with every word of it in the record. None of the bad guys have any method of dealing with that level of intellectual scrutiny.

They can lie, but the lies can be documented and the canonical links documenting lies can be repetitively posted every single time a lie is repeated. Staff can be hired to do that. That would cost a hell of a lot less than a wide variety of current, unimportant government departments. It's very easy by government standards.

And how do you deal with media questions? Press briefings are so incomplete that it's hard for people to see who's right and why. What if all the bullshit the press kept bugging Trump about was on a forum where some staff members replied with canonical links over and over so everything was getting answered? How would the media continue to ignore the main points, which they currently ignore, if it was being linked in reply to them every time they talked?

Elliot Temple | Permalink | Comments (35)


Jack Murphy on Workplace SJWs

Jack Murphy wrote good tweets about Social Justice Warriors in the workplace:

We read about SJW insanity everywhere but somehow it still doesn't seem real. I began new day job recently which brought it all home to me.

On the first day they gave me the organizational goals for the year. One of them was: "Rid the org of bias through implicit bias training."

They made everyone take the implicit bias test online and then dedicated a required two-day retreat to reprogramming all the employees.

There were working groups which produced new goals such "reduce the number of white people." and "rid org of white supremacy bias."

They circulated pre-work reading materials with such titles as "Finding White Supremacy at work" and "How white culture creates injustice."

Apparently, insisting on promptness for meetings is now considered white supremacy.

What's interesting is that these ideas are bubbling up from the staffer level, + forcing management to respond. It's ground up not top down

Management said they hoped hiring me would help break the org of the mind virus. I'm not sure they know exactly what they're getting w/ me.

Some things which are now white supremacist: Objectivity. Individualism. "Worship of the written word." Sense of urgency. Perfectionism.

I walked into a room that had "too many white people" written on the white board. The culture war seems unreal until you see that at work.

I wouldn't ordinarily subject myself to this stuff, but a) the work itself is fascinating and I'm expert and b) it's like being undercover

I'm getting an inside look at shit I thought only existed in paranoid alt-right delusions. It's intriguing to see the mind virus at work.

I suspect at some point after the book comes out, my two worlds will collide. That'll be something. Maybe I'll get fired for truth.

If I get fired from the job for writing, that'll be good for the book. I'll cause a shit storm and the word will spread farther.

And until then, the pay is good, the work is even better, and I get real world confirmation of the culture war at work. Personal experience.

I'll keep cataloging the SJW craziness. If they fire me, I'll have a comprehensive list of EEO violations and file a civil rights case.

Plus, it's great fodder for the current book and future work. They're paying me to get a valuable dose of culture war reality.

I've resisted accepting the culture war as real. I don't want to think along gender divides or racial ones. But they leave me no choice.

I wrote this months ago and now I'm living it today:

For more content like this, make sure you buy my forthcoming book: #DemocratToDeplorable Sign up here for more!

Elliot Temple | Permalink | Comments (0)

IQ 3

These are replies to Ed Powell discussing IQ. This follows up on my previous posts: IQ and IQ 2.

Thanks for writing a reasonable reply to someone you disagree with. My most important comments are at the bottom and concern a methodology that could be used to make progress in the discussion.

I think we both have the right idea of "heritable." Lots of things are strongly heritable without being genetic.

OK, cool. Is there a single written work – which agrees “heritable” doesn’t imply genetic – which you think adequately expresses the argument today for genetic degrees of intelligence? It’d be fine if it’s a broad piece discussing lots of arguments with research citations that it’s willing to bet its claims on, or if it focuses on one single unanswerable point.

I think you take my analogy of a brain with a computer too far.

It's not an analogy: brains are literally computers. A computer is basically something that performs arbitrary computations, like 2+3 or reversing the letters in a word. That’s not nearly enough for intelligence, but it’s a building block intelligence requires. Computation and information flow are a big part of physics now, and if you try to avoid them you're stuck with alternatives like souls and magic.

I don't pretend to understand your argument above, and so I won't spend time debating it, but you surely realize that human intelligence evolved gradually over the last 5 or so million years (since our progenitors split from the branch that became chimps), and that this evolution did not consist of a mutant ADD Gate gene and another mutant NOT Gate gene.

There are lots of different ways to build computers. I don't think brains are made out of a big pile of NAND gates. But computers with totally different designs can all be universal – able to compute all the same things.
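As a side note on that universality claim: a standard illustration (my own, not from the original text) is that every Boolean function can be built from a single gate type such as NAND, so radically different gate-level designs end up able to compute the same things. A minimal sketch:

```python
# All Boolean logic from one gate type (NAND) -- one way very different
# hardware designs can be computationally equivalent.

def nand(a, b):
    return not (a and b)

def not_(a):
    return nand(a, a)

def and_(a, b):
    return not_(nand(a, b))

def or_(a, b):
    return nand(not_(a), not_(b))

def xor(a, b):
    return or_(and_(a, not_(b)), and_(not_(a), b))

# Verify against Python's built-in operators for every input combination.
for a in (False, True):
    for b in (False, True):
        assert and_(a, b) == (a and b)
        assert or_(a, b) == (a or b)
        assert xor(a, b) == (a != b)
```

The point isn't that brains contain NAND gates; it's that universality doesn't depend on any particular building block.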

Indeed, if intelligence is properly defined as "the ability to learn", then plenty of animals have some level of intelligence. Certainly my cats are pretty smart, and one can, among the thousands of cute cat videos on the internet, find examples of cats reasoning through options to open doors or get from one place to another. Dogs are even more intelligent. Even Peikoff changed his mind on Rand's pronouncement that animals and man are in different distinct classes of beings (animals obey instinct, man has no instinct and thinks) when he got a dog. Who knew that first hand experience with something might illuminate a philosophical issue?

I agree with Rand and I can also reach the same conclusion with independent, Popperian reasons.

I've actually had several dogs and cats. So I'm not disagreeing from lack of first hand experience.

What I would ask if I lacked that experience – and this is relevant anyway – is if you could point out one thing I'm missing (due to lack of experience, or for any other reason). What fact was learned from experience with animals that I don't know, and which contradicts my view?

I think you're not being precise enough about learning, and that with your approach you'd have to conclude that some video game characters also learn and are pretty smart. Whatever examples you provide about animal behaviors, I’ll be happy to provide parallel software examples – which I absolutely don’t think constitute human-like intelligence (maybe you do?).

Rand's belief in the distinct separation between man and animals when it comes to intellect is pretty contrary to the idea that man evolved gradually,

The jump to universality argument provides a way that gradual evolution could create something so distinct.

in the next few years the genetic basis of intelligence will in fact be found and we will no longer have anything to argue about. I don't think there's any real point arguing over this idea.

Rather than argue, would you prefer to bet on whether the genetic basis of higher intelligence will be found within the next 5 years? I'd love to bet $10,000 on that issue.

In any case, even if there was such a finding, there’d still be plenty to argue about. It wouldn’t automatically and straightforwardly settle the issues regarding the right epistemology, theory of computation, way to understand universality, etc.

We all know a bunch of really smart people who are in some ways either socially inept or completely nuts.

Yes, but there are cultural explanations for why that would be, and I don't think genes can control social skill (what exactly could the entire mechanism be, in hypothetical-but-rigorous detail?).

I know a number of people smarter than myself who have developed some form of mental illness, and it's fairly clear that these things are not unrelated.

Tangent: I consider the idea of "mental illness" a means of excusing and legitimizing the initiation of force. It's used to subvert the rule of law – both by imprisoning persons without trial and by keeping some criminals out of jail.

Link: Thomas Szasz Manifesto.

The point of IQ tests is to determine (on average) whether an individual will do well in school or work, and the correspondence between test results and success in school and work is too close to dismiss the tests as invalid, even if you don't believe in g or don't believe in intelligence at all.

Sure. As I said, I think IQ tests should be used more.

The tests are excellent predictors, especially in the +/- 3 SD area

Yes. I agree the tests do worse with outliers, but working well for over 99% of people is still useful!

The government has banned IQ tests from being used as discriminators for job fitness;

That's an awful attack on freedom and reason!

Take four or five internet IQ tests. I guarantee you the answers will be in a small range (+/- 5ish), even though they are all different. Clearly they measure something! And that something is correlated with success in school and work (for large enough groups).

I agree.

My one experience with Deutsch was his two interviews on Sam Harris's podcast

For Popper and Deutsch, I'd advise against starting with anything other than Deutsch's two books.

FYI Deutsch is a fan of Ayn Rand, an opponent of global warming, strongly in favor of capitalism, a huge supporter of Israel, and totally opposed to cultural and moral relativism (thinks Western culture is objectively and morally better, etc.).

I have some (basically Objectivist) criticism of Deutsch's interviews which will interest people here. In short, he's recently started sucking up to lefty intellectuals, kinda like ARI. But his flawed approach to dealing with the public doesn't prevent some of his technical ideas about physics, computation and epistemology from being true.

But if one doesn't believe g exists,

I think g is a statistical construct best forgotten.

or that IQ tests measure anything real,

I agree that they do, and that the thing measured is hard to change. Many people equate genetic with hard to change, and non-genetic with easy to change, but I don't. There are actual academic papers in this field which say, more or less, "Even if it's not genetic, we may as well count it as genetic because it's hard to change."

or that IQ test results don't correlate with scholastics or job success across large groups, then there's really nothing to discuss.

I agree that they do. I am in favor of more widespread use of IQ testing.

As I said, I think IQ tests measure a mix of intelligence, culture and background knowledge. I think these are all real, important, and hard to change. (Some types of culture and background knowledge are easy to change, but some other types are very hard to change, and IQ tests focus primarily on measuring the hard to change stuff, which is mostly developed in early childhood.)

Of course intelligence, culture and knowledge all correlate with job and school success.

Finally, I don't think agreement is possible on this issue, because much of your argument depends upon epistemological ideas of Popper/Deutsch and yourself, and I have read none of the source material. [...] I don't see how a discussion can proceed though on this IQ issue--or really any other issue--with you coming from such an alien (to me) perspective on epistemology that I have absolutely no insight into. I can't argue one way or the other about cultural memes since I have no idea what they are and what scientific basis for them exists. So I won't. I'm not saying you're wrong, I'm just saying I won't argue about something I know nothing about.

I'd be thrilled to find a substantial view on an interesting topic that I didn't already know about, that implied I was wrong about something important. Especially if it had some living representative(s) willing to respond to questions and arguments. I've done this (investigated ideas) many times, and currently have no high priority backlog. E.g. I know of no outstanding arguments against my views on epistemology or computation to address, nor any substantial rivals which aren't already refuted by an existing argument that I know of.

I've written a lot about methods for dealing with rival ideas. I call my approach Paths Forward. The basic idea is that it's rational to act so that:

  1. If I'm mistaken
  2. And someone knows it (and they're willing to share their knowledge)
  3. Then there's some reasonable way that I can find out and correct my mistake.

This way I don't actively prevent fixing my mistakes and making intellectual progress.

There are a variety of methods that can be used to achieve this, and also a variety of common methods which fail to achieve this. I consider the Paths-Forward-compatible methods rational, and the others irrational.

The rational methods vary greatly on how much time they take. There are ways to study things in depth, and also faster methods available when desired. Here's a fairly minimal rational method you could use in this situation:

Read until you find one mistake. Then stop and criticize.

You’ll find the first mistake early on unless the material is actually good. (BTW you're allowed to criticize meta mistakes, such as the author's failing to say why his stuff matters, rather than only criticizing internal or factual errors. You can also stop reading at your first question, instead of criticism.)

Your first criticism (or question) will often be met with dumb replies that you can evaluate using knowledge you already have about argument, logic, etc. Most people with bad ideas will make utter fools of themselves in answer to your first criticism or question. OK, done. Rather than ignore them, you've actually addressed their position, and their position now has an outstanding criticism (or unanswered question), and there is a path forward available (they could, one day, wise up and address the issue).

Sometimes the first criticism will be met with a quality reply which addresses the issue or refers you to a source which addresses it. In that case, you can continue reading until you find one more mistake. Keep repeating this process. If you end up spending a bunch of time learning the whole thing, it's because you can't find any unaddressed mistakes in it (it's actually great)!
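The method described above is essentially a loop. Here's a schematic sketch (the function and all names are my own illustration of the procedure, not the author's):

```python
# Schematic of the "read until one mistake, then stop and criticize" method:
# read each section, raise the first criticism or question found, and keep
# reading only if the issue gets a quality reply.

def evaluate_work(sections, find_issue, get_reply, reply_is_quality):
    """Return ('finished', None) if the whole work survives scrutiny,
    or ('stopped', issue) at the first issue that isn't addressed well."""
    for section in sections:
        issue = find_issue(section)          # first criticism or question, if any
        while issue is not None:
            reply = get_reply(issue)         # actually say the criticism, get a response
            if not reply_is_quality(reply):  # poor reply: stop here, issue stands
                return ('stopped', issue)
            issue = None                     # quality reply: resume reading
    return ('finished', None)

# Toy usage with stub functions standing in for human judgment:
result = evaluate_work(
    sections=['intro', 'argument'],
    find_issue=lambda s: 'unclear claim' if s == 'argument' else None,
    get_reply=lambda issue: 'weak',
    reply_is_quality=lambda r: r == 'strong',
)
# result == ('stopped', 'unclear claim')
```

The judgment calls (spotting a mistake, evaluating a reply) are of course human work; the sketch only shows the control flow: finishing the whole work means no unaddressed mistake was found.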

A crucial part of this method is actually saying your criticism or question. A lot of people read until the first thing they think is a mistake, then stop with no opportunity for a counter-argument. By staying silent, they're also giving the author (and his fans) no information to use to change their minds. Silence prevents progress regardless of which side is mistaken. Refusing to give even one argument leaves the other guy's position unrefuted, and leaves your position as not part of the public debate.

Another important method is to cite some pre-existing criticism of a work. You must be willing to take responsibility for what you cite, since you're using it to speak for you. It can be your own past arguments, or someone else's. The point is, the same bad idea doesn't need to be refuted twice – one canonical, reusable refutation is adequate. And by intentionally writing reusable material throughout your life, you'll develop a large stockpile which addresses common ideas you disagree with.

Rational methods aren't always fast, even when the other guy is mistaken. The less you know about the issues, the longer it can take. However, learning more about issues you don't know about is worthwhile. And once you learn enough important broad ideas – particularly philosophy – you can use them to argue about most ideas in most fields, even without much field-specific knowledge. Philosophy is that powerful! Especially when combined with a moderate amount of knowledge of the most important other fields.

Given limited time and many things worth learning, there are options about prioritization. One reasonable thing to do, which many people are completely unwilling to do, is to talk about one's interests and priorities, and actually think them through in writing and then expose one's reasoning to public criticism. That way there's a path forward for one's priorities themselves.

To conclude, I think a diversion into methodology could allow us to get the genetic intelligence discussion unstuck. I also believe that such methodology (epistemology) issues are a super important topic in their own right.

Elliot Temple | Permalink | Comments (7)

IQ 2

These are replies to Ed Powell discussing IQ. This follows up on my previous post.

I understand that you’re fed up with various bad counter-arguments about IQ, and why, and I sympathize with that. I think we can have a friendly and productive discussion, if you’re interested, and if you either already have sophisticated knowledge of the field or you’re willing to learn some of it (and if, perhaps as an additional qualification, you have an IQ over 130). As I emphasized, I think we have some major points of agreement on these issues, including rejecting some PC beliefs. I’m not going to smear you as a racist!

Each of these assertions is contrary to the data.

My claims are contrary to certain interpretations of the data, which is different than contradicting the data itself. I’m contradicting some people regarding some of their arguments, but that’s different than contradicting facts.

Just look around at the people you know: some are a lot smarter than others, some are average smart, and some are utter morons.

I agree. I disagree about the details of the underlying mechanism. I don’t think smart vs. moron is due to a single underlying thing. I think it’s due to multiple underlying things.

This also explains reversion to the mean

Reversion to the mean can also be explained by smarter parents not being much better parents in some crucial ways. (And dumber parents not being much worse parents in some crucial ways.)

Every piece of "circumstantial evidence" points to genes

No piece of evidence that is consistent with my position can point to genes over my position.

assertion that there exists a thing called g

A quote about g:

To summarize ... the case for g rests on a statistical technique, factor analysis, which works solely on correlations between tests. Factor analysis is handy for summarizing data, but can't tell us where the correlations came from; it always says that there is a general factor whenever there are only positive correlations. The appearance of g is a trivial reflection of that correlation structure. A clear example, known since 1916, shows that factor analysis can give the appearance of a general factor when there are actually many thousands of completely independent and equally strong causes at work. Heritability doesn't distinguish these alternatives either. Exploratory factor analysis being no good at discovering causal structure, it provides no support for the reality of g.
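The mechanism the quote describes (Thomson's 1916 sampling model) is easy to reproduce. Here's a toy simulation in Python – my own sketch, with arbitrary numbers – in which test scores are generated by thousands of independent, equally strong causes, with no general factor anywhere in the setup. Every correlation still comes out positive, and a dominant first factor appears anyway:

```python
import numpy as np

rng = np.random.default_rng(0)
n_people, n_causes, n_tests = 5000, 2000, 8

# Thousands of independent, equally strong "causes" (Thomson's model).
# Each test draws on a random subset of roughly half of them.
causes = rng.normal(size=(n_people, n_causes))
masks = (rng.random((n_tests, n_causes)) < 0.5).astype(float)
scores = causes @ masks.T  # each score = sum over that test's sampled causes

r = np.corrcoef(scores, rowvar=False)

# All off-diagonal correlations are positive (tests share ~25% of causes)...
assert (r[~np.eye(n_tests, dtype=bool)] > 0).all()

# ...so the first factor dominates, even though by construction there is
# no single general cause in the generating process.
eigvals = np.linalg.eigvalsh(r)[::-1]  # eigenvalues, largest first
print(eigvals[0] / eigvals.sum())      # share of variance on the first factor
```

The first factor soaks up roughly half the variance here, exactly the "appearance of g" the quote warns about, despite the 2000 underlying causes being completely independent.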

Back to quoting Ed:

I just read an article the other day where researchers have identified a large number of genes thought to influence intelligence.

I’ve read many primary source articles. That kind of correlation research doesn’t refute what I’m saying.

What do you think psychometricians have been doing for the last 100 years?

Remaining ignorant of philosophy, particularly epistemology, as well as the theory of computation.

It is certainly true that one can create culturally biased IQ test questions. This issue has been studied to death, and such questions have been ruthlessly removed from IQ tests.

They haven’t been removed from the version of the Wonderlic IQ test you chose to link, which I took my example from.

I think there’s an important issue here. I think you believe there are other IQ tests which are better. But you also believe the Wonderlic is pretty good and gets roughly the same results as the better tests for lots of people. Why, given the flawed question I pointed out (which had a lot more wrong with it than cultural bias), would the Wonderlic results be similar to the results of some better IQ test? If one is flawed and one isn’t flawed, why would they get similar results?

My opinion is as before: IQ tests don’t have to avoid cultural bias (and some other things) to be useful, because culture matters to things like job performance, university success, and how much crime an immigrant commits.

I don't use the term "genetic" because I don't mean "genetic", I mean "heritable," because the evidence supports the term "heritable."

The word "heritable" is a huge source of confusion. A technical meaning of "heritable" has been defined which is dramatically different than the standard English meaning. E.g. accent is highly "heritable" in the terminology of heritability research.

The technical meaning of “heritable” is basically: “Variance in this trait is correlated with changes in genes, in the environment we did the study in, via some mechanism of some sort. We have no idea how much of the trait is controlled by what, and we have no idea what environmental changes or other interventions would affect the trait in what ways.” When researchers know more than that, it’s knowledge of something other than “heritability”. More on this below.

I have not read the articles you reference on epistemology, but intelligence has nothing to do with epistemology, just as a computer's hardware has nothing to do with what operating system or applications you run on it.

Surely you accept that ideas (software) have some role in who is smart and who is a moron? And so epistemology is relevant. If one uses bad methods of thinking, one will make mistakes and look dumb.

Epistemology also tells us how knowledge can and can’t be created, and knowledge creation is a part of intelligent thinking.

OF COURSE INTELLIGENCE IS BASED ON GENES, because humans are smarter than chimpanzees.

I have a position on this matter which is complicated. I will briefly give you some of the outline. If you are interested, we can discuss more details.

First, one has to know about universality, which is best approached via the theory of computation. Universal classical computers are well understood. The repertoire of a classical computer is the set of all computations it can compute. A universal classical computer can do any computation which any other classical computer can do. For evaluating a computer’s repertoire, it’s allowed unlimited time and data storage.

Examples of universal classical computers are Macs, PCs, iPhones and Android phones (any of them, not just specific models). Human brains are also universal classical computers, and so are the brains of animals like dogs, cows, cats and horses. “Classical” is specified to exclude quantum computers, which use aspects of quantum physics to do some computations far more efficiently than any classical computer can (though not to compute anything a classical computer couldn’t, given unlimited time).

Computational universality sounds very fancy and advanced, but it’s actually cheap and easy. It turns out it’s difficult to avoid computational universality while designing a useful classical computer. For example, the binary logic operations NOT and AND (plus some control flow and input/output details) are enough for computational universality. That means they can be used to calculate division, Fibonacci numbers, optimal chess moves, etc.
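As a quick illustration (a sketch in Python rather than in hardware), here's how other Boolean operations, and the first step of binary arithmetic, fall out of just NOT and AND:

```python
def NOT(a):
    return 1 - a

def AND(a, b):
    return a & b

# Everything else can be built from these two. E.g. De Morgan's law
# gives OR, and OR plus AND plus NOT give XOR:
def OR(a, b):
    return NOT(AND(NOT(a), NOT(b)))

def XOR(a, b):
    return AND(OR(a, b), NOT(AND(a, b)))

# A half adder -- the basic building block of binary addition:
def half_adder(a, b):
    return XOR(a, b), AND(a, b)  # (sum bit, carry bit)
```

Chain enough of these together (plus the control flow and input/output details mentioned above) and you can compute division, Fibonacci numbers, chess moves, or anything else computable.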

There’s a jump to universality. Take a very limited thing, and add one new feature, and all of a sudden it gains universality! E.g. our previous computer was trivial with only NOT, and universal when we added AND. The same new feature which allowed it to perform addition also allowed it to perform trigonometry, calculus, and matrix math.

There are different types of universality, e.g. universal number systems (systems capable of representing any number which any other number system can represent) and universal constructors. Some things, such as the jump to universality, apply to multiple types of universality. The jump has to do with universality itself rather than with computation specifically.

Healthy human minds are universal knowledge creators. Animal minds aren’t. This means humans can create any knowledge which is possible to create (they have a universal repertoire). This is the difference between being intelligent or not intelligent. Genes control this difference (with the usual caveats, e.g. that a fetal environment with poison could cause birth defects).

Among humans, there are also degrees of intelligence. E.g. a smart person vs. an idiot. Animals are simply unintelligent and don’t have degrees of intelligence at all. Why do animals appear somewhat intelligent? Because their genes contain evolved knowledge and code for algorithms to control animal behavior. But that’s a fundamentally different thing than human intelligence, which can create new knowledge rather than relying on previously evolved knowledge present in genes.

Because of the jump to universality, there are no people or animals which can create 20%, 50%, 80% or 99% of all knowledge. Nothing exists with that kind of partial knowledge creation repertoire. It’s only 100% (universal) or approximately zero. If you have a conversation with someone and determine they can create a variety of knowledge (a very low bar for human beings, though no animal can meet it), then you can infer they have the capability to do universal knowledge creation.

Universal knowledge creation (intelligence) is a crucial capability our genes give us. From there, it’s up to us to decide what to do with it. The difference between a moron and a genius is how they use their capability.

Differences in degrees of human intelligence among healthy people (with e.g. adequate food) are due approximately 100% to ideas, not genes. Some of the main factors in early childhood idea development are:

  • Your culture’s anti-rational memes.
  • The behavior of your parents.
  • The behavior of other members of your culture that you interact with.
  • Sources of cultural information such as YouTube.
  • Your own choices, including mental choices about what to think.

The relevant ideas for intelligence are mostly unconscious and involve lots of methodology. They’re very hard for adults in our culture to change.

This is not the only important argument on this topic, but it’s enough for now.

This isn’t refuted in The Bell Curve, which doesn’t discuss universality. The concept of universal knowledge creators was first published in 2011 in The Beginning of Infinity. (FYI that book is by my colleague David Deutsch, and I contributed to the writing process.)

Below I provide some comments on The Bell Curve, primarily about how it misunderstands heritability research.

There is a most absurd and audacious Method of reasoning avowed by some Bigots and Enthusiasts, and through Fear assented to by some wiser and better Men; it is this. They argue against a fair Discussion of popular Prejudices, because, say they, tho’ they would be found without any reasonable Support, yet the Discovery might be productive of the most dangerous Consequences. Absurd and blasphemous Notion! As if all Happiness was not connected with the Practice of Virtue, which necessarily depends upon the Knowledge of Truth.
EDMUND BURKE, A Vindication of Natural Society

This is a side note, but I don’t think the authors realize Burke was being ironic and was attacking the position stated in this quote. The whole work, called a vindication of natural society (anarchy), is an ironic attack, not actually a vindication.

Heritability, in other words, is a ratio that ranges between 0 and 1 and measures the relative contribution of genes to the variation observed in a trait.

This is incomplete because it omits the simplifying assumptions being made. From Yet More on the Heritability and Malleability of IQ:

To summarize: Heritability is a technical measure of how much of the variance in a quantitative trait (such as IQ) is associated with genetic differences, in a population with a certain distribution of genotypes and environments. Under some very strong simplifying assumptions, quantitative geneticists use it to calculate the changes to be expected from artificial or natural selection in a statistically steady environment. It says nothing about how much the over-all level of the trait is under genetic control, and it says nothing about how much the trait can change under environmental interventions. If, despite this, one does want to find out the heritability of IQ for some human population, the fact that the simplifying assumptions I mentioned are clearly false in this case means that existing estimates are unreliable, and probably too high, maybe much too high.

Note that the word “associated” in the quote refers to correlation, not to causality. Whereas the authors of The Bell Curve use the word “contribution” instead, which doesn’t mean “correlation” and is therefore wrong.

Here’s another source on the same point, Genetics and Reductionism:

high [narrow] heritability, which is routinely taken as indicative of the genetic origin of traits, can occur when genes alone do not provide an explanation of the genesis of that trait. To philosophers, at least, this should come as no paradox: good correlations need not even provide a hint of what is going on. They need not point to what is sometimes called a "common cause". They need not provide any guide to what should be regarded as the best explanation.

You can also read some primary source research in the field (as I have) and see what sort of “heritability” it does and doesn’t study, and what sort of limitations it has. If you disagree, feel free to provide a counter example (primary source research, not meta or summary), which you’ve read, which studies a different sort of IQ “heritability” than my two quotes talk about.

What happens when one understands “heritable” incorrectly?

Then one of us, Richard Herrnstein, an experimental psychologist at Harvard, strayed into forbidden territory with an article in the September 1971 Atlantic Monthly. Herrnstein barely mentioned race, but he did talk about heritability of IQ. His proposition, put in the form of a syllogism, was that because IQ is substantially heritable, because economic success in life depends in part on the talents measured by IQ tests, and because social standing depends in part on economic success, it follows that social standing is bound to be based to some extent on inherited differences.

This is incorrect because it treats “heritable” (as measured in the research) as meaning “inherited”.

How Much Is IQ a Matter of Genes?

In fact, IQ is substantially heritable. [...] The most unambiguous direct estimates, based on identical twins raised apart, produce some of the highest estimates of heritability.

This incorrectly suggests that IQ is substantially a matter of genes because it’s “heritable” (as determined by twin studies).

Specialists have come up with dozens of procedures for estimating heritability. Nonspecialists need not concern themselves with nuts and bolts, but they may need to be reassured on a few basic points. First, the heritability of any trait can be estimated as long as its variation in a population can be measured. IQ meets that criterion handily. There are, in fact, no other human traits—physical or psychological—that provide as many good data for the estimation of heritability as the IQ. Second, heritability describes something about a population of people, not an individual. It makes no more sense to talk about the heritability of an individual’s IQ than it does to talk about his birthrate. A given individual’s IQ may have been greatly affected by his special circumstances even though IQ is substantially heritable in the population as a whole. Third, the heritability of a trait may change when the conditions producing variation change. If, one hundred years ago, the variations in exposure to education were greater than they are now (as is no doubt the case), and if education is one source of variation in IQ, then, other things equal, the heritability of IQ was lower then than it is now.


Now for the answer to the question, How much is IQ a matter of genes? Heritability is estimated from data on people with varying amounts of genetic overlap and varying amounts of shared environment. Broadly speaking, the estimates may be characterized as direct or indirect. Direct estimates are based on samples of blood relatives who were raised apart. Their genetic overlap can be estimated from basic genetic considerations. The direct methods assume that the correlations between them are due to the shared genes rather than shared environments because they do not, in fact, share environments, an assumption that is more or less plausible, given the particular conditions of the study. The purest of the direct comparisons is based on identical (monozygotic, MZ) twins reared apart, often not knowing of each other’s existence. Identical twins share all their genes, and if they have been raised apart since birth, then the only environment they shared was that in the womb. Except for the effects on their IQs of the shared uterine environment, their IQ correlation directly estimates heritability. The most modern study of identical twins reared in separate homes suggests a heritability for general intelligence between .75 and .80, a value near the top of the range found in the contemporary technical literature. Other direct estimates use data on ordinary siblings who were raised apart or on parents and their adopted-away children. Usually, the heritability estimates from such data are lower but rarely below .4.

This is largely correct if you read “heritability” with the correct, technical meaning. But the assumption that people raised apart don’t share environment is utterly false. People raised apart – e.g. in different cities in the U.S. – share tons of cultural environment. For example, many ideas about parenting practices are shared between parents in different cities.
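To see how much that assumption matters, here's a toy simulation – my own illustrative model with arbitrary effect sizes, not data from any study – of MZ twins "reared apart" who nonetheless share cultural environment. The direct method counts all of their correlation as genetic:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000  # simulated pairs of MZ twins reared apart

genes = rng.normal(size=n)    # fully shared by both twins
culture = rng.normal(size=n)  # ALSO shared: parenting norms, schooling ideas, etc.
noise1 = rng.normal(size=n)   # unshared influences on twin 1
noise2 = rng.normal(size=n)   # unshared influences on twin 2

twin1 = genes + culture + noise1
twin2 = genes + culture + noise2

# The "direct" heritability estimate is just the twin-twin correlation,
# which attributes ALL shared variance to genes. Genes contribute 1/3 of
# the variance in this model, but the estimate comes out near 2/3.
estimate = np.corrcoef(twin1, twin2)[0, 1]
print(estimate)
```

In this model the direct estimate roughly doubles the genetic contribution, because the shared-culture variance gets silently booked under "genes". The real-world sizes of these terms are exactly what's in dispute; the point is only that the method can't tell them apart.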

Despite my awareness of these huge problems with IQ research, I still agree with some things you’re saying and believe I know how to defend them correctly. In short, genetic inferiority is no good (and contradicts Ayn Rand, btw), but cultural inferiority is a major world issue (and correlates with race, which has led to lots of confusion).

As a concrete reminder of what we’re discussing, I’ll leave you with an IQ test question to ponder:

Read my followup post: IQ 3

Elliot Temple | Permalink | Comments (0)


IQ

This is a reply to Ed Powell writing about IQ.

I believe IQ tests measure a mix of intelligence, culture and background knowledge.

That's useful! Suppose I'm screening employees to hire. Is a smart employee the only thing I care about? No. I also want him to fit in culturally and be knowledgeable. Same thing with immigrants.

The culture and background knowledge measured by IQ tests isn't superficial. It's largely learned in early childhood and is hard to change. It is possible to change. I would expect assimilating to raise IQ scores on many IQ tests, just as learning arithmetic raises scores on many IQ tests for people who didn't know it before.

Many IQ test questions are flawed. They have ambiguities. But this doesn't make IQ tests useless. It just makes them less accurate, especially for people who are smarter than the test creators. Besides, task assignments from your teacher or boss contain ambiguities too, and you're routinely expected to know what they mean anyway. So it matters whether you can understand communications in a culturally normal way.

Here's a typical example of a flawed IQ test question. We could discuss the flaws if people are interested in talking about it. And I'm curious what people think the answer is supposed to be.

IQ tests don't give perfect foresight about an individual's future. So what? You don't need perfectly accurate screening for hiring, college admissions or immigration. Generally you want pretty good screening which is cheap. If someone comes up with a better approach, more power to them.

Would it be "unfair" to some individual that they aren't hired for a job they'd be great at because IQ tests aren't perfect? Sure, sorta. That sucks. The world is full of things going wrong. Pick yourself up and keep trying – you can still have a great life. You have no right to be treated "fairly". The business does have a right to decide who to hire or not. There's no way to make hiring perfect. If you know how to do hiring better, sell them the method. But don't get mad at hiring managers for lacking omniscience. (BTW hiring is already unfair and stupid in lots of ways. They should use more work sample tests and less social metaphysics. But the problems are largely due to ignorance and error, not conscious malice.)

Ed Powell writes:

Since between 60% and 80% of IQ is heritable, it means that their kids won't be able to read either. Jordan Peterson in one of his videos claims that studies show there are no jobs at all in the US/Canadian economies for anyone with an IQ below about 83. That means 85% of the Somalian immigrants (and their children!) are essentially unemployable. No immigration policy of the US should ignore this fact.

I've watched most of Jordan Peterson's videos. And I know, e.g., that the first video YouTube sandboxed in their new censorship campaign was about race and IQ.

I agree that it's unrealistic for a bunch of low IQ Somalians to come here and be productive in U.S. jobs. I think we agree on lots of conclusions.

But I don't think IQ is heritable in the normal sense of the word "heritable", meaning that it's controlled by genes passed on by parents. (There's also a technical definition of "heritable", which basically means correlation.) For arguments, see: Yet More on the Heritability and Malleability of IQ.

I don't think intelligence is genetic. The studies claiming it's (partly) genetic basically leave open the possibility that it's a gene-environment interaction of some kind, which leaves open the possibility that intelligence is basically due to memes. Suppose parents in our culture give worse treatment to babies with black skin, and this causes lower intelligence. That's a gene-environment interaction. In this scenario, would you say that the gene for black skin is a gene for low intelligence? Even partly? I wouldn't. I'd say genes aren't controlling intelligence in this scenario, culture is (and, yes, our culture has some opinions about some genetic traits like skin color).
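A toy version of that scenario (entirely hypothetical numbers, just to make the logic concrete): the gene does nothing to the trait directly, the culture reacts to the gene, and yet the trait ends up strongly correlated with the gene – which is all a heritability study measures:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

gene = rng.integers(0, 2, size=n)  # e.g. a gene for skin color

# The CULTURE responds to the gene: worse treatment when gene == 1.
# The trait is caused entirely by the treatment, not by the gene itself.
treatment = np.where(gene == 1, -1.0, 1.0) + rng.normal(size=n)
trait = treatment

# The gene-trait correlation is strong anyway, so in this environment
# the trait would score as highly "heritable" -- while the actual causal
# lever is cultural, and changing the culture would erase the gap.
r = np.corrcoef(gene, trait)[0, 1]
print(r)
```

The correlation comes out strong (around 0.7 in magnitude) even though intervening on the culture, not the genes, is what would change the trait.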

When people claim intelligence (or other things) are due to ideas, they usually mean it's easy to change. Just use some willpower and change your mind! But memetic traits can actually be harder to change than genetic traits. Memes evolve faster than genes, and some old memes are very highly adapted to prevent themselves from being changed. Meanwhile, it's pretty easy to intervene to change your genetic hair color with dye.

I think intelligence is a primarily memetic issue, and the memes are normally entrenched in early childhood, and people largely don't know how to change them later. So while the mechanism is different, the conclusions are still similar to if it were genetic. One difference is that I'm hopeful that dramatically improved parenting practices will make a large difference in the world, including by raising people's intelligence.

Also, if memes are crucial, then current IQ score correlations may fall apart if there's a big cultural shift of the right kind. IQ test research only holds within some range of cultures, not in all imaginable cultures. But so what? It's not as if we're going to wake up in a dramatically different culture tomorrow...

I don't believe that IQ tests measure general intelligence – which I don't think exists as a single, well-defined thing. I have epistemological reasons for this which are complicated and differ from Objectivism on some points. I do think that some people are smarter than others. I do think there are mental skills, which fall under the imprecise term "intelligence", and have significant amounts of generality.

Because of arguments about universality (which we can discuss if there's interest), I think all healthy people are theoretically capable of learning anything that can be learned. But that doesn't mean they will! What stops them isn't their genes, it's their ideas. They have anti-rational memes from early childhood which are very strongly entrenched. (I also think people have free will, but often choose to evade, rationalize, breach their integrity, etc.)

Some people have better ideas and memes than others. So I share a conclusion with you: some people are dumber than others in important very-hard-to-change ways (even if it's not genetic), and IQ test scores do represent some of this (imperfectly, but meaningfully).

For info about memes and universality, see The Beginning of Infinity.

And, btw, of course there are cultural and memetic differences correlated with e.g. race, religion and nationality. For example, on average, if you teach your kids not to "act white" then they're going to turn out dumber.

So, while I disagree about many of the details regarding IQ, I'm fine with a statement like "criminality is mainly concentrated in the 80-90 IQ range". And I think IQ tests could improve immigration screening.

Read my followup post: IQ 2

Elliot Temple | Permalink | Comments (0)

Banned from "Critical Rationalist" Facebook Group

Matt Dioguardi owns a Facebook group with around 5000 members. The membership believes it's an open discussion forum with relaxed rules (just post all you want that's related to Popper "in some manner"), because that's what it publicly states, in writing.

However, I was banned because I didn't like some of Matt's friends' comments and blocked them on Facebook to stop seeing their messages. I don't need toxic people in my life.

I would never dream of banning someone from the Fallible Ideas forum because they set up a mail rule to block posts by my friends Justin and Alan. Some of Matt's friends, like Justin and Alan, were moderators – so what?

Prior to that I had some posts blocked for reasons like mentioning Ayn Rand (in addition to Popper) or mentioning parenting and education (from a Popperian perspective, and in addition to talking about how to spread Critical Rationalist ideas). Discussing the moderation had been unproductive (they refused to answer clarifying questions about the policies or update the stated rules to the actual rules). Some of the forum discussions had also been unproductive (e.g. I repeatedly asked some flamers to stop harassing me, and they did the passive-aggressive version of telling me to go fuck myself – then redoubled their efforts to harass me). I didn't flame anyone.

So I decided it was time to stop engaging with the toxic people. I knew I was at risk of being banned if I did some further action that wasn't appreciated and there was no problem-solving discussion to address it. I decided to risk this because I thought talking with the toxic people wouldn't solve problems and could actually cause problems. But they wouldn't just leave me alone. For my decision to refocus on productive discussion, and ignore everything else, I was banned. (Dioguardi stated the reason for the ban; it's not speculation.)

Some of them clearly didn't like me (e.g. one of the moderators was also one of the repeat flamers) and wanted an excuse to get rid of me. But what kind of excuse is this? Nothing was wrong with anything I posted, and they banned me anyway!

Update: They also banned anyone from posting a link to anything I wrote.

Elliot Temple | Permalink | Comments (21)

Discussion About the Importance of Explanations with Andrew Crawshaw

From Facebook:

Justin Mallone:

The following excerpt argues that explanations are what is absolutely key in Popperian philosophy, and that Popper over-emphasizes the role of testing in science, but that this mistake was corrected by physicist and philosopher David Deutsch (see especially the discussion of the grass cure example). What do people think?
(excerpted from:

Most ideas are criticized and rejected for being bad explanations. This is true even in science where they could be tested. Even most proposed scientific ideas are rejected, without testing, for being bad explanations.
Although tests are valuable, Popper's over-emphasis on testing mischaracterizes science and sets it further apart from philosophy than need be. In both science and abstract philosophy, most criticism revolves around good and bad explanations. It's largely the same epistemology. The possibility of empirical testing in science is a nice bonus, not a necessary part of creating knowledge.

In [The Fabric of Reality], David Deutsch gives this example: Consider the theory that eating grass cures colds. He says we can reject this theory without testing it.
He's right, isn't he? Should we hire a bunch of sick college students to eat grass? That would be silly. There is no explanation of how grass cures colds, so nothing worth testing. (Non-explanation is a common type of bad explanation!)
Narrow focus on testing -- especially as a substitute for support/justification -- is one of the major ways of misunderstanding Popperian philosophy. Deutsch's improvement shows how its importance is overrated and, besides being true, is better in keeping with the fallibilist spirit of Popper's thought (we don't need something "harder" or "more sciency" or whatever than critical argument!).

Andrew Crawshaw: I see, but it might turn out that grass cures cold. This would just be an empirical fact, demanding scientific explanation.

TC: Right, and if a close reading of Popper yielded anything like "test every possible hypothesis regardless of what you think of it", this would represent an advancement over Popper's thought. But he didn't suggest that.

Andrew Crawshaw: We don't reject claims of the form indicated by Deutsch because they are bad explanations. There are plenty of dangling empirical claims that we still hold to be true but which are unexplained. Deutsch is mistaking the import of his example.

Elliot Temple:

There are plenty of dangling empirical claims that we still hold to be true but which are unexplained.

That's not the issue. Are there any empirical claims we have criticism of, but which we accept? (Pointing out that something is a bad explanation is a type of criticism.)

Andrew Crawshaw: If you think that my burden is to show that there are empirical claims that are refuted but that we accept, then you have not understood my criticism.

For example

Grass cures colds.

Is of the same form as

aluminium hydroxide contributes to the production of a large quantity of antibodies.

Both are empirical claims, but they are not explanatory. That does not make them bad.

Neither of them are explanations. One is accepted and the other is not.

It's not good saying that the former is a bad explanation.

The latter has not yet been properly explained by science.

Elliot Temple: The difference is we have explanations of how aluminum hydroxide works, e.g. from wikipedia " It reacts with excess acid in the stomach, reducing the acidity of the stomach content"

Andrew Crawshaw: Not in relation to its antibody mechanism.

Elliot Temple: Can you provide reference material for what you're talking about? I'm not familiar with it.

Andrew Crawshaw: I can, but it is still irrelevant to my criticism. Which is that they are both not explanatory claims, but one is held as true while the other is not.

They are low-level empirical claims that call out for explanation, they don't themselves explain. Deutsch is misemphasizing.

Elliot Temple: your link is broken, and it is relevant b/c i suspect there is an explanation.

Andrew Crawshaw: It's still irrelevant to my criticism. Which is that we often accept things like rules of thumb, even when they are unexplained. They don't need to be explained for them to be true or for us to class them as true. Miller talks about this extensively. For instance, strapless evening gowns were not understood scientifically for ages.

Elliot Temple: i'm saying we don't do that, and you're saying you have a counter-example but then you say the details of the counter-example are irrelevant. i don't get it.

Elliot Temple: you claim it's a counter example. i doubt it. how are we to settle this besides looking at the details?

Andrew Crawshaw: My criticism is that calling such a claim a bad explanation is irrelevant to those kinds of claims. They are just empirical claims that beg for explanation.

Elliot Temple: zero explanation is a bad explanation and is a crucial criticism. things we actually use have more explanation than that.

Andrew Crawshaw: So?

Elliot Temple: so DD and I are right: we always go by explanations. contrary to what you're saying.

Andrew Crawshaw: We use aluminium hydroxide for increasing anti-bodies, and strapless evening gowns, even before they were explained.

Elliot Temple: i'm saying i don't think so, and you're not only refusing to provide any reference material about the matter but you claimed such reference material (indicating the history of it and the reasoning involved) is irrelevant.

Andrew Crawshaw: I have offered it. I re-edited my post.

Elliot Temple: please don't edit and expect me to see it, it usually doesn't show up.

Andrew Crawshaw: You still have not criticised my claim. The one comparing the two sentences which are of the same form, yet one is accepted and one not.

Elliot Temple: the sentence "aluminium hydroxide contributes to the production of a large quantity of antibodies." is inadequate and should be rejected.

the similar sentence with a written or implied footnote to details about how we know it would be a good claim. but you haven't given that one. the link you gave isn't the right material: it doesn't say what aluminium hydroxide does, how we know it, how it was discovered, etc

Elliot Temple: i think your problem is mixing up incomplete, imperfect explanations (still have more to learn) with non-explanation.

Andrew Crawshaw: No, it does not. But to offer that would be to explain. Which is exactly what I am telling you is irrelevant.

What is relevant is whether the claim itself is a bad explanation. It's just an empirical claim.

The point is just that we often have empirical claims that are not explained scientifically yet we accept them as true and use them.

Elliot Temple: We don't. If you looked at the history of it you'd find there were lots of explanations involved.

Elliot Temple: I guess you just don't know the history either, which is why you don't know the explanations involved. People don't study or try things randomly.

Elliot Temple: If you could pick a better known example which we're both familiar with, i could walk you through it.

Andrew Crawshaw: There was never an explanation of how bridges worked. But there were rules of thumb of how to build them. There are explanations of how to use aluminium hydroxide, but its actual mechanism is unknown.

Elliot Temple: what are you talking about with bridges. you can walk on strong, solid objects. what do you not understand?

Andrew Crawshaw: That's not how they work. I am talking about the scientific explanation of forces and tensions. It was not always understood despite the fact that they were built. This is the same with beavers' dams; they don't know any of the explanations of how to build dams.

Elliot Temple: you don't have to know everything that could be known to have an explanation. understanding that you can walk on solid objects, and they can be supported, etc, is an explanation, whether you know all the math or not. that's what the grass cure for the cold lacks.

Elliot Temple: the test isn't omniscience, it's having a non-refuted explanation.

Andrew Crawshaw: Hmm, but are you saying then that even bad explanations can be accepted? Cuz as far as I can tell many of the explanations for bridge building were bad, yet they still built bridges.

Anyway you are still not locating my criticism. You are criticising something I never said, it seems. Which is that "grass cures colds" has not been explained. But what Deutsch was claiming was that the claim itself was a bad explanation, which is true if bad explanation includes non-explanation, but it is not the reason it is not accepted. As the hydroxide thing suggests.

Elliot Temple: We should only accept an explanation that we don't know any criticism of.

We need some explanation or we'd have no idea if what we're doing would work, we'd be lost and acting randomly without rhyme or reason. And that initial explanation is what we build on – we later improve it to make it more complete, explain more stuff.

Andrew Crawshaw: I think this is incorrect. All the animals that can do things refute your statement.

Elliot Temple: The important thing is the substance of the knowledge, not whether it's written out in the form of an English explanation.

Andrew Crawshaw: Just because there is an explanation of how some physical substrate interacts with another physical substrate, does not mean that you need explanations. Explanations are in language. Knowledge not necessarily. Knowledge is a wider phenomenon than explanation. I have many times done things by accident that have worked, but I have not known why.

Elliot Temple: This is semantics. Call it "knowledge" then. You need non-refuted knowledge of how something could work before it's worth trying. The grass cure for the cold idea doesn't meet this bar. But building a log bridge without knowing modern science is fine.

Andrew Crawshaw: Before it's worth trying? I don't think so; rules of thumb are discovered by accident and then re-used without knowing how or why they could work, it just works and then they try it again and it works again. Are you denying that that is a possibility?

Elliot Temple: Yes, denying that.

Andrew Crawshaw: Well, you are offering foresight to evolution then, it seems.

Elliot Temple: That's vague. Say what you mean.

Andrew Crawshaw: I don't think it is that vague. If animals can build complex things like beaver dams, and they should have had knowledge of how it could work before it was worth trying out, then they had a lot of foresight before they tried them out. Or could it be the fact that it is the other way round: we stumble on rules of thumb, develop them, then come up with explanations about how they possibly work. I am more inclined to the latter. The former is just another version of the argument from design.

Elliot Temple: humans can think and they should think before acting. it's super inefficient to act mindlessly. genetic evolution can't think and instead does things very, very, very slowly.

Andrew Crawshaw: But thinking before acting is true. Thinking is critical. It needs material to work on. Which is guesswork and sometimes, if not often, accidental actions.

Elliot Temple: when would it be a good idea to act thoughtlessly (and which thoughtless action) instead of acting according to some knowledge of what might work?

Elliot Temple: e.g. when should you test the grass cure for cancer, with no thought to whether it makes any sense, instead of thinking about what you're doing and acting according to your rational thought? (which means e.g. considering what you have some understanding could work, and what you have criticisms of)

Andrew Crawshaw: Wait, we often act thoughtlessly whether or not we should. I don't even think it is a good idea. But we often try to do things and end up somewhere which is different to what we expected; it might be worse or better. For instance, we might try to eat grass because we are hungry and then happen to notice that our cold disappeared, and stumble on a cure for the cold.

Andrew Crawshaw: And different to what we expected might work even though we have no idea why.

Elliot Temple: DD is saying what we should do, he's talking about reason. Sometimes people act foolishly and irrationally but that doesn't change what the proper methods of creating knowledge are.

Sometimes unexpected things happen and you can learn from them. Yes. So what?

Andrew Crawshaw: But if Deutsch expects that we can only work with explanations, then he is mistaken. Which is, it seems, what you have changed your mind about.

Elliot Temple: I didn't change my mind. What?

What non-explanations are you talking about people working with? When an expectation you have is violated, and you investigate, the explanation is you're trying to find out if you were mistaken and figure out the thing you don't understand.

Elliot Temple: what do you mean "work with"? we can work with (e.g. form explanations about) spreadsheet data. we can also work with hammers. resources don't have to be explanations themselves, we just need an explanation of how to get value out of the resource.

Andrew Crawshaw: There is only one method of creating knowledge: guesswork. Or, if genetically, by mutation. Physical things are often made without know-how and then they are applied in various contexts and they might and might not work; that does not mean we know how they work.

Elliot Temple: if you didn't have an explanation of what actions to take with a hammer to achieve what goal, then you couldn't proceed and be effective with the hammer. you could hit things randomly and pray it works out, but it's not a good idea to live that way.

Elliot Temple: (rational) humans don't proceed purely by guesses, they also criticize the guesses first and don't act on the refuted guesses.

Andrew Crawshaw: Look there are three scenarios

  1. Act on knowledge
  2. Stumble upon solution by accident, without knowing why it works.
  3. Act randomly

Elliot Temple: u always have some idea of why it works or you wouldn't think it was a solution.

Andrew Crawshaw: No, all you need is to recognise that it worked. This is easily done by seeing that what you wanted to happen happened. It is a non sequitur to then assume that you know something of how it works.

Elliot Temple: you do X. Y results. Y is a highly desirable solution to some recurring problem. do you now know that X causes Y? no. you need some causal understanding, not just a correlation. if you thought it was impossible that X causes Y, you would look for something else. if you saw some way it's possible X causes Y, you have an initial explanation of how it could work, which you can and should expose to criticism.

Elliot Temple:

Know all you need is to recognise that it works.

plz fix this sentence, it's confusing.

Andrew Crawshaw: You might guess that it caused it. You don't need to understand it to guess that it did.

Elliot Temple: correlation isn't causation. you need something more.

Elliot Temple: like thinking of a way it could possibly cause it.

Elliot Temple: that is, an explanation of how it works.

Andrew Crawshaw: I am not saying correlation is causation; you don't need to have explained guesswork before you have guessed it. You first need to guess that something caused something before you go out and explain it. Otherwise what are you explaining?

Elliot Temple: you can guess X caused Y and then try to explain it. you shouldn't act on the idea that X caused Y if you have no explanation of how X could cause Y. if you have no explanation, then that's a criticism of the guess.

Elliot Temple: you have some pre-existing understanding of reality (including the laws of physics) which you need to fit this into, don't just treat the world as arbitrary – it's not and that isn't how one learns.

Andrew Crawshaw: That's not a criticism of the guess. It's ad hominem and justificationist.

Elliot Temple: "that" = ?

Andrew Crawshaw: I am agreeing totally with you about many things

  1. We should increase our criticism as much as possible.
  2. We do have inbuilt expectations about how the world works.

What We are not agreeing about is the following

  1. That a guess has to be backed up by explanation for it to be true or classified as true. All we need is to criticise the guess. Arguing otherwise seems to me a type of justificationism.

  2. That in order to get novel explanations and creations, this often is done despite the knowledge and necessarily has to be that way otherwise it would not be new.

Elliot Temple:

That's not a criticism of the guess. It's ad hominem and justificationist.

please state what "that" refers to and how it's ad hominem, or state that you retract this claim.

Andrew Crawshaw: That someone does not have an explanation. First, because explanations are not easy to come by and someone not having an explanation for something does not in any way impugn the pedigree of the guess or the strategy etc. Second, explanation is important and needed, but not necessary for trying out the new strategy, y, that you guess causes x. You might develop explanations while using it. You don't need the explanation before using it.

Elliot Temple: Explanations are extremely easy to come by. I think you may be adding some extra criteria for what counts as an explanation.

Re your (1): if you have no explanation, then you can criticize it: why didn't they give it any thought and come up with an explanation? they should do that before acting, not act thoughtlessly. it's a bad idea to act thoughtlessly, so that's a criticism.

it's trivial to come up with even an explanation of how grass cures cancer: cancer is internal, and various substances have different effects on the body, so if you eat it it may interact with and destroy the cancer.

the problem with this explanation is we have criticism of it.

you need the explanation so you can try criticizing it. without the explanation, you can't criticize (except to criticize the lack of explanation).

re (2): this seems to contain typos, too confusing to answer.

Elliot Temple: whenever you do X and Y happens, you also did A, B, C, D. how do you know it was X instead of A, B, C or D which caused Y? you need to think about explanations before you can choose which of the infinite correlations to pay attention to.

Elliot Temple: for example, you may have some understanding that Y would be caused by something that isn't separated in space or time from it by very much. that's a conceptual, explanatory understanding about Y which is very important to deciding what may have caused Y.

Andrew Crawshaw: Again, it's not a criticism of the guess. It's a criticism of how the person acted.

The rest of your statements are compatible with what I am saying. Which is just that it can be done and explanations are not necessary either for using something or creating something. As the case of animals surely shows.

You don't know, you took a guess. You can't know before you guess that your guess was wrong.

Elliot Temple: "I guess X causes Y so I'll do X" is the thing being criticized. If the theory is just "Maybe X causes Y, and this is a thing to think about more" then no action is implied (besides thinking and research) and it's harder to criticize. those are different theories.

even the "Maybe X causes Y" thing is suspect. why do you think so? You did 50 million actions in your life and then Y happened. Why do you think X was the cause? You have some explanations informing this judgement!

Andrew Crawshaw: There is no difference between maybe Y and Y. It's always maybe Y. Unless refuted.

Andrew Crawshaw: You are subjectivist and justificationist as far as I can tell. A guess is objective, and if someone, despite the fact that they have bad judgement, guesses correctly, they still guess correctly. Nothing mitigates the precariousness of this situation. Criticism is the other component.

Elliot Temple: If the guess is just "X causes Y", period, you can put that on the table of ideas to consider. However, it will be criticized as worthless: maybe A, B, or C causes Y. Maybe Y is self-caused. There's no reason to care about this guess. It doesn't even include any mention of Y ever happening.

Andrew Crawshaw: The guess won't be criticised, what will be noticed is that it shouts out for explanation and someone might offer it.

Elliot Temple: If the guess is "Maybe X causes Y because I once saw Y happen 20 seconds after X" then that's a better guess, but it will still get criticized: all sorts of things were going on at all sorts of different times before Y. so why think X caused Y?

Elliot Temple: yes: making a new guess which adds an explanation would address the criticism. people are welcome to try.

Elliot Temple: they should not, however, go test X with no explanation.

Andrew Crawshaw: That's good, but one of the best ways to criticise it, is to try it again and see if it works.

Elliot Temple: you need an explanation to understand what would even be a relevant test.

Elliot Temple: how do you try it again? how do you know what's included in X and what isn't included? you need an explanation to differentiate relevant stuff from irrelevant

Elliot Temple: as the standard CR anti-inductivist argument goes: there are infinite patterns and correlations. how do you pick which ones to pay attention to?

Elliot Temple: you shouldn't pick one thing, arbitrarily, from an INFINITE set and then test it. that's a bad idea. that's not how scientific progress is made.

Elliot Temple: what you need to do is have some conceptual understanding of what's going on. some explanations of what types of things might be relevant to causing Y and what isn't relevant, and then you can start doing experiments guided by your explanatory knowledge of physics, reality, some possible causes, etc

Elliot Temple: i am not a subjectivist or justificationist, and i don't see what's productive about the accusation. i'm willing to ignore it, but in that case it won't be contributing positively to the discussion.

Andrew Crawshaw: I am not saying that we have no knowledge. I am saying that we don't have an explanation of the mechanism.

Elliot Temple: can you give an example? i think you do have an explanation and you just aren't recognizing what you have.

Andrew Crawshaw: For instance, washing hands and its link to mortality rates.

Elliot Temple: There was an explanation there: something like taint could potentially travel with hands.

Elliot Temple: This built on previous explanations people had about e.g. illnesses spreading to nearby people.

Andrew Crawshaw: Right, but the use of soap was not derived from the explanation. And that explanation might have been around before, and no such soap was used because of it.

Elliot Temple: What are you claiming happened, exactly?

Andrew Crawshaw: I am claiming that soap was invented for various reasons and then it turned out that the soap could be used for reducing mortality.

Elliot Temple: That's called "reach" in BoI. Where is the contradiction to anything I said?

Andrew Crawshaw: Reach of explanations. It was not the explanation, it was the invention of soap itself. Which was not anticipated or even encouraged by explanations. Soap is invented, used in a context, and an explanation might be applied to it. Then it is used in another context and again the explanation is retroactively applied to it. The explanation does not necessarily suggest more uses, nor need it.

Elliot Temple: You're being vague about the history. There were explanations involved, which you would see if you analyzed the details well.

Andrew Crawshaw: So, what if there were explanations "involved"? The explanations don't add anything to the discovery of the uses of the soap. These are usually stumbled on by accident. And refinements to soaps as well for those different contexts.

Andrew Crawshaw: I am just saying that explanations of how the soap works very rarely suggest new avenues. It's often a matter of trial and error.

Elliot Temple: You aren't addressing the infinite correlations/patterns point, which is a very important CR argument. Similarly, one can't observe without some knowledge first – all observation is theory laden. So one doesn't just observe that X is correlated to Y without first having a conceptual understanding for that to fit into.

Historically, you don't have any detailed counter example to what I'm saying, you're just speculating non-specifically in line with your philosophical views.

Andrew Crawshaw: It's an argument against induction. Not against guesswork informed by earlier guesswork, which often turns out to be mistaken. All explanations do is rule things out. Unless they are rules for use, but these are developed while we try out those things.

Elliot Temple: It's an argument against what you were saying about observing X correlated with Y. There are infinite correlations. You can either observe randomly (not useful, has roughly 1/infinity chance of finding solutions, aka zero) or you can observe according to explanations.

Elliot Temple: You're saying to recognize a correlation and then do trial and error. But which one? Your position has elements of standard inductivist thinking in it.

Andrew Crawshaw: I never said anything about correlation - you did.

What I said was we could guess that x caused y and be correct. That's what I said, nothing more, nothing less.

Andrew Crawshaw: One instance does not a correlation make.

Elliot Temple: You could also guess Z caused Y. Why are you guessing X caused Y? Filling up the potential-ideas with an INFINITE set of guesses isn't going to work. You're paying selective attention to some guesses over others.

Elliot Temple: This selective attention is either due to explanations (great!) or else it's the standard way inductivists think. Or else it's ... what else could it be?

Andrew Crawshaw: Why not? Criticise it. If you have a scientific theory that rules my guess out, that would be interesting. But saying why not this guess and why not that one... Some guesses are not considered by you, maybe because they are ruled out by other expectations, or they do not occur to you.

Elliot Temple: The approach of taking arbitrary guesses out of an infinite set and trying to test them is infinitely slow and unproductive. That's why not. And we have much better things we can do instead.

Elliot Temple: No one does this. What they do is pick certain guesses according to unconscious or unstated explanations, which are often biased and crappy b/c they aren't being critically considered. We can do better – we can talk about the explanations we're using instead of hiding them.

Andrew Crawshaw: So, you are basically gonna ignore the fact that I have agreed that expecations and earlier knowledge do create selective attention, but what to isolate is neither determined by theory, nor by earlier perceptions, it is large amount guesswork controlled by criticism. Humans can do this rapidly and well.

Elliot Temple: Please rewrite that clearly and grammatically.

Andrew Crawshaw: It's like you are claiming there is no novelty in guesswork; if we already have that as part of our expectations, it was not guesswork.

Elliot Temple: I am not claiming "there is no novelty in guesswork".

Andrew Crawshaw: So we are in agreement, then. Which is just that there are novel situations and our guesses are also novel. How we eliminate them is through other guesses. Therefore the guesses are sui generis and then deselected according to earlier expectations. It does not follow that the guess was positively informed by anything. It was a guess about what caused what.

Elliot Temple: Only guesses involving explanations are interesting and productive. You need to have some idea of how/why X causes Y or it isn't worth attention. It's fine if this explanation is due to your earlier knowledge, or it can be a new idea that is part of the guess.

Andrew Crawshaw: I don't think that's true. Again beavers make interesting and productive dams.

Elliot Temple: Beavers don't choose from infinite options. Can we stick to humans?

Andrew Crawshaw: Humans don't choose from infinite options... They choose from the guesses that occur to them, which are not infinite. Their perception is controlled by both physiological factors and their expectations. Novel situations require guesswork, because guesswork is flexible.

Elliot Temple: Humans constantly deal with infinite categories. E.g. "Something caused Y". OK, what? It could be an abstraction such as any integer. It could be any action in my whole life, or anyone else's life, or something nature did. There's infinite possibilities to deal with when you try to think about causes. You have to have explanations to narrow things down, you can't do it without explanations.

Elliot Temple: Arbitrary assertions like "The abstract integer 3 caused Y" are not productive with no explanation of how that could be possible attached to the guess. There are infinitely more where that came from. You won't get anywhere if you don't criticize "The abstract integer 3 caused Y" for its arbitrariness, lack of explanation of how it could possibly work, etc

Elliot Temple: You narrow things down. You guess that a physical event less than an hour before Y and less than a quarter mile distant caused Y. You explain those guesses, you don't just make them arbitrarily (there are infinite guesses you could make like that, and also that category of guess isn't always appropriate). You expose those explanations to criticism as the way to find out if they are any good.

Andrew Crawshaw: You are arguing for an impossible demand that you yourself can't meet, even when you have explanations. It does not narrow it down from infinity. What narrows it down is our capacity to form guesses, which is temporal and limited. It's our brain's ability to process and to interpret that information.

Elliot Temple: No, we can deal with infinite sets. We don't narrow things down with our inability, we use explanations. I can and do do this. So do you. Explanations can have reach and exclude whole categories of stuff at once.

Andrew Crawshaw: But it does not reduce it to less than infinite. Explanations allow an infinite amount of things, most of them useless. It's what they rule out, and the things they can rule out, that is guesswork. And this is done over time. So we might guess this and then guess that x caused y; we try it again and it might not work, so we try to vary the situation and in that way develop criticism and more guesses.

Elliot Temple: Let's step back. I think you're lost, but you could potentially learn to understand these things. You think I'm mistaken. Do you want to sort this out? How much energy do you want to devote to this? If you learn that I was right, what will you do next? Will you join my forum and start contributing? Will you study philosophy more? What values do you offer, and what values do you seek?

Andrew Crawshaw: Mostly explanations take time to understand why they conflict with some guess. It might be that the guess only approximates the truth, and then we find later that it is wrong because we look more into the explanation of it.

Andrew Crawshaw: Elliot, if you wish to meta, I will step out of the conversation. It was interesting, yet you still refuse to concede my point that inventions can be created without explanations. But this is refuted by the creations of animals and many creations of humans. You won't concede this point, and that makes your claims pretty well trivial. Like you need some kind of thing to direct what you are doing. When the whole point is the genesis of new ideas and inventions and theories which cannot be suggested by earlier explanations. It is true that explanations can help, in refining and understanding. But that is not the whole story of human cognition or human invention.

Elliot Temple: So you have zero interest in, e.g., attempting to improve our method of discussion, and you'd prefer to either keep going in circles or give up entirely?

Elliot Temple: I think we could resolve the disagreement and come to agree, if we make an effort to, AND we don't put arbitrary boundaries on what kinds of solutions and actions are allowed to be part of the problem solving process. I think if you make methodology off-limits, you are sabotaging the discussion and preventing its rational resolution.

Elliot Temple: Not everything is working great. We could fix it. Or you could just unilaterally blame me and quit..?

Andrew Crawshaw: Sorry, I am not blaming you for anything.

Elliot Temple: OK, you just don't really care?

Andrew Crawshaw: Wait. I want to say two things.

  1. It's 5 in the morning, and I was working all day, so I am exhausted.

  2. This discussion is interesting, but fragmented. I need to moderate my posts on here, now. And recuperate.

Elliot Temple: I haven't asked for fast replies. You can reply on your schedule.

Elliot Temple: These issues will still be here, and important, tomorrow and the next day. My questions are open. I have no objection to you sleeping, and whatever else, prior to answering.

Andrew Crawshaw: Oh, I know you haven't asked for replies. I just get very involved in discussion. When I do I stop monitoring my tiredness levels and etc.

I know this discussion is important. The issues and problems.

Elliot Temple: If you want to drop it, you can do that too, but I'd want to know why, and I might not want to have future discussions with you if I expect you'll just argue a while and then drop it.

Andrew Crawshaw: Like to know why? I have been up since very early yesterday, like 6. I don't want to drop the discussion I want to postpone it, if you will.

Elliot Temple: That's not a reason to drop the conversation, it's a reason to write your next reply at a later time.

Andrew Crawshaw: I explicitly said: I don't want to drop the discussion.

Your next claim is a non-sequitur. A conversation can be resumed in many ways. I take it you think it would be better for me to initiate it.

Andrew Crawshaw: I will read back through the comments and see where this has led, and then I will post something on the Fallible Ideas forum.

Elliot Temple: You wrote:

Elliot, if you wish to meta, I will step out of the conversation.

I read "step out" as quit.

Anyway, please reply to my message beginning "Let's step back." whenever you're ready. Switching forums would be great, sure :)

Elliot Temple | Permalink | Comments (17)

Yes or No Philosophy Discussion with Andrew Crawshaw

From Facebook:

Alan Forrester:

Assigning weights to ideas never really fitted very well with critical rationalism. Evolution doesn't assign points to genes: they either survive and get copied or they don't. The same is true for an idea: it either solves a problem or it doesn't. This post is relevant to whether there is always a solution to a problem or if we have to weigh ideas to avoid throwing away conflicting ideas that might be okay.

BC: "The same is true for an idea: it either solves a problem or it doesn't."

Well who determines whether a problem is solved or not or even what is the problem? The problem of the basis, empirical or otherwise? The search for the algorithm to end all algorithms?

Elliot Temple: problems are solved, or not, in objective reality. people try to understand this with guesses and criticism, as always. there's no authorities. "who determines...?" is begging for an authoritarian answer just like "who should rule?"

BC: "A problem is perceived as such when the progress to a goal by an obvious route is impossible and when an automatism does not provide an effective answer." (W D Wall) What determines the goal?

Elliot Temple: people are free to determine their own goals, by thinking (guesses and criticism).

BC: So what point is being made?

Elliot Temple: you asked tangential questions. i answered. it was your responsibility for them to have a point.

Andrew Crawshaw: I think, Bruce, that the point is that CR should be about either-or claims about truth and falsity. What I don't understand is why this would be incompatible with measures of verisimilitude. I do not know if either Forrester or Temple are averse to verisimilitude per se. I think they are critical of the idea that we can build a theory of critical preference on top of this, which was Popper's hope.

Am I right in suggesting, Elliot, that you think we should only act when there is a single exit strategy, as it is called, and that if there is not a single exit strategy, there are ways of changing the circumstances so that there is one, thereby getting rid of the need for critical preferences?

Elliot Temple: Ideas either solve a problem or they don't solve it. A criticism either explains why an idea doesn't solve a problem, or fails to. There's no room here for amounts of goodness of ideas, which is a core idea of justificationism. Yes I think critical preferences are a mistake. See:

Andrew Crawshaw: Yes, I have read that. Are you saying that, given that I have a cold, and there are two ways of alleviating it, but they are incompatible solutions, i.e. they cannot be taken together? Say they are both to hand and both are explained as being effective by the scientific theories we have at our disposal. Would you say then that it is not right to take either?

Elliot Temple: What does "that" refer to? I gave 3 links.

Elliot Temple: > Would you say then that it is not right to take either?

no. i don't know where that's coming from.

Andrew Crawshaw: There is only one link showing. And it says Fallible ideas - Yes or No Philosophy.

Elliot Temple: all 3 links are showing, please look in the text of the post.

Andrew Crawshaw: Okay, I was just clearing up whether I might have misinterpreted you. So your theory applies only to what theories we should act on?

Elliot Temple: No. I don't know where you're getting that interpretation either. I think it would help if you quoted the text you're talking about

Andrew Crawshaw: I am responding to your reply to my comment. I asked about single exit strategies; the scenario I gave was not a single exit strategy, and I was wondering how you would answer it.

Elliot Temple: Come up with a theory about what to do that you don't have a criticism of. E.g. "I should take medicine A now b/c i don't have a better idea and it's way better than nothing and it's not worthwhile to spend more time deciding". You can form an idea like that and see if you have a criticism of it or not.

Andrew Crawshaw: But you could substitute Medicine B in your theory and the situation would still be symmetrical.

Elliot Temple: So what?

Elliot Temple: If your theory is that it's best to take one medicine, but not both or neither, and it doesn't matter which one then it's ok to choose arbitrarily or randomly. you don't have a criticism of doing so.

Andrew Crawshaw: Now, you might think my question peculiar. Say I have medicine A and medicine B, and everything is exactly the same as in the previous scenario, except that medicine B is in the bathroom and medicine A is to hand. Could this be part of a preferential decision in favour of A? Even though it's not a criticism of it as a solution?

Elliot Temple: Yes. "Why would I want to go walk to the bathroom for no reason?" is a criticism. Everything else being equal (which it usually isn't), in general I'd rather not go walk to get something.

Andrew Crawshaw: But there is a difference between the two types of criticism: one is of the solution, whether it would actually solve the problem if carried out, and the other has to do with whether there are other factors, the other factors being about preference.

Elliot Temple: The idea "medicine B as a solution to problem 1" and "medicine B as a solution to problem 2" are different ideas. A criticism may apply to only one of them. The criticism that i don't want to walk and get B doesn't matter for B as a solution to problem 1 (cure my illness), but does criticize choosing B for problem 2 (what action should i take in my life right now, with the situation that A and B medicines are equally good, and the only difference is one is further away and i'd rather not go get it).

This is explained at length in my Yes or No Philosophy.

Andrew Crawshaw: Isn't it slightly unhelpful to add your preference to the formulation of the problem? I mean, in other words, that you can just keep extending the formulation of the problem as you think about how to carry it out. It seems to me no different than weighing up preferences.

Elliot Temple: Preferences need to be dealt with by critical thinking, not weighing. Weighing doesn't work. Also explained in my Yes or No Philosophy.

Elliot Temple: Weighing is also criticized in BoI and in various blog posts. Did you read the 3 I linked you? You can find more relevant posts e.g. here which is linked at the bottom of a link i gave you:

Andrew Crawshaw: Maybe I did not communicate properly. The problem is that I want to administer medicine. I have a preference...I would rather not walk. Therefore I go for medicine A. What's changed by reformulating the problem to contain the preference?

Elliot Temple: The point isn't where you notionally put the preference – it's part of the situation in any case. The point is you have a criticism of one option (walking is too hard) and not the other.

Elliot Temple: So one always can and should act on a single, non-refuted idea.

Elliot Temple: You never have to act on a refuted idea, or try to choose between non-refuted ideas by a method other than conjectures and criticism. Such an alternative method would actually be a huge problem for epistemology and basically destroy CR.

Andrew Crawshaw: The administering of medicine B has not been refuted qua alleviating my headache.

Elliot Temple: Right, I said that too.

Andrew Crawshaw: I am not sure of the difference between critical preference and your theory. Seems to be the same theory redescribed. I will have to think about it a little.

Andrew Crawshaw: Thanks for the links, I will read them more carefully over the next week.

Andrew Crawshaw: Oh, Elliot, could you give me the chapter of BoI, where weighing is criticised.

Elliot Temple: 13. Choices

Andrew Crawshaw: Thanks

Elliot Temple | Permalink | Comments (0)

Do Thousands of Error Corrections

This is from a Fallible Ideas email.

I wrote (Sept 2017):

but i also did NOT just accept whatever DD said b/c he said it. i expected him to be right but ALSO challenged his claims. i asked questions and argued, while expecting to lose the debate, to learn more about it. i very persistently brought stuff up again and again until i was FULLY satisfied. lots of people concede stuff and then think it's done and don't learn more about it, and end up never learning it all that well. sometimes i thought i conceded and said so, but even if i did, i had zero shame about re-opening any topic from any amount of time ago to ask a new question or ask how to address a new argument for any side.

i also fluidly talked about arguments for ANY side instead of just arguing a particular side. even if i was mostly arguing a particular side, i'd still sometimes think of stuff for DD's side and say that too. ppl are usually so biased and one-sided with their creativity.

after i learned things from DD i found people to discuss them with, including people who disagreed with them. then if i had any trouble thoroughly winning the debate with zero known flaws on my side, zero open problems, zero unanswered criticisms, etc, then i'd go back to DD and expect more and better answers from him to address everything fully. i figured out lots of stuff myself but also my attitude of "DD is always right and knows everything" enabled me to be INFINITELY DEMANDING – i expected him to be a perfect oracle and just kept asking questions about anything and everything expecting him to always have great answers to whatever level of precision, thoroughness, etc, i wanted. when i wasn't fully convinced by every aspect of an answer i'd keep trying over and over to bring up the subject in more ways – state different arguments and ask what's wrong with them, state more versions of his position (attempting to fix some problem) and ask if that's right, find different ways to think about a question and express it, etc. this of course was very useful for encouraging DD to create more and better answers than he already knew or already had formulated in English words.

i didn't 100% literally expect him to know everything, but it was a good mantra and was compatible with questioning him, debating him, etc. it's important to be able to expect to be mistaken and lose a debate and still have it, eagerly and thoroughly. and to keep saying every damn doubt you have, every counter-argument you think of, to address ALL of them, even when you're pretty convinced by some main points that you must be badly wrong or ignorant.

anyway the method of not being satisfied with explanations until i'd explained them myself to teach others and win several debates – with NO outstanding known hiccups, flaws, etc – is really good. that's the kind of standard of knowledge people need.

Anne B replied (Sept 2017):

Is this a model you recommend for the rest of us to learn? I can give it a try but I don't think it'll be easy for me for two reasons.

1) I've spent decades trying to be a person who DOESN'T argue. What I usually do when someone says something I don't agree with is stop talking about it. I don't want to rock any boats or get anyone mad at me, especially if I'm wrong.

2) I don't really believe that I could very often reach a point of understanding something so well that I could easily refute any competing arguments. I picture myself asking a question here, someone giving an answer I don't fully believe or understand, then doing a bit of arguing back and forth but never reaching a point where we both understand and agree. I'd give up long before that, not wanting to press the issue, and just "agree to disagree" in my mind. Out loud I might concede. Do you really think I could succeed at this kind of arguing? (By succeed I mean fully convince myself of anything?)

Why can I write decent sentences but Kate and most people are bad at it? (See the "Running your own life" discussion from today.)

Because I found thousands of flaws with my writing in the past (including by listening to criticism) and made efforts to fix those flaws.

I did thousands of error corrections. That's what it takes to be good at something which is moderately difficult.

doing thousands of error corrections requires an attitude towards life and learning. you have to be interested in mistakes, including small mistakes, and make changes to address them.

it also requires being able to make changes without it being a huge cost. if changing anything is super expensive, you'll only do it for BIG fixes. you need changing to be cheap to do it thousands of times.

there's no other way to build up skill. you need to be able to make changes cheaply and do thousands of them. and the changes should focus on error correction.

anyone could do this but most people don't want to. and many people have lots of anti-change stuff in their minds getting in the way. but the disinterest in error correction is problem number one. if people cared enough, then they could start a series of enthusiastic attempts to do something about their change-is-expensive problem.

Elliot Temple | Permalink | Comments (3)

Human Problems and Abstract Problems

This is an email I wrote in July 2013. I'm replying to David Deutsch (Feb 2001) who is in regular yellow quotes and was addressing the topic, Are common preferences always possible?. Two quote levels is Demosthenes (Feb 2001).

Susan Ramirez asked (Feb 1997):

Why do you believe that it is always possible to create a common preference?

Sarah Lawrence replied (Jan 2001):

This question is important because it is the same as - Are there some problems which in principle cannot be solved? Or, when applied to human affairs: - Is coercion (or even force, or the threat of force) an objectively inevitable feature of certain situations, or is it always the result of a failure to find the solution which, in principle, exists?

David Deutsch begins his reply:

I think that both Sarah and Demosthenes (below) somewhat oversimplify when they identify 'avoiding coercion' with 'problem-solving'. For instance, Sarah says "This question ... Is the same as[:] Are there some problems

Let's watch out for different uses of the word "problem". [This unquoted material is Elliot writing.]

which in principle cannot be solved?" Well, in a sense it is the same issue. But due to the imprecision of everyday language, this also gives the impression that avoiding coercion depends on everyone adopting the same theory (the solution, the common preference) about whatever was at issue. In fact, that is seldom literally the case, because the parties' conceptions of what is 'at issue' typically change quite radically during common-preference finding. All that is necessary is that the participants change to states of mind which (1) they prefer to their previous states, and (2) no longer cause them to hurt each other.

In other words, common preferences can often be much narrower than it may first appear. You needn't agree about everything, or even everything relevant, but only enough to proceed without hurting (TCS-coercing) each other (or oneself in the case of self-conflicts).

[This next section has two levels of quoting and is Demosthenes. The black bar indicates an additional level of quoting. Two levels means that I'm quoting David Deutsch quoting it.]

I agree that this question is important, though I would offer instead the following two elucidating questions:

In the sphere of human affairs:

  1. Are there any problems that would remain unavoidably insoluble even if they could be worked on without any time and resource limits?

  2. Are there any problems that are unavoidably insoluble within the time and resource limits of the real life situations in which they arise?

The word "problem" in both of these is ambiguous.

Problem-1: (we might call it "human problem"): "a matter or situation regarded as unwelcome or harmful and needing to be dealt with and overcome"

Problem-2: (we might call it an "abstract problem"): "a thing that is difficult to achieve or accomplish"

There are problems, notionally, like going to the moon. But no one gets hurt unless a person has the problem of going to the moon. Problem-1 involves preferences, and the possibility of harm and TCS-coercion. And it is the type of problem which is solved by common preferences.

Problem-2, inherently, does not have time or resource limits, because the universe is not in a hurry, only people are.

So, are there any problems which are insoluble with the time and resource limits of real life situations? Not problem-2 type, because those do not arise in people's life situations, and they do not have time or resource limits.

And as for problem-1 type problems, those are always soluble (within time/resource constraints), possibly involving changing preferences. (BTW, as a general rule of thumb, in non-trivial common preference finding, all parties always change their initial preferences.)

An example:

problem-2: adding 2+2 (there is no time limit, no resource limit -- btw time is a type of resource)

problem-1: adding 2+2 within the next hour for this math test (now there are resource issues, preferences are involved)

Another way to make the distinction is:

problem-1: any problem which could TCS-coerce (hurt) someone

problem-2: any problem which could not possibly ever TCS-coerce (hurt) anyone

problem-2s are not bad. Not even potentially. Problem-1s are bad if and only if they TCS-coerce anyone. A problem like 2+2=? cannot TCS-coerce anyone, ever. There's just no way. It takes a different problem like, "A person asked me what 2+2 is, and I wanted to answer" to have the potential for TCS-coercion.

Notice solving this different problem does not necessarily require figuring out what 2+2 is. Solving problem-1s never requires solving any associated problem-2s, though that is often a good approach. But it's not necessary. So the fact that various problem-2s won't be solved this year need not hurt anyone or cause any problem-1s -- with their time limits and potential for harm -- to go unsolved.

I believe that the answer to question (1) is, no -- there are no human problems that are intrinsically insoluble, given unbounded resources.

This repeated proviso "given unbounded resources" indicates a misconception, I think. The answer to (2) is, uncontroversially, yes. Of course there exist disagreements -- both between people and within a person -- that take time to resolve, and many will not be resolved in any of our lifetimes.

I think this is unclear about the two types of problems. While it agrees with me in substance, it defers to ambiguous terminology that basically uses unsolved problem-2s to say there are insoluble problems, and tries to imply it's now talking about problem-1s.

There is a mix-up of failure to solve an abstract problem, like figuring out the right theory of physics (which two friends might disagree about), with failure to solve human problems, like the type that make those friends hurt each other.

It's harmless to have some disagreements that you "agree to disagree" about, for example. But if you can't agree to disagree, then the problem is more dangerous and urgent.

It's uncontroversial that people have unsolved abstract problems for long periods of time, e.g. they might be working on a hard math problem and not find the answer for a decade. And their friend might disagree with them about the best area to look for a solution.

But so what?

Human problems are things like, "I want to solve the problem this week" (maybe you should change your preference?) or "I want to work on the math problem and find good states of mind in regard to it, and enjoy making progress" (this human problem can easily be solved while not solving the harmless abstract problem).

But that has nothing to do with the question being discussed here.

Right because of the confusion over different meanings of "problem".

The fact that after 25 years of almost daily attention to the conflict between quantum theory and general relativity I have failed to discover a theory that I prefer to both (or indeed to either), does not indicate that I have "failed to find a common preference"

Right. Common preferences do not even apply to problem-2s, only problem-1s.

either within myself, or with other proponents of those theories, in the sense that interested Susan Ramirez. I have not found a preferred theory of physics, but I have found successively better states of mind in regard to that problem, each the result of successive failures to solve it.

However this view is only available to those of us who believe that for all moral problems there exists, in principle, a unique, objectively right solution. If you are any kind of moral relativist, or a moral pluralist (as many people seem to be) then you can have no grounds for arguing that all human disputes are in principle soluble.

It is only in spheres where the objective truth of the matter exists and is in principle discoverable, that the possibility of converging on the truth guarantees that all problems are, in principle, soluble.

I agree that for all moral problems

No clear statement of which meaning of problem this refers to.

there exists an objectively right solution, and that this is why consensual relationships -- and indeed all liberal institutions of human cooperation, including science -- can work. The mistake is to suppose that if one does not believe this, it will cease to be true. For people to be able to reach agreement, it suffices that, for whatever reason, they seek agreement in a way that conforms to the canons of rationality and are, as a matter of fact, converging on a truth. Admittedly it is a great impediment if they think that agreement is not possible, and very helpful if they think that it is, but that is certainly not essential: many a cease-fire has evolved into a peace without a further shot being fired. It is also helpful if they see themselves as cooperating in discovering an objective truth, and not merely an agreement amongst themselves, but that too is far from essential: plenty of moral relativists have done enormous good, and made enormous moral progress -- for instance towards creating institutions and traditions of tolerance -- without ever seeking an objective truth, or realising that they were finding one. In fact many did not realise that they were creating agreement at all, merely a tolerance of disagreement. And incidentally, they were increasing the number of unsolved problems in society by promoting dissent and diversity.

Increasing the number of unsolved problem-2s, but decreasing the number of unsolved problem-1s.

What we need to avoid, both in society and in our own minds, is not unsolved problems,

Ambiguous between problem-1s and problem-2s.

not even insoluble problems,

Ambiguous between problem-1s and problem-2s.

Also doesn't seem to be counting preference changing as a solution, contrary to the standard TCS attitude which regards preference changing as a normal part of common preference finding, and part of problem solving.

but a state in which our problems are not being solved

But this time it means problem-1s.

-- where thinking is occurring but none of our theories are changing.

I believe that the answer to question (2) is yes -- human problems that cannot be solved even in principle, given the prevailing time and resource constraint, are legion. Albeit, nowhere near as legion as non-TCS believers would have it. My main argument in support of this thesis is based on introspection: Let him or her who is without ongoing inner conflict proffer the first refutation.

This is a bit like saying, at the time of the Renaissance, that science is impossible because "let him who is without superstition proffer the first refutation". The whole point about reason is that it does not require everything to be right before it can work. That is just another version of the "who should rule?" error in politics. The important thing is not to start out right, but to try to set things up in such a way that what is wrong can be altered. The object of the exercise is not to create a chimerical (and highly undesirable!) problem-free state,

A problem-2-free state is bad. As in, not having any problems we might like to work on. This is bad because it creates a very hard problem-1: the problem of boredom (having no problem-2s to work on, while wanting some will cause TCS-coercion).

A problem-1-free state is ... well there is another ambiguity. Problem-1s are fine if one is rationally coping with them. It's not bad to have human problems and deal with them. What's bad is failure to cope with them, i.e. TCS-coercion.

How can we tell which/when problem-1s get bad? When they do harm (TCS-coercion).

To put it another way: problem-1s are bad when one acts on an idea while having a criticism of it. But if it's just the potential for such a thing in the future, that's part of normal life and fine.

but simply to embark upon actually solving problems rather than being stuck not solving any (or not solving one's own, anyway). Happiness is solving one's problems, not 'being without problems'.

"one's problems" refers only to problem-1s, but "being without problems" and "actually solving problems" are ambiguous.

In other words, I suggest that there isn't a person alive whose creativity is not diminished in some significant way by the existence of inner conflict. Or rather dozens, if not hundreds or thousands, of inner conflicts.

Yes. But having diminished creativity (compared to what is maximally possible, presumably) is and always will be the human condition. Minds are fallible. Fortunately, it is not one's distance from the ideal state that makes one unhappy, but an inability to move towards it.

And if you cannot find a common preference for all the problems that arise within your own mind, it is a logical absurdity to expect to be able always to find a common preference with another, equally conflicted, mind.

Just as well, really. If you found a common preference for all the problems within your own mind, you'd be dead. If you found a common preference for all the problems you have with another person with whom you interact closely, you'd be the same person.

However, and it is an important however, to approach this goal we must dare to face the inescapable facts that, in practice, it is by no means always possible to find a common preference; that therefore it is not always possible to avoid coercion;

This does not follow, or at least, not in any useful sense. Demosthenes could just as well have made the identical comments about science:

[Demosthenes could have written:]

In the sphere of science:

  1. Are there any problems that would remain unavoidably insoluble even if they could be worked on without any time and resource limits?

  2. Are there any problems that are unavoidably insoluble within the time and resource limits of the real life situations in which they arise?

I believe that the answer to question (1) is, no -- there are no scientific problems that are intrinsically insoluble, given unbounded resources.

Right. And why should it follow from this that a certain minimum of superstition is unavoidable in any scientific enterprise, and that people who try to reject superstition on principle will undergo "intellectual and moral corrosion" if, as is inevitable, they fail to achieve this perfectly -- or even if they fail completely?

As Bronowski stressed and illustrated in so many ways, doing science depends on adopting a certain morality: a desire for truth, a tolerance, an openness to change, an awareness of one's own fallibility and the fallibility of authority, yet also a respect and understanding for tradition ... (It's the same morality as TCS depends on.) And yes, no scientist has ever been entirely free from irrationality, superstition, dogma and all the things that the canons of rationality say are supposed to be absent from a true scientist's mind. Yet none of that provides the slightest argument that a person entering upon a life of science is likely to become unhappy

Tangent: this is a misuse of probability. Whether that happens depends on human choices not chance.

in their work, is likely to find their enterprise ruined either because they encounter a scientific problem that they never solve, or because they fail to rid their own minds of certain superstitions that prevent them from solving anything.

The thing is, all these sweeping statements about insoluble problems and unlimited resources, though true (some of them trivially, some because of fallibilism) are irrelevant to the issue here, of whether a lifestyle that rejects coercion is possible and practical in the here and now. A TCS family can and should reject coercion in exactly the same sense, and by the same means, and for the same reason, as a scientist can and should reject superstition. And to the same extent: utterly. In neither case can the objective ever be achieved perfectly, with finite resources. In neither case can any guarantee be given about what the outcome will be. Will they be happier than if they become astrologers instead? Who knows? And certainly good intentions alone can guarantee nothing. In neither case can the enterprise be without setbacks and failures, perhaps disasters. And in neither case is any of this important, because ... well, whatever goes wrong, however badly, superstition is going to make it worse.

-- David Deutsch

Josh Jordan wrote:

I think it makes sense to proceed according to the best plan you have, even if you know of flaws in it.

What if those flaws are superstition? Or TCS-coercion?

Whatever happens, acting against one's best judgment -- e.g. by disregarding criticisms of flaws one knows -- is only going to make things worse.

Elliot Temple | Permalink | Comments (0)