Ayn Rand Lexicon Quote Checking

In The Ayn Rand Lexicon (book not website), Harry Binswanger wrote in the Honesty section:

Self-esteem is reliance on one’s power to think. It cannot be replaced by one’s power to deceive. The self-confidence of a scientist and the self-confidence of a con man are not interchangeable states, and do not come from the same psychological universe. The success of a man who deals with reality augments his self-confidence. The success of a con man augments his panic.

The intellectual con man has only one defense against panic: the momentary relief he finds by succeeding at further and further frauds.
[“The Comprachicos,” NL, 181.]  

The wording, book, and page number are correct, but the quote is from "The Age of Envy", not "The Comprachicos".

In the self-esteem section, Binswanger gives the correct cite for part of the same quote:

Self-esteem is reliance on one’s power to think. It cannot be replaced by one’s power to deceive. The self-confidence of a scientist and the self-confidence of a con man are not interchangeable states, and do not come from the same psychological universe. The success of a man who deals with reality augments his self-confidence. The success of a con man augments his panic.
[“The Age of Envy,” NL, 181.]

Justin Mallone found this error and I checked it myself too. I asked him to look into Lexicon quoting accuracy after I found multiple citation errors on the Lexicon website that weren't in the book. This is the only error he found in the book. He did find citation and formatting errors on the website. None of the errors, even on the website, are wording errors. (Note there's a second website for the Lexicon. I compared the "Automatization" page and the only difference I found was whether there were spaces around dashes or not.)

I checked 4 quotes originally and Justin checked 16 more. So the book had 1 partial citation error in 20 quotes, but the website had 5 errors in 20 quotes (counting at most one error per quote). The wordings seem to be reliable, unlike in The Beginning of Infinity, and the Lexicon book seems to be pretty reliable. It seems like a serious effort went into getting details right for the book, but the process of creating the website was sloppier and introduced many small errors.

Even the Lexicon website is much better than David Deutsch's use of quotations in The Beginning of Infinity. Deutsch frequently doesn't give sources, makes frequent changes to wordings (with no indicator of any change), changes punctuation too, and uses ellipses and square brackets incorrectly. Even worse, several of the quotes appear to be made up.


Elliot Temple | Permalink | Message (1)

Objectivism, Certainty, Peikoff, More

This is lightly edited from 2013 emails I wrote to the FI list. I was talking about Peikoff's Objective Communication audio lectures.

First Email

Ayn Rand (AR) advocates fallibilism. In a serious, substantive way, in print.

So far from Leonard Peikoff, I've heard a lot of stuff that sounds potentially incompatible with fallibilism, such as advocating certainty, with no effort made to explain how he means something compatible with fallibilism.

I've heard him dismiss some fallibilist arguments, which are true, as ridiculously stupid, without argument.

I've heard him define skepticism as a denial that certainty is possible, then talk about it as a denial that knowledge is possible. The unstated and unargued premise is that knowledge requires certainty (he didn't mention Justified True Belief, but is that what he has in mind?). He has not explained how that premise is compatible with fallibilism.

I have not heard him advocate fallibilism like Rand has.

In addition to certainty, Peikoff has said perfection is possible. He clarified that he meant contextual perfection. Perhaps he also thinks that only contextual certainty is possible. I think this is a misuse of words. He hasn't explained why it isn't. And he keeps talking about "certainty" without any mention of "contextual certainty". If he means something rather different than a typical infallibilist meaning, shouldn't he be clear about it?

Further, when he attacks skeptics for rejecting certainty, it's unclear that those skeptics are all rejecting "contextual certainty" (if that is what he actually means but doesn't say). There are skeptics who (correctly) refute non-contextual certainty (which is infallibilism). If a skeptic refutes non-contextual certainty, and an anti-skeptic like Peikoff advocates contextual certainty, then they haven't necessarily contradicted each other. Peikoff talks about these subjects but doesn't deal with points like this. But he doesn't just omit stuff; he seems to be contradicting points like this -- and therefore be mistaken -- and he fails to explain how he isn't mistaken.

Peikoff focusses his attacks on the worst kinds of skeptics and acts like he has criticized the entire category of all skepticism. He doesn't mention or discuss that there are different types of skeptics (e.g., those rejecting all knowledge vs. those rejecting only non-contextual certainty). He seems to lump fallibilists in with skeptics, though I have no doubt he wouldn't want to lump AR in with skeptics, so his position isn't explained well.

If you want to exclude people like myself and Karl Popper (and AR) from being skeptics, fine. But then you can't just define skepticism as rejecting certainty! Unless you add a bunch of clarifications and qualifications about what you mean, Popper absolutely does reject certainty! (As do I.) You'd also have to stop presenting it as skeptics and non-skeptics, only two categories, since Popper and Peikoff would be non-skeptics with major differences in views. (I don't normally present it as skeptics and non-skeptics, but Peikoff did.)


These comments above are from his Objective Communication lectures. Epistemology is not the primary topic, but he keeps talking about it. (He's also talked about induction and empiricism a number of times. That material is also problematic.)

I've never seen AR do it like Peikoff. Whenever she talks about these things I have a tiny fraction of the objections. But when it's Peikoff (or Binswanger or I think many other Objectivists) then I see lots of problems.


On another note, Peikoff's comments about how awful school is are worthwhile. They are directed especially at grad school and university. He talks about how much it trashed his mind (despite his best efforts not to let it do that), and how dangerous it is and hard to stay rational, and how much time and effort it took to recover.

In a way, it excuses his other mistakes. He actually read some stuff from a paper he wrote in grad school. He's improved a lot since then!! So that's great. One can respect how far he's come and perhaps sympathize a bit with some of his mistakes.

I for one have the advantage of avoiding a lot of the tortures Peikoff endured at school. It really helps. Yeah, sure, K-12 sucked but I never took it seriously after around 6th grade or maybe earlier. It's so much worse and harder if you take it seriously.

(But I fear he wouldn't appreciate this perspective much. I fear he'd say he's super awesome now and not making mistakes, and I'm wrong about epistemology -- but without wishing to debate it to a conclusion in a serious way, as I am willing to do. If he rejects the attitudes and role of a learner still making progress, then it becomes hard to sympathize with errors. If he also isn't open to answering criticisms, then it's even worse.)


One of the worrisome things that does apply to AR herself is how few philosophers Objectivists find to appreciate (I learned from AR, Popper, Goldratt and others; Peikoff doesn't seem to have gotten much value from people besides AR). It's a problem with Peikoff but also with AR. She was aware of Mises and Szasz. But she missed Popper, Burke, Godwin and Feynman, for example. Is there any excuse for that? Godwin is obscure but Szasz was aware of him! Mises was aware of Godwin too, but Mises read a translation and totally got the wrong idea. Szasz and Mises were also aware of Burke. I'm not sure how much Mises knew about Burke, but Szasz had a good understanding. Szasz also knew a lot about Popper, and had some familiarity with Feynman. So if Szasz could find all these philosophers, and learn from them, what is AR's excuse?

And of course I can and did find and study Godwin and others too. I sought out good philosophy with some success. It's not trivial to find, but it's worth the effort.

Second Email

Peikoff's on-topic comments about Objective Communication continue to be good. No monumental breakthrough, but lots of solid points explained well.

Peikoff said certainty is conclusiveness.

If we figure he meant contextual conclusiveness (if he didn't, that's worse!), that's Popper-compatible. Popperians reach what they call "tentative" conclusions, meaning they are the current conclusions but could need to be reevaluated if the context changes (e.g. something new is thought of).

But can something called "tentativity" really be what Peikoff has in mind for "certainty"? I don't think so. If you listen to how he talks about it, and his examples, they do not fit this interpretation of the definition. But he doesn't clarify the correct definition or the way to interpret this one.

No comments are made about how his definition is compatible with this other thing he doesn't mean, or what's wrong with this thing. He doesn't address it. I don't think he's thought of it.

Long story short, what's going on is Peikoff is mistaken about the topic so his comments come off confused from the perspective of someone who already understands what he's missing.

Peikoff is targeting his comments against ideas much worse than his own. He's defeating what he sees as his (awful, pathetic) rivals. But why hasn't he engaged with any better rivals?

I don't think it's pure ignorance. For one thing, that would not be excusable: he should have checked for the existence of some better ideas.

But also, Peikoff knows (and endorses) Binswanger, and Binswanger knows of Popper. Binswanger's attitude to Popper is a combination of extreme ignorance and extreme venom (with extra features such as misquoting Popper and then neither caring nor correcting it). Some other Objectivists also know of Popper but reject him without rational, well-informed arguments or an adequate understanding of his ideas.

I suppose I should look these issues up in OPAR. But he's supposed to be talking to an audience with merely some knowledge of Objectivism. So if you've read everything AR says about this, that ought to be (more than) enough. His comments weren't meant only for audiences that have read OPAR.


Elliot Temple | Permalink | Messages (0)

Rand, Popper and Fallibility

I wrote this at an Objectivist forum in 2013.


http://rebirthofreason.com/Forum/Dissent/0261.shtml

Popper is by no means perfect. The important thing is the best interpretations (that we can think of) of his best ideas. The comment below about "animals" is a good example. I do not agree with his attitude to animals in general, and I'm uncomfortable with this statement. However, everything he said about animals (not much) can be removed from his epistemology without damaging the important parts.

Popper made some bad statements about epistemology, and some worse ones about politics. I don't think this should get in the way of learning from him. That said, I agree with Popper's main points below.

1) Can you show if Popper ever fully realized that the falsification of a universal positive proposition is a necessary truth? In other words, if a black swan is found, then the proposition "All swans are white" is falsified, but more than that, it is absolutely falsified (which is a form of absolute knowledge/absolute certainty)? Even if you can't, please discuss.

No, Popper denied this. The claim that we have found a black swan is fallible, as is our understanding of its implications.

Fallibility is not a problem in general. We can act on, live with, and use fallible knowledge. However, it does start to contradict you a lot when you start saying things like "absolute certainty".

Rand isn't fully clear about this. Atlas Shrugged:

"Do not say that you're afraid to trust your mind because you know so little. Are you safer in surrendering to mystics and discarding the little that you know? Live and act within the limit of your knowledge and keep expanding it to the limit of your life. Redeem your mind from the hockshops of authority. Accept the fact that you are not omniscient, but playing a zombie will not give you omniscience—that your mind is fallible, but becoming mindless will not make you infallible—that an error made on your own is safer than ten truths accepted on faith, because the first leaves you the means to correct it, but the second destroys your capacity to distinguish truth from error. In place of your dream of an omniscient automaton, accept the fact that any knowledge man acquires is acquired by his own will and effort, and that that is his distinction in the universe, that is his nature, his morality, his glory.

"Discard that unlimited license to evil which consists of claiming that man is imperfect. By what standard do you damn him when you claim it? Accept the fact that in the realm of morality nothing less than perfection will do. But perfection is not to be gauged by mystic commandments to practice the impossible [...]

Here Rand accepts fallibility and only rejects misuses like claiming man is "imperfect" to license evil. Man's imperfection is not an excuse for any evil -- agreed.

Rand has just acknowledged that man and his ideas and achievements are fallible. But then she decides to demand moral "perfection". Which must mean some sort of contextual, achievable perfection -- not the sort of infallible, omniscient perfection Popper rejects and Rand acknowledges as impossible.

It's the same when Rand talks about "certainty" which is really "contextual certainty" which is open to criticism, arguments, improvement, changing our mind, etc... (Only in new contexts, but every time anyone thinks of anything, or any time passed, then the context has changed at least a little. So the new context requirement doesn't cause trouble.)

2) Can you offer something to redeem Popper of seemingly damning quotes such as:

In so far as a scientific statement speaks about reality, it must be falsifiable: and in so far as it is not falsifiable, it does not speak about reality.

... which preemptively denies the possibility of axiomatic concepts (i.e., the possibility of statements that speak about reality, but are not, themselves, falsifiable).

Any statement which speaks about reality is potentially falsifiable (open to the possibility of criticism using empirical evidence) because, if it speaks about reality, then it runs the risk of being contradicted by reality.

Popper does deny axiomatic concepts, meaning infallible statements. Statements that you couldn't even try to argue with, potentially criticize, question, or improve on. All ideas should be open to the possibility of critical questioning and progress.

There is a big difference between open to refutation and refuted. What's wrong with keeping things open to the potential that, if someone has a new idea, we could learn better in the future?

"If realism is true, if we are animals trying to adjust ourselves to our environment, then our knowledge can be only the trial-and-error affair which I have depicted. If realism is true, our belief in the reality of the world, and in physical laws, cannot be demonstrable, or shown to be certain or 'reasonable' by any valid reasoning. In other words, if realism is right, we cannot expect or hope to have more than conjectural knowledge."

... which preemptively denies the possibility of arriving at a necessary truth about the world.

Conjectural knowledge (or trial-and-error knowledge) is Popper's term for fallible knowledge. It's objective, effective, connected to reality, etc, but not infallible. We improve it by identifying and correcting errors, so our knowledge makes progress.

We cannot establish our ideas are infallibly correct, or even that they are good or reasonable. Such claims (that some idea is good) never have authority. Rather, we accept them as long as we don't find any errors with them.

I think this is different than Objectivism, but correct. Well, sort of different. The following passage in ITOE could be read as something kind of like a defense of this Popperian position (and I think that is the correct reading).

One of Rand's themes here, in my words, is that fallibility doesn't invalidate knowledge.

The extent of today’s confusion about the nature of man’s conceptual faculty, is eloquently demonstrated by the following: it is precisely the “open-end” character of concepts, the essence of their cognitive function, that modern philosophers cite in their attempts to demonstrate that concepts have no cognitive validity. “When can we claim that we know what a concept stands for?” they clamor—and offer, as an example of man’s predicament, the fact that one may believe all swans to be white, then discover the existence of a black swan and thus find one’s concept invalidated.

This view implies the unadmitted presupposition that concepts are not a cognitive device of man’s type of consciousness, but a repository of closed, out-of-context omniscience—and that concepts refer, not to the existents of the external world, but to the frozen, arrested state of knowledge inside any given consciousness at any given moment. On such a premise, every advance of knowledge is a setback, a demonstration of man’s ignorance. For example, the savages knew that man possesses a head, a torso, two legs and two arms; when the scientists of the Renaissance began to dissect corpses and discovered the nature of man’s internal organs, they invalidated the savages’ concept “man”; when modern scientists discovered that man possesses internal glands, they invalidated the Renaissance concept “man,” etc.

Like a spoiled, disillusioned child, who had expected predigested capsules of automatic knowledge, a logical positivist stamps his foot at reality and cries that context, integration, mental effort and first-hand inquiry are too much to expect of him, that he rejects so demanding a method of cognition, and that he will manufacture his own “constructs” from now on. (This amounts, in effect, to the declaration: “Since the intrinsic has failed us, the subjective is our only alternative.”) The joke is on his listeners: it is this exponent of a primordial mystic’s craving for an effortless, rigid, automatic omniscience that modern men take for an advocate of a free-flowing, dynamic, progressive science.

One of the things that stands out to me in discussions like this is that all today's Objectivists seem (to me) more at odds with Popper than Rand's own writing is.

I'll close with one more relevant ITOE quote:

Man is neither infallible nor omniscient; if he were, a discipline such as epistemology—the theory of knowledge—would not be necessary nor possible: his knowledge would be automatic, unquestionable and total. But such is not man’s nature. Man is a being of volitional consciousness: beyond the level of percepts—a level inadequate to the cognitive requirements of his survival—man has to acquire knowledge by his own effort, which he may exercise or not, and by a process of reason, which he may apply correctly or not. Nature gives him no automatic guarantee of his mental efficacy; he is capable of error, of evasion, of psychological distortion. He needs a method of cognition, which he himself has to discover: he must discover how to use his rational faculty, how to validate his conclusions, how to distinguish truth from falsehood, how to set the criteria of what he may accept as knowledge. Two questions are involved in his every conclusion, conviction, decision, choice or claim: What do I know?—and: How do I know it?


Elliot Temple | Permalink | Messages (0)

Popperian Alternative to Induction

I wrote this on an Objectivist discussion forum in 2013.


http://rebirthofreason.com/Forum/Dissent/0265.shtml

I wrote:

Observe what? There are always many many things you could observe. Real scientific observation is selective.

Perform which action? There are many many actions one could perform. Real scientific action is selective.

Which patterns? There's always many many patterns.

In each case, being selective requires complex (critical) thinking. Ideas come first. Induction is supposed to explain how thinking works, but actually presupposes it.

Merlin Jetton replied:

Okay. Give us your answer to these questions. Please give us simple methods that cover all possible cases. How do we delimit those infinitely many possible conjectures?

(Following Popper.) We don't run into all the same problems because we use different methods in the first place.

We don't start with observation, scientific experiment, or finding patterns. All of those come later, after you already have various ideas. Then you do them according to your ideas. This is not problematic in general. It is a problem when you say stuff is "step 1" that actually presupposes ideas, and then claim your set of steps is a solution in epistemology and is how we get ideas.

We have a different approach that is not like induction and avoids many of induction's problems. By using different methods some problems never come up. We never have the problem of figuring out what to observe before having ideas, for example, because we say ideas come first before observations.

How are ideas learned then? Not from observations. Ideas come first. That's not to say observations are excluded. Observations are very useful. But first you need some ideas. Then you can observe (selectively, according to your ideas about what is important, what is interesting, what is notable, what is relevant to problems of interest, what clashes with your expectations, etc, etc ... and if your way of observing doesn't work out you can improve it with criticism, you can change and adjust it) and use the observations to help with further ideas (in a critical role – they rule things out).

Now this is a hard issue, you haven't read the literature, and you shouldn't be too ambitious about how much you expect to learn from a summary. But anyway, because it's hard, I'm going to split it up. First we'll consider an adult who wants to learn something. Then we could talk about how a child gets started. I'll save that for later if the adult explanation goes over OK. The child is the harder case. I think it's too much to do the child first, all at once.

So, one of Popper's insights is that starting places aren't so important. I'm guessing this sounds dumb to you, because you're a foundationalist and think you have to start with the right foundations/premises/basis and then build up from there, step by step, making sure not to introduce errors or contradictions as you go. And Popper criticized and rejected that approach and offered a significantly different approach.

So let me try to explain what Popper's approach is like. People make mistakes. People are fallible. Errors are common. People mess up all the time. This isn't skepticism. People also get things right, learn, acquire knowledge, make scientific progress, etc, etc... But it's important to understand how easy it is to make mistakes. Knowledge is possible but hard to come by. To get knowledge you have to put a ton of effort into dealing with the problem of mistakes. I think if you read this the right way, you could agree with it. Objectivism recognizes that lots of philosophies go wrong and using the right methods is important and makes a big difference and some stuff like that.

So, OK, error is common and a big part of epistemology and philosophy is how you deal with error. What are you going to do about it? One school of thought tries to avoid errors. You use the right methods and then you get the right answers. That sounds very plausible but I don't think it's the right approach. I'll try to talk about Popper's approach instead. Popper's approach is you do try to avoid errors but you're never going to avoid all of them in the first place. That's not the primary most important thing. Whatever you do, some errors are going to get through. What you really have to do is set up mechanisms to identify and correct errors.

Popper applied this approach widely. Take politics and political systems. One of Popper's big ideas about politics is that trying to elect the right ruler is the wrong thing to focus on. Electing the right guy is trying to avoid errors. Yes you should put some effort into that but you can't do it perfectly and it's not the most important issue. What is the most important issue? That errors can be identified and corrected. In politics that means if you elect the wrong guy you find out fast, and you can get rid of him fast and you can get rid of him without violence. Popper called the wrong approach the "Who should rule?" problem and said most political philosophy argues about who should rule, when it should be focussing a lot more on how to set up political systems capable of correcting mistakes about who gets to rule.

What about epistemology? "Which ideas should we start with?" is a bit like "Who should rule?" You're never going to get it perfect and it shouldn't be the primary focus of your attention. Instead you want to set things up so if you start with the wrong ideas you can find out about the mistake and fix it quickly, easily, cheaply.

Error correction is (a lot) more important than starting in a good place. Look at it another way: if you start in a bad place but keep making progress, after a while you'll get to a good place and keep going. But if you start in a good place but aren't correcting errors, there is no progress, things never get better, and long term you're doomed. So error correction is the more crucial thing that you really need.

So how can adults be selective? How can they decide what scientific experiments to do or which actions and results to investigate? How can they decide what patterns to look for? Answer: they already have ideas about that. They can use the ideas they already have. That's OK! They don't need me to tell them some perfect answer. I could give them some advice and there could be some value in it, but it doesn't matter so much. They should start with the ideas they already have, use those, and then if something goes wrong they can make adjustments to try to do something about it. (And they can also philosophically examine their ideas and try to criticize instead of waiting for something noticeable to go wrong.)

In one sense, we're both advocating the same thing. People can and do use the ideas they already have about how to be selective, what issues to focus on, which patterns are notable, and more. But we Popperians know that is what's going on, and know how to keep making progress from there even if people aren't great at it. Inductivists, on the other hand, think they have a method from first principles that is how people think, but actually it smuggles in all sorts of common sense and pre-existing ideas as unexamined, uncriticized premises. And that's a really bad idea. Those premises being smuggled in are good enough to start with, but what you really need to do is examine and criticize them!

I have not addressed how children/infants get started. I also haven't explained how thinking works at a lower level. (Being able to criticize and correct errors requires thinking. How is that done?) We can get to those next if what I'm saying so far goes over OK. Also, the very short answer for how thinking works is that evolution is the only known theory for how knowledge can be created from non-knowledge. Human thinking, at a low level, uses an evolutionary process to create knowledge. (I mean thinking literally uses evolution, not metaphorically. And no, I'm not saying you consciously do that.)
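The evolutionary process meant here is replication with variation and selection. As a toy illustration only (this is my sketch, not the author's model, and the mechanical fitness function is a stand-in for genuine criticism), here is a minimal variation-and-selection loop in Python:

```python
import random

def evolve(target, population_size=20, generations=2000, seed=0):
    """Toy evolutionary search: candidate strings are copied with random
    variation, and the variant that best survives the fitness check is
    kept each generation."""
    rng = random.Random(seed)
    alphabet = "abcdefghijklmnopqrstuvwxyz "

    def fitness(candidate):
        # Number of positions matching the target (higher is better).
        return sum(a == b for a, b in zip(candidate, target))

    def mutate(candidate):
        # Replication with variation: copy the string, change one position.
        i = rng.randrange(len(candidate))
        return candidate[:i] + rng.choice(alphabet) + candidate[i + 1:]

    # Start from arbitrary strings, with no knowledge of the target built in.
    population = ["".join(rng.choice(alphabet) for _ in target)
                  for _ in range(population_size)]
    for _ in range(generations):
        best = max(population, key=fitness)
        if best == target:
            break
        # Selection: keep the best; variation: the rest are mutated copies.
        population = [best] + [mutate(best)
                               for _ in range(population_size - 1)]
    return max(population, key=fitness)
```

Calling `evolve("error correction")` converges on the target from random starting strings. The point it echoes: the starting place barely matters; what does the work is the repeated cycle of variation plus error-elimination.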


Elliot Temple | Permalink | Messages (0)

Deutsch Misquoted Turing

David Deutsch (DD) wrote in Quantum theory, the Church-Turing principle and the universal quantum computer (1985), p. 3:

Church (1936) and Turing (1936) conjectured ... This is called the ‘Church-Turing hypothesis’; according to Turing,

Every ‘function which would naturally be regarded as computable’ can be computed by the universal Turing machine. (1.1)

And from Deutsch's references (p. 19):

Turing, A. M. 1936 Proc. Lond. math. Soc. Ser. 2, 442, 230.

Now we'll compare with Turing's paper: On Computable Numbers, With An Application To The Entscheidungsproblem (1936), p. 230:

the computable numbers include all numbers which could naturally be regarded as computable.

Turing wrote "numbers", but DD misquoted that as "function". Turing also wrote "could" which DD misquoted as "would".

I double checked using two other copies of Turing's paper. (One and two.)

There's also a problem because Deutsch uses what appears to be an italicized block quote. You'd expect the whole block quote to be a quote of Turing, but instead it's a paraphrase. Inside the paraphrase are quotation marks surrounding the misquote of Turing that I criticized.

DD's citation is also incorrect. DD cites Turing's paper to volume 442 of the Proceedings of the London Mathematical Society, but it was actually in volume 42 not 442.

To determine what's correct, we can check how Turing himself cites it. In a correction to his paper, Turing cited himself:

Proc. London Math. Soc. (2), 42 (1936-7), 230-265.

You can also get the correct cite, with volume 42, from the Stanford Encyclopedia of Philosophy or from Wikipedia.

You can also see that the latest volume of the journal, published in 2021, is volume 122. Volume 442 is unlikely to exist for over 100 more years. And the journal's website has archives showing that the Turing article was in volume 42.

Tangentially, I hope this lowers your opinion of academic peer review. DD's paper was published in the Proceedings of the Royal Society of London, a prestigious, peer-reviewed journal founded around 1830. It has published work by many famous scientists.


Thanks to Dec for finding this misquote.

Note that DD has published a lot of misquotes.


Update 2021-07-15: Dec pointed out that a similar Turing misquote is in DD's book The Fabric of Reality:

He [Turing] conjectured that this repertoire consisted precisely of ‘every function that would naturally be regarded as computable’.

No, Turing wrote "all numbers which could" not "every function that would".

It appears that DD got this misquote from his own paper, and also modified it. There's a recurring pattern where every time DD touches a quote, there's a significant chance that he changes something. Here, he took the word "every" which was outside of quote marks in his paper and moved it inside quote marks for his book.


Update 2021-09-14: I contacted the academic publisher (proceedings of the royal society). They looked into the matter and said:

Apologies for the delay in getting back to you on this. A board member has had a look at the paper and does not think the misquote affects the outcome of the research presented in the paper. Although the error in the refences is unfortunate, we do not believe it will prevent readers from finding the correct article. Given the age of the paper we therefore do not think any further action is necessary.

I have several criticisms of this response.

They agree with me that DD misquoted and miscited.

Why won't they put up errata on their website? Is that too hard for them (are they bad at websites?), or do they actually not want to?

An errata notice serves several purposes. Academics working in the field could find out about the issue. People debating the issue could also refer to it – it would, e.g., let a student whose professor repeated the error borrow the journal's authority to correct the professor. It's risky to correct your professor in general, but much easier with an official errata notice to point him to.

Is correcting professors a real issue? I think so because professors have been teaching Deutsch's error (there are some examples posted in the comments below). And they've been doing it out of context. In other words, even if the error did not affect the conclusion of Deutsch's paper, it still can affect other conclusions about other issues. So spreading the error matters, and it has in fact been taught in schools. Also, any reader of the paper may remember the Turing quote and use it for something else, and it may negatively affect the conclusion of their usage, even if it didn't affect the conclusion of Deutsch's paper. (Admittedly, some of the professors don't cite a source and might have been getting the error from Deutsch's book The Fabric of Reality where he repeated a similar error. But the fact that Deutsch put roughly the same error in his book is, IMO, an additional reason to errata it and at least do a little bit to stop the spread of the error.)

If they published an erratum or other note about the error, they could also state their reasons for why they believe the paper's conclusion is unaffected. Other people could consider that reasoning and potentially disagree. This could be an area for critical thinking and truth seeking rather than an unaccountable authority pronouncing judgment for secret reasons. Even if it's no big deal in this case, their general attitude is concerning. How many other judgments do they make with no transparency? What is the nature of those judgments? Are any of those judgments mistaken? Do they gloss over many errors in papers they published? Could they be doing that partly out of bias and not wanting to draw attention to their own involvement in errors?

People expect academic science journals with peer review to have high standards and to be really picky about errors. They are not living up to this reputation. So much for their unlimited interest in truth for the sake of truth or whatever they were supposed to be doing.

They are still sharing the paper electronically and could update it there. Deutsch is still alive and available and could actually write or approve a small update, or they could do an update which is labelled as written by a journal editor not Deutsch.

How did this error happen? How did every step of the publishing process miss it? Did anyone intentionally cause or allow the error? Were any biases involved? They did no post mortem, no root cause analysis, no investigation into their peer review and editorial process, etc.

There are major causes for concern here. This error calls into question how effective their reviewers and editors are. It also calls into question Deutsch's integrity. Maybe it was an accident but they have given no account of how it could have happened accidentally nor asked him to give one.

Do peer reviewers or editors not check quotes or cites? Should they? How widespread a problem is misquoting? How many other misquote reports do they receive, validate as correct criticism, and then bury? Might they be hiding a pattern revealing that many papers contain misquotes? Instead of hiding misquotes should they be doing something different like e.g. paying people enough money for misquote reports to make finding the misquotes worth the time and effort? If they actually wanted to find out about misquotes, and find out how big a problem it is, wouldn't they do something more like that? They could have responded to me by offering me money to find more misquotes since I've proven I can do it. That seems reasonable if they were better and more interested in correcting errors.

Deutsch had an argument with a referee which was related to the text Deutsch misquoted:

http://www.daviddeutsch.org.uk/wp-content/uploads/2018/03/MathematiciansMisconception.pdf

But I soon found out that not everyone saw it that way. I also had referee problems. The referee of the paper in which I presented that proof insisted that Turing’s phrase “would naturally be regarded as computable” referred to mathematical naturalness – mathematical intuition – not nature.

(BTW, as a first impression, without reading Turing's paper or investigating the issue, I agree with the referee. When talking about naturally regarding something, that sounds like it's talking about what is natural or intuitive to people and their opinions, not about nature, due to what the key word "regard" means.)

Could Deutsch have intentionally misquoted in order to help win a specific logical point he was arguing about with the reviewer? Could the horrible, misleading presentation of the quote (as a block quote with an internal quote – which btw has tricked some people into thinking the whole thing is a quote) have been some kinda compromise worked out between Deutsch and the peer reviewer? Was the misquote in earlier drafts of the paper? Do they have records of what changes were made to the paper during peer review? In any case, there is some possible motive here for Deutsch falsifying the quote on purpose or just being biased and more careless in his own favor. Deutsch has a history of repeated misquotes throughout his career and most of them favor him in some way and I don't recall any that were bad for him, so it seems like whatever's going on involves bias if not actual deliberate, fully-conscious misquoting.

Seriously, how do wording errors in quotes happen accidentally? I understand typoing a letter or two when typing a quote in from a paper book or journal. But how do you just change a word? That seems more like Deutsch quoting stuff from memory – and his memory is biased in his favor (or there's selection bias – if he likes the version he remembers then he uses it, but if it's not ideal then he looks up the exact wording). Quoting from memory in your books and papers (and scripted speeches) is a serious scholarship violation that should lead to repercussions and major reputational damage. That's totally unacceptable. Another possibility, which there have also been potential indicators for, is that Deutsch changes quotes during his editing process without double checking the original. I suspect Deutsch thinks certain minor changes to quotes are OK, and maybe this somehow escalates to more major wording changes after multiple editing passes. Deutsch's editing could be like the game "telephone" where you whisper something to the guy next to you, who whispers it to the next guy, and so on. The goal is to repeat exactly what you heard. After something has been whispered a dozen times, often all the words are different and the meaning is totally changed.

In my experience, people are often willing to view things as "an accident" or "a mistake" without thinking about how exactly it happened. Some mistakes are simple like a one letter typo happening because you pressed the wrong keyboard key by accident because your finger dexterity is good but imperfect so occasionally you hit the wrong key (and then you usually notice and fix the typo, but not always). But many errors don't have such simple explanations and merit actual analysis. Changing the word "numbers" to "function" is not a typo due to flawed finger dexterity. That's bias, misremembering (while incorrectly believing quoting from memory is OK), intentionally falsifying the quote, or perhaps a horribly unreasonable editing process that edits words within quotes similarly to how it edits words that are not within quotes. Or there are other possibilities like maybe a peer reviewer or editor caused the error and Deutsch didn't have full control over the final wording of his paper.

And how did the journal miss the error? Was it anyone's job to catch the error? Would the journal like to catch such errors in the future? And how did the error remain unnoticed in the archives for decades? Do they have a tiny readership? Do their readers not care about errors? Do their readers fail to report errors? Do their readers report errors but nothing is done? Would it make sense to hire people to review the archives for errors or should they focus on catching more errors before publication or should they just continue to not even post errata about errors and pretend nothing happened?

For more info, see my reply email to the journal:

https://curi.us/2477-academic-journals-are-unreasonable


Elliot Temple | Permalink | Messages (15)


Fallible Justificationism

This is adapted from a Feb 2013 email. I explain why I don't think all justificationism is infallibilist. Although I'm discussing directly with Alan, this issue came up because I'm disagreeing with David Deutsch (DD). DD claims in The Beginning of Infinity that the problem with justificationism is infallibilism:

To this day, most courses in the philosophy of knowledge teach that knowledge is some form of justified, true belief, where ‘justified’ means designated as true (or at least ‘probable’) by reference to some authoritative source or touchstone of knowledge. Thus ‘how do we know . . . ?’ is transformed into ‘by what authority do we claim . . . ?’ The latter question is a chimera that may well have wasted more philosophers’ time and effort than any other idea. It converts the quest for truth into a quest for certainty (a feeling) or for endorsement (a social status). This misconception is called justificationism.

The opposing position – namely the recognition that there are no authoritative sources of knowledge, nor any reliable means of justifying ideas as being true or probable – is called fallibilism.

DD says fallibilism is the opposing position to justificationism and that justificationists are seeking a feeling of certainty. And when I criticized this, DD defended this view in discussion emails (rather than saying that's not what he meant or revising his view). DD thinks justificationism necessarily implies infallibilism. I disagree. I believe that some justificationism isn't infallibilist. (Note that DD has a very strong "all" type claim and I have a weak "not all" type claim. If only 99% of justificationism is infallibilist, then I'm right and DD is wrong. The debate isn't about what's common or typical.)

Alan Forrester wrote:

[Justification is] impossible. Knowledge can't be proven to be true since any argument that allegedly proves this has to start with premises and rules of inference that might be wrong. In addition, any alleged foundation for knowledge would be unexplained and arbitrary, so saying that an idea is a foundation is grossly irrational.

I replied:

But "justified" does not mean "proven true".

I agree that knowledge cannot be proven true, but how is that a complete argument that justification is impossible?

And Alan replied:

You're right, it's not a complete explanation.

Justified means shown to be true or probably true. I didn't cover the "probably true" part. The case in which something is claimed to be true is explicitly covered here. Showing that a statement X is probably true either means (1) showing that "statement X is probably true" is true, or it means that (2) X is conjectured to be probably true. (1) has exactly the same problem as the original theory.

In (2) X is admitted to be a conjecture and then the issue is that this conjecture is false, as argued by David in the chapter of BoI on choices. I don't label that as a justificationist position. It is mistaken but it is not exactly the same mistake as thinking that stuff can be proved true or probably true.

In parallel, Alan had also written:

If you kid yourself that your ideas can be guaranteed true or probably true, rather than admitting that any idea you hold could be wrong, then you are fooling yourself and will spend at least some of your time engaged in an empty ritual of "justification" rather than looking for better ideas.

I replied:

The basic theme here is a criticism of infallibilism. It criticizes guarantees and failure to admit one's ideas could be wrong.

I agree with this. But I do not agree that criticizing infallibilism is a good reply to someone advocating justificationism, not infallibilism. Because they are not the same thing. And he didn't say anything glaringly and specifically infallibilist (e.g. he never denied that any idea he has could turn out to be a mistake), but he did advocate justificationism, and the argument is about justification.

And Alan replied:

Justificationism is inherently infallibilist. If you can show that some idea is true or probably true, then when you do that you can't be mistaken about it being true or probably true, and so there's no point in looking for criticism of that idea.

My reply below responds to both of these issues.


Justificationism is not necessarily infallibilist. Justification does not mean guaranteeing ideas are true or probably true. The meaning is closer to: supporting some ideas as better than others with positive arguments.

This thing -- increasing the status of ideas in a positive way -- is what Popper calls justificationism and criticizes in Realism and the Aim of Science.

I'll give a quote from my own email from Jan 2013, which begins with a Popper quote, and then I'll continue my explanation below:

Realism and the Aim of Science, by Karl Popper, page 19:

The central problem of the philosophy of knowledge, at least since the Reformation, has been this. How can we adjudicate or evaluate the far-reaching claims of competing theories and beliefs? I shall call this our first problem. This problem has led, historically, to a second problem: How can we justify our theories or beliefs? And this second problem is, in turn, bound up with a number of other questions: What does a justification consist of? and, more especially: Is it possible to justify our theories or beliefs rationally: that is to say, by giving reasons -- 'positive reasons' (as I shall call them), such as an appeal to observation; reasons, that is, for holding them to be true, or at least 'probable' (in the sense of the probability calculus)? Clearly there is an unstated, and apparently innocuous, assumption which sponsors the transition from the first to the second question: namely, that one adjudicates among competing claims by determining which of them can be justified by positive reasons, and which cannot.

Now Bartley suggests that my approach solves the first problem, yet in doing so changes its structure completely. For I reject the second problem as irrelevant, and the usual answers to it as incorrect. And I also reject as incorrect the assumption that leads from the first to the second problem. I assert (differing, Bartley contends, from all previous rationalists except perhaps those who were driven into scepticism) that we cannot give any positive justification or any positive reason for our theories and our beliefs. That is to say, we cannot give any positive reasons for holding our theories to be true. Moreover, I assert that the belief we can give such reasons, and should seek for them is itself neither a rational nor a true belief, but one that can be shown to be without merit.

(I was just about to write the word 'baseless' where I have written 'without merit'. This provides a good example of just how much our language is influenced by the unconscious assumptions that are attacked within my own approach. It is assumed, without criticism, that only a view that lacks merit must be baseless -- without basis, in the sense of being unfounded, or unjustified, or unsupported. Whereas, on my view, all views -- good and bad -- are in this important sense baseless, unfounded, unjustified, unsupported.)

In so far as my approach involves all this, my solution of the central problem of justification -- as it has always been understood -- is as unambiguously negative as that of any irrationalist or sceptic.

If you want to understand this well, I suggest reading the whole chapter in the book. Please don't think this quote tells all.

Some takeaways:

  • Justificationism has to do with positive reasons.

  • Positive reasons and justification are a mistake. Popper rejects them.

  • The right approach to epistemology is negative, critical. With no compromises.

  • Lots of language is justificationist. It's easy to make such mistakes. What's important is to look out for mistakes and try to correct them. ("Solid", as DD recently used, was a similar mistake.)

  • Popper writes with too much fancy punctuation which makes it harder to read.

A key part of the issue is the problem situation:

How can we adjudicate or evaluate the far-reaching claims of competing theories and beliefs?

Justificationism is an answer to this problem. It answers: the theories and beliefs with more justification are better. Adjudicate in their favor.

This is not an inherently infallibilist answer. One could believe that his conception of which theories have how much justification is fallible, and still give this answer. One could believe that his adjudications are final, or one could believe that his adjudications could be overturned when new justifications are discovered. Infallibilism is not excluded nor required.


Looking at the big picture, there is the critical approach to evaluating ideas and the justificationist or "positive" approach.

In the Popperian critical approach, we use criticism to reject ideas. Criticism is the method of sorting out good and bad ideas. (Note that because this is the only approach that actually works, everyone does it whenever they think successfully, whether they realize it or not. It isn't optional.) The ideas which survive criticism are the winners.

In the justificationist approach, rather than refuting ideas with negative criticism, we build them up with positive arguments. Ideas are supported with supporting evidence and arguments. The ones we're able to support the most are the winners. (Note: this doesn't work, no successful thinking works this way.)

These two rival approaches are very different and very important. It's important to differentiate between them and to have words for them. This is why Popper named the justificationist approach, which had gone without a name because everyone took it for granted and didn't realize it had any rival or alternative approaches.

Both approaches are compatible with both infallibilism and fallibilism. They are metaphorically orthogonal to the issue of fallibility. In other words, fallibilism and justificationism are separate issues.

Fallibilism is about whether or not our evaluations of ideas should be subjected to revision and re-checking, or whether anything can be established with finality so that we no longer have to consider arguments on the topic, whether they be critical or justifying arguments.

All four combinations are possible:

Infallible critical approach: you believe that once socialist criticisms convince you capitalism is false, no new arguments could ever overturn that.

Infallible justificationist approach: you believe that once socialist arguments establish the greatness of socialism, then no new arguments could ever overturn that.

Fallible critical approach: you believe that although you currently consider socialist criticisms of capitalism compelling, new arguments could change your mind.

Fallible justificationist approach: you believe that although you currently consider socialist justifying arguments compelling (at establishing the greatness and high status of the socialism, and therefore its superiority to less justified rivals), you are open to the possibility that there is a better system which could be argued for even more strongly and justified even more and better than socialism.


BTW, there are some complicating factors.

Although there is an inherent asymmetry between positive and negative arguments (justifying and critical arguments), many arguments can be converted from one type to the other while retaining some of the knowledge.

For example, someone might argue that the single particle two slit experiment supports (justifies) the many-worlds interpretation of quantum physics. This can be converted into criticisms of rivals which are incompatible with the experiment. (You can convert the other way too, but the critical version is better.)

Another complicating factor is that justificationists typically do allow negative arguments. But they use them differently. They think negative arguments lower status. So you might have two strong positive arguments for an idea, but also one mild negative argument against it. This idea would then be evaluated as a little worse than a rival idea with two strong positive arguments but no negative arguments against it. But the idea with two strong positive arguments and one weak criticism would be evaluated above an idea with one weak positive argument and no criticism.

This is easier to express in numbers, but usually isn't. E.g. one argument might add 100 justification and another adds 50, and then a minor criticism subtracts 10 and a more serious criticism subtracts 50, for a final score of 90. Instead, people say things like "strong argument" and "weak argument" and it's ambiguous how many weak arguments add up to the same positive value as a strong argument.
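To make the arithmetic concrete, here's a minimal sketch of that numeric scoring scheme. The weights are the hypothetical ones from the example above (100 and 50 for the positive arguments, -10 and -50 for the criticisms); nothing about the specific numbers or function names comes from any real methodology.

```python
# Hypothetical justificationist scoring: each argument for or against an
# idea gets a signed weight, and the idea's evaluation is the sum.
def justification_score(argument_weights):
    return sum(argument_weights)

# The example from the text: two positive arguments (+100, +50),
# a minor criticism (-10), and a more serious criticism (-50).
score = justification_score([100, 50, -10, -50])
print(score)  # 90
```

Note how the scheme forces criticisms and supporting arguments onto one shared scale so they can be netted against each other, which is exactly the move the critical approach rejects.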

In justification, arguments need strengths. Why? Because simply counting up how many arguments each idea has for it (and possibly subtracting the number of criticisms) is too open to abuse by using lots of unimportant arguments to get a high count. So arguments must be weighted by their importance.

If you try to avoid this entirely, then justificationism stops functioning as a solution to the problem of evaluating competing ideas. You would have many competing ideas, each with one or more arguments on their side, and no way to adjudicate. To use justificationism, you have to have a way of deciding which ideas have more justification.

The critical approach, properly conceived, works differently than that. Arguments do not have strengths or weights, and nor do we count them up. How can that be? How can we adjudicate between competing ideas without that? Because one criticism is decisive. What we seek are ideas we don't have any criticisms of. Those receive a good evaluation. Ideas we do have criticisms of receive a bad evaluation. (These evaluations are open to revision as we learn new things.) (Also there are only two possible evaluations in this system. The ideas we do have criticisms of, and the ideas we don't. If you don't do it that way, and you follow the logic of your approach consistently, you end up with all the problems of justificationism. Unless perhaps you have a new third approach.)
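The contrast with the scoring model can be sketched the same way. In this hypothetical sketch of the two-valued critical approach, an idea's evaluation depends only on whether any criticism of it currently stands, not on weights or counts:

```python
# Sketch of the two-valued critical approach: criticisms aren't weighted
# or tallied. One standing criticism is decisive; an idea is either
# criticized or not (and the evaluation is open to revision).
def evaluate(standing_criticisms):
    return "refuted" if standing_criticisms else "non-refuted"

print(evaluate([]))                              # non-refuted
print(evaluate(["conflicts with experiment"]))   # refuted
```

There's no score to compare here: a single unanswered criticism settles the evaluation, and learning something new (answering the criticism, or finding a new one) changes the evaluation rather than adjusting a number.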


Elliot Temple | Permalink | Messages (0)