Criticizing Ideas by Source

This post is adapted from an email I wrote in 2019. I was accused of judging ideas by their author instead of their content and merit. In this post, I explain something a little bit similar to judging ideas by source, but which is rational.


The issue isn't judging ideas negatively by source.

The issue is that there are outstanding, unaddressed criticisms of her ideas. And not merely of individual ideas, but of patterns of systematic error. New ideas should be checked against those known errors before being accepted. That isn’t judging ideas by source, it’s judging ideas by whether the criticisms refute them. It’s just adjusting which criticisms are considered by source.

We have default sets of criticisms we consider based on topic and some other contextual info. And we also do freeform criticism – you can try whatever you want, for whatever reason. It’s good to do both (if you leave out the standard criticisms, and only do freeform criticism, you risk missing basic or glaring errors, and the quality of your thinking will be inconsistent). The thing to do with Kate is add a few extra things to the list of criticisms to consider, which aren’t on your default list, because they are things Kate’s gotten wrong repeatedly and never fixed.

This is not judging ideas by source. It's taking the list of 25,000 criticisms I was already going to check (most criticisms are done very quickly, with very little conscious attention) and adding 50 more criticisms to the list based on the source. If Kate actually fixes her mistakes, this won't lead to negative judgments by author. I'm judging by whether I have a criticism of the arguments. What I'm changing is just checking for some specific errors because Kate wrote it, even though they aren't common enough that I'd always check for them with any author.

Source-based error-checking is different than source-based negative judgments.

Taking into account context like this is standard and good. It’s the same principle used with other parts of the context. For example, suppose an idea is presented verbally; then I will add extra criticisms to the list because of that contextual info. I treat ideas differently based on the context of being text or audio, which is an aspect of the idea’s source. E.g. when it’s verbal you should critically consider whether you misheard someone due to their accent (normally done in under one second and without using conscious attention), but when it’s text you don’t consider that particular issue.
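To make this mechanism concrete, here's a minimal sketch in Python (the criticism names, the `build_checklist` helper, and the flaw-set representation are all made up for illustration; they aren't from the original email). The context only changes which checks run; the verdict still depends only on whether a criticism actually applies:

```python
# A minimal sketch (hypothetical names throughout) of context-sensitive
# criticism checking: context changes WHICH checks run, not how any
# individual check is judged.

DEFAULT_CHECKS = {"unclear goal", "internal contradiction"}
TOPIC_CHECKS = {"medicine": {"ignores risk of death"}}
MEDIUM_CHECKS = {"verbal": {"possibly misheard"}}
AUTHOR_CHECKS = {"kate": {"known recurring equivocation"}}  # unfixed repeated errors

def build_checklist(topic=None, medium=None, author=None):
    """Union the default checks with checks triggered by contextual info."""
    return (DEFAULT_CHECKS
            | TOPIC_CHECKS.get(topic, set())
            | MEDIUM_CHECKS.get(medium, set())
            | AUTHOR_CHECKS.get(author, set()))

def judge(idea_flaws, checklist):
    """Reject only if a criticism on the checklist actually applies.
    An author whose recurring errors are fixed gets no negative judgments."""
    applicable = idea_flaws & checklist
    return ("rejected", applicable) if applicable else ("accepted", set())

# The same flawless idea passes regardless of author:
print(judge(set(), build_checklist(author="kate")))   # accepted
print(judge({"known recurring equivocation"},
            build_checklist(author="kate")))          # rejected
```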

Slogans like “Judge ideas by content, not source” are fine as general principles; there’s nothing objectionable or irrational about them.

The right model is: consider some criticisms by default, some by context, some by intuition or creativity or whatever, some because someone else suggested it, and some for whatever other reasons. It’s pretty much the more the merrier as long as people aren’t trying to bulk-add millions of criticisms to the list for consideration (if they try that, you should address the matter, just not by addressing each individually – you should address the broader pattern, the template they are using to generate a million criticisms).

Put another way, the model is: consider criticisms based on broad, non-specific context (defaults). And consider criticisms based on specific contextual details. And consider criticisms based on mediumly-specific context. And so on. There’s a continuum of criticisms at different levels of specificity. There are criticisms that apply enough to consider in 90% of situations in your life, others that apply 60% of the time, others 30%, others 10%, etc.

Different levels of specificity of context examples: “it has to do with ideas and i need to consider what makes sense” (very non-specific context, makes some generic critical thinking stuff relevant like Paths Forward). “it has to do with medicine” (more specific topic, you could brainstorm some things worth considering for medicine that don’t apply for dealing with your lawyer). “it has to do with penicillin” (even more specific, suggests considering e.g. if you’re allergic and if your problem involves bacterial infection). and much more specific (so specific you wouldn’t have pre-existing known criticisms to use for this context, you’d have to think of them when it comes up): “medical test X indicates I have disease Y. it was done twice to double check. the test has the following false positive rate and has been researched in the following ways as explained in the following texts... should i now take drug A? here is what’s known about drug A... and here are alternative drugs..."

And don’t try to suppress criticism. Don’t limit this. Add criticisms to the “to be considered” list in all sorts of ways. Add whatever you want for intuitive reasons that you can’t justify. Add whatever you want for logical reasons. Add “might it kill me?” because the topic is medicine and some medicines can risk killing you and dying is bad. (that doesn’t mean you don’t take a medicine just because it could kill you. it may be worth the risk. but you need to critically consider that rather than fail to notice or think about that issue. that is something which shouldn’t get past you without you realizing there is an issue there. which means it’s something you’re checking by default. you can’t just think of that only when it’s relevant. to reliably not miss it when it is relevant you have to be checking it in a broad category of situations, e.g. whenever medicine comes up.)

Unfortunately, the specifics of what criticisms should be considered because Kate is the author are things Kate doesn’t want to think about. This issue is coming up because she doesn’t want to talk about her recurring problems. But the same errors keep recurring in her reasoning, so they're relevant to most of her posts. Her posts and ideas should be judged critically, including by checking for the recurring errors every single time.



Treat Yourself Rationally

You can't tell whether an idea you have is an irrationality or a good idea until you resolve the conflict between it and your other ideas (the conflict is the thing that's making you suspect it's an irrationality).

If you declare something an irrationality, you're saying you already know the answer to the conflict. You're predicting what your answer to the conflict will be. But as DD has explained, the growth of knowledge isn't predictable (if you could predict the answers, then you'd already know them – there's no way to predict something is the right answer without knowing it's the right answer).

Kate asked about this:

what does it mean to resolve the conflict when we are talking about complex inexplicit static memes? is it once the meme is totally gone, then you say the conflict is resolved?

the conflict means: you have some ideas and some other conflicting ideas.

so, a disagreement. a conflict of ideas.

some people label one side of this disagreement the static meme side, then assume from the start that the goal is to make that side lose. they see it as the false bad side.

but you shouldn’t pre-judge disagreements like that. that approach is irrational.

the conflict is resolved when your truth-seeking arbitration process comes up with a win/win outcome which all sides of the disagreement prefer.

the point is: you have to deal with all disagreements by the normal methods of reason. don't assume one side is the static meme side and then treat it like an enemy combatant and start making exceptions to reason.

(I originally wrote this in 2015. I made minor edits.)



One Criticism Is Decisive

I'm sharing two answers I gave in 2019 explaining why we should reject an idea if we know one criticism of it. In short, a criticism is an explanation of why an idea fails at its purpose. It never makes sense to act on or accept an idea you think won't work.


https://curi.us/2124-critical-rationalism-epistemology-explanations#13292

I will also add that we don't reject a theory just from 1 failed observation. We must also have a better theory in place. One that explains what the previous theory successfully explained, and accounts for the mismatch in observation.

If it's a universal theory (X), and you (tentatively) accept one failed observation, and accept the arguments about why it's a counter-example, then you must reject the theory, immediately. It is false. You may temporarily accept a replacement, e.g. "X is false but I will keep using it as an approximation in low-risk, low-consequence daily situations for now until I figure out a better replacement." A replacement could be a new theory in the usual sense, but could also e.g. be a new combination of X + additional info which more clearly specifies the boundaries of when X is a good approximation and when it's not.

For a non-universal theory Y which applies to a domain D, the same reasoning applies for one failed relevant observation – a counter-example within D.


https://curi.us/2124-critical-rationalism-epistemology-explanations#13300

As I understood it before, we don't reject it until we have a better explanation. Like for the theory of relativity, we have "failed observations" at the quantum level, right? But we don't reject it because we don't have another better theory yet. What am I missing?

If you know something is false, you should never accept it because it's false.

The theory of relativity is accepted as true by (approximately) no one. Call it R. What people accept is e.g. "R is a good approximation of the truth (in context C)." This meta theory is not known to be false. I call it a meta theory because it contains R within it, plus additional commentary governing the use of R.

This meta theory, which has no known refutation, is better than R, which we consider false.

KP and DD did not make this clear. I have.

If you believe a theory is false, you must find a variant which you don't know to be false. You should never act on known errors. Errors are purely and always bad and known errors are always avoidable and best to avoid. Coming up with a great variant can be hard, but a quick one like "Use theory T for purposes X and Y but not otherwise until we know more." is generally easy to create and defend against criticism (unless the theory actually shouldn't be used at all, in any manner).

This is fundamentally the same issue as fixing small errors in a theory.

If someone points out a criticism C of theory T and you think it's small/minor/unimportant (but not wrong), then the proper thing to do is create a variant of T which is not refuted by C. If the variant barely changes anything and solves the problem, then you were correct that C was minor (and you can see that in retrospect). Sometimes it turns out to be harder to create a viable variant of T than you expected (it's hard to accurately predict how important every criticism is before you've come up with a solution. that can be done only approximately, not reliably).

It's easy to make a variant if you allow arbitrary exceptions. "Use T except in the following cases..." That is in fact better than "Always use T" for a T with known exceptions. It's better to state and accept the exceptions than accept the original theory with no exceptions. (It's a different matter if you are doubtful of the exceptions and want to double check the experiments or something. That's fine. I'm just talking from the premises that you accept the criticism/exception.) You can make exceptions for all kinds of issues, not just experiments. If someone criticizes a writing method for being bad for a purpose, let's say when you want to write something serious, then you can create the variant theory consisting of the writing method plus the exception that it shouldn't be used for serious writing. You can take whatever the criticism is about and add an exception that the theory is for people in situations where they don't care about that issue.
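Here's a minimal sketch of that operation (my own illustration; the `Theory` class and `make_variant` helper are hypothetical, not anyone's real formalism). A variant is just the original theory plus explicit exceptions, and the variant, unlike the original, isn't refuted by the known counter-examples:

```python
# Hypothetical sketch: a "variant" is the original theory plus explicit
# exceptions limiting where it may be used.

from dataclasses import dataclass, field

@dataclass
class Theory:
    name: str
    exceptions: set = field(default_factory=set)  # known contexts where it fails

    def applies_to(self, context):
        """The variant claims nothing about its excepted contexts."""
        return context not in self.exceptions

def make_variant(theory, new_exception):
    """'Use T except in the following cases...' as an operation."""
    return Theory(theory.name, theory.exceptions | {new_exception})

relativity = Theory("general relativity")
# A criticism (e.g. a quantum-scale counter-example) refutes bare R...
meta = make_variant(relativity, "quantum scale")
# ...but the meta theory "R, except at quantum scale" has no known refutation.
print(meta.applies_to("orbital mechanics"))  # True
print(meta.applies_to("quantum scale"))      # False
```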

Relativity is in the situation or context that we know it's not universally true but it works great for many purposes so we think there's substantial knowledge in it. No one currently has a refutation of that view of relativity, that meta theory which contains relativity plus that commentary.



Human Problems and Abstract Problems

I originally wrote this in 2012. Single quotes are DD. One nesting level (single black line indenting the quote) is DD's friend Demosthenes, who was involved with TCS a lot.


David Deutsch wrote in 2001 on TCS list regarding "Are common preferences always possible?":

Demosthenes wrote on 10/2/01 5:16 am:

On Tue, 16 Jan 2001 11:09:21 +0100, Sarah Lawrence wrote:

On Thu, 6 Feb 1997 at 10:32:03 -0700, Susan Ramirez asked:

Why do you believe that it is always possible to create a common preference?

This question is important because it is the same as

  • Are there some problems which in principle cannot be solved?

Or, when applied to human affairs:

  • Is coercion (or even force, or the threat of force) an objectively inevitable feature of certain situations, or is it always the result of a failure to find the solution which, in principle, exists?

I think that both Sarah and Demosthenes (below) somewhat oversimplify when they identify 'avoiding coercion' with 'problem-solving'. For instance, Sarah says "This question ... Is the same as[:] Are there some problems

Let's watch out for different uses of the word "problem".

which in principle cannot be solved?" Well, in a sense it is the same issue. But due to the imprecision of everyday language, this also gives the impression that avoiding coercion depends on everyone adopting the same theory (the solution, the common preference) about whatever was at issue. In fact, that is seldom literally the case, because the parties' conceptions of what is 'at issue' typically change quite radically during common-preference finding. All that is necessary is that the participants change to states of mind which (1) they prefer to their previous states, and (2) no longer cause them to hurt each other.

In other words, common preferences can often be much narrower than it may first appear. You needn't agree about everything, or even everything relevant, but only enough to proceed without hurting (TCS-coercing) each other (or oneself in the case of self-conflicts).

I agree that this question is important, though I would offer instead the following two elucidating questions:

In the sphere of human affairs:

  1. Are there any problems that would remain unavoidably insoluble even if they could be worked on without any time and resource limits?

  2. Are there any problems that are unavoidably insoluble within the time and resource limits of the real life situations in which they arise?

The word "problem" in both of these is ambiguous.

Problem-1: (we might call it a "human problem"): "a matter or situation regarded as unwelcome or harmful and needing to be dealt with and overcome"

Problem-2: (we might call it an "abstract problem"): "a thing that is difficult to achieve or accomplish"

There are problems, notionally, like going to the moon. But no one gets hurt unless a person has the problem of going to the moon. Problem-1 involves preferences, and the possibility of harm and TCS-coercion. And it is the type of problem which is solved by common preferences.

Problem-2, inherently, does not have time or resource limits, because the universe is not in a hurry, only people are.

So, are there any problems which are insoluble within the time and resource limits of real life situations? Not problem-2 type, because those do not arise in people's life situations, and they do not have time or resource limits.

And as for problem-1 type problems, those are always soluble (within time/resource constraints), possibly involving changing preferences. (BTW, as a general rule of thumb, in non-trivial common preference finding, all parties always change their initial preferences.)

An example:

problem-2: adding 2+2 (there is no time limit, no resource limit -- btw time is a type of resource)

problem-1: adding 2+2 within the next hour for this math test (now there are resource issues, preferences are involved)

Another way to make the distinction is:

problem-1: any problem which could TCS-coerce (hurt) someone

problem-2: any problem which could not possibly ever TCS-coerce (hurt) anyone

Problem-2s are not bad. Not even potentially. Problem-1s are bad if and only if they TCS-coerce anyone. A problem like 2+2=? cannot TCS-coerce anyone, ever. There's just no way. It takes a different problem like, "A person asked me what 2+2 is, and I wanted to answer" to have the potential for TCS-coercion.

Notice solving this different problem does not necessarily require figuring out what 2+2 is. Solving problem-1s never requires solving any associated problem-2s, though that is often a good approach. But it's not necessary. So the fact that various problem-2s won't be solved this year need not hurt anyone or cause any problem-1s -- with their time limits and potential for harm -- to go unsolved.

I believe that the answer to question (1) is, no -- there are no human problems that are intrinsically insoluble, given unbounded resources.

This repeated proviso "given unbounded resources" indicates a misconception, I think. The answer to (2) is, uncontroversially, yes. Of course there exist disagreements -- both between people and within a person -- that take time to resolve, and many will not be resolved in any of our lifetimes.

I think this is unclear about the two types of problems. While it agrees with me in substance, it defers to ambiguous terminology that basically uses unsolved problem-2s to say there are insoluble problems and tries to imply it's now talking about problem-1s.

There is a mix-up between failing to solve an abstract problem, like figuring out the right theory of physics (which two friends might disagree about), and failing to solve human problems, like the type that make those friends hurt each other.

It's harmless to have some disagreements that you "agree to disagree" about, for example. But if you can't agree to disagree, then the problem is more dangerous and urgent.

It's uncontroversial that people have unsolved abstract problems for long periods of time, e.g. they might be working on a hard math problem and not find the answer for a decade. And their friend might disagree with them about the best area to look for a solution.

But so what?

Human problems are things like, "I want to solve the problem this week" (maybe you should change your preference?) or "I want to work on the math problem and find good states of mind in regard to it, and enjoy making progress" (this human problem can easily be solved while not solving the harmless abstract problem).

But that has nothing to do with the question being discussed here.

Right, because of the confusion over different meanings of "problem".

The fact that after 25 years of almost daily attention to the conflict between quantum theory and general relativity I have failed to discover a theory that I prefer to both (or indeed to either), does not indicate that I have "failed to find a common preference"

Right. Common preferences do not even apply to problem-2s, only problem-1s.

either within myself, or with other proponents of those theories, in the sense that interested Susan Ramirez. I have not found a preferred theory of physics, but I have found successively better states of mind in regard to that problem, each the result of successive failures to solve it.

However this view is only available to those of us who believe that for all moral problems there exists, in principle, a unique, objectively right solution. If you are any kind of moral relativist, or a moral pluralist (as many people seem to be) then you can have no grounds for arguing that all human disputes are in principle soluble.

It is only in spheres where the objective truth of the matter exists and is in principle discoverable, that the possibility of converging on the truth guarantees that all problems are, in principle, soluble.

I agree that for all moral problems

No clear statement of which meaning of problem this refers to.

there exists an objectively right solution, and that this is why consensual relationships -- and indeed all liberal institutions of human cooperation, including science -- can work. The mistake is to suppose that if one does not believe this, it will cease to be true. For people to be able to reach agreement, it suffices that, for whatever reason, they seek agreement in a way that conforms to the canons of rationality and are, as a matter of fact, converging on a truth. Admittedly it is a great impediment if they think that agreement is not possible, and very helpful if they think that it is, but that is certainly not essential: many a cease-fire has evolved into a peace without a further shot being fired. It is also helpful if they see themselves as cooperating in discovering an objective truth, and not merely an agreement amongst themselves, but that too is far from essential: plenty of moral relativists have done enormous good, and made enormous moral progress -- for instance towards creating institutions and traditions of tolerance -- without ever seeking an objective truth, or realising that they were finding one. In fact many did not realise that they were creating agreement at all, merely a tolerance of disagreement. And incidentally, they were increasing the number of unsolved problems in society by promoting dissent and diversity.

Increasing the number of unsolved problem-2s, but decreasing the number of unsolved problem-1s.

What we need to avoid, both in society and in our own minds, is not unsolved problems,

Ambiguous between problem-1s and problem-2s.

not even insoluble problems,

Ambiguous between problem-1s and problem-2s.

Also doesn't seem to be counting preference changing as a solution, contrary to the standard TCS attitude which regards preference changing as a normal part of common preference finding, and part of problem solving.

but a state in which our problems are not being solved

But this time it means problem-1s.

-- where thinking is occurring but none of our theories are changing.

I believe that the answer to question (2) is yes -- human problems that cannot be solved even in principle, given the prevailing time and resource constraint, are legion. Albeit, nowhere near as legion as non-TCS believers would have it. My main argument in support of this thesis is based on introspection: Let him or her who is without ongoing inner conflict proffer the first refutation.

This is a bit like saying, at the time of the Renaissance, that science is impossible because "let him who is without superstition proffer the first refutation". The whole point about reason is that it does not require everything to be right before it can work. That is just another version of the "who should rule?" error in politics. The important thing is not to start out right, but to try to set things up in such a way that what is wrong can be altered. The object of the exercise is not to create a chimerical (and highly undesirable!) problem-free state,

A problem-2-free state is bad. As in, not having any problems we might like to work on. This is bad because it creates a very hard problem-1: the problem of boredom (having no problem-2s to work on while wanting some will cause TCS-coercion).

A problem-1-free state is ... well there is another ambiguity. Problem-1s are fine if one is rationally coping with them. It's not bad to have human problems and deal with them. What's bad is failure to cope with them, i.e. TCS-coercion.

How can we tell which/when problem-1s get bad? When they do harm (TCS-coercion).

To put it another way: problem-1s are bad when one acts on an idea while having a criticism of it. But if it's just the potential for such a thing in the future, that's part of normal life and fine.

but simply to embark upon actually solving problems rather than being stuck not solving any (or not solving one's own, anyway). Happiness is solving one's problems, not 'being without problems'.

"one's problems" refers only to problem-1s, but "being without problems" and "actually solving problems" are ambiguous.

In other words, I suggest that there isn't a person alive whose creativity is not diminished in some significant way by the existence of inner conflict. Or rather dozens, if not hundreds or thousands, of inner conflicts.

Yes. But having diminished creativity (compared to what is maximally possible, presumably) is and always will be the human condition. Minds are fallible. Fortunately, it is not one's distance from the ideal state that makes one unhappy, but an inability to move towards it.

And if you cannot find a common preference for all the problems that arise within your own mind, it is a logical absurdity to expect to be able always to find a common preference with another, equally conflicted, mind.

Just as well, really. If you found a common preference for all the problems within your own mind, you'd be dead. If you found a common preference for all the problems you have with another person with whom you interact closely, you'd be the same person.

[SNIP]

However, and it is an important however, to approach this goal we must dare to face the inescapable facts that, in practice, it is by no means always possible to find a common preference; that therefore it is not always possible to avoid coercion;

This does not follow, or at least, not in any useful sense. Demosthenes could just as well have made the identical comments about science:

[Demosthenes could have written:]

In the sphere of science:

  1. Are there any problems that would remain unavoidably insoluble even if they could be worked on without any time and resource limits?

  2. Are there any problems that are unavoidably insoluble within the time and resource limits of the real life situations in which they arise?

I believe that the answer to question (1) is, no -- there are no scientific problems that are intrinsically insoluble, given unbounded resources.

Right. And why should it follow from this that a certain minimum of superstition is unavoidable in any scientific enterprise, and that people who try to reject superstition on principle will undergo "intellectual and moral corrosion" if, as is inevitable, they fail to achieve this perfectly -- or even if they fail completely?

As Bronowski stressed and illustrated in so many ways, doing science depends on adopting a certain morality: a desire for truth, a tolerance, an openness to change, an awareness of one's own fallibility and the fallibility of authority, yet also a respect and understanding for tradition ... (It's the same morality as TCS depends on.) And yes, no scientist has ever been entirely free from irrationality, superstition, dogma and all the things that the canons of rationality say are supposed to be absent from a true scientist's mind. Yet none of that provides the slightest argument that a person entering upon a life of science is likely to become unhappy

Tangent: this is a misuse of probability. Whether that happens depends on human choices, not chance.

in their work, is likely to find their enterprise ruined either because they encounter a scientific problem that they never solve, or because they fail to rid their own minds of certain superstitions that prevent them from solving anything.

The thing is, all these sweeping statements about insoluble problems

Ambiguous.

and unlimited resources, though true (some of them trivially, some because of fallibilism) are irrelevant to the issue here, of whether a lifestyle that rejects coercion is possible and practical in the here and now. A TCS family can and should reject coercion in exactly the same sense, and by the same means, and for the same reason, as a scientist can and should reject superstition. And to the same extent: utterly. In neither case can the objective ever be achieved perfectly, with finite resources. In neither case can any guarantee be given about what the outcome will be. Will they be happier than if they become astrologers instead? Who knows? And certainly good intentions alone can guarantee nothing. In neither case can the enterprise be without setbacks and failures, perhaps disasters. And in neither case is any of this important, because ... well, whatever goes wrong, however badly, superstition is going to make it worse.

-- David Deutsch
http://www.qubit.org/people/david/David.html

And Josh Jordan wrote:

I think it makes sense to proceed according to the best plan you have, even if you know of flaws in it.

What if those flaws are superstition? Or TCS-coercion?

Whatever happens, acting against one's best judgment -- e.g. by disregarding criticisms of flaws one knows -- is only going to make things worse.



Reasoning from Problems not Assumptions

Ron Garret (RG) wrote (CR means Critical Rationalism):

All reasoning has to start from assumptions. Assumptions by definition can't be proven or disproven. So how can we evaluate our core assumptions? If we try to use reason, that reasoning must itself be based on some assumptions like, "Reason is the best way to evaluate assumptions." But since that is an assumption, how can we evaluate it without getting into an infinite regression?

And near the end of the post:

The point is: the apparent infinite regress of rationality bottoms out in its effectiveness

And in comments:

BTW, I very much doubt that CR actually claims that reasoning is possible with no assumptions. If Popper (or Deutsch) ever actually said this, it's news to me. It seems self-evident to me that all reasoning has to start with assumptions. Whatever else a reasoning process consists of, there has to be some point in the process at which you assert for the first time the truth of some proposition. That assertion cannot be based on the truth of any previously asserted proposition because, if it were, it would not be the first time you asserted the truth of some proposition. A proposition that is asserted to be true without any prior assertions to support it is by definition an assumption.

(Note that even this argument makes assumptions, e.g. that reasoning has a beginning, that it involves the assertion of propositions, that words like "assert" and "proposition" have coherent meanings, etc. etc. etc.)

The view described by RG is the standard, non-CR view. It is regarded by CR as incorrectly relying on foundations and justification, and as not having the right paradigm. Example quotes about foundations (partly to explain, partly because of RG’s doubts that the CR thinkers Karl Popper (KP) or David Deutsch (DD) disagree with him):

KP in The Logic of Scientific Discovery:

The empirical basis of objective science has thus nothing 'absolute' about it.[4] Science does not rest upon solid bedrock. The bold structure of its theories rises, as it were, above a swamp. It is like a building erected on piles. The piles are driven down from above into the swamp, but not down to any natural or 'given' base; and if we stop driving the piles deeper, it is not because we have reached firm ground. We simply stop when we are satisfied that the piles are firm enough to carry the structure, at least for the time being.

DD in The Beginning of Infinity:

The whole motivation for seeking a perfectly secure foundation for mathematics was mistaken. It was a form of justificationism. Mathematics is characterized by its use of proofs in the same way that science is characterized by its use of experimental testing; in neither case is that the object of the exercise. The object of mathematics is to understand – to explain – abstract entities. Proof is primarily a means of ruling out false explanations; and sometimes it also provides mathematical truths that need to be explained. But, like all fields in which progress is possible, mathematics seeks not random truths but good explanations.

DD in The Beginning of Infinity:

there can be no such thing as an ultimate explanation: just as ‘the gods did it’ is always a bad explanation, so any other purported foundation of all explanations must be bad too. It must be easily variable because it cannot answer the question: why that foundation and not another? Nothing can be explained only in terms of itself.

KP in Conjectures and Refutations:

The question about the sources of our knowledge can be replaced in a similar way [to replacing the “Who should rule?” question in politics]. It has always been asked in the spirit of: ‘What are the best sources of our knowledge—the most reliable ones, those which will not lead us into error, and those to which we can and must turn, in case of doubt, as the last court of appeal?’ I propose to assume, instead, that no such ideal sources exist—no more than ideal rulers—and that all ‘sources’ are liable to lead us into error at times. And I propose to replace, therefore, the question of the sources of our knowledge by the entirely different question: ‘How can we hope to detect and eliminate error?’

The question of the sources of our knowledge, like so many authoritarian questions, is a genetic one. It asks for the origin of our knowledge, in the belief that knowledge may legitimize itself by its pedigree. The nobility of the racially pure knowledge, the untainted knowledge, the knowledge which derives from the highest authority, if possible from God: these are the (often unconscious) metaphysical ideas behind the question. My modified question, ‘How can we hope to detect error?’ may be said to derive from the view that such pure, untainted and certain sources do not exist, and that questions of origin or of purity should not be confounded with questions of validity, or of truth. This view may be said to be as old as Xenophanes. Xenophanes knew that our knowledge is guesswork, opinion—doxa rather than epistēmē—as shown by his verses (DK, B, 18 and 34):

The gods did not reveal, from the beginning,
All things to us; but in the course of time,
Through seeking we may learn, and know things better.

But as for certain truth, no man has known it,
Nor will he know it; neither of the gods,
Nor yet of all the things of which I speak.
And even if by chance he were to utter
The perfect truth, he would himself not know it;
For all is but a woven web of guesses.

Yet the traditional question of the authoritative sources of knowledge is repeated even today—and very often by positivists and by other philosophers who believe themselves to be in revolt against authority.

The proper answer to my question ‘How can we hope to detect and eliminate error?’ is, I believe, ‘By criticizing the theories or guesses of others and—if we can train ourselves to do so—by criticizing our own theories or guesses.’ (The latter point is highly desirable, but not indispensable; for if we fail to criticize our own theories, there may be others to do it for us.) This answer sums up a position which I propose to call ‘critical rationalism’.

[...]

So my answer to the questions ‘How do you know? What is the source or the basis of your assertion? What observations have led you to it?’ would be: ‘I do not know: my assertion was merely a guess. Never mind the source, or the sources, from which it may spring—there are many possible sources, and I may not be aware of half of them; and origins or pedigrees have in any case little bearing upon truth. But if you are interested in the problem which I tried to solve by my tentative assertion, you may help me by criticizing it as severely as you can; and if you can design some experimental test which you think might refute my assertion, I shall gladly, and to the best of my powers, help you to refute it.’

The standard, non-CR view involves problems like a regress because it tries to do things like argue for ideas “based on the truth of any previously asserted proposition” (RG’s words above). RG acknowledges some of the problems with arbitrary foundations or, in the alternative, an infinite regress. He tries to solve them by suggesting an effectiveness criterion for judging ideas. This doesn’t solve the problem: it is an arbitrary foundation or leads to a regressing debate about the effectiveness of the effectiveness criterion, and the effectiveness of whatever arguments are used in that debate, and so on.

The CR view is that we start our reasoning with problems, not assumptions. We proceed to brainstorm guesses about solutions. We do not assert that our guesses are true; we expect our guesses to one day be discarded as obsolete falsehoods because progress is infinite. And then we criticize the guesses. This leads to fixing the errors in some guesses and rejecting other guesses, and generally to progress.

The CR paradigm is not about establishing things on the basis of assumptions or on any other basis or foundation, nor is it about choosing a criterion for what types of theories are best (e.g. effective ones or simple ones). The CR paradigm is about error correction. CR says we learn not by making foundational assumptions and building from them to other ideas, but by making unjustified guesses to try to solve our problems, which we then expose to error correction.
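As a rough schematic (a toy sketch under my own simplifications; `brainstorm`, the criticism predicates, and the example problem are stand-ins, not a real algorithm), reasoning starts from a problem and filters guesses by criticism:

```python
# Sketch of reasoning that starts from a problem, not assumptions:
# guess freely, then filter by criticism. Nothing here justifies the
# survivors; they're merely not yet refuted.

def solve(problem, brainstorm, criticisms):
    guesses = brainstorm(problem)      # unjustified guesses, from any source
    surviving = []
    for guess in guesses:
        # a criticism explains why a guess fails at solving the problem
        failures = [c for c in criticisms if c(problem, guess)]
        if not failures:
            surviving.append(guess)    # tentatively kept, still fallible
    return surviving                   # may be empty: back to guessing

# Toy usage: the problem is picking a number that's even and small.
guesses = lambda p: [1, 2, 3, 8, 40]
criticisms = [lambda p, g: g % 2 != 0,   # "it isn't even"
              lambda p, g: g > 10]       # "it isn't small"
print(solve("even and small", guesses, criticisms))  # [2, 8]
```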

Both CR and the standard view try to deal with the problem of differentiating good and bad ideas. The standard view seeks to find a good starting point, and good methods of thinking, so that bad ideas can never be introduced (or at least are hard to introduce). The CR view accepts there is no way to avoid error or even to make it uncommon, and instead focuses its primary effort on error correction. We can’t make error uncommon because we’re all alike in our infinite ignorance (as KP said) and we’re always at the beginning of infinity (as DD said) with infinite stuff left to learn. (There are also other arguments about fallibilism.)

The CR view on assumptions and foundations is that we can start anywhere. We can start with high level ideas or low level ideas. We can start in the middle. Anything goes because we aren’t trying to solve the problem of avoiding error by limiting where we begin our reasoning. What’s important is that all ideas be held open to error correction. Nothing is beyond question or criticism. There are no limits beyond which we can’t delve further and learn more. No matter where we start, we can always work in any direction. We can flesh out prior or lower level ideas more. We can flesh out later or higher level ideas more. We can go sideways. And things don’t organize neatly into levels anyway, for all is a woven, tangled, chaotic, web of guesses, not a pyramid hierarchy.

What stops the regress of asking “Why?” and “How do you know?” infinitely? Nothing formal. CR isn’t about proving we’re right. A CRist will say, “I’ve explained why I think this, and how I know, in what I think is an adequate level of detail to solve the problem I’m trying to solve. Do you see an error I’ve made?” CR is about searching for and fixing errors, not establishing that our answers are correct. We expect our answers will be improved in the future. We follow our interests in our attempts to live our lives, solve problems, and learn. There are infinite places we may direct our attention and we make judgments about which to prioritize. These interests and judgments, like everything else, are themselves open to criticism.

There is no way to provide infinite detail about one’s reasoning. This is not actually a problem unique to foundations. It applies just as well to the consequences of one’s reasoning (the further implications). But we don’t need infinite detail if we aren’t after a guarantee of correctness. If instead we know we may well be wrong, but we’re doing our best to find and correct errors, then the finite detail is adequate for that purpose. And there are no bounds on where we can go into more detail. Any part that people think could use more questioning can be critically considered more. We never have to stop, we just stop when we think our attention is better used elsewhere (and we don’t know of an error with that).

A criticism is an explanation of why an idea does not solve the problem it’s claiming to solve. The reason we shouldn’t accept (or act on) criticized ideas, even tentatively, is because we have an explanation of why they won’t work. And all criticisms are themselves open to criticism. (What do you do if people keep throwing infinitely many dumb criticisms at an idea? In short, criticize infinite categories of idea all at once. Criticize patterns of error. Don’t criticize all the criticisms individually. In general, good will and good faith are helpful and make things better. But if someone wants to throw infinitely many criticisms at an idea, they may try it. It’s easy to do that if you generate the criticisms according to a pattern, but then they can also be criticized as a group because they fit that pattern. To defend against this, we’ll only need one counter-argument for each pattern the critic thinks of to form an infinite set of criticisms from. So we don’t have a greater burden than he does. And actually it’s better than that if we can identify a meta-pattern – a pattern to his patterns – and criticize that. If we use powerful criticisms with high “reach” (DD’s term meaning broad/wide applicability), which deal with the right issues, it becomes harder and harder for a critic to think of anything new to say which isn’t already addressed by our criticisms. And we can write them down and reuse them with all future critics. That is one of the main projects intellectuals should be engaged in.)
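To picture the pattern-vs-instances point, here's a made-up sketch: a template can mechanically generate unlimited criticisms, but one rebuttal aimed at the template answers every instance at once:

```python
# Hypothetical sketch: infinitely many criticisms generated from one
# template can be answered by one criticism of the template itself.

def template_criticisms(idea):
    """An infinite family of criticisms produced mechanically."""
    n = 0
    while True:
        yield f"{idea} might fail in case #{n}"
        n += 1

# Answering instances one at a time never terminates; answering the
# pattern does. One rebuttal covers every instance the template emits:
pattern_rebuttal = ("These all share one flaw: none explains WHY the idea "
                    "fails in its case, so none is a criticism at all.")

critic = template_criticisms("my idea")
print([next(critic) for _ in range(3)])  # three of infinitely many instances
print(pattern_rebuttal)                  # one answer to the whole infinite set
```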

Our guesses can be arbitrary non sequiturs. They need not be based on anything – the source or basis is not the important thing. However, it’s hard to make them survive criticism if they don’t use any existing knowledge. It’s hard to start over, without the benefit of any existing knowledge (which has had a bunch of error-correction effort already put into it) and make something good. So we often build on, e.g., the English language. However, just because I use the English language to help me formulate my idea does not mean my idea depends on the English language in some kind of chain of logical implication. The English language is not necessarily assumed or an important basis. My idea may well be approximately autonomous. Maybe we’ll one day find huge flaws in English, and find that Japanese is much better, and then notice that my idea can be easily translated to Japanese because it was never actually tightly coupled to English in the first place. It’s like how the C programming language isn’t based on any particular CPU architecture and code can be recompiled for other architectures (so while my code needs a CPU to run, it’s not based on whatever CPU I’m currently using).

The CR paradigm lacks the solidity sought by the standard view. It doesn’t justify its ideas. It doesn’t provide justified, true belief. It doesn’t offer ways to demonstrate that an idea is true so that we need never worry about it having an error again. It doesn’t offer ways to positively establish ideas. It differentiates good and bad ideas by criticism of the bad ones, not by anything to bestow a good, positive status on the good ideas (which CR views as merely ideas which are not currently known to be wrong). CR is all we can have due to logical problems that the standard view has been unable to deal with century after century. And CR is enough for science to work, among other things.

I suggest rereading the DD and KP quotes (that I gave above) at this point. I think they’ll make more sense after reading the rest (both what they mean and how they are relevant), and they’ll also help clarify my text. See e.g. how KP talks about the sources of our ideas not mattering.

This is all a lot to understand. As far as I’ve been able to determine, DD and probably Feynman are the only people who ever understood CR by reading Popper’s books, without the help of a bunch of discussion with people who already knew CR (like Popper, Popper's students, or DD). We’ve never found a single person who has understood CR well from DD’s books without discussing with DD or DD’s students. I had many large confusions after reading FoR, which took years of discussion, study and DD help to resolve. CR is deeply counterintuitive because it goes against ~2300 years of philosophical tradition, and those ideas have major influence throughout our culture. Supporting people’s CR learning processes, if they’re interested, is one of the important purposes of this forum. Questions are welcome and you shouldn’t expect to fully understand this already or soon.

Note that CR theory explains this (the previous paragraph). Errors are inevitable and common, including when understanding even one sentence[1]. Trying your best to correct your own errors is a good start, but critical discussion has big advantages. People have different strengths and weaknesses, knowledge and ignorance, biases and irrationalities, etc. People differ. External criticism is valuable because other people will catch errors you miss (including errors they made in the past and already fixed). Because error correction is such a big deal, critical discussion is approximately necessary for ambitious people (the alternative plan is to be one of the best thinkers ever who is so much better than ~everyone at ~everything that external criticism doesn’t add much). Critical discussion also lets people share explanations, problems, and other knowledge which isn’t criticism, which is also helpful.

[1] DD in The Beginning of Infinity:

SOCRATES: But wait! What about when knowledge does not come from guesswork – as when a god sends me a dream? What about when I simply hear ideas from other people? They may have guessed them, but I then obtain them merely by listening.
HERMES: You do not. In all those cases, you still have to guess in order to acquire the knowledge.
SOCRATES: I do?
HERMES: Of course. Have you yourself not often been misunderstood, even by people trying hard to understand you?
SOCRATES: Yes.
HERMES: Have you, in turn, not often misunderstood what someone means, even when he is trying to tell you as clearly as he can?
SOCRATES: Indeed I have. Not least during this conversation!
HERMES: Well, this is not an attribute of philosophical ideas only, but of all ideas. Remember when you all got lost on your way here from the ship? And why?
SOCRATES: It was because – as we realized with hindsight – we completely misunderstood the directions given to us by the captain.
HERMES: So, when you got the wrong idea of what he meant, despite having listened attentively to every word he said, where did that wrong idea come from? Not from him, presumably . . .
SOCRATES: I see. It must come from within ourselves. It must be a guess. Though, until this moment, it had never even remotely occurred to me that I had been guessing.
HERMES: So why would you expect that anything different happens when you do understand someone correctly?
SOCRATES: I see. When we hear something being said, we guess what it means, without realizing what we are doing. That is beginning to make sense to me.

When you read books, you guess. Many guesses are wrong. You fix many of them yourself. Critical discussion helps fix more errors. People routinely overestimate how well they understood moderately difficult books that they read, and it becomes a huge problem with very hard material like CR books. Understanding of books should be tested, and one of the best methods of doing that is to write down your understanding and then share it with people who already understand the book and see if they agree that you have their position right. (You can do this test of understanding whether you agree or disagree with the material).

Summary: According to CR, making assumptions is not the way one solves problems. One solves problems by brainstorming solutions and doing error correction on the solutions. And while doing that, CR holds that it’s important to recognize the fallibility of all of our ideas. We should hold our ideas open to critical questioning and improvement, and expect that they can be improved, not take them to be true. (Here I'm contradicting "All reasoning has to start from assumptions." An "assumption" means a proposition taken to be true). CR holds things like: Don’t assume your ideas are true; keep looking for errors.


I originally wrote this in 2019 and I've made minor edits.



Learning and Unlearning Habits

When people learn a new computer game, what happens? Especially a pretty good gamer and a pretty fast-paced game. He forms some habits. He learns to press certain combos of buttons. He learns to react in X way to Y situation. He learns some pattern recognition – for various patterns, start shooting. For various other patterns, start blocking. Stuff like that.

So he’s creating, in a matter of minutes, new habits, new automatic reactions, new intuitions, new things that are now second nature or intuitive and he can do them without much thought. You have to get the basics of the game to be like that so you can think about more advanced strategy. Just as we automate walking around in real life, we also need to automate walking around in video games so we can focus on other parts of the games. (btw sometimes ppl automate video game controls so much that they forget what the controls are. like you ask them how they did that, and they are like “uhhhh i hit the button, idk i didn’t think about it”. sometimes they have to like look at their hand to see what buttons they are pressing, or stop and remember the buttons, or something. it’s so automatic they aren’t thinking about it. it’s a little like asking a person which muscles he uses when walking, except less hard.)

ok so this video game player is creating habits/automatizations/etc. and what always happens is: some are mistakes. so he has to unlearn some. he has to change some. some of his first guesses about how to play the game turn out wrong.

and that isn’t that big a deal. that’s just part of learning. you gotta do some unlearning too. video game players do that all the time. it’s so common.

sometimes you have to relearn things even if you didn’t make a mistake, btw. like you learn to beat a boss, then later there is a similar boss with some changes. so you take your old habits for the first boss and you make adjustments so they can work on the second boss. so in some situation, with the new boss, you have to stop yourself from doing Y after X, as you were in the habit of doing. you dismantle the habit that was automating that.

when people can’t dismantle or change automated habits it’s commonly an indication of irrationality, dishonesty, etc. it can also be an indication that the habit is used by a hundred other habits which rely on it, so it’s hard to mess with because of its complex involvement in lots of other stuff you don’t want to break. and ppl forget how habits work that they made long ago, especially in early childhood, which is what’s going on with some sexual orientation stuff (that’s in addition to the other things from earlier in this paragraph, it doesn’t have to be just one).


Mastery typically comes from practicing to the point that encountering new errors is rare, and you’ve figured out solutions to all the errors you’ve seen before (except maybe a few rare ones that you decided to ignore). When nothing is gonna go wrong then you can go faster and it starts getting boring consciously (cuz there’s nothing left for your conscious mind to do, no changes are needed, no additional creativity is needed) and you stop paying conscious attention to it. (people often stop paying conscious attention way too early, btw, which prevents them actually getting good at stuff.)


The above were two sections of a Fallible Ideas email I wrote in 2019. I edited the term "workstation" to "habit" in a few places. I talked about mental workstations in this post, but "habit" is clearer for people who haven't read that. I was answering a question about firing workers at one's mental workstations (aka automatized ideas, aka habits) or dismantling/retiring the workstations.

I like the metaphor of the mind as a factory with many workstations (with machines, robots or low-skill workers) and the conscious mind as a manager, inspector or leader who can go around and look at workstations, review what people are doing, make changes, build new workstations, etc., and when the manager isn't present the workstations keep running without him (the unconscious mind).

You can only look at one part of your mind at a time (or maybe fewer than ten parts at once), and the only way to get much done is with automation so stuff works without your manager/conscious-attention being there. Your mind is like a powerful factory that's mostly automated and whenever you need to do manual labor (conscious/manager attention) that's really inefficient and slow. Conscious/manager attention is best used for fixing workstations or creating new workstations, not for doing work that could be done by a workstation. (It's OK for the manager to do work a few times when you're new to it, to figure out how to do it, but then he needs to delegate. Practice should involve figuring out how to delegate and set up automated workstations to do something and get those working right, not your conscious mind doing everything itself. Practice should primarily be a process of automating, not a process of your consciousness/manager practicing stuff himself. Once you figure out how to do something initially, then further practice should be kinda like doing job-training for subordinates (the subordinates being cheap, plentiful mental resources that require little to no conscious attention once they're set up). The conscious mind tells them what to do then watches them try doing the work and gives corrections.)
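For illustration only, the metaphor could be sketched in code like this (all names are hypothetical; this is a cartoon of the metaphor, not a model of the mind). The manager is scarce and expensive; workstations are cheap and run unattended:

```python
# Hypothetical sketch of the mind-as-factory metaphor: conscious
# attention is scarce; automated workstations are cheap and plentiful.

class Workstation:
    """An automated habit: runs without the manager present."""
    def __init__(self, task, procedure):
        self.task = task
        self.procedure = procedure  # learned once, then runs unattended

    def run(self, job):
        return self.procedure(job)  # costs ~no conscious attention

class Manager:
    """Conscious attention: best spent building and fixing workstations,
    not doing work a workstation could do."""
    def __init__(self):
        self.factory = {}

    def do_manually(self, task, job, procedure):
        # OK a few times while figuring the task out -- slow and costly
        return procedure(job)

    def automate(self, task, procedure):
        # delegate: set up a workstation so future work is unattended
        self.factory[task] = Workstation(task, procedure)

    def handle(self, task, job):
        if task in self.factory:
            return self.factory[task].run(job)  # runs while manager is elsewhere
        raise LookupError(f"no habit for {task!r}: needs conscious attention")

m = Manager()
# First time: the manager does it manually to figure it out (slow).
print(m.do_manually("typing", "hello", lambda text: text))
# Then he delegates: future typing runs without conscious attention.
m.automate("typing", lambda text: text)
print(m.handle("typing", "hello again"))
```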



Henry Hazlitt on Practice

In Thinking as a Science (1916), Henry Hazlitt wrote (my emphasis):

The secret of practice is to learn thoroughly one thing at a time.

As already stated, we act according to habit. The only way to break an old habit or to form a new one is to give our whole attention to the process. The new action will soon require less and less attention, until finally we shall do it automatically, without thought—in short, we shall have formed another habit. This accomplished we can turn to still others.

I agree and have been advocating this for years. People learn to do something correctly, once, and then think they've learned it and they're done. But that's just the first step. For skills you'll use often, you should practice until you can do it cheaply, easily and reliably. E.g. it's important to be able to type using almost zero conscious attention so that I can focus my attention on the ideas I'm writing. It's best to think in an objective – not biased – way pretty much automatically in general so that you can focus on considering a specific topic (like economics); people who need to use a bunch of mental focus to avoid bias are at a big disadvantage because they have less attention left for the actual topic (and what often happens is, at some point, they focus their attention on the topic and then their habitual bias starts happening).



Learning to Mastery and Repetition

I originally wrote this to the Fallible Ideas email list in 2019.


every adult learned some stuff to the point of MASTERY – very low attention needed, can do it great while tired/low-focus/low-effort, very low error rate, etc.

like walking. and talking. and reading. and, for many people, basic arithmetic. and, for many people, the year WWII ended and the number of states in the US and the number of original colonies (they don’t have to stop and think about those things, they just know, instantly).

doesn’t work in all contexts. giving a speech or walking on ice are different. but that’s ok. they know that. they pay more attention in those contexts. they understand pretty well what is mastered and what isn’t.

there are generic things that ~everyone gains mastery over, like walking. and there are generic things that lots of ppl gain mastery over, like some basic arithmetic.

and there are other things that only a few ppl gain mastery over. like i mastered tons of chess skills. lots of stuff is automated to the point where i can play good chess moves in under 1 second. and i could still mostly do that even though i quit chess many years ago – like i’d be worse now, and rusty, but still worlds better than a beginner. and giving me 10 minutes to think about a move right now, vs. 10 seconds, still wouldn’t make a ton of difference. the skill i still have is still mostly automatic. (when i was actively playing, 10 seconds vs. 10 minutes also wasn’t a huge difference. it matters, especially when playing someone who is very similar skill level to you, but over 90% of your skill works within 10 seconds, and the extra 10min of thought only adds a bit extra.) btw i haven’t mastered chess as a whole, i just have mastery over lots of pieces of chess to the point that i’m a good player as a whole but certainly not the best. mastery doesn’t mean perfection overall, it can just mean mastering a specific piece of something, or sub-skill, and then you have mastery over that piece. mastery is about getting something to the point of it being really automatic – very low error rate while using very little conscious attention.

some ppl get really good at an instrument or a sport or many other things.

but most stuff that ppl master, they master in childhood. and they don’t remember the learning process very well. and so, as adults, they don’t have a good example to refer to for how to learn. they haven’t mastered anything recently.

most adults either learned to touch type as a kid or they still aren’t great at it. actually mastering it as an adult is uncommon.

Dennis replied:

I agree wholeheartedly. It's a really rewarding experience to have learned something new and somewhat mastered it as an adult. It's a neat way to reward one's future self. I still thank myself for teaching myself to 10 finger touch type last year. Somehow I had gotten by using just three or four fingers over the years, and this is just so much better now.

My original email continued:

so one of the things i recommend ppl do is master something. learn something. see how learning works. doesn’t matter what it is. just gotta succeed. it shouldn’t be very hard. don’t make philosophy be the first thing you learn really well in the last 20 years. that’s ridiculous. learn something easier for practice. you can learn a bit of philosophy but don’t go for mastery until you master some easier stuff.

the best thing to master, in general, for practice, is a video game. there are lots of options but video games have a lot of very good characteristics. but if you don’t like them, or you have something else that you really wanna use, you can consider alternatives. i have explained in the past what’s good about video games, what kinda characteristics to look for in something to master, and written about many game examples.

what lots of ppl do is learn stuff a little bit, halfway, don’t master it, and move on. then repeat.

so, yet again, i advise ppl to learn a video game to get a feel for mastery and how learning works. or master something else. but no one listens to me. to the extent anyone else here plays video games, they don’t stream it on twitch, they don’t master it, they don’t talk about it much, and they aren’t very good.

Dennis replied:

In one of Popper's essays I read the other day he talks about the difference between creative learning (ie problem solving) and learning by repetition. [...]

Do you differentiate at all between the two modes of learning? I've been wondering about Popper's remark about learning by repetition. He seems to claim that it's akin to induction, but induction is impossible, so... how could anyone learn by repetition? Also, I doubt people actually have two different modes of learning. [...]

I replied:

You can’t learn merely by repetition, you have to think about what will and won’t work. Repeating can’t figure out solutions and can’t do anything to find or correct errors.

Some of my examples are simpler because people should master some easier things before aiming for some harder ones. There has to be a progression.

In order to effectively think creatively about chess strategies, you can’t be too distracted by remembering how the pieces move. Practice does help automate one’s understanding of the piece movement rules. But practice isn’t just about repeating things, you think through what the rule for moving a piece is and figure out where it can go – it gets actual conscious attention when you’re learning it. You couldn’t just repeat correct piece movements without conscious attention, as a practice method, because you don’t know them well enough yet. (You could repeatedly move a rook back and forth between two adjacent squares, or something else simple, and thus make correct moves without thinking about it even though you don’t know the piece moves well, but you wouldn’t learn much by doing that, that’d be bad practice.)

It’s the same with everything else. Interesting, creative conscious thought is always building on many layers of thinking that were conscious in the past but no longer require conscious attention – that attention is now freed up for more advanced things.

Learning touch typing requires directing conscious attention to doing it correctly, as well as some creative problem solving – identifying what you’re screwing up and figuring out how to fix it. Generally this means doing things slowly at first so you can get it correct even though you’re barely able to do it. Then you speed up a bit at a time and check for new errors happening due to going faster. Trying the same thing at successively faster speeds isn’t really repetition because the speed is changing. You do repeat a little because of variance – to find out if you are making mistakes at a new speed, you might need to do it 20 times, perhaps more, depending on what sort of error rate is acceptable. Doing it once at a new speed doesn’t mean you can do it reliably at that speed. The same method is common with instruments and many other things people learn.
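That method can be sketched as a loop (a toy sketch; the numbers, the error model, and the function names are illustrative assumptions, not measurements):

```python
# Sketch of the practice loop described above: slow enough to be
# correct first, then small speed increases, re-checking the error
# rate with ~20 trials at each new speed.

import random

def measured_error_rate(speed, trials=20):
    """Stand-in for doing `trials` practice runs at a given speed and
    counting mistakes; here, mistakes simply get likelier with speed."""
    errors = sum(random.random() < speed / 400 for _ in range(trials))
    return errors / trials

def practice(start_speed=10, step=5, acceptable=0.10):
    """Raise speed step by step while reliability holds. One clean run
    proves little, so each speed is tested repeatedly (variance)."""
    speed = start_speed
    while measured_error_rate(speed) <= acceptable:
        speed += step
    return speed - step  # the last speed that was reliably correct

print(practice())  # e.g. 40 -- the fastest reliably-correct speed found
```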

Since there’s infinite potential progress, ideally ~all our current thinking would be so easy in the future that it takes almost no conscious attention, and we could consciously focus on more advanced things. I think this is an atypical goal, but important. I generally don’t regard things as finished if I can do them but it’s hard or slow or it only works 1 in 3 times (or even 99 out of 100 can be too low depending on what it is). As one example, I think it’s a travesty that most of the world’s so-called “intellectuals” can only read at 300 words per minute or less and aren’t trying to improve that; they think they’re done learning to read even though they do it slowly using lots of conscious attention.

