Coercion and Critical Preferences

Written Aug 2008, addressing ideas of Karl Popper and David Miller about critical preferences:

You should eliminate rival theories until you have exactly one candidate theory and then act on that. We thus dodge any issues of comparing two still-standing theories using some sort of criterion.

And that's it. The problem is solved. When there is only one candidate theory the solution is dead easy. But this solution raises a new problem which is how to deal with all the rival theories (in short order).

If you act on a theory while there are any active rivals that is coercion, which is the cause of distress. You are, roughly, forsaking a part of yourself (without having dealt with it in a rational fashion first).

Often we don't know how to resolve a controversy between two theories promptly, perhaps not even in our lifetime. But that does not mean we are doomed to any coercion. We can adopt a *single theory* with *no active rivals* which says "I don't know whether A or B is better yet, but I do know that I need to choose what to do now, and thus I will do C for reason D." A and B talk about, say, chemistry, and don't contain the means to argue with this newly proposed theory -- they don't address the issue now being considered, of what to do given the unsettled dispute between A and B -- so they are not relevant rival theories. We end up with only one theory about what to do: this new one we just invented. And acting on this new theory clearly does not forsake A or B; it's not in conflict with them.

We might invent two new theories, one siding more with A, and one more with B, and thus have a new conflict to deal with. But then we have a *new problem* which does not depend on resolving the dispute between A and B. And we can do as many layers of reinterpretations and meta-theorizing like this as we want. Coercion is avoidable and practical problems of action are soluble promptly.
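Here's a toy sketch of that move, in code. The representation (theories as claim/problem pairs, and the `active_rivals` check) is mine and purely illustrative, not part of the original argument:

```python
# Toy model: theories are (claim, problem) pairs, and two theories only
# rival each other if they address the same problem.

def active_rivals(theories, problem):
    """Theories that address the given problem."""
    return [t for t in theories if t["problem"] == problem]

theories = [
    {"claim": "A", "problem": "chemistry"},
    {"claim": "B", "problem": "chemistry"},
]

# A and B are active rivals about chemistry; the dispute is unsettled.
assert len(active_rivals(theories, "chemistry")) == 2

# The new theory addresses a *different* problem: what to do now, given
# the unsettled dispute. A and B don't address that problem, so the new
# theory has no active rivals, and we can act on it without forsaking
# either of them.
meta = {"claim": "do C for reason D, pending A vs B", "problem": "what to do now"}
theories.append(meta)
assert active_rivals(theories, "what to do now") == [meta]
```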

If it really comes down to it, just formulate one new theory "I better just do C for reason D *and* all arguments to the contrary are nothing but attempts to sabotage my life because I only have 3 seconds left to act." That ought to put a damper on rival theories popping up -- it'd now have to include a reason it's not sabotage.

One could still think of a rival theory which says it wants to do E because of B, and this isn't sabotage b/c the A-based C will be disastrous and it's trying to help, or whatever. There is no mechanical strategy for making choices or avoiding coercion. What I mean to illustrate is that we have plenty of powerful tools at our disposal. This process can go wrong, but there is plenty of hope and possibility for it to go right.


BTW this does not only apply to resolving rival theories *for purpose of action* or *within oneself*. It also works for abstract theoretical disputes between different people.

Suppose I believe X and you believe Y, *and we are rational*. Think of something like MWI vs copenhagen -- theories on that scale -- except something that we don't already know the answer to.

So we argue for a while, and it's clear that you can't answer some of my questions and criticisms, and I can't answer some of yours.

Most people would say "you haven't proven me wrong, and i see problems with your theory, so i am gonna stick with mine". That's called bias.

Some people might be tempted to analyze, objectively, which theory has more unanswered questions (weighted by importance of the question), and which theory has trouble with how many criticisms (weighted by amount of trouble and importance of criticism). And thus they'd try to figure out which theory is "better" (which doesn't imply it's true, or even that the other is false -- well, of course, strictly they are both false, but the truth might be a minor modification of the worse theory).

What I think they've done there is abandon the critical rationalist process and replace it with a misguided attempt to measure which theories are good.

What we should do is propose *new theories* like "the current state of the debate leaves open the question of X and Y, but we should be able to all agree on Z so for issue A we should do B, and for issue C we should do D, and that's something we can all agree on. We can further agree about what research ought to be done and is important to do to resolve questions about both X and Y." Thus we can make a new *single theory* that *everyone on both sides can agree to* which does not forsake X or Y. This is the *one rational view to take of the field*, unlike the traditional approach of people being in different and incommensurable camps. This view will leave them with *nothing to argue about* and *no disagreement*.

Of course, someone might say it's mistaken and propose another view of the same type. And so we could have an argument about that. But this new argument does not depend on your view in the X vs Y dispute. It's a new problem. Just like above. And if it gets stuck, we can make another meta-theory.

This approach I advocate follows the critical rationalist process through and through. It depends on constructing new theories (which are just guesses and may go wrong) and criticizing them. It never resorts to static criteria of betterness.

Elliot Temple | Permalink | Comments (0)

Criticism is Contextual

Criticism is contextual. The "same idea" as far as explicit content and the words it's expressed in, can have a different status (criticized, or not) depending on the context, because the idea might successfully solve one problem but not solve some other problem. So it's criticized in its capacity to solve that second problem, but not the first. This kind of thing is ubiquitous.

Example: you want to drive somewhere using a GPS navigator system. The routes your GPS navigator system gives you are often not the shortest, best routes. You have a criticism of the GPS navigator system.

Today, you want to drive somewhere. Should you use your GPS? Sure! Why not? Yes you had a criticism of it, but that criticism was contextual. You criticized it in the context of wanting shortest routes. But it's not criticized in the context of whether it will do a good job of getting you to today's destination (compared to the alternatives available). It could easily lead you to drive an extra hundred meters (maybe that's 0.1% more distance!) and still get you to your destination on time. Despite being flawed in some contexts, by some criteria, it could still get you there faster and easier than you would have done with manual navigation.

That there is a criticism of the GPS navigator system does not mean never to use it. Nor does it mean if you use it you're acting on refuted or criticized ideas. Criticism is contextual.
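A toy sketch of the point, with made-up names of my own: criticisms attach to an (idea, context) pair, not to the idea alone.

```python
# A criticism targets an idea *in a context* (the problem the idea is
# being used to solve), not the idea in all contexts.

criticisms = {
    ("gps", "find the shortest route"): "routes are often not the shortest",
    # No entry for ("gps", "get to today's destination on time"):
    # in that context the GPS is not criticized.
}

def is_criticized(idea, context):
    return (idea, context) in criticisms

assert is_criticized("gps", "find the shortest route")
assert not is_criticized("gps", "get to today's destination on time")
```

So using the GPS today doesn't mean acting on a criticized idea: the idea being acted on is "use the GPS to get to today's destination", and that one has no criticism.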

Elliot Temple | Permalink | Comments (0)

Rational People

I wrote this in Dec 2008.

Rational people are systems of ideas that can temporarily remove any one idea in the system without losing identity. We can remain functional without any one idea. This means we can update or replace it. And in fact we can often change a lot of ideas at once (how many depends in part on which).

To criticize one idea is not to criticize my rationality, or my ability to create knowledge, or my ability to make progress. It doesn't criticize what makes me human, nor anything permanent about me. So I have no reason to mind it. Either I will decide it is correct, and change (and if I don't understand how to change, then no one has reason to fault me for not changing yet), or decide it is incorrect and learn something from considering it.

The way ideas die in our place is that we change ourselves, while retaining our identity (i.e., we don't die), but the idea gets abandoned and does die.

Elliot Temple | Permalink | Comments (0)

Avoiding Coercion Clarification

In reply to my Avoiding Coercion essay, I got a question about the need for creativity and thinking of good questions as part of the process. In March 2012, for the Beginning of Infinity list, I clarified:

There is no simple method of creating questions. Questioning stuff is creative thinking, like criticizing, and so on.

The point of this method is that it's so trivially easy. Anything that relies on thinking of good questions is not reliable. We don't reliably have good ideas. We don't reliably notice criticisms of ideas we consider. Sometimes we don't reach a conclusion on a tricky issue for days, or years, or ever.

If avoiding coercion required creative questions, imaginative critical thinking, and so on -- if a bunch of success at all that stuff was required -- then it wouldn't be available to everyone, all the time. It would fail sometimes. There's no way to always do a great job at those things, always get them right.

But one of the big important things is: we can always avoid coercion. It's not like always figuring out the answer to every math problem put in front of us. Every math problem we're given is soluble, and given infinite time/energy/attention/learning/etc we could figure it out. But the problems which threaten coercion aren't like that. They don't require infinite time/resources. We can deal with all of them, now, reliably, enough not to be coerced.

That is what the article is about: the simple method that we can use reliably without making a big creative effort.

Now the method doesn't 100% ignore creativity. You can use creative thought as part of it, and you should. But even if you do a really bad job of that, you reach success anyway (where success means non-coercion). If you have an off day solving math problems, maybe you don't solve any today. But an off day for creative thinking about your potentially coercive problems need not lead to any coercion.

The method doesn't require you to be thinking of good questions as you go along. If you do, great. If you don't, it works anyway. Which is necessary to reach the conclusion: all family coercion is avoidable, in practice, today, without being superman.

(I weakened the claim from dealing with all coercion because I don't want to comment on the situation where you get abducted by terrorists at gunpoint or other situations with force in them. That's hard. But all regular family situations are very manageable.)

Elliot Temple | Permalink | Comments (0)

Multiple Incompatible Unrefuted Conjectures

I wrote this in Jan 2011 for the Fabric of Reality email list.

I had written:
in my scheme of things being refuted is the only reason not to believe (tentatively accept) a theory,
And Peter D had replied:
That would leave you believing multiple incompatible theories, since there are multiple incompatible unrefuted conjectures.
This is a good issue to raise.

The outline of the answer is: you can and must always reach a state of having exactly one non-refuted theory/idea/conjecture (for any single issue). (Note: theory/idea/conjecture are basically interchangeable, see: )

You are correct that if it weren't for this then our epistemology would not work (pending a brilliant new idea). We have known that for a long time and solved the problem.

Around 20 years ago, David Deutsch was creating a new educational philosophy called Taking Children Seriously.

One of the core ideas of that philosophy is a concept he calls "coercion". Be careful b/c it is defined differently than the dictionary. When clarity is desired, it can be called TCS-coercion.

TCS-coercion is the state of having multiple contradicting and unrefuted conjectures active in your mind, and acting or choosing, using one as against the others, while the conflict is still unresolved.

This issue has been important to David's views on epistemology and education for some 20 years. In that time, it has been addressed!

TCS-coercion is a crucial concept which is connected not only to epistemology and education, but also to psychology, to the issue of force and the issue of suffering. It is consequently connected to politics and to morality. I will not be surprised if connections to other fields are found too.

As an example of a possible major consequence of these ideas, we have conjectured that TCS-coercion A) always causes suffering B) is the only cause of suffering.

TCS-coercion is not an easy-to-understand concept. Many people interested in TCS have failed to grasp it, and there have been many conversations on the topic.

Another big idea, which is pretty easy to understand the gist of and may be interesting (but which is much harder to substantiate) is this: coercion-theory is basically the theory of disagreements (conflicts between theories) and how to resolve them. The rational truth-seeking method for approaching disagreements is the same when the ideas are in one mind or in two separate minds. Disagreement in one mind is TCS-coercion and disagreement in two minds is conflict between people and in both cases the conflict should be resolved by the same truth seeking methods that create harmony/cooperation.

The problem Peter brings up about multiple unrefuted and incompatible conjectures, and the issue of TCS-coercion, are closely related. They basically share a single answer to the question: how does one avoid being coerced?

Addressing it is not simple. My guess is you (Peter, and most of the audience) will not be able to understand the topic before you understand the more basic points of Popperian epistemology like induction being a fictitious process, justificationism being a mistake, the impossibility of positive support, that all observation is theory laden, fallibilism, the communication gap, that all learning is by C&R, and so on.

However if you are interested the best place to start, for purposes of getting straight to the meat of the issue and skipping the prerequisites, is by reading these:

If you don't understand them, or think you see a flaw in them, and you wish to comment, please try to make your comments simple and narrow and aim to focus on one important thing at a time. I think if we talk about everything at once it will be very confusing.

Besides reading those pages, it would also help to read all the other pages on that website, all of my blog posts, and all of David Deutsch's posts that you can find in the FoR list archives. Plus various books.

If you think that's a lot of reading you are correct. But it helps one learn. I myself read every single one of David's TCS emails in addition to his FoR emails and everything else I could find. That was around a decade worth of old emails from before I joined the list. Reading them helped me to understand things better.

Anyone interested in learning more about these topics can also join the email list for it:

Elliot Temple | Permalink | Comments (0)

We Can Always Act on Non-Criticized Ideas

I originally wrote this for the Beginning of Infinity email list in Jan 2012.

Consider situations in the general form:

X disagrees with (conflicts with) Y about Z.

X and Y could be people. (Really: ideas in people.)

Or they could be ideas within one person.

One or both could be criticisms (explanations of mistakes, rather than positive ideas about what's good).

Z, by the way, might be more than one thing. X and Y can also be multi-part.

Let's consider a more specific example.

X is some idea, e.g. that I'll have pizza for dinner.

Y is a criticism of X, e.g. that I haven't got enough money to afford pizza.

So, what happens? I use an option that I have no criticism of. I get a dinner I can afford.

Now we'll add more detail to make it harder. This time, X also includes criticism of all non-X dinner plans, e.g. that they won't taste like pizza (and pizza tastes good, which is a good thing).

Now I can't simply choose some other dinner which I can afford, because I have a criticism of that.

To solve this, I could refute the second part of X and change my preferences, e.g. by figuring out that something else besides pizza is good too. Or I could acquire some more money. Or adjust my budget.

There are always many ways forward that I would potentially not have any criticism of.

What if I get stuck? I want pizza, because it's delicious, but I also don't want pizza, because I'm too poor. Whatever I do I have a criticism of it. I try to think of ideas like adjusting my budget, or eating something else, but I don't see how to make them work.

There is a simple, generalized approach. I don't have to think haphazardly and hope I find an answer.

All conflicts, as we've been discussing, always raise new problems. In particular:

X disagrees with (conflicts with) Y about Z.

If we don't solve this directly, it raises the problem:

Given X disagrees with (conflicts with) Y about Z, then what should I do?

This is a new problem. And it has several positive features:

1) This technique can be repeated unlimited times. If I use the technique once then get stuck again, I can use the technique again to get unstuck.

2) In general, any solutions we think of for this new problem will not be criticized by any of the criticisms we already had in mind. This makes it relatively easy to come up with a solution we don't have any criticism of. All those criticisms we were having a hard time with are not directly relevant.

3) Every application of this technique provides an *easier problem* than we had before. So we don't just get a new problem, but also an easier one. This, combined with the ability to use the technique repeatedly, lets us make our problem situation as easy as we like.

Why do the problems get progressively easier? Because they are progressively less ambitious. They accept various things, for the moment, and ask what to do anyway, instead of trying to deal with them directly.

The new problems are also more targeted to dealing with the specific issue of finding a way to move forward. This additional focus, instead of just on figuring stuff out generally, makes it easier. It tends towards a minimal solution.

In the context of disagreements between persons, the problems this technique generates progressively tend towards less cooperation, which is easier. In the context of ideas within a person, it's basically the same thing but a little harder to understand.

So, that's why this is true (which I'd written previously):
Premise: there is an available option that one doesn't have a criticism of (and it can be figured out fast enough even in time pressure)
Because we can get past any sticking points, in a simple way, while also reducing our problem(s) to easier problem(s) as much as necessary. (It only works for the purpose of figuring out an option for how to proceed, not for resolving any conflict of ideas. But we only have time limits in the context of needing to proceed or act or choose.)
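The technique can be sketched roughly in code. Everything here is a stand-in of mine, not from the original text: `solve` represents ordinary creative thinking about a problem, which may fail.

```python
# When stuck on a problem, pose the easier problem "given this is
# unresolved, what should I do now?" and try again. Prior criticisms
# targeted the old problem, so they don't directly apply to the new one.

def act(problem, solve, max_layers=10):
    for _ in range(max_layers):
        option = solve(problem)  # may fail and return None
        if option is not None:
            return option
        # Stuck: form the new, less ambitious, more targeted problem.
        problem = f"given ({problem}) is unresolved, what should I do now?"
    return "do nothing for now and revisit later"

# Example: every direct dinner option is criticized, but one layer up
# an uncriticized option is easy to find.
def solve(problem):
    if problem.startswith("given"):
        return "eat an affordable dinner tonight; rethink the pizza budget this weekend"
    return None  # stuck on the original pizza-vs-budget conflict

print(act("pizza vs budget", solve))
```

Each layer accepts the unresolved conflict for the moment and asks only how to move forward, which is why the problems get progressively easier.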

See also:

Elliot Temple | Permalink | Comments (0)

All Problems Are Soluble

I originally wrote this for the Beginning of Infinity email list in Jan 2012.

To consider whether all problems are soluble (strong version), it's important to consider:

- what is a problem?
- what is a solution?

Notice that I did not include the qualifier "unless it violates the laws of physics" any more than one about time limits or having enough knowledge. Why not?

Consider the problem of an asteroid going twice as fast as a photon. How can that be solved?

Answer: It is not a problem. Asteroids do not have problems.

Only persons have problems.

Here is what my dictionary has for problem: "a matter or situation regarded as unwelcome or harmful and needing to be dealt with and overcome". Notice that "regarded as" implies a mind doing some thinking.

Situations like an asteroid moving at speed S slower than a photon are not inherently problematic. Whether they are a problem or not is a matter of interpretation (how it's regarded). That requires a person to interpret.

Note: the word "problem" is routinely used in more than one way. For example we will say, "Yesterday I worked on a fun math problem. I haven't solved it yet." The "math problem" is *welcome*, not unwelcome, but is called a "problem". With this use of the word problem, *no solution is even needed* since no one minds the problem. My dictionary tries to cover this use with "a thing that is difficult to achieve or accomplish" which is decent but I think imperfect.

This type of problem is not the one we're talking about with "all problems are soluble". But these can turn into the other kind of problem -- the type regarded as unwelcome -- and if that happens then we are talking about it.

So, problem 2: a *person* wants to make an asteroid move twice as fast as the speed of light.

How can he solve that? Doesn't solving it violate the laws of physics?

(Note: we could actually turn out to be mistaken about the laws of physics, but that's not important to our discussion.)

To solve this one, we need a more nuanced conception of what a solution is. Not all problems are solved in the straightforward way.

This problem can be solved by the person changing his preference. If he no longer wants to make the asteroid go twice as fast as the speed of light, it will no longer be "regarded as unwelcome", and so there is no more problem.

So, you may be wondering: can we solve all problems by changing our preferences? Wouldn't that be vacuous? Or does this technique have limits?

There are objective truths about which preferences are good or bad.

Bad is stuff like: unobtainable, counter productive, doesn't solve the problem it intends to solve, or aimed at a bad problem.

People cannot arbitrarily change their preferences by an act of will. They can only change them, in short, when they are persuaded that the new preference is better. This limits preference changing only to objectively better preferences. (As a complicating factor, which I won't explore here, they can also change preferences when they think a new one is better but are mistaken, but don't know they are mistaken.)

This allows for changing preferences as a solution (or part of a solution) to the extent it's objectively needed -- because bad preferences themselves cause problems -- but limits it from being arbitrarily used for everything.

Elliot Temple | Permalink | Comments (0)

Handling Information Overload

I wrote this in May 2009 and posted it to the TCS list. It provides an example of how problem solving works: how can we always find a solution ("common preference", aka something non-coercive to do) in time? This is an illustration of the method.

Here is a way to approach information overload (whether novel or not):

make a very rough estimate of the amount of information

make a rough estimate of how much time you have before the next deadline or important event

consider several methods of dealing with the information (such as thinking it all through carefully, or skimming it, or setting it aside for a week until you're less busy) and choose one that you expect will finish fast enough not to cause any problems.

With this simple technique, one can remain calm in the face of an unlimited amount of information, and it doesn't matter how slow one processes information. If necessary, just say "wow, that's too much to handle right now. i will look at it later and decide how to approach it then."
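The steps above can be sketched roughly; the time estimates and the list of candidate methods here are placeholders of mine, not from the original post.

```python
# Rough triage: estimate the information, estimate the time available,
# and pick the most thorough handling method expected to finish in time.

def choose_method(est_hours_of_info, hours_until_deadline):
    # Candidate ways of dealing with the information, most thorough
    # first, with a rough time cost for each (in hours).
    methods = [
        ("think it all through carefully", est_hours_of_info * 2.0),
        ("skim it", est_hours_of_info * 0.2),
        ("set it aside and decide later", 0.1),
    ]
    for name, cost in methods:
        if cost <= hours_until_deadline:
            return name
    return "set it aside and decide later"

# A truly huge pile of information makes the choice *easier*, not
# harder: it's obvious you can't process it all this minute.
assert choose_method(1000, 2) == "set it aside and decide later"
assert choose_method(1, 0.5) == "skim it"
assert choose_method(1, 3) == "think it all through carefully"
```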

CEOs and other leaders do this kind of thing all the time. it's perfectly normal. if they need to make a decision, and their advisors are rushing or confusing them, they say "please write it up in a report" and then they have time to read the report and make a decision without being rushed or anything.

this technique can fail. things can go wrong. it's not 100% foolproof. in those cases, creativity is required. and in those cases, the amount of information you're facing is irrelevant. it never fails b/c the amount is large. if the amount is truly huge, that only makes it *easier* to decide how to deal with it. it makes it very clear you can't process all of it this minute, so you'll know not to try.

what especially makes this technique fail is entrenched theories, irrationality, hang ups. that's why they cause coercion, frustration, psychological trauma, etc. (btw the TCS technique for avoiding coercion is fairly parallel and similar to the technique i described above. roughly, if you can't resolve a dispute btwn two active theories now, adopt a single theory for how to proceed in the meantime, and figure it out later.)

Elliot Temple | Permalink | Comments (0)

Critical Preferences

Originally posted Feb 2010 at

What problem is the idea of a “critical preference” intended to solve? (And how does it solve it?) I think the problem is this:
We form theories to solve our problems, and we criticize them. Sometimes we decisively refute a theory. If this leaves us with exactly one theory standing, there is no problem, we should prefer that theory.

Refutations can be hard to create. Often there are several theories offered as solutions to one problem, which contradict each other, but which are not decisively refuted. What are we to do then? The intellectual answer is to invent new criticisms. That may take years, so there is a pragmatic problem: we must get on with our life, and some of our decisions, today, may depend on these theories.

The idea of a critical preference is aimed at solving the pragmatic problem: how should we proceed while there is a pending conflict between non-refuted theories?
Popper proposes (without using the term “critical preference”) that we can form a critical preference for one theory, and proceed using that theory in preference to the others. The critical preference should be for whichever theory best stands up to criticism, or in Popper’s words the theory that “in the light of criticism, appears to be better than its competitors” (C&R p 74). Popper writes something similar in Objective Knowledge, p 82 (see also pages 8, 16, 22, 41, 67, 95, 103). Similarly, Firestone wrote, “The best problem solution is the competitive alternative that best survives criticism.”

(How we judge which theories are better, or best survive criticism, is another question, and Popper gives various guidance (e.g. C&R p 74, and the idea of corroboration), as does Deutsch (e.g. his recommendation to prefer explanations that are hard to vary), but I’m not interested in that here.)

Would others here agree that this is the problem and solution of critical preferences? (My purpose here is that I think it is mistaken, and I want to get the theory right prior to offering criticism. Perhaps I’ve misunderstood it.)

Follow up post:

Elliot Temple | Permalink | Comments (0)


Originally posted April 2010 at

When we corroborate a theory (i.e., it passes tests), the theory is better in some way. This is a dangerous statement because being better sounds like it’s more supported.

The way it’s better is this: it is now harder to invent rival theories which are not already refuted by existing knowledge. The scope for rival theories is reduced because they have more evidence they have to be consistent with.

Better tests are the ones which will more greatly reduce the scope for possible rival theories. Corroboration increases our stock of known criticism. More severe tests increase it more.

In this way, we can clearly see that corroboration is distinct from confirmation, and is not a type of confirmation, and does not play a related role to confirmation. Its role is exclusively criticism oriented, and not support oriented.

This is the only valid way to “support” theories: by building up a stock of known criticisms of potential rivals.
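Here's a toy model of my own (not Popper's formalism) of the point: corroboration grows the stock of evidence that any rival theory must be consistent with, shrinking the scope for unrefuted rivals.

```python
# Candidate theories are modeled as predicted outcomes of tests; a
# theory survives only if it is consistent with all evidence so far.

evidence = []

def viable(theory_predictions):
    return all(theory_predictions.get(obs) == result for obs, result in evidence)

t1 = {"test1": "pass", "test2": "pass"}
t2 = {"test1": "fail", "test2": "pass"}

assert viable(t1) and viable(t2)   # no evidence yet: both survive

# Run test1 and observe a pass: the corroborated result now refutes t2.
# The "support" for t1 is nothing but criticism of its potential rivals.
evidence.append(("test1", "pass"))
assert viable(t1) and not viable(t2)
```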

Elliot Temple | Permalink | Comments (0)

Conflict, Criticism, Learning, Reason

I wrote this in March, 2010.

We are fallible: we can and do make mistakes, often. Therefore we have a great need for mistake (error) correction. Mistakes cause bad things like suffering. The way to correct mistakes is find or notice bad things, criticize them, and come up with better ideas which will solve the problem. Criticism consists of explanations of mistakes: what the mistake is, and why it's bad.

Creating knowledge (learning) consists of imaginatively coming up with ideas and then using criticism to correct mistakes in them and thus improve them. No method of creating ideas reliably creates good ideas directly. Ideas have to be improved. This process is called evolution and the creation of knowledge in genes is another instance of it.

In science, we refer to evidence in our criticisms. When an empirical claim does not match the physical facts, that's a flaw in it. However, even in science most ideas are never empirically tested, but instead are rejected due to philosophical criticism first. Ideas need to be good explanations. They should be non-arbitrary, solve some problem or answer some question, be clear, understandable and unambiguous, be self-consistent, and not contradict any existing ideas which we have no criticism of.

A rational lifestyle (way of life including policies, institutions, traditions, ways of thinking, philosophical attitudes, background knowledge, etc) is one that does a good job of correcting mistakes rather than repeating mistakes. It's important because mistakes are common and uncorrected mistakes hurt people. It's also important because it's a knowledge creating lifestyle and knowledge lets us solve our problems and make progress.

An important way to judge a lifestyle is by how it treats disagreements or conflicts. When there's no disagreement or conflict, life is straightforward; everyone agrees about what to do, and doesn't see any problem with it, so just do that. When there is a conflict or disagreement, a rational lifestyle will not use force but will instead focus on critical discussion and persuasion with an aim towards conflict resolution.

Getting your way by force is bad because winning a fight does not magically make your ideas good. Force does nothing to improve our ideas, nor to evaluate which are good. Force also hurts people, and using force risks losing the fight and being hurt. When we use force, we do not correct our mistakes; choosing for conflicts to be determined by force is an irrational lifestyle. The only reason to use force is if the other guy has decided the disagreement will be settled by force, thus precluding a rational outcome or any learning, and so you're just using force to defend yourself.

In a disagreement, it's crucial for me to remember that I may be mistaken. My goal should not be to get my way, but rather to consider that the other guy's ideas might have some value, and this is an opportunity for both of us to learn something. Clashes of ideas can help us discover flaws and improve.

The first rule of conflict resolution is that the harder time you're having coming to an agreement, the less ambitious you should be. If you can't cooperate to mutual benefit, don't cooperate. Go your separate ways. If one person wants to end the discussion, and he's under no clear, specific and binding obligation to carry on (examples: people are obligated by contracts they agree to, and parents are obligated to feed their children), then the discussion must end. The reason the discussion must end is that there's no way to continue it without using force to prevent the person from leaving. Of course if you can tell him a good reason to continue then go ahead, but if he doesn't want to listen to any more of your ideas there's nothing you can rationally do about that. He may be making a big mistake; but on the other hand, you might be. To think that using force to get the outcome you think best would improve matters is to assume you are right; but in a disagreement, you might be mistaken. Plus fighting is destructive and you might lose.

Conflict resolution does not just apply to disagreements between two people. In its purest form, it applies to disagreements between ideas, which may both be within the same mind. In this case, going separate ways is not an option, but there is a solution. Conflicts of ideas are only dangerous when there is a decision to be made; otherwise we can just peacefully ponder them until one day we come up with an answer. If we can't resolve a conflict directly in time to make the decision, we can consider the question, "Given this conflict is undecided, how should I make the decision?" This method can be repeated if further conflicts come up. In this way, decision making can continue while an unlimited number of conflicts between ideas remain. All we need is one idea about what to do which isn't in any conflict and which we have no criticism of.

Learning in general is about creating one single idea. Whenever we have multiple ideas, we don't know what to think. But we use criticism to eliminate mistaken ideas. When we get to exactly one idea, then we can tentatively accept it. What if we have two conflicting ideas and can't think of any criticism of either? Don't worry about it too much; maybe you'll come up with a criticism later, or someone else will. And what if we have to make a decision that depends on this conflict? Then use the method above: consider the conflict undecided and come up with a decision compatible with that. And what if we have zero ideas that survive criticism? Then come up with one idea about what to do which is compatible with not knowing the answer.

Liberalism is this rational style of decision making and conflict resolution applied to political questions. For example, liberalism highly values freedom, and freedom means that you can live your life without anyone using force against you. The Government uses force (and, more often, threat of force) routinely, and therefore the Government should be improved to use less force and, ultimately, not to use any non-defensive force. The free market is freedom/liberalism applied to economics. It recognizes that force never helps anything, so it insists that all interactions be voluntary.

Morality is rational epistemology applied to decision making. In this way, one will never hurt an innocent, will resolve all his conflicts amicably (unless the other person irrationally prevents that), will solve as many of his problems as he can, and will have the best life he knows how to. And one can do this while always learning so that his problem solving capabilities, and the best life he understands, continuously improve.

In each of these fields, there is an objective truth. That is what makes agreement, consent, voluntary cooperation, and so on attainable. If every arbitrary idea were as good as every other, people would have no reason to ever change their minds or care what anyone else thought.

Elliot Temple | Permalink | Comments (0)

Induction is Wrong. A lot

There are two particularly hard parts of explaining why induction is false. First, there are many refutations. Where do you start? Second, most refutations are targeted at professional philosophers. What most people mean by "induction" varies a great deal.

Most professional philosophers are strongly attached to the concept of induction and know what it is. Most people are strongly attached to the word "induction" and will redefine it in response to criticism.

In *The World of Parmenides*, Popper gives a short refutation of induction. It's updated from an article in Nature. It involves what most people would consider a bunch of tricky math. To seriously defend induction, doesn't one need to understand arguments like this and address them?

Some professional philosophers do read and respond to this kind of thing. You can argue with them. You can point out a mistake in their response. But what do you do with people who aren't familiar with the material and think it's above their head?

If you aren't familiar with this argument against induction, how do you know induction is any good? If you don't have a first hand understanding of both the argument and a mistake in it, then why take sides in favor of induction?

Actually, inductivists have more responses open to them than pointing out a mistake in the argument or rejecting induction (or evading, or pleading ignorance). Do you know what the other important option is? Or will you hear it for the first time from me in the next paragraph, and then adopt it as your position? I don't recommend getting your position on induction from someone who thinks induction is a mistake – all the defenses I bring up are things I already know about and I *still* consider induction to be mistaken.

Another option is to correctly point out that Popper's refutation only applies to some meanings of "induction", not all. It's possible to have a position on induction which is only refuted by other arguments, not by this particular one. I won't help you too much though. What do you have to mean by "induction" to not be refuted by this particular argument? What can't you mean? You figure it out.

Popper argues against induction in books like LScD, C&R, OK, RASc. Deutsch does in FoR and BoI. Should I repeat points which are already published? What for? If some inductivist doesn't care to read the literature, will my essay do any good? Why would it?

I recently spoke with some Objectivists who said they weren't in favor of enumerative induction. They were in favor of the other kind. What other kind? How does it work? Where are the details? They wouldn't say. How do you argue with that? Someone told me that OPAR solves the problem of induction. OPAR, like ITOE, actually barely mentions induction. Some other Objectivists were Bayesians. Never mind that Bayesian epistemology contradicts Objectivist epistemology. In any case, dealing with Bayesians is *different*.

One strategy is to elicit from people *their* ideas about induction, then address those. That poses several problems. For one thing, it means you have to write a personalized response to each person, not a single essay. (But we already have general purpose answers by Popper and Deutsch published, anyway.) Another problem is that most people's ideas about induction are vague. And they only successfully communicate a fraction of their ideas about it.

How do you argue with people who have only a vague notion of what "induction" is, but who are strongly attached to defending "induction"? They shouldn't be advocating induction at all without a better idea of what it means, let alone strongly.

There are many other difficulties as well. For example, no one has ever written a set of precise instructions for how to do induction. They will tell me that I do it every day, but they never give me any instructions, so how am I supposed to do it even once? Well, I do it without knowing it, they say. Well, how do they know that? To decide I did induction, you'd have to first say what induction is (and how it works, and what actions do and don't constitute doing induction) and then compare what I did against induction. But they make no such comparison – or won't share it.

Often one runs into the idea that if you get some general theories, then you did induction. Period, the end. Induction means ANY method of getting general theories whatsoever. This vacuous definition helps explain why some people are so attached to "induction". But it is not the actual meaning of "induction" in philosophy which people have debated. Of course there is SOME way to get general theories – we know that because we have them – the issue is how do you do it? Induction is an attempt to give an answer to that, not a term to be attached to any answer to it.

And yet I will try. Again. But I would like suggestions about methods.

Induction says that we learn FROM observation data. Or at least from actively interpreted ideas about observation data. The induced ideas are either INFALLIBLE or SUPPORTED. The infallible version was refuted by Hume among others. As a matter of logic, inductive conclusions aren't infallibly proven. It doesn't work. Even if you think deduction or math is infallible (it's not), induction STILL wouldn't be infallible.

Infallible means error is ABSOLUTELY 100% IMPOSSIBLE. It means we'll never improve our idea about this. This is it, this is the final answer, the end, nothing more to learn. It's the end of thinking.

Although most Objectivists (and most people in general) are infallibilists, Objectivism rejects infallibilism. Many people are skeptical of this charge and deny being infallibilists. Why? Because they are only infallibilists 1% of the time; most of their thinking, most of the time, doesn't involve infallibilism. But that still makes them infallibilists. It's just like how, if you think only 1% of haunted houses really have a ghost, you are superstitious.

So suppose induction grants fallible support. We still haven't said how you do induction, btw. But, OK, what does fallible support mean? What does it do? What do you do with it? What good is it?

Support is only meaningful and useful if it helps you differentiate between different ideas. It has to tell you that idea X is better than idea Y which is better than idea Z. Each idea has an amount of support on a continuum and the ones with more support are better.

Apart from this not working in the first place (how much support is assigned to which idea by which induction? there's no answer), it's also irrational. You have these various ideas which contradict each other, and you declare one "better" in some sense without resolving the contradiction. You must deal with the contradiction. If you don't know how to address the contradiction then you don't know which is right. Picking one is arbitrary and irrational.

Maybe X is false and Y is true. You don't know. What does it matter that X has more support?

Why does X have more support anyway? Every single piece of data you have to induce from does not contradict Y. If it did contradict Y, Y would be refuted instead of having some lesser amount of support. Every single piece of data is consistent with both X and Y. It has the same relationship with X and with Y. So why does Y have more support?

So what really happens if you approach this rationally is everything that isn't refuted has exactly the same amount of support. Because it is compatible with exactly the same data set. So really there are only two categories of ideas: refuted and non-refuted. And that isn't induction. I shouldn't have to say this, but I do. That is not induction. That is Popper. That is a rejection of induction. That is something different. If you want to call that "induction" then the word "induction" loses all meaning and there's no word left to refer to the wrong ideas about epistemology.
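The two-bin point can be made concrete with a sketch (my own illustration, with made-up theories): data can refute candidates, but every surviving candidate has exactly the same relationship to the data, so there is nothing for "more support" to latch onto.

```python
# Observed (input, output) pairs:
data = [(1, 2), (2, 4), (3, 6)]

# Three candidate general theories (all names and rules invented here):
candidates = {
    "X: output doubles the input": lambda x: 2 * x,
    "Y: doubles below 10, triples after": lambda x: 2 * x if x < 10 else 3 * x,
    "Z: output is the input squared": lambda x: x * x,
}

def refuted(theory):
    # A theory is refuted if any observation contradicts it.
    return any(theory(x) != y for x, y in data)

survivors = [name for name, t in candidates.items() if not refuted(t)]
print(survivors)  # Z is out; X and Y both remain
```

X and Y are compatible with the identical data set (they only disagree about unobserved inputs, like 10 and up), so by the argument above neither one has more support than the other; only criticism going beyond the raw data could separate them.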

Why would some piece of data that is consistent with both X and Y support X over Y? There is no answer and never has been. (Unless X and Y are themselves probabilistic theories. If X says that a piece of data is 90% likely and Y says it's 20% likely, then if that data is observed the Bayesians will start gloating. They'd be wrong. That's another story. But why should I tell it? You wouldn't have thought of this objection yourself. You only know about it because I told you, and I'm telling you it's wrong. Anyway, for now just accept that what I'm talking about works with all regular ideas that actually assert things about reality instead of having built-in maybes.)

Also, the idea of support really means AUTHORITY. Induction is one of the many attempts to introduce authority into epistemology.

Authority in epistemology is abused in many ways. For example, some people think their idea has so much authority that if there is a criticism of it, that doesn't matter. It'd take like 5 criticisms to reduce its authority to the point where they might reject it. This is blatantly irrational. If there is a mistake in your idea it's wrong. You can't accept or evade any contradictions, any mistakes. None. Period.

Just the other day a purported Objectivist said he was uncomfortable that if there is one criticism of an idea then that's decisive. He didn't say why. I know why. Because that leaves no room for authority. But I've seen this a hundred times. It's really common.

If no criticism is ever ignored, the authority never actually gets to do anything. Irrationally ignoring criticism is the main purpose of authority in epistemology. Secondary purposes include things like intimidating people into accepting your idea.

But wait, you say, induction is a method of MAKING theories. We still need it for that even if it doesn't grant them support/authority.

Well, is it really a method of making theories? There's a big BLANK OUT in the part of induction where it's supposed to actually tell you what to do to make some theories. What is step one? What is step two? What always fills in this gap is intuition, common sense, and sometimes, for good measure, some fallacies (like that correlation implies or hints at causation).

In other words, induction means think of theories however (varies from person to person), call it "induction", and never consider or examine or criticize or improve your methods of thinking (since you claim to be using a standard method, no introspection is necessary).

For any set of data, infinitely many general conclusions are logically compatible. Many people try to deny this. As a matter of logic they are just wrong. (Some then start attacking logic itself and have the audacity to call themselves Objectivists). Should I go into this? Should I give an example? If I give an example, everyone will think the example is STUPID. It will be. So what? Logic doesn't care what sounds dumb. And I said infinitely many general conclusions, not infinitely many general conclusions that are wise. Of course most of them are dumb ideas.
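The logical claim here can be checked with a tiny example of my own (the data and formulas are invented for illustration). Given four observations that look like y = x², every member of an infinite family of rival generalizations also fits the data exactly:

```python
# Four observations, apparently y = x**2:
points = [(0, 0), (1, 1), (2, 4), (3, 9)]

def obvious(x):
    return x ** 2

def rival(x, c):
    # Add c times a "bump" that vanishes at every observed x, so the
    # rival agrees with all the data no matter what c is. One rival
    # per value of c: infinitely many general conclusions.
    bump = 1
    for xi, _ in points:
        bump *= (x - xi)
    return x ** 2 + c * bump

assert all(obvious(x) == y for x, y in points)
for c in (1, -5, 1000):
    assert all(rival(x, c) == y for x, y in points)
    # Yet the rivals disagree wildly about unobserved inputs:
    print(c, rival(4, c))
```

Every value of c yields a theory that matches the data perfectly, so consistency with the data alone cannot single out y = x²; most members of the family are dumb, but logic doesn't care.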

So now a lot of people are thinking: induce whichever one isn't dumb. Not the dumb ones. That's how you pick.

Well, OK, and how do you decide what's dumb? That takes thinking. So in order to do induction (as it's just been redefined), in one of the steps, you have to think. That means we don't think by induction. Thinking is a prerequisite for induction (as just redefined), so induction can't be part of thinking.

What happens here is the entirety of non-inductivist epistemology is inserted as one of the steps of induction and is the only reason it works. All the induction stuff is unnecessary and unhelpful. Pick good ideas instead of dumb ones? We could have figured that out without induction, it's not really helping.

Some people will persevere. They will claim that it's OBVIOUS which ideas are dumb or not – no thinking required. What does that mean? It means they can figure it out in under 3 seconds. This is silly. Under 3 seconds of thinking is still thinking.

Do you see what I mean about there are so many things wrong with induction it's hard to figure out where to start? And it's hard to go through them in an orderly progression because you start talking about something and there's two more things wrong in the middle. And here I am on this digression because most defenses of induction – seriously this is the standard among non-professionals – involve a denial of logic.

So backing up, supposedly induction helps us make theories. How? Which ones? By what steps do we do it? No answers. And how am I supposed to prove a negative? How do I write an essay saying "induction has no answers"? People will say I'm ignorant and if only I read the right book I'd see the answer. People will say that just because we don't know the answer doesn't mean there isn't one. (And remember that refutation of induction I mentioned up top? Remember Popper's arguments that induction is impossible? They won't have read any of that, let alone refuted it.)

And I haven't even mentioned some of the severe flaws in induction. Induction as originally intended – the idea is still around, though it varies, and some people don't do this or aren't attached to it – meant you actually read the book of nature. You get rid of all your prejudices and biases, empty your mind, and then you read the answers straight FROM the observation data. Sound like a bad joke? Well, OK, but it's an actual method of how to do induction. It has instructions and steps you could follow, rather than evasion. If you think it's a bad joke, how much better is it to replace those concrete steps with vagueness and evasion?

Many more subtle versions of this way of thinking are still popular today. The idea of emptying your mind and then surely seeing the truth isn't so popular anymore. But the idea that data can hint or lead or point is still popular. It is completely false. Observation data is inactive and passive. Further, there's so much of it. Human thinking is always selective and active. You decide which data to focus on, which ways to approach the issue, what issues to care about, and so on. Data has to be interpreted, by you, and then it is your interpretations, not the data itself, which may give you hints or leads. To the extent data seems to guide you, it's always because you added guidance into the data first. It isn't there in the raw data.

Popper was giving a lecture and at the start he said, "Observe!" People said, "Observe what?" There is no such thing as emptying your mind and just observing and being guided by the data. First you must think, first you must have ideas about what you're looking for. You need interests, problems, expectations, ideas. Then you can observe and look for relevant data.

The idea that we learn FROM observation is flawed in another way. It's not just that thinking comes first (which btw again means we can't think by induction since we have to think BEFORE we have useful data). It also misstates the role of data in thinking. Observations can contradict things (via arguments, not actually directly). They can rule things out. If the role of data is to rule things out, then whatever positive ideas we have we didn't learn from the data. What we learned from the data, in any sense, is which things to reject, not which to accept.

Final point. Imagine a graph with a bunch of dots on it. Those are data points. And imagine a line connecting the dots would be a theory that explained them. This is a metaphor. Say there are a hundred points. How many ways can you draw a line connecting them? Answer: infinitely many. If you don't get that, think about it. You could take a detour anywhere on the coordinate plane between any two connections.

So we have this graph and we're connecting the dots. Induction says: connect the dots and what you get is supported, it's a good theory. How do I connect them? It doesn't say. How do people do it? They will draw a straight line, or something close to that, or make it so you get a picture of a cow, or whatever else seems intuitive or obvious to them. They will use common sense or something – and never figure out the details of how that works and whether they are philosophically defensible and so on.

People will just draw using unstated theories about which types of lines to prefer. That's not a method of thinking, it's a method of not thinking.

They will rationalize it. They may say they drew the most "simple" line and that's Occam's razor. When confronted with the fact that other people have different intuitions about what lines look simple, they will evade or attack those people. But they've forgotten that we're trying to explain how to think in the first place. If understanding Occam's razor and simplicity and stuff is a part of induction and thinking, then it has to be done without induction. So all this understanding and stuff has to come prior to induction. So really the conclusion is we don't think by induction, we have a whole method of thinking which works and is a prerequisite for induction. Induction wouldn't solve epistemology, it'd presuppose epistemology.

What we really know, from the graph with the data points, is that all lines which don't go through every point are wrong. We rule out a lot. (Yes, there's always the possibility of our data having errors. That's a big topic I'm not going to go into. Regardless, the possibility of data errors does not help induction's case!)

And what about the many lines which aren't ruled out by the data? That's where philosophy comes in! We don't and can't learn everything from the data. Data is useful but isn't the answer. We always have to think and do philosophy to learn. We need criticisms. Yes, lots of those lines are "dumb". There are things wrong with them. We can use criticism to rule them out.

And then people will start telling me how inconvenient and roundabout that is. But it's the only way that works. And it's not inconvenient. Since it's the only way that works, it's what you do when you think successfully. Do you find thinking inconvenient? No? Then apparently you can do critical thinking in a convenient, intuitive, fast way.

At least you can do critical thinking when you're not irrationally defending "induction" because in your mind it has authority.

EDIT: Some more problems with induction I didn't mention:

"the future is like the past" principle. or "the future will resemble the past". or other similar formulations. the problem is the future always resembles the past in some ways and not others. which are which? induction doesn't tell you which are which. instead it implicitly smuggles in thinking as a prerequisite to figure that out. (so it presupposes, rather than explains, thinking).

Induction is supposed to be a solution to epistemology but doesn't address Paley's problem.

The issue of how we think about abstract non-empirical ideas is not addressed by induction.
