Rationally Resolving Conflicts of Ideas

I was planning to write an essay explaining the method of rationally resolving conflicts and always acting on a single idea with no outstanding criticisms. It would follow up on my essay Epistemology Without Weights and the Mistake Objectivism and Critical Rationalism Both Made, where I mentioned the method but didn't explain it.

I knew I'd already written a number of explanations on the topic, so I decided to reread them as preparation. While rereading them, I decided that the topic is hard and that it'd be very hard to write a single essay good enough for someone to understand it. Maybe if they already had a lot of relevant background knowledge, like knowing Popper, Deutsch or TCS, one essay could work OK. But for an Objectivist audience, or most audiences, I think it'd be really hard.

So I had a different idea I think will work better: gather together multiple essays. This lets people learn about the subject from a bunch of different angles. I think this way will be the most helpful to someone who is interested in understanding this philosophy.

I chose each link below selectively. I reread all of them, as well as other things that I decided not to include. It may look like a lot, but I don't think you should expect an important new idea in epistemology to be really easy and short to learn. I've put the links in the order I recommend reading them, and included some explanations below.

Instead of one perfect essay – which is impossible – I present some variations on a theme.

Update 2017: Buy my Yes or No Philosophy to learn a ton more about this stuff. It has over 6 hours of video and 75 pages of writing. See also this free essay giving a short argument for it.

Update Oct 2016: Read my new Rejecting Gradations of Certainty.

Popper's critical preferences idea is incorrect. It's similar to standard epistemology, but better; it still shares some incorrectness with rival epistemologies. My criticisms of it apply, with minor modifications, to any other standard epistemology (including Objectivism). I explained a related criticism of Objectivism in my prior essay.

Critical Preferences
Critical Preferences and Strong Arguments

The next one helps clarify a relevant epistemology point:

Corroboration

Regress problems are a major issue in epistemology. Understanding the method of rationally resolving conflicts between ideas to get a single idea with no outstanding criticism helps deal with regresses.

Regress Problems

Confused about anything? Maybe these summary pieces will help:

Conflict, Criticism, Learning, Reason
All Problems are Soluble
We Can Always Act on Non-Criticized Ideas

This next piece clarifies an important point:

Criticism is Contextual

Coercion is an important idea to understand. It comes from Taking Children Seriously (TCS), the Popperian educational and parenting philosophy by David Deutsch. TCS's concept of "coercion" is somewhat different from the dictionary's; keep in mind that it's our own terminology. TCS also has a concept of a "common preference" (CP). A CP is any way of resolving a problem between people which they all prefer. It is not a compromise; it's only a CP if everyone fully prefers it. The idea of a CP is that it's a preference which everyone shares in common, rather than a point of disagreement.

CPs are the only way to solve problems. And any non-coercive solution is a CP. CPs turn out to be equivalent to non-coercion. One of my innovations is to understand that these concepts can be extended. It's not just about conflicts between people. It's really about conflicts between ideas, including ideas within the same mind. Thus coercion and CPs are both major ideas in epistemology.

TCS's "most distinctive feature is the idea that it is both possible and desirable to bring up children entirely without doing things to them against their will, or making them do things against their will, and that they are entitled to the same rights, respect and control over their lives as adults." In other words, achieving common preferences, rather than coercion, is possible and desirable.

Don't understand what I'm talking about? Don't worry. Explanations follow:

Taking Children Seriously
Coercion

The next essay explains the method of creating a single idea with no outstanding criticisms to solve problems, and how doing that is always possible and avoids coercion.

Avoiding Coercion
Avoiding Coercion Clarification

This email clarifies some important points about two different types of problems (I call them "human" and "abstract"). It also provides some historical context by commenting on a 2001 David Deutsch email.

Human Problems and Abstract Problems

The next two help clarify a couple things:

Multiple Incompatible Unrefuted Conjectures
Handling Information Overload

Now that you know what coercion is, here's an early explanation of the topic:

Coercion and Critical Preferences

This is an earlier piece covering some of the same ideas in a different way:

Resolving Conflicts of Interest

These pieces have some general introductory overview about how I approach philosophy. They will help put things in context:

Think
Philosophy: What For?

Update: This new piece (July 2017) talks about equivocations and criticizes the evidential continuum: Don't Equivocate

Want to understand more?

Read these essays and dialogs. Read Fallible Ideas. Join my discussion group and actually ask questions.


Accepting vs. Preferring Theories – Reply to David Deutsch

David Deutsch has some misconceptions about epistemology. I explained the issue on Twitter.

I've reproduced the important part below. Quotes are DD, regular text is me.

There's no such thing as 'acceptance' of a theory into the realm of science. Theories are conjectures and remain so. (Popper, Miller.)

We don't accept theories "into the realm of science", we tentatively accept them as fallible, conjectural, non-refuted solutions to problems (in contexts).

But there's no such thing as rejection either. Critical preference (Popper) refers to the state of a debate—often complex, inconsistent, and transient.

Some of them [theories] are preferred (for some purposes) because they seem to have survived criticism that their rivals haven't. That's not the same as having been accepted—even tentatively. I use quantum theory to understand the world, yet am sure it's false.

Tentatively accepting an idea (for a problem context) doesn't mean accepting it as true, so "sure it's false" doesn't contradict acceptance. Acceptance means deciding/evaluating it's non-refuted, rivals are refuted, and you will now act/believe/etc (pending reason to reconsider).

Acceptance deals with the decision point where you move past evaluating the theory and reach a conclusion (for now, tentatively). You don't consider things forever; sometimes you make judgments and move on to thinking about other things. Of course it's fluid and we often revisit.

Acceptance is a clearer word than preference for up-or-down, yes-or-no decisions. Preference often means believing X is better than Y, rather than judging X to have zero flaws (that you know of) and judging Y to be decisively flawed, no good at all (a variant of Y could of course still work).

Acceptance makes sense as a contrast against (tentative) rejection. Preference makes more sense if you think you have a bunch of ideas which you evaluate as having different degrees of goodness, and you prefer the one that currently has the highest score/support/justification/authority.
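To make the contrast concrete, here's a toy Python sketch (my own illustration with made-up names, not anything from Popper or DD) of yes-or-no acceptance versus degree-based preference:

```python
# Toy model contrasting yes-or-no acceptance with degree-based preference.
# Everything here is a hypothetical illustration.

from dataclasses import dataclass, field

@dataclass
class Idea:
    name: str
    criticisms: list = field(default_factory=list)  # outstanding criticisms

    def refuted(self):
        # Yes-or-no: an idea is refuted if any criticism of it is outstanding.
        return len(self.criticisms) > 0

def accept(ideas):
    """Accept an idea only if it's non-refuted and every rival is refuted;
    acceptance is tentative, pending reasons to reconsider."""
    survivors = [i for i in ideas if not i.refuted()]
    if len(survivors) == 1:
        return survivors[0]
    return None  # zero or multiple survivors: the conflict isn't resolved yet

def prefer_by_score(scores):
    """Degree-based preference (the approach I'm criticizing): pick whichever
    idea currently has the highest score/support/justification."""
    return max(scores, key=scores.get)

x = Idea("X")                    # no outstanding criticisms
y = Idea("Y", ["fails test T"])  # decisively criticized
print(accept([x, y]).name)       # X: accepted, yes-or-no
print(prefer_by_score({"X": 0.9, "Y": 0.7}))  # X, but merely scored above Y
```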


Update: DD responded, sorta:

You are blocked from following @DavidDeutschOxf and viewing @DavidDeutschOxf's Tweets.


Update: April 2019:

DD twitter blocked Alan, maybe for this blog post critical of LT:

https://conjecturesandrefutations.com/2019/03/16/lulie-tanett-vs-critical-rationalism/

DD twitter blocked Justin, maybe for this tweet critical of LT:

https://twitter.com/j_mallone/status/1107349577538158592



Critical Rationalism Epistemology Explanations

I discussed epistemology in a recent email:

I really enjoyed David Deutsch's explanation of Popper's epistemology and since reading Fabric of Reality I've read quite a bit of Popper. I've become convinced that Deutsch's explanation of Popper is correct, but I can also see why few people come away from Popper understanding him correctly. I believe Deutsch interprets Popper in a way that is much easier to understand.

Yes, I agree. DD refined and streamlined Critical Rationalism, and he's a better writer than Popper was. Popper made the huge breakthrough in the field and wrote a lot of good material about it, but there's still more work to do before most people get it.

Plus, I think he actually adds some ideas to Popper that matter and make it less misleading. Popper was himself struggling to understand his own theories, so it's understandable that he struggled to explain some parts of them.

I agree. I don't blame Popper for this, since he had very original and important ideas. He did more than enough!

(For example, it was problematic to refer to good theories as 'improbable' rather than 'hard to vary.' In context, I feel Popper meant the same thing, but the words he chose were problematic for conveying the meaning to others.)

So I've been wondering if it's possible to boil Popper's epistemology (with additions and interpretations from Deutsch) down to a few basic principles that seem 'self evident' and then to draw necessary corollaries. If this could be done, it would make Popper's epistemology much easier to understand.

Here is what I've come up with so far. (I'm looking for feedback from others familiar with Popper's epistemology, as interpreted and adjusted by Deutsch, to point out where I got it wrong or am missing things.)

Criteria for a Good Explanation:

1. We should prefer theories that are explanations over those that are not.

This is an approximation.

The point of an idea is to solve a problem (or multiple problems). We should prefer ideas which solve problems.

Many interesting problems require explanations to solve them, but not all. Whether we want an explanation depends on the problem being addressed.

In general, we want to understand things, not just be told answers to trust on authority. So we need explanations of how and why the answers will work, that way we can think for ourselves, recognize what sort of situations would be an exception, and potentially fix errors or make improvements.

But some problems don't need explanations. I might ask my friend, who is good at cooking, "How long should I boil an egg?" and just want to hear a number of minutes without any explanation. Finding out the number of minutes solves my cooking problem. I didn't want to try to understand how cooking eggs works, and I didn't want to debate the matter or check my friend's ideas for errors; I just wanted it to come out decently. It can be reasonable to prioritize which issues I investigate more deeply and which I don't.

2. We should prefer explanations that are hard to vary over ones that can easily be adjusted to fit the facts because a theory that can be easily adjusted to fit any facts explains every possible world and thus explains nothing in the actual world.

Hard to vary given what constraints?

Any idea is easy to vary if there are no constraints. You can vary it to literally any other idea, arbitrarily, in one step.

The standard constraint on varying an idea is that it still solve (most of) the same problems as before. To improve an idea, we want to make it solve more and better problems than before with little or no downside to the changes.

The problems ideas solve aren't just things like "explain the motion of balls" or "help me organize my family so we don't fight". Another important type of problem is understanding how ideas fit together with other ideas. Our knowledge has tons of connections where we understand ideas (often from different fields) to be compatible, and we understand how and why they are compatible. Fitting our knowledge together into a unified picture is an important problem.

The more our knowledge is constrained by connections to problems and other ideas, the more highly adapted it is to that problem situation, and therefore the harder it is to vary while keeping the same or greater level of adaptation. The more ideas are connected to other problems and ideas, the less wiggle room there is to make arbitrary changes without breaking anything.

Fundamentally, "hard to vary" just means "is knowledge". Knowledge in the CR view is adapted information. The more adapted information is, the more chance a random change will make it worse instead of better (worse and better here are relative to the problem situation).

There are many ways to look at knowledge that are pretty equivalent. Some ways are: ideas adapted to a problem situation, ideas that are hard to vary, non-arbitrary ideas, ideas that break symmetries (that give you a way to differentiate things, prefer some over others, evaluate some as better than others, etc. You can imagine that, by default, there are tons of ideas and they all look kinda equally good. And when two ideas disagree with each other, by default that is a symmetric situation: either one could be mistaken and we can't take sides. Knowledge lets us take sides; it helps us break the symmetry of "X contradicts Y, therefore also Y contradicts X" and helps us differentiate ideas so they don't all look the same to us.)

3. A theory (or explanation) can only be rejected by the existence of a better explanatory theory.

Ideas should be rejected when they are refuted. A refutation is an explanation of how/why the idea will not solve the problem it was trying to solve. (Sometimes an idea is proposed as a solution to multiple different problems. In that case, it may be refuted as a solution to some problems while not being refuted as a solution for others. In this way, criticism and refutation are contextual rather than universal.)

You don't need a better idea in order to decide that an idea won't work – that it fails to solve the problem you thought it solved. If it simply won't work, it's no good, whether you have a better idea or not.
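Here's a tiny toy model (my own illustrative sketch with hypothetical entries) of the point that refutation is relative to an idea-problem pair rather than universal:

```python
# Toy sketch: refutation is relative to (idea, problem) pairs, not universal.
# The entries below are hypothetical examples.

refutations = {
    # A criticism explains why an idea fails to solve a particular problem.
    ("Newtonian mechanics", "predict Mercury's perihelion"):
        "measured precession disagrees with the prediction",
}

def refuted_for(idea, problem):
    """An idea can be refuted as a solution to one problem while remaining
    non-refuted as a solution to another."""
    return (idea, problem) in refutations

print(refuted_for("Newtonian mechanics", "predict Mercury's perihelion"))  # True
print(refuted_for("Newtonian mechanics", "design a bridge"))               # False
```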

These are fairly basic and really do seem 'self evident.' But are they complete? What did I miss?

I then added a number of corollaries that come out of the principles to explain the implications.

1. We should prefer theories that are explanations over those that are not.
a. Corollary 1-1: We should prefer theories that explain more over those that explain less. In other words, we should prefer theories that have fewer problems (things it can’t explain) over ones that have more problems.

Don't judge ideas on quantity of explanation. Quality is more important. Does it solve problems we care about? Which problems are important to solve? Which issues are important to explain and which aren't?

Also, we never need to prefer one idea over another when they are compatible. We can have both.

When two ideas contradict each other, at least one is false. We can't determine which one is false by looking at their positive virtues (how wonderful they are, how useful they are, how much they explain). Instead, we have to deal with contradictions by figuring out that an idea is actually wrong; we have to look at things critically.

b. Corollary 1-2: We should prefer actual explanations over pseudo-explanations (particularly explanation spoilers) disguised as explanations.
c. Corollary 1-3: If the explanatory power of a theory comes by referencing another theory, then we prefer the other theory because it’s the one that actually explains things.
2. We should prefer explanations that are hard to vary over ones that can easily be adjusted to fit the facts because a theory that can be easily adjusted to fit any facts explains every possible world and thus explains nothing in the actual world.
a. Corollary 2-1: We should prefer explanations that have survived the strongest criticisms or tests we have currently been able to devise.

Criticisms don't have strengths. A criticism either explains why an idea fails to solve a problem, or it doesn't.

See: https://yesornophilosophy.com and http://curi.us/1595-rationally-resolving-conflicts-of-ideas and especially http://curi.us/1917-rejecting-gradations-of-certainty

Popper and DD both got this wrong, despite DD's brilliant criticism of weighing ideas in BoI. The idea of arguments having strengths is deeply ingrained in our culture's common sense.

b. Corollary 2-2: We should prefer explanations that are consistent with other good explanations (that makes it harder to vary), unless it violates the first principle.
3. A theory (or explanation) can only be rejected by the existence of a better explanatory theory.
a. Corollary 3-1: We should prefer theories (or explanations) that suggest tests that the previously best explanation can’t pass but the new one can. (This is called a Critical Test.)
b. Corollary 3-2: It is difficult to devise a Critical Test of a theory without first conjecturing a better theory.
c. Corollary 3-3: A theory that fails a test due to a problem in a theory and a theory that fails a test due to some other factor (say experimental error) are often indistinguishable unless you have a better theory to explain which is which.

Yes, after a major existing idea fails an experimental test we generally need some explanatory knowledge to understand what's going on, and what the consequences are, and what we should do next.



Errors Merit Post-Mortems

After people make errors, they should do post-mortems. How did that error happen? What caused it? What thinking processes were used and how did they fail? Try to ask “Why?” several times to get to deeper issues than your initial answers.

And then, especially: what other errors would that same cause also cause? This gives info about whether or not you need to make changes going forward. Is it a one-time error or part of a pattern?

Effective post-mortems are something people generally don’t want to do. What causes errors? Frequently it’s irrationality, including dishonesty.

Lots of things merit post-mortems other than losing a debate. If you have an inconclusive debate, why didn’t you do better? No doubt there were errors in your communication and ideas. If you ask a question, why were you ignorant of the answer? What happened there? Maybe you made a mistake. That should be considered. After you ask a question and get an answer, you should post-mortem whether your understanding is now adequate. People usually don’t discuss thoroughly enough to effectively learn the answers to their questions.

Regarding questions: If you were ignorant of something because you hadn’t yet gotten around to learning about it, and you knew the limits of your knowledge, that can be a quick and easy post-mortem. That’s fine, but you should check if that’s what happened or it’s something else that merits more attention. Another common, quick post-mortem for a question is, “I asked because the other person was unclear, not because of my own ignorance.” But many questions relate to your own confusions and what went wrong should be post-mortemed. And if you hadn’t learned something yet, you should consider if you are organizing your learning priorities in a reasonable way. Why learn this now? Why not earlier or later? Do you have considered reasoning about that?

What if you try to post-mortem something and you don’t know what went wrong? If your post-mortem fails, that is itself something to post-mortem! Consider what you’ve done to learn how to post-mortem effectively in general. Have you studied techniques and practiced them? Did you start with easier cases and succeed many times? Do you have a history of successes and failures which you can compare this current failure to? Do you know what your success rate at post-mortems is in general, on average? And you should consider if you put enough effort into this particular post-mortem or just gave up fast.

You may wonder: We make errors all the time. Should we post-mortem all of them? That sounds like it’d take too much time and effort.

First, you can only post-mortem known errors. You have to find out something is an error. You can’t post-mortem it as an error just because people 500 years from now will know better. This limits the issues to be addressed.

Second, an irrelevant “error” is not an error. Suppose I’m moving to a new home. I’m measuring to see where things will fit. I measure my couch and the measurement is accurate to within a half inch. I measure where I want to put it and find there are 5 inches to spare (if it was really close, I’d re-measure). The fact that my measurement is an eighth of an inch off is not an error. The general principle is that errors are reasons a solution to a problem won’t work. The small measurement “error” doesn’t prevent me from succeeding at the problem I’m working on, so it’s not an error. It would be an error in a different context, like a science experiment that relies on much more accurate measurements, but I’m not doing that.
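Here's the couch example as a toy calculation (a Python sketch with made-up numbers chosen to match the story above):

```python
# Toy sketch of the couch example: a measurement inaccuracy only counts as an
# error if it could change the outcome for the problem at hand.
# The specific numbers below are hypothetical.

def fits_with_margin(space_in, item_in, tolerance_in):
    """Return 'fits', 're-measure', or 'does not fit' for measurements
    that are only accurate to within +/- tolerance_in inches."""
    worst_case_gap = (space_in - tolerance_in) - (item_in + tolerance_in)
    best_case_gap = (space_in + tolerance_in) - (item_in - tolerance_in)
    if worst_case_gap >= 0:
        return "fits"          # inaccuracy can't change the answer: not an error
    if best_case_gap < 0:
        return "does not fit"
    return "re-measure"        # the answer lies within the error bars: now it matters

# 5 inches to spare with half-inch accuracy: an eighth-inch inaccuracy is irrelevant.
print(fits_with_margin(space_in=89.0, item_in=84.0, tolerance_in=0.5))  # fits
```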

Third, yes you should try to post-mortem all your errors that get past the previous two points. If you find this overwhelming, there are two things to do:

  1. Do easier stuff so you make fewer errors. Get your error rate under control. There’s no benefit to doing stuff that’s full of errors – it won’t work. Correctness works better both for immediate practical benefits (you get more stuff done that is actually good or effective instead of broken) and for learning better so you can do better in the future.
  2. Learn and write down recurring patterns/themes/concepts and reuse them instead of trying to work out every post-mortem from scratch. If you develop good ideas that can help with multiple post-mortems, that’ll speed it up a ton. Reusing ideas is a major part of Paths Forward and is crucial to all of life.
