Rationally Resolving Conflicts of Ideas

I was planning to write an essay explaining the method of rationally resolving conflicts and always acting on a single idea with no outstanding criticisms. It would follow up on my essay Epistemology Without Weights and the Mistake Objectivism and Critical Rationalism Both Made, where I mentioned the method but didn't explain it.

I knew I'd already written a number of explanations on the topic, so I reread them as preparation. While reading them I decided that the topic is hard, and that it'd be very difficult to write a single essay good enough for someone to understand it. Maybe if they already had a lot of relevant background knowledge, like knowing Popper, Deutsch or TCS, one essay could work OK. But for an Objectivist audience, or most audiences, I think it'd be really hard.

So I had a different idea I think will work better: gather together multiple essays. This lets people learn about the subject from a bunch of different angles. I think this way will be the most helpful to someone who is interested in understanding this philosophy.

Each link below was selected carefully. I reread all of them, as well as other things that I decided not to include. It may look like a lot, but I don't think you should expect an important new idea in epistemology to be really easy and short to learn. I've put the links in the order I recommend reading them, and included some explanations below.

Instead of one perfect essay – which is impossible – I present some variations on a theme.

Update 2017: Buy my Yes or No Philosophy to learn a ton more about this stuff. It has over 6 hours of video and 75 pages of writing. See also this free essay giving a short argument for it.

Update Oct 2016: Read my new Rejecting Gradations of Certainty.

Popper's critical preferences idea is incorrect. It's similar to standard epistemology, though better; it still shares some incorrectness with rival epistemologies. My criticisms of it can be made of any other standard epistemology (including Objectivism) with minor modifications. I explained a related criticism of Objectivism in my prior essay.

Critical Preferences
Critical Preferences and Strong Arguments

The next one helps clarify a relevant epistemology point:

Corroboration

Regress problems are a major issue in epistemology. Understanding the method of rationally resolving conflicts between ideas to get a single idea with no outstanding criticism helps deal with regresses.

Regress Problems

Confused about anything? Maybe these summary pieces will help:

Conflict, Criticism, Learning, Reason
All Problems are Soluble
We Can Always Act on Non-Criticized Ideas

This next piece clarifies an important point:

Criticism is Contextual

Coercion is an important idea to understand. It comes from Taking Children Seriously (TCS), the Popperian educational and parenting philosophy by David Deutsch. TCS's concept of "coercion" is somewhat different from the dictionary's; keep in mind that it's our own terminology. TCS also has a concept of a "common preference" (CP). A CP is any way of resolving a problem between people which they all prefer. It is not a compromise; it's only a CP if everyone fully prefers it. The idea of a CP is that it's a preference which everyone shares in common, rather than one they disagree about.

CPs are the only way to solve problems. And any non-coercive solution is a CP. CPs turn out to be equivalent to non-coercion. One of my innovations is to understand that these concepts can be extended. It's not just about conflicts between people. It's really about conflicts between ideas, including ideas within the same mind. Thus coercion and CPs are both major ideas in epistemology.

TCS's "most distinctive feature is the idea that it is both possible and desirable to bring up children entirely without doing things to them against their will, or making them do things against their will, and that they are entitled to the same rights, respect and control over their lives as adults." In other words, achieving common preferences, rather than coercion, is possible and desirable.

Don't understand what I'm talking about? Don't worry. Explanations follow:

Taking Children Seriously
Coercion

The next essay explains the method of creating a single idea with no outstanding criticisms to solve problems, and how that is always possible and avoids coercion.

Avoiding Coercion
Avoiding Coercion Clarification

This email clarifies some important points about two different types of problems (I call them "human" and "abstract"). It also provides some historical context by commenting on a 2001 David Deutsch email.

Human Problems and Abstract Problems

The next two help clarify a couple things:

Multiple Incompatible Unrefuted Conjectures
Handling Information Overload

Now that you know what coercion is, here's an early explanation of the topic:

Coercion and Critical Preferences

This is an earlier piece covering some of the same ideas in a different way:

Resolving Conflicts of Interest

These pieces give a general introductory overview of how I approach philosophy. They will help put things in context:

Think
Philosophy: What For?

Update: This new piece (July 2017) talks about equivocations and criticizes the evidential continuum: Don't Equivocate

Want to understand more?

Read these essays and dialogs. Read Fallible Ideas. Join my discussion group and actually ask questions.


Regress Problems

Written April 2011 for the Beginning of Infinity email list:

Infinite regresses are nasty problems for epistemologies.

All justificationist epistemologies have an infinite regress.

That means they are false. They don't work. End of story.

There are options of course. Don't want a regress? No problem. Have an arbitrary foundation. Have an unjustified proposition. Have a circular argument. Or have something else even sillier.

The regress goes like this, and the details of the justification don't matter.

If you want to justify a theory, T0, you have to justify it with another theory, T1. Then T1 needs justifying by T2. Which needs justifying by T3. Forever. And if T25 turns out wrong, then T24 loses its justification. And with T24 unjustified, T23 loses its justification. And it cascades all the way back to the start.

I'll give one more example. Consider probabilistic justification. You assign T0 a probability, say 99.999%. Never mind how or why; the probability people aren't big on explanations like that. Just do your best. It doesn't matter. Moving on, what we should wonder is whether that 99.999% figure is correct. If it's not correct then it could be anything, such as 90% or 1% or whatever. So it better be correct. So we better justify that figure. How? Simple. We'll use our whim to assign the probability estimate itself a probability of 99.99999%. OK! Now we're getting somewhere. I put a lot of 9s so we're almost certain to be correct! Except, what if I had that figure wrong? If it's wrong it could be anything, such as 2% or 0.0001%. Uh oh. I better justify my second probability estimate. How? Well, we're trying to defend this probabilistic justification method. Let's not give up yet and do something totally different; instead we'll give it another probability. How about 80%? OK! Next I ask: is that 80% figure correct? If it's not correct, the probability could be anything, such as 5%. So we better justify it. So it goes on and on forever.

Now there are two problems. First, it goes on forever, and you can't ever stop; you've got an infinite regress. Second, suppose you stopped after some very large but finite number of steps. Then the probability that the first theory is correct is arbitrarily small. Remember that at each step we didn't even have a guarantee, only a high probability. And if you roll the dice a lot of times, even with very good odds, eventually you lose. And you only have to lose once for the whole thing to fail.
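
To make the compounding concrete, here's a tiny arithmetic sketch. The first three figures are the ones from the example above; the step count and the extra probabilities are made up purely for illustration:

```python
# Toy illustration: multiply the probabilities that each step in a
# justification chain is correct. Each step's odds are high, but the
# product shrinks toward zero as the chain grows.
chain = [0.99999, 0.9999999, 0.80]  # the figures from the example above
chain += [0.95] * 997               # assume 997 more fairly confident steps

product = 1.0
for p in chain:
    product *= p

print(f"probability the whole chain holds: {product:.2e}")
# Prints roughly 5e-23. "If you roll the dice a lot of times, even with
# very good odds, eventually you lose."
```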

OK, so regresses are a nasty problem. They totally ruin all justificationist epistemologies. That's basically every epistemology anyone cares about except skepticism and Popperian epistemology. And forget about skepticism; that's more of an anti-epistemology than an epistemology: skepticism consists of giving up on knowledge.

So how could Popperian epistemology deal with regresses?

(I've improved on Popper some.)

Regresses all go away if we drop justification. Don't justify anything, ever. Simple.

But justification had a purpose.

The purpose of justification is to sort out good ideas from bad ideas. How do we know which ideas are any good? Which should we believe are true? Which should we act on?

BTW that's the same general problem that induction was trying to address. And induction is false. So that's another reason we need a solution to this issue.

The method of addressing this issue has several steps, so try to follow along.

Step 1) You can suggest any ideas you want. There are no rules; suggest anything you have the slightest suspicion might be useful. The source of the ideas, and the method of coming up with them, doesn't matter at all. This part is easy.

Step 2) You can criticize any idea you want. Again, there are no rules. If you don't understand it, that's a criticism -- it should have been easier to understand. If you find it confusing, that's a criticism -- it should have been clearer. If you think you see something wrong with it, that's a criticism -- it shouldn't have been wrong in that way, *or* it should have included an explanation so you wouldn't make a mistaken criticism. This step is easy too.

Step 3) All criticized ideas are rejected. They're flawed. They're not good enough. Let's do better. This is easy too. Only the *exact* ideas criticized are rejected. Any idea with at least one difference is deemed a new idea. It's OK to suggest new ideas which are similar to old ideas (in fact it's a good idea: when you find something wrong with an idea you should try to work out a way to change it so it won't have that flaw anymore).

Step 4) If we have exactly one idea remaining to address some problem or question, and no one wants to revisit the previous steps at this time, then we're done for now (you can always change your mind and go back to the previous steps later if you want to). Use that idea. Why? Because it's the only one. It has no rivals, no known alternatives. It stands alone as the only non-refuted idea. We have sorted out the good ideas from the bad -- as best we know how -- and come to a definite answer, so use that answer. This step is easy too!

Step 5) What if the number of ideas left over is not exactly one? We'll divide that into two cases:

Case 1) What if we have two or more ideas? This one is easy. There is a particular criticism you can use to refute all the remaining theories. It's the same every time, so there's not much to remember. It goes like this: idea A ought to tell me why B and C and D are wrong. If it doesn't, it could be better! So that's a flaw. Bye bye, A. On to idea B: if B is so great, why hasn't it explained to me what's wrong with A, C and D? Sorry, B, you didn't answer all my questions; you're not good enough. Then we come to idea C and we complain that it should have been more help and it wasn't. And D is gone too, since it didn't settle the matter either. And that's it. Each idea should have settled the matter by giving us criticisms of all its rivals. They didn't. So they lose. So whenever there is a stalemate or a tie between two or more ideas, they all fail.

Case 2) What if we have zero ideas? This is crucial because case one always turns into this! The answer comes in two main parts. The first part is: think of more ideas. I know, I know, that sounds hard. What if you get stuck? But the second part makes it easier. And you can use the second part over and over and it keeps making it easier every time. So you just use the second part until it's easy enough, then you think of more ideas when you can. And that's all there is to it.

OK so the second part is this: be less ambitious. You might worry: but what about advanced science with its cutting edge breakthroughs? Well, this part is optional. If you can wait for an answer, don't do it. If there's no hurry, then work on the other steps more. Make more guesses and think of more criticisms and thus learn more and improve your knowledge. It might not be easy, but hey, the problem we were looking at is how to sort out good ideas from bad ideas. If you want to solve hard problems then it's not easy. Sorry. But you've got a method, just keep at it.

But if you have a decision to make then you need an answer now so you can act. In that case, if you actually want to reach a state of having exactly one theory which you can use now, the trick when you get stuck is to be less ambitious. How would that work in general terms? Basically, if human knowledge isn't good enough to give you an answer of a certain quality right now, then your choices are either to work on it more and not have an answer now, or to accept a lower quality answer. You can see why there isn't really any way around that. There's no magic way to always get a top quality answer now. If you want a cure for cancer, well, I can't tell you how to come up with one in the next five minutes, sorry.

This is a bit vague so far. How does lowering your standards address the problem? What you do is propose a new idea like this: "I need to do something, so I will do..." and then fill in whatever you want (idea A, idea B, some combination, whatever else).

This new idea is not refuted by any of the existing criticisms. So now you have one idea, it isn't refuted, and you might be done. If you're happy with it, great. But you might not be. Maybe you see something wrong with it, or you have another proposal. That's fine; just go back to the first three steps and do them more. Then you'll get to step 4 or 5 again.

What if we get back here? What do we do the second time? The third time? We simply get less ambitious each time. The harder a time we're having, the less we should expect. And so we can start criticizing any ideas that aim too high (while under too much time pressure to aim that high).
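
Here's a minimal sketch of the method in code. It's my own toy model, with hypothetical names, just to show the shape of steps 3 through 5; the real method is creative thought, not a mechanical procedure:

```python
# Toy model of the method above (a sketch, not a formal algorithm).

def resolve(ideas, criticisms, fallback_action):
    """Try to end up with exactly one non-refuted idea."""
    # Step 3: reject every idea that some criticism applies to.
    survivors = [i for i in ideas if not any(c(i) for c in criticisms)]

    # Step 4: exactly one survivor -> use it.
    if len(survivors) == 1:
        return survivors[0]

    # Step 5, case 1: with two or more survivors, each one failed to
    # refute its rivals, which refutes all of them. So we reach case 2.
    # Step 5, case 2: zero survivors -> be less ambitious. Propose a new
    # idea about what to do given the unresolved conflict. None of the
    # existing criticisms target this new idea, so it stands alone.
    return f"I need to do something, so I will {fallback_action}"

# Usage: criticisms are predicates saying whether an idea is refuted.
ideas = ["pizza for dinner", "skip dinner"]
criticisms = [
    lambda i: i == "pizza for dinner",  # e.g. "can't afford it"
    lambda i: i == "skip dinner",       # e.g. "I'll be too hungry"
]
print(resolve(ideas, criticisms, "cook pasta tonight and rethink later"))
```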

BTW it's explained on my website here, including an example:

http://fallibleideas.com/avoiding-coercion

Read that essay, keeping in mind what I've been saying, and hopefully everything will click. Just bear in mind that when it talks about cooperation between people, and disagreements between people, and coming up with solutions for people -- when it discusses ideas in two or more separate minds -- everything applies exactly the same if the two or more conflicting ideas are all in the same mind.

What if you get really stuck? Well why not do the first thing that pops into your head? You don't want to? Why not? Got a criticism of it? It's better than nothing, right? No? If it's not better than nothing, do nothing! You think it's silly or dumb? Well so what? If it's the best idea you have then it doesn't matter if it's dumb. You can't magically instantly become super smart. You have to use your best idea even if you'd like to have better ideas.

Now you may be wondering whether this approach is truth-seeking. It is, but it doesn't always find the truth immediately. If you want a resolution to a question immediately then its quality cannot exceed today's knowledge (plus whatever you can learn in the time allotted). It can't do better than the best that is known how to do. But as far as long term progress, the truth seeking came in those first three steps. You come up with ideas. You criticize those ideas. Thereby you eliminate flaws. Every time you find a mistake and point it out you are making progress towards the truth; you're learning. That's how we approach the truth: not by justifying but by identifying mistakes and learning better. This is evolution; it's the solution to Paley's problem; it's discussed in BoI and on my Fallible Ideas website. And it's not too hard to understand: improve stuff, keep at it, and you make progress. Mistake correcting -- criticism -- is a truth-seeking method. That's where the truth-seeking comes from.


Coercion and Critical Preferences

Written Aug 2008, addressing ideas of Karl Popper and David Miller about critical preferences:

You should eliminate rival theories until you have exactly one candidate theory and then act on that. We thus dodge any issues of comparing two still-standing theories using some sort of criterion.

And that's it. The problem is solved. When there is only one candidate theory the solution is dead easy. But this solution raises a new problem: how to deal with all the rival theories (in short order).

If you act on a theory while there are any active rivals, that is coercion, which is the cause of distress. You are, roughly, forsaking a part of yourself (without having dealt with it in a rational fashion first).

Often we don't know how to resolve a controversy between two theories promptly, perhaps not even in our lifetime. But that does not mean we are doomed to any coercion. We can adopt a *single theory* with *no active rivals* which says "I don't know whether A or B is better yet, but I do know that I need to choose what to do now, and thus I will do C for reason D." A and B talk about, say, chemistry, and don't contain the means to argue with this newly proposed theory -- they don't address the issue now being considered, namely what to do given the unsettled dispute between A and B -- so they are not relevant rival theories, and we end up with only one theory about what to do: this new one we just invented. And acting on this new theory clearly does not forsake A or B; it's not in conflict with them.

We might invent two new theories, one siding more with A, and one more with B, and thus have a new conflict to deal with. But then we have a *new problem* which does not depend on resolving the dispute between A and B. And we can do as many layers of reinterpretations and meta-theorizing like this as we want. Coercion is avoidable, and practical problems of action are soluble promptly.

If it really comes down to it, just formulate one new theory "I better just do C for reason D *and* all arguments to the contrary are nothing but attempts to sabotage my life because I only have 3 seconds left to act." That ought to put a damper on rival theories popping up -- it'd now have to include a reason it's not sabotage.

One could still think of a rival theory which says "I want to do E because of B, and this isn't sabotage b/c the A-based C will be disastrous and I'm trying to help," or whatever. There is no mechanical strategy for making choices or avoiding coercion. What I mean to illustrate is that we have plenty of powerful tools at our disposal. This process can go wrong, but there is plenty of hope and possibility for it to go right.

-----

BTW this does not only apply to resolving rival theories *for purpose of action* or *within oneself*. It also works for abstract theoretical disputes between different people.

Suppose I believe X and you believe Y, *and we are rational*. Think of something like MWI vs Copenhagen -- theories on that scale -- except something that we don't already know the answer to.

So we argue for a while, and it's clear that you can't answer some of my questions and criticisms, and I can't answer some of yours.

Most people would say "you haven't proven me wrong, and I see problems with your theory, so I am gonna stick with mine". That's called bias.

Some people might be tempted to analyze, objectively, which theory has more unanswered questions (weighted by importance of the question), and which theory has trouble with how many criticisms (weighted by amount of trouble and importance of criticism). And thus they'd try to figure out which theory is "better" (which doesn't imply it's true, or even that the other is false -- well, of course, strictly they are both false, but the truth might be a minor modification of the worse theory).

What I think they've done there is abandon the critical rationalist process and replace it with a misguided attempt to measure which theories are good.

What we should do is propose *new theories* like "the current state of the debate leaves open the question of X and Y, but we should be able to all agree on Z so for issue A we should do B, and for issue C we should do D, and that's something we can all agree on. We can further agree about what research ought to be done and is important to do to resolve questions about both X and Y." Thus we can make a new *single theory* that *everyone on both sides can agree to* which does not forsake X or Y. This is the *one rational view to take of the field*, unlike the traditional approach of people being in different and incommensurable camps. This view will leave them with *nothing to argue about* and *no disagreement*.

Of course, someone might say it's mistaken and propose another view of the same type. And so we could have an argument about that. But this new argument does not depend on your view in the X vs Y dispute. It's a new problem. Just like above. And if it gets stuck, we can make another meta-theory.

This approach I advocate follows the critical rationalist process through and through. It depends on constructing new theories (which are just guesses and may go wrong) and criticizing them. It never resorts to static criteria of betterness.


Criticism is Contextual

Criticism is contextual. The "same idea", as far as explicit content and the words it's expressed in go, can have a different status (criticized, or not) depending on the context, because the idea might successfully solve one problem but not solve some other problem. So it's criticized in its capacity to solve that second problem, but not the first. This kind of thing is ubiquitous.

Example: you want to drive somewhere using a GPS navigator. The routes it gives you are often not the shortest, best routes. You have a criticism of the GPS navigator.

Today, you want to drive somewhere. Should you use your GPS? Sure! Why not? Yes, you had a criticism of it, but that criticism was contextual. You criticized it in the context of wanting the shortest routes. But it's not criticized in the context of whether it will do a good job of getting you to today's destination (compared to the alternatives available). It could easily lead you to drive an extra hundred meters (maybe that's 0.1% more distance!) and still get you to your destination on time. Despite being flawed in some contexts, by some criteria, it could still get you there faster and more easily than you would have with manual navigation.

That there is a criticism of the GPS navigator system does not mean never to use it. Nor does it mean if you use it you're acting on refuted or criticized ideas. Criticism is contextual.


Rational People

I wrote this in Dec 2008.

Rational people are systems of ideas that can temporarily remove any one idea in the system without losing identity. We can remain functional without any one idea. This means we can update or replace it. And in fact we can often change a lot of ideas at once (how many depends in part on which).

To criticize one idea is not to criticize my rationality, or my ability to create knowledge, or my ability to make progress. It doesn't criticize what makes me human, nor anything permanent about me. So I have no reason to mind it. Either I will decide it is correct, and change (and if I don't understand how to change, then no one has reason to fault me for not changing yet), or decide it is incorrect and learn something from considering it.

The way ideas die in our place is that we change ourselves, while retaining our identity (i.e., we don't die), but the idea gets abandoned and does die.


Avoiding Coercion Clarification

In reply to my Avoiding Coercion essay, I got a question about the need for creativity and thinking of good questions as part of the process. In March 2012, for the Beginning of Infinity list, I clarified:

There is no simple method of creating questions. Questioning stuff is creative thinking, like criticizing, and so on.

The point of this method is that it's so trivially easy. Anything that relies on thinking of good questions is not reliable. We don't reliably have good ideas. We don't reliably notice criticisms of ideas we consider. We may not reach a conclusion on some tricky issue for days, or years, or ever.

If avoiding coercion required creative questions, imaginative critical thinking, and so on -- if a bunch of success at all that stuff was required -- then it wouldn't be available to everyone, all the time. It would fail sometimes. There's no way to always do a great job at those things, always get them right.

But one of the big important things is: we can always avoid coercion. It's not like always figuring out the answer to every math problem put in front of us. Every math problem we're given is soluble, and given infinite time/energy/attention/learning/etc we could figure it out. But the problems which threaten coercion aren't like that. They don't require infinite time/resources. We can deal with all of them, now, reliably, enough not to be coerced.

That is what the article is about: a simple method that we can follow reliably without making a big creative effort.


Now, the method doesn't 100% ignore creativity. You can use creative thought as part of it, and you should. But even if you do a really bad job of that, you reach success anyway (where success means non-coercion). If you have an off day solving math problems, maybe you don't solve any today. But an off day for creative thinking about your potentially coercive problems need not lead to any coercion.

The method doesn't require you to be thinking of good questions as you go along. If you do, great. If you don't, it works anyway. Which is necessary to reach the conclusion: all family coercion is avoidable, in practice, today, without being superman.

(I weakened the claim from dealing with all coercion because I don't want to comment on the situation where you get abducted by terrorists at gunpoint or other situations with force in them. That's hard. But all regular family situations are very manageable.)


Multiple Incompatible Unrefuted Conjectures

I wrote this in Jan 2011 for the Fabric of Reality email list.

I had written:
in my scheme of things being refuted is the only reason not to believe (tentatively accept) a theory,
And Peter D had replied:
That would leave you believing multiple incompatible theories, since there are multiple incompatible unrefuted conjectures.
This is a good issue to raise.

The outline of the answer is: you can and must always reach a state of having exactly one non-refuted theory/idea/conjecture (for any single issue). (Note: theory/idea/conjecture are basically interchangeable, see: http://fallibleideas.com/ideas )

You are correct that if it weren't for this then our epistemology would not work (pending a brilliant new idea). We have known that for a long time and solved the problem.

Around 20 years ago, David Deutsch was creating a new educational philosophy called Taking Children Seriously.

One of the core ideas of that philosophy is a concept he calls "coercion". Be careful b/c it is defined differently than in the dictionary. When clarity is desired, it can be called TCS-coercion.

TCS-coercion is the state of having multiple contradicting and unrefuted conjectures active in your mind, and acting or choosing, using one as against the others, while the conflict is still unresolved.

This issue has been important to David's views on epistemology and education for some 20 years. In that time, it has been addressed!

TCS-coercion is a crucial concept which is connected not only to epistemology and education, but also to psychology, to the issue of force and the issue of suffering. It is consequently connected to politics and to morality. I will not be surprised if connections to other fields are found too.

As an example of a possible major consequence of these ideas, we have conjectured that TCS-coercion A) always causes suffering B) is the only cause of suffering.

TCS-coercion is not an easy concept to understand. Many people interested in TCS have failed to grasp it, and there have been many conversations on the topic.

Another big idea, which is pretty easy to understand the gist of and may be interesting (but which is much harder to substantiate), is this: coercion-theory is basically the theory of disagreements (conflicts between theories) and how to resolve them. The rational, truth-seeking method for approaching disagreements is the same whether the ideas are in one mind or in two separate minds. Disagreement in one mind is TCS-coercion and disagreement in two minds is conflict between people, and in both cases the conflict should be resolved by the same truth-seeking methods that create harmony/cooperation.

The problem Peter brings up about multiple unrefuted and incompatible conjectures, and the issue of TCS-coercion, are closely related. They basically share a single answer to the question: how does one avoid being coerced?

Addressing it is not simple. My guess is you (Peter, and most of the audience) will not be able to understand the topic before you understand the more basic points of Popperian epistemology like induction being a fictitious process, justificationism being a mistake, the impossibility of positive support, that all observation is theory laden, fallibilism, the communication gap, that all learning is by C&R, and so on.

However if you are interested the best place to start, for purposes of getting straight to the meat of the issue and skipping the prerequisites, is by reading these:

http://fallibleideas.com/coercion
http://fallibleideas.com/avoiding-coercion

If you don't understand them, or think you see a flaw in them, and you wish to comment, please try to make your comments simple and narrow and aim to focus on one important thing at a time. I think if we talk about everything at once it will be very confusing.

Besides reading those pages, it would also help to read all the other pages on that website, all of my blog posts, and all of David Deutsch's posts that you can find in the FoR list archives. Plus various books.

If you think that's a lot of reading you are correct. But it helps one learn. I myself read every single one of David's TCS emails in addition to his FoR emails and everything else I could find. That was around a decade worth of old emails from before I joined the list. Reading them helped me to understand things better.

Anyone interested in learning more about these topics can also join the email list for it: http://groups.yahoo.com/group/fallible-ideas/


We Can Always Act on Non-Criticized Ideas

I originally wrote this for the Beginning of Infinity email list in Jan 2012.

Consider situations in the general form:

X disagrees with (conflicts with) Y about Z.

X and Y could be people. (Really: ideas in people.)

Or they could be ideas within one person.

One or both could be criticisms (explanations of mistakes, rather than positive ideas about what's good).

Z, by the way, might be more than one thing. X and Y can also be multi-part.


Let's consider a more specific example.

X is some idea, e.g. that I'll have pizza for dinner.

Y is a criticism of X, e.g. that I haven't got enough money to afford pizza.

So, what happens? I use an option that I have no criticism of. I get a dinner I can afford.



Now we'll add more detail to make it harder. This time, X also includes criticism of all non-X dinner plans, e.g. that they won't taste like pizza (and pizza tastes good, which is a good thing).


Now I can't simply choose some other dinner which I can afford, because I have a criticism of that.

To solve this, I could refute the second part of X and change my preferences, e.g. by figuring out that something else besides pizza is good too. Or I could acquire some more money. Or adjust my budget.


There's always many ways forward that I would potentially not have any criticism of.


What if I get stuck? I want pizza, because it's delicious, but I also don't want pizza, because I'm too poor. Whatever I do I have a criticism of it. I try to think of ideas like adjusting my budget, or eating something else, but I don't see how to make them work.


There is a simple, generalized approach. I don't have to think haphazardly and hope I find an answer.


All conflicts, as we've been discussing, always raise new problems. In particular:

X disagrees with (conflicts with) Y about Z.

If we don't solve this directly, it raises the problem:

Given X disagrees with (conflicts with) Y about Z, then what should I do?

This is a new problem. And it has several positive features:


1) This technique can be repeated unlimited times. If I use the technique once then get stuck again, I can use the technique again to get unstuck.

2) In general, any solutions we think of for this new problem will not be criticized by any of the criticisms we already had in mind. This makes it relatively easy to come up with a solution we don't have any criticism of. All those criticisms we were having a hard time with are not directly relevant.

3) Every application of this technique provides an *easier problem* than we had before. So we don't just get a new problem, but also an easier one. This, combined with the ability to use the technique repeatedly, lets us make our problem situation as easy as we like.
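
To make the wrapping move concrete, here's a tiny sketch. It's my own hypothetical framing in code, not a formal system; the point is just that each application poses a new, less ambitious problem which the old criticisms don't target:

```python
# Toy sketch: when every option for a problem is criticized, wrap the
# problem in a new, easier one instead of forcing a choice. Criticisms
# aimed at the old options don't automatically apply to the new problem.

def meta(problem: str) -> str:
    """Form the less ambitious follow-up problem from a stuck one."""
    return f"given that [{problem}] is unresolved, what should I do now?"

problem = "pizza (can't afford it) vs. other food (not pizza)"
for _ in range(2):  # pretend we got stuck twice
    problem = meta(problem)

print(problem)
# given that [given that [pizza (can't afford it) vs. other food (not
# pizza)] is unresolved, what should I do now?] is unresolved, what
# should I do now?
```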


Why do the problems get progressively easier? Because they are progressively less ambitious. They accept various things, for the moment, and ask what to do anyway, instead of trying to deal with them directly.

The new problems are also more targeted to dealing with the specific issue of finding a way to move forward. This additional focus, instead of just on figuring stuff out generally, makes it easier. It tends towards a minimal solution.

In the context of disagreements between persons, the problems this technique generates progressively tend towards less cooperation, which is easier. In the context of ideas within a person, it's basically the same thing but a little harder to understand.


So, that's why this is true (which I'd written previously):
Premise: there is an available option that one doesn't have a criticism of (and it can be figured out fast enough even under time pressure)
Because we can get past any sticking points, in a simple way, while also reducing our problem(s) to easier problem(s) as much as necessary. (It only works for the purpose of figuring out an option for how to proceed, not for resolving any conflict of ideas. But we only have time limits in the context of needing to proceed or act or choose.)


See also: http://fallibleideas.com/avoiding-coercion
