no philosophy equals big risk

learning productivity multipliers (such as philosophy) ASAP is most efficient by far. i think that's something worth aggressively optimizing.

for example, some people prioritize their career ahead of philosophy. so then they do all kinds of career stuff which they could have done twice as fast if they were better at philosophy and a few other skills. like they have to learn some skills for work, and they learn them slowly, whereas if they knew more about learning they could have learned it a lot faster.

or people start dating and get married before learning about romance philosophy. big mistake.

besides philosophy first being a more efficient order (if you are ever going to learn to learn faster, the sooner you do that the more efficient it is, since you get to use it in more cases), it also helps deal with mistakes of various kinds (like marriage. marriage shouldn't be done at all at any speed).

how is one to know whether he's making a big picture mistake he'll regret later, without knowing lots of philosophy? i think it's a serious risk. this includes both risks of doing things badly out of order so it's really inefficient and also risks of doing something that shouldn't be done at all in any order.

so learn a substantial amount of philosophy ASAP or huge risk of disaster. those are the only choices.

put another way, you should start on the beginning of infinity track, now. that means thinking, learning, aiming for lots of speedy progress. you've gotta start making progress now, not at some indeterminate point in the future. and if you're trying to make rapid progress as your standard way of life, then to do it well you've gotta learn what's known about how to do that (which is called "philosophy").

ppl often seem to think the risks of living their life as-is and meeting current preferences are low. they think things seem to be going pretty well, so how bad can it be? maybe they even know some philosophy and fixed some mistakes, and think there can't be too many more (uhh what? how do you know how many more there could be? we're all alike in our infinite ignorance!)

i think basically anything but doing quite a bit of philosophy is extremely risky. also i do and know more philosophy than you and i'm telling you it's risky. so why are you doubting me when you have no criticisms of my philosophical positions? and since you do way less philosophy, how would you even evaluate the risk? it takes philosophy to evaluate how much danger there is and to do anything about the danger. so how can you decide it's an ok risk to take when you lack the knowledge to even understand the risk?

Elliot Temple at 12:55 PM on September 9, 2013 | Permalink | Comments (0)

Jews

Jews and Israel are good. Anti-semitism is bad, including anti-Israel stuff.

As far as I know, none of the following list of people made a public pro-Jewish statement: David Deutsch, William Godwin, Edmund Burke, Thomas Szasz, Karl Popper and Ludwig von Mises.

They should have. That's why I wrote this post. I think it's important to be clear about this issue.

Ayn Rand did, see comments below. :)

Elliot Temple at 4:46 PM on September 2, 2013 | Permalink | Comments (6)

Edmund Burke Anti-semitic Comment

http://www.gutenberg.org/files/15679/15679-h/15679...
Other revolutions [than the French one] have been conducted by persons who, whilst they attempted or affected changes in the commonwealth, sanctified their ambition by advancing the dignity of the people whose peace they troubled. They had long views. They aimed at the rule, not at the destruction of their country. They were men of great civil and great military talents, and if the terror, the ornament of their age. They were not like Jew brokers contending with each other who could best remedy with fraudulent circulation and depreciated paper the wretchedness and ruin brought on their country by their degenerate councils.
Edmund Burke. :(

Elliot Temple at 7:33 PM on August 31, 2013 | Permalink | Comments (0)

Nietzsche the Anti-semite

The Siege: The Saga of Israel and Zionism by Conor Cruise O'Brien, pp 57-58:
... Nietzsche, through his work in replacing Christian (limited) anti-semitism with anti-Christian (unlimited) anti-semitism, played a large part in opening the way for the Nazis and the Holocaust.

I am well aware that that will seem to many people an extravagant, to some even outrageous, statement. The current[67] academic convention regarding Nietzsche is to treat Nazi admiration for this thinker as due to a misunderstanding. As far as anti-semitism is concerned, it can be shown that he condemned it, occasionally. Since the Second World War there has been a consensus for excluding him from the intellectual history of anti-semitism, in which, in fact, his role is decisive.

It is true that Nietzsche detested the vulgar (and Christian) anti-semitism of his own day, especially of his brother-in-law, Bernhard Foerster. It is also true that the main thrust of Nietzsche's writing was not directed against the Jews. It was directed against Christianity. But the way in which it was directed against Christianity made it far more dangerous to Jews than to Christians.

Anti-Christian anti-semitism in itself was nothing new. The most anti-Christian of the philosophes of the eighteenth century–Voltaire especially–were also anti-semitic, though not consistently so.[68] What was new in Nietzsche, however, was the ethical radicalism of his sustained onslaught on Christianity. The Enlightenment tradition, on the whole, had respected, and even to a great extent inculcated–through its advocacy of tolerance–the Christian ethic, the Sermon on the Mount.

Nietzsche's message was that the Christian ethic was poison; its emphasis on mercy reversed the true Aryan values of fierceness: "pride, severity, strength, hatred, revenge." And the people responsible for this transvaluation of values (Umwertung des Wertes), the root of all evil, were the Jews.

In The Antichrist he writes about the Gospels:
One is among Jews–the first consideration to keep from losing the thread completely–Paul and Christ were little superlative Jews. ... One would no more associate with the first Christians than one would with Polish Jews–they both do not smell good. ... Pontius Pilate is the only figure in the New Testament who commands respect. To take a Jewish affair seriously–he does not persuade himself to do that. One Jew more or less–what does it matter?
Nietzsche's real complaint against the vulgar Christian anti-semites of his day was that they were not anti-semitic enough; that they did not realize that they were themselves carriers of that semitic infection, Christianity.[69] "The Jews," he wrote in The Antichrist, "have made mankind so thoroughly false that even today the Christian can feel anti-Jewish without realizing that he is himself the ultimate Jewish consequence."
I think this is convincing. And important. Does anyone disagree or know more about it?

Elliot Temple at 8:17 PM on August 30, 2013 | Permalink | Comments (0)

Epistemology In Short

I got asked for my philosophy on one foot. I personally never found Objectivism on one foot that useful. I thought it was too hard to understand if you don't already know what the stuff means. Philosophy is hard enough to communicate in whole books. Some people read Atlas Shrugged and think Rand is a communist or altruist. Some people read Popper and think he's a positivist or inductivist. Huge mistakes are easily possible even with long philosophical statements. I think the best solution involves back and forth communication so that miscommunication mistakes can be fixed along the way and understanding can be built up incrementally. But to be very effective, this requires the right attitudes and methods for talking. And that's hard. And if people don't already have the right methods to learn and communicate well, how do you explain it to them? There's a chicken and egg problem that I don't have a great answer to. But anyway, philosophy, really short, I tried, here you go:

There is only one known rational theory of how knowledge is created: evolution. It answers Paley's problem. No one has ever come up with any other answer. Yet most people do not recognize evolution as a key theory in epistemology, and do not recognize that learning is an evolutionary process. They have no refutation of evolution, nor any alternative, and persist with false epistemologies. This includes Objectivism – Ayn Rand chose not to learn much about evolution.

Evolution is about how knowledge can be created from non-knowledge, and also how knowledge is improved. This works by a process of replication with variation and selection. In epistemology, ideas and variants are criticized and the survivors continue on in the process. This process incrementally makes progress, just like biological evolution. Step by step, flaws get eliminated and the knowledge gets better adapted and refined. This correction of errors is crucial to how knowledge is created and improved.
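As a toy illustration of the replication-variation-selection loop described above (not from the original post; the string representation of "ideas" and the mismatch-counting "criticism" function are invented for the example, and real criticism is open-ended rather than a fixed scoring rule):

```python
import random

def evolve(target, alphabet="abcdefghijklmnopqrstuvwxyz ", pop_size=100, mutation_rate=0.05):
    """Toy model of knowledge creation: replication with variation,
    plus selection by criticism (here, counting mismatched characters)."""
    def criticize(idea):
        return sum(a != b for a, b in zip(idea, target))

    idea = "".join(random.choice(alphabet) for _ in target)  # start from non-knowledge
    generations = 0
    while criticize(idea) > 0:
        # replicate with variation
        variants = ["".join(random.choice(alphabet) if random.random() < mutation_rate else ch
                            for ch in idea)
                    for _ in range(pop_size)]
        # selection: the variant that best survives criticism continues
        idea = min(variants + [idea], key=criticize)
        generations += 1
    return idea, generations

best, gens = evolve("knowledge")
print(best)  # "knowledge" -- reached incrementally, step by step
```

Note the loop never throws away its current best idea, so each flaw eliminated stays eliminated: progress is incremental, as the paragraph above says.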

Another advantage of evolutionary processes is that they are resilient to mistakes. Many individual steps can be done badly and a good result still achieved. Biological evolution works even though many animals with advantageous genes die before other animals with inferior genes; there's a large random luck factor which does not ruin the process. This is important because of human fallibility: mistakes are common. We cannot avoid making any mistakes and should instead emphasize using methods that can deal with mistakes well. (Methods which deal with mistakes well are rational; methods which do not are irrational because they entrench mistakes long term.)

A key issue in epistemology is how conflicts of ideas are handled. Trying to resolve these conflicts by authority or by looking at the source of ideas is irrational. It can make mistakes persist long term. A rational approach which can quickly catch and eliminate mistakes is to judge conflicting ideas by their content. How do you judge the content of an idea? You try to find something wrong with it. You should not focus on saying why ideas are good because if they have mistakes you won't find the mistakes that way. However, finding something good about an idea is useful for criticizing other ideas which lack that good feature – it reveals a flaw in those rivals. However, in cases where a good feature of an idea does not lead to any criticism of a rival, it provides no advantage over that rival. This critical approach to evaluating ideas follows the evolutionary method.

This has implications for morality and politics. How people handle conflicts and disagreements is a defining issue for their morality and politics. Conflicts of ideas should not be approached by authority, and disagreement should not be disregarded. This implies a voluntary system with consent as a major issue. Consent implies agreement; lack of consent implies disagreement. Voluntary action implies agreement; involuntary action implies disagreement.

Political philosophy usually focuses too much on who should rule (or which laws should rule), instead of how to incrementally evolve our political knowledge. It tries to set up the right laws in the first place, instead of a system that is good at improving its laws. Mistakes should be expected. Disagreement should be expected. Everything should be set up to deal with this well. That implies making it easy to change rulers and laws (without violence). Also disagreement and diversity should be tolerated within the law.

Moral philosophy usually makes the same mistake as political philosophy. It focuses too much on deciding and declaring what is moral and immoral. There should be more concern with fallibility, and with setting things up for moral knowledge to incrementally evolve. We aren't going to get all the answers right today. We should judge moral ideas more by how much they allow evolution, progress and mistake-correction, rather than by trying to know whether a particular idea would be ideal forever. Don't try to prophesy the future; do start setting things up so we can adjust well in the unknown future.

Things will go wrong in epistemology, morality and politics. The focus should be on incrementally evolving things to be better over time and setting things up to be resilient to mistakes. It's better to have mistaken ideas today and good mistake-correction setup than to have superior ideas today which are hard to evolve and fragile to error.

Elliot Temple at 3:30 PM on July 27, 2013 | Permalink | Comments (0)

Rationally Resolving Conflicts of Ideas

I was planning to write an essay explaining the method of rationally resolving conflicts and always acting on a single idea with no outstanding criticisms. It would followup on my essay Epistemology Without Weights and the Mistake Objectivism and Critical Rationalism Both Made where I mentioned the method but didn't explain it.

I knew I'd already written a number of explanations on the topic, so I decided to reread them for preparation. While reading them I decided that the topic is hard and it'd be very hard to write a single essay which is good enough for someone to understand it. Maybe if they already had a lot of relevant background knowledge, like knowing Popper, Deutsch or TCS, one essay could work OK. But for an Objectivist audience, or most audiences, I think it'd be really hard.

So I had a different idea I think will work better: gather together multiple essays. This lets people learn about the subject from a bunch of different angles. I think this way will be the most helpful to someone who is interested in understanding this philosophy.

Each link below was chosen selectively. I reread all of them as well as other things that I decided not to include. It may look like a lot, but I don't think you should expect an important new idea in epistemology to be really easy and short to learn. I've put the links in the order I recommend reading them, and included some explanations below.

Instead of one perfect essay – which is impossible – I present some variations on a theme.
Popper's critical preferences idea is incorrect. It's similar to standard epistemology, and better, but it still shares some incorrectness with rival epistemologies. My criticisms of it can be made of any other standard epistemology (including Objectivism) with minor modifications. I explained a related criticism of Objectivism in my prior essay.

Critical Preferences
Critical Preferences and Strong Arguments

The next one helps clarify a relevant epistemology point:

Corroboration

Regress problems are a major issue in epistemology. Understanding the method of rationally resolving conflicts between ideas to get a single idea with no outstanding criticism helps deal with regresses.

Regress Problems

Confused about anything? Maybe these summary pieces will help:

Conflict, Criticism, Learning, Reason
All Problems are Soluble
We Can Always Act on Non-Criticized Ideas

This next piece clarifies an important point:

Criticism is Contextual

Coercion is an important idea to understand. It comes from Taking Children Seriously (TCS), the Popperian educational and parenting philosophy by David Deutsch. TCS's concept of "coercion" is somewhat different from the dictionary's; keep in mind that it's our own terminology. TCS also has a concept of a "common preference" (CP). A CP is any way of resolving a problem between people which they all prefer. It is not a compromise; it's only a CP if everyone fully prefers it. The idea of a CP is that it's a preference which everyone shares in common, rather than disagreeing.

CPs are the only way to solve problems. And any non-coercive solution is a CP. CPs turn out to be equivalent to non-coercion. One of my innovations is to understand that these concepts can be extended. It's not just about conflicts between people. It's really about conflicts between ideas, including ideas within the same mind. Thus coercion and CPs are both major ideas in epistemology.

TCS's "most distinctive feature is the idea that it is both possible and desirable to bring up children entirely without doing things to them against their will, or making them do things against their will, and that they are entitled to the same rights, respect and control over their lives as adults." In other words, achieving common preferences, rather than coercion, is possible and desirable.

Don't understand what I'm talking about? Don't worry. Explanations follow:

Taking Children Seriously
Coercion

The next essay explains the method of creating a single idea with no outstanding criticisms to solve problems and how that is always possible and avoids coercion.

Avoiding Coercion
Avoiding Coercion Clarification

This email clarifies some important points about two different types of problems (I call them "human" and "abstract"). It also provides some historical context by commenting on a 2001 David Deutsch email.

Human Problems and Abstract Problems

The next two help clarify a couple things:

Multiple Incompatible Unrefuted Conjectures
Handling Information Overload

Now that you know what coercion is, here's an early explanation of the topic:

Coercion and Critical Preferences

This is an earlier piece covering some of the same ideas in a different way:

Resolving Conflicts of Interest

These pieces have some general introductory overview about how I approach philosophy. They will help put things in context:

Think
Philosophy: What For?

Want to understand more?

Read these essays and dialogs. Read Fallible Ideas. Join my discussion group and actually ask questions.

Elliot Temple at 3:28 PM on July 12, 2013 | Permalink | Comments (0)

Regress Problems

Written April 2011 for the beginning of infinity email list:

Infinite regresses are nasty problems for epistemologies.

All justificationist epistemologies have an infinite regress.

That means they are false. They don't work. End of story.

There are options, of course. Don't want a regress? No problem. Have an arbitrary foundation. Have an unjustified proposition. Have a circular argument. Or have something else even sillier.

The regress goes like this, and the details of the justification don't matter.

If you want to justify a theory, T0, you have to justify it with another theory, T1. Then T1 needs justifying by T2. Which needs justifying by T3. Forever. And if T25 turns out wrong, then T24 loses its justification. And with T24 unjustified, T23 loses its justification. And it cascades all the way back to the start.

I'll give one more example. Consider probabilistic justification. You assign T0 a probability, say 99.999%. Never mind how or why; the probability people aren't big on explanations like that. Just do your best. It doesn't matter. Moving on, what we should wonder is whether that 99.999% figure is correct. If it's not correct then it could be anything, such as 90% or 1% or whatever. So it better be correct. So we better justify that it's a good figure. How? Simple. We'll use our whim to assign the probability estimate itself a probability of 99.99999%. OK! Now we're getting somewhere. I put a lot of 9s so we're almost certain to be correct! Except, what if I had that figure wrong? If it's wrong it could be anything, such as 2% or 0.0001%. Uh oh. I better justify my second probability estimate. How? Well, we're trying to defend this probabilistic justification method. Let's not give up yet and do something totally different; instead we'll give it another probability. How about 80%? OK! Next I ask: is that 80% figure correct? If it's not correct, the probability could be anything, such as 5%. So we better justify it. So it goes on and on forever. Now there's two problems. First, it goes on forever and you can't ever stop: you've got an infinite regress. Second, suppose you stopped after some very large but finite number of steps. Then the probability the first theory is correct is arbitrarily small. Because remember that at each step we didn't even have a guarantee, only a high probability. And if you roll the dice a lot of times, even with very good odds, eventually you lose. And you only have to lose once for the whole thing to fail.
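The arithmetic behind that last point is easy to check. A minimal sketch (the per-step probability is just an illustrative figure, like the ones in the paragraph above):

```python
# Each justification step is only probably right. Chaining steps
# multiplies their probabilities, so even with lots of 9s per step,
# confidence in the original theory shrinks toward zero.
step_probability = 0.99999

for steps in (10, 1_000, 100_000, 1_000_000):
    overall = step_probability ** steps
    print(f"{steps} steps -> overall probability {overall:.6f}")
```

No matter how many 9s each step gets, a long enough chain drives the product as close to zero as you like.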

OK so regresses are a nasty problem. They totally ruin all justificationist epistemologies. That's basically every epistemology anyone cares about except skepticism and Popperian epistemology. And forget about skepticism, that's more of an anti-epistemology than an epistemology: skepticism consists of giving up on knowledge.

So how could Popperian epistemology deal with regresses?

(I've improved on Popper some.)

Regresses all go away if we drop justification. Don't justify anything, ever. Simple.

But justification had a purpose.

The purpose of justification is to sort out good ideas from bad ideas. How do we know which ideas are any good? Which should we believe are true? Which should we act on?

BTW that's the same general problem that induction was trying to address. And induction is false. So that's another reason we need a solution to this issue.

The method of addressing this issue has several steps, so try to follow along.

Step 1) You can suggest any ideas you want. There's no rules, just anything you have the slightest suspicion might be useful. The source of the ideas, and the method of coming up with them, doesn't matter to anything. This part is easy.

Step 2) You can criticize any idea you want. There's no rules again. If you don't understand it, that's a criticism -- it should have been easier to understand. If you find it confusing, that's a criticism -- it should have been clearer. If you think you see something wrong with it, that's a criticism -- it shouldn't have been wrong in that way, *or* it should have included an explanation so you wouldn't make a mistaken criticism. This step is easy too.

Step 3) All criticized ideas are rejected. They're flawed. They're not good enough. Let's do better. This is easy too. Only the *exact* ideas criticized are rejected. Any idea with at least one difference is deemed a new idea. It's OK to suggest new ideas which are similar to old ideas (in fact it's a good idea: when you find something wrong with an idea you should try to work out a way to change it so it won't have that flaw anymore).

Step 4) If we have exactly one idea remaining to address some problem or question, and no one wants to revisit the previous steps at this time, then we're done for now (you can always change your mind and go back to the previous steps later if you want to). Use that idea. Why? Because it's the only one. It has no rivals, no known alternatives. It stands alone as the only non-refuted idea. We have sorted out the good ideas from the bad -- as best we know how -- and come to a definite answer, so use that answer. This step is easy too!

Step 5) What if we have a different number of ideas left over which is not exactly one? We'll divide that into two cases:

Case 1) What if we have two or more ideas? This one is easy. There is a particular criticism you can use to refute all the remaining theories. It's the same every time so there's not much to remember. It goes like this: idea A ought to tell me why B and C and D are wrong. If it doesn't, it could be better! So that's a flaw. Bye bye A. On to idea B: if B is so great, why hasn't it explained to me what's wrong with A, C and D? Sorry B, you didn't answer all my questions, you're not good enough. Then we come to idea C and we complain that it should have been more help and it wasn't. And D is gone too since it didn't settle the matter either. And that's it. Each idea should have settled the matter by giving us criticisms of all its rivals. They didn't. So they lose. So whenever there is a stalemate or a tie with two or more ideas then they all fail.

Case 2) What if we have zero ideas? This is crucial because case one always turns into this! The answer comes in two main parts. The first part is: think of more ideas. I know, I know, that sounds hard. What if you get stuck? But the second part makes it easier. And you can use the second part over and over and it keeps making it easier every time. So you just use the second part until it's easy enough, then you think of more ideas when you can. And that's all there is to it.

OK so the second part is this: be less ambitious. You might worry: but what about advanced science with its cutting edge breakthroughs? Well, this part is optional. If you can wait for an answer, don't do it. If there's no hurry, then work on the other steps more. Make more guesses and think of more criticisms and thus learn more and improve your knowledge. It might not be easy, but hey, the problem we were looking at is how to sort out good ideas from bad ideas. If you want to solve hard problems then it's not easy. Sorry. But you've got a method, just keep at it.

But if you have a decision to make then you need an answer now so you can make your decision. So in that case, if you actually want to reach a state of having exactly one theory which you can use now, then the trick is when you get stuck be less ambitious. How would that work in general terms? Basically if human knowledge isn't good enough to give you an answer of a certain quality right now, then your choices are either to work on it more and not have an answer now, or accept a lower quality answer. You can see why there isn't really any way around that. There's no magic way to always get a top quality answer now. If you want a cure for cancer, well I can't tell you how to come up with one in the next five minutes, sorry.

This is a bit vague so far. How does lowering your standards address the problem? So what you do is propose a new idea like this, "I need to do something, so I will do..." and then you put whatever you want (idea A, idea B, some combination, whatever else).

This new idea is not refuted by any of the existing criticisms. So now you have one idea, it isn't refuted, and you might be done. If you're happy with it, great. But you might not be. Maybe you see something wrong with it, or you have another proposal. That's fine; just go back to the first three steps and do them more. Then you'll get to step 4 or 5 again.

What if we get back here? What do we do the second time? The third time? We simply get less ambitious each time. The harder a time we're having, the less we should expect. And so we can start criticizing any ideas that aim too high (while under too much time pressure to aim that high).
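A minimal sketch of the steps above in code (the idea representation, the `criticisms` mapping, and the fallback wording are all invented for illustration; real criticism is open-ended, not a lookup table):

```python
def resolve(ideas, criticisms, fallback):
    """Reject criticized ideas; act only when exactly one idea survives."""
    # steps 2-3: every criticized idea is rejected
    survivors = [idea for idea in ideas if not criticisms.get(idea)]
    if len(survivors) == 1:
        return survivors[0]  # step 4: the single non-refuted idea wins
    if len(survivors) > 1:
        # step 5, case 1: a stalemate refutes all remaining ideas, since
        # none of them explains what's wrong with its rivals
        survivors = []
    # step 5, case 2: zero ideas left -- be less ambitious and propose
    # a new, not-yet-criticized idea about what to do right now
    return fallback

print(resolve(
    ideas=["A", "B"],
    criticisms={"A": ["has an unanswered criticism"], "B": []},
    fallback="I need to act now, so I'll do something modest",
))  # "B" -- the only non-refuted idea
```

In real use the loop would repeat: the fallback is itself a new idea you can criticize, sending you back to steps 1-3.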

BTW it's explained on my website here, including an example:

http://fallibleideas.com/avoiding-coercion

Read that essay, keeping in mind what I've been saying, and hopefully everything will click. Just bear in mind that when it talks about cooperation between people, and disagreements between people, and coming up with solutions for people -- when it discusses ideas in two or more separate minds -- everything applies exactly the same if the two or more conflicting ideas are all in the same mind.

What if you get really stuck? Well why not do the first thing that pops into your head? You don't want to? Why not? Got a criticism of it? It's better than nothing, right? No? If it's not better than nothing, do nothing! You think it's silly or dumb? Well so what? If it's the best idea you have then it doesn't matter if it's dumb. You can't magically instantly become super smart. You have to use your best idea even if you'd like to have better ideas.

Now you may be wondering whether this approach is truth-seeking. It is, but it doesn't always find the truth immediately. If you want a resolution to a question immediately then its quality cannot exceed today's knowledge (plus whatever you can learn in the time allotted). It can't do better than the best that is known how to do. But as far as long term progress, the truth seeking came in those first three steps. You come up with ideas. You criticize those ideas. Thereby you eliminate flaws. Every time you find a mistake and point it out you are making progress towards the truth; you're learning. That's how we approach the truth: not by justifying but by identifying mistakes and learning better. This is evolution, it's the solution to Paley's problem, it's discussed in BoI and on my Fallible Ideas website. And it's not too hard to understand: improve stuff, keep at it, and you make progress. Mistake correcting -- criticism -- is a truth-seeking method. That's where the truth-seeking comes from.

Elliot Temple at 11:53 AM on July 12, 2013 | Permalink | Comments (0)

Coercion and Critical Preferences

Written Aug 2008, addressing ideas of Karl Popper and David Miller about critical preferences:

You should eliminate rival theories until you have exactly one candidate theory and then act on that. We thus dodge any issues of comparing two still-standing theories using some sort of criterion.

And that's it. The problem is solved. When there is only one candidate theory the solution is dead easy. But this solution raises a new problem which is how to deal with all the rival theories (in short order).

If you act on a theory while there are any active rivals that is coercion, which is the cause of distress. You are, roughly, forsaking a part of yourself (without having dealt with it in a rational fashion first).

Often we don't know how to resolve a controversy between two theories promptly, perhaps not even in our lifetime. But that does not mean we are doomed to any coercion. We can adopt a *single theory* with *no active rivals* which says "I don't know whether A or B is better yet, but I do know that I need to choose what to do now, and thus I will do C for reason D." A and B talk about, say, chemistry, and don't contain the means to argue with this newly proposed theory -- they don't address the issue now being considered, of what to do given the unsettled dispute between A and B -- so they are not relevant rival theories, so we end up with only one theory about what to do: this new one we just invented. And acting on this new theory clearly does not forsake A or B; it's not in conflict with them.

We might invent two new theories, one siding more with A, and one more with B, and thus have a new conflict to deal with. But then we have *new problem* which does not depend on resolving the dispute between A and B. And we can do as many layers of reinterpretations and meta-theorizing like this as we want. Coercion is avoidable and practical problems of action are soluble promptly.

If it really comes down to it, just formulate one new theory "I better just do C for reason D *and* all arguments to the contrary are nothing but attempts to sabotage my life because I only have 3 seconds left to act." That ought to put a damper on rival theories popping up -- it'd now have to include a reason it's not sabotage.

One could still think of a rival theory which says it wants to do E because of B, and this isn't sabotage b/c the A-based C will be disastrous and it's trying to help, or whatever. There is no mechanical strategy for making choices or avoiding coercion. What I mean to illustrate is that we have plenty of powerful tools at our disposal. This process can go wrong, but there is plenty of hope and possibility for it to go right.

-----

BTW this does not only apply to resolving rival theories *for purpose of action* or *within oneself*. It also works for abstract theoretical disputes between different people.

Suppose I believe X and you believe Y, *and we are rational*. Think of something like MWI vs copenhagen -- theories on that scale -- except something that we don't already know the answer to.

So we argue for a while, and it's clear that you can't answer some of my questions and criticisms, and I can't answer some of yours.

Most people would say "you haven't proven me wrong, and i see problems with your theory, so i am gonna stick with mine". That's called bias.

Some people might be tempted to analyze, objectively, which theory has more unanswered questions (weighted by importance of the question), and which theory is troubled by how many criticisms (weighted by amount of trouble and importance of criticism). And thus they'd try to figure out which theory is "better" (which doesn't imply it's true, or even that the other is false -- well of course strictly they are both false, but the truth might be a minor modification of the worse theory).

What I think they've done there is abandon the critical rationalist process and replace it with a misguided attempt to measure which theories are good.

What we should do is propose *new theories* like "the current state of the debate leaves open the question of X and Y, but we should all be able to agree on Z, so for issue A we should do B, and for issue C we should do D, and that's something we can all agree on. We can further agree about what research ought to be done, and is important to do, to resolve questions about both X and Y." Thus we can make a new *single theory* that *everyone on both sides can agree to* which does not forsake X or Y. This is the *one rational view to take of the field*, unlike the traditional approach of people being in different and incommensurable camps. This view will leave them with *nothing to argue about* and *no disagreement*.

Of course, someone might say it's mistaken and propose another view of the same type. And so we could have an argument about that. But this new argument does not depend on your view in the X vs Y dispute. It's a new problem. Just like above. And if it gets stuck, we can make another meta-theory.

This approach I advocate follows the critical rationalist process through and through. It depends on constructing new theories (which are just guesses and may go wrong) and criticizing them. It never resorts to static criteria of betterness.

Elliot Temple at 11:19 AM on July 12, 2013

Criticism is Contextual

Criticism is contextual. The "same idea", as far as explicit content and the words it's expressed in go, can have a different status (criticized, or not) depending on the context, because the idea might successfully solve one problem but not solve some other problem. So it's criticized in its capacity to solve that second problem, but not the first. This kind of thing is ubiquitous.

Example: you want to drive somewhere using a GPS navigator system. The routes your GPS navigator system gives you are often not the shortest, best routes. You have a criticism of the GPS navigator system.

Today, you want to drive somewhere. Should you use your GPS? Sure! Why not? Yes, you had a criticism of it, but that criticism was contextual. You criticized it in the context of wanting the shortest routes. But it's not criticized in the context of whether it will do a good job of getting you to today's destination (compared to the alternatives available). It could easily lead you to drive an extra hundred meters (maybe that's 0.1% more distance!) and still get you to your destination on time. Despite being flawed in some contexts, by some criteria, it could still get you there faster and easier than you would have done with manual navigation.

That there is a criticism of the GPS navigator system does not mean never to use it. Nor does it mean if you use it you're acting on refuted or criticized ideas. Criticism is contextual.

Elliot Temple at 12:16 PM on July 11, 2013

Rational People

I wrote this in Dec 2008.

Rational people are systems of ideas that can temporarily remove any one idea in the system without losing identity. We can remain functional without any one idea. This means we can update or replace it. And in fact we can often change a lot of ideas at once (how many depends in part on which).

To criticize one idea is not to criticize my rationality, or my ability to create knowledge, or my ability to make progress. It doesn't criticize what makes me human, nor anything permanent about me. So I have no reason to mind it. Either I will decide it is correct, and change (and if I don't understand how to change, then no one has reason to fault me for not changing yet), or decide it is incorrect and learn something from considering it.

The way ideas die in our place is that we change ourselves, while retaining our identity (i.e., we don't die), but the idea gets abandoned and does die.

Elliot Temple at 11:54 AM on July 11, 2013