Fallible Justificationism

This is adapted from a Feb 2013 email. I explain why I don't think all justificationism is infallibilist. Although I'm discussing directly with Alan, this issue came up because I'm disagreeing with David Deutsch (DD). DD claims in The Beginning of Infinity that the problem with justificationism is infallibilism:

To this day, most courses in the philosophy of knowledge teach that knowledge is some form of justified, true belief, where ‘justified’ means designated as true (or at least ‘probable’) by reference to some authoritative source or touchstone of knowledge. Thus ‘how do we know . . . ?’ is transformed into ‘by what authority do we claim . . . ?’ The latter question is a chimera that may well have wasted more philosophers’ time and effort than any other idea. It converts the quest for truth into a quest for certainty (a feeling) or for endorsement (a social status). This misconception is called justificationism.

The opposing position – namely the recognition that there are no authoritative sources of knowledge, nor any reliable means of justifying ideas as being true or probable – is called fallibilism.

DD says fallibilism is the opposing position to justificationism and that justificationists are seeking a feeling of certainty. And when I criticized this, DD defended this view in discussion emails (rather than saying that's not what he meant or revising his view). DD thinks justificationism necessarily implies infallibilism. I disagree. I believe that some justificationism isn't infallibilist. (Note that DD has a very strong "all" type claim and I have a weak "not all" type claim. If only 99% of justificationism is infallibilist, then I'm right and DD is wrong. The debate isn't about what's common or typical.)

Alan Forrester wrote:

[Justification is] impossible. Knowledge can't be proven to be true since any argument that allegedly proves this has to start with premises and rules of inference that might be wrong. In addition, any alleged foundation for knowledge would be unexplained and arbitrary, so saying that an idea is a foundation is grossly irrational.

I replied:

But "justified" does not mean "proven true".

I agree that knowledge cannot be proven true, but how is that a complete argument that justification is impossible?

And Alan replied:

You're right, it's not a complete explanation.

Justified means shown to be true or probably true. I didn't cover the "probably true" part. The case in which something is claimed to be true is explicitly covered here. Showing that a statement X is probably true either means (1) showing that "statement X is probably true" is true, or it means that (2) X is conjectured to be probably true. (1) has exactly the same problem as the original theory.

In (2) X is admitted to be a conjecture and then the issue is that this conjecture is false, as argued by David in the chapter of BoI on choices. I don't label that as a justificationist position. It is mistaken but it is not exactly the same mistake as thinking that stuff can be proved true or probably true.

In parallel, Alan had also written:

If you kid yourself that your ideas can be guaranteed true or probably true, rather than admitting that any idea you hold could be wrong, then you are fooling yourself and will spend at least some of your time engaged in an empty ritual of "justification" rather than looking for better ideas.

I replied:

The basic theme here is a criticism of infallibilism. It criticizes guarantees and failure to admit one's ideas could be wrong.

I agree with this. But I do not agree that criticizing infallibilism is a good reply to someone advocating justificationism, not infallibilism. Because they are not the same thing. And he didn't say anything glaringly and specifically infallibilist (e.g. he never denied that any idea he has could turn out to be a mistake), but he did advocate justificationism, and the argument is about justification.

And Alan replied:

Justificationism is inherently infallibilist. If you can show that some idea is true or probably true, then when you do that you can't be mistaken about it being true or probably true, and so there's no point in looking for criticism of that idea.

My reply below responds to both of these issues.


Justificationism is not necessarily infallibilist. Justification does not mean guaranteeing ideas are true or probably true. The meaning is closer to: supporting some ideas as better than others with positive arguments.

This thing -- increasing the status of ideas in a positive way -- is what Popper calls justificationism and criticizes in Realism and the Aim of Science.

I'll give a quote from my own email from Jan 2013, which begins with a Popper quote, and then I'll continue my explanation below:

Realism and the Aim of Science, by Karl Popper, page 19:

The central problem of the philosophy of knowledge, at least since the Reformation, has been this. How can we adjudicate or evaluate the far-reaching claims of competing theories and beliefs? I shall call this our first problem. This problem has led, historically, to a second problem: How can we justify our theories or beliefs? And this second problem is, in turn, bound up with a number of other questions: What does a justification consist of? and, more especially: Is it possible to justify our theories or beliefs rationally: that is to say, by giving reasons -- 'positive reasons' (as I shall call them), such as an appeal to observation; reasons, that is, for holding them to be true, or at least 'probable' (in the sense of the probability calculus)? Clearly there is an unstated, and apparently innocuous, assumption which sponsors the transition from the first to the second question: namely, that one adjudicates among competing claims by determining which of them can be justified by positive reasons, and which cannot.

Now Bartley suggests that my approach solves the first problem, yet in doing so changes its structure completely. For I reject the second problem as irrelevant, and the usual answers to it as incorrect. And I also reject as incorrect the assumption that leads from the first to the second problem. I assert (differing, Bartley contends, from all previous rationalists except perhaps those who were driven into scepticism) that we cannot give any positive justification or any positive reason for our theories and our beliefs. That is to say, we cannot give any positive reasons for holding our theories to be true. Moreover, I assert that the belief we can give such reasons, and should seek for them is itself neither a rational nor a true belief, but one that can be shown to be without merit.

(I was just about to write the word 'baseless' where I have written 'without merit'. This provides a good example of just how much our language is influenced by the unconscious assumptions that are attacked within my own approach. It is assumed, without criticism, that only a view that lacks merit must be baseless -- without basis, in the sense of being unfounded, or unjustified, or unsupported. Whereas, on my view, all views -- good and bad -- are in this important sense baseless, unfounded, unjustified, unsupported.)

In so far as my approach involves all this, my solution of the central problem of justification -- as it has always been understood -- is as unambiguously negative as that of any irrationalist or sceptic.

If you want to understand this well, I suggest reading the whole chapter in the book. Please don't think this quote tells all.

Some takeaways:

  • Justificationism has to do with positive reasons.

  • Positive reasons and justification are a mistake. Popper rejects them.

  • The right approach to epistemology is negative, critical. With no compromises.

  • Lots of language is justificationist. It's easy to make such mistakes. What's important is to look out for mistakes and try to correct them. ("Solid", as DD recently used, was a similar mistake.)

  • Popper writes with too much fancy punctuation which makes it harder to read.

A key part of the issue is the problem situation:

How can we adjudicate or evaluate the far-reaching claims of competing theories and beliefs?

Justificationism is an answer to this problem. It answers: the theories and beliefs with more justification are better. Adjudicate in their favor.

This is not an inherently infallibilist answer. One could believe that his conception of which theories have how much justification is fallible, and still give this answer. One could believe that his adjudications are final, or one could believe that his adjudications could be overturned when new justifications are discovered. Infallibilism is neither excluded nor required.


Looking at the big picture, there is the critical approach to evaluating ideas and the justificationist or "positive" approach.

In the Popperian critical approach, we use criticism to reject ideas. Criticism is the method of sorting out good and bad ideas. (Note that because this is the only approach that actually works, everyone does it whenever they think successfully, whether they realize it or not. It isn't optional.) The ideas which survive criticism are the winners.

In the justificationist approach, rather than refuting ideas with negative criticism, we build them up with positive arguments. Ideas are supported with supporting evidence and arguments. The ones we're able to support the most are the winners. (Note: this doesn't work, no successful thinking works this way.)

These two rival approaches are very different and very important. It's important to differentiate between them and to have words for them. This is why Popper named the justificationist approach, which had gone without a name because everyone took it for granted and didn't realize it had any rival or alternative approaches.

Both approaches are compatible with both infallibilism and fallibilism. They are metaphorically orthogonal to the issue of fallibility. In other words, fallibilism and justificationism are separate issues.

Fallibilism is about whether or not our evaluations of ideas should be subjected to revision and re-checking, or whether anything can be established with finality so that we no longer have to consider arguments on the topic, whether they be critical or justifying arguments.

All four combinations are possible:

Infallible critical approach: you believe that once socialist criticisms convince you capitalism is false, no new arguments could ever overturn that.

Infallible justificationist approach: you believe that once socialist arguments establish the greatness of socialism, then no new arguments could ever overturn that.

Fallible critical approach: you believe that although you currently consider socialist criticisms of capitalism compelling, new arguments could change your mind.

Fallible justificationist approach: you believe that although you currently consider socialist justifying arguments compelling (at establishing the greatness and high status of socialism, and therefore its superiority to less justified rivals), you are open to the possibility that there is a better system which could be argued for even more strongly and justified even more and better than socialism.


BTW, there are some complicating factors.

Although there is an inherent asymmetry between positive and negative arguments (justifying and critical arguments), many arguments can be converted from one type to the other while retaining some of the knowledge.

For example, someone might argue that the single particle two slit experiment supports (justifies) the many-worlds interpretation of quantum physics. This can be converted into criticisms of rivals which are incompatible with the experiment. (You can convert the other way too, but the critical version is better.)

Another complicating factor is that justificationists typically do allow negative arguments. But they use them differently. They think negative arguments lower status. So you might have two strong positive arguments for an idea, but also one mild negative argument against it. This idea would then be evaluated as a little worse than a rival idea with two strong positive arguments but no negative arguments against it. But the idea with two strong positive arguments and one weak criticism would be evaluated above an idea with one weak positive argument and no criticism.

This is easier to express in numbers, but usually isn't expressed that way. E.g. one argument might add 100 justification and another adds 50, and then a minor criticism subtracts 10 and a more serious criticism subtracts 50, for a final score of 90. Instead, people say things like "strong argument" and "weak argument" and it's ambiguous how many weak arguments add up to the same positive value as a strong argument.
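To make that bookkeeping concrete, here's a toy sketch in Python of the scoring a justificationist is implicitly doing. All the weights are invented for illustration; no one publishes numbers like these:

    # Hypothetical justificationist scoring: positive arguments add
    # justification, criticisms subtract it, and the highest total wins.
    # All weights are made up for illustration.
    def justification_score(positive_weights, criticism_weights):
        return sum(positive_weights) - sum(criticism_weights)

    idea_a = justification_score([100, 50], [10, 50])  # 90
    idea_b = justification_score([100, 50], [])        # 150
    idea_c = justification_score([20], [])             # 20

    # idea_b > idea_a > idea_c: a criticized but strongly supported idea
    # still outranks an uncriticized but weakly supported one.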

In justification, arguments need strengths. Why? Because simply counting up how many arguments each idea has for it (and possibly subtracting the number of criticisms) is too open to abuse by using lots of unimportant arguments to get a high count. So arguments must be weighted by their importance.

If you try to avoid this entirely, then justificationism stops functioning as a solution to the problem of evaluating competing ideas. You would have many competing ideas, each with one or more arguments on their side, and no way to adjudicate. To use justificationism, you have to have a way of deciding which ideas have more justification.

The critical approach, properly conceived, works differently than that. Arguments do not have strengths or weights, nor do we count them up. How can that be? How can we adjudicate between competing ideas without that? Because one criticism is decisive. What we seek are ideas we don't have any criticisms of. Those receive a good evaluation. Ideas we do have criticisms of receive a bad evaluation. (These evaluations are open to revision as we learn new things.) (Also there are only two possible evaluations in this system: the ideas we do have criticisms of, and the ideas we don't. If you don't do it that way, and you follow the logic of your approach consistently, you end up with all the problems of justificationism. Unless perhaps you have a new third approach.)
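By contrast, a sketch of the critical approach needs no weights at all (again, just an illustrative toy):

    # The critical approach: one criticism is decisive, so there are only
    # two possible evaluations and nothing to weigh or add up.
    def evaluate(criticisms):
        return "bad" if criticisms else "good"

    # Evaluations are revisable: if a new criticism of a "good" idea is
    # discovered later, its evaluation flips to "bad".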



Problem Solving While Reading

I'd urge anyone who has trouble reading something to stop and do problem solving instead of ignoring the problem or giving up. This kind of thing is an opportunity to practice and improve.

You could e.g. take a paragraph you have trouble with and analyze it, possibly with a paragraph tree.
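If it helps to see the shape of such an analysis, here's one possible minimal representation of a paragraph tree in Python (the structure and example sentences are placeholders, not a fixed format):

    # A paragraph tree: the root is the paragraph's main point and
    # children support or elaborate on their parent.
    from dataclasses import dataclass, field

    @dataclass
    class Node:
        text: str
        children: list = field(default_factory=list)

    tree = Node("Main claim of the paragraph", [
        Node("First supporting sentence", [
            Node("Detail elaborating on the first support"),
        ]),
        Node("Second supporting sentence"),
    ])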

If you do that kind of activity many times, you will get better at reading that type of material and reading in general. You can automatize some of the analysis steps so, in the future, you automatically know some of the results without having to go through all the steps. One way to look at it: if you do those activities enough, you'll get faster at them, and some of the conclusions will become predictable to you before you've consciously/explicitly done all the steps.

When stuff is hard, slow down and figure out the correct answer – the way you ideally want to do it – so you end up forming good habits (a habit of doing what you think is best when you go slowly and put in more effort) instead of bad habits.

This is the same as improving at other kinds of things, e.g. typing. If you’re typing incorrectly (e.g. hitting a key with the wrong finger, or looking at the keyboard while typing), you should slow down, fix the problems, then speed up only when you’re doing it the way you want to. It’s hard to fix errors while going fast. And you should avoid habit-forming amounts of repetition of the activity until you’re satisfied with the way you’re doing it.

You can never be perfect. It’s also important to sometimes change your habits after they’re formed. Sometimes you’ll learn something new and realize a habit or subconscious automatization should be changed. But forming habits/automatizations and then changing them soon after is inefficient; it’s more efficient to make a serious effort to get them right in the first place so you can reduce the need to change habits. You don’t want to form a habit that is worse than your current knowledge.


If you do this text analysis stuff consistently whenever there are hard parts, it will be disruptive to reading the book. It'll slow you way down and spread your reading out due to taking many breaks to practice. You won’t get much reading flow due to all the interruptions. Here are some options for dealing with that problem:

  1. It doesn't matter. Improving skills is the priority, not understanding the book. You can read the book later including rereading the sections you had many stops during.
  2. Read something else where you run into harder parts infrequently so stopping for every hard part isn't very disruptive.
  3. Make trees, outlines or other notes covering everything so you get an understanding of the book that way rather than from direct reading. E.g. do paragraph trees for every paragraph and then make section trees that put the paragraphs together, and then do trees that put the sections together, and keep doing higher level trees until you cover the whole book.
  4. Read a section at a time then go back and do the analysis and practice after finishing the section but before reading the next section, rather than stopping in the middle of a section. That'll let you read and understand a whole chunk at once (to your current standards). Analyzing/practicing/etc. in between sections shouldn't be very disruptive.

With option 4, it’s very important not to cheat and read multiple sections in a row while planning to go back to stuff eventually. Even if you try to go back later, the hard stuff won’t be fresh enough in your mind anymore. If you’re procrastinating on doing any analysis, it’s because you don’t actually want to do it. In that case you need to do problem solving about that. Why are you conflicted? Why does part of you want to improve intellectually and do learning activities, etc., while part of you doesn’t? What part doesn’t and what is its motivation?

Also how big a section should you use? It depends on the book (does it have natural break points often?) and your memory (if a section is too big you’ll forget stuff from the earlier parts) and your skill level. If a section is too big, you’ll also have too many hard parts you need to do (e.g. 20) which may be overwhelming or seem like too much work. Also by the time you analyze the first 19 hard parts, you won’t remember the 20th one because it’s been so long since you read the end of the section. And if you’re trying to analyze and revise how you understood 20 parts at once, it’s hard to take those all into account at once to update your understanding of what the book said. Doing it closer to “read something, analyze it right away to understand it correctly, keep reading” has clear advantages like letting you actually use your analysis to help your reading instead of the analysis being tacked on later and not actually being used. So you might need to use sections that are pretty short, like 2 or 3 pages long, which could give you more uninterrupted reading flow without being too much to deal with at once. You could do it based on reading time too, like maybe 5 or 10 minutes would be a reasonable chunk to read at once before you stop to analyze (depending on how many problems you’re having). Also if you have a big problem, like you’re really extra confused about a sentence, paragraph or argument, you may want to stop early.

Also, it’s important to analyze and practice regarding small problems and slightly hard parts, not just major problems. Some people only want to focus on the really visible problems, but optimizing smaller stuff will help you get really good at what you’re doing. Also if something is actually a small difficulty then working on it should go fast. If it takes a long time and seems like a hassle, then you needed the practice and it wasn’t that small for you after all. Though if it feels like a hassle, that means you’re conflicted and should investigate that conflict.

If you’re conflicted, here are relevant articles by me:

And I wrote a part 2 for this post:

Subconscious Reading; Conscious Learning; Getting Advanced Skills

And recorded a podcast:

Reading, Learning and the Subconscious | Philosophy Podcast



Subconscious Reading; Conscious Learning; Getting Advanced Skills

Yesterday I wrote about practicing when you find any hard parts while reading. I have more to say.

First, noticing it was hard is a visible problem. What you noticed is usually under 10% of the actual problem(s). The problem is probably at least 10x larger than you initially think. So don’t ignore it. When you find visible problems you should be really glad they weren’t hidden problems, and assume they might be the visible tip of an iceberg of problems, and investigate to see if there are more hard-to-find problems near the visible problem. A visible problem is a warning something is wrong that lets you know where to investigate. That’s really useful. Sometimes things go badly wrong and you get no warning and have no idea what’s going on. Lots of people react to visible problems by trying to get rid of them, which is shooting the messenger and making any other related problems harder to find. If you have a habit of “solving” problems just enough that you no longer see the problem, then you’re hiding all the evidence of your other less visible problems and entrenching them, and you’ll have chronic problems in your life without any idea about the causes because you got rid of all the visible clues that you could.

Second, if people practiced hard reading once a day (or once per reading session) regardless of how many hard parts they ran into, they would make progress. That would be good enough in some sense even though they ignored a bunch of problems. But why would you want to do that? What is the motivation there? What part of you wants to ignore a problem, keep going, and never analyze it? What do you think you’re getting out of getting more reading done and less problem solving done?

Are you reading a book that you believe will help you with other urgent problems even if you understand it poorly? Is finishing the book faster going to be more beneficial than understanding it well due to an urgent situation? Possible but uncommon. And if you’re in that situation where you urgently need to read a book and also your reading skill is inadequate to understand the book well, you have other problems. How did you get in that situation? Why didn’t you improve at reading sooner? Or avoid taking on challenges you wouldn’t be able to do with your current skills?

Do you think your current reading, when you find stuff hard to read, is actually adequate and fine? You just think struggling while reading – enough to notice it – is part of successful reading and the solution is extra work and/or a “nobody’s perfect” attitude? Your knowledge can never be perfect so what does it matter if there were visible flaws? It could be better! You could have higher standards.

If you notice reading being hard, your subconscious doesn’t fully know how to read it. Your reading-related habits and automatizations are not good enough. There are three basic ways to deal with that:

  1. Ignore the problem.
  2. Read in a more conscious way. Try to use extra effort to succeed at reading.
  3. Improve your automatizations so your subconscious can get better at reading.

I think a ton of people believe if they can consciously read it, with a big effort, then they do know how to read it, and they have nothing more to learn. They interpret it being hard as meaning they have to try harder, not as indicating they need better skills.

What are the problems with using conscious effort to read?

First, your subconscious isn’t learning what you read well in that case. So you won’t be able to implement it in your life. People have so many problems with reading something then not using it. There are two basic ways to use something in your life:

  1. You can use it by conscious effort. You can try extra hard every time you use it.
  2. You can learn it subconsciously and then use it in a natural, intuitive, normal way. This is how we use ideas the vast majority of the time.

We don’t have the energy and conscious attention to use most of our ideas consciously. Our subconscious has 99% of our mental resources. If you try to learn something in a conscious-effort-only way, you’re unlikely to get around to ever using it, because your conscious attention is already overloaded. It’s already a bottleneck. You’re already using it around maximum capacity. Your subconscious attention is a non-bottleneck. Teaching your subconscious to do things is the only way to get more done. If you learn something so you can only do/use it by conscious effort, then you will never do/use it unless you stop doing/using some other idea. You will have to cut something out to make room for it. But if you learn something subconsciously, then you can use it without cutting anything out. Your subconscious has excess capacity.

So if reading takes conscious effort, you’ll do way less of that reading. And then every idea and skill you learn from that conscious reading will require conscious effort to use, so the reading won’t change your life much. The combination of reading not improving your life, plus taking a lot of precious conscious effort, will discourage you from reading.

It’s possible to read with conscious effort, then do separate practice activities to teach your subconscious. Even if your subconscious doesn’t learn something by reading, it can still learn it in other ways. But people usually don’t do that. And it’s better if your subconscious can learn as much as possible while you read, so less practice is needed later. That’s more efficient. It saves time and effort.

Also you can’t read in a fully conscious way. You always use your subconscious some. If your subconscious is making lots of mistakes, you’re going to make more conscious mistakes. Your conscious reading will be lower quality than when your subconscious is supporting you better. You’ll have more misunderstandings. You can try to counter that by even more conscious effort, but ultimately your conscious mind is too limited and you need to use your subconscious as an effective ally. There is an upper limit on what you can do using only your conscious mental resources plus a little of your subconscious. If you add in effective use of your subconscious, the ceiling of your capabilities rises dramatically.

Also, if you’re reading by conscious effort, you might as well use it as practice and teach your subconscious. The right way to read by conscious effort involves things like making tree diagrams. If you do that a bunch, your subconscious can learn a lot of what you’re doing so that in the future you’ll sometimes intuitively know answers before you make the diagrams.

What people do with high-effort conscious reading often involves avoiding tree diagrams, outlines, or even notes. It’s like saying “I find this math problem hard, so I’m going to try really hard … but only using mental math.” Why!? I think they often just don’t know how to explicitly and consciously break it down into parts, organize the information, process it, etc. If you can’t write down what’s going on in a clear way – if you can’t get the information out of your head onto paper or computer – then the real problem is you don’t know how to read it consciously either. If you could correctly read it in a conscious way, you could write it down. If you had a proper explicit understanding of what you read, what would stop you from putting it into words and speaking them out loud, writing them down, communicating with others, etc? It’s primarily when we’re relying on our subconscious – or just failing – that we struggle to communicate.

People don’t do tree diagrams and other higher-effort conscious analysis mostly because they don’t know how. When they try to do higher effort conscious reading, they don’t actually know what they’re doing. They just muddle through and ignore lots of problems. They weren’t just having and ignoring subconscious reading problems. They were also having and ignoring conscious reading problems. Their conscious understanding is also visibly flawed.

What should be done? You need to figure out how to get it right consciously as step one of learning a skill. Then once you’re satisfied with how you do it consciously, you practice that and form good habits/automatizations in your subconscious. This is the general, standard pattern of how learning works.

If you just keep reading a bunch while being consciously confused, you’re forming bad subconscious habits and failing to make progress. You’re missing out on the opportunity to improve your reading skills. You’re a victim of your own low standards or pessimism. If you want to be a very good, rational thinker you need to get good at reading, both consciously and subconsciously. If you don’t do that, you’ll get stuck regarding fields like critical thinking and you’ll run into chronic problems with learning, and with not using and acting on what you read and “learn”. You can’t act on what you never learned properly. And even if you managed to learn it consciously, that won’t work because your conscious mind is already too busy: to actually do something, you have to either stop doing something else or use your more plentiful subconscious resources.

If you want to get better at reading beyond whatever habits you picked up from our culture, school, childhood, etc., you have two basic options.

Option 1: Read a huge amount and you might very gradually get better. That works for some people but not everyone. It often has diminishing returns. If you’re bad at reading and rarely read, then reading 50 novels has a decent chance to help significantly. If you’re already experienced at reading novels, then you might see little to no improvements after reading more of them. This strategy is basically hoping your subconscious will figure out how to improve if you give it lots of opportunities.

Option 2: Consciously try to improve your reading. This means explicitly figuring out how reading works, breaking it down into parts, and treating it as something that you can analyze. This is where things like outlines, grammar, sentence trees, paragraph trees, and section trees come in. Those are ways of looking at text and ideas in a more conscious, intentional, organized, explicit way.

I think people resist working on conscious reading because it’s a hassle. They read mostly in a subconscious, automatic way. Their conscious mind is actually bad at reading and unable to help much. So when they first start trying to do conscious reading, they actually get worse at reading. They have to catch their conscious reading abilities up to their subconscious reading level before they can actually take the lead with their conscious reading and then start teaching their subconscious some improvements. I suspect people don’t like getting temporarily worse at reading when trying to do it more consciously so they avoid that approach and give up fast. They don’t consciously know what the problem is but they intuitively didn’t like an approach where they’re less able to read and actually quite bad at it. Their conscious reading is a mess so they’d rather stick with their current subconscious reading automatizations – but then it’s very hard to improve much.

The only realistic way to make a lot of progress and intentionally get really good at this stuff is to figure out how to approach reading and textual analysis consciously, gain conscious competence, then gain conscious higher skill level, then teach that higher skill level to your subconscious. If you just stick with your subconscious competence, it works better in the short term but isn’t a path to making much progress. If you’re willing to face your lack of conscious reading skills and you see the value in creating those skills, then you can improve. It’s very hard to learn and improve without doing it consciously. When you originally learned to read, your conscious reading ability was at least as good as your subconscious reading ability. But then you forgot a lot of your conscious reading skill after many years of reading mostly subconsciously. You don’t remember how you thought about reading when you were first learning it and were making a big conscious effort.

You do remember some things. You could probably consciously sound out the letters in a word if you wanted to. But you don’t need to. Your reading problems are more related to reading comprehension than to reading individual words or letters. Doing elementary school reading comprehension homework is a perfectly reasonable place to start working on your conscious reading skills again. Maybe you’d quickly move up to harder stuff and maybe not and it’s OK either way. I’ve seen adults make errors when trying to read a short story aimed at third graders and then correctly answer some questions about what happened in the story. It’s good to test yourself in some objective ways. You need an answer key or some other people who can catch errors you miss. They don’t necessarily have to be better at reading than you. If you have a group of ten people who are similarly smart and skilled to you, you can all correct each other’s work. That will work reasonably well because you have different strengths and weaknesses. You’ll make some mistakes that other people don’t, and vice versa, even though on average your skill levels are similar. There will also be some systemic mistakes everyone in your group makes, but you can improve a lot even if you don’t address that.

Doing grammar and trees is a way to try to get better at reading than most people. It’s part of the path to being an advanced reader who knows stuff that most people don’t. But a lot of people should do some more standard reading comprehension work too, which is aimed at reducing reading errors you make and getting more used to practicing reading skills, but which isn’t aimed at being an especially advanced reader. I think a lot of people don’t want to do that because of their ego, their desire to appear and/or already be clever, and their focus on advanced skills. But you’re never going to be great at advanced skills unless you go back through all your beginner and intermediate skills and fix errors. You need a much higher level of mastery than is normal at the beginner and intermediate stuff in order to be able to build advanced skills on top of them. The higher you want to build skills above a level, the lower error rate you need at that level. The bigger your aspirations for advanced stuff, the more perfect you need your foundational knowledge to be to support a bunch of advanced knowledge built on top of it.

You can think of it in terms of software functions which call other functions (sub-functions) which call other functions (sub-sub-functions). The lower level functions, like sub-sub-sub-functions, are called more times. For every high level function you call, many many lower level functions are called. So the error rate of the lower level functions needs to be very very low or else you’ll get many, many errors because they’re used so much. This is approximate in some ways but the basic concept is the more you build on something – the more you’re relying on it and repeatedly reusing it – the more error-free it needs to be. If something gets used once a month, maybe it’s OK if it screws up 1% of the time and then you have to do problem solving. If something is used 10,000 times a day, and it’s a basic thing you never want to be distracted by, then it better have a very low error rate – less than a 1 in 100,000 chance of an error is needed for it to cause a problem less than every 10 days on average.
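Here's the arithmetic from that paragraph as a toy calculation (simplified: it treats errors as independent and evenly spread over time):

    # How often a component causes a problem depends on both its error
    # rate and how heavily it's used.
    def days_between_errors(uses_per_day, error_rate):
        return 1 / (uses_per_day * error_rate)

    # Used about once a month with a 1% error rate: one problem roughly
    # every 3,000 days. Tolerable.
    print(days_between_errors(1 / 30, 0.01))          # 3000.0

    # Used 10,000 times a day: the error rate must be below 1 in
    # 100,000 just to keep problems rarer than one per 10 days.
    print(days_between_errors(10_000, 1 / 100_000))   # 10.0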

So don’t lose self-esteem over needing to improve your basic or intermediate skills, knowledge and ideas. If you’re improving them to higher standards (lower error rates) than normal, then you aren’t just going back to school like a child due to incompetence. You’re trying to do something that most people can’t do. You’re trying to be better in a way that is relevant to gaining advanced skills that most people lack. You’re not just relearning what you should have learned in school. School teaches those ideas to kinda low standards. School teaches the ideas with error rates like 5%, and if you’re a smart person reading my stuff you’re probably already doing better than that at say a 1% error rate but now you need to revisit that stuff to get the error rate down to 0.0001% so it can support 10+ more levels of advanced knowledge above it.

For more information, see Practice and Mastery.

And I recorded a podcast: Reading, Learning and the Subconscious | Philosophy Podcast



Visible and Hidden Problems

Some problems are easier to see than others. If you look for problems, there are some that are pretty easy to find, and some that are hard to find. Some problems are so easy to find that you’ll find them without even looking for problems. Other problems are so hard to find that you could fail to find them after a lifetime of searching.

There are often many related problems. Having no money is a problem that’s easy to notice. But it’s not the whole story. What’s the cause or underlying issue? Maybe it’s because you disliked the jobs you tried and spend most of your time intentionally unemployed. That’s not very hidden either. Why do you dislike those jobs? Maybe you didn’t like being bossed around to do things that you consider unwise. Maybe you didn’t like being bossed around because you’d rather boss others around. Maybe you’re lazy. There are lots of potential problems here.

There can be many, many layers of problems, and the deeper layers are often harder to analyze, so the problems there are more hidden.

Hard to find problems can be impactful. People often see negative consequences in their lives but don’t understand enough about what is causing those consequences.

Like maybe you don’t have many friends and you want more. But you keep not really getting along with people. But you don’t know much about what’s going wrong. Or you might think you know what the problems are, but be wrong – it’s actually mainly something else you never thought of. People often try to solve the wrong problem.


One problem solving strategy people have is to find all the most visible, easy-to-find problems they can and solve them.

This is like going around and cutting off the tips of icebergs. You have these problem-icebergs and you get rid of the visible part and leave the hidden part as a trap. That actually makes things worse and will lead to more boats crashing because now the icebergs are still there but are harder to see. (Actually, I’m guessing that if you cut the tip of the iceberg off, the rest would float up a little higher and become visible. But pretend it wouldn’t.)

Your visible problems are your guide to where your hidden problems are. They’re not a perfect, reliable or complete guide. But they give pretty good hints. Lots of your invisible problems are related to your visible problems. If you get rid of the visible problems and then start looking for more problems, it’ll be hard to find anything. You basically went around and destroyed most of your evidence about what invisible problems you have.


What should you do instead?

Don’t rush to make changes. Do investigations and post mortems when you identify problems. Look for related problems. Take your time and try to understand root causes more deeply.

Once you have a deeper understanding of the situation, you can try to come up with the right high-power solutions that will solve many related problems at once.

If you target a solution at one problem, you’re likely to fix it in a parochial, unprincipled way – put a band-aid on it.

If you figure out ten problems including some that were harder to see, and you come up with ten solutions, then each of the solutions is likely to be superficial.

But if you figure out ten problems and come up with one solution to address all ten at once, then that solution has high leverage. There’s some conceptual reasoning for how it works. It involves a good explanation. It has wider reach or applicability. It’s more principled or general purpose.

So, not only will this solution solve ten problems at once, it will probably solve twenty more you didn’t know about. It works on some whole categories of problems, not just one or a few specific problems. So it’ll also solve many similar problems that you didn’t even realize you had.



Big Picture Reasons People Give Up on Learning Philosophy

Why might people give up on learning philosophy or learning to be a great (critical) thinker?

I think maybe no one has ever quit my community while making rapid progress.

Maybe people only quit when they get fully stuck or progress gets too slow.

How/why do they get stuck?

People are very resistant to doing easy/childish/basic stuff. They want to do complex stuff which they think is more interesting, less boring, more impressive, more important, etc. When they do harder and more complicated stuff, I regard it as skipping steps/prerequisites which leads directly to an overwhelmingly high error rate. They may experience their high error rate as e.g. me having 10 criticisms for each of their posts, which they can't deal with so they might blame the messenger, me. They may be blind to their high error rate because they don't understand what they're doing enough to spot or understand the errors (due to the missing prerequisites, skipped steps) or because they have low standards (they're used to being partially confused and calling that success and moving on – that's how they have dealt with everything complicated since age 5).

People may be disorganized. If you successfully do many tiny projects which don't skip steps, that will only translate into substantive progress if you are following some plan/path towards more advanced stuff and/or you integrate multiple smaller things into more complex stuff.

People may have some hangup/bias and be unwilling to question/reconsider some particular idea.

People are often very hostile to meta discussion. This prevents a lot of problem solving, like doing workarounds. Like if they are biased about X, you could have a meta discussion about how to make progress in a way that avoids dealing with X. It’s completely reasonable to claim “You may be biased about X. I think you are. If you are and we ignore it and assume you aren’t, that could make you stuck. So let’s come up with a plan that works if you are biased about X and also works if you aren’t biased about X.” In other words, we disagree about something (whether you’re biased or wrong about X) and can’t easily agree, so we can come up with a plan that works regardless of who is right about the disagreement. People have trouble treating some of their knowledge as unreliable when it feels reliable to them. Their subconscious intuitions treat it as reliable, and they are bad at temporarily turning those off (in a selective way for just X) or relying on conscious thought processes for dealing with this specific thing. They’re also bad at quickly (and potentially temporarily) retraining their subconscious intuitions.

More broadly if there is any impasse in a discussion, you could meta-discuss a way to proceed productively that avoids assuming a conclusion about the impasse, but people tend to be unwilling to engage in that sort of (meta) problem solving. You can keep going productively in discussions, despite disagreements, if you are willing to come up with neutral plans for continuing that can get a good result regardless of who was right about the disagreement. But people usually won’t do that kind of meta planning and seem unwilling to take seriously that they might be wrong unless you actually convince them that they are wrong. They just want to debate the issue directly, and if that gets stuck, then there’s no way to make progress because they won’t do the meta technique. Or if they will do a meta level, they probably won’t do 5 meta levels to get past 5 disagreements (even with no nesting – just 5 separate, independent disagreements, which is easier than nested disagreements), so you’ll get stuck later.

The two big themes here are people get stuck because they try to build advanced knowledge on an inadequate foundation and they don’t want to work on the foundation. And they have issues with problem solving and get stuck on problems and won’t meta discuss the problems (talking about the problem itself, rather than continuing the original discussion).

Lots of this stuff happens alone. Like biased people might get stuck because they’re biased. And even if they realize they might be wrong or biased about a specific thing, they can still get stuck similar to if I pointed out a potential error or bias.

One pattern I’ve seen is people make progress at first, and then the first time they run into a problem that they get stuck on for a week, they never solve it. That can lead to quitting pretty quickly or sometimes many months later if they keep trying other stuff. When trying other stuff, they will occasionally run into another problem they don’t solve, so the list of unsolved problems grows. They made initial progress by solving problems they found easy (ones their intuitions were good at or whatever), but unless they can solve problems they find hard, they are screwed in the long run.

Regarding going back to less complex stuff to improve knowledge quality, sometimes people try that but run into a few problems. One, they go back to material that's a lot more basic than they’re used to, still make tons of errors, and don’t want to go back even further. Two, they do some basic stuff but are not able to connect it to the more advanced stuff and use it – they aren’t organized enough, don’t integrate enough, do conscious review but don’t change their subconscious, or don’t understand the chain of connections from the basic stuff to the advanced stuff well enough.



Prices, Decision Factors and Time Will Run Back

Prices (or exchange ratios between goods) are not primaries. They are derived from something else. They come second. They are implied by other information. You can’t just go directly discover them; they aren’t inherent in reality; they aren’t raw evidence; you go find something else and then calculate prices from it.

The value of each good is a factor in a different dimension. There are no general case conversions between dimensions.

Many individuals make many trades. How is any one trade done? By people judging “subjectively” whether they prefer A to B, and if so they will trade B for A, and if not they won’t. People do critical thinking about which thing will be more valuable to them. Prices are implied by these many binary decisions.

Sometimes people trade using amounts, e.g. 10 pounds of rice. There’s still ultimately a binary decision about whether to accept a particular trade or not. But people also judge, in a narrow context, how much of something they value against how much of something else. It’s normally a direct comparison between exactly two factors from two dimensions. When people judge how much they value rice against wheat – what quantity of each they see as equal – they aren’t thinking about corn. It’s not a many-factor decision.

There are two main ways they compare quantities. They either think about what problems stuff will solve in their own lives or they think about using it for future trade, which is ultimately based in solving problems in their lives but more indirectly.

Money doesn’t change the principles.

Anyway, I was rereading Time Will Run Back by Henry Hazlitt (which I like and recommend) and thinking about the connections between the coupon market with its exchange ratios (from the book) and decision making factors in different dimensions (from Critical Fallibilism (CF)). Here are some book quotes:

There is no inherent exchange ratio between bread coupons and cigarette coupons, or between bread and cigarettes.

The gain from the exchange occurs in each case not because of some inherent difference in the relative objective value of the goods themselves, but because each party to the exchange more fully meets his own desires by making it.

“Marx’s labor theory of value was wrong [...] among other reasons, because it rested on the assumption that values were measured by some objective unit, whereas values are only measured subjectively. The value of a commodity doesn’t reside in the commodity; it resides in a relationship between somebody’s needs or desires and the capacity of that commodity to satisfy those needs or desires.... Marx looked for some objective standard of value because he assumed that two commodities that exchanged for each other must do so because of some ‘equality’ between them. But if two commodities were exactly equal, in the opinion of two persons, each of whom held one of them, there would be no reason for any exchange to take place at all. It is only because Peter, who holds potatoes, thinks that a certain amount of prunes, held by Paul, would be more valuable to him, that Peter would want to make an exchange. And only if Paul placed the opposite relative value on a given amount of potatoes and prunes would he agree to make the exchange.”

Similarly, there are no objective exchange ratios between factors in different dimensions. There are no single right answers for conversion factors (as there are between miles and meters, which are factors in the same dimension, length). Instead, the comparative values of dimensions are contextual, which Austrian economists call “subjective” – they vary by the relationship between the factor and a person’s needs or desires (which changes over time even for the same person, as his situation, goals, resources, etc. change).


Ludwig von Mises says socialists can’t do economic calculation (without cheating). This is a claim that unit conversions between factors of production can only come from the free market, not from central planners. Why? Because they are “subjective” – they aren’t inherent in the goods themselves, but are instead related in complex ways to individual contexts. So a central planner can’t calculate it because it isn’t a property of the good itself, or of the manufacturing process, or of the statistics on raw materials mined this year, or of any combination of such things.

The ways socialists cheat are to use prices from a market as approximations. They can get prices from memory, looking at another country, or by allowing some trade in their socialist country.

In Time Will Run Back, the plot is designed so there are no other non-socialist countries and the characters can’t remember pre-socialist prices (and conditions changed a lot since then anyway). It’s been 100 years of worldwide socialism, and they destroyed books with capitalist ideas, and they made everyone change to a new language and stop using all prior languages, and they’ve been using the gulag system on ~15% of the population (that’s at present – probably more in the past). The people who remember capitalism are more likely to end up in labor camps (or, to avoid that fate, to never speak of it).

So they wonder how to calculate the total value of all their production for a year, and how to compare it to alternative production plans. Another book quote:

“But, chief, how can you possibly have such a figure? What is 200,000,000 pairs of shoes added to 1,000,000,000 bushels of wheat added to 1,000,000 quarts of gin? It’s 1,201,000,000—of what? You can only add things of precisely the same kind—otherwise the total is meaningless.”

Shoes, wheat and gin are factors in different dimensions. You can’t add them to get a total. This logic matches CF. This matters to planning:

Suppose we increased the production of shoes from 200,000,000 to 250,000,000 pairs a year, but only at the cost of reducing the production of wheat from 1,000,000,000 to 800,000,000 bushels a year. Would we be better off or worse off?

Without prices (exchange ratios between goods, unit conversions between dimensions), they can’t compare goods (or services).

A person can judge, for himself, if he’d rather have one pair of shoes or four bushels of wheat. But a central planner can’t judge which is better for the country.

This is really unintuitive to many people. They’re used to having prices. They don’t understand that socialism would actually lack prices and be unable to figure out basic issues like these. It's hard to envision such a different situation where you don't already have prices and then understand that you couldn't just figure them out easily even though they're so familiar to us.

Similarly, one of the typical attempted answers to CF's multi-factor decision making claims is: just convert everything into dollars to get all the factors into the same dimension. In general in the world today, dollars are the best solution we have for getting many different factors into a common measure of value. People assume that we can do it for everything like we do it for dollars. But we only do it for dollars using a market. You can’t do it without a market. And, further, people typically don’t understand how it’s done with dollars, what actions are required, what limitations there are, etc.

Market prices fluctuate. That’s totally unlike meter-to-mile unit conversions. Why? The prices aren't inherent in the goods. They’re “subjective” (contextual). They fluctuate with supply and demand, which fluctuate both with the conditions of production (how much of what is being made) and also the preferences of traders.

So you can’t just find out the market price and be done. You need to keep the market around to find out about price changes. If you get rid of the market, your prices will get more and more out of date as time passes.

Markets aren't perfect by the way. They don't find perfect prices. They're just a lot better at figuring out prices than any known alternatives, especially for a large society or for trade between strangers.

Market prices are derived. You don’t just figure out the market price because it’s not a primary. It comes out of many individual decisions to make or decline particular trades. People often try to approach decision making by coming up with weights first, but that’s not how markets work. In markets, weights (how much each factor is worth) are implied by what actually comes first, which is individual decision making about specific trades. Weights (prices) are an aggregate that we see second.

The market is our best mechanism for dealing with factors in many dimensions in a unified way, and people typically don’t understand the market which leads to confusions about decision making too. Asking people to make up decision making weights is like asking a central planner to make up prices.

The fundamental thing is critical thinking. Would this trade benefit me right now in my specific context? I try to think of problems with it and problems with refusing it. I try to think of my goals and how different goods can be used to achieve my goals.

Austrian economics doesn’t know specifically how people make their individual decisions. It’s not an epistemology. That’s OK. It just specifies that the decisions are made based on people’s preferences not based on inherent traits of the things traded (those traits are still relevant though), and that different people have different preferences, and that people’s preferences change over time and change as their context changes.

A good (or trade) is a bundle of many factors. Austrian economics doesn’t expect anyone to evaluate each of the many factors, weight them and sum them. It just expects them to choose between bundles. Do you prefer this shirt or that sack of rice? There are many factors by which we might evaluate shirts or rice, and you may take some into account, but you don’t have to do weighted factor math, and aren’t expected or assumed to. Instead, you think about things like whether your old shirt can be mended and whether you have enough food. Those are factors but they don’t aim at anything like comprehensiveness (you can’t ever really be comprehensive). Instead, they are just a few key factors related to your problems and preferences.

People’s assumption that decision makers can assign weights to a bunch of factors is the same basic error as socialist central planners assigning weights to goods to enable them to calculate tradeoffs between different production plans. There’s too much complexity. Instead, people tend to make choices like “I prefer A to B” and if you get enough choices like that organized by a market then you can derive market prices. And then when you get used to market prices, you start thinking in terms of prices (factor weights), and viewing them as normal. But they’re derived and they aren’t available in general, only via markets. They are summaries of market trading.
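As a toy model of that derivation (all the preference numbers are invented, and real markets are far messier), you can see a price emerge from nothing but individual yes/no decisions:

    # Each person decides yes/no on a proposed rice-per-shirt ratio based
    # on their own preferences. A "market price" is implied afterward as
    # the ratio where the most willing buyers meet willing sellers.
    buyers = [4.0, 3.5, 3.2, 3.0, 2.8]   # max rice each will give per shirt
    sellers = [2.5, 2.9, 3.0, 3.1, 3.6]  # min rice each will take per shirt

    def trades_at(ratio):
        willing_buyers = sum(b >= ratio for b in buyers)
        willing_sellers = sum(s <= ratio for s in sellers)
        return min(willing_buyers, willing_sellers)

    candidates = [2.5, 2.8, 3.0, 3.2, 3.5]
    price = max(candidates, key=trades_at)
    print(price, trades_at(price))  # 3.0, with 3 trades happening

No individual chose the number 3.0; it's a summary of their many binary choices.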

Market prices don’t tell individuals what to prefer. You can’t/shouldn’t use market prices to know what trades to make. They don’t dictate how much you should value something. You look at market prices to see what trades are available. They tell you your options. You then trade based on differences between your preferences and the market price options. To benefit from trade, you must have some preferences that are independent of market prices, which you then compare to market prices to find differences, and then you trade based on those differences. E.g. you currently value shirts (relative to dollars or potatoes) more than the market does, so you can beneficially trade for shirts. (And if you value shirts less, you could beneficially trade them away to get something else (sell them). Positive and negative differences both enable beneficial trading.)

If your preferences exactly matched market prices, then trade would be neutral for you, not beneficial. Market prices are an aggregate of many people’s preferences, and that aggregate differs from most or all individuals’ preferences. They’re like an average or blending.

Market prices don’t guide individual preferences. They’re derived from preferences and only useful because they differ from people's preferences. Market prices instead help guide production and trade. They alert people to what their fellow men value highly, so that people can offer it for trade (if they already have it) or produce it for trade.

People commonly don’t understand dollar prices and how those relate to exchange ratios between all the goods. They don’t think about the wheat-rice exchange ratio or “price”, and the shirt-rice price, the potato-rice price, and so on. They only think of prices in dollars. They convert everything to dollars (like generic goodness points in decision making) instead of converting directly between dimensions. To compare shirts and rice, they tend to compare the dollar value of each. They may not even be used to thinking of which they prefer between the two. But they might wonder which of two expensive items they’d rather have, realizing they can’t afford to buy both. They’ll also directly compare two couches, realizing they’ll only buy one (for space, price and/or usefulness reasons).

Instead of pricing everything in dollars, you could price everything in rice. In Time Will Run Back, after a while the market prices everything in terms of cigarettes.

Mathematically, it doesn’t matter what you price things in terms of. Practically, it matters to use something non-perishable, small, light, divisible (for making change), easy to measure and hard to counterfeit – something practical for trade.

Why does the math not care what prices are stated in terms of? Because markets work to make the exchange ratios between all the dimensions consistent, and when we do math we tend to idealize the market and assume consistent prices for all goods. How do markets approach pricing consistency? Arbitrage. When prices are inconsistent, people can trade profitably and this trading makes the prices more harmonious.

If 3 rice is worth a shirt, 2 potatoes is worth a shirt, and 1 rice is worth 1 potato, then prices are inconsistent. You can trade 2 potatoes for a shirt, which you trade for 3 rice, which you trade for 3 potatoes. You gained a potato for nothing. Your trades are also asymmetric: you’re adding demand to trade potatoes for shirts, but not shirts for potatoes. You’re also adding demand to trade shirts for rice and rice for potatoes, but not vice versa. Because you only take one side of each trade, you help move the price. The more you offer potatoes for shirts, but not shirts for potatoes, the more you’ll raise the price of shirts in terms of potatoes. You can keep doing this until the profit from your trades approaches or reaches zero.

After a while, you have to trade 2.33 potatoes to get a shirt, the shirt only gets you 2.66 rice, and the 2.66 rice only trades for 2.33 potatoes. At that point, if you trade in a circle, you end up with what you started with (probably slightly less due to transaction costs). The prices are now harmonious. There’s no more arbitrage opportunity.

Since the prices now all match each other mathematically, you could state the prices in terms of any good, and nothing would be inconsistent with that. In some sense, money is basically just whichever good is most popular for trading. Money is used as a store of value (you can sell now and buy later, instead of only trading away something now to get something now) and for indirect trade. Indirect trade means I want a shirt but I can trade with someone who isn’t offering a shirt. I get a shirt through two steps. I trade potatoes for money, then I trade money for a shirt. Without money, indirect trade can theoretically be done but it's likely to involve more steps and more hassle. Money caps trading at two steps, which is actually more convenient than one step because I can buy and sell as separate trades instead of having to buy from the same person I sell to. I can sell my eggs to one guy then buy a chair from someone else, instead of having to find one guy trading a chair for eggs.

I'll close with one more Time Will Run Back quote related to CF's Multi-Factor Decision Making Math:

“We have already decided [...] that we are working completely in the dark. You simply can’t add things that are unlike each other. Or subtract, or multiply, or divide them, or even, in any meaningful quantitative way, compare them. You can’t add pigs to pears, or subtract houses from horses, or multiply tractors by toothbrushes.”

Time Will Run Back is available for free as an ebook.


Elliot Temple | Permalink | Messages (0)

Getting Stuck in Discussions; Meta Discussion

This is a followup on my post Rationally Ending Discussions. It also relates to gjm’s reply and Dagon’s reply. gjm said (my paraphrase) that he’s more concerned about people putting ongoing effort into conversations that provide low value than about people missing out on important discussions; staying too long is a bigger concern than leaving too soon. Dagon said (my interpretation, which I think he’d deny): intellectual norms are not the norm when human beings interact; human interaction follows social norms and it’s weird of you to even bring up rationality outside of special cases.

This post is fairly long and meandering. It's not a direct response to gjm or Dagon, just some thoughts on these general topics. I think it's interesting and relevant but it's not focused and won't provide talking point responses.

First I'll give brief but direct replies:

For gjm: If people are systematically missing out on the best, most important discussions, so those never take place, that is a huge problem for humanity's progress and innovation worldwide. When people avoid a lot of low quality discussions but also avoid crucial error-correcting discussions, I don't think most of them are doing a good job of error correction elsewhere more efficiently (meaning with less time and energy). There aren't easy, better alternatives.

Also, there are lots of ways to deal with low value discussions better without being closed to discussion and missing out on great discussions. People can improve at discussion instead of giving up and focusing only on flawed, problematic alternatives that also need improvement. It's possible to identify and end low value discussions quickly and rationally, and to develop and use rational filters before discussion that improve the signal/noise ratio without blocking error correction like typical filtering methods today do.

Also, most of the people avoiding low quality discussion and blaming others don't know how to participate in a high quality discussion themselves and should practice more and try to fix their own errors instead of complaining about others. If you haven't spent thousands of hours on discussion (and on post mortems, analysis of discussion, trying to improve), why think you're so great and blame everyone else when things don't work out? Even if you spent a ton of time discussing you might still be bad at it, but if you haven't put in the time why be confident that you're discussing well? Even many prestigious intellectuals with multiple books and tons of fans have not actually spent a lot of time having intellectual discussions and trying to get better at discussing rationally (and aren't very good at it). Even if they've done a lot of podcast discussions, that's going to be a lot fewer hours than people can get doing written discussion on forums; verbal discussion allows less precision and less following up over time than written discussion; and it will leave them inexperienced at discussing with the many types of people who don't get invited on podcasts.

For Dagon: I think it's reasonable for rationality-oriented forums to exist online, and he was posting on one of them. Like science labs, rationality-oriented forums are the special cases where rationality may be discussed today, or at least they are trying to be even if they sometimes fail. Let's try to make rational spaces work better instead of attacking that goal.


People sometimes stay in discussions they don’t want to have. This is often bad for all parties. They tend to write low quality messages when they don’t want to be there. And they write things which are misleading about their intentions, values, goals, preferences, etc. – e.g. which suggest they will want to continue discussing this a bunch in the future, as a major project, rather than suggesting they have one foot out the door already. When they do leave, it’s often abrupt and surprising for the other person.

Sometimes there are hints the person may be leaving soon but they aren’t explicit statements like “I am losing interest and may leave soon”. Often the guy’s explicit statements deny there’s a problem. Instead, the hints are things like being a bit insulting. If the guy is insulting you, you can infer that maybe he’s less interested in talking to you. Or not. There are alternatives. Maybe he’s just a jerk who isn’t very skilled at calm, truth-seeking discussion, and you can be tolerant and charitable by ignoring the insults. Maybe the rational thing to do is refuse to take a hint. That’s certainly encouraged sometimes in some ways.

Sometimes people think I’m insulting them when I didn’t intend to. What people consider insulting varies by subculture, temperament, etc. A thick skin is a virtue sometimes.

If someone insults you and you say “Is this a sign you want to end the discussion?” often they will deny it. They might even apologize and say they’ll stop insulting you. That kind of statement is often a more effective way to get an apology than directly asking for an apology is.

Why do people deny wanting to end the discussion? One reason is they think they should discuss or debate their ideas. Another reason is they think you’re asking “Do you give up?” They want to defend certain ideas, not give up on defending those ideas. They want to be able to claim that their side can stand up to debate, not concede. So when they end the discussion, they want to blame you or external circumstances, so it has nothing to do with their arguments failing. If they stop discussing because they’re tilted and they insulted you, it’s like admitting they don’t have reasonable arguments. So they want to reply like “No, I really do have good arguments; let me do a rewrite.”

The same sort of thing happens with milder stuff than insults, in which case it’s even harder to deal with and it’s less clear what’s going on.

Insults are one of the ways people communicate that they think you’re being dumb. They see your position and reasoning as low quality. Why, then, are they engaging? It’s usually not some sort of rational, intellectual tolerance, nor a belief that the world is inadequate so high quality stuff more or less doesn’t exist (so there’s no better discussion available). It’s often that they want to change and control you – someone is wrong on the internet and they can’t just let him stay wrong! Or they want a victory for themselves. Or they want to embarrass and attack your tribe (the group of people with similar beliefs that they see you as representing). Or if you have obscure, non-standard or unique beliefs, often they want to suppress the heresy. How can you believe something so ridiculous or sinful? That seems maybe fixable in debate if anything is. If you can’t even correct a guy who believes something that’s flat-Earth level dumb (or pro-adultery level immoral), can you ever expect to correct anyone on anything? And one might think that correcting people with unpopular views should be especially easy (but actually people usually hold unpopular views for a reason, whereas if they hold a popular view they might never have thought about it much and not have a strong or informed opinion). If reason can work, surely this is one of the cases where it’ll work? (Nah, it’s actually easier to persuade people more similar to yourself who are already pretty good at reason. You could see them as more challenging opponents but persuasion isn’t just a battle; they have to think and learn something from what you said in order to substantively change their mind. Persuasion is partly about their skill and their willingness to be reasonable voluntarily.)

The easiest people to persuade in general have a bit less skill and knowledge than you, but not a lot less. So you can correct them on some stuff but they actually understand you pretty well and they know some stuff about learning and being reasonable. Amount of skill (or knowledge) isn’t a single dimension though so it’s more complicated (there exist many different skills).

Someone more skilled than you, even a lot more, can be easy to persuade if you’re right – if you actually know something they don’t and have some decent reasons (not just blind luck; you can explain some reasoning that makes sense). But that’s uncommon. Usually when you try to debate them you’ll be wrong. They already thought of more stuff than you did. But if you’re right, it can be easy, because they’ll help argue your side and fill in gaps for you. And they may quickly recognize you’re saying something they haven’t addressed before and take an interest in it, rather than seeing it as “the opposing tribe”.

There’s no great way to know who is more or less skilled than you. We can approximate that some. We can guess. A lot of guesses are really just using social status as a proxy. Some ways of guessing are better, e.g. if I read an intellectual book and it really impresses me, then I may guess the author is high skill: probably at least near my own skill, or else above me, not way below me. Part of what happens in debate is you can find out who is higher skill and more knowledgeable. It’s a bit of a side issue (the main issue is evaluating specific ideas) but people do gain some visibility into it (though they often find it rude for the higher skilled person to say anything about it, and also in acrimonious debates usually the people both think they’re higher skill than the other person).

In some ways it’s not important to evaluate people’s skill levels, but it’s not useless and it can help with mentally modeling the world. Imagine if a young child didn’t realize he was lower skill than the adults he was debating, when he really was way lower skill. That arrogance could make learning harder. It can be hard to do everything in the maximally rational way of just evaluating every idea on its merits. Realizing you’re outmatched and should try listening, even when you don’t fully understand why you’re wrong about everything (and may not be wrong in every case), can help too.


People can exit discussions more easily when the stakes are lower, when they have a good reason/excuse, when the guy they are debating agrees with their reason/excuse for leaving, when their ego isn’t on the line, or when they won’t be assumed to have lost the debate because they left.

It’s hard to exit discussions when you’re trying to prove you’re open minded or open to debate. We all know (at least in communities with some interest in rationality), in some sense, that we’re supposed to be willing to argue our points instead of just assert claims and then ignore dissent.

Having low stakes, no debate methodology, no claim that debate matters to rationality, etc., clashes with the goal of transparency and with anti-bias procedures. Cheaply exiting discussions without explanation makes it easy for bias to ruin most of most discussions. Whenever something important comes up that relates to an anti-rational meme or bias, people can just take their free option to end the discussion with zero accountability. It’s unlimited, consequence-free evasion.

How do you control evasion, rationalization, bias, dodging questions, etc?

Similarly, when people decide the other guy is an idiot, or the discussion is low quality or unproductive due to the other guy, then approximately half the time they are wrong. It’s important that there be some error correction mechanisms: when you blame the other guy, how would you find out if actually you’re wrong? If you tried to construct a length 5 impasse chain, it’d often reveal the truth: it’d become clear to reasonable people whether you were right or wrong (sometimes this will even work for you even though you’re biased: what actually happened in the debate can be so clear it overcomes your rationalizations when you try to actually write it out).

Standard discussion procedure largely avoids meta discussion. If someone says something I regard as low quality, some of my main options are:

  1. Don’t reply. Don’t explain why to the other guy or to the audience.
  2. Ignore the low quality part and try to reply to whatever I think has value. Often this means ignoring most of what they said and focusing on the topic itself. This often results in complaints from people who don’t think you’re engaging with them … which makes sense because you’re intentionally not engaging with some of what they said.
  3. Steel man it by charitably interpreting them as meaning something different than what they said. Try to guess a good idea similar to their bad idea and engage with that. But sometimes you can't think of any good ideas similar to the (from your point of view) confused nonsense they just said... And sometimes they don't mean or understand a better idea than what they said or they would have said something better.

What’s not socially permitted in general is:

  1. Explain why I think their message is low quality. This would invite a correction or an end to the discussion.

The first difficulty is that the other guy will get defensive and the audience will read it as socially aggressive. You’re not allowed to openly talk about most discussion problems and try to address them head on. You’re typically supposed to sort of pretend everyone is good at discussion and doing nothing wrong, and discussions fade out blamelessly because people have differing interests and because productive discussion is hard and doesn’t always happen even when people make good tries at it.

For certain types of problems, you’re not supposed to attempt cooperative problem-solving. You’re allowed to assume and guess about what’s going on and make adjustments unilaterally (some of which are wrong and make things worse, which often would have been easy to figure out if you’d communicated).

Continuing to speak generally not personally: If I try to talk about a discussion problem, and the guy responds defensively and fights back, what happens next? Will I learn from his negative comments? No. I will see it as flaming or shoddy argument. I won’t find out I was wrong. This happens often. Even if he gave a great rebuttal, the typical result is I’d still be biased and not get corrected. There’s nothing here that causes me to overcome my bias. I had a negative viewpoint, then I stated a problem, and he responded, and then why would I learn I’m wrong? What is going to make that amazing result happen? It’s a harder thing to be corrected about than a regular topic. Maybe. Is it? If I made a specific accusation and he gave a specific, short, clear response which engaged with those specific words, that’s fairly easy to be corrected by. Not easy but easy relative to most stuff. But more ego is involved. Maybe. People are really tribalist about most of the stuff they care enough about to talk about. If they don’t have some sort of emotional investment or bias or whatever – if they don’t care – then they tend not to talk about it on forums (they’ll talk about the weather in small talk without really caring though).

Do people care about things without being biased? Do they have tentative positions which they think are correct but they’d like to upgrade with better views? Not usually.


What do you do in a world where the dominant problem is people staying in discussions they don’t want to be in, sabotaging the hell out of them? Where the concept of actually adding anti-bias procedures is a pipe dream that’ll threaten and pressure people into even more bad behavior? What a nightmare it is if straightforward rationality stuff actually makes things worse. What can be done?

This is a rationality is a harsh mistress type problem. (In my opinion, The Moon Is a Harsh Mistress by Robert Heinlein is a good book, though it's not relevant to this essay other than me borrowing words from the title.) People find rationality itself pressuring. Rationality is demanding and people self-pressure to try to live up to it. They can experience discussion norms about rational methods as escalating that pressure.

And how will they respond to analysis like this? They’ll find it pressuring, condescending, wrong, hostile, etc. Or they might grant it applies to most (other) people. But they generally won’t face head on the situation they are actually in and talk about how to handle it and what could work for them. That’d admit too much.

So … don’t talk to almost everyone? But that lets fakers get to the top of the intellectual world since they aren’t really held to any standards and aren’t accountable in any way. But if you publicly broadcast standards, as I do, it pressures people; it’s seen as a challenge to them (it basically is a challenge to the public intellectuals).

Most people on Less Wrong (LW) don’t take themselves seriously or think they matter, but also won’t admit that. They don't expect to come up with any great intellectual innovations and don't think their forum discussions are important to humanity's intellectual progress. It’s hard to ask for the people who think they matter to come forward and chat. People will pretend they are in that category who aren’t.

One of the things you can do is speak differently at different places. I’ve tried posting meta discussion at my own forum but not on another forum like LW. This doesn’t communicate about discussion problems with the people you’re discussing with, so it mostly isn’t a solution. But at least some people – those who care more – can discuss the discussion. It also has the risk that someone from the forum where you’re more guarded finds your less guarded comments and gets mad. But most people don’t look around enough to find out. There’s a self-selection issue. People who find the less guarded comments aren’t a random sample. They’re people who are more willing to explore who are more likely to take the comments well.


Eliezer Yudkowsky doesn’t like to have discussions with his own community. He doesn’t post on Less Wrong anymore. My experience is they aren’t much like him (where “him” = his writing, which has a variety of good stuff, but I think he’s actually worse than that in discussion; people rarely live up to their best work on an ongoing basis). His fans mostly don’t seem to understand or remember a bunch of his published ideas. Plus they’re generally pretty flawed. Not entirely bad though.

One of the hard parts with LW is people read random bits of my posts. I posted a bunch of related stuff, mostly in sequence, and then people come in the middle and don’t understand what’s going on.

I can’t explain everything at once in the first post and also no one seems to be following along or be interested in a sequence of posts that builds up to something bigger. And they are pretty openly willing to skim and complain about post length when something is 4000 words. Saying “this is part 3 of 6 in a series” and linking the other stuff doesn’t help much. Most people just ignore that and won’t go visit part 1. Even if they only wanted to read one, I’d rather they go read part 1 not the new part they just saw, but they usually won’t. Most people have strong recency bias and a strong bias against clicking links.


It’s hard to signal the right things to connect with the right people when people are putting up a bunch of fake signals.

The people staying in discussions they don’t want to be in are communicating false information and causing trouble. This is a major problem that makes it hard for the people who want rational discussion to find each other. Instead of viewing them as victims of social pressure (which they are), you can view them as trolls who are lying and sabotaging (which they also are).

When I signal (or very explicitly state) what I want, a bunch of people join the discussion who don’t want it. They don’t admit they don’t want it. They make it hard to figure out who actually wants it because they’re all pretending to want it.

What can be done about this false signaling problem? I’m pretty good at spotting false signalers. I can sometimes tell quickly now. That used to be way harder for me but I know more signs now. And I can point them out explicitly and do analysis instead of just asserting it. But the analysis sometimes involves a bunch of prerequisites and advanced logic so other people don't follow it. I could also explain the prerequisites but then it’s a big, long learning process that could take years.

But what do I do when I spot fakers? Telling them to go away tends to offend people and prompt requests for reasoning which they will find insulting and unwanted. They aren’t really open to hearing objective analysis of their flaws. And this can also get multiple offended people to gang up on me. Doing the critical analysis and reasoning without telling them to go away gets a similar result; they don’t want criticism.

I can ignore the fakers but then they’ll imply that I don’t want discussion since I’m ignoring lots of people without explaining. That’s one of the issues. There are social pressures to reply. People make assumptions if you don’t. Willingness to defend your ideas in debate is already judged conventionally; that’s not a new thing I made up.

I’m not socially allowed to just ignore over 90% of people that reply to me on forums because I don't think their claims to be interested in discussion are genuine. And I’m not socially allowed to say I don’t believe them; that’s offensive. And I’m not socially allowed to explain why I don’t believe them and criticize their integrity. And I don’t know how to create productive discussion with them. And I don’t know how to explain that I’m looking for a small minority of people on their forum and get the wrong people to actually stop pretending they qualify.

I do know some stuff about how to proceed properly in discussion. I could pretend the person wants a real discussion and do what I’d do in that case. The result is catching them out and showing some example of what they’re doing wrong, since they never discuss right. But then they just stop talking or muddy the waters. No one learns any lessons. I’ve done it many times. Maybe some of my fans learn something since they’ve seen it a bunch and now it informs their understanding of what the world is like and what forums are like (or maybe they just cargo cult some of my behaviors and treat people badly while thinking they’re being rational). But the people at the forum I’m visiting, or the people new to my forum, don’t learn about general trends from examples like that. Because they don’t want to actually discuss the trends.

People’s hostility to meta discussion makes rational discussion pretty impossible. That’s the key.

The general pattern of what ruins everything is:

  1. Problem. This is OK so far. Problems are inevitable.
  2. Something to suppress problem solving.

Part 2 is called "irrationality". Working against error correction or problem solving is what irrationality is.

And suppressing meta discussion means suppressing problem solving. Discussions run into problems but you aren’t allowed to talk about them and try to solve them because then you’re discussing the discussion, discussing the behavior of the participants, changing the topic (people wanted to talk about the original, object-level topic, not this new topic), etc. People are interested in talking about minimum wage or global warming, not about whether a particular paragraph they posted fits some particular aspects of rationality or logic or not. People generally don't want to discuss whether their writing is unclear and whether that is symptomatic of a pattern where their subconscious writing automatizations aren’t good enough to productively deal with the advanced topic they want to talk about.

If you try to do X (any activity or project including a discussion), and then you run into a problem, and then you talk about that problem, that is meta discussion. You’re talking about the activity and how to do the activity and that sort of thing, rather than doing the activity. How to do X is a meta level above X. People do put up with that sometimes. Mostly in learning contexts. If you go to a cooking class you’re allowed to talk about how to cook. But if you’re just cooking with your friend, commonly you’re supposed to assume you both already know how to cook and don’t talk about how to do it, just do it.

Some stuff has problem solving built in. If you’re playing a video game, talking about strategies (limited parts of how to play) may be considered part of the game activity. If you go to an escape room, talking about how to solve the puzzles is normal.

What people object to is one meta level above whatever they expect or want. Which is often exactly what you need for problem solving. Whatever the highest level of abstraction or meta that they are OK with, if you have a problem at that level, then talking about that problem and trying to solve it is one meta level too far.

If the goal is to learn cooking, then a problem at that level is a learning problem (not a cooking problem, which is a lower level). And talking about learning problems would be viewed as off topic, out of bounds, etc. So you can’t solve the learning problem.

In general, people can only learn one meta level below what they are willing to deal with. If you’re willing to talk about learning to cook, then you can learn to cook (one meta level lower) but you can’t learn to learn to cook (same level you’re willing to deal with). Learning about X requires going one meta level above/past X so you can talk about X and talk about problems regarding X.

But it’s actually harder than that. That’s something of a best case scenario. Sometimes your meta discussion has problems and needs a higher level of meta discussion.

With cooking, some people are willing to talk about learning to cook while trying to cook. They’re open to two different levels at once. But typical philosophy discussion doesn’t offer two levels to enable learning about the lower one because people aren’t openly trying to learn. They’ll try to talk about AGI or free will or atheism and that’s the one single level the whole discussion is supposed to take place at. Just do it. You aren’t discussing how to do anything, or directly trying to learn, so you don’t learn. People will set out to learn to cook but it’s uncommon to see anyone on a philosophy forum trying to learn. You can find people on Reddit (mostly students who are taking philosophy classes at university, but some hobbyists too) asking for reading recommendations to learn from, but they don’t normally actually try to learn in online discussion. On some subreddits (like AskHistorians or AskPhilosophy) people ask questions and try to learn from the answers without having discussions. People tend to try to do their learning by themselves with some books and maybe lectures and then when they talk to other people (in a back-and-forth discussion not just asking a question or two) they’re trying to be philosophers who say wise things.

And people will claim that of course learning from debate or saying ideas is one of the goals; but it usually isn’t really, not in a serious way; their focus is on being clever and saying things they think are right and they aren’t talking about actually learning to do a skill in the way people will try to learn to cook. They’re always saying like “I think X is true because Y” not “I need to figure out how to analyze this. What are some good techniques I could use? I better double check my books to make sure I do those techniques correctly.” Whereas with cooking people will ask how to cook something, and maybe double check some information source to remind themselves how to use a tool, which is more learning oriented than philosophy questions people ask like “What is a good answer to [complex, hard issue]?” When they ask for ready-made answers to tough topics, they aren’t learning to create those answers themselves; they aren’t learning all the steps to invent such an answer, compare it to other answers, and reach a good conclusion. With cooking, people often ask for enough information that they can do it themselves, so it’s more connected to actually learning something.


Elliot Temple | Permalink | Messages (0)

What Is a Philosopher?

  • What is a philosopher?
  • Why be a philosopher?
  • What skills are needed?
  • What are the goals of philosophy?
  • How do you tell good philosophy from bad philosophy?
  • Is philosophy for everyone?

Regarding whether philosophy is for everyone: it could be, in a better world. It used to be that only a few people learned math, but now in the USA we try to teach arithmetic to everyone in elementary school. We’re not very good at teaching it, but a lot of people become competent at it. Not many people become specialist mathematicians who learn really advanced math. Philosophy could be the same way: ~everyone learns the basics, but only a few people become experts on the more advanced parts of the field. Similarly, we’re trying to make ~everyone literate, but only a few people write books, are grammar experts, or are great at analyzing and understanding text (which gets into philosophy skill too).


Philosophy is the field which takes on big, fundamental, timeless, impersonal and abstract questions which aren’t covered by the hard sciences. To some extent, philosophy covers anything which isn’t parochial and isn’t covered by another field. Philosophy also has some specific topics it has claimed like epistemology (the study of knowledge), rationality, morality, metaphysics, ontology, scientific method, political philosophy and some stuff related to logic, aesthetics and theology (but logic, art and religion are major topics outside of philosophy, too).

The origin of the universe is a big question, but the “big bang” theory is part of physics not philosophy. Philosophers don’t do a lot of measuring or experimenting like scientists. (In the past, there wasn’t a clear distinction between philosophers and scientists, and “natural philosophy” meant things like physics, chemistry and biology.)

Philosophy is also used in all other fields because it provides part of the foundations of all other fields. Philosophical ideas can be applied to help with other fields. Philosophy includes topics like how to think rationally or how learning works (conceptually; various specific learning or teaching methods can be outside of philosophy), so that’s useful in every field.

Philosophy generally doesn’t deal with the details of psychology or current events. It sticks to more general principles; e.g., political philosophy deals with broad, timeless questions of how society can and should be organized. Economics is relevant there but is its own separate field.

On average, philosophers are more willing to be unconventional and potentially offensive, in pursuit of truth, than other people.

Debate and rhetoric are partly separate fields from philosophy. Debate clubs with scoring systems do things that aren’t very philosophical or rational. But how to actually debate issues to a conclusion is a philosophical matter (as opposed to how to bicker with others in ways our culture currently considers effective or persuasive).

Philosophy deals with some very hard questions, which is one of the reasons it’s not very popular. People often get pretty stuck, come up with several plausible answers, and debate those answers for centuries without reaching conclusions. Philosophy has been overshadowed by science because scientists, in the last few centuries, have made a bunch of clear, objective progress. They’ve settled a bunch of disagreements using experiments and more conclusive modes of debate. Also, successful technologies have been pretty convincing. You generally can’t build a machine to demonstrate that your philosophical theory is correct as you can with many scientific theories. And machines can provide a lot of practical benefit to people. Electricity and motors come from science. You can see that light bulbs work and they’re very useful. Whoever invents or manufactures light bulbs must genuinely know something, since they do work. People don't go around claiming that light bulbs are impossible; they aren't controversial; while there must have been skeptics about whether light bulbs would work before they were invented, the issue has now been settled.

Philosophy is pretty fractured with varied opinions. Even when many people appear to agree on something, like induction or justified, true belief (JTB), it’s often not really much agreement. There are many different views on induction or JTB. Advocates of these ideas continue to debate amongst themselves. They haven’t settled the issues and moved on to more advanced issues. Physicists and chemists have a bunch of ideas which weren’t known 500 years ago but which are pretty uncontroversial today, which they can build on instead of continuing to debate the same old topics.

I believe philosophy is very important today partly because it’s neglected and unsuccessful. There’s lots of room for improvement and more need to improve. Philosophy is more broken than science, so fixing it could provide more benefit.

Scientists use philosophy (“philosophy of science”). If philosophy were improved, then scientists could invent more. Philosophy could also help with political improvements, moral improvements, and people having more rational, effective, unbiased debates.

Scientists use the “scientific method”. How does that work? What thought processes should scientists use and what should they avoid? How can scientists be unbiased? These are philosophical questions. The detailed methods of experiment, like how to use specific tools or chemicals, are scientific matters. But questions like “How do you know when you have enough evidence to reach a conclusion?” are philosophical questions that scientists must deal with. “How can you know anything at all?” is another question that’s relevant to scientists who are trying to gain knowledge.

Scientists all have philosophical ideas. They have opinions on what knowledge is, how it’s acquired, and what makes it reliable, high quality or true. Scientists have opinions on whether we live in an objective reality which actually exists or whether our planet and lives are all one alien’s dream or a simulation on an alien’s computer. If scientists have incorrect ideas about these philosophical issues, then they do a worse job. For example, there are ongoing problems with scientists confusing correlation (which is easier to find and study) with causation. There are widespread confusions about what correlation is, how it does and doesn’t matter, what it means, what it hints at, etc. And correlation is an abstract, philosophical issue about knowledge and how to learn things and seek the truth. It’s covered by philosophy. When scientists study it, they are branching out into doing some philosophy. How correlation works is the kind of thing which has to be (hopefully rationally) debated. You can’t do experiments to figure out the right perspective on correlation.

Sometimes scientists work on philosophy without realizing it. This can go poorly because they don't know what field they're in, don't learn an overview of what's already known about philosophy, and try to use scientific methods that are a poor fit for philosophical jobs. Not having all the biases and misconceptions of typical philosophers could also be a helpful advantage, but so far I haven't seen scientists come in and solve major philosophical problems while mistakenly thinking they're still working within their own field. I have seen scientists write stuff that is over 50% philosophy and then tell me I can't possibly understand it and evaluate it myself because I don't have training as a scientist. They don't realize they're working more in my field than theirs, and they're missing more relevant background knowledge than I am. (I've also studied science and math more than most scientists or mathematicians have studied philosophy. While I'm not an expert, I do know a lot of the basics, know a lot of general overview/summary information, and know some specific details. I also have fairly broad knowledge about science covering many topics; for example, I know more about physics than most nutritionists and more about nutrition than most physicists. Many scientists working on philosophy know much less about it than I know about science; sometimes they know so little they don't even know which topics are part of philosophy rather than science.)

Should we gather evidence to support our hypotheses, and then decide they’re true when we have a big enough pile of evidence? That is one methodology. There are alternatives, like believing that some pieces of evidence matter more than others. Or maybe evidence can’t actually be weighed up to see how good a theory is. Maybe evidence has to be interpreted by theories rather than telling us about theories.

Another methodological issue is p-values (probabilistic confidence values for scientific experiments). People claim that if certain statistics show over a 95% chance that their experimental results aren’t due to random chance, then their results are “significant” and any lesser result (e.g. 93%) is “insignificant”. That is confused and contributes to lowering the overall quality and effectiveness of science.

Worse, an experiment might get 98% confidence and be considered significant. But what assumptions does that result rely on? Taking a step back and thinking about the bigger picture is one of the things philosophers do. Many of the assumptions that scientific results are based on are actually philosophical issues, not earlier scientific theories or experiments.

Experiments with 98% confidence have premises like “There were no relevant factors that we didn’t control for.” How do you know that? Maybe there were. Evaluating that issue requires philosophical debate. Trying to come up with relevant stuff that you didn’t even consider is the kind of task philosophers deal with. Philosophy can be hard and many philosophers aren't very effective, but at least philosophers try to do this kind of thing and they are on average better at it than non-philosophers (people who don't study philosophy and aren't trying to be good at philosophy).

Judging which of two scientific theories is better or should be tentatively believed and acted on (or concluding that we don’t yet know an answer) is a philosophical issue. Deciding whether we need more experiments or we should move on is a philosophical issue. Those involve methods of truth seeking and evaluating ideas.

Philosophers deal with arguments and explanations a lot. Those are part of how we seek the truth and try to understand the world. And being good at those techniques is more clearly necessary for philosophical questions (where it's common to make no progress at all without good arguments and explanations) than for scientific or cultural questions (where people figure some stuff out despite being pretty poor at explanation and argument).

What is an argument? What categories of argument are important? E.g. there are positive arguments (supporting ideas) and negative arguments (criticism). There are decisive arguments and indecisive arguments (partial support or weak criticism). How does an argument differ from an explanation? Do all arguments have to be explanations, but some explanations aren’t arguments? What makes some sentences explain an issue while others don't?

Philosophers also deal with the nature of ideas, e.g. what does it mean for an idea to be fallible or infallible? Tentative or not? Absolute or not? Biased? Meta?

Meta ideas are ideas about other ideas. Philosophers deal with meta ideas a lot, while other fields don’t use them as much. Meta ideas let you take an idea, take a step back, and analyze it or talk about it. They’re crucial for not getting too caught up in details and instead seeing the bigger picture, as philosophers strive to do.

Philosophers don’t look at only the big picture. They’re also somewhat known for detailed debates over the meanings of words and sentences. They can try to get details right so they can be precise. Philosophers focus on conceptual and linguistic details (language is one of the main tools of philosophy). Scientists often focus on other types of details such as doing very precise measurements or cleaning laboratory equipment very thoroughly.

Philosophers try to use their big picture view to figure out which details matter and then pay attention to those. They especially try to pay attention to details that matter for figuring out big picture views correctly.

Philosophers often fail. They deal with hard stuff so the failure rate is higher than in most fields. Sometimes they advocate wrong ideas. Sometimes they debate details that aren’t important. Whole philosophical movements or schools of thought have gone wrong (but perhaps so have some whole scientific research projects like string theory).

In general, it’s harder to tell when a philosopher is doing a good job than when someone else is. Philosophical ideas are harder to evaluate. It takes a lot of skill to work with them. But it’s worth trying, even though it’s hard, because they’re important.


Here are some additional thoughts that partly repeat the above:

Philosophers take on the deepest, hardest questions. As we get better at dealing with some set of questions, such as natural science, that area becomes its own field and stops being part of philosophy. Philosophers don’t deal with settled, solved, uncontroversial, easy stuff. They push boundaries. They work on problems that aren’t well defined yet. They work on issues where it’s hard to tell even when you have a correct answer. Many philosophical problems have been debated for millennia or centuries without reaching a clear conclusion.

Are philosophers the deepest, smartest thinkers? No. There’s lots of low quality philosophy work. Also, academia tries to commoditize philosophy into specific specialties where people can make steady progress (or just teach and talk about existing ideas while making no progress) doing certain types of less creative research. Academia tries to get philosophers to fit into some predefined types, which is contrary to the nature of a philosopher. Philosophers should be seekers of truth who explore widely instead of each philosopher staying under a particular lamp post (with some lamp posts having thousands of philosophers crowded under them).

Philosophers do the most big picture and meta thinking. They think about principles and methods. There’s confusion because many scientists try to understand the scientific method, while many philosophers don’t. But it’s a philosophical topic because you can’t measure it. It has to be debated with abstract reasoning. Scientists often spend some time working as untrained philosophers, and sometimes do better than trained philosophers (the philosophy training in school has upsides and downsides rather than being clearly good), which gives philosophy a bad reputation and confuses people about what philosophy is and isn’t.

Many famous philosophers were wrong, bad, ineffective. Studying them won’t teach you to think effectively. Not many philosophers had very useful things to say about the methods of rationality, how to be less biased, how to reach conclusions in debates, etc. But those things are necessary tools for analyzing all the other philosophical questions.

Philosophy is important but humanity hasn’t done very well at it so far. The ancient Greeks did well for their time period, but it’s not clear that we’ve improved much since then. Certainly the progress in science and math has been far, far better.

What do philosophers do?

  • hard problems
  • bigger picture
  • meta
  • methods
  • principles
  • problems not addressed by other fields
  • foundations and fundamentals
  • asking “Why?” more than other people
  • questioning stuff
  • timeless ideas
  • non-parochial ideas
  • questioning intuitions, not being satisfied by intuitions (not just rejecting intuitions either)

Fields of philosophy include epistemology, morality, ontology and political philosophy.

Epistemology is a prerequisite for the others.

Many philosophers try to look at epistemology as vague stuff like “What is the nature of knowledge?” They don’t see philosophy as practical. They aren’t trying to figure out how to be rational, how to be unbiased, how to reach conclusions in debates. You need that stuff to do any kind of philosophy effectively. All fields of philosophy (and all other fields) require some skill at judging ideas. Philosophy requires that skill more because you get fewer practical hints about what works or not because it’s more abstract and deals with questions/goals that are harder to define. You can get by as a barber, dentist or experimental scientist with less knowledge of how to judge ideas than a philosopher needs. Knowledge and skill at rationality helps, particularly when trying to make progress (which many barbers don’t even attempt), but a philosopher is in more trouble without skill at rationality. Or put another way, a barber might get by with mediocre knowledge about judging ideas (not none) while a philosopher needs higher quality knowledge to be effective.


Elliot Temple | Permalink | Messages (0)

Learning Philosophy and Tutoring

If you want to improve at philosophy, it's important to work on it on a regular basis. It generally takes a long time to get big results.

How do you work on it? Read, write about what you read, write about your thoughts, write for sharing with others (e.g. forum posts, essays), do intentional practice sessions, work on underlying skills (e.g. grammar, math, logic, trees). These steps aren't especially complicated or hard to figure out. They're well known.

Many people would improve significantly if they did this for an hour a day for a year. Even 15 minutes per day for a year would lead to progress.

It's important to do all of the major learning actions frequently, not skip some. If you read but never write, it's common to make less progress. If you read and write but never do practice sessions and you consider all prerequisites boring even though you're not great at them, that can be significantly less effective.

Many people have issues with working on something consistently over time. Their life is chaotic. They get distracted by a series of crises. Their goals and motivations change frequently. They get interested in something else and switch. If you want to be an expert at something, you need to have a stable interest over time and keep working on it.

You don't need to be a dedicated expert at anything. You can be a generalist not a specialist. You can learn about a variety of things. Maybe you can learn more than most people about a topic in a month but not try to compete with people who spend many years on that topic.

Some people like to participate in philosophical discussions and online arguments but don't like to study philosophy. They are different activities, which can be related and connected, but don't have to be.

What can I do as an async tutor?

My main goal is to help people with learning philosophy. I can give guidance on productive philosophical activities and review work for errors. I can answer questions. I can converse with people, guide discussions between others, design practice activities, recommend readings, ask questions, and identify weaknesses people can work on. I can help with building up underlying skills that are relevant to philosophy.

People could (for example) read Eli Goldratt books without me, if they wanted to, and make some progress. With my help, they could avoid some mistakes, get some corrections, get some pointers in the right direction, get some questions answered, and get better results faster. I can explain Goldratt's ideas or point out connections between them and other ideas. My goal and role is to make things better and more effective, not to take someone who doesn't want to read Goldratt, or couldn't handle it at all on their own, and change their personality or lifestyle.

I don't make people do philosophy activities. I can't reliably fix motivation or procrastination. I can't choose other people's goals for them. I can give some reasons that philosophical goals are appealing and some candidate goals, which people may find persuasive or not. I can talk about what I like about philosophy. I can give some tips on scheduling and other related issues, e.g. I can recommend getting plenty of sleep. I can suggest trying a variety of self-help books until one works well for you. I can give some thoughts on solving some specific problems in one's life that are getting in the way. But it's really up to each person what they want to do with their lives, what goals they have, what interests them, what they spend time on.

I made my async tutoring self-paced, and procrastination and motivation are mostly excluded as topics I deal with. I don't have great, super-effective solutions there (and I don't think anyone else does either). I know more about philosophy than psychology. And your preferences aren't necessarily problems to be solved. If you prefer knitting or baseball, maybe you should do that.

Not spending time on philosophy (or another goal) can mean a lot of things. Maybe you're really busy. Maybe you don't enjoy it. Maybe it's not the right goal for you. Maybe you're a perfectionist who's paralyzed by fear of making any mistakes so you don't do anything (even though inaction and perfectionism are themselves mistakes). Maybe you're chronically ill, exhausted, or have lots of bad habits. Maybe things will be different later or maybe they won't.

If you think you should like philosophy, but don't, I don't recommend trying to do it due to feeling external pressure from me or other sources. Please don't think I am pressuring you every time I speak positively about philosophy or criticize something. People also speak positively about all sorts of other topics, like playing the piano or cooking healthy food, and they criticize alternatives, and that's OK and shouldn't pressure you. When I write essays, I'm sharing my opinion and hoping my ideas are helpful on a fully voluntary basis; I'm not trying to and don't want to control your life.

I often share ideas which you can figure out how to use in your life, or not use, as you please. You can also have impersonal thoughts and discussions about my ideas, or not, as you choose. Ideas you read in essays are never perfect, never 100% complete, and are somewhat generic or impersonal. Even when essays include action plans, customization for your personal circumstances is important for getting good results.

Even if I'm your async tutor, detailed help with your psychology or motivations is out of scope, and my tutoring is more focused on more impersonal topics like philosophy, English, math, economics, science, etc. I'm willing to make some comments on psychology or self-help as abstract topics but that isn't detailed personal advice and async tutoring isn't therapy or life coaching.

Note: It's OK for students to bring up personal issues including about motivation. If there are relevant problems, I do want to have some idea of what's going on, even if I don't provide a full solution. I'm trying to set expectations, not give students something to worry about; as the paid tutor, staying on topic is my job not students' job.

I can help people in a more personalized way if we do regular calls instead of just async tutoring, but ultimately, as Karl Popper explained, people do their own learning. People have to create their own knowledge in their own heads with their own conjectures and refutations. People have to think for themselves to understand what others say or what knowledge is in a book. Teachers and authors are helpers who can be quite helpful but play a secondary role, not the primary role.


Elliot Temple | Permalink | Messages (0)