Getting Stuck in Discussions; Meta Discussion

This is a followup on my post Rationally Ending Discussions. It also relates to gjm’s reply and Dagon’s reply. gjm said (my paraphrase) that he’s more concerned about people putting ongoing effort into conversations providing low value than about people missing out on important discussions; staying too long is a bigger concern than staying too briefly. Dagon said (my interpretation which I think he’d deny): intellectual norms are not the norm when human beings interact; human interaction follows social norms and it’s weird of you to even bring up rationality outside of special cases.

This post is fairly long and meandering. It's not a direct response to gjm or Dagon, just some thoughts on these general topics. I think it's interesting and relevant but it's not focused and won't provide talking point responses.

First I'll give brief but direct replies:

For gjm: If people are systematically missing out on the best, most important discussions, so those never take place, that is a huge problem for humanity's progress and innovation worldwide. I don't think the people who avoid a lot of low quality discussions, but also avoid crucial error-correcting discussions, are doing error correction more efficiently (meaning with less time and energy) elsewhere. There aren't easy, better alternatives.

Also, there are lots of ways to deal with low value discussions better without being closed to discussion and missing out on great discussions. People can improve at discussion instead of giving up and focusing only on flawed, problematic alternatives that also need improvement. It's possible to identify and end low value discussions quickly and rationally, and to develop and use rational filters before discussion that improve the signal/noise ratio without blocking error correction like typical filtering methods today do.

Also, most of the people avoiding low quality discussion and blaming others don't know how to participate in a high quality discussion themselves, and should practice more and try to fix their own errors instead of complaining about others. If you haven't spent thousands of hours on discussion (and on post mortems, analysis of discussion, trying to improve), why think you're so great and blame everyone else when things don't work out? Even if you'd spent a ton of time discussing you might still be bad at it, but if you haven't put in the time, why be confident that you're discussing well? Even many prestigious intellectuals with multiple books and tons of fans have not actually spent a lot of time having intellectual discussions and trying to get better at discussing rationally (and aren't very good at it). Even if they've done a lot of podcast discussions, that's going to be a lot fewer hours than people can get doing written discussion on forums; verbal discussion enables less precision, and fewer followups over time, than written discussion; and it will also leave them inexperienced at discussing with the many types of people who don't get invited on podcasts.

For Dagon: I think it's reasonable for rationality-oriented forums to exist online, and he was posting on one of them. Like science labs, rationality-oriented forums are the special cases where rationality may be discussed today, or at least they are trying to be even if they sometimes fail. Let's try to make rational spaces work better instead of attacking that goal.


People sometimes stay in discussions they don’t want to have. This is often bad for all parties. They tend to write low quality messages when they don’t want to be there. And they write things which are misleading about their intentions, values, goals, preferences, etc. – e.g. which suggest they will want to continue discussing this a bunch in the future, as a major project, rather than suggesting they have one foot out the door already. When they do leave, it’s often abrupt and surprising for the other person.

Sometimes there are hints the person may be leaving soon but they aren’t explicit statements like “I am losing interest and may leave soon”. Often the guy’s explicit statements deny there’s a problem. Instead, the hints are things like being a bit insulting. If the guy is insulting you, you can infer that maybe he’s less interested in talking to you. Or not. There are alternatives. Maybe he’s just a jerk who isn’t very skilled at calm, truth-seeking discussion, and you can be tolerant and charitable by ignoring the insults. Maybe the rational thing to do is refuse to take a hint. That’s certainly encouraged sometimes in some ways.

Sometimes people think I’m insulting them when I didn’t intend to. What people consider insulting varies by subculture, temperament, etc. A thick skin is a virtue sometimes.

If someone insults you and you say “Is this a sign you want to end the discussion?” often they will deny it. They might even apologize and say they’ll stop insulting you. That kind of statement is often a more effective way to get an apology than directly asking for an apology is.

Why do people deny wanting to end the discussion? One reason is they think they should discuss or debate their ideas. Another reason is they think you’re asking “Do you give up?” They want to defend certain ideas, not give up on defending those ideas. They want to be able to claim that their side can stand up to debate, not concede. So when they end the discussion, they want to blame you or external circumstances, so it has nothing to do with their arguments failing. If they stop discussing because they’re tilted and they insulted you, it’s like admitting they don’t have reasonable arguments. So they want to reply like “No, I really do have good arguments; let me do a rewrite.”

The same sort of thing happens with milder stuff than insults, in which case it’s even harder to deal with and it’s less clear what’s going on.

Insults are one of the ways people communicate that they think you’re being dumb. They see your position and reasoning as low quality. Why, then, are they engaging? It’s usually not some sort of rational, intellectual tolerance, nor a belief that the world is inadequate so high quality stuff more or less doesn’t exist (so there’s no better discussion available). It’s often that they want to change and control you – someone is wrong on the internet and they can’t just let him stay wrong! Or they want a victory for themselves. Or they want to embarrass and attack your tribe (the group of people with similar beliefs that they see you as representing). Or if you have obscure, non-standard or unique beliefs, often they want to suppress the heresy. How can you believe something so ridiculous or sinful? That seems maybe fixable in debate if anything is. If you can’t even correct a guy who believes something that’s flat-Earth level dumb (or pro-adultery level immoral), can you ever expect to correct anyone on anything? And one might think that correcting people with unpopular views should be especially easy (but actually people usually hold unpopular views for a reason, whereas if they hold a popular view they might never have thought about it much and not have a strong or informed opinion). If reason can work, surely this is one of the cases where it’ll work? (Nah, it’s actually easier to persuade people more similar to yourself who are already pretty good at reason. You could see them as more challenging opponents but persuasion isn’t just a battle; they have to think and learn something from what you said in order to substantively change their mind. Persuasion is partly about their skill and their willingness to be reasonable voluntarily.)

The easiest people to persuade in general have a bit less skill and knowledge than you, but not a lot less. So you can correct them on some stuff but they actually understand you pretty well and they know some stuff about learning and being reasonable. Amount of skill (or knowledge) isn’t a single dimension though so it’s more complicated (there exist many different skills).

Someone more skilled than you, even a lot more, can be easy to persuade if you’re right – if you actually know something they don’t and have some decent reasons (not just blind luck; you can explain some reasoning that makes sense). But that’s uncommon. Usually when you try to debate them you’ll be wrong. They already thought of more stuff than you did. But if you’re right, it can be easy, because they’ll help argue your side and fill in gaps for you. And they may quickly recognize you’re saying something they haven’t addressed before and take an interest in it, rather than seeing it as “the opposing tribe”.

There’s no great way to know who is more or less skilled than you. We can approximate that some. We can guess. A lot of guesses are really just using social status as a proxy. Some ways of guessing are better, e.g. if I read an intellectual book and it really impresses me, then I may guess the author is high skill: probably at least near my own skill, or else above me, not way below me. Part of what happens in debate is you can find out who is higher skill and more knowledgeable. It’s a bit of a side issue (the main issue is evaluating specific ideas) but people do gain some visibility into it (though they often find it rude for the higher skilled person to say anything about it, and also in acrimonious debates usually the people both think they’re higher skill than the other person).

In some ways it’s not important to evaluate people’s skill levels, but it’s not useless and it can help with mentally modeling the world. Imagine if a young child didn’t realize he was lower skill than the adults he was debating, when he really was way lower skill. That arrogance could make learning harder. It can be hard to do everything in the maximally rational way of just evaluating every idea on its merits. Realizing you’re outmatched and should try listening, even when you don’t fully understand why you’re wrong about everything (and may not be wrong in every case) can help too.


People can exit discussions more easily when the stakes are lower, when they have a good reason/excuse, when the guy they are debating agrees with their reason/excuse for leaving, when their ego isn’t on the line, or when they won’t be assumed to have lost the debate because they left.

It’s hard to exit discussions when you’re trying to prove you’re open minded or open to debate. We all know (at least in communities with some interest in rationality), in some sense, that we’re supposed to be willing to argue our points instead of just assert claims and then ignore dissent.

Having low stakes, no debate methodology, no claim that debate matters to rationality, etc., clashes with the goal of transparency and with anti-bias procedures. Cheaply exiting discussions without explanation makes it easy for bias to ruin most of most discussions. Whenever something important comes up that relates to an anti-rational meme or bias, people can just take their free option to end the discussion with zero accountability. It’s unlimited, consequence-free evasion.

How do you control evasion, rationalization, bias, dodging questions, etc?

Similarly, when people decide the other guy is an idiot, or the discussion is low quality or unproductive due to the other guy, then approximately half the time they are wrong. It’s important that there be some error correction mechanisms: when you blame the other guy, how would you find out if actually you’re wrong? If you tried to construct a length 5 impasse chain, it’d often reveal the truth: it’d become clear to reasonable people whether you were right or wrong (sometimes this will even work for you even though you’re biased: what actually happened in the debate can be so clear it overcomes your rationalizations when you try to actually write it out).

Standard discussion procedure largely avoids meta discussion. If someone says something I regard as low quality, some of my main options are:

  1. Don’t reply. Don’t explain why to the other guy or to the audience.
  2. Ignore the low quality part and try to reply to whatever I think has value. Often this means ignoring most of what they said and focusing on the topic itself. This often results in complaints from people who don’t think you’re engaging with them … which makes sense because you’re intentionally not engaging with some of what they said.
  3. Steel man it by charitably interpreting them as meaning something different than what they said. Try to guess a good idea similar to their bad idea and engage with that. But sometimes you can't think of any good ideas similar to the (from your point of view) confused nonsense they just said... And sometimes they don't mean or understand a better idea than what they said or they would have said something better.

What’s not socially permitted in general is:

  1. Explain why I think their message is low quality. This would invite a correction or an end to the discussion.

The first difficulty is that the other guy will get defensive and the audience will read it as socially aggressive. You’re not allowed to openly talk about most discussion problems and try to address them head on. You’re typically supposed to sort of pretend everyone is good at discussion and doing nothing wrong, and discussions fade out blamelessly because people have differing interests and because productive discussion is hard and doesn’t always happen even when people make good tries at it.

For certain types of problems, you’re not supposed to attempt cooperative problem-solving. You’re allowed to assume and guess about what’s going on and make adjustments unilaterally (some of which are wrong and make things worse, which often would have been easy to figure out if you’d communicated).

Continuing to speak generally not personally: If I try to talk about a discussion problem, and the guy responds defensively and fights back, what happens next? Will I learn from his negative comments? No. I will see it as flaming or shoddy argument. I won’t find out I was wrong. This happens often. Even if he gave a great rebuttal, the typical result is I’d still be biased and not get corrected. There’s nothing here that causes me to overcome my bias. I had a negative viewpoint, then I stated a problem, and he responded, and then why would I learn I’m wrong? What is going to make that amazing result happen? It’s a harder thing to be corrected about than a regular topic. Maybe. Is it? If I made a specific accusation and he gave a specific, short, clear response which engaged with those specific words, that’s fairly easy to be corrected by. Not easy but easy relative to most stuff. But more ego is involved. Maybe. People are really tribalist about most of the stuff they care enough about to talk about. If they don’t have some sort of emotional investment or bias or whatever – if they don’t care – then they tend not to talk about it on forums (they’ll talk about the weather in small talk without really caring though).

Do people care about things without being biased? Do they have tentative positions which they think are correct but they’d like to upgrade with better views? Not usually.


What do you do in a world where the dominant problem is people staying in discussions they don't want to be in, sabotaging the hell out of them? Where the concept of actually adding anti-bias procedures is a pipe dream that'll threaten and pressure people into even more bad behavior? What a nightmare if straightforward rationality stuff actually makes things worse. What can be done?

This is a rationality is a harsh mistress type problem. (In my opinion, The Moon Is a Harsh Mistress by Robert Heinlein is a good book, though it's not relevant to this essay other than me borrowing words from the title.) People find rationality itself pressuring. Rationality is demanding and people self-pressure to try to live up to it. They can experience discussion norms about rational methods as escalating that pressure.

And how will they respond to analysis like this? They’ll find it pressuring, condescending, wrong, hostile, etc. Or they might grant it applies to most (other) people. But they generally won’t face head on the situation they are actually in and talk about how to handle it and what could work for them. That’d admit too much.

So … don’t talk to almost everyone? But that lets fakers get to the top of the intellectual world since they aren’t really held to any standards and aren’t accountable in any way. But if you publicly broadcast standards, as I do, it pressures people; it’s seen as a challenge to them (it basically is a challenge to the public intellectuals).

Most people on Less Wrong (LW) don’t take themselves seriously or think they matter, but also won’t admit that. They don't expect to come up with any great intellectual innovations and don't think their forum discussions are important to humanity's intellectual progress. It’s hard to ask for the people who think they matter to come forward and chat. People will pretend they are in that category who aren’t.

One of the things you can do is speak differently at different places. I’ve tried posting meta discussion at my own forum but not on another forum like LW. This doesn’t communicate about discussion problems with the people you’re discussing with, so it mostly isn’t a solution. But at least some people – those who care more – can discuss the discussion. It also has the risk that someone from the forum where you’re more guarded finds your less guarded comments and gets mad. But most people don’t look around enough to find out. There’s a self-selection issue: people who find the less guarded comments aren’t a random sample. They’re people who are more willing to explore, and who are therefore more likely to take the comments well.


Eliezer Yudkowsky doesn’t like to have discussions with his own community. He doesn’t post on Less Wrong anymore. My experience is they aren’t much like him (where “him” = his writing, which has a variety of good stuff, but I think he’s actually worse than that in discussion; people rarely live up to their best work on an ongoing basis). His fans mostly don’t seem to understand or remember a bunch of his published ideas. Plus they’re generally pretty flawed. Not entirely bad though.

One of the hard parts with LW is people read random bits of my posts. I posted a bunch of related stuff, mostly in sequence, and then people come in the middle and don’t understand what’s going on.

I can’t explain everything at once in the first post and also no one seems to be following along or be interested in a sequence of posts that builds up to something bigger. And they are pretty openly willing to skim and complain about post length when something is 4000 words. Saying “this is part 3 of 6 in a series” and linking the other stuff doesn’t help much. Most people just ignore that and won’t go visit part 1. Even if they only wanted to read one, I’d rather they go read part 1 not the new part they just saw, but they usually won’t. Most people have strong recency bias and a strong bias against clicking links.


It’s hard to signal the right things to connect with the right people when people are putting up a bunch of fake signals.

The people staying in discussions they don’t want to be in are communicating false information and causing trouble. This is a major problem that makes it hard for the people who want rational discussion to find each other. Instead of viewing them as victims of social pressure (which they are), you can view them as trolls who are lying and sabotaging (which they also are).

When I signal (or very explicitly state) what I want, a bunch of people join the discussion who don’t want it. They don’t admit they don’t want it. They make it hard to figure out who actually wants it because they’re all pretending to want it.

What can be done about this false signaling problem? I’m pretty good at spotting false signalers. I can sometimes tell quickly now. That used to be way harder for me but I know more signs now. And I can point them out explicitly and do analysis instead of just asserting it. But the analysis sometimes involves a bunch of prerequisites and advanced logic so other people don't follow it. I could also explain the prerequisites but then it’s a big, long learning process that could take years.

But what do I do when I spot fakers? Telling them to go away tends to offend people and get requests for reasoning that they will find insulting and unwanted. They aren’t really open to hearing objective analysis of their flaws. And this can also get multiple offended people to gang up on me. Doing the critical analysis and reasoning without telling them to go away gets a similar result; they don’t want criticism.

I can ignore the fakers but then they’ll imply that I don’t want discussion since I’m ignoring lots of people without explaining. That’s one of the issues. There are social pressures to reply. People make assumptions if you don’t. Willingness to defend your ideas in debate is already judged conventionally; that’s not a new thing I made up.

I’m not socially allowed to just ignore over 90% of people that reply to me on forums because I don't think their claims to be interested in discussion are genuine. And I’m not socially allowed to say I don’t believe them; that’s offensive. And I’m not socially allowed to explain why I don’t believe them and criticize their integrity. And I don’t know how to create productive discussion with them. And I don’t know how to explain that I’m looking for a small minority of people on their forum and get the wrong people to actually stop pretending they qualify.

I do know some stuff about how to proceed properly in discussion. I could pretend the person wants a real discussion and do what I’d do in that case. The result is catching them out and showing some example of what they’re doing wrong, since they never discuss right. But then they just stop talking or muddy the waters. No one learns any lessons. I’ve done it many times. Maybe some of my fans learn something since they’ve seen it a bunch and now it informs their understanding of what the world is like and what forums are like (or maybe they just cargo cult some of my behaviors and treat people badly while thinking they’re being rational). But the people at the forum I’m visiting, or the people new to my forum, don’t learn about general trends from examples like that. Because they don’t want to actually discuss the trends.

People’s hostility to meta discussion makes rational discussion pretty impossible. That’s the key.

The general pattern of what ruins everything is:

  1. Problem. This is OK so far. Problems are inevitable.
  2. Something to suppress problem solving.

Part 2 is called "irrationality". Working against error correction or problem solving is what irrationality is.

And suppressing meta discussion means suppressing problem solving. Discussions run into problems but you aren’t allowed to talk about them and try to solve them because then you’re discussing the discussion, discussing the behavior of the participants, changing the topic (people wanted to talk about the original, object topic, not this new topic), etc. People are interested in talking about minimum wage or global warming, not about whether a particular paragraph they posted fits some particular aspects of rationality or logic or not. People generally don't want to discuss whether their writing is unclear and that is symptomatic of a pattern where their subconscious writing automatizations aren’t good enough to productively deal with the advanced topic they want to talk about.

If you try to do X (any activity or project including a discussion), and then you run into a problem, and then you talk about that problem, that is meta discussion. You’re talking about the activity and how to do the activity and that sort of thing, rather than doing the activity. How to do X is a meta level above X. People do put up with that sometimes. Mostly in learning contexts. If you go to a cooking class you’re allowed to talk about how to cook. But if you’re just cooking with your friend, commonly you’re supposed to assume you both already know how to cook and don’t talk about how to do it, just do it.

Some stuff has problem solving built in. If you’re playing a video game, talking about strategies (limited parts of how to play) may be considered part of the game activity. If you go to an escape room, talking about how to solve the puzzles is normal.

What people object to is one meta level above whatever they expect or want. Which is often exactly what you need for problem solving. Whatever the highest level of abstraction or meta that they are OK with, if you have a problem at that level, then talking about that problem and trying to solve it is one meta level too far.

If the goal is to learn cooking, then a problem at that level is a learning problem (not a cooking problem, which is a lower level). And talking about learning problems would be viewed as off topic, out of bounds, etc. So you can’t solve the learning problem.

In general, people can only learn one meta level below what they are willing to deal with. If you’re willing to talk about learning to cook, then you can learn to cook (one meta level lower) but you can’t learn to learn to cook (same level you’re willing to deal with). Learning about X requires going one meta level above/past X so you can talk about X and talk about problems regarding X.

But it’s actually harder than that. That’s something of a best case scenario. Sometimes your meta discussion has problems and needs a higher level of meta discussion.

With cooking, some people are willing to talk about learning to cook while trying to cook. They’re open to two different levels at once. But typical philosophy discussion doesn’t offer two levels to enable learning about the lower one because people aren’t openly trying to learn. They’ll try to talk about AGI or free will or atheism and that’s the one single level the whole discussion is supposed to take place at. Just do it. You aren’t discussing how to do anything, or directly trying to learn, so you don’t learn. People will set out to learn to cook but it’s uncommon to see anyone on a philosophy forum trying to learn. You can find people on Reddit (mostly students who are taking philosophy classes at university, but some hobbyists too) asking for reading recommendations to learn from, but they don’t normally actually try to learn in online discussion. On some subreddits (like AskHistorians or AskPhilosophy) people ask questions and try to learn from the answers without having discussions. People tend to try to do their learning by themselves with some books and maybe lectures and then when they talk to other people (in a back-and-forth discussion not just asking a question or two) they’re trying to be philosophers who say wise things.

And people will claim that of course learning from debate or saying ideas is one of the goals; but it usually isn’t really, not in a serious way; their focus is on being clever and saying things they think are right and they aren’t talking about actually learning to do a skill in the way people will try to learn to cook. They’re always saying like “I think X is true because Y” not “I need to figure out how to analyze this. What are some good techniques I could use? I better double check my books to make sure I do those techniques correctly.” Whereas with cooking people will ask how to cook something, and maybe double check some information source to remind themselves how to use a tool, which is more learning oriented than philosophy questions people ask like “What is a good answer to [complex, hard issue]?” When they ask for ready-made answers to tough topics, they aren’t learning to create those answers themselves; they aren’t learning all the steps to invent such an answer, compare it to other answers, and reach a good conclusion. With cooking, people often ask for enough information that they can do it themselves, so it’s more connected to actually learning something.


Elliot Temple | Permalink | Messages (0)

Prices, Decision Factors and Time Will Run Back

Prices (or exchange ratios between goods) are not primaries. They are derived from something else. They come second. They are implied by other information. You can’t just go out and directly discover them; they aren’t inherent in reality; they aren’t raw evidence; you find something else and then calculate prices from it.

The value of each good is a factor in a different dimension. There are no general case conversions between dimensions.

Many individuals make many trades. How is any one trade done? By people judging “subjectively” whether they prefer A to B, and if so they will trade B for A, and if not they won’t. People do critical thinking about which thing will be more valuable to them. Prices are implied by these many binary decisions.

Sometimes people trade using amounts, e.g. 10 pounds of rice. There’s still ultimately a binary decision about whether to accept a particular trade or not. But people also judge, in a narrow context, how much of something they value against how much of something else. It’s a direct comparison normally between only and exactly two factors from two dimensions. When people judge how much they value rice against wheat – what quantity of each they see as equal – they aren’t thinking about corn. It’s not a many-factor decision.

There are two main ways they compare quantities. They either think about what problems stuff will solve in their own lives or they think about using it for future trade, which is ultimately based in solving problems in their lives but more indirectly.

Money doesn’t change the principles.
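As a toy illustration of that point, here's a minimal Python sketch (my illustration, not from the post or the book) of how trades come from two binary preference decisions rather than from any pre-existing price. The goods and numeric valuations are made up; the numbers encode only each person's own contextual preference ordering, not any objective unit of value.

```python
# Minimal sketch: a trade is two binary decisions, not a price lookup.
# Each person's "value" function encodes only their own contextual
# preference ordering; the specific numbers are arbitrary illustrations.

def prefers(person, get, give):
    """True if this person, in their context, values `get` over `give`."""
    return person["value"](get) > person["value"](give)

def trade_occurs(peter, paul, peter_gives, paul_gives):
    # The trade happens only if each party prefers what the other holds.
    return (prefers(peter, paul_gives, peter_gives)
            and prefers(paul, peter_gives, paul_gives))

# Peter holds potatoes, Paul holds prunes (as in the Hazlitt example).
peter = {"value": lambda good: {"potatoes": 1, "prunes": 3}[good]}
paul = {"value": lambda good: {"potatoes": 3, "prunes": 1}[good]}

print(trade_occurs(peter, paul, "potatoes", "prunes"))  # True: both gain

# If both parties valued the goods identically, no trade would occur.
same = {"value": lambda good: 2}
print(trade_occurs(same, same, "potatoes", "prunes"))  # False
```

If you tabulated many accepted and rejected trades like these, exchange ratios would be implied by the decisions; the decisions never consult a ratio.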

Anyway, I was rereading Time Will Run Back by Henry Hazlitt (which I like and recommend) and thinking about the connections between the coupon market with its exchange ratios (from the book) and decision making factors in different dimensions (from Critical Fallibilism (CF)). Here are some book quotes:

There is no inherent exchange ratio between bread coupons and cigarette coupons, or between bread and cigarettes.

The gain from the exchange occurs in each case not because of some inherent difference in the relative objective value of the goods themselves, but because each party to the exchange more fully meets his own desires by making it.

“Marx’s labor theory of value was wrong [...] among other reasons, because it rested on the assumption that values were measured by some objective unit, whereas values are only measured subjectively. The value of a commodity doesn’t reside in the commodity; it resides in a relationship between somebody’s needs or desires and the capacity of that commodity to satisfy those needs or desires.... Marx looked for some objective standard of value because he assumed that two commodities that exchanged for each other must do so because of some ‘equality’ between them. But if two commodities were exactly equal, in the opinion of two persons, each of whom held one of them, there would be no reason for any exchange to take place at all. It is only because Peter, who holds potatoes, thinks that a certain amount of prunes, held by Paul, would be more valuable to him, that Peter would want to make an exchange. And only if Paul placed the opposite relative value on a given amount of potatoes and prunes would he agree to make the exchange.”

Similarly, there are no objective exchange ratios between factors in different dimensions. There are no single right answers for conversion factors (as there are between miles and meters, which are factors in the same dimension, length). Instead, the comparative values of dimensions are contextual – what Austrian economists call “subjective” – they vary by the relationship between the factor and a person’s needs or desires (which changes over time even for the same person, as his situation, goals, resources, etc. change).


Ludwig von Mises says socialists can’t do economic calculation (without cheating). This is a claim that unit conversions between factors of production can only come from the free market, not from central planners. Why? Because they are “subjective” – they aren’t inherent in the goods themselves, but are instead related in complex ways to individual contexts. So a central planner can’t calculate them because they aren't a property of the good itself, or of the manufacturing process, or of the statistics on raw materials mined this year, or of any combination of such things.

The way socialists cheat is to use prices from a market as approximations. They can get prices from memory, from looking at another country, or by allowing some trade in their socialist country.

In Time Will Run Back, the plot is designed so there are no other non-socialist countries and the characters can’t remember pre-socialist prices (and conditions changed a lot since then anyway). It’s been 100 years of worldwide socialism, and they destroyed books with capitalist ideas, and they made everyone change to a new language and stop using all prior languages, and they’ve been using the gulag system on ~15% of the population (that’s at present – probably more in the past). The people who remember capitalism are more likely to end up in labor camps (or, to avoid that fate, to never speak of it).

So they wonder how to calculate the total value of all their production for a year, and how to compare it to alternative production plans. Another book quote:

“But, chief, how can you possibly have such a figure? What is 200,000,000 pairs of shoes added to 1,000,000,000 bushels of wheat added to 1,000,000 quarts of gin? It’s 1,201,000,000—of what? You can only add things of precisely the same kind—otherwise the total is meaningless.”

Shoes, wheat and gin are factors in different dimensions. You can’t add them to get a total. This logic matches CF. This matters to planning:

Suppose we increased the production of shoes from 200,000,000 to 250,000,000 pairs a year, but only at the cost of reducing the production of wheat from 1,000,000,000 to 800,000,000 bushels a year. Would we be better off or worse off?

Without prices (exchange ratios between goods, unit conversions between dimensions), they can’t compare goods (or services).

A person can judge, for himself, if he’d rather have one pair of shoes or four bushels of wheat. But a central planner can’t judge which is better for the country.

This is really unintuitive to many people. They’re used to having prices. They don’t understand that socialism would actually lack prices and be unable to figure out basic issues like these. It's hard to envision a situation where you don't already have prices, and then to understand that you couldn't just figure them out easily, even though they're so familiar to us.

Similarly, one of the typical attempted answers to CF's multi-factor decision making claims is: just convert everything into dollars to get all the factors into the same dimension. In the world today, dollars are the best tool we have for getting many different factors into a common measure of value. People assume that we can do for all factors what we do with dollars. But we only do it for dollars using a market. You can’t do it without a market. And, further, people typically don’t understand how it’s done with dollars, what actions are required, what limitations there are, etc.

Market prices fluctuate. That’s totally unlike meter-to-mile unit conversions. Why? The prices aren't inherent in the goods. They’re “subjective” (contextual). They fluctuate with supply and demand, which fluctuate both with the conditions of production (how much of what is being made) and also the preferences of traders.

So you can’t just find out the market price and be done. You need to keep the market around to find out about price changes. If you get rid of the market, your prices will get more and more out of date as time passes.

Markets aren't perfect by the way. They don't find perfect prices. They're just a lot better at figuring out prices than any known alternatives, especially for a large society or for trade between strangers.

Market prices are derived. You don’t just figure out the market price because it’s not a primary. It comes out of many individual decisions to make or decline particular trades. People often try to approach decision making by coming up with weights first, but that’s not how markets work. In markets, weights (how much each factor is worth) are implied by what actually comes first, which is individual decision making about specific trades. Weights (prices) are an aggregate that we see second.

The market is our best mechanism for dealing with factors in many dimensions in a unified way, and people typically don’t understand the market which leads to confusions about decision making too. Asking people to make up decision making weights is like asking a central planner to make up prices.

The fundamental thing is critical thinking. Would this trade benefit me right now in my specific context? I try to think of problems with it and problems with refusing it. I try to think of my goals and how different goods can be used to achieve my goals.

Austrian economics doesn’t know specifically how people make their individual decisions. It’s not an epistemology. That’s OK. It just specifies that the decisions are made based on people’s preferences, not based on inherent traits of the things traded (those traits are still relevant though), and that different people have different preferences, and that people’s preferences change over time and change as their context changes.

A good (or trade) is a bundle of many factors. Austrian economics doesn’t expect anyone to evaluate each of the many factors, weight them and sum them. It just expects them to choose between bundles. Do you prefer this shirt or that sack of rice? There are many factors by which we might evaluate shirts or rice, and you may take some into account, but you don’t have to do weighted factor math, and aren’t expected or assumed to. Instead, you think about things like whether your old shirt can be mended and whether you have enough food. Those are factors but they don’t aim at anything like comprehensiveness (you can’t ever really be comprehensive). Instead, they are just a few key factors related to your problems and preferences.

People’s assumption that decision makers can assign weights to a bunch of factors is the same basic error as socialist central planners assigning weights to goods to enable them to calculate tradeoffs between different production plans. There’s too much complexity. Instead, people tend to make choices like “I prefer A to B” and if you get enough choices like that organized by a market then you can derive market prices. And then when you get used to market prices, you start thinking in terms of prices (factor weights), and viewing them as normal. But they’re derived and they aren’t available in general, only via markets. They are summaries of market trading.

Market prices don’t tell individuals what to prefer. You can’t/shouldn’t use market prices to know what trades to make. They don’t dictate how much you should value something. You look at market prices to see what trades are available. They tell you your options. You then trade based on differences between your preferences and the market price options. To benefit from trade, you must have some preferences that are independent of market prices, which you then compare to market prices to find differences, and then you trade based on those differences. E.g. you currently value shirts (relative to dollars or potatoes) more than the market does, so you can beneficially trade for shirts. (And if you value shirts less, you could beneficially trade them away to get something else (sell them). Positive and negative differences both enable beneficial trading.)

If your preferences exactly matched market prices, then trade would be neutral for you, not beneficial. Market prices are an aggregate from many people’s preferences which differ from most/all people’s preferences. They’re like an average or blending.

Market prices don’t guide individual preferences. They’re derived from preferences and only useful because they differ from people's preferences. Market prices instead help guide production and trade. They alert people to what their fellow men value highly, so that people can offer it for trade (if they already have it) or produce it for trade.

People commonly don’t understand dollar prices and how those relate to exchange ratios between all the goods. They don’t think about the wheat-rice exchange ratio or “price”, and the shirt-rice price, the potato-rice price, and so on. They only think of prices in dollars. They convert everything to dollars (like generic goodness points in decision making) instead of converting directly between dimensions. To compare shirts and rice, they tend to compare the dollar value of each. They may not even be used to thinking of which they prefer between the two. But they might wonder which of two expensive items they’d rather have, realizing they can’t afford to buy both. They’ll also directly compare two couches, realizing they’ll only buy one (for space, price and/or usefulness reasons).
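Dollar prices already imply exchange ratios between every pair of goods, even though people rarely think of them that way. Here's a minimal sketch, using made-up prices purely for illustration (an idealized market with consistent prices):

```python
# Hypothetical dollar prices, invented for this example.
dollar_prices = {"shirt": 20.0, "rice_lb": 2.0, "potato_lb": 1.0}

def exchange_ratio(good_a, good_b):
    """How many units of good_b one unit of good_a trades for,
    derived from dollar prices (assumes consistent, idealized prices)."""
    return dollar_prices[good_a] / dollar_prices[good_b]

print(exchange_ratio("shirt", "rice_lb"))  # → 10.0 (1 shirt trades for 10 lb of rice)
```

The point of the sketch: the shirt-rice "price" was there all along, derivable from the dollar prices, but people usually only ever look at the dollar numbers.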

Instead of pricing everything in dollars, you could price everything in rice. In Time Will Run Back, after a while the market prices everything in terms of cigarettes.

Mathematically, it doesn’t matter what you price things in terms of. Practically, it matters to use something non-perishable, small, light, divisible (for making change), easy to measure and hard to counterfeit – something practical for trade.

Why does the math not care what prices are stated in terms of? Because markets work to make the exchange ratios between all the dimensions consistent, and when we do math we tend to idealize the market and assume consistent prices for all goods. How do markets approach pricing consistency? Arbitrage. When prices are inconsistent, people can trade profitably and this trading makes the prices more harmonious.

If 3 rice is worth a shirt, 2 potatoes is worth a shirt, and 1 rice is worth 1 potato, then prices are inconsistent. You can trade 2 potatoes for a shirt, which you trade for 3 rice, which you trade for 3 potatoes. You gained a potato for nothing. Your trades are also asymmetric: you’re adding demand to trade potatoes for shirts, but not shirts for potatoes. You’re also adding demand to trade shirts for rice and rice for potatoes, but not vice versa. Because you only take one side of each trade, you help move the price. The more you offer potatoes for shirts, but not shirts for potatoes, the more you’ll raise the price of shirts in terms of potatoes. You can keep doing this until the profit from your trades approaches or reaches zero.

After a while, you have to trade 2.33 potatoes to get a shirt, and the shirt only gets you 2.66 rice, and the 2.66 rice only trades for 2.33 potatoes. And so at that point, you trade in a circle, ending up with what you started with (probably slightly less due to transaction costs). The prices are now harmonious. There’s no more arbitrage opportunity.
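The arbitrage check above is just multiplication around a trading cycle. Here's a small sketch using the made-up ratios from the example; a cycle product of 1 means harmonious prices, anything above 1 is free profit:

```python
# Exchange ratios from the example: a shirt costs 2 potatoes,
# a shirt buys 3 rice, and rice trades 1:1 for potatoes.
ratios = {
    ("potato", "shirt"): 1 / 2,  # 2 potatoes buy 1 shirt
    ("shirt", "rice"): 3,        # 1 shirt buys 3 rice
    ("rice", "potato"): 1,       # 1 rice buys 1 potato
}

def cycle_multiplier(path):
    """Multiply exchange ratios along a trading cycle.
    1.0 means consistent prices; above 1.0 means arbitrage profit."""
    m = 1.0
    for a, b in zip(path, path[1:]):
        m *= ratios[(a, b)]
    return m

print(cycle_multiplier(["potato", "shirt", "rice", "potato"]))  # → 1.5
```

A multiplier of 1.5 means each loop turns 2 potatoes into 3: the 50% gain from the example. As arbitrageurs move the prices, the multiplier falls toward 1 and the opportunity disappears.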

Since the prices now all match each other mathematically, you could state the prices in terms of any good, and nothing would be inconsistent with that. In some sense, money is basically just whichever good is most popular for trading. Money is used as a store of value (you can sell now and buy later, instead of only trading away something now to get something now) and for indirect trade. Indirect trade means I want a shirt but I can trade with someone who isn’t offering a shirt. I get a shirt through two steps. I trade potatoes for money, then I trade money for a shirt. Without money, indirect trade can theoretically be done but it's likely to involve more steps and more hassle. Money caps trading at a maximum of two steps, which is actually more convenient than one step because I can buy and sell as separate trades instead of having to buy from the same person I sell to. I can sell my eggs to one guy then buy a chair from someone else, instead of having to find one guy trading a chair for eggs.

I'll close with one more Time Will Run Back quote related to CF's Multi-Factor Decision Making Math:

“We have already decided [...] that we are working completely in the dark. You simply can’t add things that are unlike each other. Or subtract, or multiply, or divide them, or even, in any meaningful quantitative way, compare them. You can’t add pigs to pears, or subtract houses from horses, or multiply tractors by toothbrushes.”

Time Will Run Back is available for free as an ebook.


Elliot Temple | Permalink | Messages (0)

"No" Evidence

Many people's favorite argument lately is to assert that something has "no evidence" for it. In general, there is lots of evidence, which they don't count for various reasons they usually don't explain. The kind of person to say this doesn't usually make a list of all the evidence and then go point by point and say why each piece of evidence counts as "no" evidence.

What do they dismiss? Generally anything that's merely similar but not 100% direct proof. They want the exact thing in question to have been directly tested. For example, there could be dozens of studies showing that masks work with dozens of different viruses. But then when a new virus, COVID, comes along, some people will say there is "no evidence that masks work for COVID".

They should accept that the evidence about masks and viruses applies to COVID because COVID is a virus. Instead, they focus on the lack of studies specifically testing masks for COVID.

And evidence is broadly overrated anyway. Lots of issues should be considered with conceptual thinking, logical reasoning, explanations, and critical arguments. Evidence is most useful when you have multiple ideas which contradict each other that you don't know how to choose between. If you fail to resolve the issue using many other more convenient tools, then you look for differentiating evidence. Before that point, you don't know which evidence to look for.

The concept of looking for evidence for (or against) one idea is wrong. The only way that can work is if you find evidence directly contradicting the idea, in which case you can rule the idea out. Other than that, searching for evidence about one idea doesn't make sense. Which evidence should you look for? Why that? There are no good answers.

Instead, it makes more sense to look for evidence when considering two or more ideas. Which evidence do you look for? Look at issues where the ideas make different predictions about the same thing. Then observe and see which idea (or neither) was right. They can't both be right when they make different claims about the same thing.

Having two or more ideas tells you where to focus: you can specifically seek evidence that'll differentiate between those ideas. You figure out what they disagree about, which is related to the physical world, and then go look at or test the world to get a result. Then, since the ideas disagreed, no matter what result you get, at least one of them is wrong, so you can rule something out. There's still no such thing as positive evidence supporting ideas. All you can have is evidence that contradicts ideas or does not contradict them. Non-contradiction is also called consistency, but that isn't support or justification.

The idea of looking for evidence to differentiate between two or more ideas, rather than to support or evaluate one idea by itself, comes from Karl Popper. So does the idea that evidence can refute ideas but can't support them.


Say, as a simple example, that I'm thinking of an integer from 1 to 10. John thinks it's 7. Alice thinks it's 4. They can't both be right. If I reveal my number, at least one of their predictions will be refuted.

Say they both claim to have telepathy and that my number is 4. That is negative evidence about John's telepathy. He was wrong. Is it positive evidence about Alice's telepathy? No. This particular piece of evidence doesn't refute Alice being telepathic, but it doesn't positively support or justify it either.
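The asymmetry between refutation and mere consistency can be stated as a toy sketch:

```python
# Toy illustration of Popper's asymmetry: evidence can refute a
# prediction, but a matching prediction is merely not-refuted.
secret = 4
predictions = {"John": 7, "Alice": 4}

results = {}
for name, guess in predictions.items():
    if guess != secret:
        results[name] = "refuted"  # contradicted by the evidence
    else:
        results[name] = "not refuted"  # consistent, but not thereby supported

print(results)  # → {'John': 'refuted', 'Alice': 'not refuted'}
```

Note the code has no "supported" outcome at all; consistency is the best any idea can get from a single piece of evidence.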


In general, is it possible to test the exact right scenario? No. You can do a test of masks with COVID, but you'll do it in a different room than I'm in. How do you know it works in my room, or outdoors, rather than in the room you ran the tests in? There is no evidence – no published, peer-reviewed studies – that it works at my location or in the identical type of surroundings to mine.

You must always use conceptual/creative/logical thinking, not just evidence, to decide how to apply results. If it works in Idaho, will it also work in Texas? Yes, because the physics of air, viruses and masks is the same in both places. Should I also wear masks on the moon? Not necessarily: the moon doesn't have an atmosphere, so there's a relevant difference there, and reconsideration and possibly additional evidence are required.

A scientist may test 10 masks. But my mask isn't one of those ten. It may be the same type as one of those ten, or it may be a similar type, but it's certainly not one of the masks that was tested. So there is "no evidence" that my mask will work because my mask has never been tested. This is absurd, but arguments that are no better are routinely made in public, by respected intellectuals, and accepted by audiences. Usually they are made in favor of conclusions the speaker, and a lot of their audience, are already biased in favor of.

Why are they biased in favor of their conclusion? Some reason other than evidence. They analyze it in some other way, reach a conclusion, then claim the reason they reached it is based on evidence, even though it isn't. They do this because "evidence" is widely considered powerful and considered the appropriate way to evaluate ideas. Arguments and reasoning are less fashionable.

In our culture, saying "all the evidence supports my side" is a great appeal to authority; if people believe your assertion is true, they'll be really impressed. But if you say "all the arguments support my side" people aren't as impressed. It doesn't work as well. People seem to think that confusion about arguments is common – arguments are fallible – but evidence is more safe, reliable, trustworthy. Of course, sometimes they're wrong, at which point they may say "the evidence changed, and I changed my mind with it" or similar. They do know the current state of the evidence can change, just as the current state of the arguments can.

I think appeals to authority are bad. But they do seem pretty popular. I think our society would benefit a lot from better rationality.

Perhaps the fundamental reason people get away with saying there is "no evidence" for conclusions they dislike is because there is no such thing as supporting evidence. They may dislike and disagree with Karl Popper, but they're also sort of implicitly weaponizing one of his major claims (about the logical difficulties with supporting evidence) against everything they dislike. But they're also hypocritical. They don't use the same standards for ideas they do like. For their own ideas, they claim there's lots of positive, supporting evidence.

The correct way to judge ideas is: can we come up with any alternatives which are not refuted by evidence or critical reasoning? If you use criticism, including empirical (evidence-based) and non-empirical criticism, to refute all the alternatives anyone has come up with, then you can reach a conclusion because you have one single idea left that survived criticism. At least that's what I think. Popper wrote a lot of things which kind of suggest a viewpoint like that, but he didn't get all the way there. He said you could use a bunch of criticism and then evaluate which idea best survives criticism and use that even if there are several contradictory survivors. Popper didn't focus on decisive arguments the way Critical Fallibilism does. I think Popper's proposal makes more sense than the standard view. It's important to use negative arguments and to consider alternatives. If you want an idea to win, you need to figure out and criticize alternatives, not ignore alternatives and add a bunch of support to it. Support doesn't work, both logically and practically.

Note as a detail: People commonly bring up the concern that alternatives haven't been researched enough. But "this is under-researched and more research is needed" is an alternative that's very easy to think of; it's generic and you can repeat it for many subjects without having to creatively innovate each time. So that is one of the alternatives you should consider, and if all alternatives are refuted, that means this one has been refuted too.


A better thing to say, instead of "there is no evidence for X", is "the evidence for X isn't a perfect fit". In other words, none of our evidence directly confirms X as a really specific, narrow claim. Instead, our evidence shows patterns that fit X pretty well but don't fit every detail. That doesn't mean the patterns contradict any details, just that some details are omitted.

For example, the evidence that masks help prevent the spread of the flu isn't a perfect fit for whether this mask, in this building, will help prevent me from getting COVID.

If you say it this way, it leads to discussion about what patterns we're aware of and have evidence for, and how the patterns should be extrapolated or generalized. What cases are similar enough that the evidence for the pattern is evidence for that case, and when does it not count or not apply? That would be an honest conversation, and potentially productive, even though I think that kind of positive-evidence-for epistemology is incorrect.

But when people say "no evidence for X" they're suggesting there's nothing to talk about. They're suggesting there are no patterns that may or may not fit well enough to count and nothing to be discussed. They're suggesting there is no evidence/pattern that is even close. They're being super dismissive instead of opening the door to conversation. That's usually dishonest. If it's some dumb issue like "no evidence that aliens visited us and planted fake dinosaur fossils" then whatever, the phrase isn't such a big deal. But for reasonable topics where some reasonable people are on the other side, the people saying there is "no evidence" for the other side are just trying to prevent debate by being unreasonably dismissive. They're treating the other side like they're being as reasonable and evidence-based as people claiming aliens planted fossils, even though they're just saying masks work for COVID. That shows poor integrity.

Also, when people say "no evidence for X" it often sounds to the audience like X was studied and failed. But the real situation is often that there's no evidence just because it wasn't studied yet, so if it were studied you don't know what the outcome would be.

Also, as above, often some relevant things have been studied, and it takes thoughtful discussion to figure out if they are close enough, if their patterns should be extrapolated to this case, or not. Or put another way, even if the patterns are a pretty poor fit, wouldn't that be a small amount of evidence, not literally none? Stuff has to be pretty dumb for "no" evidence to be a reasonable, honest claim instead of "low" evidence. So "no evidence" is generally a red flag about the speaker, whereas "low evidence" is a hint they might be a more precise, reasonable, honest person who is open to conversation instead of really dismissive of alternative ideas.

Tons of stuff hasn't ever been studied well, even things that many people claim, assume or expect were studied well. So "no evidence" doesn't really mean as much as it sounds like. For example, there's often "no evidence" that a vitamin helps with a particular disease because that hasn't ever been studied. Medical studies on humans are expensive! But there certainly is evidence that that vitamin has various benefits, and there's some evidence and knowledge about its function that lets us reason about what diseases it could help with. So there is absolutely relevant evidence that could be extrapolated to at least partially apply, even if the exact issue hasn't gotten an expensive study.



Chess Time Loop

There was a viral discussion about being trapped in a time loop that you can only escape by winning a game of chess against Garry Kasparov (one of the all-time best chess players). I think the discussion is bad, and its going viral shows bad things about people.

The scenario is ambiguous. Removing ambiguity from the problem/scenario/goal is one of the main things Critical Fallibilism (CF) says to do at the start of the discussion, before debating, quarreling, or reaching conclusions. So I want to talk about how CF would approach this and how it differs from typical discourse.

First, as a minor point, the vast majority of people participating in this discussion don't know enough about chess to have an informed opinion. It's viral among regular people, not just chess players. Yet people who are ignorant about chess seem to like arguing over this scenario about chess. CF says to consider what prerequisites you need in order to realistically and effectively accomplish your goals (like figuring out the correct conclusion to a scenario), not to ignore the concept of prerequisites.

One of the main reasons people disagree about the chess time loop scenario is they interpret it in different ways. Each person bases their conclusions on their interpretation. Then they disagree with people about their conclusions and argue over them. So, as I see it, they are debating a downstream consequence of an earlier disagreement, and they aren't focusing their debate on the earlier disagreement, the origin of their disagreement. They aren't looking for the root cause of their disagreement and trying to focus discussion there. This doesn't make sense logically or organizationally. You don't have to always look at root causes, but they're often useful to consider.

For an example of different interpretations, some people think that Kasparov would play a variety of different moves in the same position. Other people think if you reach a specific position, Kasparov would always play the identical move. Whether he's extremely predictable, or not, makes a large difference to the difficulty of beating him. But from what I've seen (and admittedly I didn't research this much), people are more inclined to debate their final conclusions than their interpretations/premises like whether or not Kasparov would always play the same move in the same position.

I've also seen people say basically that determinism implies he'd always play the same moves in the same position. This is false and careless. Determinism (with some standard assumptions like no multiverse) implies he'd always take the same actions in the identical situation. But the current position on the chess board is not the full situation. Even under determinism, he could play different moves, in the same position, depending on your demeanor and how long you spend on your moves. If you played particularly quickly or slowly, or seemed especially nervous or calm, any of those (and many more subtle things) could cause Kasparov to go down a different train of thought in his head and therefore end up playing a different move, even under determinism. If you want to repeat moves based on a deterministic premise, you need to be a great actor who can put on the identical show every time. Or maybe a similar show would work. You'd have to find out from experience how much leeway you have to behave differently on different moves, as well as how much your behavior can result in changes on later moves not just on the next move.

Also, if you have to behave the same to get the same moves from Kasparov, then it may ruin the strategy of switching colors and playing his own moves against him. Some people assumed you can choose your color at the start of the game, so you can play black for the first game and see what Kasparov does, then resign after he moves, play white in your second game, play his move, see his reply, remember it, resign, play black again and find out his second white move, etc. But this means I need to memorize one series of moves for black, and one series of moves for white, and the assumption is I can play on either side and Kasparov will always play the same moves. If I'm white and I use the same demeanor and move speed every time, maybe that works. But how can I duplicate the same game with black? I'd have to, by trial and error, figure out some different demeanor, suitable to a different set of moves, that'll cause Kasparov to play the white side of the same game I got him to play with black. Just using my white demeanor, while playing black, wouldn't make sense because the moves are different and have different conceptual meaning. And if I don't know their conceptual meaning, my demeanor will be off, and Kasparov will notice something is weird, and he won't play the same moves he normally would against a strong player. The idea with switching colors is basically to get him to play against himself, but if you don't know how to act like a strong chess player it won't work right. If you play Kasparov's moves that you memorized, but you don't act like a strong player, he may change his moves.

Also, as someone who actually knows about chess, I can tell you the likely result of getting Kasparov to play against himself: a draw. Draws are the most common result among top players. You have to win, not just draw, so you have to outplay him somehow, not just match his skill level. Chess has a large margin of error to get draws; it takes a significant advantage to win; small advantages often fizzle out to draws instead of being increased to decisive advantages. Usually losing requires making a bunch of mistakes that add up; one severe mistake works too but grandmasters are great at avoiding huge mistakes.

Anyway, there are different interpretations of the scenario, and people bicker about the right conclusion to reach, and that is viral for some reason among people who know little about chess, logic, philosophy, debate or the physics of time travel.

I suspect many people like debating something where they secretly feel there's no right answer. They act like there is a right answer. They'll be pushy about their answer. But I suspect deep down they feel safe because it's silly and they can't really be wrong because the scenario is impossible and ambiguous. They want to bicker over stuff where there is no objective truth for smarter people to discover and win with.

If people were being reasonable, many of them would be less interested or approach it in other ways such as how CF would: make a list of interpretations of the scenario, make a list of conclusions of the scenario, then match them up. For each interpretation, you could agree on what conclusion makes sense. You could still disagree about which interpretation to believe, but you could discuss some stuff objectively and reach a significant amount of agreement about issues like which versions of the scenario imply which outcomes. You could agree about whether many scenario plus conclusion pairs are correct or incorrect. This is like the CF method of pairing multiple ideas with multiple goals and evaluating the pairs instead of trying to evaluate the ideas independent of the goals or with ambiguity about which goal(s) you care about.
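That pairing method can be sketched roughly as follows. The interpretations and conclusions listed here are hypothetical placeholders I made up for illustration, not an actual analysis of the scenario:

```python
# Hypothetical interpretations and candidate conclusions, for illustration only.
interpretations = [
    "Kasparov always plays the same move in the same position",
    "Kasparov may vary his moves in the same position",
]
conclusions = [
    "you can eventually escape by memorizing his moves across loops",
    "you must genuinely outplay him to escape",
]

# Evaluate every (interpretation, conclusion) pair separately, instead of
# debating bare conclusions while leaving the interpretation ambiguous.
pairs = [(i, c) for i in interpretations for c in conclusions]
for interp, concl in pairs:
    print(f"Given [{interp}], evaluate: [{concl}]")
```

The point is organizational: with 2 interpretations and 2 conclusions there are 4 pairs to evaluate, and people could agree about many of the pairs even while still disagreeing about which interpretation is right.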

But people aren't trying to have structured, productive discussions. That's not what they want from the scenario. That isn't why it's viral. They aren't approaching it with the attitude of a scientist or rational philosopher. They don't know how to and don't seem very interested.

As to the right interpretation of the chess time loop, there isn't one. I think that's the objectively correct, logical conclusion, but I grant that it's not a very satisfying or fun conclusion. The scenario wording is ambiguous and the scenario also violates the laws of physics. Sometimes you can figure out a good interpretation of something ambiguous by thinking about what makes sense, and bringing in additional constraints on what could be meant like physics or logic. But this scenario explicitly does not follow the laws of physics. Time loops aren't a real thing. There is no science that can help fill in the details that the problem statement left out. People tend to assume things like the amount it violates the laws of physics or logic should be minimized, but that doesn't actually make sense and isn't meaningful guidance. It's well known that, logically, you can use a single contradiction to reach any conclusion whatsoever. One contradiction, which violates the laws of logic, implies anything at all, of your choice, which you can prove using the standard laws of logic plus the one contradiction. A single contradiction completely breaks all of logic. Similarly, there's no good physics-based way to reason about how time loops would work since they're simply made up and not part of real physics. Trying to extrapolate from real physics to them isn't going to work well because real physics doesn't allow them. If it did work well, people interpreting the scenario in different ways would be less of a problem.
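The claim that one contradiction implies any conclusion is the standard logical principle of explosion (ex falso quodlibet). The derivation is short:

```latex
% Principle of explosion: from a contradiction P \land \lnot P,
% any proposition Q whatsoever follows.
\begin{align*}
1.\;& P \land \lnot P && \text{(the contradiction, assumed)} \\
2.\;& P               && \text{(from 1, conjunction elimination)} \\
3.\;& P \lor Q        && \text{(from 2, disjunction introduction)} \\
4.\;& \lnot P         && \text{(from 1, conjunction elimination)} \\
5.\;& Q               && \text{(from 3 and 4, disjunctive syllogism)}
\end{align*}
```

Since $Q$ is arbitrary, a single admitted contradiction lets you prove anything, which is why "minimize the violation of logic" gives no usable guidance for interpreting the scenario.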

What people are actually doing is closer to extrapolating how time loops work based on some movies and TV shows they've seen. So the answers they reach depend on their TV viewing history and memory, not on physics or logic. They wouldn't want to admit this, but TV, movies, video games and novels are probably where their intuitions about time loops come from. Sometimes they like to pretend to be reasoning like scientists and logicians, but they're using a ridiculous scenario that's actually based on fictional stories they watched actors portray on sets that would look extremely fake when viewed at most angles and distances besides the actual camera angle and distance used. And the actors did many takes and the scenes were edited together. And they based it on scripts that were edited: people made stuff up, then crossed things out and made up different stuff.

But, again, it's viral. Something about this kind of unproductive debate, that could never be very productive, really appeals to millions of people. And it could be made more productive with CF methods – discuss the scenario, clarify it, write down multiple interpretations, and then evaluate the right conclusion separately for each interpretation. But I don't think most people want to do that. I don't think that would appeal to them even if they knew that method existed. They know the scientific method exists and they aren't trying to use it. They know there are experts at debate and they aren't trying to research what those people do and emulate it. They're just quarreling on social media because, apparently, they wanted an opportunity to do that and jumped at the chance even though they aren't chess players.

I think the thing they're doing is basically what they want to do and like. It's not all that different than bullying people who don't wear fashionable clothing. People are used to, in many parts of life, taking sides on arbitrary issues. Sports fans will strongly take sides in favor of their team even though it doesn't really matter who wins the sports game and they don't even try to discuss which team objectively merits the win more or how that could be determined.

This article has over-generalized some. People vary. I'm trying to comment on some themes and issues that apply to a lot of people but certainly not everyone. People are individuals but there are also some common patterns that come up with many people that can be worth understanding. Even when a pattern applies to a person, they're still also an individual who generally doesn't fit the pattern perfectly, just pretty well (or you could build a lot of margin of error into the pattern, in which case people could fit perfectly because the pattern itself is less specific and gives a lot of room for variety and individuality).


Elliot Temple | Permalink | Messages (0)

Toxic Attitudes about Greatness

LMD wrote on my forum:

Something in me says its a waste if you don’t actually achieve actual prime-mover greatness.

This attitude is toxic and I want to warn people against it. This wording suggests it's coming from Ayn Rand (without necessarily assigning blame) by using her prime mover terminology. I think she had some of this attitude but it also comes from David Deutsch a lot. Popper and Goldratt are much better about it (not at countering it, just at not doing/spreading it).

Due to having a mentor (Deutsch) who is bad on this issue, I've struggled with this (I didn't have the issue before the mentor). Personally, I'm happy with my abilities and accomplishments, but I've had confusions about how to view good but not great ideas, skills and actions.

The idea I got from Deutsch is sort of like: society is broken, the default status quo is broken, so you have to be way better than most people just to avoid disaster. It's not that you have to be super skilled in order to do awesome stuff; it's just that everyone who isn't super skilled does awful stuff. Like Deutsch basically says if you aren't a top 0.001% outlier, then you will be an abusive parent (I'm not exaggerating; he's actually used stronger rhetoric than that including comparisons with slavery). For Deutsch, you have to be one of the very best just to be OK, decent, acceptable – you actually don't really get to be considered great or special for doing that. It's all stick, no carrot. It's toxic.

Deutsch also basically said you have to question conventional romance and monogamy, and have great outlier ideas, or else have awful relationships. And you have to question conventional philosophy (like induction) and find something far better or else you'll have awful philosophy ideas (and therefore e.g. have poor prospects at doing good work as a scientist or AI researcher). And, according to Deutsch, if you don't adopt especially good versions of libertarian type ideas about politics and economics, then you'll have really awful ideas on those subjects, with various bad consequences. And if you practice a religion you're probably just full of really nasty static memes that drive you to really immoral behaviors and ideas; static memes make you have anti-criticism attitudes and biases and otherwise sabotage the means of correcting errors, which is according to Deutsch basically the most evil, immoral, depraved thing you can do. It's being bad at error correction (not e.g. something about violence) that Deutsch makes the centerpiece of morality in his book:

Deutsch wrote in The Beginning of Infinity:

Could it be that the moral imperative not to destroy the means of correcting mistakes is the only moral imperative? That all other moral truths follow from it?

A practical result of this is that many of Deutsch's fans lie about wanting all types and strengths of criticism on all topics, lie about their openness to debate, lie about how open to outlier ideas they are, lie about being thick-skinned, and lie about liking and wanting intellectual conversations when they are actually having negative emotions. They can feel especially pressured to be great regarding this specific issue to avoid Deutsch's moral condemnation, but they generally aren't already great at it, and some try to fake it until they make it, which I've never seen work. Others, failing to meet Deutsch's moral standards, give up entirely instead of doing some of this stuff in a way that would work for them.

Deutsch presents too many of his ideas in an all-or-nothing way where you have to do great at the whole thing or else be condemned, which leads to people giving up who could otherwise have done a good or OK job that actually would have been a lot better than nothing. Reasonable people want self-esteem; if greatness is required to have it but they aren't already great, then they'll either be dishonest about how great they already are or they'll be alienated and reject that school of thought.

Another example of Deutsch's all-or-nothing attitude is when he tweeted:

All compulsory education, "tough" or not, "love" or not, in camps or not, and whether it "traumatises" or not, is a violation of human rights.

The tweet was a comment on Troubled US teens left traumatised by tough love camps, which it linked to.

So Deutsch was defending abusive teen wilderness camps as not being worse than other schools, because all schools are bad, so to him they're all the same (regardless of e.g. whether the children at the school are sexually assaulted or not). A school must be great or else Deutsch condemns it just as much as the worst schools.

He's also said stuff about how parents have to permanently give their young children total freedom, with no possibility of ever changing your mind, or else the children are unfree today and the parents are coercive no matter what their actual relationship with their children is like. You have to do everything fully how he wants, and sometimes even commit to it forever, or you're bad. Committing not to change your mind in the future is a problematic demand coming from someone who also says being open to changing your mind in the future is the key to both rationality and morality.

Another practical result of Deutsch's writing about morality is to confuse people. Most people are fairly concrete thinkers. Abstract statements about error correction being the only moral imperative can mislead people about other issues like violence that seem to be excluded from mattering morally. They might hear that violence is morally acceptable since there is only one moral imperative which does not mention violence. That wouldn't be the correct analysis but Deutsch didn't explain a better one. He brought the idea up briefly, enough to get people's attention (I've seen a lot of people remember and talk about this), but then he didn't explain it much or offer much help for dealing with it. Morality is a particularly bad topic to come in, say something really strong and really unconventional, imply that readers have to do a great job with it or they're immoral, and then leave readers to work out all the implications and details about what to do. People aren't good at reinventing morality based on a couple clues plus fear of being immoral.

Also he phrased it as a question but he's implying that the answer is "yes" and lots of readers take it that way. Being able to say you didn't make a claim, you just brought up a question to consider, is a way to get an idea across to readers while still being able to deny responsibility for it.

Also from The Beginning of Infinity:

This is a rare and deadly sort of error: it prevents itself from being undone.

That type of error is common, not rare (as Deutsch himself argues elsewhere in the same book when he talks about static memes). And calling something "deadly" when basically everyone does it daily is toxic and also incorrect (if lots of people do it daily for decades without dying, then "deadly" is the wrong word; it being a metaphor doesn't prevent it from being a bad choice of words).


Returning to the bigger picture, I think Deutsch is wrong about greatness and I've unpacked, untangled and rejected some of it but I think some errors linger in my thinking and also sometimes people go read my old more-Deutsch-influenced writing and find more of these errors.

Deutsch's version is more toxic than Ayn Rand's. Rand didn't say that Eddie Willers or Mike (the construction worker who makes friends with Roark) is a bad or awful person. She thought lots of non-great people were decent, reasonable people who had fine lives. According to Rand, you don't have to be like Dagny just to avoid being like her brother; there's a lot of room in between.

I'm guessing that LMD picked up some of Deutsch's attitude.

One of the things that's been hard for me with rejecting Deutsch's toxicity is that I do think that Keynesian, Marxist and some other widespread economics that exist today are very flawed. I do think conventional parenting and relationships have lots of flaws. I think politics is a broken mess. I think the government is inefficient, clumsy, makes lots of mistakes, etc., and I also now (unlike Deutsch) think that about big companies (which I now also think commit lots of fraud and do various bad things like poisoning us, see e.g. Silent Spring). I have a lot of criticisms of social dynamics, social status, social networking, etc.

I don't think society is very merit-based. I don't think merit rises to the top well. That's relevant: Deutsch is demanding that people achieve a level of merit that often isn't rewarded by society, and may well be punished by society, as Deutsch himself has mentioned. When Deutsch demands that people be super rational, he's by implication telling them not to play irrational social status games that actually work and get you career advancement. It's problematic to tell your smart readers to be incompatible with most careers. And it's hypocritical because Deutsch himself put a lot of work into fitting in and getting along with people in order to advance his career; he didn't just pursue rational greatness as his only imperative above other concerns.

Seeing a lot of flaws in society makes it harder to think that being normal is just fine. So there's an issue there. I think it's toxic and false to tell everyone they have to be in the top 1% or else their lives will be disasters full of errors, misery and treating themselves and others immorally. But I do think there are lots of big problems in the world. How do you tell people there is huge room for improvement without them thinking that if they don't improve they're bad? How do you tell people that some particular conventional ideas lead to concrete negative results, including harm, without telling them that continuing to hold those ideas is harmful and bad? Just saying that you're always going to be fallible and be at the beginning of infinity, and error is just an ever present part of life, and everyone can just do their best ... is not good enough to address this; it's not much of a solution (even setting aside that "do their best" could be taken (contrary to common usage) to imply striving as hard as possible for greatness).

There's also a related issue that comes up on a small scale. For example, if you don't smoke or drink alcohol or coffee because you think those things are unhealthy (for all humans, not just for yourself personally), how do you say that without people feeling that your opinion applies to them too? You can present it as a choice about your own life and how you want to live, but they can still see it as having implications about their life. They can feel judged or pressured. Ideas which are critical of things in objective, impersonal ways can be a big deal to other people.

I don't have all the answers here but I think the way Deutsch handles it is really toxic and Rand's handling is problematic though better than Deutsch's (due to giving aspirational positive examples and saying there's a lot of room to exist in between her heroes and villains). It's also notable that Deutsch is worse than Rand about this because Rand has a (partially fair, partially unfair) well known reputation for being especially toxic about this kind of issue.


Elliot Temple | Permalink | Messages (0)

Responding to AI Summaries of Popper's Critics

I asked the Gemini 2.5 Pro (Preview) AI:

Today, what are the most popular texts criticizing Karl Popper's epistemology?

Let's go through Gemini's answer. I think this will be reasonably useful because the AI is trained on many statements people wrote about Popper and about these critical texts. I expect these answers to provide a reasonably representative mixture of what many philosophers have written.

Identifying the "most popular" texts criticizing Karl Popper's epistemology today involves looking at works that remain highly influential, are frequently cited, and continue to shape discussions in the philosophy of science. Based on these criteria, here are some of the most significant texts and influential lines of criticism:

  1. Thomas S. Kuhn - The Structure of Scientific Revolutions (1962): This is arguably one of the most impactful books in the 20th-century philosophy of science and offers a profound challenge to Popper's views.
  • Main Criticism: Kuhn argued that Popper's emphasis on falsification as the primary activity of scientists misrepresents much of scientific practice. Kuhn introduced the concept of "normal science," where scientists work within an established "paradigm," solving puzzles rather than constantly trying to refute the paradigm itself. Falsification, for Kuhn, is more characteristic of "scientific revolutions" (paradigm shifts), which are rarer and more complex social and psychological processes than Popper's logical schema suggests. The "Kuhn-Popper debate," particularly prominent in the collection Criticism and the Growth of Knowledge (edited by Imre Lakatos and Alan Musgrave, 1970), highlights these differences.

Popper didn't say that falsification is the primary activity of scientists. He said scientists need conjectures and refutations, not primarily refutations. Popper emphasized this point, even titling a book Conjectures and Refutations.

Popper's goal wasn't to observe what scientists do in practice then accurately document it. He used abstract, logical reasoning to analyze how science could, should, and couldn't possibly work. Responding about what the primary activities of scientists are or aren't is not very relevant to Popper's main ideas.

The "normal science" and "scientific revolutions" issue is basically about sociology, psychology and human behavior. If it's correct, that wouldn't make Popper's logical analysis of epistemology incorrect. What people are like today (or even their unchangeable human nature) is a separate issue from the logic of knowledge creation.

These claims don't challenge Popper's main, logical points about epistemology. They're a side issue from what I wanted to defend or discuss.

  1. Imre Lakatos - "Falsification and the Methodology of Scientific Research Programmes" (1970): Published in Criticism and the Growth of Knowledge, this work attempts to provide a more sophisticated version of falsificationism that addresses some of the criticisms leveled against Popper (including Kuhn's).
  • Main Criticism (and development): While a student of Popper, Lakatos acknowledged the naivety of "dogmatic falsificationism." He proposed that scientists work with "research programmes" consisting of a "hard core" of theories protected from refutation by a "protective belt" of auxiliary hypotheses. A research programme is progressive if it leads to novel predictions and degenerating if it only makes ad-hoc adjustments to anomalies. This implicitly criticizes Popper's simpler model by showing how theories can be legitimately shielded from falsification, at least temporarily.

Popper's model was focused on a different thing. Popper focused more on the logic of abstract epistemology, not on the behavior of people. The behavior of people is more complex and is also a different topic. Popper's focus on logic, not psychology or behavior, was even in his book title, The Logic of Scientific Discovery. Again, this criticism is a bit off topic and doesn't actually try to challenge Popper's main epistemology claims.

  1. Paul Feyerabend - Against Method (1975): This book presents a radical critique of the idea that there is a single, universal scientific method, including Popper's falsificationism.
  • Main Criticism: Feyerabend advocated for "epistemological anarchism," arguing that major scientific advancements often occurred precisely because scientists violated established methodological rules. He contended that any prescribed methodology, including falsification, would stifle scientific progress. His famous slogan is "anything goes."

This doesn't actually claim that Popper's epistemology methods are wrong or don't work. It just claims that there might exist other working methods. None are mentioned here. There is no mention here of any defense of the particular methodologies that Popper said don't work (like induction or justificationism).

I don't recall Popper saying there couldn't possibly be any other methods. If Feyerabend actually knows of any other viable methods, that would be interesting, but that isn't mentioned here. I tried following up with Gemini:

Short answer. Did Feyerabend defend any specific methods besides Popper's as viable?

It said:

No, Feyerabend famously argued against the idea of any single, universally prescribed scientific method, including Popper's, being uniquely viable.

Instead, he advocated for methodological pluralism, famously encapsulated in his phrase "anything goes," suggesting that scientists should be free to use whatever methods, rational or irrational, best suit their research and lead to progress, rather than being constrained by one specific set of rules.[1] He didn't champion another specific method over others as the single correct one.  

So Feyerabend said go ahead and do anything even irrational things. I guess he also was in favor of using illogical methods. This strikes me as either silly or not really talking about the same things as Popper. Methods that, as a matter of logic, cannot work ... cannot work. So don't use them. What would make some sense is using methods that people don't know how to explain in words or logically defend in hopes that they have differences from the methods that have been logically refuted. Human behavior is often quite complex so even if some elements of it are logically wrong, progress could be made because other elements are better. But this is again getting into issues of human behavior instead of the abstract logic of science and epistemology.

  1. Hilary Putnam - Various Essays (e.g., "The 'Corroboration' of Theories" in Philosophical Papers, Vol. 1, 1975): As detailed previously, Putnam raised significant objections to Popper's epistemology.
  • Main Criticism: Putnam focused on the untenability of Popper's rejection of induction, arguing that if corroboration provides no reason to believe a theory is more likely to be true or reliable for future predictions, then science becomes practically irrelevant. He also emphasized, in line with the Duhem-Quine thesis, that theories are not tested in isolation, making straightforward falsification problematic.

Saying not to reject induction because you don't like the consequences is a bad argument. You can't figure out what's true based on what you'd like to be true.

Also, Popper didn't think science is irrelevant. Putnam is taking some extreme, exaggerated version of Popper's position, then finding he dislikes it, then concluding that Popper must be wrong and we can stick with some standard views despite Popper's criticisms of them.

You don't need any alternative in order to refute something like induction. Again, all of these criticisms strike me as not really focused on logical analysis.

Being unhappy about Popper's alternative doesn't make it false, and it's also weird to be unhappy over conclusions Popper himself didn't reach and denies.

Regarding testing theories in isolation: that's an important issue which Popper worked on, other thinkers worked on, and which could still be worked on more. It's a difficulty that comes up for Popper and others, which we could still figure out better ways to handle, but which isn't a decisive refutation of Popper's epistemology.

  1. The Duhem-Quine Thesis (articulated in works by Pierre Duhem and W.V.O. Quine, e.g., Quine's "Two Dogmas of Empiricism," 1951): While not a direct critique solely of Popper (Duhem predates much of Popper's relevant work), the implications of this thesis are fundamental to many criticisms of falsification.
  • Main Criticism: This thesis asserts that a hypothesis cannot be tested in isolation because empirical tests of a hypothesis require numerous background assumptions (auxiliary hypotheses). Therefore, when an experiment yields a result contrary to the hypothesis, it's not clear whether the fault lies with the hypothesis itself or one of the auxiliary assumptions. This makes conclusive falsification of a single theory difficult, if not impossible.

Yes. Popper discussed this. His methods can be used despite these issues because they don't rely on conclusive proof of anything. All our knowledge is tentative and fallible.

  1. Larry Laudan - e.g., "The Demise of the Demarcation Problem" (1983) and Beyond Positivism and Relativism (1996): Laudan famously criticized Popper's central project of demarcating science from non-science (or pseudo-science) using the criterion of falsifiability.
  • Main Criticism: Laudan argued that the demarcation problem is a pseudo-problem and that no single criterion (including falsifiability) can successfully distinguish science from non-science in all historical and practical contexts. He pointed out that many well-established scientific theories have been unfalsifiable at certain points, while some pseudoscientific theories can be formulated to be falsifiable.

These texts and the arguments they contain remain central to understanding the limitations and challenges to Popper's epistemology. They are standard readings in most philosophy of science courses and continue to inform contemporary debates about scientific methodology, rationality, and the nature of scientific progress.

Popper's demarcation criterion is a terminology suggestion related to his commentary and criticism of some thinkers like Marx, Freud and Adler. It's not a very important issue to his epistemology. Science can use many types of criticism, including empirical tests. If Popper was wrong about terminology and some scientific claims can't be empirically tested, that's OK because non-empirical criticism would still work.

My conclusion is that none of these six criticisms actually sounds like relevant, logical criticism of Popper's core epistemology claims. They're mostly either indecisive points (like about some things being difficult) or off-topic points (mostly about human behavior rather than the logic of how knowledge can and can't be created). These points mostly, even if they are correct, aren't a threat to Critical Rationalism. If anyone knows that any of these AI summaries are wrong, or knows of any more relevant criticisms, please let me know.

When I've looked at Popper criticism papers in the past, and books advocating rival schools of thought, including pro-induction textbooks, my general experience has been similar to this: most of it misses the point and is more clearly off-topic than wrong. Many of the criticisms of Popper are things I could concede while still liking Popper and getting a lot of value from his ideas that weren't criticized. And a lot of the positive claims that contradict Popper, like about induction, are too vague to engage with directly. I can respond with meta-criticism about how it's vague or doesn't address key questions, but that is less satisfying than actually being able to respond about epistemology. If anyone knows of some good materials to engage with and/or has enough knowledge about induction and is open to debate, let me know. I've been looking for many years.

I've tried making guesses about induction to add details. I've tried branching (they could mean X, Y or Z, so let me respond to all three). I've tried filling in blanks, writing my own summary of induction, etc. I haven't found this an effective way to enable writing critiques that inductivists accept. I, like Popper, have been accused of attacking a straw man. I think I could pick any of the six texts Gemini listed, criticize it, and be accused of attacking a straw man or dumb text by some inductivists and told that I should have responded to some of the good inductivist literature instead. I don't want to attack a straw man; I don't actually want to guess what inductivists think or fill in blanks for them; I don't want to put words in their mouth; I want them to write it down so I can respond to quotes. I've never been able to find anything suitable to engage with that addresses the key questions necessary for me to write Popperian criticism instead of either asking clarifying questions or filling in blanks myself. I've also asked over a dozen inductivists to provide a realistic, detailed, step-by-step example of using induction, but none of them have ever been willing to try, and I've never found that in literature. Nor have I ever gotten a list of steps to do induction that I could follow. So I've failed to find any abstract induction theory with enough information to engage with well and I also failed to find specific examples or steps for induction. That leaves me wondering what exactly persuaded anyone to be an inductivist.


Elliot Temple | Permalink | Messages (0)

Hilary Putnam Misquoted Karl Popper

A critic of Karl Popper on Reddit wrote:

Here's an exercise for you: to the best of your ability, restate and respond to Putnam's criticism of Popper in 'The 'Corroboration' of Theories' or Lakatos's criticism of Popper in 'Falsification and the Methodology of Scientific Research Programmes'. Bonus points for also stating how they differ from each other. Shouldn't be too difficult since you're aware of these common criticisms of Popper!

So let's take a look at that Putnam text. It opens:

... Sir Karl's fundamental attitudes: 'There is no method peculiar to philosophy'. 'The growth of knowledge can be studied best by studying the growth of scientific knowledge.'

Philosophers should not be specialists. For myself, I am interested in science and in philosophy only because I want to learn something about the riddle of the world in which we live, and the riddle of man's knowledge of that world. And I believe that only a revival of interest in these riddles can save the sciences and philosophy from an obscurantist faith in the expert's special skill and in his personal knowledge and authority.

I checked the three unsourced quotes. They're from Popper's The Logic of Scientific Discovery, in the 1959 preface to the first English edition. The third quote has significant edits with no indication that Putnam made changes. Let's go through them in order.

The first quote omits Popper's italics.

The second quote omits italics, begins mid-sentence, and changes the first word to begin with a capital letter.

The third quote, the block quote, also starts mid-sentence and edits the first word to be capitalized.

So far these issues are somewhat minor. But it gets worse. The third and final sentence of Putnam's block quote is:

And I believe that only a revival of interest in these riddles can save the sciences and philosophy from an obscurantist faith in the expert's special skill and in his personal knowledge and authority.

Here's the sentence Popper actually wrote:

And I believe that only a revival of interest in these riddles can save the sciences and philosophy from narrow specialization and from an obscurantist faith in the expert's special skill, and in his personal knowledge and authority; a faith that so well fits our 'post-rationalist' and 'post-critical' age, proudly dedicated to the destruction of the tradition of rational philosophy, and of rational thought itself.

Putnam deleted the words "from narrow specialization and".

Putnam deleted the comma after "special skill".

Putnam replaced the semi-colon and everything after it with a period.

Putnam didn't communicate that he made edits.

Deleting words from the middle of a quote without using an ellipsis or square brackets is unacceptable.

Putnam's text is a reprint of a text that Popper already responded to, plus a brief response to Popper's response. In Putnam's followup he has a section "The Charge of Textual Misrepresentation" which responds to Popper's claim that Putnam had misrepresented Popper. It doesn't address these quotation issues, which seems careless to me: if you're going to address that kind of charge, you ought to double-check the accuracy of your quotes!

This does not inspire confidence that Putnam's text is worth reading or contains good criticism of Critical Rationalism (or that the Lakatos text, recommended by the same Redditor, is good).

Lakatos, it should be noted, is a fan of Popper. Here's how Lakatos begins his contribution to The Philosophy of Karl Popper, edited by Paul A. Schilpp, volume 1, page 241:

Popper’s ideas represent the most important development in the philosophy of the twentieth century; an achievement in the tradition—and on the level—of Hume, Kant, or Whewell. Personally, my debt to him is immeasurable: more than anyone else, he changed my life.

So it doesn't make sense to cite Lakatos when you want to dismiss Popper as a bad thinker who should be ignored and whose fans shouldn't be debated.

And Putnam is a partial fan of Popper, not someone who thinks Popper should be dismissed or ignored. Here's how Putnam begins the text the Redditor recommended:

Sir Karl Popper is a philosopher whose work has influenced and stimulated that of virtually every student in the philosophy of science. In part this influence is explainable on the basis of the healthy-mindedness of some of Sir Karl’s fundamental attitudes

And Putnam wrote a text trying to engage with Popper, plus a followup. Putnam did not behave like some Redditors (who ironically cite Putnam) who think Popper should be treated as an unreasonable outcast and his claims and fans ignored without debate.


Elliot Temple | Permalink | Messages (0)

Philosophy Is Important But Underserved

Philosophy is an important field because it provides some premises for all other fields. Issues that are very relevant to many fields include: how to be rational and objective, how to deal with bias, critical thinking, organizing knowledge, debate, how to learn effectively, how to find and correct errors, problem solving and goal accomplishing methods, and integrity.

Philosophy is an underserved field. There isn't a lot of good work being done in philosophy. A lot of the effort basically covers the history of philosophy and isn't producing useful breakthroughs relevant to other modern fields like science. A lot of philosophy work is inductivist, justificationist, indecisive, etc. The field broadly ignores Karl Popper and there aren't good debates happening. Ideas like JTB (knowledge = justified, true belief) and credences are bad.

It's hard to find philosophy bloggers worth reading or philosophy YouTubers worth watching. The field feels pretty empty. It's not like video games where there's tons of decent material available. It also feels very empty compared to politics. This is partly because academic philosophers engage with the public at low rates, which is partly because the public doesn't like them, which they would partly blame on the public not having enough expertise and intelligence. I put more blame on the philosophers lacking communication skill, having dumb ideas, obscuring the lack of substance with complexity, etc. Academic philosophy is a field that alienated Richard Feynman and Karl Popper; it's not just ignorant or dumb people who are unimpressed. Lots of smart, well-read people think academic philosophy isn't very good currently.

And academic philosophers are very into gatekeeping, so if a lay person reads some of their favorite books and has some ideas about them, they still generally won't want to engage with that person. They also don't mind ignoring Objectivism even though Rand is one of the most famous and best selling philosophers. They aren't really trying to engage with stuff that resonates with more people. Lots of them are happy in their little niche where they often play office politics and social climb instead of actually challenging each other appropriately. Part of what many of them like about their niche is the lack of criticism and lack of threatening types of debate that would challenge their key claims or premises in a way where they could potentially clearly lose the debate.

Most philosophers don't even try to work on rationality and other really useful, practical topics. Even the ones whose speciality is epistemology generally turn it into something abstract and disconnected from decision making, learning and debate.

Less Wrong (including Effective Altruism) cares about rationality, but they aren't mainstream or academic philosophy, they're quite hostile to Critical Rationalism (even without my addition of rejecting credences, weighted factors, etc., which they'd really dislike), they won't debate and choose not to engage with lots of criticism, and they moderate their forums to limit dissent. Objectivism also cares about rationality, but Objectivists don't actually try to study and analyze as much as Less Wrong, and they broadly aren't open to debate or free speech discussion either. I've never found the discussion quality to be good at Critical Rationalism forums either, and I haven't found Theory of Constraints forums at all. These sorts of forums allow some discussion and have some argumentative members. But if you want more structured discussions/debates that actually go somewhere and reach important conclusions, rather than short verbal skirmishes that people quit in the middle, then people stop talking to you or moderators intervene.

A lot of people, including academic philosophers, seem satisfied by Facebook and Twitter for discussion. Social media isn't designed for serious discussions that reach important conclusions. Academic journals aren't designed for back-and-forth discussions either and are gatekept. In general, most intellectuals seem to think of debate as something that happens in voice, preferably in person, as a one-day event, rather than something that's best done in text over a period of many days (having some voice debates with a time limit like two hours is fine but I don't think those should be primary).

The people who will debate some on the internet tend to not think of themselves as top experts, tend not to think of the conclusions of their debates as being important to the world, and, if they start losing a debate, often either flake or get angry/upset/tilted. It's hard to get anyone to have multi-day text debates and take them really seriously. That kind of activity just doesn't have enough people in general, not just for philosophy topics.

I don't see a bunch of great text debates happening between other people; the primary issue isn't lack of access or gatekeeping, it's that people don't do this stuff enough.

You can find some long disorganized debates on internet forums, but it's hard to find anything that's structured in a way to reach clear conclusions. In my experience, as soon as I start asking for structured debate and bringing up suggested methods to accomplish it, people stop debating. In the past, I had no trouble getting some long disorganized debates that failed to stay on topic, keep track of everything that was said, and reach clear conclusions. But now that I'm looking for more effective discussions, and have some ideas about methods (and am open to alternative methods if anyone else has ideas about how to have productive discussions), I find it's hard to get discussions. And I don't think that is about me because I don't see other people having those kinds of conversations without me.

I think many people don't want discussions that result in clear conclusions because then they can lose. I think most people don't like to risk losing. Also, most people aren't good at cooperative critical thinking and pursuing the same topic in a way where no one loses; if it's not adversarial debate, people tend to lose interest and flake. Maybe that's because a lot of their motivation is the "someone is wrong on the internet" type, or because internet forum posters don't see themselves as capable of doing serious, important research while academics don't do research that way.

Tons of people do their own thing and make their own claims without being willing to thoroughly address much criticism. They attract fans, funding, promotions, etc., or they fail to. There is a competition to get those resources where people and their theories try to be more appealing than others rather than directly debating others about which theories are true. This happens in many intellectual fields, like various sciences, not just in philosophy.

I don't have a solution. I just think this is an important problem that's worth bringing up. There's a lack of people doing useful work on key areas of philosophy that underlie other fields, and there's a broad lack of organized, effective discussion and debate in the world.


Elliot Temple | Permalink | Messages (0)