Repetitive Stress Injury Psychology and Personal Story

Below is an email to Robert Spillane. He's a thinker who agrees with lots of Thomas Szasz's ideas, and knows a lot about Popper and other philosophers. His book An Eye for An I: Philosophies of Personal Power covers many philosophical ideas. He wrote an article about Repetitive Stress Injury (RSI).

I share my experience with RSI. From my story, you can learn about RSI, and you can also learn how to think about, take responsibility for, and solve your own problems.


http://www.szasz.com/spillaneremarks.html

I had RSI problems, which I solved by myself before reading Szasz. Before reading much of your perspective, I wrote down my existing thinking, below. After reading the rest, I see that we broadly agree. I believe my view adds something you don't say.

I liked your comment on the word "demoralised". I particularly agree with:

There are serious psychosocial consequences when people with discomfort in the arm are told that they may have a crippling disease which demands urgent medical treatment and cessation of physical activities.

And I found this especially horrible:

Personal activity is discouraged because insurance companies, facing large payouts, employed private investigators whose evidence, admissible in industrial courts, could prove embarrassing to plaintiffs. Faced with the prospect of jeopardising their claim, workers were inclined to adopt the patient role and assume a state of dependency

I'd be very interested if you think any of my account is mistaken or contradicts Szasz:

I had wrist pain which disrupted my computer use. I wasn't malingering. I wanted to use computers heavily. I didn't have a job at the time ("Occupational Overuse Syndrome" is stupid). I didn't spend much time interacting with doctors about it. I didn't find the doctors useful. I found better info online. I didn't use any RSI medicine beyond wearing wrist splints while sleeping. I could have gotten cortisone shots and probably surgery if I'd wanted to; that would have been a terrible idea.

Bodies have physical limits. My physical problem was real and was addressed with physical solutions: a better chair, ergonomic changes, stretching, breaks, and a temporary reduction in typing. My main problem was typing with bent wrists, which I ceased after educating myself.

I was scared by reading about how RSI could cripple me long-term. What people say about RSI is very dangerous. While learning standard RSI advice, I made myself fearful and stressed about whether my wrists would improve. RSI advice says you're largely helpless – you may be crippled for life with nothing you can do about it. I started worrying.

My physical problem was adequately solved after perhaps a few months, but I didn't notice. I had ongoing pain for several years! Because of my fear, I was oversensitive to minor pain and minor non-pain sensations, and I imagined some pain. I hated my RSI problem rather than benefitting from it.

What really scared me was the claim, which I accepted, that pushing past pain would make my injury worse. That was completely different than my attitude to sports. In sports, I routinely ignored minor pains because I had a rational understanding of which pain indicated a genuine danger and which pain was harmless. I'm good at ignoring pain that I don't consider dangerous.

I had a bad time with RSI because I accepted bad ideas about which pain is dangerous. After the initial physical improvements, I only had mild pain that I could have tolerated if I wanted to. But I was unwilling to because medical authorities told me that ignoring the pain could damage my body and cripple me in the long term. I could have toughened up, as I'd done with sports pains, but medical advice told me not to! I was trying to be responsible and conscientious...

My pain went away when I recognized what was going on and relaxed about it. I'd already solved the physical problem in the past. Introspection and changing my attitude then solved the mental problem.

I believe on principle and logic (without much direct evidence) that the pattern of my experience is common, minus the solution. But I couldn't estimate how common it is compared to other patterns like malingering. The pattern is:

  1. Have a real physical problem while psychologically fine.
  2. Learn about RSI and create a psychological problem.
  3. Take steps to solve the physical problem, which work.
  4. Have an ongoing psychological problem which you confuse with the original physical RSI injury.

Note this pattern explains the development of RSI over time, in contrast to the 8 scenarios you present which state the situation at a particular time.

So I think the standard advice and medical authority associated with RSI is doing immense harm. It scares people, and encourages them to be oversensitive to pain and therefore to exaggerate. Thereby, "medical" advice causes RSI!

I was fooled by bad, pseudo-medical advice to intentionally be sensitive to mild discomfort... The reasoning was that pain is a warning sign for injury, so if you try to be mentally tough about the pain then you will cripple yourself. I think serious physical injuries called "RSI" happen, but malingering, exaggerations and mental errors are way more common.


Elliot Temple | Permalink | Messages (0)

Sunk Costs

Many people know about the sunk cost fallacy. And they often think other people are stupid for getting sunk costs wrong.

But people often label something a sunk cost when it's actually a real, relevant issue. So sunk cost claims shouldn't just be dismissed as if they never matter.

A sunk cost is an investment in some kind of project which you already made and can't recover. It can include money, time, effort, etc.

Should you stick with projects you already invested in? Everything else being equal, no. If some other project is better than continuing this one, switch. The sunk cost should be ignored. You compare continuing this project from this point forward against the alternative projects.

Here's the real issue which people are sometimes speaking incorrectly about: We're presumably looking at a project with some large startup costs, barriers to entry, costs to finish the whole project, etc. If it was a cheap project, people wouldn't care about the sunk costs much. (Actually sometimes people eat food they hate because of the sunk cost of spending $10 on it already, even though they can easily afford food they do like. That's stupid.)

Projects are usually replaced by similar projects. So expensive projects are frequently being compared to other expensive projects. If I already invested $1000 in this project, maybe I'll have to invest $1000 in the alternative project, too.

So people complain about the "sunk cost" of the $1000 already spent on this project, when what they really should say is that they'd like to switch projects but the other project would cost $1000. If they'd change their thinking in that way, it'd be better. As it stands they're wrong, and they don't understand the sunk cost concept correctly (which, as mentioned above, involves comparing continuing the current project from where you are against the new project from where it starts, and that implies taking the alternative project's startup costs into account).

But when you tell people to ignore sunk costs, you can be giving bad advice. They can think they are supposed to ignore the $1000 difference between the projects (both cost $1000 originally, but you already paid for one and not the other) because it's a sunk cost issue. Furthermore, this sunk cost issue didn't exist before they paid $1000 for the first project, since back then they had a spare $1000.
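
Here's a minimal sketch of the comparison in Python, with made-up dollar amounts (the function and numbers are mine, just for illustration). The sunk $1000 never appears in the math; the alternative project's still-unpaid $1000 startup cost does.

    # Toy illustration with made-up numbers: compare each project from this
    # point forward. The $1000 already spent on the current project is sunk
    # and never enters the calculation; the alternative's startup cost does.

    def net_value(remaining_benefit, remaining_cost):
        # value of a project measured from this point forward
        return remaining_benefit - remaining_cost

    # Current project: $1000 already spent (sunk), $200 of work left, worth $1500.
    current = net_value(remaining_benefit=1500, remaining_cost=200)        # 1300

    # Alternative project: worth $1600, but its $1000 startup cost plus $200
    # of work are still ahead of you.
    alternative = net_value(remaining_benefit=1600, remaining_cost=1200)   # 400

    print("switch" if alternative > current else "continue")               # continue

If switching looks worse here, it's because of the alternative's startup cost you'd still have to pay, not because of the money you already spent.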


Elliot Temple | Permalink | Messages (0)

Humans Matter – Post and Podcast

I've been making podcasts. I made a new one:

Watch: Humans Matter

This post is the thoughts I wrote before podcasting.


Human beings are amazing. They can be powerful, wise, and accomplish great things. Most people don't think of themselves that way. They're kinda shitty and they've accepted that. Kids think they will be awesome later or never. Adults wait tables and know they're nothing great. People think greatness is for a few great men, geniuses and giants, and they don't know how to be that. They think it just happens, somehow, and if it doesn't happen to them well, so what? They don't pursue greatness. Sometimes they talk about passions and dreams and then ... start a restaurant and cook some food. What about science? What about big ideas? Yes lots of big ideas are dumb and impractical. So make good ones! That doesn't mean there can't be good ideas, it means more people need to work on it. Like you, not someone else!

People need meaning and responsibility in their life too. They need to do something that matters or they won't be happy. They may pretend they're happy and have some short term pleasures, but it's not fulfilling. They might cope, but they could be a lot happier. (They might possibly even cope without drugs – including recreational drugs, prescription drugs, psychiatric drugs, painkillers, alcohol, nicotine, marijuana and caffeine.)

People don't tell their kids they're sacred and have a divine soul, that they are demigods who can move heaven and Earth if they want to, and learn enough (knowledge is power), and run their life efficiently and keep up with solving their problems (rather than taking on new problems faster than they solve problems, which is what most people do, and then they get overwhelmed and start lowering their standards and trying to cope with a chronic situation of suffering through many unsolved problems).

Especially today, in our secular society, people don't think human beings are special. But they are. A single person is like a whole species. They're unique. Elephants are a unique animal compared to cows, but individual elephants are all the same thing just like two iPhones are fundamentally the same (even if one is a bit older and slower and has a unique scratch on it).

A human being is a universal knowledge creator. That means they can learn anything that can be learned. You have that capability. Your child has that capability. That's your birthright. Humans are born with tremendous potential.

People come along to FI and say philosophy is hard, they aren't super into it, blah blah blah. They are wasting their lives on petty things. The universe is a big place. People should be exploring the stars, harnessing nuclear power, understanding the multiple universes implied by quantum physics, programming AIs, curing cancer, curing aging, automating all the easy and boring jobs to rescue billions of people from manual labor and drudgery, and understanding reason and morality better and better.

People aim so small. Want to earn a few hundred thousand dollars? Why not a billion? Seriously. Create something great which creates $100 of value for 10 million people that didn't exist before. That's a billion dollars. OK OK, charge them half, so you make half a billion and those 10 million people are all $50 richer (half a billion in total). You want to help people? Do that. You think that's totally unbelievably impossible for you? Why? It's easy to make a website which can handle 10 million visitors in a year. You can reach 10 million people if you want to, and they want to.

A hundred dollars isn't much. There are over 10 million Americans who have more than a hundred dollar problem with how they eat. They endure thousands of dollars of suffering, human scales, food scales, calorie counting, buying things, forcing themselves to exercise, wasted gym memberships, etc., trying to diet. They eat things which cost more, taste worse, and are counterproductive. There are so many big pain points in so many people's lives.


Elliot Temple | Permalink | Messages (0)

What Philosophy Takes

suppose someone wanted to know what i know today about philosophy.

they better be as smart, honest and good at learning as me or put in as much time/attention/effort as me. if they are way behind on both, how is that going to work?

if you aren't even close in either area, but you pretend you're learning FI, you're being irresponsible and lying to yourself. you don't actually have a plan to learn it which makes sense and which appears workable if you stop and look at it in broad strokes.

consider, seriously, what advantages you have, compared to me, if any. consider your actual, realistic capabilities. if the situation looks bad, that is good information to know, which you can use to formulate a plan for dealing with your actual situation. it's better to have some kind of plan than to ignore the situation and work with no plan or with a plan for a different (more positive) situation you aren't in.

if you're young, this stuff still applies to you. if you aren't doing much to learn philosophy now, when will you? it doesn't get easier if you wait. it gets harder. over time you will get less honest and more tied up in a different non-FI life.

whatever issues you have with FI, they won't go away by themselves. waiting won't fix anything. face them now, or don't pretend you're going to face them at all.

if you're really young, you may find it helpful to do things like learn to read first. there's audiobooks, but it isn't really just about reading, it's also vocabulary and other related skills. putting effort into improving your ability to read is directly related to FI, it's directly working on one of the issues separating you from FI. that's fine.

if you're doing something which isn't directly related, but which you think will help with FI, post it and see if others agree with your plan or think you're fooling yourself. if you're fooling yourself, the sooner you find out the sooner you can fix it. (or do you want to fool yourself?)


Elliot Temple | Permalink | Messages (4)

Reading Recommendations

I made a reading list. If you want to be good at thinking and know much about the world, these are the best books to read by the best thinkers. In particular, if you don't understand Ayn Rand and Karl Popper then you're at a huge disadvantage throughout life. (Almost everyone is at this huge disadvantage. It's a sad state of affairs. You don't have to be, though.) I put lots of effort into selecting the best books and chapters to highlight, and including brief summaries. The selected chapters are especially important for Karl Popper, who I don't think you should read cover-to-cover.

Many other philosophy books, including common recommendations, are actually so bad that people think intellectual books suck and give up on learning. So I want to help point people in the right direction. (If you think my recommendations are bad, speak up and state your judgement and criticisms. Don't silently dismiss the ideas with no possibility of being corrected if you're mistaken.)

Ayn Rand is the best moral philosopher. That covers issues like how to be happy, what is a good life, and how to make decisions. There's no avoiding those issues in your life. Your choice is whether to read the best ideas on the topic or muddle through life with some contradictions you picked up from your culture and never critically considered.

Karl Popper is the best philosopher of knowledge. That covers issues like how to learn, how to come up with solutions to problems (solutions are a type of knowledge, and problem solving is a type of learning), and how to evaluate ideas as good, bad, true or false. Critical thinking skills like this are part of everyone's life. Your choice is whether to use half-remembered half-false critical thinking skills you picked up in school, or to learn from the best humanity has ever had and consciously think things through.

I made a free video presentation covering the reading list. It'll help you understand the authors, find out which books interest you, and read more effectively. Take a look at the reading list, then check out my video overview.

Watch: Elliot presents the reading list. (This video is also a good introduction to philosophy and Fallible Ideas.)

If you have some interest in learning about reason, morality, liberalism, etc, please take a look at the reading list and watch the video. This was a big project to create a helpful resource and I highly recommend at least looking it over.

I also recorded two 3-hour discussions. I talked with other philosophers who are familiar with the material. We talk about what the books say and how they're valuable, who the authors are and what they think, why people have trouble reading, and some philosophical issues and tangents which come up.

If you love reading books, dive right in! But if you're like most people, you'll find podcasts easier. Most people find verbal discussion more fun and engaging than books. The podcasts will help you get information about what the books are like, which can help you become interested in the first place.

Buy: Alan Forrester Discussion

Buy: Justin Mallone Discussion


Elliot Temple | Permalink | Messages (23)

What People Need Excuses For

What issues people feel they need an excuse for, instead of just openly taking a position, is important. It says a lot about what they consider fully legitimate and what they consider more questionable.

For example, no one needs any kind of excuse to be pro-freedom, pro-science, or pro-education. People take those positions proudly.

But people do need excuses to mistreat people they label "mentally ill". They don't just say, "I don't like him, so let's use force against him!" Psychiatry makes up a bunch of excuses about medical science to help legitimize their otherwise-illegitimate actions.

Global warming is also an excuse. The greens don't just proudly say that factories and electricity are bad. They say we're forced to cut back on industrial civilization or else the world will be destroyed. They feel they need a really strong, compelling excuse for opposing material wealth and technology.

The political left doesn't want to admit they are anti-liberal. They feel they need excuses for being anti-liberal. Their favorite excuse is to lie and say they are "liberal" and "progressive". And many people claim capitalism is pretty good (they find it hard to proudly and fully oppose capitalism), but they use excuses about "excesses" and "public goods" to legitimize a mixed economy.

People often feel the need to have an excuse for shutting down discussion and being closed-minded. They don't just say, "I am opposed to critical discussion with people who have different views than I do." Instead they make excuses about how they'd love to discuss but they're too busy, or the other person is ruining the discussion by being too unreasonable.


Elliot Temple | Permalink | Messages (0)

Aristotle (and Peikoff and Popper)

I just listened to Peikoff's lectures on Aristotle. I also reread Popper's WoP introduction about Aristotle. some thoughts:

http://www.peikoff.com/courses_and_lectures/the-history-of-philosophy-volume-1-–-founders-of-western-philosophy-thales-to-hume/

btw notice what's missing from the lecture descriptions: Parmenides and Xenophanes.

this is mostly Peikoff summary until i indicate otherwise later.

Aristotle is a mixed thinker. some great stuff and some bad stuff.

Part of the mix is because it's ancient philosophy. They didn't have modern science and some other advantages back then. It's early thinking. So Aristotle is kinda confused about God and his four causes. It was less clear back then what is magical thinking and what's rational-scientific thinking.

Aristotle is bad on moderation. He thought (not his original idea) that the truth is often found between two extremes.

Aristotle invented syllogism and formal logic. this is a great achievement. very worthwhile. it has a bad side to it which is causing problems today, but i don't blame Aristotle for that. it was a good contribution, a good idea, and it's not his fault that people still haven't fixed some of its flaws. actually it's really impressive he had some great ideas and the flaws are so subtle they are still fooling people today. i'll talk about the bad side later.

it's called formal logic because you can evaluate it based on the form. like:

All M are P.
S is an M.
Therefore, S is P.

this argument works even if you don't know what M, P and S are. (they stand for middle, predicate and subject.) (the classical example is M=man/men, P=mortal, S=Socrates.) Aristotle figured out the types of syllogism (there's 256. wikipedia says only 24 of them are valid though.)
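
as a quick illustration (my own toy example, not from the lectures), you can see the form working with Python sets: whenever every member of M is in P and S is in M, S is in P, whatever the letters stand for.

    # Made-up sets; the point is only that the conclusion follows from the
    # premises for ANY M, P and S, regardless of what they stand for.

    M = {"Socrates", "Plato", "Aristotle"}   # "men"
    P = M | {"Bucephalus"}                   # "mortals", a superset of M
    S = "Socrates"

    assert M <= P    # premise: All M are P
    assert S in M    # premise: S is an M
    assert S in P    # conclusion: S is P
    print(S, "is P")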

Aristotle was apparently good on some biology and other science stuff but i don't really know anything about that.

Aristotle started out as a student of Plato but ended up rejecting many of Plato's ideas.

Aristotle didn't say a ton about politics. What he said is mixed. Better than Plato.

Aristotle – like the Greeks in general (as opposed to e.g. pre-modern Christians) – cared about human happiness and life on Earth. and he thought morality was related to human happiness, success, effectiveness, etc. (as opposed to duty moralities from e.g. early Christians and Kant which say morality means doing your duty and this is separate from what makes you happy or makes your life good.)

Aristotle advocated looking at the world, empirical science. he invented induction.

Aristotle was confused about infinity. (Peikoff and some other Objectivists today like Harry Binswanger roughly agree with Aristotle's infinity mistakes.)

Aristotle was generally pro-human and pro-reason. in a later lecture Peikoff says the dark ages were fixed because European Christendom got some copies of Aristotle's writing from the Muslims and Jews (who were trying to reconcile him with their religions) and then Thomas Aquinas attempted to reconcile Aristotle with Christianity and this made it allowable for Christians to read and think about Aristotle which is what got progress going again.


now Popper's perspective, which Peikoff basically agrees with most of the facts about, but evaluates differently.

Popper agrees Aristotle did some great stuff and got a few things wrong. like Peikoff and a ton of other people. But there's a major thing Popper doesn't like. (BTW William Godwin mentioned disliking Aristotle and Plato but didn't say why.)

Aristotle wanted to say I HAVE KNOWLEDGE. this is good as a rejection of skepticism, but bad as a rejection of fallibility. Aristotle and his followers, including Peikoff, equivocate on this distinction.

Part of the purpose of formal logic is an attempt to achieve CERTAINTY – aka infallibility. that's bad and is a problem today.

Objectivism says it uses the word "certain" to refer to fallible knowledge (which they call non-omniscient knowledge. Objectivism says omniscience is impossible and isn't the proper standard for something to qualify as knowledge). and Ayn Rand personally may have been OK about this (despite the bad terminology decision). but more or less all other (non-Popperian) Objectivists equivocate about it.

this confusion traces back to Aristotle who knew induction was invalid and deduction couldn't cover most of his claims. (Hume was unoriginal in saying induction doesn't work, not only because of Aristotle but also various others. i don't know why Hume gets so much credit about this from Popper and others. Popper wrote that Aristotle not only invented induction but knew it didn't work.)

and it's not just induction that has these problems and equivocations, it's attempts at proof in general ("prove" is another word, like "certain", which Objectivists use to equivocate about fallibility/infallibility). how do you justify your proof? you use an argument. but how do you justify that argument? another argument. but then you have an infinite regress.

Aristotle knew about this infinite regress problem and invented a bad solution which is still in popular use today including by Objectivism. his solution is self-evident, unquestionable foundations.

Aristotle also has a reaffirmation by denial argument, which Peikoff loves, which has a similar purpose. which, like the self-evident foundations, is sophistry with logical holes in it.

Popper says Aristotle was the first dogmatist in epistemology. (Plato was dogmatic about politics but not epistemology). And Aristotle rejected the prior tradition of differentiating episteme (divine, perfect knowledge) and doxa (opinion which is similar to the truth).

the episteme/doxa categorization was kinda confused. but it had some merit in it. you can interpret it something like this: we don't know the INFALLIBLE PERFECT TRUTH, like the Gods would know, episteme. but we do have fallible human conjectural knowledge which is similar to the truth (doxa).

Aristotle got rid of the two categories, said he had episteme, and equivocated about whether he was a fallibilist or not.

here are two important aspects of the equivocation and confusion.

  1. Aristotle claimed his formal logic could PROVE stuff. (that is itself problematic.) but he knew induction wasn't on the same level of certainty as deduction. so he came up with some hedges, excuses and equivocations to pretend induction worked and could reach his scientific conclusions. Popper thinks there was an element of dishonesty here where Aristotle knew better but was strongly motivated to reach certain conclusions so came up with some bullshit to defend what he wanted to claim. (Popper further thinks Aristotle falsely attributed induction to Socrates because he had a guilty conscience about it and didn't really want the burden of inventing something that doesn't actually work. and also because if Socrates -- the ultimate doubter and questioner -- could accept inductive knowledge then it must be really good and meet a high quality standard!)

  2. I talk about equivocating about fallible vs. infallible because I conceive of it as one or the other, with two options, rather than a continuum. But Peikoff and others usually look at it a different way. instead of asking "fallible or infallible?" they ask something like "what quality of knowledge is it? how good is it? how justified? how proven? how certain?" they see a continuum and treat the issue as a matter of degree. this is perfect for equivocating! it's not INFALLIBLE, it's just 90% infallible. then when i talk about fallible knowledge, they think i'm talking about a point on the continuum and hear like 0% infallible (or maybe 20%) and think it's utter crap and i have low standards. so they accuse me and Popper of being skeptics.

the concept of a continuum for knowledge quality – something like a real number line on which ideas are scored with amount of proof, amount of supporting evidence/arguments, amount of justification, etc, and perhaps subtracting points for criticism – is a very bad idea. and looking at it that way, rather than asking "fallible or not?" and "is there a known refutation of this or isn't there?" and other boolean questions, is really bad and damaging.

Peikoff refers to the continuum with his position that ideas can be arbitrary (no evidence for it. reject it!), plausible (some evidence, worth some consideration), probable (a fair amount of evidence, pretty good idea), or certain (tons of evidence, reasonable people should accept it, there's no real choice or discretion left). he uses these 4 terms to refer to points on the continuum. and he is clear that it's a continuum, not just a set of 4 options.

But there is nothing more beyond fallible knowledge, short of infallible knowledge. And the ongoing quest for something fundamentally better than unjustified fallible knowledge has been a massive dead end. All we can do is evolve our ideas with criticism – which is in fact good enough for science, economics and every other aspect of life on Earth.


Elliot Temple | Permalink | Message (1)

Epistemology

I wrote:

The thing to do [about AI] is figure out what programming constructs are necessary to implement guesses and criticism.

Zyn Evam replied (his comments are green):

Cool. Any leads? Can you tell more? That's what I have problems with. I cannot think of anything other than evolution to implement guesses and criticism.

the right answer would have to involve evolution, b/c evolution is how knowledge is created. i wonder why you were looking for something else.

one of the hard problems is:

suppose you:

  1. represent ideas in code, in a general way
  2. represent criticism in code (this is actually implied by (1) since criticisms are ideas)
  3. have code which correctly detects which ideas contradict each other and which don't
  4. have code to brainstorm new ideas and variants of existing ideas

that's all hard. but you still have the following problem:

two ideas contradict. which one is wrong? (or both could be wrong.)

this is a problem which could use better philosophy writing about it, btw. i'd expect that philosophy work to happen before AI gets anywhere. it's related to what's sometimes called the duhem-quine problem, which Popper wrote about too.
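
to make the difficulty concrete, here's a toy sketch (a made-up illustration of mine, not a serious proposal) of items 1-3 from the list above: even once ideas and a contradiction check are represented in code, the check only flags the pair -- it says nothing about which idea to reject.

    # toy sketch (made-up, not a serious proposal): ideas as signed claims
    # plus a contradiction check. the check is symmetric -- it flags the pair
    # but says nothing about which idea is wrong.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Idea:
        claim: str       # the proposition, e.g. "the bridge is safe"
        asserts: bool    # True = asserts the claim, False = denies it

    def contradict(x, y):
        return x.claim == y.claim and x.asserts != y.asserts

    x = Idea("the bridge is safe", True)
    y = Idea("the bridge is safe", False)   # a criticism of x

    print(contradict(x, y), contradict(y, x))   # True True -- symmetric
    # nothing here says whether to reject x or y (or whether the contradiction
    # judgement itself is mistaken); that needs more information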

one of my own ideas about epistemology is to look at symmetries. two ideas contradicting is symmetric.

what do you mean by symmetries? how is two ideas contradicting symmetric? could you give an example?

"X contradicts Y" means that "Y contradicts X". When two ideas contradict, you know at least one of them is mistake, but not which one. (Actually it's harder than that because you could be mistaken that they contradict.)

Criticism fundamentally involves contradiction. Sometimes a criticism is right, and sometimes the idea being criticized is right, and how do you decide which from the mere fact that they contradict each other?

With no additional information beyond "X and Y contradict", you have no way to take sides. And labelling Y a criticism of X doesn't mean you should side with it. X and Y have symmetric (equal) status. In order to decide whether to judge X or Y positively you need some kind of method of breaking the symmetry, some way to differentiate them and take sides.

Arguments are often symmetric too. E.g., "X is right because I said so" can be used equally well to argue for Y. And "X is imperfect" can be used equally well to argue against Y.

How to break this kind of symmetry is a major epistemology problem which is normally discussed in other terms like: When evidence contradicts a hypothesis, it's possible to claim the evidence is mistaken rather than the hypothesis. (And sometimes it is!) How do you decide?

So when two ideas contradict we know one of them at least is mistaken, but not which one. When we have evidence that seems to contradict a hypothesis we can never be sure that it indeed contradicts it. From the mere fact of contradiction, without additional information, we cannot decide which one is false. We need additional information.

Hypotheses are built on other hypotheses. We need to break the symmetry by looking at the hypotheses on which the contradicting ideas depend. And the question is: how would you do that? Is that right?

Mostly right. You can also look at the attributes of the contradicting ideas themselves, gather new observational data, or consider whatever else may be relevant.

And there are two separate questions:

  1. How do you evaluate criticisms at all?

  2. How do you evaluate criticisms formally, in code, for AIs?

I believe I know a lot about (1), and have something like a usable answer. I believe I know only a little about (2) and have nothing like a usable answer to it. I believe further progress on (1) -- refining, organizing, and clarifying the answer -- will help with solving (2).

Below I discuss some pieces of the answer to (1), which is quite complex in full. And there's even more complexity when you consider it as just one piece fitting into an evolutionary epistemology. I also discuss typical wrong answers to (1). Part of the difficulty is that what most people believe they know about (1) is false, and this gets in the way of understanding a better answer.

My answer is in the Popperian tradition. Some bits and pieces of Popper's thinking have fairly widespread influence. But his main ideas are largely misunderstood and consequently rejected.

Part of Popper's answer to (1) is to form critical preferences -- decide which ideas better survive criticism (especially evidentiary criticism from challenging test experiments).

But I reject scoring ideas in general epistemology. That's a pre-Popper holdover which Popper didn't change.

Note: Ideas can be scored when you have an explanation of why a particular scoring system will help you solve a particular problem. E.g. CPU benchmark scores. Scoring works when limited to a context or domain, and when the scores themselves are treated more like a piece of evidence to consider in your explanations and arguments, rather than a final conclusion. This kind of scoring is actually comparable to measuring the length of an object -- you define a measure and you decide how to evaluate the resulting length score. This is different than an epistemology score, universal idea goodness score, or truth score.

I further reject -- with Popper -- attempts to give ideas a probability-of-truth score or similar.

Scores -- like observations -- can be referenced in arguments, but can't directly make our decisions for us. We always must come up with an explanation of how to solve our problem(s) and expose it to criticism and act accordingly. Scores are not explanations.

This all makes the AI project harder than it appears to e.g. Bayesians. Scores would be easier to translate to code than explanations. E.g. you can store a score as a floating point number, but how do you store an explanation in a computer? And you can trivially compare two scores with a numerical comparison, but how do you have a computer compare two explanations?

Well, you don't directly compare explanations. You criticize explanations and give them a boolean score of refuted or non-refuted. You accept and act on a single non-refuted explanation for a particular problem or context. You must (contextually) refute all the other explanations, rather than have one explanation win a comparison against the others.

This procedure doesn't need scores or Popper's somewhat vague and score-like critical preferences.
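
Here's a minimal sketch of that procedure in code (my own toy example with made-up ideas and criticisms, not a serious AI design): each criticism either refutes an idea in this context or it doesn't, and you act only when a single non-refuted idea remains.

    # Toy sketch (made-up example) of the boolean procedure above: no scores,
    # just refuted / non-refuted in a given problem context.

    ideas = ["take the highway", "take the back roads", "teleport"]

    # each criticism names the idea it refutes and why it fails in this context
    criticisms = [
        ("take the highway", "the highway is closed today"),
        ("teleport", "no explanation of how to do it"),
    ]

    refuted = {idea for idea, reason in criticisms}
    non_refuted = [idea for idea in ideas if idea not in refuted]

    if len(non_refuted) == 1:
        print("act on:", non_refuted[0])
    else:
        print("unresolved; still standing:", non_refuted)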

This view highlights the importance of correctly judging whether an idea refutes another idea or not. That's less crucial in scoring systems where criticism adds or subtracts points. If you evaluate one issue incorrectly and give an idea -5 points instead of +5 points, it could still end up winning by 100 points, so your mistake didn't really matter. That's actually bad -- it essentially means that issue had no bearing on your conclusion. This allows for glossing over or ignoring criticisms.

A correct criticism says why an idea fails to solve the problem(s) of interest. Why it does not work in context. So a correct criticism entirely refutes an idea! And if a criticism doesn't do that, then it's harmless. Translating this to points, a criticism should either subtract all the points or none, and thus using a scoring system correctly you end up back at the all-or-nothing boolean evaluation I advocate.

This effectively-boolean issue comes up with supporting evidence as well. Suppose some number of points is awarded for fitting with each piece of evidence. The points can even vary based on some judgement of how important each piece of evidence is. The importance judgement can be arbitrary; it doesn't even matter to my point. And take "evidence fitting with or supporting a theory" to mean non-contradiction, since the only known alternatives basically consist of biased human intuition (aka using unstated, ambiguous ideas without figuring out very clearly what they are).

So you have a million pieces of evidence, each worth some points. You may, with me, wish to score an idea at 0 points if it contradicts a single piece of evidence. That implies only two scores are possible: 0 or the sum total of the point value of every piece of evidence.
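
To make the arithmetic explicit (a toy example of mine with made-up point values): if contradicting any piece of evidence zeroes the score, the only scores an idea can get are 0 or the full total.

    # Made-up point values. Contradicting any piece of evidence gives 0,
    # so the only possible scores are 0 and the sum of all the points.

    evidence_points = {"E1": 3, "E2": 10, "E3": 7}

    def score(contradicted_evidence):
        if contradicted_evidence:
            return 0
        return sum(evidence_points.values())   # 20, the maximum

    print(score(set()))           # 20 -- fits all the evidence
    print(score({"E2"}))          # 0
    print(score({"E1", "E3"}))    # 0
    # "scoring" here is really the boolean question: max or not-max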

But let's look at two ways people try to avoid that.

First, they simply don't add (or subtract) points for contradiction. The result is simple: some ideas get the maximum score, and the rest get a lower score. Only the maximum score ideas are of interest, and the rest can be lumped together as the bad (refuted) category. Since they won't be used at all anyway, it doesn't matter which of them outscore the others.

Second, they score ideas using different sets of evidence. Then two ideas can score maximum points, but one is scored using a larger set of evidence and gets a higher score. This is a really fucked up approach! Why should one rival theory be excluded from being considered against some of the evidence? (The answer is because people selectively evaluate each idea against a small set of evidence deemed relevant. How are the selections made? Biased intuition.)

There's an important fact here which Popper knew and many people today don't grasp. There are infinitely many theories which fit (don't contradict) any finite set of evidence. And these infinitely many theories include ones which offer up every possible conclusion. So there are always max-scoring theories, of some sort, for every position. Which makes this kind of scoring end up equivalent to the boolean evaluations I advocated in the first place. Max-score or not-max-score is boolean.

Most of these infinitely many theories are stupid, which is why people try to ignore them. E.g. some are of the form, "The following set of evidence is all correct, and also let's conclude X." X here is a completely unargued non sequitur conclusion. But this format of theory trivially allows a max-score theory for every conclusion.
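
Here's a toy way to see how cheap such theories are to produce (my own illustration, with made-up evidence labels): for any conclusion whatsoever, you can mechanically write down a "theory" that fits all the evidence.

    # Made-up illustration: a template that produces an evidence-fitting
    # "theory" for ANY conclusion, so evidence-fit alone can't pick a winner.

    evidence = ["E1", "E2", "E3"]

    def trivial_theory(conclusion):
        return f"All of {evidence} is correct, and also {conclusion}."

    for conclusion in ["the moon is cheese", "grass cures colds", "2 + 2 = 4"]:
        print(trivial_theory(conclusion))
    # each of these fits (doesn't contradict) the evidence, so each would get
    # the maximum evidence-fit score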

The real solution to this problem is that, as Deutsch clearly explained in FoR (with the grass cure for the cold example), most bad ideas are rejected without experimental testing. Most ideas are refuted on grounds like:

  1. bad explanation

I was going to make a longer list, but everything else on my list can be considered a type of bad explanation. The categorizations aren't fundamental anyway, it's just organizing ideas for human convenience. A non sequitur is a type of bad explanation (non explanation). And a self-contradictory idea is a type of bad explanation too. And having a bad explanation (including none) of how it solves the problem it's supposed to solve is another important case. That gets into something else important which is understood by Popper and partly by Rand, but isn't well known:

Ideas are contextual. And the context is, specifically, that they address problems. Whether a criticism refutes an idea has to be evaluated in a particular context. The same idea (as stated in English) can solve one problem and fail to solve another problem. One way to approach this is to bundle ideas with their context and consider that whole thing the idea.
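
One way to picture that bundling (a hypothetical structure of my own, not anything Popper or Rand specified): keep the statement and the problem it addresses together, and evaluate criticisms against the pair.

    # Hypothetical sketch: the same English statement can be non-refuted for
    # one problem and refuted for another, so evaluate (statement, problem)
    # pairs rather than bare statements.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class ContextualIdea:
        statement: str
        problem: str

    plan_a = ContextualIdea("walk there", problem="get to the corner store")
    plan_b = ContextualIdea("walk there", problem="get to another continent")

    refuted = {plan_b}   # "walk there" fails for crossing an ocean

    for idea in (plan_a, plan_b):
        status = "refuted" if idea in refuted else "non-refuted"
        print(f"{idea.statement!r} for {idea.problem!r}: {status}")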

Getting back to the previous point, it's only for ideas which survive our initial criticism (including not blatantly contradicting evidence we know offhand) that we take more interest and start carefully comparing them against the evidence and doing experimental tests. Testing helps settle a small number of important cases, but isn't a primary method. (Popper only partly understood this, and Deutsch got it right.)

The whole quest -- to judge ideas by how well (degree, score) they fit evidence -- is a mistake. That's a dead end and distraction. Scores are a bad idea, and evidence isn't the place to focus. The really important thing is evaluating criticism in general, most of which is broadly related to: what makes explanations bad?

BTW, what is an explanation? Loosely it's the kind of statement which answers why or how. The word "because" is the most common signal of explanations in English.

Solving problems requires some understanding of 1) how to solve the problem and 2) why that solution will work (so you can judge if the solution is correct). So explanation is required at a basic level.

So, backing up, how do you address all those stupid evidence-fitting rival ideas? You criticize them (by the category, not individually) for being bad explanations. In order to fit the evidence and have a dumb conclusion, they have to have a dumb part you can criticize (unless the rival idea actually isn't so dumb as you thought, a case you have to be vigilant for). It's just not an evidence-based criticism (nor should the criticism be done with unstated, biased commonsense intuitions combined with frustration at the perversity of the person bringing an arbitrary, dumb idea into the discussion). And how do you address the non-evidence-fitting rival ideas? By rejecting them for contradicting the evidence (with no scoring).

Broadly it's important to take seriously that every flaw with an idea (such as contradicting evidence, having a self-contradiction, having a non sequitur, or having no explanation of how or why it solves the problem it claims to solve) either 1) ruins it for the problem context or 2) doesn't ruin it. So every criticism is either decisive or (contextually) a non-criticism. So evaluations of ideas have to be boolean.

There is no such thing as weak criticism. Either the criticism implies the idea doesn't solve the problem (strong criticism), or it doesn't (no criticism). Anything else is, at best, more like margin notes which may be something like useful clues to think about further and may lead to a criticism in the future.

The original question of interest was how to take sides between two contradicting ideas, such as an idea and a criticism of it. The answer requires a lot of context (only part of which I've covered above), but then it's short: reject the bad explanations! (Another important issue I haven't discussed is creating variants of current ideas. A typical reaction to a criticism is to quickly and cheaply make a new idea which is a little different in such a way that the criticism no longer applies to it. If you can do this without ruining the original idea, great. But sometimes attempts to do this run into problems like all the variants with the desired-traits-to-address-the-criticism ruin the explanation in the original idea.)


Elliot Temple | Permalink | Messages (0)