Elliot Temple | Permalink | Messages (3)

The Intellectual Social Game

GISTE wrote:

how does this concept of knowledge being contextual connect to parenting?

  • if a parent sees a problem with something his child is doing, it’s crucial for the parent to consider the child’s reasons for doing it. without incorporating the child’s reasons in the knowledge creation process, the parent is ignoring the context of the knowledge the child is acting on. doing so would reliably have the effect of the child either dismissing the parent's idea, or obeying it. instead, parent should ask child why he’s doing what he’s doing (to the extent the child wants such discussion), in order to help the child, and himself, evolve their initial ideas towards mutual understanding and agreement.

GISTE is playing the intellectual social game. that's why he uses words like "crucial", "incorporating", "reliably" and "evolve". and it's why he wrote a long paragraph with overly complex structure. (btw he made it a bullet point list but it's the only thing on the list. i didn't just omit the other bullet points from the quote.)

i call it a social game because people learn to play this game from other people, and it involves interacting with other people.

people learn social games by observing what others do and then trying to approximate it themselves. this is how they learn other social games like small talk, being in an audience, or parenting.

the "game" terminology is used by Thomas Szasz and Jordan Peterson. i don't know who originated it.

they are games because there's a set of rules people are following. and if you play well, you get rewards (e.g. people like you, think you're smart, and want to play with you again).

the rules for social games are mostly unwritten. (some people are trying to change that, e.g. the Girls Chase book writes down a bunch of social dynamics and dating rules).

how do you learn a game with unwritten rules? you watch what other people do and copy it. (and maybe you make some mistakes and get punished and learn to be conservative.) that's how people learn social interaction.

this is why people don't do intellectual discussion in a precise, rigorous way. because they're just copying approximately what they see other people do, rather than understanding how to think, how discussion can facilitate learning, etc.

part of the intellectual discussion game is writing big words and fancy sentences. so people do that. even if you tell them not to. they are bad at learning from and following written rules. it's not how most people deal with life at all. (some people are good at following written rules in a specific area, e.g. programmers. but they're still usually normal people outside of their speciality.)

science works (or doesn't work...) mostly by people trying to copy the science game from other scientists, not by them learning the scientific method from a book and then following it.

the science game as practiced by most scientists living today is primarily a social game. it involves popularity, social status in the field, people with more authority and more control over money and hiring, people who get published in more or less prestigious journals, people with or without tenure, etc.

and books don't tell you how to play the science social game and succeed in your interactions with other people. how do you get a reputation as a genius, get invited to the best parties, get paid well, get to decide who else is paid well (give out rewards to friends and allies), etc? you look at other people who are succeeding at these games and try to understand how it works.

you actually should read some books, btw, to do well at those games. but not books about science. stuff more like How To Win Friends and Influence People. most people don't read much, though i think reading books like that is more common with the most successful people.

people even use words as a social game. words have written rules (the dictionary). but people don't primarily learn how to use words from the dictionary. they hear and read other people using words, then they try to use those words in similar ways. they just guess the meanings from real world usage. it's very informal and error-prone.

people commonly correct others on mistaken word use when they are playing the social game badly. some word uses make you look like a child, fool, or ignoramus. people punish those with social rebukes.

but people rarely correct others just because their word usage is wrong according to the dictionary. (most people don't even know what the dictionary says, anyway, so they can't correct those mistakes.) so people end up misunderstanding many words and using them wrong.

this leads to problems when they try to discuss with someone like me who knows what words actually mean and uses the dictionary regularly. we're playing different games. i think my way is correct and has major advantages for truth seeking, so i don't want to switch games. and the other people don't know how to switch games. so it's hard to discuss.

people's use of words is very imprecise because it's based on loose guesses from what other people say. they don't know precisely what words mean. they just know enough to communicate with normal people without being rebuked.

people's arguments are imprecise because they use words imprecisely. and because their approach to arguing is to learn it as a social game. the whole social game approach involves lots of approximation and doing things that are sorta in the right ballpark and hoping no one says anything bad about it.

GISTE's approach to philosophy discussion works like that. (and it's the same for most other people). he knows what kinds of things people say in the intellectual discussion social game, and he strongly resists talking in a different way (e.g. more simple, clear, and childlike).

the quote above isn't even very bad, btw. people write way worse stuff all the time.

some of the professionals even study Kant in order to learn to make their writing harder to understand. then it filters down. non-philosopher intellectuals copy intellectual game playing behavior from academic philosophers. then some lesser intellectuals copy them and write material for more sophisticated lay people who pass it on to typical lay people who pass it on to idiots.

this is why it's so hard to find serious truth-seeking discussion. who does that? everyone just learns the intellectual social game instead!


Elliot Temple | Permalink | Messages (3)

Goals and Status Update

Let's talk about what I'm doing, why, and how well it's going.

I want to know things. I want to figure things out.

My philosophy goals are first of all about myself. That's important because I don't want anyone else's decisions or failures to be able to make me fail. I only want other people to have secondary relevance, not primary importance. I want to control my own success and failure.

So I prefer goals like learning things or writing good explanations, rather than being popular or getting useful feedback. I enjoy learning and being creative, and would be unhappy without it. I have high standards for skill and quality, and would be unhappy otherwise. Those are things I can do alone (including using existing books, videos, etc.)

In the past, after over 5,000 hours of super successful collaboration with David Deutsch, I had higher expectations regarding other people. But David stopped discussing philosophy and has proven irreplaceable. (Thomas Szasz was wonderful too, but died at age 92.)

Broadly, the more and better philosophy I read and write, the happier I am. I have other interests but philosophy is the most important and most interesting. Note that I think of philosophy broadly including economics, liberalism, philosophy of science, educational philosophy, parenting philosophy, relationships philosophy, etc. In these areas, and many more, the best thing to do is apply general philosophy concepts. So the good ideas end up being more about philosophy in general than about the particular field. Meanwhile most of the "experts" in these fields are largely useless because they don't know how to think (don't know the key philosophy concepts which apply to their field about how to learn, how to create and evaluate arguments, and more).

Philosophy is about how to think and how to live, and I like to both learn and apply this. I do generally favor more abstract fields (e.g. political philosophy over today's politics over trying to help win an individual congressional election), as a personal preference and because they're more important. (More general purpose issues are like teaching a man to fish instead of giving him a fish. E.g. helping people think better about politics instead of just telling them the answer to one political issue.)

Changing The World

The world is full of suffering, stupidity, ignorance, and unnecessary failure. People are blind to lots of it, and also well aware of lots. Things could be dramatically better using only current ideas (no new breakthroughs). I know some of the best ideas for changing the world, like Taking Children Seriously. Largely destroying the minds of 99.999% of all children is arguably our biggest problem. If people's ability to think wasn't crippled in their early years, then tons of people would be smarter, wiser and more skillful than me (or than anyone else you care to name). That'd be way better.

While I'd like to change the world, I choose projects which are personally fulfilling so I won't regret them even if the world doesn't change. I write things which I'll be happy to have written even if no one replies or learns from them. I avoid outreach projects which focus on getting more people's attention but which I wouldn't be happy with if they failed to bring in a much bigger audience.

Most people trying to change the world skip the step where they put a massive amount of effort into figuring out which are the right changes. I've done that step. I will debate anyone on the matter, but it's hard to find anyone willing to discuss it much.

Audience

My audience is too small. There's around 150 newsletter subscribers and 150 Fallible Ideas discussion group members. The good news is there are some very active people who have been around for years. And the whole audience is much more engaged than is typical. I get 2.4x the open rate and 3.9x the click rate compared to average newsletters.

The small audience is bad because it's not enough to effectively test new ideas with. I could write a great new piece which would resonate with people, but get no feedback, so I don't realize that writing more similar material would be effective.

150 people could be enough in theory. Let's say I write something really great and 20% of my audience loves it. They could then share it with their own friends, who would share it with their own friends, and so on.

Let's say other people know 50 people on average (tons of people have way more than 50 Facebook friends). So I get 30 people to share it and that makes 1500 people. Let's halve it for overlap (two people share to the same person) and to be conservative. The 750 people in the secondary audience will like and share it at a lower rate than the original group, let's say 5%. That's still 37 people sharing it next, up from 30, so it would grow. With this pattern, each additional phase of sharing would increase the number of people sharing it, and it'd quickly become huge. That's exponential growth (literally).
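The sharing arithmetic above can be sketched as a toy growth model. This is a hypothetical illustration using the essay's own example numbers (50 contacts each, half lost to overlap, a 5% re-share rate), not real data:

```python
def sharing_generations(initial_sharers, contacts=50, overlap=0.5,
                        share_rate=0.05, generations=5):
    # Each generation of sharers reaches some contacts; after halving
    # for overlap, a fraction of those reached share it again.
    sharers = float(initial_sharers)
    history = [sharers]
    for _ in range(generations):
        reached = sharers * contacts * overlap  # e.g. 30 * 50 * 0.5 = 750
        sharers = reached * share_rate          # e.g. 750 * 0.05 = 37.5
        history.append(sharers)
    return history

# 30 initial sharers grow each generation: 30 -> 37.5 -> ~46.9 -> ...
print(sharing_generations(30))
```

The growth factor per generation is contacts × overlap × share_rate = 1.25, so the numbers keep compounding; with a factor below 1, the chain would fizzle out instead of going exponential.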

And other stuff can go right. If someone shares on Reddit, it could get upvotes. And it only has to impress one popular person with thousands of social media followers to dramatically increase the numbers.

Difficulty Spreading Ideas

I think the reason my material doesn't spread like this is it doesn't do a great job of offering people what they already want. Instead, I offer new ideas which disagree with some of their current beliefs. I explain why these ideas would improve their lives, but understanding that requires a bunch of thinking.

I believe there aren't enough intellectuals in the world who want to spend much time learning. This makes it hard to spread philosophy ideas to a large audience. I have other forms of evidence about this too, e.g. my extensive and unsuccessful efforts to find any intellectual discussion forum that isn't terrible. (My other criterion for forums is allowing philosophy discussion. There are good forums on limited topics such as Ruby programming).

While a larger audience would be useful to better test out new ideas on, I don't know a good way to get it, and I'm not the right person to go on social media and make a bunch of friends. I hope other people will do that, but I'm not counting on it.

I've tried some advertising but, as expected, it hasn't been very effective. The smart people who like my ideas are also the kind of people who use AdBlock and ignore ads.

I have around 8,000 page views per month for my websites, around half for Fallible Ideas and half for my blog. It doesn't generate many blog comments or incoming emails, except from people who already participate in my email discussion group.

Discussion

I've been gradually transitioning to do more learning alone and less in discussion. This is a personal decision, for my circumstances, which I don't recommend to others. (It's important to be able to spend time alone and do things independently. I already do that exceptionally well. This is just an adjustment of details.)

I was extremely successful learning from discussions, and you could be too. I don't mean exclusively discussions, but they can be a major component of learning. (Note: written discussions are the best if you're serious.)

Discussion is extremely effective until you're around 50% better than the people you're discussing with. The reason you can get significantly past them is because you have different strengths and weaknesses than them. If you're 10% better than someone on average, he's going to be better than you at lots of stuff. So you can still learn a ton from him.

By around 200% better than someone, it gets difficult to learn much from them. Their general quality standards and methods of thinking are too low for you, and you have huge leads in most or all subjects you care about. You could often win an argument with them even if you were wrong. You have to fix anything they say to have more details, follow the methods of reason better, etc, before it can fit into your thinking. You may be better off reading a book than talking to them.

I've been transitioning to less discussion despite being extremely successful at learning from discussions. Why? Because I used discussion efficiently to get around a 50% lead on my primary discussion partner, David Deutsch. Then he stopped discussing and that left me with more like a 500% lead over everyone else.

But if you're not at the very top, discussion is amazing. And if you are at the very top, you ought to discuss with me...

People worry about the blind leading the blind. That's an issue. But if no one in a whole discussion group (including you) knows what e.g. Popper or Rand meant on some point, it's not like you'd do better alone. The blind leading himself alone isn't a solution.

Money

It's extremely hard to make money from philosophy ideas, so I haven't tried much. I've sold around $500 on Gumroad recently. I've made over $15,000 from philosophy, mostly from consulting (~$6,700) and donations (~$8,000), which isn't much. If I didn't do any philosophy and spent that time on programming, I would have made a few million additional dollars. (The market rate for great, experienced programmers is over $300,000/yr total compensation.)

I don't need to make money from philosophy. It's much easier to make money programming. I work part time from home, it's fun, and it pays great. I want to spend some time on programming anyway. I prefer to do a variety of activities, not just philosophy. Making money from philosophy would still be good, though.

I've started selling more material primarily because people, being self-destructively stupid, pay less attention to free material. Even if they read it, they assume free material has less value, rather than making a judgement about how good it is. You have to charge people money to help enable them to care and pay attention.

People have mistaken my generosity for low status and low value. Charging money is a communication mechanism which reduces some misunderstandings. I also like and approve of commercialism. I think it's good to charge for and to pay for value.

There are some issues. People expect polish if they pay money. But I usually try to learn as efficiently as I can, and that means only doing polishing in a minority of cases. Usually I can learn more by e.g. writing a second essay on the same topic, rather than polishing the first one.

People have many bad ideas about monetary value. E.g. they often assume longer is more valuable. Actually, making material shorter is superior – it takes more work for me but saves them time and helps them focus on the most important parts by excluding the lesser parts. (Organizing a lot of material into footnotes and other optional extras is another good approach, so the main material is short but the longer version is still accessible. Only a tiny fraction of people care about reading extra details, but they are the best people, so they matter the most.)

People also think if a book costs $10, then a few essays can only be worth around $1. They aren't considering the value of the ideas for their life. Nor are they thinking about how the book is only so cheap because it's mass produced for a large audience. That book could be 4,000 hours (2 years) of effort from an author whose time is worth $200/hr (the time of the best authors is valuable, and they're the ones you want to read). Add in all the work the publisher does to edit, format and market the book, and plenty of books have over $1,000,000 put into them. Getting it for $10 is an amazing bargain. It's unfair to compare other products to that and expect the same deal.

Similarly, tons of people complain when sophisticated software costs $5, and think iOS apps should be $1 at most. This is ridiculous and is dramatically lowering the quality of software available even for someone like me who is happy to pay.

If I spend 10 hours making something, that's loosely $2000 (it's kinda priceless since no one else can create comparable philosophy work). If I sell it for $20 to my small audience, that's dirt cheap. 10 customers would pay $200, which is 10% of the production costs. I could easily make far more money programming. And I'm far better at philosophy than programming.

(Why would my time be worth e.g. $200/hr? Because I have lots of valuable knowledge which could improve people's lives, which I have skill at communicating in a simple way. And I can do this fast. I can share ideas in a few minutes which could improve people's lives by thousands of dollars if they took them seriously. How did I develop this knowledge and skill? I'm very smart and I spent ~20,000 hours, not counting school or childhood, learning the ideas, learning to write well and quickly, learning to communicate in a simple way, etc. But note that the value of products is really what they do for the customer, not how hard to make they were, so hourly rates are just a loose approximation.)

I have a small audience because I create things which challenge most people's ideas. Most people don't like that. But for people who want to do better than mainstream convention, in the particular kinds of ways I talk about (better thinking rather than e.g. healthier eating), it's very valuable. For the right kind of person who wants it, my work is worth paying a lot for and there isn't much competition.

Most people aren't very interested in investing in themselves. I'll buy whatever books, videos, software, etc, will help me be better. I expect it to pay off in the long run, plus I like it. Most people only spend much money investing in themselves in socially-approved ways (like university courses or maybe a bestselling book).

Part of the issue is that people limit the value they get from philosophy. When reading philosophy, they put over 80% of their effort into preventing themselves from learning, into creating confusions, and into other sabotage. This is due to static memes and various other issues I've written about. This makes them very hard to help. And, yes, you do it, even though you don't realize it. I'm not talking about other people. I'm talking about you. And if you doubt it but are unwilling to discuss-debate the matter, that's a pretty good indication I'm right.

Even for people who like my writing, most of them would rather leave than take it seriously. If I pushed them harder to discuss and learn, and pointed out their personal mistakes, they would hate me and quit. They'd shoot the messenger rather than appreciate finding out about fixable problems in their lives. Even if we stick to intellectual issues, and they merely find out their views about reason, education, politics, etc, are mistaken, they'd still rather leave than discuss those issues to conclusions. Most people don't even want to discuss at all.

Writing a Book?

Books are a high-prestige format for sharing ideas. Especially if you get a major publisher (and get paid 7% royalties). But I'm not after prestige.

I want to use the best formats available for learning and discussion, not the best formats for impressing people. I'd rather have a small, engaged audience which understands stuff than a large audience of cheering-but-ignorant fans.

I prefer to deal with the kind of people who judge quality for themselves in rational ways. I'm not targeting people who follow what's popular. And I'm not targeting people who judge an essay by big words and complex sentences, rather than by how good the ideas are. (People who judge fancy writing negatively are fine!) Simple writing is superior for communicating ideas.

Books are too long and discourage discussion. They're also single-format. There's value in using a mix of short writing, long writing, audio, video, slides, and interactive software.

What's good about books is they can explain a large amount of content in a self-contained way. This is especially helpful for people who don't want to ask questions or take the initiative to put together ideas from multiple places. The ability for someone to learn a lot, alone, from a single, complete package is a good thing. (It'd be better if people were more willing to interact because the misunderstanding rate when reading philosophy books alone is extremely high – call it 98% – and talking to people who already understand the ideas is great for clearing up some misunderstandings.)

Books are difficult to write, and aren't ideal for the author's own learning. Doing 5-20 editing passes isn't the best way to learn. Trying to make the whole book fit together as an effective piece of communication is difficult and is different than understanding the ideas.

The difficulty of writing a book is dramatically reduced if each chapter is a standalone essay, so the book is similar to a series of blog posts. Popper and Rand did most of their non-fiction that way. But that also removes some of the upside of books.

I wrote a series of 22 blog posts in 2007 totaling around 57,000 words, which is the length of a short book (almost 200 pages). That only took me 37 days because I'm a very fast writer (due to learned skill and practice) and didn't edit much. I've worked to develop a skill of writing good first drafts so I don't need to edit as much. (For example, this essay is barely edited.)

The Fallible Ideas website has 74,000 words worth of essays, so it's equivalent to a book.

I'm not a natural writer, by the way. In my childhood I was good at chess, math and computers. Before I learned programming, I thought I might become a scientist. I didn't start learning to write until after reading The Fabric of Reality, which is a philosophy book disguised as a science book.

I was slow at reading and writing when I got into philosophy, and my writing was much worse than today. My early writing did have some redeeming qualities. I attempted to say what I meant instead of impressing people. Writing in a simple, direct, clear and honest way can get you pretty far.

I've now written over 800,000 words of blog posts (5.75 books the length of The Fabric of Reality), over 30,000 philosophy related emails, and lots of other stuff. That's way more effort than most people put into learning. Maybe I'm a "gifted genius prodigy" or whatever (I think that's an excuse people use to legitimize their own mediocrity), but I sure tried harder too. (And I had fun doing it. Learning and creating are the most fun things one can do in life.)

Problem: Tiredness

A main problem I have is getting tired. I don't have plenty of great and easy things to do while tired.

This is 4,000 words. Should I write five essays like this per day? Because if I only write one per day, I'll have a lot of time left over. I generally find writing 4,000 words pretty tiring.

Similarly, I can read 200+ pages in a day, but it's tiring and I'll have a lot of time left over. Even if I read 500 pages and I'm completely exhausted, that takes way less than 16 hours.

Switching between text-based activities (writing, reading) and audio-and-visual-based activities (talking, videos, audio books, text to speech) helps because they use somewhat different energy.

Tiredness is a harder problem because I condense things to increase the information density. I don't just watch a lecture on YouTube, I watch it at triple speed. I only listen to audio books and use text to speech at high speed. I only watch TV and movies at high speed. If I don't do it fast, it's too boring for me to do at all. This makes it more tiring per minute.

This tiredness issue is standard; it's not my personal problem. No one tries really hard all day, every day, or they burn out. The general understanding of wise people is that knowledge workers are limited to around 2-4 hours per day of serious intellectual work. I've seen this claim from many smart people and I haven't found any reasonable arguments to the contrary. For example, Jordan Peterson gave advice about how to study 10+ hours per day. He said you can't do it, don't try, and:

It is very rare for people to be able to concentrate hard for more than three hours a day.

I routinely do more than 3 hours, but much less than 16 hours, so I have to find other things to do. I've tried pushing through tiredness, which is possible in the short term, but I just end up so tired that it lasts through a full night's sleep. It's more efficient if you usually limit your mental exhaustion enough that you can wake up refreshed. I also try to nap when I think I can fall asleep, and I sleep in as much as I want with no alarm. Unlike most people, I'm definitely not sleep deprived.

Finding non-tiring activities I like is fundamentally hard because, even when I do easy stuff, I try to learn something, or I find a way it's interesting and write something about it. If there's nothing to learn or write about, then I don't like it, and I want to use programming to automate it or at least multitask by listening to an audio book.

How's Stuff Going?

I'm doing well at learning things and creating new ideas. I've been writing a lot lately and making some stuff using my voice too.

I have plenty of books in my queue to read, along with some documentaries, Leonard Peikoff lectures, Jordan Peterson lectures, and more. This is helping to make up for the shortage of people to discuss with.

I want to make better material for people to learn philosophy. Lots of my older material has good ideas but is disorganized. If I rewrite stuff, I can organize it better and can easily improve the quality of writing. I want to make it easier for people with limited time and interest to find the best ideas.

Some of my most important ideas are Paths Forward and Boolean Epistemology. These are major, original contributions to philosophy. I'm going to make new and improved explanations of them with better introductions.

Boolean Epistemology corrects Popper and Deutsch on fundamental epistemology. It's about judging ideas as refuted or non-refuted rather than with any kind of amount, weight, degree or score (including corroboration or a critical preference). I attempted to discuss it with Popperians back in 2010, but they were uninterested. Nor is there some other philosophy community which cares. Academic philosophers are terrible, much like government-funded science.

I'm happy with what I've learned and created recently. I'm currently in a highly productive period which I hope to continue indefinitely.

I've had periods in the past where I had serious difficulty finding good enough quality ideas and people to engage with. And I've had issues deciding what sort of audience to target writing for. One needs a context in which to write in order to guide decisions about what to include or exclude, style, etc. It doesn't have to be a real audience though, one can write alone by simply imagining what sort of reader he's hypothetically targeting, or by writing for oneself.

I am trying to focus more clearly on making things I think are good, which I like, regardless of reception. Previously I wrote a lot of things oriented towards getting valuable feedback from people, especially David Deutsch. I've had to gradually change that after searching extensively for good people to discuss with and finding the world is a lot worse off than I'd like it to be.


Elliot Temple | Permalink | Messages (13)

Comments on: Personality or performance?: The case against personality testing in management

Robert Spillane wrote:

Szasz's argument can be supported empirically by the many Australian work organisations whose managers secure psychological profiles on their subordinates despite overwhelming evidence that psychological (especially personality) tests have consistently and strikingly failed to predict work performance (Spillane, 1994).

I was particularly interested in this evidence. Psychologist Jordan Peterson (whose videos I generally like) has claimed the research shows that personality tests do correlate with various life outcomes. For example, he said agreeableness correlates to doing well at university (teachers like people who agree with them and grading is biased). I'd like to know if he's wrong about the correlation research (which I know is very different than understanding what's actually going on).

Peterson specifically says the big five personality traits (openness, conscientiousness, extraversion, agreeableness and neuroticism), plus IQ, are the important ones. He says psychologists don't check if their constructs are accounted for by the big five plus IQ because then they'd find out they haven't invented anything, they've just found a proxy for something that's already been discovered.

Peterson says they discovered these traits by asking people a wide variety of questions and finding that the answers to some groups of questions are correlated. That is, if you give some conscientious answers, you're likely to give other conscientious answers too. The point is that different questions are related, and the questions about personality end up statistically falling into five groups.
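The procedure Peterson describes, grouping questions by correlated answers, can be illustrated with synthetic data. This is a hypothetical sketch with two made-up traits and invented noise parameters, not the actual big five methodology:

```python
import random

random.seed(1)

TRAITS = 2               # latent traits (the real research found five)
QUESTIONS_PER_TRAIT = 3  # each question mainly probes one trait

def person():
    # Each person has a latent level per trait; each answer is that
    # trait's level plus some noise.
    levels = [random.gauss(0, 1) for _ in range(TRAITS)]
    return [levels[t] + random.gauss(0, 0.5)
            for t in range(TRAITS) for _ in range(QUESTIONS_PER_TRAIT)]

data = [person() for _ in range(5000)]

def corr(i, j):
    # Pearson correlation between answers to questions i and j.
    xs = [row[i] for row in data]
    ys = [row[j] for row in data]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Questions 0-2 probe trait 0, questions 3-5 probe trait 1. Answers
# within a group correlate strongly; answers across groups don't.
same_group = corr(0, 1)
cross_group = corr(0, 3)
print(round(same_group, 2), round(cross_group, 2))
```

Running this shows a high within-group correlation and a near-zero cross-group correlation, which is the statistical pattern that lets researchers sort questions into trait clusters.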

Note that psychologists cannot be trusted to make true statistical claims. For example, the big five wikipedia page says:

Genetically informative research, including twin studies, suggest that heritability and environmental factors both influence all five factors to the same degree.[71] Among four recent twin studies, the mean percentage for heritability was calculated for each personality and it was concluded that heritability influenced the five factors broadly. The self-report measures were as follows: openness to experience was estimated to have a 57% genetic influence, extraversion 54%, conscientiousness 49%, neuroticism 48%, and agreeableness 42%.[72]

I know this is bullshit because I've researched heritability and twin studies before. (Yet More on the Heritability and Malleability of IQ is very good.) They define "heritability" to refer to a mathematical correlation which doesn't imply anything is passed down genetically from your parents. They do this to be misunderstood, on purpose, by people who think they're talking about the standard English meaning of "heritability". And their twin studies don't address gene-culture interactions, and they know that and dishonestly ignore it. They also look at the variation in traits, rather than the cause of the traits themselves (e.g. they would study why you're a little bit happier than some other people, then announce they found a gene controlling happiness.)

As an example of a gene-culture interaction, a gene for being taller could correlate to basketball success. That doesn't actually mean that basketball success is genetically passed down. Becoming a good basketball player depends on cultural factors like whether basketball is popular in your society or even exists. Nevertheless they will correlate some gene to basketball success and announce they've discovered basketball skill is 60% hereditary. And they will imply this is determined by your genes and outside your control, and that it couldn't be changed by writing a popular blog post with new ideas. But that's completely false and the "heritability" they study says nothing about what interventions would be successful in changing the results. (In other words, when they say a trait is 60% genetically determined, that actually allows for the possibility that an essay would change 100% of the trait. The more educated psychologists know that and make misleading statements anyway because they believe these kinds of caveats don't really matter and the bulk of their conclusions are about right.)

So I read Spillane's paper: Personality or performance?: The case against personality testing in management[1]:

The failure of psychologists to produce laws of behaviour or discoveries of importance has stimulated the study of behaviour called reductionism.

Reductionism is refuted in The Fabric Of Reality, ch. 1.

To explain introverted behaviour by reference to an 'introvert trait' in a person betrays an insensitivity to logic since the 'explanation' is viciously circular.

Introvert is a loose description or label. It's a shortcut which condenses and summarizes many facts. E.g. I observed Joe reading a book instead of socializing three times, and I remember one word instead of three events.

I don't think "introverted" is very useful (it's too vague). But shortcut labels in general are OK, e.g. "Popperian", "Aristotelian", or "Kantian". These are more specific than "introverted" and I find them useful despite some ambiguity.

An explanation says why, how, or because. But calling someone introverted doesn't say why they're introverted. An explanation would say, "Joe is introverted because ..." It would then give a reason, e.g. because Joe found that many people are mean to him because he likes books. After you understand the reason for behavior, you can make better predictions. E.g. you won't be surprised if Joe is more outgoing at a book club meeting.

insurmountable problems for those who explain, say, the behaviour of individuals who withdrew their labour by reference to the traits 'aggression' or 'apathy'

"He didn't do much yesterday because he's apathetic" isn't an explanation. It's just a restatement with a synonym. Apathetic means not doing much. But why doesn't he do much?

This reminds me of: people often say they do stuff because they enjoy or like it. They find it fun or entertaining. And they act like that is an explanation and settles the matter. But why do they like it? What's fun about it? Often they are bad at introspection, uninterested in self-understanding, and don't know.

Maslow’s hypotheses have been vigorously tested and the results, far from supporting his theory, have invalidated it. This would not have surprised Maslow himself who was bothered by the way his conjectures were so readily accepted as true and paraded as the latest example of erudite knowledge in management. [emphasis added]

Sad story.

The results of personality tests, to which I now turn, are communications, not traits or needs, and they are particularly responsive to the demands of the social situation in which individuals are expected to perform. After decades of personality testing we can now say with confidence that the search for consistent personality and motivational traits has been strikingly unsuccessful (Mischel 1968). While self-descriptions on trait measures are reasonably consistent over short periods of time these measures change across social settings (Anastasi 1982). In other words, people answer questions about hypothetical situations in a reasonably consistent fashion, but when it comes to behaving in the world—the way the situation is perceived—the rewards and penalties obtained and the power one is able to exert influence the consistency of behaviour. It is not surprising, therefore, that efforts to predict performance from personality and motivational inferences have been consistently and spectacularly unsuccessful (Blinkhorn 1990; Fletcher, Blinkhorn & Johnson 1991; Guion & Gottier 1965).

The relevant part! It'd be a lot of work to check those cites though. Let's see what details Spillane provides.

For more than 30 years researchers have stated unequivocally that they cannot advocate the use of personality tests as a basis for making employment decisions about people (Guion & Gottier 1965; Guion 1991). Where significant predictable findings are reported they are barely above chance occurrence and explain only a small proportion (less than 10%) of the variance in behaviour which ‘is incredibly small for any source which is considered to be the basis of behavioural variations’ (Hunt 1965, p 10). [emphasis added]

This use of the word "explain" is standard in these fields and really bad. They use "explain" to talk about correlations, contrary to standard English. In regular English, explanations tell you why, how or because. The implication when they say "explain" is it's telling you why – that is, it's telling you about causes. But correlations aren't causes, so this use of language is dishonest.

The rest looks good.

In the face of low validity coefficients

This reminds me of Jordan Peterson who said psychologists used to underestimate their findings because the correlation coefficients they found were low. But then someone figured out you could compare coefficients to other psychology research and call the top 25% of coefficients high no matter how low the actual numbers are! He thought this was a good idea. It reminds me of how poverty is now commonly defined to refer to relative poverty (being poorer than other people, no matter how wealthy you are).

On comparing three respected and widely used personality tests, two researchers found ‘little evidence that even the best personality test predict job performance, and a good deal of evidence of poorly understood statistical methods being pressed into service to buttress shaky claims’ (Blinkhorn & Johnson 1990, p 672).

Doh.

Poor validity is matched by poor internal consistency and test-retest reliability. In Cattell’s (1970) 16 personality factors, for example, only two out of 15 Alpha coefficients of internal reliability reach a statistically acceptable level, so testers cannot know what exactly the test has measured. This finding is not surprising given the vagueness of trait definitions and the fact that factor analysis ‘is a useful mathematical procedure for simplifying data but it does not automatically reveal basic traits. For example, the personality factors identified from ratings may partly reflect the rater’s conceptual categories’ (Mischel 1971).

Of course personality trait categorizations reflect the conceptual categories of the people who invented them. They chose what they thought was a question about personality to ask about in the first place.

It's like IQ tests, which all have a heavy cultural bias. So they don't accurately measure intelligence. But that doesn't necessarily make them worthless. Despite the bias, the results may still correlate to some types of success within the culture the tests are biased towards. In other words, an equally smart person who isn't as familiar with our culture will get a lower IQ score. But he may also, on average, go on to have less success (at getting high university grades or getting a high income) since he doesn't fit in as well.

IQ tests also deal with outliers badly. Some people are too smart for the test and see ambiguities in the questions and have trouble guessing what the questioners meant. Here's an example from testing the child of a friend of mine. They were asked what a cow and a pig have in common. And they tried answers like "mammal" or "four legs" or "both are found on farms". Nope, wrong! The right answer was they were both "animals". The child was too smart for the test and was marked wrong. The child was only told the right answer after the test was over, so they got a bunch of similar questions wrong too... Similarly, I recall reading that Richard Feynman scored like 125 on an IQ test, which is ridiculously low for him. He's the kind of person you'd expect to easily break 175 if the tests were meaningful that far from 100, which they aren't.

The technical deficiencies of most personality tests have been known for many years. Yet they are conveniently ignored by those with vested interests in their continued use. For example, the Edwards Personal Preference Scale is technically deficient in form and score interpretation and rests on poorly designed validation studies (Anastasi 1982). The limitations of the Myers-Briggs Temperament Indicator are well known: ‘The original Jungian concepts are distorted, even contradicted; there is no bi-modal distribution of preference scores; studies using the MBTI have not always confirmed either the theory or the measure' (Furnham 1992, p 60).

Cool. I may look those papers up. I'd really like one for the big five, though!

Testers rely on the validity of self-reports and assume that subjects have sufficient self-insight to report their feelings and behaviour accurately. However, evidence has shown that respondents frequently lack appropriate levels of self-awareness or are protected from exposing themselves by an army of defence mechanisms (Stone 1991).

Of course. So personality tests don't measure your real personality any more than IQ tests measure your real intelligence. But, it could still be the case that people who claim to be agreeable on personality tests do better at university, on average (though without knowing why, you can't understand what changes to our society would ruin the effect). One of the reasons I was interested by Peterson's comments on personality tests is he said basically that the correlations exist and therefore there's something going on there even if we don't know what it is. And he's admitted that some of the big five personality traits aren't really understood; they are just names tacked on to the correlation, which is the real discovery.

Correlations are worthless without any explanation. But they do have some explanatory context to put these correlations in. We already knew that some of people's communications reveal information about their preferences and skills. And it's not just what people openly say that matters, sometimes subtle clues are revealing. In that context, it could theoretically be possible to correlate some communications to some outcomes. It's like reading between the lines but then statistically checking if you're right very often or not.

Then there is the problem of faking which is so widespread that it is amazing that test scores obtained under conditions of duress or vested interest are taken seriously. The use of so called objective self-report tests requires the assumption that the subject’s score is free from artifacts that superficially raise or lower scores.
Yet many researchers list studies which show that personality tests are especially subject to faking (Anastasi 1982; Goldstein & Hersen 1990; Hogan 1991). So serious is this problem that one of the world’s best known personality psychologists, H J Eysenck (1976), will not endorse the use of his personality test where there is a vested interest in obtaining a particular result. Australian researchers have expressed similar reservations about the use of Cattell’s 16 personality factors in selection situations (Stone 1991; Spillane 1985). Yet the testing continues in the absence of countervailing evidence.

Right. I only had in mind voluntary, confidential tests for personal use. If the test can affect getting a job offer, a raise, or college admissions, then of course people will lie. (People are really bad at introspection and personality tests could be a starting point for them to think about themselves. Yes, a biased starting point, but still potentially useful for people who have no idea how to do better. I took some online personality tests in the past and found them interesting to think about. That's in the context of my belief that personality is changeable anyway. I never interpreted the tests as doing anything like authoritatively pronouncing what my life will be like, nor did I expect them to be unbiased or highly accurate.)

The claim that lie scales built into the tests weed out fakers is an insult to the intelligence of those who are subjected to them. Whyte (1956) explained 38 years ago how to fake these tests by summarising the strategies employed by bright people to make fools of the testers.

That sounds interesting. At least the test faking strategies. I bet if I look it up, the "lie scales" will be boringly naive.

Then there is the question of cross-cultural applicability, fairness and discrimination. Most personality tests are derived from an Anglo-American environment and are therefore culturally biased. Such tests have been found to be sexually and racially discriminating (Anastasi 1982; Furnham 1992).

Of course they are. That doesn't make them worthless though. If your company is sexist and racist, then the white male who gets a higher test score may actually do better at your company... (Or have they updated the tests yet to promote "diversity" by biasing them in favor of brown females?)

Also, as far as hiring goes, I believe companies should use work sample tests. Typical interviews are extremely biased to find people who are socially-culturally similar to the interviewer, rather than people who would do a good job. It's also biased to outgoing people who are relaxed, rather than nervous, during interviews. Current hiring practices are so bad that many people are hired for programming positions who can't write working code. The trivial FizzBuzz work sample test actually improves hiring because the other hiring criteria being used, like interviews, are worthless.

Test scores can be interpreted in many ways. The most logical interpretation is that they reflect strategies adopted by the subject for the testing game. To argue that these strategies will necessarily equate to strategies adopted in the world of business is dishonest or naive.

Right. If you take a test because of personal curiosity, then you can try to answer honestly and see if the results say anything you find interesting. If personality tests were used for college admissions, then they'd be a test just like the SAT where you can read books telling you how to give answers that will get you admitted. It'd be funny if people wanted to retake a personality test to try again to get a better score, as they do now with the SAT.

Personality tests assess generalised attitudes and gloss over the rich subtleties of human behaviour.

Of course! Isn't that what they're supposed to do? They are trying to summarize a person – which is very complex – with e.g. scores on 5 continuums. Summary information like that necessarily loses a lot of detail. Does anyone deny it!?

Nowadays it is commonplace to hear apologists for personality testing admit that the tests don’t predict performance, but should be used nonetheless to ensure an appropriate fit of individual with organisational culture.

Seeking cultural fit at a company is one of the main excuses for not basing hiring primarily on work sample tests.

If companies cared more about work performance, they would come up with objective measures allowing no managerial discretion and then hand out bonus pay accordingly. (Some do this, many don't.)

... foist their crude ideas about human nature on to people who frequently don’t have the opportunity to assess their claims or to refuse to participate in the testing game.

A friend of mine got bored while taking an IQ test and skipped the rest of the questions.

I got bored while taking a physics test at school, so I left most of it blank. The teacher didn't want to try to explain to anyone why a smart student who knew the material got a bad grade. Why rock the boat? So he just ignored the test result and asked me to take it again later. Grade falsification like this is common, and the amount of grade falsification depends on the teacher's opinion of you. A friend of mine went through school making friends with his teachers and then turning in many of his assignments weeks late and getting A's anyway.

No doubt one of the reasons for the continuing belief in personality traits and the instruments used to 'measure' them is the result of an outmoded inductivist view of science which emphasises confirming instances.

Yes. And induction is closely related to correlation. Induction involves picking out some pattern (correlation) from a data set and then extrapolating without thinking about explanations of the causal mechanisms. We know the sun will rise tomorrow because we know what it's made out of and what forces (gravity and the Earth's spin) are involved, not because of the correlation between 24 hours passing and the sun rising again.

But induction doesn't work because, among other reasons, there are always infinitely many patterns (and also explanations) which fit any finite data set. So too are there infinitely many patterns to be found in personality test data, and infinitely many explanations compatible with the test results. It's only by critical thinking about explanations that we can understand what's going on. Data can't guide us (contrary to the common claim that correlations hint at causations), we have to guide ourselves using data as a tool.
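A minimal sketch of that point. The data points and the two functions here are made up purely for illustration: both "patterns" agree on every point of the finite data set, yet extrapolate differently, and infinitely many more such functions exist (add any multiple of (x-1)(x-2)(x-3) to f and the data still fits).

```python
data = [(1, 1), (2, 2), (3, 3)]

def f(x):
    return x                                  # the "obvious" pattern

def g(x):
    return x + (x - 1) * (x - 2) * (x - 3)    # agrees on the data, diverges elsewhere

# both functions fit the finite data set perfectly...
assert all(f(x) == y for x, y in data)
assert all(g(x) == y for x, y in data)

# ...but make different predictions beyond it
print(f(4), g(4))  # 4 10
```

The data alone can't tell you which extrapolation to prefer; only criticism of the competing explanations can.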

Final Comments

Even without any tests, people often use their personality as an excuse. They say they can't do some task well (e.g. go to a party and do business networking) because they are "introverted". Or, rather than simply say they don't like networking (and perhaps even giving some reasons), they say it makes them nervous or anxious because of its incompatibility with their personality type.

Many people would prefer to be victims of their personality, to have an excuse for their failures, rather than strive to better themselves.


Footnotes

1: Spillane, R. (1994) 'Personality or Performance? The Case Against Personality Testing in Management.' In A.R. Nankervis & R.L. Compton (eds) Strategic Human Resource Management, Melbourne: Nelson, Ch 14.


Update

Spillane comments on the Big Five in his book Psychomanagement: An Australian Affair:

In connection with any quantity which varies, such as job performance, the variation does not arise solely because of differences among personalities. So the correlation coefficient is used as an indicator of the proportion of the variation in job performance, which is related to personality scores. The correlation thus provides an answer to the question: how much does the variation in personality contribute to the variation in job performance? This question may be answered in terms of variance. The square of the correlation coefficient indicates the proportion of variation in job performance, which is accounted for by differences in personality scores. If the correlation between job performance and personality scores is +.9 then the proportion of variance accounted for is 81% (and 19% is unaccounted for) and personality would be a very strong predictor of job performance. For 50% of the variance of job performance to be accounted for by personality, a correlation coefficient of just over +.7 is required. Since important employment decisions are based on the assumption that personality scores predict job performance, one would expect and hope that the correlation coefficients are greater than +.7 otherwise decision makers will make a large number of rejecting and accepting errors.

What have the meta-analyses found? Four meta-analytic studies of the relationship between job performance and personality scores yielded the following average correlation coefficients: conscientiousness .21; neuroticism .15; extraversion .13; agreeableness .13; openness to experience .12. These results are worrying enough since the much-quoted result of .21 for conscientiousness means that the proportion of variance unaccounted for is 95.6%. Responsible decisions about hiring, promotion or training cannot be made on the basis of these figures.

However, the actual situation is far worse since it makes an important difference to the results when personality scores are correlated with ‘hard’ or ‘soft’ performance criteria. Soft criteria include subjective ratings whereas hard criteria include productivity data, salary, turnover/tenure, change of status. Since personality scores are better predictors of subjective performance ratings than objective performance measures, it is reasonable to conclude that raters rely on personality when evaluating job performance, thereby raising the question whether the relationship between personality and performance is the result of the bias of the rater rather than actual performance. In the much-quoted study by Barrick and Mount, the correlation coefficient dropped from .26 for soft criteria to .14 for hard criteria. The average correlation between the Big Five and job performance (hard criteria) was .07.[2]

The footnote is:

M.R. Barrick & M.K. Mount, ‘The Big Five Personality Dimensions and Job Performance: A Meta-Analysis’, Personnel Psychology, 1991, 44, pp. 1-26.

That article is freely available online. I read some and Spillane seems to be factually correct. It looks like Jordan Peterson is badly wrong.
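The r-to-variance arithmetic in Spillane's passage is easy to check; a minimal sketch:

```python
# The share of variance "accounted for" by a correlation is r squared,
# so small correlations account for almost nothing.
def percent_variance_accounted_for(r):
    return r * r * 100

# Spillane's hypothetical: r = +.9 would account for 81% (19% unaccounted)
assert round(percent_variance_accounted_for(0.9)) == 81

# The much-quoted r = .21 for conscientiousness leaves 95.6% unaccounted for
print(round(100 - percent_variance_accounted_for(0.21), 1))  # 95.6
```

That's why a meta-analytic correlation of .21 looks respectable as a bare number but is useless as a basis for hiring decisions.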


Elliot Temple | Permalink | Messages (3)

Comments on An Eye for An I: Philosophies of Personal Power

Comments on An Eye for An I: Philosophies of Personal Power, primarily about Popper. "You" refers to the author, Robert Spillane, who I emailed.


I appreciated your comments on mistranslating Descartes on the "soul" as about "mind". I'd forgotten that idea. I learned that translation errors are a major issue from Popper. He found another major philosophical mistranslation:

The World of Parmenides, by Popper, in the Introduction:

Plato says explicitly (in the Timaeus, for example, but also in other places) that all he can tell us is at best only ‘truthlike’ and not the truth: it is, at best, like the truth. This term is usually translated by ‘probable’ ... I use the term ‘truthlikeness’, or ‘verisimilitude’, especially for theories. The word that Plato uses is really ‘similar’, and sometimes he says ‘similar to truth’; the word is also connected with ‘pictorial similarity or likeness’, and this seems, indeed, to be the root of the meaning.[1] According to Plato, humans can have only this kind of knowledge; he rarely calls it opinion, which is the usual term used, for example by his contemporary Isocrates, who says ‘We have only opinion.’

Back to your book:

Popper’s philosophy of critical rationalism has attracted widespread criticism because, despite his intentions, it leads to a radical scepticism.

I think you're mistaken about Critical Rationalism and I can defend it from skepticism. The broad issue is that one has to form a new, evolutionary understanding of what knowledge is. Without that, Popper appears to be a skeptic because he did reject some standard concepts of knowledge (not as a matter of taste, but because they just plain don't work).

One of Popper's main achievements was to reconcile knowledge with fallibility. The proof/skepticism false dichotomy had dominated philosophy since Aristotle, and isn't improved by equivocations about probability (99% proven doesn't actually make sense).

Those who embrace Popper’s worldview are concerned, if not obsessed with, deduction (since induction is a myth).

That's true of some of them. But it's not true of David Deutsch, myself, and the other Popperians I typically discuss with. (And I've found the others basically unwilling to discuss philosophy, so I don't think they matter.) I don't think it's true of Popper himself, either.

The basic reason people are attracted to deduction is to prove things. But someone who really understands Popper and fallibilism won't be so interested in proof. Popper himself was more interested in deduction early in his career (the LOGIC of scientific discovery) and less so in his better, later works.

A deductive proof is just as fallible as a standard English argument. Everyone knows what regular, commonsense arguments are. For example this argument is neither induction nor deduction: "Socialism doesn't work because there's no way to do rational economic calculation without prices. Is it better or more efficient to use up two tons of iron or two tons of aluminum in your project, or something else? Without prices you can't figure that out."

Rather than seek to prove things (deduction) or try to sorta approximate proof (as induction does), we should seek to explain and criticize. Which is what informal arguments often already do. So it's informal arguments which should matter most to Popperians!

By finding some of our errors and making fixes -- which can be done with informal arguments -- we can improve. This improvement is knowledge accumulation. It's not inductive. Deduction and logic do play a role sometimes, but aren't a primary focus.

Technically, knowledge is created by evolution. How knowledge is created is a very hard problem, and there have only been a handful of proposed solutions. Induction (wrong). Creationism (knowledge is magically created out of nothing). Design (knowledge is "created" by a designer who already contains all the complexity, which leads to regress). Abduction (inductivist equivocations). And conjectures and refutations (which is a form of evolution).

Evolution isn't deduction (or induction). It's a process of replication with variation and selection. Ideas, like genes, can replicate. The information can be copied, just like duplicating a file on a hard drive or downloading it from someone else's website. The information can also be varied and selected (which is what brainstorming and critical argument are about). This is Popper's position, clarified by Deutsch and myself (Popper didn't have a fully modern understanding of evolution, computation and the way information flows in quantum physics).
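A toy sketch of that replication-variation-selection loop. Everything here is invented for illustration, not a model of real thinking: "ideas" are bit strings and the selection criterion (have as many 1s as possible) is arbitrary.

```python
def variations(parent):
    # replication with variation: every one-bit-changed copy of the parent
    for i in range(len(parent)):
        child = parent[:]      # replication: copy the information
        child[i] ^= 1          # variation: one changed bit
        yield child

best = [0] * 8
for _ in range(10):
    # selection: keep whichever candidate best meets the criterion
    best = max([best] + list(variations(best)), key=sum)

print(sum(best))  # 8 -- the evolved idea fully meets the selection criterion
```

No step deduces the answer and no step induces it from data; the knowledge (information adapted to the selection criterion) accumulates through repeated error-corrected copying.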

For some indication of the physics, see Deutsch's books and his paper:

http://beginningofinfinity.com/books

https://graphene.limited/services--technologies/physics-of-triggering/Trigger-Physics/0104033v1.pdf

Abstract: The structure of the multiverse is determined by information flow.

Relating epistemology to physics is important because, contra a lot of nonsense about the "mind", thinking and knowledge creation are physical processes.

Why does evolution create knowledge? This question relies on correctly understanding what knowledge is. Not proof. Not justified ideas. Not infallible ideas. Not induced ideas. etc. But what?

Knowledge is information which solves problems. It's useful information. It's information with some purpose, some design, some adaptation, so that it actually works to do something.

From here, along with the appropriate background knowledge, it's straightforward to see that evolution creates knowledge. Evolution gradually generates information more and more in line with the selection criteria. That is, it creates information about how to meet the selection pressure. That is, it creates knowledge about how to solve the problem of meeting that selection pressure(s).

This leads to a further issue which is universal knowledge vs. knowledge limited to a particular purpose. Some problems are dumb and their solutions aren't valuable. Which I can answer if you like. It gets even further afield from standard philosophy into uniquely Popperian ways of thinking.

Deductivism, in Popper’s hands, leads to the conclusion that we should prefer the best-tested theories: theories which have survived repeated attempts to falsify them. These theories are not true, but they are to be preferred to theories which have been progressively falsified or theories which have not been subjected to attempts to disprove them.

"These theories are not true" is an error. What Popper meant, and what's true, is, "We don't know for certain that these theories are true". Some of our ideas may in fact be true, but we can't ever prove it with 100% infallible certainty.

Popper's fallibilism is easy to confuse with skepticism because he denies the possibility of proven knowledge, certain knowledge, and justified true belief.

Critics are bothered by the deep scepticism that infects Popper’s philosophy.

Using a medical metaphor ("infects") was a mistake. It's, as Szasz would have put it, the medicalization of everyday life.

Theories are bold guesses riddled with uncertainty and science is a game. Understandably, we want to know upon which theory we should rely on rational grounds for practical action.

That's pretty simple: you should act on an idea you don't know a refutation of.

Why? Because you're trying to avoid error, and refutations consist of pointing out errors.

Rather than complaining about uncertainty, it's crucial to think in terms of error-correcting processes. Popper applied this insight to Democracy (fixing bad rulers and policies without violence is a type of error correction). And it comes up with computer filesystems. The raw data on disk is riddled with uncertainty due to the unavoidable possibility of hardware error. But our use of computers is NOT riddled with uncertainty, because of the use of error-correcting software algorithms involving parity bits, checksums, etc.
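A toy version of the principle. Real filesystems use checksums and stronger error-correcting codes; a single parity bit is the simplest case, enough to detect any one-bit error:

```python
# Store data with one extra parity bit so the total number of 1s is even.
def add_parity(bits):
    return bits + [sum(bits) % 2]

def looks_intact(word):
    return sum(word) % 2 == 0

stored = add_parity([1, 0, 1, 1])
assert looks_intact(stored)         # read back with no hardware error

corrupted = stored[:]
corrupted[2] ^= 1                   # a single-bit hardware error
assert not looks_intact(corrupted)  # detected, so software can re-read or repair
```

The raw bits remain unreliable; it's the error-detecting layer on top that makes the system as a whole reliable. That's the general pattern: control error with correcting processes rather than demanding error-free components.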

Our lives don't have to be riddled with uncertainty, either. We can't prevent all error, but we can keep error under control by using the right thinking methods.

As for practical action, we should rely on the best-tested theory. But why should we prefer any theory at all? Indeed, why should we even accept the results of falsified experiments, for such an acceptance involves us in an inductive inference (an experiment falsified today will achieve the same result tomorrow)?

Remembering and using the results of past tests does not rely on a "the future will resemble the past" style inductive principle.

It instead is based on explanations of physics which say what sorts of changes happen and don't happen. This gives us an understanding of what kinds of changes to expect, or not, on what timeframes. As a simple example, the speed of light limit means I shouldn't expect a person standing a light-second away to change their mind in under one second after I come up with a great new argument.

Our understanding of the world involves many layers of abstraction on top of physics. At a higher level, we understand things like what forces exist and what kinds of things could or could not split the Earth in two. It'd take a huge amount of force to do that, and we know what kinds of physical processes can and can't create that force. So we don't have to worry that our footsteps will break the Earth. Not because the future will resemble the past, but because we understand the material structure of the Earth, its density, the energy bonding the atoms and molecules together, the energy required to separate that much matter in that configuration, etc.

Our understanding of physics used experimental tests in a critical role. We criticize ideas which contradict experiment.

It's up to a theory to say whether it applies at all times, or not.

A theory is welcome to say e.g. "The following is how the physical world worked in the 1900s, and the following is how it will work in the 2000s". But a theory can also say "This is how the physical world works in the 1900s and the 2000s and all other centuries."

An experiment done in the 1900s can refute, or not refute, either of those theories. They also both make predictions telling us what to expect in the future. The difference is one of them predicts the same experiment, done in 2017, will have the same result it had in 1917, and the other says the rules have changed over time and now it will get a different result.

Rather than assuming the future will resemble the past, we have hypotheses which claim the future resembles the past in particular respects, or don't. We then criticize those hypotheses. And lots of that criticism is non-empirical. We ask critical questions like WHY the laws of physics would suddenly and discontinuously change when the millennium passes on our calendar. If there is no answer, we reject that hypothesis as a bad explanation.

The empirical basis of objective science has nothing absolute about it. Science does not rest upon bedrock: it rises above a swamp.

Yes, foundations are highly overrated in philosophy. You can start anywhere and build up solutions to the problems layer by layer. Rather than seek an error-free starting place, we must accept we are fallible and errors are inevitable. Then we must recognize that errors are fixable, and start solving our problems. A swamp can be drained, or a platform can be built on top of it, etc. No matter where we start our inquiry, there will be problems in need of solving, rather than certainty that allows us to relax and retire with no more need for effort.

Popper does not seem too distressed to admit that the acceptance or rejection of observation statements ultimately rests on a decision reached through a process much like trial by jury.

Yes, trial by jury is a reasonable metaphor. Arguments are presented and judgements are made. That's gotten us into space, built skyscrapers and iPhones, etc. It works. The alternatives, rather than considering how to deal with the human condition, yearn for a different world with different rules, lament this one, and encourage the skeptics by claiming that human judgement isn't good enough and needs to be aided by something that gives it more certainty. (And then the skeptics see, correctly, that the "something" offered doesn't actually work.)

Popper tells us that science is neither a system of well-established statements, nor is it a system which steadily advances towards the truth.

That's unfair. Popper tells us science is a system which unsteadily advances towards the truth. Scientific breakthroughs don't come on a regular schedule, but they do happen.

Popper also says we never know how close to the truth we are, on an absolute scale. But that doesn't stop us from getting closer to it.

Science, he says, can never claim to have attained truth, or even a substitute for it, such as probability.

We can claim to have attained knowledge, which is a substitute for truth.

That knowledge is fallible, tentative (could be reconsidered in the future) and conjectural (based on human guesses, rather than methodically built up from foundations offering certainty).


Elliot Temple | Permalink | Messages (0)

Reply to David Stove on Popper

Popper and After: Four Modern Irrationalists, by David Stove criticizes Karl Popper's philosophy of knowledge.

But Stove's criticism doesn't focus on epistemology.

And Stove writes insults and other unserious statements. These are frequent and severe enough to stand out compared to other similar books. I give examples.

The book's organization is problematic as a criticism of Popper because it criticizes four authors at once. It only focuses on Popper for a few paragraphs at a time. It doesn't lay out Popper's position in detail with quotes and explanations of what problems Popper is trying to solve and how his ideas solve them.

First I discuss the book's approach and style. Then I address what I've identified as Stove's most important criticisms of Popperian philosophy.

My basic conclusion is that Stove doesn't understand Popper. His main criticisms amount to, "I don't understand it." Popper contradicts established philosophy ideas and some common sense; Stove doesn't know why and responds with ridicule. Stove is unable to present Popper's main ideas correctly (and doesn't really try, preferring instead to jump into details). And without a big-picture understanding of Popper, Stove doesn't know what to make of various detail statements.

Stove's Focus

Part 2, Ch. 3 begins:

Popper, Kuhn, Lakatos and Feyerabend have succeeded in making irrationalist philosophy of science acceptable to many readers who would reject it out of hand if it were presented to them without equivocation and consistently. It was thus that the question arose to which the first Part of this book was addressed: namely, how did they achieve this? My answer was, that they did so principally by means of two literary devices discussed in Part One. The question to which the present Part of this book is addressed is: how was irrationalist philosophy of science made acceptable to these authors themselves?

Stove says the first part discusses how Popper achieved influence. How did Popper convince readers? What literary devices did Popper use to fool people? And part two (of two) discusses the psychological issue of how Popper made irrationalism acceptable to himself.

By Stove's own account, he's not focusing on debating philosophy points. He does include epistemology arguments, but they aren't primary.

The problem Stove is trying to solve plays a major role in his thinking (as Popper would have said). And it's the wrong problem because it assumes Popper is an irrationalist and then analyzes implications, rather than focusing on analyzing epistemology. If Popper's philosophy is true, Stove's main topics don't matter.

Ridicule

Ch. 2:

It is just as well that Popper introduced this [methodological] rule. Otherwise we might have gone on indefinitely just neglecting extreme probabilities in our old bad way: that is, without his permission.

This is unserious and insulting. Popper's purpose was to discuss how to think well, not to give orders or permission.

To readers in whom the critical faculty is not entirely extinct, the episode has afforded a certain amount of hilarity.

This is mean.

I point out more examples of Stove's style as they come up.

Neutralizing Success Words

Ch. 1 discusses neutralizing success words. A success word like "knowledge" or "proof" implies an accomplishment. Compare "refuted" (a successful argument) to "denied" or "contradicted" (doesn't imply the denial has merit). Neutralizing knowledge yields idea – knowledge means a good idea, whereas an idea could be good or bad. Neutralizing proof yields argument – a proof is a type of successful argument, whereas a mere argument may not succeed.

Stove says Popper equivocates. Often, Popper uses success words with their normal meaning. But other times Popper changes the meaning.

It is the word "knowledge", however, which was the target of Popper's most remarkable feat of neutralization. This word bulks large in his philosophy of science (much larger than "discovery"), and in recent years, in particular, the phrase "the growth of knowledge" has been a favorite with him and with those he has influenced most. Some people have professed to find a difficulty, indeed, in understanding how there can be a growth-of-knowledge and yet no accumulation-of-knowledge.

There is accumulation-of-knowledge. Stove gives no cite, but I have a guess at what he's talking about. This quote is from C&R (Conjectures and Refutations) ch. 10 sec. 1, and there's a similar statement in LScD (The Logic of Scientific Discovery).

it is not the accumulation of observations which I have in mind when I speak of the growth of scientific knowledge, but the repeated overthrow of scientific theories and their replacement by better or more satisfactory ones.

The growth of knowledge doesn't consist of accumulating ever more observations (we need ideas). Nor are we simply accumulating more and more ideas, because scientific progress involves refuting, replacing and modifying ideas too. The growth of knowledge is more about quality than quantity.

Continuing the same Stove passage:

But then some people cannot or will not understand the simplest thing,

More ridicule.

and we cannot afford to pause over them. Let us just ask, how does Popper use the word "knowledge"?

Well, often enough, of course, like everyone else including our other authors, he uses it with its normal success-grammar. But when he wishes to give expression to his own philosophy of science he baldly neutralizes it. Scientific knowledge, he then tells us, is "conjectural knowledge". Nor is this shocking phrase a mere slip of the pen, which is what anywhere else it would be thought to be.

Expressing shock and talking about slips of the pen is not how one debates ideas seriously. But let's discuss conjectural knowledge.

Knowledge is good ideas. Sorting out good and bad ideas is one of the main problems in epistemology.

Conjectural serves two purposes. First, it indicates that knowledge is fallible (and lacks authority). Popper doesn't mean justified, true belief. He's not looking for perfect certainty or absolute guarantees against error.

Second, conjecture is the original source of the good ideas that constitute knowledge. Conjecture is, intentionally, an informal, tolerant, inclusive source. Even myths and superstitions can qualify as conjectures. There's no quality filter.

I think Stove's negative reaction has a thought process like this: No quality filter!? But we want good ideas. We need a quality filter or it's all just arbitrary! "Anything goes" can't achieve knowledge, it's irrationalism.

Popper has an answer:

Standard approaches do lots of quality filtering (sometimes all) based on the source of ideas.

Instead, all quality filtering should be done based on the content of ideas. This is done with criticism and human judgement, which lack authority but are good enough.

So we do have a quality filter, it's just designed differently and put in a different place.

For more, see Popper's introduction to C&R, On the Sources of Knowledge and of Ignorance. Excerpt from sec. XV:

The question about the sources of our knowledge can be replaced in a similar way [to the 'Who should rule?' issue]. It has always been asked in the spirit of: ‘What are the best sources of our knowledge—the most reliable ones, those which will not lead us into error, and those to which we can and must turn, in case of doubt, as the last court of appeal?’ I propose to assume, instead, that no such ideal sources exist—no more than ideal rulers—and that *all* ‘sources’ are liable to lead us into error at times. And I propose to replace, therefore, the question of the sources of our knowledge by the entirely different question: ‘*How can we hope to detect and eliminate error?*’

Continuing the same Stove passage:

On the contrary, no phrase is more central to Popper's philosophy of science, or more insisted upon by him. The phrase even furnishes, he believes, and as the title of one of his articles claims, nothing less than the "solution to the problem of induction" [28].

Note the lack of discussion of Popper's position.

In one way this is true, and must be true, because any problem clearly must yield before some one who is prepared to treat language in the way Popper does. What problem could there be so hard as not to dissolve in a sufficiently strong solution of nonsense? And nonsense is what the phrase "conjectural knowledge" is:

More insults.

just like say, the phrase "a drawn game which was won". To say that something is known, or is an object of knowledge, implies that it is true, and known to be true.

This is ambiguous on the key issue of fallibility.

Is Stove saying all knowledge must be infallible and known to be infallible? It must be proven to be the perfect truth, with complete certainty, so that error is utterly impossible – or else it's not knowledge?

If that's Stove's view of knowledge, then I think he has a choice between irrationalism or skepticism. Because his demands cannot be met rationally.

Or if Stove's position is less perfectionist, then what is it? What allowances are made for fallibility and human limitations? How do they compare to Popper's allowances? And why is Popper mistaken?

(Of course only `knowledge that' is in question here). To say of something that it is conjectural, on the other hand, implies that it is not known to be true.

Does "known to be true" here mean infallibly proven? Or what?

And this is all that needs to be said on the celebrated subject of "conjectural knowledge"; and is a great deal more than should need to be said.

What's going on here is simple. Stove is scornful of a concept he doesn't understand. He doesn't appreciate or discuss the problems in the field. And he doesn't want to. He's unable to state a summary of Popper's view which a Popperian would agree with, and he wants the matter to be closed after three paragraphs.

Sabotaging Logical Expressions

Ch. 2:

What scientists do in such circumstances, Popper says, is to act on a methodological convention to neglect extreme probabilities

For example, how do you know a coin which flips 1000 heads in a row is unfair? Maybe it's a fair coin on a lucky streak.

Well, so what? I'm willing to risk a 2^-1000 chance of misjudging the coin. I'm far more likely to be struck by lightning than get the coin wrong. And the downside of misjudging the coin is small. If the downside were so large that I couldn't tolerate that much risk, I could flip the coin additional times to reduce the risk to my satisfaction (assuming I get more heads, that reduces the probability it's a fair coin).
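As a back-of-the-envelope sketch (my numbers, not Stove's or Popper's), here's how small 2^-1000 is, and how each extra flip halves the residual risk:

```python
# The chance a fair coin produces 1000 heads in a row, computed exactly,
# and how one additional head shrinks the remaining risk.
from fractions import Fraction

p_fair_streak = Fraction(1, 2) ** 1000   # chance a fair coin does this

# Roughly 10^-301: astronomically smaller than everyday risks we accept.
assert p_fair_streak < Fraction(1, 10) ** 300

# One more head halves the remaining probability that the coin is
# fair-but-lucky, so any risk tolerance is quickly reachable.
assert Fraction(1, 2) ** 1001 == p_fair_streak / 2
```

Using exact rationals (`Fraction`) rather than floats matters here only because 2^-1000 underflows ordinary floating point to zero.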

So Popper offers: if you judge it's not a worthwhile issue to worry about, then don't worry about it. This judgement, like everything, could be a mistake, so it's always held open to criticism. That openness doesn't mean we think it's mistaken or spend our time searching for a mistake, it just means we recognize we have no infallible guarantee against error. We have to make fallible, criticizable judgements about what areas are problematic to focus attention on.

Stove dislikes this approach because he thinks you could do it to dismiss any problem. Stove fears arbitrarily creating a methodological convention to neglect any difficulty. The solution to this is criticizing bad methodological conventions. Stove (correctly) sees problems with some conventions that could be proposed, and those problems can be expressed as criticism.

The problem here is Stove's unfamiliarity with Popperian methods. Plus I think Stove wants methodological rules to guide thinking and reduce the scope for human judgement and creativity.

... Popper actually anticipated it. This is `the Quine-Duhem thesis': that "any statement can be held true come what may, if we make drastic enough adjustments elsewhere in the system [...]. Conversely, [...] no statement is immune to revision" [23].

There's an important logical point here. I wonder what Stove's answer to it is (he doesn't say). Popper offered some help with this issue, but not a full solution. That's OK because Popper's general approach of fallible judgement combined with error correction still works anyway.

Philosopher David Deutsch addressed the Quine-Duhem issue better. His two books offer refinements of Popper. (FoR ch. 1, 3, 7-8; BoI ch. 1-4, 10, 13.)

In short: You may try modifying whatever you want to rescue a statement, but those modifications have meaning and can be criticized. Ad hoc modifications commonly ruin the explanation which gave the idea value in the first place, or contradict vast amounts of existing knowledge without argument. If you can come up with a modification that survives immediate criticism, then it's a good contribution to the discussion (sometimes the error really is elsewhere in the system).

Other Thoughts

Ch. 3:

It is a favorite thesis with him that a scientific theory is, not only never certain, but never even probable, in relation to the evidence for it [3].

Right, because logically there's no such thing as evidence for a theory. There's only evidence which does or doesn't contradict a theory. And any finite set of evidence is logically compatible with (does not contradict) infinitely many theories, and those theories reach basically every conclusion.

What does Stove think of this?

These two theses [the one above and one other] will be acknowledged to be irrationalist enough; and they are ones upon which Popper repeatedly insists.

Stove doesn't present and discuss Popper's solution to the logical difficulties of positive support. Nor does Stove present his own solution. Instead he says it "will be acknowledged" that Popper's view is irrational, without argument. Stove treats it as if Popper only talked about this difficulty without also giving a solution. (The solution, in short, is that negative arguments don't face this difficulty.)

Ch. 3:

Scepticism about induction is an irrationalist thesis itself

Rather than present and discuss Popper's solution to the problem of induction, Stove simply asserts that the only alternative to induction is irrationalism. He goes on to discuss Hume at length rather than Popper.

Ch. 5:

One of these features, and one which is at first sight surprising in deductivists, is this: an extreme lack of rigor in matters of deductive logic.

Because Popper's main positions aren't about deduction. The technical reason that conjectures and refutations is able to create knowledge is that it's evolution, not deduction. The key to evolution is error correction, and that's also the key to Popper's philosophy, but Stove doesn't understand or discuss that. Stove only uses the word "evolution" once (in a Kuhn quote where it means gradual development rather than replication with variation and selection).

A core issue in Popper's philosophy is: "How can we hope to detect and eliminate error?" (as quoted earlier). Stove doesn't understand, present, or criticize Popper's answer to that question.


Note: My comments on Popperian thinking are summary material. There's more complexity. It's a big topic. There are books of details, and I can expand on particular points of interest if people ask questions.


Elliot Temple | Permalink | Message (1)

Explaining Popper on Fallible Scientific Knowledge

In The Logic of Scientific Discovery, sec. 85, Popper writes:

Science is not a system of certain, or well-established, statements; nor is it a system which steadily advances towards a state of finality. Our science is not knowledge (epistēmē): it can never claim to have attained truth, or even a substitute for it, such as probability.

Yet science has more than mere biological survival value. It is not only a useful instrument. Although it can attain neither truth nor probability, the striving for knowledge and the search for truth are still the strongest motives of scientific discovery.

What does Popper mean when he denies science is "knowledge (epistēmē)"? He explains (sec. 85):

The old scientific ideal of epistēmē—of absolutely certain, demonstrable knowledge—has proved to be an idol. The demand for scientific objectivity makes it inevitable that every scientific statement must remain tentative for ever.

His point here is fallibility. There's no way to ever prove an idea with finality so that there's no possibility of it ever being overthrown or improved in the future. There's no way to be 100% certain that a new criticism won't be invented later.

People consider Popper a skeptic because they see the options as infallibilism or skepticism. Popper does deny infallibilist conceptions of knowledge, but disagrees that infallibilism is a requirement of genuine knowledge.

In the first quote, Popper uses the word "knowledge" in two different senses, which is confusing. The first use is qualified as "epistēmē" and refers to the view that we must find a way around fallibility or we don't have any knowledge. The second use, in "striving for knowledge", means good ideas (useful ideas, ideas which solve problems) as opposed to random, arbitrary or worthless ideas. The view that we have no way to judge some ideas as better than others is the skeptical position; in contrast, Popper says we can use criticism to differentiate ideas.

I'll now discuss individual pieces of the first quote.

[science] can never claim to have attained truth

Popper means that even if we had an idea with no errors, we have no means to absolutely prove it has no errors and then claim there are none. There are no methods which guarantee the elimination of all errors from any set of ideas.

An idea with no errors can be called a final or perfect truth. It can't be refuted. It also can't be improved. It's an end of progress. Human knowledge, by contrast, is an infinite journey in which we make progress but don't reach a final end point at which thinking stops.

Could there be unbounded progress while some ideas, e.g. 2+2=4, are never revisited? Yes, but there's nothing to gain by being dogmatic, and there are no arguments which yield exceptions to fallibility. Just accept all ideas are potentially open to criticism, and then focus your research on areas you consider promising or find problematic. And if someone has a surprising insight contradicting something you were confident of, refute it rather than dismissing it.

[science] can attain neither truth nor probability

Regarding probability: There's no way to measure how close to the (perfect) truth an idea is, how much error it contains, or how likely it is to be (perfectly) true. The method of judging ideas by (primarily informal) critical arguments doesn't allow for establishing ideas as probable, and the alternative epistemological methods don't work (Popper has criticisms of them, including on logical grounds).

Also, probability applies to physical events (e.g. probability of a die rolling a 6), not to ideas. An idea either is (perfectly) true or it isn't. Probability of ideas is a metaphor for positive support or justification. I've addressed that issue under the heading: gradations of certainty.

Science is not a system of certain, or well-established, statements

What's good about scientific statements if they aren't well-established or certain? They aren't refuted. We've looked, but haven't found any errors in them. That's better than ideas which are refuted. I shouldn't accept or act on ideas when I'm aware of (relevant) errors in them.

My judgements are capable of being mistaken in general. But that isn't a criticism of any particular judgement. Ideas should be rejected due to critical arguments, not due to fallibility itself.

striving for knowledge and the search for truth

The human capacity for error ruins some projects (e.g. attaining absolute certainty, attaining epistēmē). But it doesn't prevent us from creating a succession of better and better ideas by finding and fixing some of our errors.


Elliot Temple | Permalink | Messages (0)

Frozen Comments

female "equality" is a type of feminist social justice, and is a major theme in Frozen.

let’s have 2 female leads and a weak man, and call it equality… uhhhhhhhh

another major social justice idea is that existing social structures are oppressive. which is also a main Frozen theme. it presents following your emotions as the solution to this oppression. the rules are mean, so ignore them and replace with whim and be free and empowered.

lion king says existing social structures can be oppressive or not. depends who’s in charge. Scar was oppressive but that was a solvable problem without getting rid of the structure.

but Frozen says you can’t reconcile existing social structures with your emotions, identity, etc

Moana sings about “who you are” and has some identity shit. and it says this causes some mild friction with society. but that fundamentally Moana is compatible with society and is even celebrated by her society without the society losing its nature or values.

in Lion King, when Simba accepts his societal role, function, duty and responsibilities, he makes things better. his responsibilities weren’t oppressing him, they were guiding him to do the right thing which was best for everyone.

in Pinocchio, when he acts responsibly, he saves his father from the whale and he becomes a real boy. first he acts contrary to his conscience, to society’s ideas, and makes his life a mess. then he acts more like how he knew he should (how society and his conscience say to) and that got his life back in order.

Moana is irresponsible in mild ways. a bit reckless. but what matters is: she decides to do something hard because it’s important for her society in a way that’s bigger than herself. it’s also personally fulfilling. that’s compatible. she decides to take on a burden, a responsibility, a difficult heroic quest.

and the Moana plague, Pinocchio whale and Scar tyrant are all like objective problems in the world. as opposed to Frozen where the primary problem is Elsa being emo, not the political plot. Simba being dumb in the middle is not the primary problem in the movie.

Pinocchio is dumb and is responsible for some of his own problems. but his emotion following is portrayed as bad. he wasn’t supposed to give in to temptation. (as opposed to Frozen where they are supposed to give in to their emotions). and then Pinocchio faces a major challenge in the world after.

Moana is never very dumb. at her worst, she thinks she’s failed and wants to give up. one scene later, after getting some wisdom, she’s back at it.

in Moana, her semi-love-interest is an older man with a large power imbalance in his favor (he’s a demigod…). he’s cocky, funny and initially dismissive to Moana. he’s high status and knows it and is literally willing to say so. Moana is strong enough to push back and earn some respect.

http://www.metrolyrics.com/youre-welcome-lyrics-disney.html

I see what's happening yeah

You're face to face with greatness and it's strange

You don't even know how you feel

It's adorable!

Well, it's nice to see that humans never change

Open your eyes, let's begin

Yes, it's really me

It's Maui, breathe it in

I know its a lot; the hair, the bod

When you're staring at a demigod

What can I say except you're welcome

For the tides, the sun, the sky

Hey, it's okay, it's okay, you're welcome

that’s how his song begins when she meets him. and he shit tests her by sealing her in a cave with a giant boulder and stealing her boat and leaving

Anna doesn’t decide to be a hero. she doesn’t choose to face the dangers like wolves or giant snowman, they just happen to her. she never starts acting responsibly on purpose

she keeps gossiping. she’s super social. that’s not typical of adventure movies. but she spends her time talking and then like actually does things as a minor aside.

simba knows scar is dangerous and faces it anyway. same with pinocchio and sea+whale. same with Moana and maui, crab and fire boss

anna says elsa isn’t dangerous when she goes on journey

she isn’t setting out to face the scary unknown or slay a dragon. she’s just trying to talk with her sister like at home.

Anna’s most heroic moment is when she gets hit by a sword. b/c of self-sacrificing love, not courage. at least she knew she was stepping into danger (tho she was about to die anyway)

Moana sings about trying to choose a role in life (chief or explorer)

roles Anna plays include: clumsy-adorable girl, falling in romantic love girl, helpless girl who needs to be rescued, breadwinning provider, gossip, martyr, badguy puncher (in a comic way without strength), dismissive beta-orbiter-target

she doesn’t really play a princess role, but she does abuse her office to give Kristoff a job

she fakes confidence in a social way a couple times on journey

she never does anything to learn, grow, train, skill-up as is pretty standard in these movies.

the movie is about letting go of the structural organization of society, not having roles in life to guide you, and replacing it all with emotions – especially love.

Frozen also has no strong characters. the giant snowman/monster or random guards are the closest. the hero doesn’t even fight the monster. she just leaves and the bad guys fight it

the movie is so confused. changing the bad guy into the sister will do that, i guess.

the movie doesn’t even know if “cold” is good or bad. it can’t keep its metaphors straight b/c of the role change. she has cold powers. which are good, sorta. but Let It Go ends with “The cold never bothered me anyway.” besides being a lie, this is a use of the regular meaning of cold (as bad)

and in Let It Go (all Frozen lyrics), Elsa sings:

And I'll rise like the break of dawn.

and

Here I stand, in the light of day.

But then when Anna shows up, Elsa sings:

Please go back home, your life awaits

Go enjoy the sun and open up the gates

It's contradictory about the sun. Elsa was singing how she gets to be in the sun now, but then she's like "nah you go be in the sun Anna".

later the trolls sing:

We’re not saying you can change her, ‘cuz people don’t really change 

We’re only saying that love of course is powerful and strange 

People make bad choices if they’re mad, or scared, or stressed 

Throw a little love their way and you’ll bring out their best 

True love brings out their best!

Frozen says Love is an Open Door (that's another song title)

Frozen replaces the hero’s journey with the lover’s journey.

in Frozen, you don’t pioneer by facing the unknown, you pioneer by falling in love... in regular movies you explore the scary unknown world and face challenges in the world. in Frozen, you explore your own emotions, and the challenges are your own emotions, and pretty much the whole world consists of emotions.

Frozen is a super social movie all about talking, relationships and emotions. it's heavy on romance, love, and dishonesty. Anna lies about her assertiveness with Kristoff and later lies about letting him tag along (faking non-needy high status even on a snowy mountain, because she thinks social reality always matters more than real reality). And Anna doesn't want Kristoff to tell the truth to Olaf about summer melting snowmen. And then there's what Elsa sings (emphasis added)

Don't let them in, don't let them see

Be the good girl you always have to be

Conceal, don't feel, put on a show

Make one wrong move and everyone will know

Putting on a show means lying.

A "wrong move" consists of one that lets everyone know. She's trying to hide the truth from them. She wants them blind ("don't let them see"). She wants them not to have knowledge. She considers enlightening and illuminating wrong.

The cold never bothered me anyway.

The cold did bother her. This is such a standard, modern, social lie. People say they didn't care anyway about stuff they did care about. Like if they don't get invited to a party they lie that they didn't want to go anyway.

And what are Elsa's ice powers a metaphor for? They are something about she doesn't fit in, she's not normal, and when she's emotional she can hurt people. I think the movie is ambiguous and Elsa is meant to fit many types of not fitting in, rather than it being about a particular type. As an example, Elsa could be a lesbian and trying to hide it (the voice actors like the idea). That would fit the movie fine. But the movie is vague and it could easily be something else instead, like she's a nervous dork. Or she could think she's a C student working really hard to get A's, but she's not smart enough for the perfect student role and worries she'll be revealed as a fraud if she slips up. Or she could be a non-cheerleader who worries if she slips up with her makeup and lets them see a pimple then people will realize she's not the beautiful girl she tries to present as. There are lots of ways people get nervous, worried and stressed. They try to fit into a role in society, and especially early on they aren't perfect at it and worry people will recognize the mismatch. And then they sometimes lash out when the pressure and stress upsets them. The pressure is often more self-imposed than they realize, but there are also frequently some genuine, important external pressures which they resent.

What is Frozen's solution? if you don't fit in, blame society. do whatever you feel like and people should be happy to support you. Frozen has no respect for the reasons society is organized as it is, no understanding of the purposes of society's structural organization. Frozen seems to think people can change their place in the world about as fast as they can change emotions.


Read my previous comments on Frozen.


Elliot Temple | Permalink | Messages (3)