Social Dynamics Discussion Highlights

This post contains highlights from my discussion at The Law of Least Effort Contributes to the Conjunction Fallacy on Less Wrong. The highlights are all about social dynamics.


I view LoLE [Law of Least Effort] as related to some other concepts such as reactivity and chasing. Chasing others (like seeking their attention) is low status, and reacting to others (more than they're reacting to you) is low status. Chasing and reacting are both types of effort. They don't strike me as privacy related. However, for LoLE only the appearance of effort counts (Chase's version), which to some approximation means public effort, so you could connect it to privacy that way.


I think basically some effort isn't counted as effort. If you like doing it, it's not real work. Plus, if it's hidden effort, it usually can't be entered into evidence in the court of public opinion, so it doesn't count. But my current understanding is that if 1) it counts as effort/work, and 2) you're socially allowed to bring it up, then it lowers status. I see privacy as an important thing helping control (2), but effort itself, under those two conditions, as the thing seen as undesirable, bad, something you're presumed to try to avoid (so it's evidence of failure, or of lacking power, resources, helpers, etc.).


Maybe another important thing is how your work is… oriented. I mean, are you doing X to impress someone specific (which would signal lower status), or are you doing X to impress people in general, where each of them individually is unimportant? A woman doing her make-up, a man in the gym, a professor recording their lesson… it's okay if they do it for the "world in general"; but if you learned they were actually doing all this work to impress one specific person, that would kinda devalue it. This is also related to optionality: is the professor required to make the video? Is the make-up required for the woman's job?

You can also orient your work to a group, e.g. a subculture. As long as it's a large enough group, this rounds to orienting to the world in general.

I think orienting to a single person can be OK if 1) it's reciprocated; and 2) they are high enough status. E.g. if I started making YouTube videos exclusively to impress Kanye West, that's bad if he ignores me, but looks good for me if he responds regularly (that'd put me as clearly lower status than him, but still high in society overall). Note that more realistically my videos would also be oriented to Kanye fans, not just Kanye personally, and that's a large enough group for it to be OK.


Do the PUAs really have a good model of an average human, or just a good model of a drunk woman who came to a nightclub wanting to get laid?

PUAs have evidence of efficacy. The best is hidden camera footage. The best footage that I’m aware of, in terms of confidence that the girls aren’t actors, is from Mystery’s VH1 show and from the Cajun on Keys to the VIP. I believe RSD doesn’t use actors either and they have a lot of footage. I know some others have been caught faking footage.

My trusted friend bootcamped with Mystery and provided me with eyewitness accounts similar to various video footage. My friend also learned and used PUA successfully, experienced it working for him in varied situations … and avoids talking about PUA in public. He also observed other high profile PUAs in action IRL.

Some PUAs do daygame and other venues, not just nightclubs/parties. They have found the same general social principles apply, but adjustments are needed like lower energy approaches. Mystery, who learned nightclub style PUA initially, taught daygame on at least one episode of his TV show and his students quickly had some success.

PUAs have also demonstrated they’re effective at dealing with males. They can approach mixed-gender sets and befriend or tool the males. They’ve also shown effectiveness at befriending females who aren’t their target. Also, standard PUA training advice is to approach 100 people on the street and talk with them. Learning how to have small talk conversations with anyone helps people be better PUAs, and people who get good at PUA become more successful at those street conversations than they used to be.

I think these PUA Field Reports are mostly real stories, not lies. Narrator bias/misunderstandings and minor exaggerations are common. I think they’re overall more reliable than posts on r/relationships or r/AmITheAsshole, which I think also do provide useful evidence about what the world is like.

There are also notable points of convergence, e.g. Feynman told a story (“You Just Ask Them?” in Surely You’re Joking) in which he got some PUA type advice and found it immediately effective (after his previous failures), both in a bar setting and later with a “nice” girl in another setting.

everyone lives in a bubble

I generally agree but I also think there are some major areas of overlap between different subcultures. I think some principles apply pretty broadly, e.g. LoLE applies in the business world, in academia, in high school popularity contests, and for macho posturing like in the Top Gun movie. My beliefs about this use lots of evidence from varied sources (you can observe people doing social dynamics ~everywhere) but also use significant interpretation and analysis of that evidence. There are also patterns in the conclusions I’ve observed other people reach, and in how e.g. their conclusion re PUA correlates with my opinion on whether they are a high quality thinker (which I judged on other topics first). I know someone with different philosophical views could reach different conclusions from the same data set. My basic answer to that is that I study rationality, I write about my ideas, and I’m publicly open to debate. If anyone knows a better method for getting accurate beliefs, please tell me. I would also be happy to pay for useful critical feedback if I knew any good way to arrange it.

Business is a good source of separate evidence about social dynamics because there are a bunch of books and other materials about the social dynamics of negotiating raises, hiring interviews, promotions, office politics, leadership, managing others, being a boss, sales, marketing, advertising, changing organizations from the bottom-up (passing on ideas to your boss, boss’s boss and even the CEO), etc. I’ve read a fair amount of that stuff but it’s not my main field (which is epistemology/rationality).

There are also non-PUA/MGTOW/etc relationship books with major convergence with PUA, e.g. The Passion Paradox (which has apparently been renamed The Passion Trap). I understand that to be a mainstream book:

About the Author
Dr. Dean C. Delis is a clinical psychologist, Professor of Psychiatry at the University of California, San Diego, School of Medicine, and a staff psychologist at the San Diego V.A. Medical Center. He has more than 100 professional publications and has served on the editorial boards of several scientific journals. He is a diplomate of the American Board of Professional Psychology and American Board of Clinical Neuropsychology.

The main idea of the book is similar to LoLE. Quoting my notes from 2005 (I think this was before I was familiar with PUA): “The main idea of the passion paradox is that the person who wants the relationship less is in control and secure, and therefore cares about the relationship less, while the one who wants it more is more needy and insecure. And that being in these roles can make people act worse, thus reinforcing the problems.” I was not convinced by this at the time and also wrote: “I think passion paradox dynamics could happen sometimes, but that they need not, and that trying to analyse all relationships that way will often be misleading.” Now I have a much more AWALT view.

The entire community is selecting for people who have some kinds of problems with social interaction

I agree the PUA community is self-selected to mostly be non-naturals, especially the instructors, though there are a few exceptions. In other words, they do tend to attract nerdy types who have to explicitly learn about social rules.

Sometimes I even wonder whether I overestimate how much the grass is greener on the other side.

My considered opinion is that it’s not, and that blue pillers are broadly unhappy (to be fair, so are red pillers). I don’t think being good at social dynamics (via study or “naturally” (aka via early childhood study)) makes people happy. I think doing social dynamics effectively clashes with rationality and being less rational has all sorts of downstream negative consequences. (Some social dynamics is OK to do, I’m not advocating zero, but I think it’s pretty limited.)

I don’t think high status correlates well with happiness. That goes both for ultra high status like celebs, which causes various problems, and for high status that doesn’t get you so much public attention.

I think rationality correlates with happiness better. I would expect to be wrong about that if I were wrong about which self-identified rational people are not actually rational (I try to spot fakers and bad thinking).

I think the people with the best chance to be happy are content and secure with their social status. In other words, they aren’t actively trying to climb higher socially and they don’t have to put much effort into maintaining their current social status. The point is that they aren’t putting much effort into social dynamics and can focus most of their energy on other stuff.


Elliot Temple | Permalink | Messages (0)

Updating My Less Wrong Commenting Policy

I thought about how to use LW comments better and get along with people better. I wrote notes about it:

  • Write comments that would be appreciated by a new observer, who hasn’t read any of my previous stuff, hasn’t read the sequences, and only skimmed the post he’s commenting under.
  • Only reply to comments I actually think are good. If I see any signs of low quality, hostility, or social aggression, don’t reply.
  • Make it clear to people in my bio, and at the end of some posts, that I’m open to more discussion by request. I can change policies and be more responsive if asked.
  • Avoid meta discussion. Lots of LW people don’t like it. I think meta discussion is very important and valuable, but I can write it at my own forums.
  • I plan to have two clearly distinguished commenting modes. I think a middle ground between them causes trouble and I want to avoid that.
    • Mode one is anything goes, zero pressure to reply, drop anything on a total whim with no explanation. This mode will be my default and means I’ll be replying less than I was.
    • For these comments, I’ll try to make most of my comments standalone and interesting. That means only engaging with people who say something significant and worthwhile. Otherwise I can write a monologue “reply” (that doesn’t engage with specifics of what they said, just talks about the topic) or not reply.
      • A short answer like “Yes” is OK too because it won’t annoy any readers. People who don’t get value from it will know it’s contextual and won’t mind.
      • It’s important to be careful about comments which rely on context but don’t obviously do so. People can think those comments are meant to stand alone when they aren’t. So try to make comments really blatantly be minor followups or else offer standalone value.
    • Mode two is high effort discussion after some mutual agreement to try to use some sort of written rules. Examples of discussion policies that could be used:
      • Discussion to agreement or length 5 impasse chain. Agreement can be about the topic or agreeing to stop the discussion – any sort of agreement is fine.
      • Discussion until agreement or someone claims that they made an adequate case that the other person ought to be able to learn from and be corrected by. They believe their case would persuade a neutral, reasonable observer. Plus minimum two followups to address parting questions (like which text constitutes the adequate case, and isn’t the case inadequate due to not addressing questions X and Y that were already asked?) or potentially be persuaded to change their mind about ending there.
      • Discussion to agreement or to one stated impasse plus two followups to address final questions and have a brief window to potentially solve the impasse.
  • The discussion mode I do not want is a medium effort discussion following unwritten rules (particularly social hierarchy climbing related rules). I prefer either small or large discussion. Either anyone can leave at any moment with no negative judgments, or we set up a more organized, serious discussion project. I don’t want to half-ass adding transaction costs and commitments into discussions. Either make that a big enough deal to agree on and write down some discussion rules and policies, or don’t do it and stick to anarchy. Unwritten rules suck, so either use written rules or no rules.
  • I don’t trust people to be OK with no-commitment discussion, despite having recently been told by several people that that’s how LW works. I think LW mostly works by medium commitment discussion where there are social pressures. I think people routinely are judged for not replying.
    • It’s hard to deal with because asking people if they want a serious discussion, in reply to their topical comment, gets negative responses. They don’t want to state what sort of discussion they want (zero commitment, medium commitment with unwritten rules, or more serious project with written rules). I take people’s general dislike of stating what rules they are playing by, or want me to play by, as a piece of evidence that it’s unwritten rules and medium commitment that are commonly desired. I don’t think people are usually really 100% fine with me ignoring them an unlimited number of times without explanation because, hey, no commitment and no request for anything different. I think they’ll see me as violating unwritten rules saying I should be somewhat responsive. (I personally wouldn’t mind explaining why I think someone’s comments are bad and why I don’t want to reply, but I think the LW forum generally does mind me doing that. If people want to know that, they are welcome to ask me at my forum about particular cases from LW. But I don’t like being asked privately because I want my regular audience to be able to see my answers.)
    • Broadly if anyone has any problem with me, I hope they’ll state it or ask a direct question about it. I don’t expect it but I do prefer it. I know people often ask for such things and don’t mean it, but I have an 18 year track record of public discussion archives showing I actually mean it, and I run a discussion community where such things are normal. Posts like “Why did no one reply to this?” or “Why didn’t you reply to this?” are well within norms at my forums, and I do get asked meta questions like why I dealt with a discussion in a particular way.
    • I don’t plan to ask other people at LW direct questions about problems I have with them, or state the problems, unless they ask me to do that and I find the request convincing (e.g. I can’t find signs of dishonesty or incompetence). Even then, I might ask them to forum switch first because I think other people at LW could easily take that discussion out of context and mind it (the context being the convincing request).
  • I would like some discussions where we try to make an organized, serious effort to seek the truth, resolve some disagreements, etc. But that is not the LW norm. The LW norm is mostly a mix of small and medium discussions. OK. My solution is to make big discussions available by request and otherwise do small discussions. This should be acceptable for both me and others.

View this post at Less Wrong.


Elliot Temple | Permalink | Message (1)

Discussion with gigahurt from Less Wrong

Discussion with gigahurt started here. He wrote (quoting me):

Disagreements can be resolved!

I see your motivation for writing this up as fundamentally a good one. Ideally, every conversation would end in mutual understanding and closure, if not full agreement.

At the same time, people tend to resent attempts at control, particularly around speech. I think part of living in a free and open society is not attempting to control the way people interact too much.

I hypothesize the best we can do is try and emulate what we see as the ideal behavior and shrug it off when other people don't meet our standards. I try to spend my energy on being a better conversation partner (not to say I accomplish this), instead of trying to make other people better at conversation. If you do the same, and your theory of what people want from a conversation partner accurately models the world, you will have no shortage of people to have engaging discussions with and test your ideas. You will be granted the clarity and closure you seek.

By 'what people want' I don't mean being only super agreeable or flattering. I mean interacting with tact, brevity, respect, receptivity to feedback, attention and other qualities people value. You need to appeal to the other person's interest. Some qualities essential to discussion, like disagreeing, will make certain folks back off, even if you do it in the kindest way possible, but I don't think that's something that can be changed by policy or any other external action. I think it's something they need to solve on their own.

Then I asked if he wanted to try to resolve one of our disagreements by discussion and he said yes. I proposed a topic related to what he'd written: what people want from a discussion partner and what sort of discussion partners are in shortage. I think our models of that are significantly different.


Post with gigahurt discussion tree and YouTube video playlist:

http://curi.us/2368-gigahurt-discussion-videos


Elliot Temple | Permalink | Messages (89)

Rationally Ending Intellectual Discussions

Discussions should end gracefully. There should be some clarity about why they’re ending and why that’s rationally acceptable. If someone wants to continue the discussion, they should have the opportunity to see why this ending is reasonable. (Or, failing that, they should have some evidence available to enable them to argue their case that it was unreasonable. They shouldn’t be left with little or no data about what happened.)

If you discuss poorly, it’s important that you can learn from that and do better next time. If you want it, you should have access to some critical feedback, some explanation of the other person’s perspective, or something to enable progress. If you’re running into problems interacting with people, but no one will tell you what the problem is, that’s bad for rational progress.

The more easily discussions can end, the more easily they can start. We don’t want discussing to be a big burden or commitment (it’s OK if a few discussions require high effort, when people have reason to voluntarily choose that, but that shouldn’t be the default).

Discussions can end by mutual agreement. If no one objects to the ending, that’s fine. That’s one way to end discussions. Pretty much everyone agrees on this. The difficulty is that sometimes people don’t agree. Someone wants to stop discussing but someone else wants to pursue the matter further.

I don’t think agreeing to disagree, by itself, is a good reason to end a discussion by mutual agreement. Disagreements can be resolved! There should be an extra factor like agreeing that it’s reasonable to prioritize other stuff (which we may leave implied without explicitly mentioning). There are many problems to work on, and time/energy/etc are scarce resources, so it’s fine to drop a disagreement (for now, indefinitely) if people think they can get more value working on other stuff.

Ending discussions when a single person wants to, for any reason, with no explanation, is problematic. For example, people will end discussions when they (the ideas they are biased in favor of) start losing the argument.

But we don’t want to trap people in discussions. Sometimes one guy has a lot of energy to discuss something forever but you want to stop. There are lots of legitimate reasons to end a discussion.

You can try to explain your reasons for ending a discussion, but no matter what you say, the other guy might disagree. This is a real concern because you don’t want to discuss extra with the most unreasonable people, who give low quality counterarguments to all your reasons for stopping discussion. Meanwhile the most reasonable people tend to voluntarily let you out of discussions, so you discuss with them the least!?

There has to be a way to unilaterally end discussions. You end it without agreement. But it should have some protections against abuse. We don’t want it to be acceptable to arbitrarily end any discussion at any time for no reason or awful reasons. This is for your own sake, too, not just for the benefit of others. If I want to end a discussion and the other guy disagrees, I ought to consider that maybe I’m biased. I might be evading the issue, avoiding being corrected, etc. I shouldn’t just fully trust my own judgment. I should want some discussion policies that make it harder for me to be and stay irrational, biased, unreasonable, etc.

Note: Of course anyone can stop talking at any time for no reason. That’s a matter of freedom. No one should be held hostage. The issue is the consequences for your reputation. What is considered reasonable or rational? Some discussion ending behavior ought to be seen negatively by the community. Some good ways of ending discussions ought to be encouraged, widespread and normal. Some bad ways of ending discussions should be disincentivized.

Note: Even if a discussion actively continues, you could participate in it on less than a daily basis. Discussions don’t have to be fast. Some norms are needed for what is stalling a discussion out (e.g. one reply per decade would be a way to pretend you didn’t end the discussion when basically you did). In my experience, people are usually pretty reasonable about discussion pacing, with a few notable exceptions. (The main problem I see is people who discuss a bunch initially and then never come back to it as soon as they go do something else or as soon as they sleep once.)

So we want a way to end a discussion, even if other people disagree with ending the discussion, but which has some protection against abuse (bad faith), error, bias, irrationality, etc. It should ideally provide some transparency: some evidence about why the discussion is ending that can be judged neutrally, positively or negatively by the audience. And it should provide some sort of feedback or learning opportunity for the guy who didn’t want to stop here.

So here’s the first draft of a policy: When you want to end a discussion, you are expected to write one final message which explains why you’re ending the discussion. You’re also expected to read one more message from the other guy, so he has one chance to point out that you’re making a mistake and he could potentially tell you why you should change your mind about ending the discussion.

What’s good about this policy? It helps limit abuse. It’s harder to end a discussion for a bad reason if you have to explain yourself. The other guy gets some info about what happened. The other guy has a chance at a rebuttal so you could potentially be corrected. And it’s not very time consuming. It puts a small, strict limit on how much more discussion happens after you decide you’d like to wrap it up.

This is a pretty minimal policy. I think it could be a good default expectation for LW to use for any discussion where people have each written 3+ messages (or maybe 5+ to reduce the burden? The number could be tuned if this was tried out for a while). That way it won’t add an extra burden to really small discussions. Tiny discussions would be discouraged by any overhead at all. Tiny discussions are also lower investment, so ending them is a smaller deal. People haven’t formed a reasonable expectation of reaching a conclusion, getting their questions answered, or anything else. They’re just sharing some thoughts on an ad hoc, no-obligation basis. That’s fine. But for more substantive discussions, I think adding a little bit of an ending cost is reasonable.

The minimal policy has some downsides. If we had a policy that takes more effort, we could fix some problems and get some benefits. So I think for discussions that go longer (e.g. 10+ messages each) or when people mutually agree to make it a substantive discussion, then a longer but better approach could be used for unilaterally ending a discussion.

What are problems with the single parting message? It could be unclear. It could ignore a key issue. It could misrepresent the other guy’s positions or misrepresent what happened in the discussion. It could be poorly thought through and show major signs of bias.

What are the problems with a single rebuttal to the parting message that won’t be replied to? If it asks any clarifying questions, they won’t be answered. Any great points could be ignored without explanation.

So as a next step up, we could have a two-part discussion ending. Instead of one more back and forth (parting message + rebuttal), we could have two more back and forths. Initial parting message, rebuttal and questions, final parting message, and then final rebuttal.

BTW, the rebuttals are semi-optional. You can just decide to agree with the guy’s parting message if you want (maybe it makes sense to you once he explains his position). Or instead of a rebuttal or agreement, your other option is to write your own parting message. But you shouldn’t disagree with their parting message and then silently end the discussion with zero explanation of what’s going on.

Note: Parting messages don’t have to be very long. A few clearly written sentences can cover the key points (e.g. your opinion of the discussion, your final comments on some open issues, your reasons for ending). A bit longer is needed if you write fluff. And generally you should write a bit more if the discussion was longer.

With a two back-and-forth discussion ending, it’s still possible to dodge questions, avoid key issues, etc. It can take quite a few iterations to get some stuff cleared up, and that’s if people are being reasonable. Unreasonable people can sabotage discussions indefinitely.

So what about a three back-and-forth discussion ending? Or four or five? Nothing will be perfect or give a guarantee.

Let’s consider other approaches. What about a 5% ending? However many words you wrote in the discussion, you should write 5% of that number in the discussion ending phase. That seems kinda reasonable. That means for every 20 words of discussion you write, you’re potentially obligating yourself to one word later. This might need to be capped for very long discussions. This means your effort to end the discussion gracefully is proportional to the effort you put into the discussion.
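To make that arithmetic concrete, here’s a minimal sketch in Python (the function name and the cap value are my own illustrative choices; the post only says very long discussions “might need to be capped”):

```python
def ending_budget(words_written: int, rate: float = 0.05, cap: int = 500) -> int:
    """Words owed in the discussion ending phase under the 5% proposal.

    rate=0.05 means one ending word per 20 discussion words.
    The cap value is a placeholder, not from the post.
    """
    return min(round(words_written * rate), cap)

# A 4,000-word discussion obligates about 200 ending words.
print(ending_budget(4000))  # 200
```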

This approach still suffers from being a fairly arbitrary cutoff. You just decide to end the discussion, say a few things that hopefully do a good job of exposing your reasoning to criticism and giving the other guy the chance to learn that he’s wrong, and say a few things to wrap up the open issues (like briefly answering a few key questions you hadn’t gotten to, so the discussion is left in a more complete form and your case is left complete enough that someone could learn from you). I think that’s way better than nothing but still has significant potential to go wrong.

One useful technique is agreeing to escalate the commitment to the discussion. You can say “I will discuss X but only if you’ll agree to a 3 back and forth ending if you want to end the discussion when I don’t (which I’ll also agree to if I want to end it unilaterally).” It sometimes makes sense not to want to invest in a discussion only to have it end abruptly in a way that’s unsatisfying and inconclusive from your perspective.

It makes sense to want a discussion to either be productive or else the other guy makes a clear claim – explained enough that you could learn from it – about what you’re doing wrong, so you have the opportunity to improve (or maybe to criticize his error and judge him, rather than being left with a lack of data). Someone could also explain why the discussion isn’t working in a no-fault way, e.g. you and he have some incompatible traits.

Saying “I’m busy” is broadly a bad excuse to end discussions. You were busy when you started, too, right? What changed? Sometimes people actually get significantly busier in an unforeseeable way in the middle of a discussion, but that shouldn’t be very common. Usually “I’m busy” is code for “I think your messages are low quality and inadequately valuable”. That claim isn’t very satisfying for the other guy without at least one example quote and some critical analysis of what is bad about the quote. Often people speak in general terms about low quality discussion without any quoted examples, which also tends to be unsatisfactory, because the person being criticized is like “Uhh, I don’t think I did that thing you’re accusing me of. I can’t learn from these vague claims. You aren’t showing me any examples. Maybe you misunderstood me or something.”

It can be reasonable to say “I thought I’d be interested in this topic but it turns out I’m not that interested.” You shouldn’t say this often but occasionally is OK. Shit happens. You can end a discussion due to your own mistake. When you do, you shouldn’t hide it. Let the other guy and the audience know that you aren’t blaming him. And maybe by sharing the problem you’ll be able to get some advice about how to do better next time. Or if you share the problem, maybe after a bunch of discussions you’ll be able to review why they ended and find some patterns and then realize you have a recurring problem you should work on.

Impasse Chains

Besides trying to end a discussion gracefully with a parting phase where a few things get explained, I have a different proposal: impasse chains.

An impasse is a statement about why the discussion isn’t working. We’re stuck because of this impasse. It’s explaining some problem in the discussion which is important enough to end the discussion (rather than being minor and ignorable). What if no one problem is ruinous but several are adding up to a major problem? Then the impasse is the conjunction of the smaller problems: group them together and explain why the group is an impasse.

Stating an impasse provides transparency and gives the other guy some opportunity to potentially address or learn from the discussion problem.

Impasses are meant to, hopefully, be solved. You should try to say what’s going wrong such that, if it were changed to your satisfaction, you’d actually want to continue.

The other guy can then suggest a solution to the impasse or agree to stop. A solution can be a direct solution (fix the problem) or an indirect solution (a workaround or a better way to think about the issue, e.g. a reason the problem is misconceived and isn’t really a problem). You should also try to think about solutions to impasses yourself.

Sometimes the guy will recognize the impasse exists. Other times it’ll seem strange to him. He wasn’t seeing the discussion that way. So there’s some opportunity for clarification. Lots of the time, when someone wants to end a discussion, it follows some sort of fixable misunderstanding.

So far an impasse is just a way to think about a parting message, and you can hopefully see why continuing at least one more message past the initial impasse claim makes sense. So you may think this impasse approach just suggests having 2-5 messages (per person) to end discussions. And that’s decent – a lot of discussions do way worse – but I also have a different suggestion.

The suggestion is to chain impasses together.

So step 0, we discuss.

Step 1, I say an impasse and we try to solve it. This is an impasse regarding step 0.

Step 2, I decide the problem solving isn’t working in some way (otherwise I’d be happy to continue). So I state an impasse with the problem solving. This is an impasse regarding step 1.

Step 3, we try to solve the second impasse. Either this problem solving discussion satisfies me or I see something wrong with it. If it’s not working, I say an impasse with this discussion. This is the third impasse.

Each time an impasse is stated, the previous discussion is set aside and the impasse becomes the new topic of discussion. (Though a few closing comments on the previous discussion are fine and may be a good idea.) The impasse is either solved (and then you can return to the prior discussion) or leads to a new impasse. This can repeat indefinitely. You can have an impasse about the discussion of an impasse about the discussion of an impasse in the original discussion.

The impasses are chained together. Each one is linked to the previous one. This is different than multiple separate impasses with the original discussion. Here, we’re dealing with one impasse for the original discussion and then the other impasses in the chain are all at meta levels.
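To make the structure concrete, here’s a minimal sketch in Python (class and method names are mine, purely illustrative): stating an impasse pushes a new meta level onto a stack, and solving the current impasse pops you back to the prior discussion.

```python
class ImpasseChain:
    """Chained impasses modeled as a stack of meta levels:
    chain[0] is about the original topic, and each later entry
    is about the discussion of the entry before it."""

    def __init__(self, topic: str):
        self.topic = topic
        self.chain: list[str] = []

    def state_impasse(self, problem: str) -> None:
        # The prior discussion is set aside; this problem is the new topic.
        self.chain.append(problem)

    def resolve(self) -> None:
        # Solving the current impasse returns you to the prior discussion.
        if self.chain:
            self.chain.pop()

    def current_topic(self) -> str:
        return self.chain[-1] if self.chain else self.topic

# Example: two chained impasses, then the second one gets solved.
d = ImpasseChain("original topic")
d.state_impasse("replies seem like non sequiturs")       # step 1
d.state_impasse("problem solving isn't working either")  # step 2
d.resolve()  # back to discussing the first impasse
```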

Note: If you see multiple impasses with the original discussion, often that means you tried to ignore one. Instead of bringing up the first one and trying to do problem solving, you let problems accumulate. It’s possible for multiple impasses to come up at the same time but it isn’t very common. In any case, you can deal with the impasses one at a time. Pick one to focus on. If it gets resolved, move on to the next one.

It doesn’t make sense to ask someone to discuss X further when he sees an impasse with discussion X (meaning a reason that discussion isn’t working). You’ll have to address that problem in some way or agree to stop. Discussing the problem itself is a different discussion than discussing X, so it should be possible to try it.

The more impasses chain, usually the clearer the situation gets. Each level tends to be simpler than the previous level. There are fewer issues in play. It becomes more obvious what to do. This helps but isn’t nearly enough to make impasse chains get addressed (either solve every impasse or agree to stop) in a reasonable amount of time.

Impasse chains often get repetitive. Problems reoccur. Suppose I think you keep saying non sequiturs. We can’t discuss the original topic because of that. So then we try to discuss that impasse. What happens? More non sequiturs (at least from my point of view)!

Some discussion problems won’t affect meta levels but some are more generic and will. You can try to say “OK given our disagreements about X, Y and Z, including methodology disagreements, what can we still do to continue problem solving which is neutral – which makes sense regardless of what’s correct about those open issues, and makes sense from both of our points of view?” Often what happens is you can’t think of anything. It’s hard. Oh well. Then you can mutually agree to end the discussion since neither of you knows a good way to continue.

When impasse chains get long, you tend to either have a lot of issues that are being set aside (given A, B, C, D, E, F, G, H are unresolved, what can we do?) or you have a lot of repetition (every meta level is the same problem, e.g. my belief that your replies are non sequiturs). Repetition is itself a reason to end the discussion. It’s just repeating and neither of us knows a way to fix it, so we can agree to stop.

This kind of ending is satisfying in a way. It gives transparency about why the discussion ended. It means (worst case scenario) I’ve gotten to make my case about what you’re doing wrong, and you’ve failed to answer it substantively, so now I’m reasonably content (not my favorite outcome, but what more would I hope to gain from continuing?).

So a policy can be e.g. to discuss until an impasse chain has the same problem 3 times in a row. Or to discuss until an impasse chain of at least length 5. Generally 5 is plenty. Reaching the 5th meta level can be quite fast and clarifying.
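Both stopping rules are mechanical enough to check automatically. A sketch under the same illustrative assumptions as the class above (the list holds impasse statements, oldest first; the thresholds are the example numbers just mentioned):

```python
def should_agree_to_stop(chain: list[str], max_len: int = 5, repeats: int = 3) -> bool:
    """Stop when the chain reaches length 5, or when the same problem
    has recurred 3 times in a row (repetition being itself a reason
    to end the discussion)."""
    if len(chain) >= max_len:
        return True
    return len(chain) >= repeats and len(set(chain[-repeats:])) == 1
```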

Impasses can be declared in bad faith. People can disagree about what is an impasse. Then what? Discussions have to proceed in ways that make sense to all parties. If someone thinks there is an impasse, then there is one, even if the impasse consists of his misconception. And yes bad faith is possible. What can be done about that? Transparency. Exposing it to daylight. Having systems like this that make bad faith more visible and easier to criticize. Having procedures that create more evidence of bad faith.

In general, by an impasse chain of length 5, if one person is being rational and the other isn’t, it gets really obvious who is who. This gives the rational guy reason to be satisfied with ending the discussion (he knows what happened and why, and he had some chances to try to solve it) and it gives evidence about both parties. If both people are fairly equal in skill or rationality, or both are pretty bad (even if unequally bad), then you can much more easily have muddy waters after an impasse chain of length 5. Oh well. Creating clearer impasse chains is a skill you can work on. You can learn from your attempts to create some clarity and what didn’t work and why, and try to do better next time. And you can try to learn from the other guy’s attempts, too.

The impasse chain system is unnecessary for every discussion. It’s a bit heavyweight and high transaction cost to use all the time. But it’s pretty limited. If you agree to a 5 impasse chain, you’re always 5 messages away from getting out of the discussion. The only reason it’d take more is if the other guy said reasonable stuff. But if he says unreasonable stuff, you’re done in 5 messages, and some of those messages will often be just one paragraph or even one sentence long.

This approach is good when people want to claim they are open to debate and that their views have stood up to debate. That leads to disputes over what it means to be open to debate, etc. I propose being willing to enter into discussions terminated by a length 5 impasse chain (or mutual agreement) as a reasonable criterion for a (self-declared) serious intellectual to say he’s actually open to substantive debate about something and is actually addressing critics.

And the impasse chain approach can be requested when you want to have a discussion if and only if there will be a substantial ending to protect your effort investment. If you want to avoid a case of putting in a bunch of effort now and then the guy just leaves, you can ask for an impasse chain precommitment or 5 parting message precommitment or other way to help protect (and therefore enable) your energy investment into the discussion.

Concluding Thoughts

What’s the typical failure case look like? Joe is trying to have a rational discussion and Bob says “eh, your messages are lame; bye” and won’t answer questions or address arguments. Or, worse, Bob explains even less than that. If Bob would explain that much, at least people could see “OK Bob accused Joe of lame messages and gave zero arguments. Therefore Bob is lame.” Impasse chains or even just parting messages help enable problem solving as well as clarifying what happened in bad outcomes.

Often a discussion looks like this: Joe writes a blog post. Bob says some criticism. Joe sees many flaws in the criticism. Joe explains the flaws. Bob stops talking. This isn’t satisfying for Joe. He never got feedback on his post from post-misconception Bob. And Bob probably didn’t change his mind. And Bob didn’t even say what the outcome was. Or if Bob did change his mind about something, it’s often a partial change followed immediately by like a “you win; bye”. People routinely use conceding as a way to end discussions with no followups: no post mortem (learning about why the error happened and how to fix the underlying cause), no working out the consequences of the right ideas, etc. The correction doesn’t go anywhere.

That’s sad for Joe. He didn’t want to correct Bob just for fun. He wanted to correct Bob so it’d lead to something more directly beneficial to Joe. E.g. Joe’s correction could be criticized and that’d have value for Joe (he learns he was actually wrong in some way). Or Joe corrects Bob and then it leads to further discussion that’s valuable to Joe. If correcting people is pure charity – and you usually get ghosted without them even admitting they were corrected – then people will try to do it way less. There should be rewards like some sort of resolution to the issues and continuation. Discuss productively and keep going (and maybe Joe learns something later or, failing that, at least gets a good student who learns a bunch of things and may be able to suggest good ideas in the future), or say the impasse for why it’s not productive.

Often Bob thinks Joe is doing something wrong in the discussion but won’t explain it enough for Joe to have a reasonable opportunity to learn better. Note that cites are fine. If it’s already explained somewhere, link it. Just take responsibility for the material you refer people to: you’re using it as your proxy, to speak for you, so you ought to have similar willingness to address questions and criticisms as if you’d written it yourself. (But if it’s popular stuff with lots of existing discussion, then you can address FAQs by referring the guy to the FAQ, direct him to that community’s forum to get questions answered and only answer them yourself if the forum won’t, and link other blog posts, books, papers, etc. to address followup issues if those exist.)

Impasse chains help address these problems and help make it harder to end discussions due to your own error and bias. And they provide opportunities to solve discussion problems instead of just giving up at the first problem – or, failing that, at least to achieve more transparency about the problems.

See also

My prior article on Impasse Chains.

My articles on Paths Forward (about discussing in such a way that if you’re wrong and anyone knows it and is willing to tell you, you never block that off with no way for your error to be corrected), including the article Using Intellectual Processes to Combat Bias.

My debate policy.


View this post at Less Wrong.


Elliot Temple | Permalink | Messages (49)

Social Dynamics Summary Notes

These are some summary notes on social dynamics.

  • conformity
    • try to fit in
    • pandering
    • pleasing people
    • avoiding conflict
    • do whatever the group thinks is high status
      • follow trends
    • you need to already have friends. people are impressed by people who other people already like (pre-selection).
    • have standard interests like the right TV shows, music, movies and sports. talk about those. don’t say weird stuff.
      • you’re allowed to have interests people think they should have. lots of people think they should be more into art galleries and operas. you can talk about that stuff to people who don’t actually like it but pretend they want to. they’ll be impressed you actually do that stuff which seems a bit inaccessible but valuable to them.
  • law of least effort
    • being chased, not chasing
    • people come to you
    • opposite of tryhard
    • less reactive
      • don’t get defensive or threatened (important for confidence too)
      • hold frame without showing visible effort
      • but also don’t let people get away with attacking you
    • when you attack people, it should seem like the conflict isn’t your fault, was just a minor aside, no big deal to you, preferably you weren’t even trying to attack them
    • people do what you say
    • you don’t have to do what other people say
    • you generally aren’t supposed to care that much about stuff. instead, be kinda chill about life
      • if you get ahead while appearing this way, it looks like success comes naturally to you. that impresses people. (it should not look like you got lucky)
  • confidence
    • hide weakness
    • pretend to be strong
    • know what you’re doing, have a strong frame, have goals
    • be able to lead
    • best to already be leader of your social group, or at least high up like second in command
  • value
    • DHVs (demonstrations of higher value, e.g. mentioning high value things in passing while telling a story)
    • money, popularity, fame, social media followers, loyal friends, skills, knowledge, SMV (sexual market value, e.g. looks)
    • abundance mentality
    • well spoken, know other languages, can play an instrument or sing, cultured, can cook, etc.
    • virtues like being moral and enlightened are important. these are group specific. some groups value environmentalism, being woke, anti-racist signaling, inclusive attitudes towards various out groups and low status people (poor people, immigrants, disabled, homeless, drug addicts), etc. other groups value e.g. patriotism, toughness, guns, Christianity and limited promiscuity.
  • trend setting
    • this is hard and uncommon but possible somehow
    • mostly only available for very high status people (~top status in a small group can work; it doesn’t have to be overall societal status)
  • non-verbal communications
    • clothes send social signals
    • voice tones
    • eye contact
    • body language
    • posture
    • leaning in or having people lean toward you
  • congruence
    • do not ever get caught faking social stuff; that looks really bad
  • compliance
    • getting compliance from other people, while expending low effort to get it, is socially great.
      • it can especially impress the person you get compliance from, even more than the audience
  • plausible deniability
    • there are often things (communications, actions) that a group understands but won’t admit to understanding the meaning of
    • there are ways to insult someone but, if called on it, deny you were attacking them, and most people will accept your denial
    • there are subtle, tricky rules about what is considered a covert attack that you’re allowed to deny (or e.g. a covert way to ask someone on a date, which you’re allowed to deny was actually asking them out if they say no) and what is an overt attack so denials would just make you look ridiculous.
    • social combat heavily uses deniable attacks. deniability is also great for risky requests
    • you’re broadly allowed to lie, even if most people know you’re lying, as long as it isn’t too obvious or blatant, so it’s considered deniable
    • basically social dynamics have their own rules of evidence about what is publicly, socially known or established. and these rules do not match logic or common analytical skill. so what people know and what is socially proven are different. sometimes it goes the other way too (something is considered socially proven even though people don’t know whether or not it’s true).
      • many social climbing techniques use the mismatch between what is socially known to the group and what is actually known to individuals. it lets you communicate stuff so that people understand you but, as far as the social group is concerned, you never said it.

Overall, high status comes from appearing to fit in effortlessly, while wanting to, not being pushed into it, and not having social problems, weaknesses or conflicts. You can also gain status from having something valuable, e.g. money, looks, fame, followers or access to someone famous. Except in extreme cases, you still need to do pretty well at social skill even when you have value. Value is an advantage but if you act low status that can matter more than the value. If you have a billion dollars or you’re a movie star, you can get away with a ton and people will still chase you, but if you just have a million dollars or you’re really hot, then you can’t get away with so much.

Desired attitude: You have your own life going on, which you’re happy with. You’re doing your thing. Other people can join you, or not. It isn’t that big a deal for you either way. You don’t need them. You have value to offer, not to suck up to them, but because your life has abundance and has room for more people. You already have some people and aren’t a loner. You would only consider doing stuff with this new person because they showed value X – you are picky but saw something good about them, and you wouldn’t be interested in just anyone. (Elicit some value from people and mention it so it seems like you’re looking for people with value to offer. You can do this for show, or you can do it for real if you have abundance. Lots of high status stuff is acting like what people think a person with a great life would do, whether you have that or not. Fake it until you make it!)

People socially attack each other. In this sparring, people gain and lose social status. Insults and direct attacks are less common because they’re too tryhard/reactive/chasing. It’s better to barely notice people you don’t like, and be a bit dismissive and condescending (without being rude until after they’re overtly rude first – and even then, if you can handle it politely while making them look bad, that’s often better).

If you sit by a wall and lean back, you look more locked in and stable, so it appears that people are coming to you. Then you speak just slightly too softly to get people to lean in to hear you better, and now it looks like they care what you say and they’re chasing you.


These notes are incomplete. The responses I’d most value are some brainstorming about other social dynamics or pointing out data points (observed social behaviors) which aren’t explained by the above. Alternatively if anyone knows of a better starting point which already covers most of the above, please share it.


View on Less Wrong.


Elliot Temple | Permalink | Messages (11)

The Law of Least Effort Contributes to the Conjunction Fallacy

Continuing the theme that the “Conjunction Fallacy” experimental results can be explained by social dynamics, let’s look at another social dynamic: the Law of Least Effort (LoLE).

(Previously: Can Social Dynamics Explain Conjunction Fallacy Experimental Results? and Asch Conformity Could Explain the Conjunction Fallacy.)
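For reference, since these posts assume familiarity with the research: the probability rule at issue is that a conjunction can never be more probable than either of its conjuncts,

$$P(A \land B) \le \min(P(A),\, P(B))$$

which is what participants violate when, in the well-known Linda problem, they rate “Linda is a bank teller and is active in the feminist movement” as more probable than “Linda is a bank teller.”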

The Law of Least Effort says:

the person who appears to put the least amount of effort out, while getting the largest amount of effort returned to him by others, comes across as the most socially powerful.

In other words, it’s higher status to be chased than to chase others. In terms of status, you want others to come to you, rather than going to them. Be less reactive than others.

Visible effort is a dominant issue even when it’s easy to infer effort behind the scenes. Women don’t lose status for having publicly visible hair and makeup which we can infer took two hours to do. You’re not socially permitted to call them out on that pseudo-hidden effort. Similarly, people often want to do learning and practice privately, and then appear good at stuff in front of their friends. Even if you can infer that someone practiced a bunch in private, it’s often socially difficult to point that out. Hidden effort is even more effective when people can’t guess that it happened or when it happened in the past (particularly childhood).

To consider whether LoLE contributes to the Conjunction Fallacy experimental results, we’ll consider three issues:

  1. Is LoLE actually part of the social dynamics of our culture?
  2. If so, would LoLE be active in most people while in the setting of Conjunction Fallacy research?
  3. If so, how would LoLE affect people’s behavior and answers?

Is LoLE Correct Today?

LoLE comes from a community where many thousands of people have put a large effort into testing out and debating ideas. It was developed to explain and understand real world observations (mostly made by men in dating settings across many cultures), and it’s stood up to criticism so far in a competitive environment where many other ideas were proposed and the majority of proposals were rejected.

AFAIK LoLE hasn’t been tested in a controlled, blinded scientific setting. I think academia has ignored it without explanation so far, perhaps because it’s associated with groups/subcultures that are currently being deplatformed and cancelled.

Like many other social dynamics, LoLE is complicated. There are exceptions, e.g. a scientist or CEO may be seen positively for working hard. You’re sometimes socially allowed to put effort into things you’re “passionate” about or otherwise believed to want to work hard on. But the broad presumption in our society is that people dislike most effort and avoid it when they can. Putting in effort generally shows weakness – failure to avoid it.

And like other social dynamics, while the prevalence is high, not everyone prioritizes social status all the time. Also, people often make mistakes and act in low social status ways.

Although social dynamics are subtle and nuanced, they aren’t arbitrary or random. It’s possible to observe them, understand them, organize that understanding into general patterns, and critically debate it.

Is there a rival theory to LoLE? What else would explain the same observations in a different way and reject LoLE? I don’t know of something like that. I guess the main alternative is a blue pill perspective which heavily downplays the existence or importance of social hierarchies (or makes evidence-ignoring claims about them in order to virtue signal) – but that doesn’t make much sense in a society that’s well aware of the existence and prevalence of social climbing, popularity contests, cliques, ingroups and outgroups, etc.

Would LoLE Be Active For Conjunction Fallacy Research?

People form habits related to high status behaviors. For many, lots of social behavior and thinking is intuitive and automatic before high school.

People don’t turn off social status considerations without a significant reason or trigger. The Conjunction Fallacy experiments don’t provide participants with adequate motivation to change or pause their very ingrained social-status-related habits.

Even with a major reason and trigger, like Coronavirus, we can observe that most people still mostly stick to their socially normal habits. If people won’t context switch for a pandemic, we shouldn’t expect it for basically answering some survey questions.

It takes a huge effort and established culture to get scientists to be less social while doing science. And even with that, my considered opinion is that over 50% of working scientists don’t really get and use the scientific, rationalist mindset. That’s one of the major contributors to the replication crisis.

How Would LoLE Affect Answers?

Math and statistics are seen as high effort. They’re the kinds of things people habitually avoid due to LoLE as well as other social reasons (e.g. they’re nerdy). So people often intuitively avoid that sort of thinking even if they could do it.

Even many mathematicians or statisticians learn to turn that mindset off when they aren’t working because it causes them social problems.

LoLE encourages people to try to look casual, chill, low effort, even a little careless – the opposite of tryhard. The experimental results of Conjunction Fallacy research fit these themes. Rather than revealing a bias regarding how people are bad at logic, the results may simply reveal that social behavior isn’t very logical. Behaving socially is a different thing than being biased. It’s not just an error. It’s a prioritization of social hierarchy issues over objective reality issues. People do this on purpose and I don’t think we’ll be able to understand or address the issues without recognizing the incentives and purposefulness involved.


View this post on Less Wrong.


Elliot Temple | Permalink | Messages (3)

Asch Conformity Could Explain the Conjunction Fallacy

I also posted this on Less Wrong.


This post follows my question Can Social Dynamics Explain Conjunction Fallacy Experimental Results? The results of the question were that no one provided any research contradicting the social dynamics hypothesis.

There is research on social dynamics. Asch’s conformity experiments indicate that wanting to fit in with a group is a very powerful factor that affects how people answer simple, factual questions like “Which of these lines is longer?” People will knowingly give wrong answers for social reasons. (Unknowingly giving wrong answers, e.g. carelessly, is easier.)

Conformity and other social dynamics can explain the conjunction fallacy experimental data. This post will focus on conformity, the dynamic studied in the Asch experiments.

This post assumes you’re already familiar with the basics of both the Asch and Conjunction Fallacy research. You can use the links if you need reminders.

First I’ll talk about whether conformity applies in the Conjunction Fallacy research setting, then I’ll talk about how conformity could cause the observed results.

Conformity in Groups

The Asch Conformity Experiments had people publicly share answers in a group setting, which was designed to elicit conformist behavior. Should we also expect conformist behavior in a different setting, like that of the Conjunction Fallacy experiments? I suspect the difference in setting is a major reason people don’t connect the Asch and Conjunction Fallacy results.

I haven’t seen specific details of the Conjunction Fallacy research settings (the texts I read didn’t give them), but I think basically people were given questionnaires to fill out, or something close enough to that. The setting is a bit like taking a test at school or submitting homework to a teacher. Roughly: someone (who is not a trusted friend) will look over and judge your answers in some manner. In some cases, people were interviewed afterwards about their answers and asked to explain themselves.

Is there an incentive to conform in this kind of situation? Yes. Even if there were no peer-to-peer interaction (not a safe assumption IMO), it’s possible to annoy the authorities. (Even if there were no real danger, how would people know that? They’d still have a reasonable concern.)

What could you do to elicit a negative reaction from the researchers? You could take the meta position that your answers won’t impact your life and choose the first option on every question. Effort expended on figuring out good answers to stuff should relate to its impact on your life, right? This approach would save time but the researchers might throw out your data, refuse to pay you, ask to speak with you, tell your professors about your (alleged) misbehavior (even if you didn’t violate any written rule or explicit request), or similar. You are supposed to abide by unwritten, unstated social rules when answering conjunction fallacy related questions. I think this is plenty to trigger conformity behaviors. It’s (like most of life) a situation where most people will try to get along with others and act in a way that is acceptable to others.

Most people don’t even need conformity behavior triggers. Their conformity is so automatic and habitual that it’s just how they deal with life. They are the blue pill normies, who aren’t very literal minded and try to interpret everything in terms of its consequences for social status hierarchies. They don’t think like scientists.

What about the red pill autists who can read things literally, isolate scenarios from cultural context, think like a scientist or rationalist, and so on? Most of them try to imitate normies most of the time to avoid trouble. They try to fit in because they’ve been punished repeatedly for nonconformity.

(Note: Most people are some sort of hybrid. There’s a spectrum, not two distinct groups.)

When attending school, people learn not to take questions (like those posed by the conjunction fallacy research) hyper literally. That’s punished. Test and homework questions are routinely ambiguous or flawed. What happens if you notice and complain? Generally you confuse and annoy your teacher. You can get away with noticing a few times, but if you complain about many questions you’re just going to be hated and punished. (If people doubt this, we could analyze some public test questions and I can point out the ambiguities and flaws.)

If you’re the kind of person who would start doing math when you aren’t in math class, you’ve gotten negative reactions in the past for your nonconformity. Normal people broadly dislike and avoid math. Saying “Hmm, I think we could use math to get a better answer to this” is a discouraged attitude in our culture.

The Conjunction Fallacy research doesn’t say “We’re trying to test your math skills. Please do your best to use math correctly.” Even if it did, people routinely give misleading information about how literal/logical/mathematical they want things. You can get in trouble for using too much math, too advanced math, too complicated math, etc., even after being asked to use math. You can very easily get in trouble for being too literal after being asked to be literal, precise and rigorous.

So people see the questions and know that they generally aren’t supposed to sweat the details when answering, they know that trying to apply math to stuff is weird, most of them would need a large incentive to attempt math anyway, and the more rationalist types often don’t want to ruin the psychology study by overthinking it and acting weird.

I conclude that standard social behavior would apply in the Conjunction Fallacy research setting, including conformity behaviors like giving careless, non-mathematical answers, especially when stakes are low.

How Does Conformity Cause Bad Math?

Given that people are doing conformity behavior when answering Conjunction Fallacy research questions, what results should we expect?

People will avoid math, avoid being hyper literal, avoid being pedantic, not look very closely at the question wording, make normal contextual assumptions, and broadly give the same sorta answers they would if their buddy asked them a similar question in a group setting. Most people avoid developing those skills (literalism, math, ambiguity detection, consciously controlling the context that statements are evaluated in, etc.) in the first place, and people with those skills commonly suppress them, at least in social situations if not throughout life.

People will, as usual, broadly avoid the kinds of behaviors that annoy parents, teachers or childhood peers. They won’t try to be exact or worry about details like making probability math add up correctly. They’ll try to guess what people want from them and what other people will like, so they can fit in. They’ll try to take things in a “reasonable” (socially normal) way which uses a bunch of standard background assumptions and cultural defaults. That can mean e.g. viewing “Linda is a bank teller” as information a person chose to tell you, rather than as an out-of-context factoid chosen randomly by a computer, as I proposed previously.

Conformity routinely requires making a bunch of socially normal assumptions about how to read things, how to interpret instructions, how to take questions, etc. This includes test questions and the like, and most people (past early childhood) have experience with this. So many people won’t take conjunction fallacy questions literally.

People like the college students used in the research have taken dozens of ambiguous tests and had to figure out how to deal with that. Either they make socially normal assumptions (contrary to literalism and logic) without realizing they’re doing anything, or they notice a bunch of errors and ambiguities but have figured out a way to cope with tests anyway (or a mix, like noticing only a few of the problems).

Conclusions

Conformity isn’t a straight error or bias. It’s strategic. It has upsides. There are incentives to do it and continue doing it (as well as major costs to transitioning to a different strategy).

If this analysis is correct, then the takeaway from the Conjunction Fallacy shouldn’t be along the lines of “People are bad at thinking.” It should instead be more like “People operate in an environment with complex and counter-intuitive incentives, including social dynamics.”

Social status hierarchies and the related social behaviors and social rules are among the most important features of the world we live in. We should be looking to understand them better and apply our social knowledge more widely. They’re causally connected to many things, especially when there are interactions between people, like interpreting communications and requests from others, as there are in Conjunction Fallacy research.



Can Social Dynamics Explain Conjunction Fallacy Experimental Results?

I posted this on Less Wrong too.


Is there any conjunction fallacy research which addresses the alternative hypothesis that the observed results are mainly due to social dynamics?

Most people spend most of their time thinking in terms of gaining or losing social status, not in terms of reason. They care more about their place in social status hierarchies than about logic. They have strategies for dealing with communication that have more to do with getting along with people than with getting questions technically right. They look for the social meaning in communications. E.g. people normally try to give – and expect to receive – useful, relevant, reasonable info that is safe to make socially normal assumptions about.

Suppose you knew Linda in college. A decade later, you run into another college friend, John, who still knows Linda. You ask what she’s up to. John says Linda is a bank teller, doesn’t give additional info, and changes the subject. You take this to mean that there isn’t more positive info. You and John both see activism positively and know that activism was one of the main ways Linda stood out. This conversation suggests to you that she stopped doing activism. Omitting info isn’t neutral in real world conversations. People mentally model the people they speak with and consider why the person said and omitted things.

In Bayesian terms, you got two pieces of info from John’s statement. Roughly: 1) Linda is a bank teller. 2) John thinks that Linda being a bank teller is key info to provide and chose not to provide other info. That second piece of info can affect people’s answers in psychology research.
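To illustrate the structure of that update, here’s a toy Bayesian sketch (the specific numbers are my own assumptions, chosen purely for illustration):

```python
# Toy Bayesian model of John's terse report. The bare fact "Linda is a
# bank teller" says nothing about activism in this model, but the event
# "John chose to mention only that she's a bank teller" does, because
# John would likely have mentioned activism if Linda still did it.
# All numbers are assumptions for illustration.

p_activist = 0.5  # prior: P(Linda still does activism)

# Likelihood of John's terse report under each hypothesis:
p_report_given_activist = 0.1      # he'd probably have mentioned activism
p_report_given_not_activist = 0.8  # nothing else notable to report

# Bayes' rule: P(H | E) = P(E | H) * P(H) / P(E)
p_report = (p_report_given_activist * p_activist
            + p_report_given_not_activist * (1 - p_activist))
posterior = p_report_given_activist * p_activist / p_report

print(f"P(activist | terse report) = {posterior:.2f}")  # ~0.11
```

On these made-up numbers, the terse report drops the probability that Linda is still an activist from 0.5 to about 0.11, even though, in this toy model, the bare fact that she’s a bank teller carries no information about activism by itself. That’s the kind of inference a socially normal reader makes automatically.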

So, is there any research which rules out social dynamics explanations for conjunction fallacy experimental results?

