
Discussion with gigahurt from Less Wrong

Discussion with gigahurt started here. He wrote (quoting me):

> Disagreements can be resolved!

I see your motivation for writing this up as fundamentally a good one. Ideally, every conversation would end in mutual understanding and closure, if not full agreement.

At the same time, people tend to resent attempts at control, particularly around speech. I think part of living in a free and open society is not attempting to control the way people interact too much.

I hypothesize the best we can do is try and emulate what we see as the ideal behavior and shrug it off when other people don't meet our standards. I try to spend my energy on being a better conversation partner (not to say I accomplish this), instead of trying to make other people better at conversation. If you do the same, and your theory of what people want from a conversation partner accurately models the world, you will have no shortage of people to have engaging discussions with and test your ideas. You will be granted the clarity and closure you seek.

By 'what people want' I don't mean being only super agreeable or flattering. I mean interacting with tact, brevity, respect, receptivity to feedback, attention and other qualities people value. You need to appeal to the other person's interest. Some qualities essential to discussion, like disagreeing, will make certain folks back off, even if you do it in the kindest way possible, but I don't think that's something that can be changed by policy or any other external action. I think it's something they need to solve on their own.

Then I asked if he wanted to try to resolve one of our disagreements by discussion and he said yes. I proposed a topic related to what he'd written: what people want from a discussion partner and what sort of discussion partners are in short supply. I think our models of that are significantly different.


Post with gigahurt discussion tree and YouTube video playlist:

http://curi.us/2368-gigahurt-discussion-videos


Elliot Temple on August 14, 2020

Messages (89)

Glad to speak more about this. I will try to respond to each reply within 24 hours. I will check this forum at least once per day.

Regarding the proposed topic, I hope to have an interesting conversation. I am open to adjusting my own mental model. I hope we can help each other.

I listed a few preliminary qualities I think make a good conversation partner. They were very much off the top of my head. I do not know if the list is complete or has items that I would strike upon further reflection. At the same time I am okay using it as a basis of my current model, with a few clarifications.

First, a clarification regarding receptivity to feedback. A better way of saying this would be the ability to listen and make the other person feel heard. One common technique for this is echoing back the other person's key ideas in your own words, pointing at what makes sense and what doesn't, especially what makes sense. I have found that even people who are wrong say reasonable things. A Less Wrong article touches on it a little (footnote 1 of https://www.lesswrong.com/posts/exa5kmvopeRyfJgCy/double-crux-a-strategy-for-resolving-disagreement in fact). The overall idea is to signal to the other person that you value them.

Second, I would add the quality of avoiding rhetoric, by which I mean language designed to vilify, humiliate, or in some way inflame the emotions of either the partner or the audience. I attribute this concept to Stoicism; a good source is the section How to Speak Wisely in the book How to Think Like a Roman Emperor by Donald Robertson (p. 69, St. Martin's Publishing Group, Kindle Edition).

If I reflect on the mental model I use in more general terms, it goes something like this:

People cannot use reason when they are emotionally aroused, or it's at least much more difficult.

People cannot use reason when they are exhausted.

I want to get through to people with reason.

Therefore, I should do what I can to not arouse their emotions or exhaust them.

In conversation, folks get emotionally aroused when their status, intelligence, competence, or other qualities are threatened. So from that I derive:

tact, respect, listening, anti-rhetoric

In conversation, folks get exhausted when too much comes at them at once, or if they feel like they have no traction with the person they are speaking to. From that I derive:

brevity, listening

I await your model, critique, or any other direction you want to go.


gigahurt at 4:21 PM on August 14, 2020 | #17408 | reply | quote

> I listed a few preliminary qualities I think make a good conversation partner.

I think

1) The qualities that make a good conversation partner

and

2) The qualities that (99+% of) people want in a conversation partner

are significantly different things.

(1) is primarily about rationality. (2) is primarily about social dynamics.

I don't know if you'll agree.

I read you as talking more about (2) which is the one I meant to mainly bring up. E.g. you mention threats to status or self-esteem, which I think are mainly social dynamics issues. (Where did self-esteem come from? That wasn't your term. Besides threats to status, you mention threats to intelligence and competence, which I interpret as meaning threats to the positive evaluation of those traits by yourself (self-esteem) or others (status).)

If that makes sense so far, then what I'd like to bring up next is what people find threatening to status and self-esteem, and how that affects conversations. (You could say your initial thoughts about that or ask me to go first.)


curi at 6:25 PM on August 14, 2020 | #17409 | reply | quote

#17409

I appreciate the distinction you draw between (1) and (2) and I agree I only hit on items from (2).

The only addition I would make, and I might be wrong, is that (1) and (2) feed into a single system where both are essential. A reasonable analogy, I think, would be a discussion as a race car. (1) is the engine and (2) the aerodynamics. Without (1) you won't have the energy or power to get anywhere. But without (2) you will be dragged down by the social dynamics and could end up in an equally lacking position as folks with the opposite problem (too much social, not enough rational).

As an aside, I see things like expertise, bias awareness, avoiding fallacies etc. as things that feed into (1).

I am on board to go after (2) and more specifically the things people find threatening to status and self-esteem.

I'll try to put together a few thoughts, but am interested in your perspective as well:

# Signaling superiority as undermining standing

People keep an informal tally in their head about how they feel or do not feel about people. (social standing)

If person A uses language that makes person B seem inferior in front of C, it can trigger a few things:

C may actually believe B is inferior, lowering B's social standing in C's mind.

C may not believe B is inferior, but B may have a theory of mind that predicts loss of standing.

The effect will be the same: B will feel threatened and dig into their position to save or regain face. They may also disengage completely. The discussion turns from teamwork to war. In most cases from rationality to pure politics.

Some optimizations can be made when delivering information to avoid judgement of the other person's knowledge or capability. They also serve to illustrate how I think things can go wrong:

Avoid saying "Obviously, X". Better to say "X". "Obviously, X" may signal at stupidity on the part of the person one is talking to if it wasn't obvious for them.

Avoid saying "You don't make sense". Better to say "I am having trouble understanding, can you rephrase a different way". This takes the burden of understanding on to the listener rather then the speaker.

In isolation these things don't matter. However if one discusses for 15 minutes with patterns like this sprinkled throughout, the other will very likely bail on the conversation.

# Extremely firm language as a prelude to conflict

When beginning a conversation folks may assert themselves very firmly before providing context or any wiggle room with which to discuss with the other person without coming into direct conflict. They don't speak with any amount of uncertainty.

On average, folks want to be on friendly terms with other people. However if the speaker comes in too firm, it creates a situation where the only way to proceed is by directly contradicting the other person and some people don't like to do that.

If I say "X" and you think "not X" its kind of like I've filled the conversation space with a statement that can only lead to conflict. If I said, "I am not sure what the right answer here is, but I'll put forward X to get us going", I've paved the way for you to be able to express yourself as well. Or even asking a question.

I think this could be a reason Socrates asked a lot of questions.

I'll leave it at that for now. Feel free to take the conversation in a different direction if this is not what you had in mind. I am also interested in your thoughts around what threatens people. The last thing I'll mention is connecting back to the qualities in (2) with an addition: In the same way we should avoid arousing emotions in others, we should keep ourselves calm (if we want to think rationally) and not be offended by use of phrases like 'Obviously, X' etc. We can broaden the people we can discuss with by both being more careful in how we speak, and more thick skinned ourselves. I think it also implies we can speak less carefully around people we know to be thick skinned/emotionally mature/highly rational.


gigahurt at 6:39 AM on August 15, 2020 | #17417 | reply | quote

> In isolation these things don't matter. However if one discusses for 15 minutes with patterns like this sprinkled throughout, the other will very likely bail on the conversation.

I agree except I'd amplify it: I think even a single, isolated use of those things sometimes makes a big difference.

It varies a lot by context, but bailing on a conversation is often a late indicator that happens after people get more and more unhappy for a while. Often you say X, the other guy doesn't like it, and his immediate reaction is to *pretend everything is fine*, so you get poor and delayed feedback. Later you say Y and he bails and blames Y, but really he's had a grudge about X the whole time and it's been ruining things and Y is just a downstream excuse. This is confusing because if you try to learn "don't say things like Y in future discussions" it's not going to solve the real problem.

---

Broadly I agree with your descriptions so far. They are largely compatible with how I see social dynamics. There are lots more details that could be added. E.g. I think social dynamics don't require a third party. They apply one-on-one and even alone (a lot of self-esteem is internalized, habitual status judgments applied to yourself).

*I think I view complying with social dynamics as much more damaging to rational discussion than you do. And so the shortage of anyone willing to go against social dynamics causes a shortage of available rational discussion.*

Why? One reason is people try to hide weakness, ignorance, failure, incompetence, etc. They want to look smart, wise, knowledgeable, competent, etc. But *what if they aren't*? How do you talk to them? They don't want you to speak or reveal the truth. They don't want their claims to be revealed as wrong with clear, decisive arguments. They don't want a clear outcome to debate where they lose.

One attempt to solve this problem is for people to try to be objective, neutral truth-seekers who don't take sides. This has been tried a lot and has helped a bit but basically hasn't solved the problem. I think it doesn't work that well because the main thing people *are* is a body plus a collection of ideas. If you reached tentative conclusions about some topics and then turn out to be wrong, that's meaningful. Just saying "I don't know; I have a neutral opinion" forever is the wrong approach. We do need to make judgments and form opinions and so on, and that does lead to the potential to be wrong, look dumb, etc. Objectivity is great but I don't think "I wasn't wrong; that idea I believed was wrong" works as a fix for the status threat.

Lots of learning and rationality works via error correction but social dynamics are hostile to error correction.

Another reason social dynamics ruin discussions is because people focus on social dynamics. They are more interested in gaining status than figuring out what's true. The social world basically uses its own rules of evidence, argument, plausibility, conclusiveness, etc., which are significantly different than the rational rules. People are so used to thinking in the social way that they don't think in the rationalist, scientific way. This is one of the causes of why people have so much trouble being literal or reading your messages literally (literally = according to the non-social world). And people don't want to change this because their goal is to gain social status not to gain knowledge. And they are dishonest about this, and the widespread dishonesty is a major discussion problem.

Also I posted some stuff about social dynamics a few days ago, idk if you saw it: http://curi.us/2361-social-dynamics-summary-notes


curi at 1:07 PM on August 15, 2020 | #17420 | reply | quote

> I agree except I'd amplify [the way social missteps add up]: I think even a single, isolated use of those things sometimes makes a big difference.

Fair enough. There are certainly people who are extra sensitive.

> It varies a lot by context, but bailing on a conversation is often a late indicator that happens after people get more and more unhappy for a while. Often you say X, the other guy doesn't like it, and his immediate reaction is to *pretend everything is fine*, so you get poor and delayed feedback.

I appreciate this point, and it seems reasonable to me. At the same time it's hard to verify in particular cases. It seems like something that would need to be addressed in people outside of the immediate conversation context. I would view this as a deficiency in social skill. The book 'Verbal Judo' by George Thompson has a model for this. He calls these types of people 'wimps'. He says that they will often go on to display passive-aggressive and backstabbing behavior. Too much aggression makes someone a 'difficult person'. Just the right amount I believe he calls a 'nice guy'. In either case, the book isn't brilliant, but it's an interesting model. His main recommendation for handling wimps is to call them out. This can be difficult if their dissent is completely silent though.

I would hypothesize the way to address the silent variety is through culture, which requires a group with enough cohesion to have social cues and so forth. To me this is maybe something I'd want to elaborate on at some point. Is there a threshold either relationship or culture-wise required before rational conversation can take place?

> I think social dynamics don't require a third party. They apply one-on-one and even alone (a lot of self-esteem is internalized, habitual status judgments applied to yourself).

Yes, I agree with this and it was an oversight on my part. At the same time, depending on the person, I think having others present can intensify negative emotions. Particularly if the third party is a respected authority. (boss, advisor)

> *I think I view complying with social dynamics as much more damaging to rational discussion than you do. And so the shortage of anyone willing to go against social dynamics causes a shortage of available rational discussion.*

I think your assertion here, your elaboration on the reasons they are damaging, and the reality that some people just don't have value to add are all fair. And I agree we may be slightly out of sync on our perception of social dynamics and how much they subtract from rational discussion. I'll elaborate more on why this may be below.

> One attempt to solve this problem is for people to try to be objective, neutral truth-seekers who don't take sides. This has been tried a lot and has helped a bit but basically hasn't solved the problem. I think it doesn't work that well because the main thing people *are* is a body plus a collection of ideas.

> [...]

> We do need to make judgments and form opinions and so on, and that does lead to the potential to be wrong, look dumb, etc. Objectivity is great but I don't think "I wasn't wrong; that idea I believed was wrong" works as a fix for the status threat.

You are probably right on this for a lot of people. I think this is a skill people need to cultivate. The Less Wrong article I referenced earlier explains it as treating ideas as objects. Stoic philosophy has a similar concept referred to as indifference. If I were to use my own terminology from earlier, it contributes to thicker skin. I think the object of one's thoughts should be:

"I want to find the truth about X"

and not

"The truth about X is Y"

I think when the object of someone's thoughts becomes the latter, they couple their identity to their idea. This usually implies rushing to conclusions anyway. I have found folks are more likely to make mistakes due to rushing to conclusions (bias driven) versus actual logical errors. I find the simple act of prefixing what I say with phrases like "My hypothesis is..." is enough to let me completely throw out an idea, without any feeling of loss on my part, if contradicting evidence is presented.

Another helpful thing is if you have convinced yourself in the past that you have value. Decoupling your self-worth from others. I attribute this mostly to the thinking of Alfred Adler. (summarized in the book 'The Courage to be Disliked' by Ichiro Kishimi. This one I did really enjoy and recommend, though it has flaws to be sure)

I think the most rational state is one without emotional attachment, concern for status, standing, and authority. I think smart people are vulnerable to wanting intellectual status, and it makes them less rational.

> Lots of learning and rationality works via error correction but social dynamics are hostile to error correction.

I agree error correction is unavoidable. I think the business world is trying to correct for this with the 'celebrate failure' meme currently flowing through business literature. It's trying to remove the stigma of error correction. Easier said than done.

> Another reason social dynamics ruin discussions is because people focus on social dynamics. They are more interested in gaining status than figuring out what's true. The social world basically uses its own rules of evidence, argument, plausibility, conclusiveness, etc., which are significantly different than the rational rules. People are so used to thinking in the social way that they don't think in the rationalist, scientific way. This is one of the causes of why people have so much trouble being literal or reading your messages literally (literally = according to the non-social world). And people don't want to change this because their goal is to gain social status not to gain knowledge. And they are dishonest about this, and the widespread dishonesty is a major discussion problem.

I agree that some folks are motivated by the wrong things. Two final points / elaborations. I am interested in hearing your thoughts.

1) My mental model for rationality includes the reality of social dynamics. I think yours does too or you wouldn't have thought about it the way you have before now. To elaborate though, what I mean by this is I consider my choices of how to most gracefully share things and prompt feedback as part of my rational process to maximize results assuming other people to be flawed. This may be the source of why I seem to think it's less damaging. I have (perhaps incorrectly) accepted this as something I cannot change, and have focused on how to work around it. I think you are looking for solutions and norms that try and lessen its impact more broadly through institutionalizing appropriate behavior- is that fair? It kind of goes back to my earlier question: If you're game it might be worth exploring as a thought experiment how we could do better than Less Wrong in terms of an online community that maximizes rational participation. (though I think LW is pretty good in terms of articles at least)

2) Your point about the passive-aggressive types really struck home. The concept of type I and type II error came to mind after some reflection. (https://en.wikipedia.org/wiki/Type_I_and_type_II_errors) It seems to me that being overly accommodating in the way I speak could lead to many false positives (type I error) in terms of good conversation partners. I probably humor people too much. Being a jerk will lead to false negatives (someone bails even though they have good knowledge/rationality... type II error). I think you are claiming that the number of true positive conversation partners is lacking. That may be true, and my position that it's not may be due to type I error. I am not sure. In either case. Another question we could consider is, should one optimize to minimize type 1 or type 2 error?

> Also I posted some stuff about social dynamics a few days ago, idk if you saw it: http://curi.us/2361-social-dynamics-summary-notes

I reviewed it briefly when it came through my feed.


gigahurt at 5:55 PM on August 16, 2020 | #17452 | reply | quote

I liked some things about your reply. Lots of it makes sense to me. I also disagree with some stuff.

But there's a problem. It says a lot of different things. The discussion is at risk of getting huge, complicated, overwhelming.

I'd generally prefer to reply to or discuss two things at a time (one of my choice and one of your choice). Maybe a few more but not so many at once.

I also don't want to lose track of or ignore all the stuff you said either.

So what can I do? I decided to organize the discussion into a tree diagram. That'll help keep track of stuff and make it easier to see what's been replied to.

Once the tree is made, new ideas can be added to it, and you can pick any node in the tree and ask me to reply to that next. That'll let you get whatever you think is important answered, step by step.

I am streaming making the tree. It's going quite slowly because I'm also explaining how to do it to InternetRules and walking him through it and discussing what everything means and talking about the topics like what the world is like and social dynamics stuff.

The tree is not done yet and will take days.

So you could watch the first video and comment now, or you can wait for me to finish the tree. Hopefully one of those options is OK with you and the general approach above sounds OK to you.

Streaming Tree Making, part 1: https://youtu.be/ICXs3Km_2yY (working on tree starts around 12.5min into the video)

Note: When the video finishes processing, it'll be 4 hours. If you see it as 2 hours long, that is the *last* 2 hours. It'll take some hours before the full video is available.


curi at 5:15 PM on August 17, 2020 | #17455 | reply | quote

> So you could watch the first video and comment now, or you can wait for me to finish the tree. Hopefully one of those options is OK with you and the general approach above sounds OK to you.

Makes sense, I will comment on the video, but also hold off on posting anything more until the tree is done. I watched the video last night. Thank you for putting that together. I was not in a situation where I could easily take notes, but I did pick up on a few things. Below are my impressions.

* I appreciate the sincerity with which you are trying to understand my position. I will reciprocate as best I can.

* Thank you for the feedback to include 'then' explicitly. I will do that moving forward.

* I will put extra effort into brevity moving forward.

* I will likely experiment with the tree making idea. My current process involves paraphrasing what people say one sentence per paragraph, but I see how exposing the structure could be beneficial. The closest I have come to reading about a technique like this was sentence diagramming in Pinker's "The Sense of Style". Any recommendations for further reading?

* The discussion about David Deutsch and the rationality of conversation not correlating with the intellectual status of one's partner was interesting. It seems to indicate intellectual standing does not select for rationality, or at least its contribution is lower than one would expect.

* The point about getting better team members when teams are selected randomly as a function of MMR was interesting. In games I have played, MMR is a function of winning and losing matches, taking into account the MMR of the opposing team and scaling MMR gain and loss relative to that. If rational discussion could have an MMR, how would it be calculated? (A rough sketch of this kind of rating update follows this list.)

* The entire video left me with the impression that I falsely assumed things about the place you were coming from. I see more clearly how you are trying to build a process by which to discuss things with people rationally, and also the seriousness with which you take that endeavor. I think the bar we each set for minimum acceptable rational communication may be different. My hope is I can re-calibrate/sharpen my own bar by continuing the discussion.
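
For concreteness, the MMR mechanic described above is roughly an Elo-style update. Below is a minimal sketch of that update rule; the K-factor, the function names, and the very idea of rating discussions this way are illustrative assumptions, not anyone's actual system:

```python
# Minimal Elo-style rating update (a sketch, not any real matchmaking system).

def expected_score(rating_a: float, rating_b: float) -> float:
    """Probability that A beats B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400))

def update_rating(rating: float, opponent: float, actual: float, k: float = 32) -> float:
    """Move the rating toward the result, scaled by how surprising the result was.

    actual: 1.0 for a win, 0.5 for a draw, 0.0 for a loss.
    """
    return rating + k * (actual - expected_score(rating, opponent))

# Beating a stronger opponent gains more points than beating a weaker one.
print(round(update_rating(1500, 1700, 1.0), 1))  # about 1524.3
print(round(update_rating(1500, 1300, 1.0), 1))  # about 1507.7
```

The open question for a hypothetical "discussion MMR" would be what counts as a win, a draw, or a loss in a discussion, which is exactly the part the question above leaves unanswered.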

There were many funny moments as well. I also am sure I missed some important ideas in my summary above. In either case, I'll go into standby until the tree is done, and then we can drive further discussion off the tree. Thank you again for putting that together.


gigahurt at 5:07 AM on August 18, 2020 | #17461 | reply | quote

> I will likely experiment with the tree making idea. My current process involves paraphrasing what people say one sentence per paragraph, but I see how exposing the structure could be beneficial. The closest I have come to reading about a technique like this was sentence diagramming in Pinker's "The Sense of Style". Any recommendations for further reading?

Yeah I have written stuff about idea trees (and there’s more to come). See http://curi.us/2311-making-idea-trees

You can also download a collection of example trees at https://cdn.discordapp.com/attachments/692857444807999560/725564061684727888/Tree_Examples.zip

I dislike sentence diagramming compared to grammatical trees. There are grammar experts who make trees like mine. The key words to search for are “dependency grammar”. Wikipedia has some decent info and there are academic papers. I haven’t found a good tutorial for it though. I wrote an article teaching grammar but it’s not tree oriented.
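
For readers who haven't seen one, here is a rough illustration (my own, not taken from the linked material) of a dependency-style tree for a short sentence, written as nested Python data so the head-to-dependent structure is explicit:

```python
# Dependency-style parse of "I will check this forum daily": each head word
# maps to its dependents (relation labels shown as comments, simplified).
tree = {
    "check": {           # root verb
        "I": {},         # subject
        "will": {},      # auxiliary
        "forum": {       # object
            "this": {},  # determiner
        },
        "daily": {},     # adverbial modifier
    },
}
```

Unlike classic sentence diagramming, every node here is just a word, and the structure is carried entirely by which word depends on which.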


curi at 12:45 PM on August 18, 2020 | #17465 | reply | quote

Video 2 gets to the end of #17409

https://youtu.be/hCiwQK9TSX4

The first couple minutes are just dealing with audio problems. I'll try to clip that out later.


curi at 6:04 PM on August 18, 2020 | #17466 | reply | quote

peter m

Just wanted to say a few things,

would a diagram help with the Gigahurt tree?

I know that there is one somewhere - the screenshot of the discussion a few days ago was hardly clear & very small, getting it enlarged whilst trying to follow the discussion thread was difficult to say the least.

Oh and perhaps something important concerning my own discussions on the (Author) Q+Answers site. Being a committed objectivist among perceived & open subjective philosophers answering there.. I persisted on the site over a very long time period (with a reachable history under "peter m") and something quite odd happened; one of the most prolific philosophy answerers there turned out to Non-believe in philosophy, he disbelieved that there could even be a knowledge-producing-Philosophy-CATEGORY At all..

Evident when I confronted him directly there recently.

I could see then that those "subjectives" of the same ilk & belief-principles could hardly have thought differently from this Alfa Subjective Philosopher too.

so my Question now is...

Have any of you found this on ANY Other INDEPENDENT Social Media site/s?

I will look under the above "Discussion info" in case or in hope of an answer.

Peter m.


one of those on Yahoo Answers "philosophy category". at 6:04 PM on August 18, 2020 | #17467 | reply | quote

peter m

"social dynamics are hostile to error-correction".

Well they quite obviously would be wouldn't they

(those "social dynamics").

So it says something when anyone RECONFIRMS THAT THEY

ARE AN Objectivist, don't you think?

And in-that-vein I posted one previous first (Gigahurt

tree) reply.. which was about NON- objectivists

identified so to speak ; and what they are likely to

believe-in or not (this is important because as you

may know objectivists are very interested in the future

of ANY discussion.

It's not-too-obvious future path and it's more obvious

environmental & deductive LOCAL impact initially.

Peter m.


one of those on Yahoo Answers "philosophy category". at 12:30 PM on August 19, 2020 | #17475 | reply | quote

Peter, I don't understand you. If you want to communicate with me, I think you'll have to get a lot closer to standard communication practices in our society and keep things simpler.


curi at 12:32 PM on August 19, 2020 | #17476 | reply | quote

I watched the second video. Thank you for the continued critique. I am learning a lot. I'll skip my impressions for now to keep the tree manageable.


gigahurt at 6:34 PM on August 19, 2020 | #17490 | reply | quote

peter m

o.k. Fair enough.

When I read in your recent comments critiquing the "Introduction to (Youtube independent) Philosophy course" you said significant things there about Popper, or rather the modern objectivist philosopher. You wrote, "..After making initial videos as if Popper didn't exist the Philosophy crash course people actually made a video about Popper ..(which)..basically has no clue WHAT POPPER SAID & didn't discuss his refutation of induction.. doesn't discuss objective knowledge ..or.. Conjectures and Refutations.. the video discusses.. the demarcation criterion and the standard myth about Popper.. THEN THE VIDEO LIES (at 7.05 of https://rationalessays.com/lying) with POPPER SAID TO BELIEVE whatever-is-most-probable given our current data. So "jesus christ" if you have NO CLUE Don't make an Educational video spreading misinformation to a million(!) people."

Quite beautiful and pertinent i think that short piece of commenting is if you don't mind me saying. Condensing a lot of worthwhile information in a succinct form.

I tried to access your (2) https.rationalessays.com but could not so I assume that some part of that million people (or more) could not either.

For what it's worth I am working on some non-standard type information which I hope will clarify that objectivist BEHAVIOUR as against the misinformed non-progressive inductive-type-behaviour, specifically futuristic prediction/ predicting especially as in LOCAL environments (distinct ones as against say the objective earth/ earthly environment).

I was more than pleasantly surprised then when I saw and read your above objective philosophy assessment -if I can call it that- of the PBS Introduction to Philosophy series particularly in which You-didn't-thankfully-Hold-Back in your criticism.

This was just the thing that I have been doing I Hope(!) for years on the Yahoo media Portal of "Yahoo Answers" and it felt good to see that someone had the courage to tell-it-like-it-is on the "portal" of Youtube (with a bigger audience and with a Philosophy "teaching Crash Course" so to speak.

(Not long after my recent journey on Yahoo I came to the surprise conclusion there that few if not all the resident answerers were not Just Subjective Philosophy believers; but that THEY COULDN'T POSSIBLY BELIEVE that there existed (there) a Category Like the Philosophy.. One which actually produced knowledge (I'd been taught the opposite in a unique History of Ideas University course many years ago now).

Virtually contradictory of course I've realised how-to-put-it in simpler, less stark language such as The subjective (philosophy) practitioners on Yahoo MUST believe that THEY CAN'T produce any New philosophy Knowledge as such; but they CAN KNOW who are-the-ones-who do. And so their belief in such "authoritative-fake-news-philosophy" et al relies on a steady-stream-of-such fake or erroneous philosophy - Authoritative Philosophy selected by ANYONE of them so to speak.

So I am indebted to your own comment and appreciated task of finding the philosophy general truth in those Introductory Youtube Crash Course Philosophy Videos.

Also extremely glad for the whole endeavour, one which provides a great opportunity for others who are seeking better objective truth in general philosophy, here and now.


one of those on Yahoo Answers "philosophy category". at 4:09 PM on August 20, 2020 | #17521 | reply | quote

#17521 My article on lying moved to https://www.elliottemple.com/essays/lying

I sometimes can't tell which of my stuff you're talking about. Links help. But I'm glad you liked the stuff.

But please stop posting in this thread (= on this page). You're off topic and we're trying to discuss something else here. You can search for a relevant topic to post at (use "List All Posts" or "Archive" on the sidebar, or google search for "site:curi.us search terms") or click "Open Discussion" on the sidebar and post about any topic there.


curi at 4:20 PM on August 20, 2020 | #17523 | reply | quote

I watched the third video. You mentioned impressions would be okay as they aren't adding to the tree, so I'll share a few from both the second and third video. I will avoid the discussion topic:

* The footnote I mention when I reference the LW Double Crux article can be found between the conclusion and the labyrinth illustration. You can ctrl-f on '[1]'. I don't think you need the context because it seems you already understand.

* Overall, I continue to appreciate the way you are engaging with the topic and me. It makes me want to do better. Some ways I will try to improve: Reduce the number of new topics per interaction, break down sentences more, break down paragraphs more, focus on clarity (Superfluous 'In either case,'), and be mindful of word choice (hope v. goal). You are demonstrating talk does not have to be cheap, or treated cheaply.

* I appreciate the asides about Popper. I recently discovered Popper, specifically 'The Open Society and Its Enemies' (which I discovered via Fooled by Randomness by Taleb). One of the points I took away was the distinction between institutionalized power versus discretionary power. Institutional power being formalized external process like laws. Discretionary power being when decisions are left to a person or organization to judge as they go along. His argument is that institutional power is superior because it externalizes the process in a way that it can be improved. I see a theme of your work as an attempt to institutionalize effective communication practice. Is that fair? (your emphasis on text discussions, the fact you have a discussion policy, your impasse chain concept, and bringing up how to rationally end a discussion on LW all seem to support this)

* I checked out your website. I have also read nearly everything by Rand, but it's been 15 years since I finished the last one. They motivated me through college, but have faded as I consumed more books over the years. I feel like I need to revisit them after interacting with you. I have also never read David Deutsch, so I plan to read The Fabric of Reality. Again, this is just to reaffirm my appreciation.


gigahurt at 7:07 PM on August 20, 2020 | #17524 | reply | quote

> One of the points I took away [from Popper in OSE] was the distinction between institutionalized power versus discretionary power. Institutional power being formalized external process like laws. Discretionary power being when decisions are left to a person or organization to judge as they go along. His argument is that institutional power is superior because it externalizes the process in a way that it can be improved. I see a theme of your work as an attempt to institutionalize effective communication practice. Is that fair? (your emphasis on text discussions, the fact you have a discussion policy, your impasse chain concept, and bringing up how to rationally end a discussion on LW all seem to support this)

Yes. This idea is a theme from many thinkers. I think it's commonly known as rule of man vs. rule of law, which is what I often call it. There are lots of articles online using those terms. Rule of man allows bias and favoritism to affect the outcome a lot. Written rules and transparency work better. It's important for not just government but also discussions.


curi at 10:45 PM on August 20, 2020 | #17525 | reply | quote

#17524 I posted this (written a few days ago) since it's relevant to what you said.

http://curi.us/2367-forum-moderation-needs-anti-bias-policies


curi at 2:54 PM on August 21, 2020 | #17528 | reply | quote

#17524 Taleb is brilliant. One of the best thinkers of our time and he suffers no fools. Dude is great at catching out charlatans like Pinker.

If that dude is not in your corner you better believe you've taken a few wrong turns.


Periergo at 6:49 PM on August 21, 2020 | #17535 | reply | quote

Fails to get the point.


Periergo at 6:53 PM on August 21, 2020 | #17539 | reply | quote

Discussion Tree Created!

> There are certainly people who are extra sensitive.

I commented on this in my video. I’m not writing about it for now because it’s tangential.

> Businesses trying to celebrate failure.

I think that’s an attempt at a patch, band aid, or creating special case/exception, rather than reforming the core principles of social hierarchies.

> > It varies a lot by context, but bailing on a conversation is often a late indicator that happens after people get more and more unhappy for a while. Often you say X, the other guy doesn't like it, and his immediate reaction is to *pretend everything is fine*, so you get poor and delayed feedback.

> it's hard to verify in particular cases

Yeah but you can get evidence. I talked about this in my video. Some ways:

- People admit later that they were upset about X.

- People later say something bitter that brings X back up.

- People usually get upset if you point out that they lied. They often don’t admit this right away but then bring it up later, either to complain about it (like as evidence of my bad faith) or they try to accuse me of lying to get me back.

- After X you pick up on signs the person is upset and say so. They admit it (but weren’t going to give you this info promptly on their own initiative). Or they deny it then later admit it.

- You come up with models to explain what’s going on, e.g. their message quality drops and you attribute this to being upset after critically considering other explanations.

> You are probably right on this for a lot of people. I think this is a skill people need to cultivate.

I agree that there’s a rationality skill about looking at ideas objectively, arguing all sides of the debate not just “your” side, etc. Lots of people are bad at this but some people get pretty good at it.

I was trying to say something different. I think unlimited neutrality doesn’t work in life. We need to act. We need to reach conclusions about ideas and act on them. Sometimes you’ll turn out to be mistaken about an idea that you based a bunch of life choices on. That idea is part of your life. It’s connected to you. If it’s wrong, you need to rethink more stuff than if some other idea is wrong. This is something we can rationally deal with. Mistakes happen. Sunk costs happen. That’s life. You have to make a reasonable try for good results given the available information/resources/options/etc. If you do that, no shame on you, rationally. Dealing with this takes more than knowing you shouldn’t be attached to your ideas. Ideas are part of us. We do accept, act on, use, etc. some ideas. We make choices and don’t always just keep our options open, sit on the fence, remain neutral, etc. And that’s fine and good.

With skill, we can deal with this from a rational perspective. But social status judgments aren’t fair or reasonable about it. So if you have some attachment to the status hierarchy, this kinda thing can screw you. The rules of social create winners and losers, bad incentives, etc.

> My mental model for rationality includes the reality of social dynamics. I think yours does too or you wouldn't have thought about it the way you have before now.

Yes.

> […] I consider my choices of how to most gracefully share things and prompt feedback as part of my rational process to maximize results assuming other people to be flawed. This may be the source of why I seem to think it's less damaging. I have (perhaps incorrectly) accepted this as something I cannot change, and have focused on how to work around it.

I think playing the social game destroys everyone who does it a lot. It turns them dishonest and irrational. You can do it a bit and get away with it, but not very much. (There’s lots more to say about this. This could be an area to focus on next.)

On a related note, because ideas/truths are all connected, being dishonest or irrational about one thing is a *huge* problem. Trying to isolate and contain errors is really problematic. One limit on your learning or progress can flip the switch from unbounded progress to bounded progress, which is a big deal (and related to The Beginning of Infinity). This is an approximation. The issue is more complicated.

> I think you are looking for solutions and norms that try and lessen its impact more broadly through institutionalizing appropriate behavior- is that fair?

Yes. (I talked about this a bit in the video. Feel free to ask more about it.)

>> It kind of goes back to my earlier question: If you're game it might be worth exploring as a thought experiment how we could do better than Less Wrong in terms of an online community that maximizes rational participation. (though I think LW is pretty good in terms of articles at least)

Some comments on LW I wrote 3 years ago, the second time I tried to talk with them: Less Wrong Lacks Representatives and Paths Forward.

I’ve been a forum owner since 2002. The FI community goes back to 1994. I think it has better discussion norms than LW, but its norms clash with standard social dynamics considerably more, so they don’t work well for most people. (I do think there’s room for a variety of forums. Actually there are way too few. But there’s also lots of room for improvement while maintaining useful diversity.)

> […] I think you are claiming that the number of true positive conversation partners is lacking. […] should one optimize to minimize type 1 [false positive] or type 2 [false negative] error?

I think false negatives are way worse. Great people (positive outliers) matter the most. Try to attract and not alienate them. If you false negative someone cool, that’s a big deal. Partly because of the shortage I claim exists. And partly because false negatives are more permanent than false positives (in a discussion context).

False positives are *really really bad* **if** you stop thinking. If you decide someone is great and your judgment is final, that’s worse than missing out on interacting with someone awesome. But if you keep evaluating people, then if you false positive someone you get another chance to change your mind every time you interact further. So false positives can get corrected.

Stuff like my Debate Policy is designed to enable false negatives (where I reject people I shouldn’t) to be corrected. This is important for me partly so I don’t miss out on good people and partly because it makes it easier for me to negative someone. Before I had this policy I spent more energy engaging with people I’d already judged negatively, just in case. The policy is a cheaper way to protect myself.

----

Here is my tree so far. Any corrections for the nodes already in it? E.g. if you’re not satisfied with how some of your points were represented.

Tree PDF (includes ideas from this comment):

https://curi.us/files/diagrams/curi-gigahurt-discussion-tree.pdf

I can export another format if it’d be better: Freemind, OPML, png, Markdown, Text, MindNode.

Video 4 link (solo tree making + writing this reply):

https://youtu.be/VOhJKNCTmaU


curi at 12:20 AM on August 22, 2020 | #17554 | reply | quote

Now that we have the tree, I am going to share explicitly what I think about each branch in terms of priority. That way you know what I am most interested in exploring first. I am open to compromising though. It seems we can make the conversation more focused as we go back and forth without fear of losing unexplored branches.

I think it would be good to focus on one or two things at a time. You can decide, I responded to everything for this round.

---

>> Businesses trying to celebrate failure.

> I think that’s an attempt at a patch, band aid, or creating special case/exception, rather than reforming the core principles of social hierarchies.

Yes, you are right. It doesn't move the dynamics away from social status; it just changes the way the dynamics work. I may virtue signal by revealing small failures, and then attempt to hide big ones to maximize my status.

As an aside, having dealt with this meme directly, I've found it often becomes an excuse to become careless instead of one to test bold hypotheses.

I think we generally agree here and I vote for de-prioritizing it in favor of some of the other tracks.

---

>>>> In isolation these things don't matter. However if one discusses for 15 minutes with patterns like this sprinkled throughout, the other will very likely bail on the conversation.

>>> Often you say X, the other guy doesn't like it, and his immediate reaction is to *pretend everything is fine*, so you get poor and delayed feedback.

>> it's hard to verify in particular cases

> People admit later that they were upset about X.

In addition to the quoted text, you also mention other ways people end up revealing their hidden mental state which I agree with.

I am in agreement with the assumption that a person may pretend everything is fine, which hurts the other person's opportunity for real time feedback and therefore the ability to get better.

The root of this thread started with social missteps that cause people to disengage. I am looking for clarity as I see two ways of interpreting how it has developed:

1) Delayed feedback makes it hard to improve in this space, which makes it hard to find rational conversation partners. The argument being that rational conversation partners are in short supply for folks without certain social graces and they may never be able to improve.

2) People pretend things are okay, and pretending is itself irrational. The argument being that rational conversation partners are in short supply partially because people will pretend everything is okay rather than confront disagreement, which is an irrational act. Thus the total pool of rational partners is lower in proportion to the number of pretenders.

I more or less asserted number 1. I think you are asserting number 2. We could dig into 2.

I would vote for continuing this thread at high priority if you are asserting 2, or something like it, as it will let us flesh out our social dynamics models more and understand how compatible they are with rational discussion.

If you are asserting 1, I think it goes into solutions, and I'd de-prioritize until we get through other problem oriented topics.

---

>>> Objectivity is great but I don't think "I wasn't wrong; that idea I believed was wrong" works as a fix for the status threat.

>> You are probably right on this for a lot of people. I think this is a skill people need to cultivate.

> [Agree on skill of looking at issues from different angles...but there is a distinction between that skill and unlimited neutrality] I think unlimited neutrality doesn’t work in life. We need to act. We need to reach conclusions about ideas and act on them. Sometimes you’ll turn out to be mistaken about an idea that you based a bunch of life choices on. That idea is part of your life. It’s connected to you. If it’s wrong, you need to rethink more stuff than if some other idea is wrong. This is something we can rationally deal with. [...]

I agree there is a distinction between considering various perspectives, and neutrality towards ideas. I also would draw further distinction between my position and unlimited neutrality.

To clarify my position:

1) I think it's productive to assume we may know things, and therefore act on that knowledge.

2) I think all knowledge should be considered provisional. (there is a difference between having a best current theory and thinking the matter is settled) We should doubt what we know. My mantra for this is "the feeling of being wrong without realizing it is identical to the feeling of being right". I think this helps to prevent overconfidence. Overconfidence leads to errors. Error correction is easier when you have already considered you might be wrong, and have been transparent about it with others. In many cases you never actually enter a state of being wrong, because you scaled your confidence appropriately with the evidence.

3) The rational person would never willfully make a mistake in the present moment, or not use their best model, but would be able to identify past mistakes as new information comes in.

This would lie between unlimited neutrality, which I am interpreting as similar to Greek skepticism (beliefs are never justifiable), and dogmatic belief. Dogmatic belief being:

>What else can we say but that they chose not to think what their opponents were thinking? Here then is a trenchant clue to understanding our subject: belief marks the line at which our thinking stops, or, perhaps better, the place where we confine our thinking to a carefully delineated region.

>Carse, James P.. The Religious Case Against Belief (p. 44). Penguin Publishing Group. Kindle Edition.

If my clarification is admissible in your mind, I think we are in agreement.

To me, the next thing to discuss is whether or not this attitude is one that:

4) Many other people share, or is teachable to many people

5) Sufficiently removes fear of status loss, or sufficiently insures oneself against status loss to enable rational conversation

I am willing to continue to drill into this and think it may be productive to settle if you are interested as well. At this point, this mental model is still one of my best to help me engage in rational conversation.

---

> But social status judgments aren’t fair or reasonable about [being wrong]. So if you have some attachment to the status hierarchy, [being wrong] can screw you. The rules of social create winners and losers, bad incentives, etc.

Yes, to me major contributors to the unfairness of this situation are hindsight bias and the curse of knowledge. People trick themselves into thinking they could have done better without adequately accounting for 20/20 hindsight and expertise differences.

---

>> My mental model for rationality includes the reality of social dynamics. I think yours does too or you wouldn't have thought about it the way you have before now.

> Yes.

>> […] I consider my choices of how to most gracefully share things and prompt feedback as part of my rational process to maximize results assuming other people to be flawed. This may be the source of why I seem to think it's less damaging. I have (perhaps incorrectly) accepted this as something I cannot change, and have focused on how to work around it.

> I think playing the social game destroys everyone who does it a lot. It turns them dishonest and irrational. You can do it a bit and get away with it, but not very much. (There’s lots more to say about this. This could be an area to focus on next.)

> On a related note, because ideas/truths are all connected, being dishonest or irrational about one thing is a *huge* problem. Trying to isolate and contain errors is really problematic. One limit on your learning or progress can flip the switch from unbounded progress to bounded progress, which is a big deal (and related to The Beginning of Infinity). This is an approximation. The issue is more complicated.

To make sure I am understanding the meaning: it sounds like you are identifying two issues that come from incorporating social dynamics into one's operating model versus attempting to eliminate them:

1) Social dynamics or social games transform a rational person into an irrational one over time. (aside: Peter Keating maybe? I'm on chapter 5 of my second reading of The Fountainhead)

2) Even if you can remain rational, accepting the game will place limits on progress.

In terms of (1), I have confessed previously I do this. Does it make me uncomfortable? Yes. Could it be destroying me? Maybe.

I think a major point of consideration is whether or not refusing to play the game is better or worse for the person or group more broadly. For example, if I stopped playing the game at work, then in all likelihood I would be removed from the game. If I play the game, then my boss might treat me according to status, but I can treat my own reports as rationally as possible, encourage truth seeking, etc. If I don't play the game, then my boss or peers may disown me and replace me with someone less pro-rationality, pro-democracy, etc. If the world is irrational and rejects those who won't play the game, then a way to change the world is to play the game, infiltrate the hierarchy, and change things within your span of control. If rationality is better for progress, then my sub-group should win out in the long term. (confounding factors could prevent this)

I am open to further discussion here. I would rank it high in priority as it may be a crux of why our perspectives differ, and also I have personal incentive to not be destroyed.

In terms of (2), I agree. Wherever social concern blocks coordination towards the common good this will be limiting. If we had a world where folks prioritized truth over social status, we'd get to consensus a lot faster and be able to act on that consensus. I would rank exploring this (point 2) lower in priority only because I think we are in agreement.

---

>> I think you are looking for solutions and norms that try and lessen its impact more broadly through institutionalizing appropriate behavior- is that fair?

> Yes. (I talked about this a bit in the video. Feel free to ask more about it.)

I would want to talk more about this, but I would de-prioritize for now, as I think gaining resolution on some of the other topics first could make for a richer conversation about solutions, which this topic represents to me.

---

>> It kind of goes back to my earlier question: If you're game it might be worth exploring as a thought experiment how we could do better than Less Wrong in terms of an online community that maximizes rational participation. (though I think LW is pretty good in terms of articles at least)

> Some comments on LW I wrote 3 years ago, the second time I tried to talk with them: Less Wrong Lacks Representatives and Paths Forward.

> I’ve been a forum owner since 2002. The FI community goes back to 1994. I think it has better discussion norms than LW, but its norms clash with standard social dynamics considerably more, so they don’t work well for most people. (I do think there’s room for a variety of forums. Actually there are way too few. But there’s also lots of room for improvement while maintaining useful diversity.)

To me this also feeds into a solutions conversation I would like to have at some point, after we close on the social dynamic problem discussion. Assuming you are still interested by that time.

---

>> […] I think you are claiming that the number of true positive conversation partners is lacking. […] should one optimize to minimize type 1 [false positive] or type 2 [false negative] error?

> I think false negatives are way worse. Great people (positive outliers) matter the most. Try to attract and not alienate them. If you false negative someone cool, that’s a big deal. Partly because of the shortage I claim exists. And partly because false negatives are more permanent than false positives (in a discussion context).

> False positives are *really really bad* **if** you stop thinking. If you decide someone is great and your judgment is final, that’s worse than missing out on interacting with someone awesome. But if you keep evaluating people, then if you false positive someone you get another chance to change your mind every time you interact further. So false positives can get corrected.

> Stuff like my Debate Policy is designed to enable false negatives (where I reject people I shouldn’t) to be corrected. This is important for me partly so I don’t miss out on good people and partly because it makes it easier for me to negative someone. Before I had this policy I spent more energy engaging with people I’d already judged negatively, just in case. The policy is a cheaper way to protect myself.

We are in agreement here. I appreciate the distinction you make between the costs of reversing each kind of judgement. Specifically, the idea that false positives can be easily fixed so long as you keep thinking, but false negatives do immediate damage that is hard to repair. The payoff matrix seems to be one where we want to minimize false negatives the most, and then do our best with the false positives.

I think there is another distinction here between negatively judging someone as a poor conversation partner, and accidentally saying something that causes them to false negative you. They both cause the same missed opportunity. I think this ties back to the other threads that deal with the specifics of social dynamics. One of my primary goals is to avoid getting false negatived. (I think of myself as a true positive, haha)

This particular thread could go many directions. What I wonder most is how one can more effectively signal being a true positive to try and quickly dissolve concerns around social status, and also quickly understand if the other person has truth seeking goals as well. This may fall into solutions as well, so I will vote to de-prioritize this one until the earlier points are resolved.

---

Metatopics:

> Here is my tree so far. Any corrections for the nodes already in it? E.g. if you’re not satisfied with how some of your points were represented.

Based on my first pass I am satisfied.

> I can export another format if it’d be better: Freemind, OPML, png, Markdown, Text, MindNode.

I am okay with PDF as a consumer.

If you want to share update responsibilities, I am open to that. Though, I would need some guidance. I have Xmind at the moment, and it looks like it only opens .xmind files.

> Video 4 link (solo tree making + writing this reply):

> https://youtu.be/VOhJKNCTmaU

I have not watched this yet. I prioritized understanding and thinking about your post. I will check it out sometime this weekend.


gigahurt at 7:12 PM on August 22, 2020 | #17575

Watched the 4th video. Thank you for that. A few comments:

* I will check out the inferential distance material on less wrong.

* I enjoyed your description of exponential vs. linear backoff when it comes to finding common ground. I have employed a linear approach in my own interactions (unconsciously). Framing it as a binary tree problem is helpful and a model I plan to experiment with. (see the sketch below this list)

* Let ideas die in your stead - Popper. Thank you for this quote/mantra

* Glad you came back to Less Wrong as I may not have heard of you otherwise.
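
To make the binary-tree framing above a bit more concrete, here is a minimal sketch of how I picture it (my own illustration of the idea, not necessarily curi's exact method): assume a chain of claims ordered from most basic to most advanced, assume agreement is monotonic along the chain (if you agree with a claim you also agree with everything more basic), and let agrees() stand in for actually asking the other person. The claim list and callback are hypothetical.

```python
def find_common_ground(claims, agrees):
    """Binary search for the deepest claim in an ordered chain that both
    parties share. `claims` runs from most basic to most advanced;
    `agrees(claim)` asks the other person and returns True/False.
    Returns the index of the deepest shared claim, or -1 if none."""
    lo, hi = 0, len(claims) - 1
    deepest_shared = -1
    while lo <= hi:
        mid = (lo + hi) // 2
        if agrees(claims[mid]):
            deepest_shared = mid  # agreement here; probe deeper
            lo = mid + 1
        else:
            hi = mid - 1          # disagreement; back off toward basics
    return deepest_shared
```

With a chain of 64 claims this takes about 6 questions, versus walking back one claim at a time.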


gigahurt at 1:27 PM on August 23, 2020 | #17598

peter m

"Let ideas DIE IN YOUR STEAD".

Whatever did sir Karl mean by that?

It would take a lot more than the space or time allowed

here for the long version of that (of which there is at

least one on Yahoo Answers now).

But fortunately Popper provides informally so to speak

a way of introducing this if not explaining it also.

If you had read your Popper more extensively you too

may have seen it.

For he many times introduces the notion of "c.r.i.t.i.c.i.s.m." doesn't he?

And this criticism replaces already outdated "criticism"

and at least significantly MODIFIES IT.

And by that I mean significantly MODIFIES the notion

and idea of HOW WE LOGICALLY USE criticism.

I mean, I wouldn't here have been motivated to authoritatively mention the fact that I don't see much

IF ANY of this word in the above arguments.

Whereby this-is-a-pity so to speak ; for I and probably others may have got the idea that arguments and logic

trees that come following criticism of Youtube's Introduction on Crash Course on Philosophy, may be some SIGNIFICANT-part-of-Philosophy, important philosophy as Least AS Important as that criticism of the above Youtube

(philosophy) Course.

And you should understand that it's great that we can all Agree-to-be- Less Wrong than say "them out there".

But gentlemen I can assure you that without significant

and simplified (= Aim + Method) objective evaluation

of ideas then our evaluation will carry little weight ;

little authoritative convincing of say the subjective

philosophy of each of us & more importantly those

who cannot answer back or who will never have a relevant

voice.

Which as you may know includes almost every person

and writer deceased.

(that this is an answer to the fact that we must work

alone in our thankless task of agreement in a foolproof

joint method of confirmatory knowledge, I hope you

agree. My intervention here is just a reminder that

one of our best, if not the best, objective philosophers

around NEVER worked without using his own version

of cutting-edge-criticism. The word as a learning

mechanism which should carry it's user to a desired

aim, one PERHAPS of confirmation and PERHAPS of

authority too).


see Yahoo Answers at 9:54 PM on August 23, 2020 | #17601

> 1) Social dynamics or social games transform a rational person into an irrational one over time. (aside: Peter Keating maybe? on chapter 5 of second reading of Fountainhead)

> […]

> In terms of (1), I have confessed previously I do this. Does it make me uncomfortable? Yes. Could it be destroying me? Maybe.

> […]

> I am open to further discussion here. I would rank it high in priority as it may be a crux of why our perspectives differ, and also I have personal incentive to not be destroyed.

re personal: are you actively trying to climb significantly higher in the status hierarchy? are you in an insecure position and putting a bunch of effort into maintaining status? are you young, e.g. under 20? if none of those then my first guess is slow danger, not urgent issue.

I think most people are destroyed during childhood (destroy = become irrational, bounded, and usually socially oriented). The survivors are generally partially broken and gradually get worse over time. Social dynamics are a major factor in both the childhood problems and the later declines.

Babies are ~rational – able to learn, observe, correct error, be curious, etc. Their parents are irrational, confusing, authoritarian, etc. When parents make requests, demands, etc., and enforce them overtly (rules, punishments) or covertly (being less friendly and helpful, frowning, passive-aggressive responses), the key thing that happens to very young children is they *don’t understand what they’re supposed to do*. And the parent doesn’t answer enough questions and help enough for the kid to make sense of the world. So what kids end up doing is deciding life doesn’t make sense and you just have to muddle through while being perpetually confused. And you have to learn social dynamics to better predict and manipulate your parents behavior, and get along with them and others. Logical arguments mostly don’t work on the authorities in your life but social dynamics do.

Make any sense? Too much of a jump to a related topic at once? We can slow down or back up if needed. Also the details may not be needed if you accept the general proposition that parents mostly treat kids irrationally and kids basically learn they’re in a world where other people have power over them and you have to climb the status hierarchy to stop being the victim, while generally *not* learning how to science and logic (attempts at those often lead to clashes with other people instead of success).

----

Video 5: https://youtu.be/Pk2RUwen4K0

I updated the tree pdf.

Off topic: Do you know much about Solomonoff induction?


curi at 10:59 PM on August 23, 2020 | #17602

gigahurt, I think this video will interest you. I talk with Max about a discussion he had with TAG on Less Wrong.

https://youtu.be/TTCKfMnNZgU

(skip the first couple minutes)


curi at 9:07 PM on August 25, 2020 | #17617

>> From tree: We can work around social dynamics and minimize harm.

> Social dynamics or social games transform a rational person into an irrational one over time.

> re personal: [...] are you in an insecure position and putting a bunch of effort into maintaining status? [...]

I would say the only criterion that applies to me at the moment is being insecure in my current position.

>I think most people are destroyed during childhood (destroy = become irrational, bounded, and usually socially oriented). The survivors are generally partially broken and gradually get worse over time. Social dynamics are a major factor in both the childhood problems and the later declines.

My childhood was a journey (like most others). My dad is a priest, and both my parents and three of my four siblings are extremely devout. I was raised Christian, which I would argue is a culture that places a premium on social dynamics and rules. My parents encouraged me to read a lot though, so eventually I bumped into Sagan's Demon-Haunted World, Ayn Rand, and Greek philosophy, which annihilated my belief. (specifically, because my life started going so much better)

My point is that certain knowledge does bring one back to a more rational approach. I think it's a two-way street. So, I agree that many people tumble into adolescence, adulthood, and other life phases in an irrational state, but it's possible to go the other way. I'd argue we can repair what has been destroyed.

> Babies are ~rational – able to learn, observe, correct error, be curious, etc. [...] They *don’t understand what they’re supposed to do*. And the parent doesn’t answer enough questions and help enough for the kid to make sense of the world. So what kids end up doing is deciding life doesn’t make sense and you just have to muddle through while being perpetually confused. And you have to learn social dynamics to better predict and manipulate your parents behavior, and get along with them and others.

I never thought of it this way, but it makes sense. I read parts of your Taking Children Seriously write-up a week or so ago, and I have been trying to be non-coercive with my own kids since then. It's off topic, but someday later it'd be fun to talk about autonomy and democracy more. They are both things I care a lot about.

> Logical arguments mostly don’t work on the authorities in your life but social dynamics do.

> [...]

> parents mostly treat kids irrationally and kids basically learn they’re in a world where other people have power over them and you have to climb the status hierarchy to stop being the victim, while generally *not* learning how to science and logic (attempts at those often leads to clashes with other people instead of success).

## Main focus

Yes. Social dynamics influence everything.

Your post walks through how social dynamics are hammered into people at an early age. I agree.

Maybe the argument forming is something like:

* If you accept any social dynamics then you aren't a rational conversation partner

* Childhood makes most people accept social dynamics

* All people go through childhood

* Therefore most people aren't rational conversation partners

I disagree on the first premise.

I also don't think the above is your position (not trying to straw man); your concern has seemed focused on social status, but we have been moving fluidly between the terminology of status and social dynamics.

I agree that being overly concerned with social status stops rational conversation. In many cases it inhibits conversation full stop. At the same time I don't think all social dynamics are toxic enough to undermine rational conversations. I also don't think most people walk around worried about status to a disqualifying degree. I would say maybe 20% suffer from this status concern.

Our original topic centered on the availability of rational conversation partners. For me to change my mind, I'd need to be convinced an essential attribute was missing from most of the people I encounter.

I'd like to try and define what a rational conversation partner is. I think it's the most likely thing to make me change my mind.

> Make any sense? Too much of a jump to a related topic at once?

I was good with your reply.

----

> Video 5: https://youtu.be/Pk2RUwen4K0

> I updated the tree pdf.

Thank you.

> Off topic: Do you know much about Solomonoff induction?

I have heard a few people mention it since joining Less Wrong. After you asked, I looked it up. I am not sure I understand it, but I think it goes something like this: generate all possible hypotheses that explain a given behavior, sorted by complexity (lowest first). As new data flows in, eliminate the hypotheses that don't accurately predict the data. The least complex hypothesis left will always be your best model for the given problem. It seems more deductive than inductive to me: if it's not this or this or this, then it must be this. I may be missing the point, as the articles I found layered in probability and bitstreams and many other things which I am guessing build clarity and additional constraints, but I did not take the time to understand in depth.
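
As a rough toy sketch of that elimination idea (my own illustration with made-up hypotheses, not Solomonoff induction proper, which considers all computable hypotheses and weights each by 2 to the minus its program length instead of using a short hand-written list):

```python
# Toy hypotheses: (description_length, prediction_function), where the
# prediction function guesses the next bit from the history seen so far.
# These particular hypotheses are invented for illustration.
hypotheses = [
    (3, lambda history: 0),                 # "always predict 0"
    (5, lambda history: len(history) % 2),  # "alternate 0, 1, 0, 1, ..."
    (9, lambda history: sum(history) % 3),  # something more complex
]

def best_model(hypotheses, data):
    """Drop hypotheses that mispredicted any observation, then return the
    simplest (lowest description length) survivor, or None if none remain."""
    survivors = [
        (length, predict)
        for length, predict in hypotheses
        if all(predict(data[:i]) == data[i] for i in range(len(data)))
    ]
    return min(survivors, key=lambda h: h[0], default=None)

print(best_model(hypotheses, [0, 1, 0, 1]))  # keeps the "alternate" hypothesis
```

The probability and bitstream machinery in the real thing replaces this hard elimination with weighted averaging over the survivors, but the "prefer the simplest hypothesis that still fits the data" flavor is the same.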

Also, apologies for the delay. I know I said let's go back and forth more rapidly, but other obligations have been filling up my time. I am not dodging the argument. On weekends I can probably do 3 or so replies in a day.


gigahurt at 3:29 PM on August 26, 2020 | #17629

> gigahurt, I think this video will interest you. I talk with Max about a discussion he had with TAG on Less Wrong.

> https://youtu.be/TTCKfMnNZgU

I'll check this out


gigahurt at 3:30 PM on August 26, 2020 | #17630

> I'd argue we can repair what has been destroyed.

Yes we can. But there isn't enough knowledge about how to do it, and people drastically underestimate the difficulty, and there are many different obstacles to overcome. And people commonly self-sabotage their repair process, or abandon or avoid it. And they're so dishonest (with themselves more than others) which makes it very hard.

Nevertheless I've been working on this issue.

> I never thought of it this way, but it makes sense. I read parts of your taking children seriously write up a week or so ago, and I have been trying to be non-coercive with my own kids since then. Its off topic, but someday later it'd be fun to talk about autonomy and democracy more. They are both things I care a lot about.

Be careful about making big parenting changes. Sometimes people keep changing policies each time they learn something new – that's too chaotic and confusing. And unconventional stuff takes planning and caution to use well. And there's a ~100% rate of people misunderstanding TCS initially before discussing it.

> Also, apologies for delay

Apologies not required.

FYI, I've found basically people either discuss with me or they don't. I haven't had problems with discussions being too slow.

Delays are fairly often an early warning about quitting a discussion but that's not my guess at the moment.

> ## Main focus

I'll reply to that part later.


curi at 7:05 PM on August 26, 2020 | #17635

> And they're so dishonest (with themselves more than others)

https://www.elliottemple.com/essays/lying


curi at 7:06 PM on August 26, 2020 | #17636

> Yes. Social dynamics influence everything.

> Your post walks through how social dynamics are hammered into people at an early age. I agree.

> Maybe the argument forming is something like:

> * If you accept any social dynamics then you aren't a rational conversation partner

I don’t think it’s all or nothing. I think there are major, widespread themes of social dynamics that push things away from reason.

> * Childhood makes most people accept social dynamics

> * All people go through childhood

> * Therefore most people aren't rational conversation partners

> I disagree on the first premise.

> I also don't think the above is your position (not trying to straw man), your concern has seemed focused on social status, but we have been moving between the terminology of status and social dynamics fluidly.

Social dynamics are the unwritten rules for how social interactions work. People judge each other socially, and do social actions and react to social actions, and this is a generic name for however they do it. Social dynamics are closely connected with social status in the eyes of those you interact with (do they approve or disapprove), and more broadly with group status hierarchies. Part of the social rules is to treat people according to their status.

> I agree that being overly concerned with social status stops rational conversation. In many cases it inhibits conversation full stop. At the same time I don't think all social dynamics are toxic enough to undermine rational conversations. I also don't think most people walk around worried about status to a disqualifying degree. I would say maybe 20% suffer from this status concern.

In childhood, people learn social dynamics to the point that it’s mostly automatic, habitual and unconscious. It becomes intuitive and second nature. People who don’t learn it well stand out and are called autistic or crazy.

> Our original topic centered on the availability of rational conversation partners. For me to change my mind, I'd need to be convinced an essential attribute was missing from most of the people I encounter.

> I'd like to try and define what a rational conversation partner is. I think its the most likely thing to make me change my mind.

I think a good model is that there are *two worlds*: the social and objective (or scientific or rational) worlds. Each has its own rules of epistemology and etiquette. Different epistemologies means e.g.: different rules of evidence, different methods for reaching conclusions and deciding what’s true (or treated as true), different ways to communicate. Sentences often have two different meanings depending on which way someone is reading it (social or objective, which is often called “literal” when it comes to reading). Sometimes the two meanings are similar but there are often notable differences. It’s ~impossible to have very productive rational-scientific conversations when both people are speaking and reading socially, or when one is and one isn’t. (You can do it some if you have low standards. It can help people.)

People in general are bad at mode switching (specifically between social and objective modes) and have pretty low, limited skill at objective mode. Even people on rationalist forums like LW, who try to have objective mode conversations, are unable to do it very consistently or thoroughly. This leads to clashes with someone who is more consistently rational, because they keep responding to unintended social interpretations of things he says, and he reads some of their social comments literally. It also leads to clashes and chaos when two people doing this talk with each other, because they don’t go back and forth between social and objective at the same times, for the same issues – they’re both doing e.g. 50% social but it’s not the same 50%. There’s also major variation in social subcultures, and poor overall convergence on some of the unwritten rules, which amplifies the chaos.

There are other ways to approach the matter, e.g. the static memes in BoI. There’s convergence between several approaches. The two world model isn’t exact. There’s overlap and interplay between worlds. But I’ve found it particularly helpful.

If the broad outline makes sense, I can give some examples next.

PS in the future please use "-" for bullet points instead of "*" because asterisk is used for italics. it'll work better in my markdown editor and if your bullet point has italics or bold within it.


curi at 5:21 PM on August 27, 2020 | #17647

>> - If you accept any social dynamics then you aren't a rational conversation partner

> I don’t think it’s all or nothing. I think there are major, widespread themes of social dynamics that push things away from reason.

I agree, and would like to identify the themes. I don't know how systematic you want to get, but I would be in favor of giving it the best shot we can. My hypothesis is our disagreement (if we end up having any) will be about what we do and do not find permissible in a rational conversation. It's not all or nothing, but I think both of us have a line. For example, I find explicit and some implicit superiority/inferiority signaling unacceptable; I think folks need to engage as equals (equal as in the US Declaration of Independence). That doesn't fully define my position, but part of the value I am getting from our discussion is a better understanding of my own position.

> Social dynamics are the unwritten rules for how social interactions work. People judge each other socially, and do social actions and react to social actions, and this is a generic name for however they do it. Social dynamics are closely connected with social status in the eyes of those you interact with (do they approve of disapprove), and more broadly with group status hierarchies. Part of the social rules is to treat people according to their status.

I agree social status dynamics are a subset of social dynamics.

I think social rules exist which are not tied to status and may even enhance rational discussion, for example: wait for your turn to talk.

I am also on the fence if all social status dynamics are purely bad. For example, I think some social status rules exist that simultaneously encourage rationality and lower status:

- A boss who tells rather than persuades/discusses is often seen as a bad boss.

- Arrogant people often are punished out of spite.

- Trying to overtly signal status often ends in losing it.

- Signaling confidence boosts status; being wrong about what you signaled confidence about hurts you even more than you gained.

> In childhood, people learn social dynamics to the point that it’s mostly automatic, habitual and unconscious. It becomes intuitive and second nature. People who don’t learn it well stand out and are called autistic or crazy.

I agree people who do not learn social dynamics stand out in a way that hurts them. I wish it could be avoided and it makes me sad to think about it.

>> I'd like to try and define what a rational conversation partner is. I think its the most likely thing to make me change my mind.

> I think a good model is that there are *two worlds*: the social and objective (or scientific or rational) worlds. Each has its own rules of epistemology and etiquette. Different epistemologies means e.g.: different rules of evidence, different methods for reaching conclusions and deciding what’s true (or treated as true), different ways to communicate. Sentences often have two different meanings depending on which way someone is reading it (social or objective, which is often called “literal” when it comes to reading). Sometimes the two meanings are similar but there are often notable differences.

> It’s ~impossible to have very productive rational-scientific conversations when both people are speaking and reading socially, or when one is and one isn’t. (You can do it some if you have low standards. It can help people.)

If by socially you mean 100% socially, and by productive you mean productive for non-empathetic objectives, then I agree. I think if one relaxes the 100% slightly and the 'very productive' clause, one starts to make progress. If we graphed this with productivity on the y-axis and rational/social mix on the x-axis, I think it would be somewhere between a linear function and a power function.

Brief aside on the concept of productivity: Sometimes people actually want 100% social conversations. These can still be productive (building bonds) and perhaps even rational things to engage in, but I am okay distinguishing that type of productivity from truth-seeking productivity.

> People in general are bad at mode switching (specifically between social and objective modes) and have pretty low, limited skill at objective mode. Even people on rationalist forums like LW, who try to have objective mode conversations, are unable to do it very consistently or thoroughly. This leads to clashes with someone who is more consistently rational, because they keep responding to unintended social interpretations of things he says, and he reads some of their social comments literally. It also leads to clashes and chaos when two people doing this talk with each other, because they don’t go back and forth between social and objective at the same times, for the same issues – they’re both doing e.g. 50% social but it’s not the same 50%. There’s also major variation in social subcultures, and poor overall convergence on some of the unwritten rules, which amplifies the chaos.

To me the inability to mode switch and mode switching out of sync are both arguments in favor of a power function between productivity and rational/social mix.

To me the variation in subculture and convergence makes it harder to move positively along the x-axis.

I would be interested in hearing more about mode switching, I don't mind reading other sources if necessary. I feel like I operate in both modes simultaneously, but I may not be noticing what I am actually doing.

For example, I may be working through a problem with someone at work; the entire time I feel as if my social functions are completely engaged to avoid offending, overwhelming, or diminishing them, but we both still arrive at some sort of rational/logical answer to the problem we are discussing. Importantly, I contribute rational points to that discussion. (I am not just coaxing them on)

Other mental modes I have experienced: flow/intense focus, mindfulness, psychedelics and other drugs. I mention this for two reasons. First, I have reason to believe I can distinguish between different operating modes at some base level. Second, the one I'd most associate with productive rationality is flow/intense focus, and I don't think I can do that while talking with people. Though even that is hard for me to agree with, because my intuition is usually pretty strong when I am focused, and that veers into bias-prone territory.

It's not the best evidence to offer only yourself as the counter-example, but I don't have much beyond this.

> There are other ways to approach the matter, e.g. the static memes in BoI. There’s convergence between several approaches. The two world model isn’t exact. There’s overlap and interplay between worlds. But I’ve found it particularly helpful.

> If the broad outline makes sense, I can give some examples next.

I am happy to hear examples.

> PS in the future please use "-" for bullet points instead of "*" because asterisk is used for italics. it'll work better in my markdown editor and if your bullet point has italics or bold within it.

Will do. As an aside, I think it was in your conversation with Max you mentioned the benefit of rendering raw markdown to encourage people to copy/paste/quote. I appreciate that feature too.

Also, I am working on a tree of your Lying essay. I will share when I am done. I especially like the idea of matching confidence qualifiers accurately to statements, though its still something I am working on.


gigahurt at 6:30 PM on August 28, 2020 | #17653

#17653 I'm too exhausted to find some simple, minimal examples right now but here are two more complex examples to start:

http://curi.us/2167-analyzing-how-culture-manipulates-you-by-pulling-your-puppet-strings

And David Deutsch (a physicist) wrote in *The Fabric of Reality* (this passage differs from my view, but has significant overlap/agreement and reports notable evidence observed first hand):

> I have sometimes found myself on the minority side of fundamental scientific controversies. But I have never come across anything like a Kuhnian situation. Of course, as I have said, the majority of the scientific community is not always quite as open to criticism as it ideally should be. Nevertheless, the extent to which it adheres to ‘proper scientific practice’ in the conduct of scientific research is nothing short of remarkable. You need only attend a research seminar in any fundamental field in the ‘hard’ sciences to see how strongly people's behaviour *as researchers* differs from human behaviour in general. Here we see a learned professor, acknowledged as the leading expert in the entire field, delivering a seminar. The seminar room is filled with people from every rank in the hierarchy of academic research, from graduate students who were introduced to the field only weeks ago, to other professors whose prestige rivals that of the speaker. The academic hierarchy is an intricate power structure in which people's careers, influence and reputation are continuously at stake, as much as in any cabinet room or boardroom — or more so. Yet so long as the seminar is in progress it may be quite hard for an observer to distinguish the participants’ ranks. The most junior graduate student asks a question: ‘Does your third equation really follow from the second one? Surely that term you omitted is not negligible.’ The professor is sure that the term *is* negligible, and that the student is making an error of judgement that someone more experienced would not have made. So what happens next?

> In an analogous situation, a powerful chief executive whose business judgement was being contradicted by a brash new recruit might say, ‘Look, I've made more of these judgements than you've had hot dinners. If I tell you it works, then it works.’ A senior politician might say in response to criticism from an obscure but ambitious party worker, ‘Whose side are you on, anyway?’ Even our professor, *away from the research context* (while delivering an undergraduate lecture, say) might well reply dismissively, ‘You'd better learn to walk before you can run. Read the textbook, and meanwhile don't waste your time and ours.’ But in the research seminar any such response to criticism would cause a wave of embarrassment to pass through the seminar room. People would avert their eyes and pretend to be diligently studying their notes. There would be smirks and sidelong glances. Everyone would be shocked by the sheer impropriety of such an attitude. In this situation, appeals to authority (at least, overt ones) are simply not acceptable, even when the most senior person in the entire field is addressing the most junior.

> So the professor takes the student's point seriously, and responds with a concise but adequate argument in defence of the disputed equation. The professor tries hard to show no sign of being irritated by criticism from so lowly a source. *Most* of the questions from the floor will have the form of criticisms which, if valid, would diminish or destroy the value of the professor's life's work. But bringing vigorous and diverse criticism to bear on accepted truths is one of the very purposes of the seminar. Everyone takes it for granted that the truth is not obvious, and that the obvious need not be true; that ideas are to be accepted or rejected according to their content and not their origin; that the greatest minds can easily make mistakes; and that the most trivial-seeming objection may be the key to a great new discovery.

> So the participants in the seminar, while they are engaged in science, do behave in large measure with scientific rationality. But now the seminar ends. Let us follow the group into the dining-hall. Immediately, normal human social behaviour reasserts itself. The professor is treated with deference, and sits at a table with those of equal rank. A chosen few from the lower ranks are given the privilege of being allowed to sit there too. The conversation turns to the weather, gossip or (especially) academic politics. So long as those subjects are being discussed, all the dogmatism and prejudice, the pride and loyalty, the threats and flattery of typical human interactions in similar circumstances will reappear. But if the conversation happens to revert to the subject of the seminar, the scientists instantly become scientists again. Explanations are sought, evidence and argument rule, and rank becomes irrelevant to the course of the argument. That is, at any rate, my experience in the fields in which I have worked.

Note in particular the claims about normal human social behavior and about people having separate social and scientific modes. Sadly I have doubts about how good most scientists actually are (DD says that *overt* appeals to authority are frowned on in research contexts, and I agree, but I think more hidden social dynamics are still common).


curi at 8:28 PM on August 28, 2020 | #17655

## Analyzing How Culture Manipulates You by Pulling Your Puppet Strings

I spent 20 minutes with this article. As the title suggests, it shows the techniques some people use to pull the strings of other people. By pulling strings I mean exploiting social conventions to elicit reactions without much thinking involved on the part of the receiver. I think your primary intention was to use it as a case study of how the social world operates.

I appreciated a number of your observations from this article; I will stick to those that seem directly relevant to our discussion:

> real time conversations pressure people to respond quickly without enough thought, while people are emotional and facing social pressure.

I agree and don't think people take this into account enough. If everyone would read and write more it would all become less real time and we could get to the bottom of things more easily.

> Yet it pulls people's "negativity is honest" string, even though everyone knows it means the opposite of what it said.

Never thought of this, but it rings true. It's a heuristic that causes people to misjudge situations. "Negativity is honest" is what I do not like about the 'celebrate failure' meme I mentioned earlier.

Pulling strings seems related to, if not possibly synonymous with, exploiting bias. (bias as in Tversky and Kahneman)

I did a quick search for this bias online and found nothing. I think it's a research opportunity.

The intentional exploitation of bias is wrong. Like you, I believe we should spend time trying to rid ourselves of this behavior. I actually would connect this back to the very beginning of our conversation and the idea of anti-rhetoric.

I concede you cannot have a rational conversation with someone who constantly tries to pull your strings. I would guess maybe 20% of people do this and overlap substantially with those concerned about status. My revised guess accounting for everything is 30% disqualification of the population.

----

## Excerpt from FoR

This example illustrates how the scientific community operates using two modes: one rational and one social.

> So the participants in the seminar, while they are engaged in science, do behave in large measure with scientific rationality. But now the seminar ends. Let us follow the group into the dining-hall. Immediately, normal human social behaviour reasserts itself. [...] But if the conversation happens to revert to the subject of the seminar, the scientists instantly become scientists again. Explanations are sought, evidence and argument rule, and rank becomes irrelevant to the course of the argument.

I am going to suggest an alternative hypothesis: The scientists are continually operating under social status rules. A single mode.

Scientists behave according to social rules both in the dining hall as well as the lecture hall. If the scientist answers questions like a CEO he will lose status. Taking questions from junior scientists with deference allows the senior scientist to show his maturity and how much he values finding the truth. Being able to defend his work is virtuous and status building.

> in the research seminar any such [authority based] response to criticism would cause a **wave of embarrassment** to pass through the seminar room. People would **avert their eyes** and **pretend to be diligently studying their notes**. There would be **smirks** and **sidelong glances**. Everyone would be shocked by the sheer impropriety of such an attitude.

Everything I highlighted looks like social dynamics. Averting your eyes and pretending to study are shunning behaviors. Smirks and sidelong glances silently build social consensus. Social pressure forces the senior scientist to present in a certain way. I am glad this is the case. There is something better about this state of affairs than average human interactions, but it is not the ideal either.

I have heard people say technology is not good or evil. Social status rules and social dynamics more broadly may be the same way. We both agree that status can control human behavior. I think the group values are what make the overall context good or evil. If social status rules are used to encourage truth seeking behavior then social status rules are good. If social status rules are used to maintain and build power then social status rules are evil.

The power-hungry scientist can be made good by the truth seeking status rules of his occupation. The truth seeking executive can be corrupted by the authoritarian status rules of his.

I accept the two world theory, but only if the worlds are a function of values and not of social dynamics. The first world is truth seeking. The second world is everything else.

I look forward to hearing what you think. The above is a hypothesis, I am open to being corrected.

---

Also I made a tree of your Lying article

https://www.is-this-normal.net/public/lying.png

I enjoyed it. The biggest take away for me, as previously mentioned, was the thinking around speech qualification and trying to be more precise with that.


gigahurt at 3:00 PM on August 29, 2020 | #17660

> My hypothesis is our disagreement (if we end up having any) will be in what we do and do not find permissible in a rational conversation.

I expect disagreement to be about how to interpret actions and situations. I expect that *if* you accept my interpretations (which can be quite strong and negative), you’ll agree there’s impermissible, irrational stuff going on.

> It's not all or nothing, but I think both of us have a line.

A line for what? A line for what we’ll put up with in conversation?

> I find explicit and some implicit superiority/inferiority signaling unacceptable, I think folks need to engage as equals.

I think that’s unclear. People are not equal in all attributes. In what senses should they engage as equals? I agree with equality in the sense that a discussion tree node has to be treated the same regardless of who added it – it has to be criticized regarding its content.

Some people know more than others about the topics being discussed, have skills the others don’t, etc. Trying to ignore this can cause trouble. Trying to use it for social climbing can also cause trouble.

> I am also on the fence if all social status dynamics are purely bad.

“Purely bad” is way too strong. I made some comments on these in my video. Maybe we could discuss them one at a time *using examples*.

> If we graphed this with productivity on the y-axis and rational/social mix on the x-axis, I think it would be somewhere between a linear function and power function.

Please sketch an illustration of what you mean.

> I would be interested in hearing more about mode switching, I don't mind reading other sources if necessary. I feel like I operate in both modes simultaneously, but I may not be noticing what I am actually doing.

Mode switching is usually approximate (or they go back and forth fast so they’re aware of both). The point is that there are two main themes of how people think and interpret, but they’re often aware of both. People are often like 80% social. Sometimes 80% objective. Sometimes other mixes. Sometimes they lose sight of one mode in extreme ways. E.g. organizations that are really out of touch with reality, or people in discussions who are really blind to what your words actually literally say. Or people who are playing a game, like chess, and are really focused on what moves/actions objectively make sense in the game, not on the social meaning of their in-game actions.

> Second, the one I'd most associate with productive rationality is flow/intense focus and I don't think I can do that while talking with people.

I think that’s common and shows how rough social pressures are. The active social dynamics make it hard to think well! Some distance is needed to do one’s best thinking.

> By pull strings I mean exploit social conventions to illicit reactions without much thinking involved on the part of the receiver.

Some of this stuff is resilient to the receiver thinking about it, knowing some stuff about what’s going on, and having time to think about how to react. It can still be hard to deal with in that case.

> My revised guess accounting for everything is 30% disqualification of the population.

I claim over 90%. It’s hard to pick a single number b/c how do you score mixed conversations that have a bit of rationality mixed in?

> I am going to suggest an alternative hypothesis: The scientists are continually operating under social status rules. A single mode.

I think they manage to focus on objective reality sometimes or science wouldn’t get anywhere. They even manage a little in seminars, despite all the people there (enough to share a bit of objective info that people can think over later).

> Scientists behave according to social rules both in the dining hall as well as the lecture hall. If the scientist answers questions like a CEO he will lose status. Taking questions from junior scientists with deference allows the senior scientist to show his maturity and how much he values finding the truth. Being able to defend his work is virtuous and status building.

I agree.

> Everything I highlighted looks like social dynamics.

I agree. “shocked” and “impropriety” are both social too.

> Social pressure forces the senior scientist to present in a certain way. I am glad this is the case.

I don’t think you get good scientists by socially pressuring people into faking it. I think the faking makes it harder to spot the real scientists, screws up the signal/noise ratio of science, and basically doesn’t lead to anything good.

> There is something better about this state of affairs then average human interactions, but it is not the ideal either.

I think it’s worse, not better. Pushing stuff underground, whether it’s e.g. science-related irrationality or racism, makes it harder for people to know what the situation is, discuss it, criticize errors, etc. It’s harder to correct errors that people are hiding and lying about.

People get tricked into pursuing whole careers in academia only to find out after investing 10+ years that there’s a ton of faking going on.

> I have heard people say technology is not good or evil. Social status rules and social dynamics more broadly may be the same way.

I disagree. I think basically we should look at examples.

Here are some examples http://curi.us/2371-analyzing-quotes-objectively-and-socially

Video of writing this message: https://youtu.be/QdFSwxM3zRg


curi at 1:13 AM on August 30, 2020 | #17667

gigahurt, is this you? https://twitter.com/gigahurt


Anonymous at 11:08 AM on August 30, 2020 | #17669

gigahurt wrote:

> # Signaling superiority as undermining standing

> People keep an informal tally in their head about how they feel or do not feel about people. (social standing)

> If person A uses language that makes person B seem inferior in front of C, it can trigger a few things:

> C may actually believe B is inferior, lowering Bs social standing in C's mind.

> C may not believe B is inferior, but B may have a theory of mind that predicts loss of standing.

> The effect will be the same, B will feel threatened, and dig into their position to save or regain face. They may also disengage completely. The discussion turns from team work to war. In most cases from rationality to pure politics.

Is this an example of someone writing in objective mode? Social dynamics appear forgotten. This looks to me like just talking about the issues.

I'm curious if gigahurt would deny it in light of e.g.:

> I am going to suggest an alternative hypothesis: The scientists are continually operating under social status rules. A single mode.


Anonymous at 11:29 AM on August 30, 2020 | #17672

The twitter account is not mine. I am @gigahurt_. Though I am generally not engaged in social media.

> Is an example of someone writing in objective mode? Social dynamics appear forgotten. This looks to me like just talking about the issues.

> I'm curiuos if gigahurt would deny it in light of e.g.:

I am trying to talk about the issues in both cases. Based on your comment I think you detect a distinction between my earlier and later statements. Would you be able to share it with me? I don't see it, but maybe due to an unspoken assumption I am making.


gigahurt at 12:04 PM on August 30, 2020 | #17673

#17673 When you suggested the scientists were doing a single, social mode, I guessed that you broadly believed that about people. If you're proposing scientists are being social, I'd expect you to think most people are, rather than to think a bunch of other groups are doing better than scientists. Plus the broader context was curi's claims about the world in general. So I was wondering if your own text earlier was a counterexample to your one-mode idea. Maybe I was wrong and you were just commenting on the one case DD discussed, not even on scientist behavior in general.


Anonymous at 12:13 PM on August 30, 2020 | #17674

Social dynamics blog category. And I just put up two new posts that are in it.

http://curi.us/archives/list_category/120


curi at 12:34 PM on August 30, 2020 | #17676

> I expect disagreement to be about how to interpretation actions and situations. I expect that *if* you accept my interpretations (which can be quite strong and negative), you’ll agree there’s impermissible, irrational stuff going on.

The above is also plausible to me. I will critique your interpretations to drive this branch of the conversation and reveal my own perspective.

> A line for what? A line for what we’ll put up with in conversation?

The line between a rational conversation and an irrational one. I think examples will get us there.

>I think that’s unclear. People are not equal in all attributes. In what senses should they engage as equals? I agree with equality in the sense that a discussion tree node has to be treated the same regardless of who added it – it has to be criticized regarding its content.

My fault on the unclear declaration of independence reference. I agree people are not equal in terms of skills, intelligence, physical capability, and many other characteristics. I agree discussion tree nodes should be treated the same regardless of author.

To try and clarify what I meant by equality: First, equal before the law. Second, we should treat everybody as equal in the abstract. Only when we consider individual people should we start to distinguish the ways in which we are different. And when we engage with other individuals, we should not seek to dominate them, but live alongside them.

> “Purely bad” is way too strong. I made some comments on these in my video. Maybe we could discuss [the hypothetical examples of social dynamics being good] one at a time *using examples*.

Yes. Sounds good.

> Please sketch an illustration of what you mean.

Here is what I was thinking.

https://www.is-this-normal.net/public/rational-social-mix.png

I probably should have done this as part of my initial description; the thought did cross my mind. The blue line is roughly y=x^2. Pink is roughly y=x^6. The overall idea being that you get poor (less than linear) returns on rationality until you hit a certain point, then you start having big gains.
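
In case the image link goes stale, here is a quick sketch that reproduces roughly those two curves (assuming x is the rational fraction of the mix on [0, 1] and y is relative productivity; the exponents are just the rough shapes mentioned above):

```python
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 1, 200)  # fraction of the conversation that is rational
plt.plot(x, x**2, label="y = x^2 (blue line)")
plt.plot(x, x**6, label="y = x^6 (pink line)")
plt.xlabel("rational/social mix (fraction rational)")
plt.ylabel("productivity")
plt.legend()
plt.show()
```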

> People are often like 80% social. Sometimes 80% objective. Sometimes other mixes. Sometimes they lose sight of one mode in extreme ways.

Yes I agree on this. I think the point I am trying to make around modes is that because it's not 100%, there is only one mode with varying gradations between social and rational.

My assumption, perhaps incorrect, was that modes are mutually exclusive. I tend to describe things which are not mutually exclusive as gradients between two extremes. I double-checked the dictionary, and I see my working definition of modes does not match common usage.

With that correction I think we are in agreement; let me know if you disagree. To reiterate, people always operate with at least some social and some rational.

You made the point that there are break points (I think that was the terminology you used; I often listen to the videos while walking, which makes it hard to take notes) and I agree those are worth exploring. I want to know what you think are the most important qualities in a rational conversation partner. I also think looking at examples makes sense. Maybe in addition to looking at the examples we can also talk about what would have been better or more rational in those cases.

> I claim over 90%. It’s hard to pick a single number b/c how do you score mixed conversations that have a bit of rationality mixed in?

Yes, I think that is a key question. I think I score conversations with much lower rationality mixes as still rational enough. You have said a lot about the way lack of rationality damages people and situations during our conversation. Those descriptions have given me pause and even have caused me to shift my own stance slightly. I'm still not convinced more than 90% of people are disqualified though.

> I don’t think you get good scientists by socially pressuring people into faking it. I think the faking makes it harder to spot the real scientists, screws up the signal/noise ratio of science, and basically doesn’t lead to anything good.

I saw/heard the points you made in the video regarding this. I prefer scientists who love truth and rationality over scientists who fake it. Still, I don't believe social dynamics are avoidable. Social status rules which award status to people who are rational seem preferable to me to other rules.

Additionally, I don't think rationality covers all values which are important. Specifically, caring about other people, particularly people who are not very similar to you.

> I think basically we should look at examples.

> Here are some examples http://curi.us/2371-analyzing-quotes-objectively-and-socially

I think we have two options regarding examples. For either, I would appreciate if we could:

- Look at each of our interpretations of how rational it is.

- Describe the ideal.

The two sources of examples are your article and the examples I gave of social dynamics which support rationality. Where do you want to start?

---

> Video of writing this message: https://youtu.be/QdFSwxM3zRg

Thank you for sharing the process.

- Point noted on 'try and' versus 'try to'.

- I fixed the inaccuracy in my tree regarding neural pathways. You are correct, you did not say that. That was a hypothesis I came up with to explain why lying in one area of your life makes it harder to be honest in others. I tagged and built the tree after taking linear notes and was sloppy.

- I agree audio/video is good in many cases as well. The main thing I do not like doing in real time is debating. It does not give me enough time to think.

- I think your Yes No Philosophy material is paid only correct? Not saying I won't spring on it, just want to understand if there is any zero cost way to start.


gigahurt at 3:57 PM on August 30, 2020 | #17679

Anonymous, I tried to clear up confusion in my last post. Let me know if not.

I believe that 99% of people operate with some level of social at all times. People trying their best to be rational still have some social. Because I interpreted modes as mutually exclusive, a two-mode view would be incompatible with my perspective. My post above explains how I had my definition of modes wrong.


gigahurt at 4:02 PM on August 30, 2020 | #17680

> - I think your Yes No Philosophy material is paid only correct? Not saying I won't spring on it, just want to understand if there is any zero cost way to start.

I'm writing new stuff about it and more but that'll take months or years. Some of that will be free or pay-what-you-want.

There's archived forum discussions about YesNo and there are relevant articles. Some are at http://curi.us/1595-rationally-resolving-conflicts-of-ideas

Oh yeah, I went through a bunch of YesNo stuff with Max recently:

https://www.youtube.com/watch?v=TjJ-icZOJVo

https://www.youtube.com/watch?v=2TSJ7aoJr2Q

BTW you should probably watch Overview of Fallible Ideas Philosophy. David Deutsch's two books are also particularly good for broader context.


curi at 4:29 PM on August 30, 2020 | #17681

> I want to know what you think are the most important qualities in a rational conversation partner.

0. No initiating force.

1. Honesty.

2. Wanting to understand lots of stuff to high standards.

3. Good methods and attitudes re error correction.

4. Persistence and effort.

5. Not placing boundaries on truth seeking. No conclusions off limits.

6. Willing to deal with meta and methodological issues.

(3) and (4) are needed to achieve (2).

Sometimes past effort can adequately substitute for current effort.

(5) and (6) are optional to some extent, depending on the situation.

They're all related and social dynamics clash with all of them.

BTW, I define "rational" as using methods that are good for error correction. This is similar to LW's view and different than the "rational = being correct" view.


curi at 4:49 PM on August 30, 2020 | #17682

> The two sources of examples are your article and the examples I gave of social dynamics which support rationality. Where do you want to start?

Let's start with my examples. I don't know if you will dispute my interpretations of those quotes a bunch or not. I think that's an important discussion branch point.

If you're going to defend some of those quotes much then I think that'd be good to go into more detail analyzing them (one at a time).

If not much disagreement, then we could quickly move on to some of your examples. I think real, concrete examples (real world quotes) would be best. And I'd suggest examples of stuff you think is best and most rational, e.g. highly productive discussions from your favorite thinkers. That way if I point out flaws it'd make the most difference to you.

> Still, I don't believe social dynamics are avoidable.

I don't agree with this. I think it might be important to talk about later. In short, I don't think that things are hopeless if I'm right. I'm saying closer to "things can be way better" than to "doom and gloom".


curi at 5:02 PM on August 30, 2020 | #17683

I stopped updating my discussion tree for the time being. I think the conversation is organized enough atm.


curi at 5:06 PM on August 30, 2020 | #17684

#17679

> - I think your Yes No Philosophy material is paid only correct? Not saying I won't spring on it, just want to understand if there is any zero cost way to start.

In addition to what curi suggested you could also start by analyzing the free sample material: https://www.yesornophilosophy.com/argument


Andy Dufresne at 6:26 PM on August 30, 2020 | #17685

Retweet of:

> "Everything can be free if we fire the people who stop you from stealing stuff" is apparently considered an NPR-worthy political innovation now, rather than the kind of brain fart an undergrad might mumble as they come to from major dental work https://twitter.com/_natalieescobar/status/1299018604327907328

## curi's rubric for syncing on interpretations of examples

0. No initiating force.

- Pass, did not call for termination of NPR employees or other similar force related activities.

1. Honesty.

- Not enough information; I can't find the original quote or the tweet this was retweeted from. It's unclear to me if deleting tweets is honest.

2. Wanting to understand lots of stuff to high standards.

- Fail, twitter feels incompatible with this one.

3. Good methods and attitudes re error correction.

- Not enough information.

4. Persistence and effort.

- Not enough information.

5. Not placing boundaries on truth seeking. No conclusions off limits.

- Fail. 'is apparently considered' conveys contempt and disbelief. I agree with your analysis that it is not an argument.

6. Willing to deal with meta and methodological issues.

- Not enough information.

## my rubric. Would be interesting to find examples where our rubrics do not sync up in sum. (like all passes on one, a single fail on the other)

- tact - Fail.

- respect - Fail.

- listening - Not enough information.

- anti-rhetoric - Fail.

- brevity - Pass. Though it makes me feel like I have an incomplete list. It's so brief it's hard to say whether or not it is true or false.

## Ideal way to try and convey the same content

Example:

NPR wrote an article that claims. "Everything can be free if we fire the people who stop you from stealing stuff". Here is the link. I interpreted this as actually saying what it means. I think this is a bad argument because if the people are gone that prevent you from stealing they are also not there preventing other people from stealing from you. Things go from having a cost to being completely unavailable.

Twitter has a character limit. It's a platform that seems to encourage witty, profound-sounding proclamations. Not a platform that enables 'wanting to understand lots of stuff to high standards'.


gigahurt at 4:40 PM on September 1, 2020 | #17726 | reply | quote

> - Not enough information, I can't find the original quote or the tweet this was retweeted from. It's unclear to me if deleting tweets is honest.

The tweet being replied to was already deleted before I wrote my post. Deleting tweets is pretty common on Twitter, particularly when they get criticized and get a bunch of negative attention (high follower count critic or criticism goes viral). I didn't see the original tweet until someone posted it in comments on my post.

Sometimes people delete tweets if they don't get enough positive responses in a short time window. They use the response from the first 10% of their audience to estimate whether the other 90% will like it. I think that kinda thing may be more common on some other platforms like instagram.

I've had problems with forums (like facebook and discord) where deleting or editing posts is possible, particularly deleting. People often want to change or hide what they said after they receive a critical reply. Sometimes people delete or hide entire public forums with all the archives. Also sometimes when someone is banned from a forum, a bunch of their content is also deleted. E.g. the *default* on Discord is to delete the past 24 hours of messages from someone when banning them. I've lost two conversations to being banned this way, which I'd been planning to save copies of soon (the admins who banned me were not participating in the conversations and I got zero warning). I think one of the admins didn't even intend to delete my messages.

My Discord server says this:

> Don't delete discussion or remove it with editing. Only use delete and edit for immediate or small corrections, e.g. hitting send by accident or a typo. When you send a message, you should view it as permanently part of the public record, especially if anyone has replied or 5 minutes has passed.

Overall I prefer when software doesn't allow editing or deleting. I think that works better overall than trying to use them only in good ways.

---

> NPR wrote an article that claims. "Everything can be free if we fire the people who stop you from stealing stuff". Here is the link.

That wasn't a quote, it was a straw man characterization. It was in quotes because it was presented like (hypothetical) speech. Your rewrite makes it sound like an actual quote.

I'd venture guesses in some cases where you said not enough information. Other than that I didn't object to your judgments.

I agree that Twitter is a bad platform for discussion. I mostly just read Twitter with this "list" which is a feed from a few people I chose. It's public and you can browse at the URL without even having an account:

https://twitter.com/i/lists/1157142697246748672

I rarely write stuff on Twitter but I like to read a few people. If they wrote the same stuff on another website, e.g. their own blog, instead of Twitter, I'd be totally willing to switch venues (unlike most of their audience).

Twitter also encourages political outrage and focusing on current news events instead of e.g. political philosophy and economics (the things we need to think effectively about the current political situation).


curi at 5:03 PM on September 1, 2020 | #17727 | reply | quote

Found your comments at the bottom.

1. Honesty.

- Fail. I don't think something becomes NPR-worthy (whatever that means) by allowing someone to voice their perspective. It makes it sound like it was an editorial by someone at NPR.

3. Good methods and attitudes re error correction.

- Fail, didn't even get to it because of criterion (4).

4. Persistence and effort.

- Fail. Condemning what someone says in so few words when they have said so much amounts to failure from my perspective.

6. Willing to deal with meta and methodological issues.

- Not enough information. Still didn't come up.

- listening - Fail. For the same reason as 'persistence and effort'.


gigahurt at 5:07 PM on September 1, 2020 | #17728 | reply | quote

# Last twitter one

I did not intentionally play dumb thinking that was a quote. It read like one. I think the use of the quote in combination with the contemptuous response to it made it seem more real than other times people quote things people did not say. E.g. it's like when someone says, 'tldr' and you think, 'you should really take the time to read that.' You know I am writing fiction.

# Yudkowsky

> Post removed from main and discussion on grounds that I've never seen anything voted down that far before. Page will still be accessible to those who know the address.

## curi's rubric for syncing on interpretations of examples

0. No initiating force.

- Fail. Censorship on a platform anyone can sign up for seems like force. I think force should be allowed if there is rule of law in play and you must opt in. E.g. Policy of no profanity -> you sign up for the service -> you curse -> they delete your content

1. Honesty.

- Pass. Though it's a bad reason, I believe it was his reason. I also believe he believed the article would be accessible to those who know the address.

2. Wanting to understand lots of stuff to high standards.

- Fail. I would say implied by the use of force.

3. Good methods and attitudes re error correction.

- Fail. I would say implied by the use of force.

4. Persistence and effort.

- Fail. I would say implied by the use of force.

5. Not placing boundaries on truth seeking. No conclusions off limits.

- Fail. I would say implied by the use of force.

6. Willing to deal with meta and methodological issues.

- Not enough information.

## my rubric. Would be interesting to find examples where our rubrics do not sync up in sum. (like all passes on one, a single fail on the other)

- tact - Fail. I would say implied by the use of force. Not appropriate when dealing with others.

- respect - Fail. I would say implied by the use of force.

- listening - Fail.

- anti-rhetoric - Pass. I don't think he used rhetoric.

- brevity - Pass.

## Ideal way to try and convey the same content

Not possible to convey the same content in a rational way in my mind.


gigahurt at 5:24 PM on September 1, 2020 | #17729 | reply | quote

# Last twitter one

Also to be clear, I missed the image in the Twitter quote until just now. It's even weirder to me to quote at all if you have a screenshot attached.


gigahurt at 5:28 PM on September 1, 2020 | #17730 | reply | quote

#17729 I'm not convinced that Yudkowsky's action was force.

and re twitter: yeah twitter's way of presenting quotes, images and threads is often kinda confusing. And people in general often use quote marks in ways that can lead to confusion. I've had a lot of conflict with people trying to get them to quote correctly (proper/clear formatting + accurate attribution + never misquoting (not even changing one letter)) on my forums. It's actually been one of the main, recurring points of conflict.


curi at 5:33 PM on September 1, 2020 | #17731 | reply | quote

>> NPR wrote an article that claims. "Everything can be free if we fire the people who stop you from stealing stuff". Here is the link.

> That wasn't a quote, it was a straw man characterization.

I looked at the NPR article.

https://www.npr.org/sections/codeswitch/2020/08/27/906642178/one-authors-argument-in-defense-of-looting

Wow. Ultra-bad.

It does seem to say something like "Everything can be free if we fire the people who stop you from stealing stuff". Here's what the article says:

> Can you talk about rioting as a tactic? What are the reasons people deploy it as a strategy?

>> ... It ... attacks the very way in which food and things are distributed. It attacks the idea of property, and it attacks the idea that in order for someone to have a roof over their head or have a meal ticket, they have to work for a boss, in order to buy things that people just like them somewhere else in the world had to make under the same conditions. It points to the way in which that's unjust. And the reason that the world is organized that way, obviously, is for the profit of the people who own the stores and the factories. So you get to the heart of that property relation, and demonstrate that without police and without state oppression, we can have things for free.

So rioting is good. And we can get free stuff by getting rid of the police, property rights etc. The article is defending violence. Horrible.


Anonymous at 11:02 PM on September 2, 2020 | #17771 | reply | quote

# Dagon

> I think I've given away over 20 copies of _The Goal_ by Goldratt, and recommended it to coworkers hundreds of times.

> Thanks for the chance to recommend it again - it's much more approachable than _Theory of Constraints_, and is more entertaining, while still conveying enough about his worldview to let you decide if you want the further precision and examples in his other books.

> It's important to recognize the limits of the chain metaphor - there is variance/uncertainty in the strength of a link (or capacity of a production step), and variance/uncertainty in alternate support for ideas (or alternate production paths)

> Most real-world situations are more of a mesh or a circuit than a linear chain, and the analysis of bottlenecks and risks is a fun multidimensional calculation of forces applies and propagated through multiple links.

## curi's rubric

0. No initiating force.

- Pass. No use of force.

1. Honesty.

- Pass. While I do sense social mixed in, I don't think he is misrepresenting his conclusions.

2. Wanting to understand lots of stuff to high standards.

- Pass. I believe he read both books. I don't have reason to believe he isn't pursuing knowledge.

3. Good methods and attitudes re error correction.

- Pass. I don't think he was hostile to learning something new. At least not conclusively. His social undertones signaled he thought of himself as an expert, so maybe if pressed he would have failed this.

4. Persistence and effort.

- Fail. He could have followed up on the post. At the same time, I think you could have demanded less effort. (replying in thread)

5. Not placing boundaries on truth seeking. No conclusions off limits.

- Pass. I don't think this was shown to be lacking.

6. Willing to deal with meta and methodological issues.

- Pass. I don't think this was shown to be lacking.

## my rubric

- tact - Pass. I don't think he mistreated you.

- respect - Pass. I wouldn't have clocked him as disrespecting me in the same situation, just not being interested.

- listening - Pass. I don't see evidence or have reasons to believe he did not read what you wrote.

- anti-rhetoric - Fail. I think the posturing about 'fun multidimensional calculation' counts as rhetoric.

- brevity - Pass.

I think it was pretty close to passable according to my rubric.

I think some of my judgments are probably lenient. Feel free to elaborate if you disagree with me. Which part of your rubric do you think people most frequently fail? I would speculate persistence and effort.

Getting persistence and effort from someone often involves building a relationship. This can also make them more honest, in my experience.

One methodological concern I have with using these examples is that they are all with online strangers. When I initially challenged the assertion that there are enough rational conversation partners I did it by reviewing my own list. Everyone on that list I am friends or coworkers with.

I think there are exceptions. Your experience with DD being one. Perhaps me. Perhaps other people that contribute to this forum and Discord. I think you need extremely high trait openness to build a relationship with someone random online about intellectual topics.

I may need to revise my assertion. There are plenty of rational conversation partners if you are willing to make friends with them first. I think without friendship most people will fail persistence and effort.

I think the criterion friends are most likely to fail is (5), not placing boundaries on truth seeking.

## Ideal

> I think I've given away over 20 copies of _The Goal_ by Goldratt, and recommended it to coworkers hundreds of times.

> Thanks for the chance to recommend it again - it's much more approachable than _Theory of Constraints_, and is more entertaining, while still conveying enough about his worldview to let you decide if you want the further precision and examples in his other books.

> I think the chain metaphor has limits- there is variance/uncertainty in the strength of a link (or capacity of a production step), and variance/uncertainty in alternate support for ideas (or alternate production paths).

> I think most real-world situations are more of a mesh or a circuit than a linear chain. The calculations are complex, but if they are computed then the model will outperform the chain model. I learned all about mesh and circuit modeling here. (link) What are your thoughts?

> _Follows up on the post_


gigahurt at 3:58 PM on September 3, 2020 | #17783 | reply | quote

#17783 I think Dagon fails 1-6, especially honesty and high standards. Some points are fairly ambiguous without additional context (that is available elsewhere). I'm not sure if you read my analysis at http://curi.us/2371-analyzing-quotes-objectively-and-socially or if you plan to discuss it later or what.

I think that ideal version would also be bad. You improved the ending by hypothetically imagining he'd learned a bunch of stuff and could link to educational material (which I think is false), but the first three paragraphs are still awful, e.g. multiple brags and the comments about a chain being limited are ~incoherent.

I could go into detail but wanted to first check what's going on with the analysis I already wrote.

> At the same time, I think you could have demanded less effort. (replying in thread)

You think clicking a link to my reply constitutes a meaningful, problematic effort? Or clicking the right section in the table of contents on the sidebar? One can't really have a discussion with people unless they view that level of effort as negligible. Just writing a paragraph is orders of magnitude harder.

If I had replied inline in thread, it would have buried my reply on a forum with major biases for allocating attention away from material that is buried in that way. And I wanted to write something of general interest, not a parochial reply only of interest to Dagon (who wasn't interested).

> Which part of your rubric do you think people most frequently fail?

Honesty and boundaries.

> I may need to revise my assertion. There are plenty of rational conversation partners if you are willing to make friends with them first. I think without friendship most people will fail persistence and effort.

Make friends how, given that they aren't going to have a rational conversation until *after* being your friend? If rational discussion is held hostage to establishing rapport via non-rational methods first, that seems like quite a limit on people's rationality. And it seems like a concession that people's participation on discussion forums is mostly irrational (since they mostly aren't friends).

I disagree in any case, but it's a significantly updated claim, and more limited, so I think it's progress.


curi at 5:33 PM on September 3, 2020 | #17785 | reply | quote

> I may need to revise my assertion. There are plenty of rational conversation partners if you are willing to make friends with them first. I think without friendship most people will fail persistence and effort.

If someone knows how to discuss rationally, why would they discuss irrationally?

I'd somewhat understand 1) not discussing 2) discussion that's fairly removed from rationality issues, like small talk. But why have an intellectual conversation irrationally when you know how to do better? What's the upside there? What value is there to gain? What is *better* about irrationality than rationality? I think it's strictly worse.


curi at 5:38 PM on September 3, 2020 | #17786 | reply | quote

#17785 There's a potential problem with going into detail on the Dagon example. I don't know how much of our disagreement about it is from

1) disagreeing about social dynamics and conversation

or

2) your lack of familiarity with Goldratt

Possibly you'll change your mind significantly about it if (2) is addressed, meaning we won't resolve disagreements about (1).


curi at 1:59 PM on September 4, 2020 | #17820 | reply | quote

> I'm not sure if you read my analysis at http://curi.us/2371-analyzing-quotes-objectively-and-socially or if you plan to discuss it later or what.

I did read your analysis. I thought I would start with my own impressions rather than comment on yours. This should help to balance out any kind of anchoring.

> I think that ideal version would also be bad. You improved the ending by hypothetically imagining he'd learned a bunch of stuff and could link to educational material (which I think is false), but the first three paragraphs are still awful, e.g. multiple brags and the comments about a chain being limited are ~incoherent.

I agree the first three paragraphs are not ideal. I even played with changing the heading of that section but left it to keep it consistent with the previous posts. I probably should've changed it to something else.

Regarding brags, this type of thing doesn't bother me. It is social for sure, but I don't know if it's rationality limiting. It is less pleasant, but still possible to understand what people are trying to say when they speak like this. I think his language is relatively tame. For example if he followed up in your other post I wouldn't give what he said in the first post a second thought. I think it's only because he failed to meet effort expectations that we are even spending time reflecting on it.

Regarding incoherence, I don't know if I would call it that. I have peripheral knowledge of the theory of constraints. I am not a scholar, but not completely ignorant either. I think he is saying if you try to identify the bottleneck, it can be harder than it looks because in real life production capacity is not constant and is better described by a mean and a distribution. At any given moment the bottleneck could shift from place to place in the system due to randomness. That may or may not be true. One could still talk about there being a most likely bottleneck. I don't think he did enough to show how it was limiting. He should have provided an example or more theory. At the same time, I don't think it's incoherent. From a third-party perspective, I don't think the counterexample of software optimization does enough to refute his claim either. Specifically, he said the theory was limiting, not wrong. Things described as limiting still work in some cases; one of those cases could be software.

>> At the same time, I think you could have demanded less effort. (replying in thread)

> You think clicking a link to my reply constitutes a meaningful, problematic effort? Or clicking the right section in the table of contents on the sidebar? One can't really have a discussion with people unless they view that level of effort as negligible. Just writing a paragraph is orders of magnitude harder.

I agree writing a paragraph is harder. I also agree he fails on effort. My only claim is it is possible to keep the bar lower. It all adds up. Not posting inline may have:

- Lowered the social expectation to reply, as he didn't have a reply on the new post

- Unintentionally signaled to him the communication was not with him, but for an audience (feelings of 'it's a trap')

- Added effort, since it's non-standard to reply to comments in this way and non-standard equates to more effort

I don't know how it would have turned out if you responded inline with a one liner. Probably still a low probability of serious engagement. At the same time, I think the odds would have been better inline.

>> Which part of your rubric do you think people most frequently fail?

> Honesty and boundaries.

I get boundaries.

What kind of dishonesty do you think is most prevalent?

>> I may need to revise my assertion. There are plenty of rational conversation partners if you are willing to make friends with them first. I think without friendship most people will fail persistence and effort.

> Make friends how, given that they aren't going to have a rational conversation until *after* being your friend? If rational discussion is held hostage to establishing rapport via non-rational methods first, that seems like quite a limit on people's rationality. And it seems like a concession that people's participation on discussion forums is mostly irrational (since they mostly aren't friends).

I agree needing to build rapport places a limit on rational conversation partners. I also agree this is a concession, and not something I had thought about before. I would say I have lowered my estimate for total rational conversation partners available at this moment.

On the other hand, I had only been replying on less wrong for a month, and I bumped into someone like you, so it's not zero either.

Also, it's unclear to me if the number of people open to rational conversation after some rapport building is super low.

> I disagree in any case, but it's a significantly updated claim, and more limited, so I think it's progress.

You are correct it is significantly updated. I think my position as originally stated was too strong and did not signal clearly the amount of investment required for rational conversation to spin up. This was not clear in what I wrote or even in my own mind at the time. Still, certain parts of my position have remained consistent. Specifically, the need to focus on yourself and your own actions to put the other person at ease. (A.k.a. making friends) I don't think you can fight social with rational. You have to confront social with social to pave the way for rational. At least in organic situations. I can see institutions like science or a particularly well specified/moderated forum getting around this limitation.

---

> If someone knows how to discuss rationally, why would they discuss irrationally?

It may be a number of people on forums like less wrong only want to feel rational, but don't actually want to be rational.

Another way to look at it is, because social and rational modes are mixed, you are seeing rational behavior but towards a social goal. Specifically, the goal of wanting to feel or appear as rational.

> I'd somewhat understand 1) not discussing 2) discussion that's fairly removed from rationality issues, like small talk. But why have an intellectual conversation irrationally when you know how to do better? What's the upside there? What value is there to gain? What is *better* about irrationality than rationality? I think it's strictly worse.

I think some people just do not care. Or stated in kinder terms, they have different priorities. Talking stuff through like this is *really* time-consuming. Lots more time to play video games, watch Netflix, and other things if you skip conversations like this. They want to play video games, watch Netflix, and also be able to talk to their friends about how rational they are because they comment on less wrong.

---

> #17785 There's a potential problem with going into detail on the Dagon example. I don't know how much of our disagreement about it is from

> 1) disagreeing about social dynamics and conversation

> or

> 2) your lack of familiarity with Goldratt

> Possibly you'll change your mind significantly about it if (2) is addressed, meaning we won't resolve disagreements about (1).

I shared some stuff above which probably will help you calibrate how much I know about Goldratt compared to yourself. You can decide if you would like us to dig more into Dagon.


gigahurt at 9:58 PM on September 4, 2020 | #17842 | reply | quote

> Regarding brags, this type of thing doesn't bother me. It is social for sure, but I don't know if it's rationality limiting.

It's revealing about where their effort is going and what their goals are. Not trying for rational progress is limiting re making rational progress.

When someone like that does try to have a rational conversation, then the same mindset that wants to brag and gain status also does other things like want to avoid looking bad and weak. That leads to issues like taking criticism badly and avoiding making clear, refutable claims.

> I think he is saying if you try to identify the bottleneck, it can be harder than it looks because in real life production capacity is not constant and is better described by a mean and a distribution.

That can't make it harder to find bottlenecks than it looks (in light of introductory TOC knowledge) because variance in production is a major, repeated theme of TOC, including in *The Goal*.

One can't correct or limit TOC by stating things TOC already emphasized.

Reviewing what Dagon said:

>> It's important to recognize the limits of the chain metaphor - there is variance/uncertainty in the strength of a link (or capacity of a production step), and variance/uncertainty in alternate support for ideas (or alternate production paths)

The way this is incoherent is it says there's a limit but then the rest of the sentence doesn't present a limit. The pre-dash and post-dash parts of the sentence don't go together to make a thought. One can try to guess meanings anyway but it doesn’t communicate a reasonable idea and I'm unable to guess any meaning where it has a decent point.

It's similar to saying "It's important to recognize the limits of a pet dog – there's variance in dog weights and variance in dog run speeds."

> I think the odds would have been better inline.

I agree but I think doing it that way had too much downside to be worth it for me in that situation. And I think this example helps my case about the shortage of available rational discussion: his preference for inline was irrational and contributed to preventing productive discussion.

> It may be a number of people on forums like less wrong only want to feel rational, but don't actually want to be rational.

I agree. I think that fits my claims.

> Another way to look at it is, because social and rational modes are mixed, you are seeing rational behavior but towards a social goal. Specifically, the goal of wanting to feel or appear as rational.

I partly agree. (Lots of people pursue that goal badly and irrationally.)

> I think some people just do not care. Or stated in kinder terms, they have different priorities.

I agree.

> On the other hand, I had only been replying on less wrong for a month, and I bumped into someone like you, so it's not zero either.

I’ve been going around the internet for ~20 years and found that hardly anyone wants to discuss. I’ve had well over 100k website visitors and that has led to little discussion. You bumped into me at LW because I go around and make myself visible. I think LW is the #1 most prominent public, English place to try to find rational discussion. It’s not like there are a bunch of other similar places to look. I don’t know any promising places to try that I haven’t gotten to yet.

> You are correct it is significantly updated. I think my position as originally stated was too strong and did not signal clearly the amount of investment required for rational conversation to spin up. This was not clear in what I wrote or even in my own mind at the time. Still, certain parts of my position have remained consistent. Specifically, the need to focus on yourself and your own actions to put the other person at ease. (A.k.a. making friends) I don't think you can fight social with rational. You have to confront social with social to pave the way for rational. At least in organic situations. I can see institutions like science or a particularly well specified/moderated forum getting around this limitation.

I don’t think investing in rapport or walled garden entry (e.g. phd and social networking in academia) leads to many rational conversations.

If it did work, some of those conversations would be publicly readable. There’d be plenty of examples. Where are they?


curi at 11:25 PM on September 4, 2020 | #17843 | reply | quote

>> Regarding brags, this type of thing doesn't bother me. It is social for sure, but I don't know if it's rationality limiting.

> It's revealing about where their effort is going and what their goals are. Not trying for rational progress is limiting re making rational progress.

I agree. It's a leading indicator the conversation may not be worth the investment. I just don't see it as blocking. As your analysis points out he is communicating social and objective meaning. Until the social communication takes over, rational conversation is not completely blocked.

>> I think he is saying if you try to identify the bottleneck, it can be harder than it looks because in real life production capacity is not constant and is better described by a mean and a distribution.

> That can't make it harder to find bottlenecks than it looks (in light of introductory TOC knowledge) because variance in production is a major, repeated theme of TOC, including in *The Goal*.

While I want to agree with you, it would be very social of me to do so. After watching many of your videos, and talking with you, I respect you. At the same time, that respect, and even the fact that Goldratt might agree, are both arguments from authority.

If I care about this, then I should probably go read Goldratt. From my outsider perspective (to clarify: I have not read Goldratt, only learned about queue theory from a class) I have to take into account the possibility one or both of you have misunderstood the information. There is disagreement and you both have read the same text. I am unwilling to assume he is lying about reading the text. So, to tie back to an earlier point you made, maybe my gap in Goldratt is too large to see the situation as clearly as you do.

>> Dagon: It's important to recognize the limits of the chain metaphor - there is variance/uncertainty in the strength of a link (or capacity of a production step), and variance/uncertainty in alternate support for ideas (or alternate production paths)

> The way this is incoherent is it says there's a limit but then the rest of the sentence doesn't present a limit. The pre-dash and post-dash parts of the sentence don't go together to make a thought. One can try to guess meanings anyway but it doesn’t communicate a reasonable idea and I'm unable to guess any meaning where it has a decent point.

I think your last point here is the most important one. If you tried but were unable to come up with an explanation to make his words make sense, that's all you can really do other than ask questions. You did ask a question. He responded. You responded. He failed to respond. The failure to respond is where he failed.

>> I think the odds would have been better inline.

> I agree but I think doing it that way had too much downside to be worth it for me in that situation. And I think this example helps my case about the shortage of available rational discussion: his preference for inline was irrational and contributed to preventing productive discussion.

Fair enough. I agree with you that Dagon disqualified himself as a rational conversation partner in this instance. I found everything tolerable until he stopped responding. I think you did too. You continued to engage and also kept your social down, even though you found his approach lacking. I fail him on effort. I know you said you fail him on all six.

>> On the other hand, I had only been replying on less wrong for a month, and I bumped into someone like you, so it's not zero either.

> I’ve been going around the internet for ~20 years and found that hardly anyone wants to discuss. I’ve had well over 100k website visitors and that has led to little discussion. You bumped into me at LW because I go around and make myself visible. I think LW is the #1 most prominent public, English place to try to find rational discussion. It’s not like there are a bunch of other similar places to look. I don’t know any promising places to try that I haven’t gotten to yet.

Yes. It makes sense I would bump into you quickly based on the effort and seriousness with which you pursue discussion in this space.

Perhaps you will be the last person, or it'll take months before I encounter another person. I need to collect more data. I have very limited experience in this area.

Generally, I avoid social media. (I include forums and sites like less wrong in this category) I get most of my information directly from books. (I would say 90%, outside of life experience) I interact primarily with people face-to-face. All the rational conversations I thought about when I answered your question way back on the less wrong forum were conversations I have face-to-face with people.

I mention this only to make clear I don't have the same sample size on the Internet you do. My claim that there are plenty of rational conversation partners comes from experience primarily composed of face-to-face conversations. It also comes from an approach to conversation which gives the other person a lot of affordance in terms of social behavior. The oversight I made was not remembering the investment I have made in many of the people I consider rational conversation partners.

>> You are correct it is significantly updated...

> I don’t think investing in rapport or walled garden entry (e.g. phd and social networking in academia) leads to many rational conversations.

I do think investing in rapport leads to rational conversations. Rational conversation can get you into trouble if you do it with people you do not trust or who do not value you. Rapport building is the process of building up social trust and safety.

Regarding walled gardens, my comment was primarily to acknowledge the work you are engaged in, trying to institutionalize conversation practices to make it more rational. That might work. If there was a forum with more structure and more rules to try and encourage rational conversation I would be willing to give it a try. I have not seen this attempted.

> If it did work, some of those conversations would be publicly readable. There’d be plenty of examples. Where are they?

I think the reason there aren't many publicly readable rational conversations is more attributable to the fact people don't like writing than they don't like having rational conversations. I think there are YouTube videos and podcasts that demonstrate people having rational conversations. This is sad because I prefer writing.

---

I noticed discord requires phone verification now. Is that permanent?


gigahurt at 8:29 AM on September 5, 2020 | #17851 | reply | quote

> I agree. It's [bragging] a leading indicator the conversation may not be worth the investment. I just don't see it as blocking.

I don't think the action of bragging itself is necessarily blocking (it can cause trouble, e.g. when the other person responds with their own bragging, and people get into a bragging contest). I think the social mindset that causes bragging is blocking. It causes other things too.

> I think the reason there aren't many publicly readable rational conversations is more attributable to the fact people don't like writing than they don't like having rational conversations. I think there are YouTube videos and podcasts that demonstrate people having rational conversations. This is sad because I prefer writing.

I don't think that's the explanation. Lots of people do write lots of stuff.

But I don't agree about videos and podcasts. I suggest you pick an example.

> I interact primarily with people face-to-face. All the rational conversations I thought about when I answered your question way back on the less wrong forum were conversations I have face-to-face with people.

I've run into this sort of claim a lot and it's hard to resolve because it focuses on personalized, non-shareable evidence. Let's focus on the public situation more first and maybe come back to this later.

> discord requires phone verification

It's needed due to harassment.

>> That can't make it harder to find bottlenecks than it looks (in light of introductory TOC knowledge) because variance in production is a major, repeated theme of TOC, including in *The Goal*.

> I have to take into account the possibility one or both of you have misunderstood the information. There is disagreement and you both have read the same text. I am unwilling to assume he is lying about reading the text.

I don't think he's lying about reading some Goldratt books. People often badly misunderstand what they read. I do think his presentation of his Goldratt expertise was dishonest.

If the issue is merely whether *The Goal* covers variance, maybe a few quotes can resolve this. The string "fluctuat" appears in the book 37 times, e.g.:

> "Yeah, sure,’’ I say. "But what’s the big deal about that?’’ "The big deal occurs when dependent events are in combination with another phenomenon called ‘statistical fluctuations,’’’ he says. "Do you know what those are?’’

---

> If I say that I’m walking at the rate of "two miles per hour,’’ I don’t mean I’m walking exactly at a constant rate of two miles per hour every instant. Sometimes I’ll be going 2.5 miles per hour; sometimes maybe I’ll be walking at only 1.2 miles per hour. The rate is going to fluctuate according to the length and speed of each step. But over time and distance, I should be *averaging* about two miles per hour, more or less.

> The same thing happens in the plant. How long does it take to solder the wire leads on a transformer? Well, if you get out your stopwatch and time the operation over and over again, you might find that it takes, let’s say, 4.3 minutes on the average. But the actual time on any given instance may range between 2.1 minutes up to 6.4 minutes. And nobody in advance can say, "This one will take 2.1 minutes... this one will take 5.8 minutes.’’ Nobody can predict that information.

---

> What’s happening isn’t an averaging out of the fluctuations in our various speeds, but an *accumulation* of the fluctuations. And mostly it’s an accumulation of slowness*—because dependency limits the opportunities for higher fluctuations.* And that’s why the line is spreading. We can make the line shrink only by having everyone in the back of the line move much faster than Ron’s average over some distance.

---

> What’s bothering me now is that, first of all, there is no real way I could operate a manufacturing plant without having dependent events and statistical fluctuations. I can’t get away from that combination. But there must be a way to overcome the effects.

---

> A mathematical principle says that in a linear dependency of two or more variables, the fluctuations of the variables down the line will fluctuate around the maximum deviation established by any preceding variables. That explains what happened in the balanced model.

---

> "We would have had another set of statistical fluctuations to complicate things,’’ I say. "Don’t forget we only had two operations here. You can imagine what happens when we’ve got dependency running through ten or fifteen operations, each with its own set of fluctuations, just to make one part. And some of our products involve hundreds of parts.’’
 



curi at 9:06 AM on September 5, 2020 | #17854 | reply | quote

> I think the social mindset that causes bragging is blocking. It causes other things too.

Bragging is a bad sign. It probably indicates broader problems below the surface, but it's unclear to me the way in which it is blocking. If you want to share more I am open.

> I don't think [people not liking writing] is the explanation. Lots of people do write lots of stuff.

Yes, though as you point out later not in this particular genre. You are right though, blaming it all on writing is too imprecise.

I do think there are forces that disincentivize writing though, particularly for disagreements. I think there is additional vulnerability in writing down what you think versus just saying it. I think people are less forgiving of slip-ups in writing. I think people expect a higher level of quality. Things written down last longer, so mistakes last longer too.

> But I don't agree about videos and podcasts. I suggest you pick an example.

While I don't like the producer of the YouTube video because they stole the content, and also framed it in a very rhetorical manner, I do think the content itself is reasonable. I spent time trying to track this back to its original source but eventually gave up.

https://www.youtube.com/watch?v=c8kI5BQvl9w

>> I interact primarily with people face-to-face...

> I've run into this sort of claim a lot and it's hard to resolve because it focuses on personalized, non-shareable evidence. Let's focus on the public situation more first and maybe come back to this later.

That works.

> People often badly misunderstand what they read.

I agree.

I don't think misunderstanding by itself violates either of our rationality rubrics though.

> I do think his presentation of his Goldratt expertise was dishonest.

Would you be able to expand on the way it was dishonest? I am not being dense, I just don't want to assume.

---

Goldratt citations:

Thank you for putting that together; I appreciate it. That paints a fairly clear picture that Goldratt was aware of, and wrote about, the issue I thought Dagon might have meant when I was trying to give him the benefit of the doubt.

So he probably was misunderstanding, or not communicating his point in a clear manner.


gigahurt at 7:46 PM on September 5, 2020 | #17871 | reply | quote

#17871 OK I'm generating a transcript of the video. I have low expectations because I've heard Sam Harris before and written about him:

https://curi.us/2135-bad-sam-harris-brain-scanning-research-paper

https://curi.us/2136-criticism-of-sam-harris-the-moral-landscape

https://curi.us/2137-sam-harris-vs-capitalism

I also have information re SH in relation to DD but it's not necessary.

>> I do think his presentation of his Goldratt expertise was dishonest.

> Would you be able to expand on the way it was dishonest?

He is not a Goldratt expert, but brags that he is.

One of the ways he did that, which may be invisible if you aren't familiar with Goldratt's books, was when Dagon brought up the *Theory of Constraints* book. That's a brag that he reads more obscure, dense, non-fiction Goldratt stuff while more introductory books like *The Goal* are good to recommend to his inferiors.

> Bragging is a bad sign. It probably indicates broader problems below the surface, but it's unclear to me the way in which it is blocking. If you want to share more I am open.

Bragging takes effort and attention away from understanding each other. It wastes thinking and communication bandwidth. That can be fairly small sometimes (not always) but it adds up when people do lots of social things.

The mindset that causes bragging is a social climber mindset. It also causes other things like hiding weakness, confusion and ignorance, and trying to get audiences on *your side* – that you're a biased sports fan for – instead of figuring out what's true.


curi at 8:10 PM on September 5, 2020 | #17872 | reply | quote

#17872 Dagon was also dishonest by pretending he wasn't bragging when he was.


curi at 8:11 PM on September 5, 2020 | #17873 | reply | quote

#17873 Dagon was also dishonest by pretending to thank me and be an ally when actually he was disagreeing with me and making vague attacks on my view.


curi at 8:12 PM on September 5, 2020 | #17874 | reply | quote

#17875 They apparently limited free plans to 40min of transcribing per upload so here's part 2 of 2:

https://otter.ai/s/wX75t6y8QlSHF8lKWGCR_Q


curi at 3:21 PM on September 6, 2020 | #17887 | reply | quote

> > But I don't agree about videos and podcasts. I suggest you pick an example.

> While I don't like the producer of the YouTube video because they stole the content, and also framed it in a very rhetorical manner, I do think the content itself is reasonable. I spent time trying to track this back to its original source but eventually gave up.

> https://www.youtube.com/watch?v=c8kI5BQvl9w

I thought it was awful. I stopped well before the end. I don't know what seemed valuable to you.

My video commentary: https://youtu.be/3_ONc0VoZkM


curi at 4:13 PM on September 6, 2020 | #17889 | reply | quote

>>> But I don't agree about videos and podcasts. I suggest you pick an example.

>> While I don't like the producer of the YouTube video because they stole the content, and also framed it in a very rhetorical manner, I do think the content itself is reasonable. I spent time trying to track this back to its original source but eventually gave up.

>> https://www.youtube.com/watch?v=c8kI5BQvl9w

> I thought it was awful. I stopped well before the end. I don't know what seemed valuable to you.

> My video commentary: https://youtu.be/3_ONc0VoZkM

Regarding the Trump shithole countries quote, that quote came from Democrats who attended a meeting with the President. Trump denied it, so it's actually disputed.

It does sound like the sort of thing he might say to me. OTOH, the very fact that it sounds like something he might say makes it politically useful for the Democrats to lie about. Given that I think the Democrats would be willing to falsify ballots to steal an election, I would not put it past them to lie about what was said in a meeting in order to hurt Trump politically.


Anonymous at 4:24 PM on September 6, 2020 | #17891 | reply | quote

## Main point / argument

First, I want to start off with a concession. We started off this conversation with the question "is there a shortage of rational conversation partners?" Since our conversation began I have gained a greater understanding of how you evaluate that, taken on some of those values myself, and also realized how many people fall short of those values, particularly on the Internet and when dealing with strangers. Even friends, when compared to the standards we have discussed, oftentimes fall short. So, when compared to the ideal, I concede there is a shortage of rational conversation partners.

The world is not as rosy as I thought it was, or as rosy as I made it out to be in my original response on LW. I think the knowledge I have gained so far in this exchange will help me optimize the conversations I have in the future, as well as my overall thought process.

However, I want to introduce one nuance to this concession. From a practical perspective, a lack of perfectly rational conversation partners does not stop progress, only makes it less efficient.

### Lack of perfectly rational conversation partners does not stop progress, only makes it less efficient

#### Premises

- The goal of rational conversation is to advance our understanding.

- Strangers will operate socially by default.

- The more people trust you the more rational they are willing to be.

- The process of building trust is the process of making friends.

- We can transform strangers into friends. By doing this we gain access to someone who can discuss at sufficient (say 80%+) rationality.

- A conversation with an 80% rational conversation partner will have moments that are rational and moments that are social.

- We can ignore the social moments and harvest the rational moments.

- Every time we harvest a rational moment we advance our understanding.

- From a practical perspective we can proceed with imperfect conversation partners and advance our understanding.

#### Conclusion

- Lack of perfectly rational conversation partners does not stop progress, only makes it less efficient.

If you agree with this, then progress can still be made without perfectly rational conversation partners. And the question becomes: what is the best use of our time? Optimizing rational conversation partners or some other part of the philosophical value chain?

If you do not agree with this, then progress cannot be made without correcting this. And it makes sense we need to focus on and unblock this before proceeding on to something else.

I think progress can be made.

Since we started talking about queue theory, I am partially thinking in those terms. When I think about my own bottlenecks to understanding and contributing insights to society, I suspect my own ignorance, my imperfect memory, my inability to make the right connections, my own confidence, and other personal failings are my primary bottlenecks. Perfectly rational conversation partners are not my bottleneck.

---

## Further discussion on other topics

### Dishonesty

>>> I do think his presentation of his Goldratt expertise was dishonest.

>> Would you be able to expand on the way it was dishonest?

> He is not a Goldratt expert, but brags that he is.

This one is tricky, because we don't know if Dagon actually thinks he is an expert or knows he is not and pretends to be one. If he thinks he is an expert the bragging is honest, and maybe even merited from a subjective perspective. He may not realize he is wrong. If you say something that you think is true and you have reason to believe it is, but it is not, aren't you still considered honest? Dagon's belief could be 'I read and understood both books!'

> Dagon was also dishonest by pretending to thank me and be an ally when actually he was disagreeing with me and making vague attacks on my view.

I think Dagon was genuinely excited to recommend *The Goal*. In that sense I think his thank you was genuine as well. Specifically, he was thankful you brought up Goldratt.

I think it is possible to thank someone for a portion of their contribution and criticize them for another without it being dishonest. I don't see how this is dishonest. I don't think you think mixing agreement and disagreement is dishonest either, which makes me think I am missing a nuance in your observation. What led to your judgment he was signaling you were allies such that subsequent criticism read as betrayal?

### Bragging

One thing you mentioned as an issue with Dagon was bragging. You claim bragging, like many other social behaviors, takes away from rational productivity.

I agree it is inefficient from a truth seeking perspective to brag.

One problem I think we need to deal with, before we can hold people accountable for bragging, is how to diagnose it clearly. The difficulty of diagnosing this, in my mind, should also temper how much energy we spend mitigating it.

In your Lying essay you made the point that you should be transparent with your audience about where you are coming from when you make a claim. I agree with that.

Dagon's comments, where he explains how he gives away books and references other pieces of Goldratt's work, could be interpreted as him signaling where he is coming from. He is saying, 'you can trust me because I am very passionate and well read about this topic.' At the same time, maybe his primary motive was just to look impressive. How do we know?

You also mentioned Dagon was dishonest because he was bragging but pretending that he was not bragging.

How do you distinguish between someone who is not bragging and someone who is bragging but pretending not to? Or someone strictly not bragging and someone who is proud, but trying to suppress it for the sake of social or rational purposes? Is there any practical difference between any of these scenarios when it comes to having a rational conversation?

---

Comments re: Video analyzing the YouTube video I shared:

- I like that you started the video by defining a rubric to judge the video.

- I agree that two people agreeing with each other isn't as productive as two people disagreeing with each other. I think both types of conversation can be rational but the one where views are fairly aligned has less potential for generating a breakthrough.

- I like the idea of withholding judgment until the end of a particular section.

- Standards are lower in voice: yes

- I think it's fair to say if they had zeroed in on an example it might have been more efficient. These were not your exact words, but I feel like you expressed this sentiment.

- I think you wanted the conversation to get more done than either of them had in mind. I think Sam's primary question was something like: aren't you jumping to conclusions too quickly about people being racist? I think they both learned things about each other's position. No breakthroughs, but the engagement was not purely social. I found it serviceable (though not ideal)

> I thought it was awful. I stopped well before the end. I don't know what seemed valuable to you.

I think both speakers were representing their views honestly. I think they also were making an effort to share their views and understand the views of their partner. I don't think they got to the bottom of it. But still, I think both of them are searching for truth. Even if they did not make progress, there was value in testing their own claims and not finding compelling counterarguments.

---

## Meta

Also as a side note, apologies for the delayed response. My children have begun online school (online because of the virus), and it is taking up a good deal of my discretionary time helping them with that.


gigahurt at 7:01 PM on September 9, 2020 | #17954 | reply | quote

#17954

Re the Harris discussion, can you quote a few sentences you thought were good?

> - Lack of perfectly rational conversation partners does not stop progress, only makes it less efficient.

I agree that people don't need to be perfect. But irrationality sometimes leads to things like refusing to discuss further, refusing to answer a question or criticism, or refusing to think about an idea. It often does stop discussions rather than making them less efficient. Do you agree?


curi at 10:55 PM on September 9, 2020 | #17968 | reply | quote

> Re the Harris discussion, can you quote a few sentences you thought were good?

I spent a lot of time putting together a simplified version of their discussion. But it's pretty long, and I know you asked for a sentence or two. Rather than give a sentence or two or inundate you with a huge amount of content to sift through, I'll explain what I see as beneficial about this conversation:

They began out of sync on the topic and also on understanding each other. The main point of contention seemed to be the criteria or standard by which one can categorize a statement as racist or not. At the beginning of the conversation they both had caricature-like views of the other's position. WA thought Sam would accept only a person declaring themselves a racist as admissible evidence. Sam thought WA believed anything that could be construed as racist should be construed as racist. By the end both had gained a clearer understanding of the other's perspective. Sam conceded that with sufficient context assumptions can be made. WA conceded that cancel culture is too aggressive. There was even some error correction, specifically Sam's interpretation of the Laura Ingraham quote. While the conversation is not a paragon of rational conversation, I think it is serviceable.

> I agree that people don't need to be perfect. But irrationality sometimes leads to things like refusing to discuss further, refusing to answer a question or criticism, or refusing to think about an idea. It often does stop discussions rather than making them less efficient. Do you agree?

I agree that people sometimes refuse to discuss further, refuse to answer a question or criticism, and refuse to think about an idea. I also agree these types of behaviors are not productive and would categorize them as irrational. I also think it's fair to say those types of reactions end the discussion rather than make it less efficient.

If I agree, which I have, that the Internet has a shortage of rational conversation partners, then would you want to move on to discussing solutions? I am okay remaining on the current topic, changing to solutions, ending the conversation, or revisiting leaf nodes of the conversation tree we both find interesting.


gigahurt at 4:56 PM on September 13, 2020 | #18009 | reply | quote

#18009

> I spent a lot of time putting together a simplified version of their discussion. But it's pretty long, and I know you asked for a sentence or two.

Feel free to post the long version. I can skim or do targeted search for something relevant. Also I'd still like some specific sentences.

> While the conversation is not a paragon of rationale conversation, I think it is serviceable.

Why did you pick it? I was looking for an impressive example with e.g. progress on improving human knowledge, not *sometimes people go from egregiously straw manning each other to not doing that and getting closer to some well known, standard positions*.

> If I agree, which I have, that the Internet has a shortage of rational conversation partners, then would you want to move on to discussing solutions?

I think it's worth exploring this more first.

> I agree that people sometimes refuse to discuss further, refuse to answer a question or criticism, and refuse to think about an idea. I also agree these types of behaviors are not productive and would categorize them as irrational. I also think it's fair to say those types of reactions end the discussion rather than make it less efficient.

I think most discussions are in serious danger of this happening if continued in certain ways, including the SH/Woke discussion: let me take over talking for either of them and make a bunch of correct arguments, and they'd refuse to continue. The conversation only continued as much as it did due to the limited nature of the arguments made (which may be due to lack of skill and knowledge rather than suppression, idk).

If you agree with that, the next issue to consider is what things cause conversations to end, and what don't. Where are the limits and what are their causes? And sorting out the rational and irrational limits, e.g. people will stop discussing if you're just flaming but that's OK (there are edge cases where e.g. someone is accused of flaming but doesn't think they're flaming, but never mind, I don't think we need to worry about those currently).


curi at 5:21 PM on September 13, 2020 | #18010 | reply | quote

> Feel free to post the long version. I can skim or do targeted search for something relevant. Also I'd still like some specific sentences.

Here it is with fewer words. I tried to be honest about the edits, but I may have made mistakes. For a canonical understanding of either opinion, the audio recording is recommended:

> WA: So then [for Laura Ingraham to] traffic in those words, replacement, and give it an explicitly race-related valence, and then to turn around and deny that you're trafficking in race baiting. It just beggars belief.

> SH: But I mean, the problem with the dog whistle hypothesis is that it really is unfalsifiable. It is conspiracy thinking of a sort that gives us all these conspiracy theories whose adherents just cannot be reasoned with.

> WA: I really want to stay away from always and never and, you know, yes, if you turn your dog whistle detector up to 11 and you try to see it everywhere, that's a bad approach. But I also think it's a bad approach to never see it. And I think you're verging in this case and in other cases on never seeing it. I think you're narrowing the possible field to literally utterances where someone says I am a racist.

> WA: [When] you're Laura Ingraham, in the quote we're discussing, you're tipping your hand by saying no, this is about conservatives being replaced.

> SH: It's possible that I didn't understand the context of this Ingraham quote.

> WA: She means people from South and Central America. It's very clear in context.

> SH: Right. Okay. So yeah, I didn't know that, the connection to immigration here. I thought she was more or less just saying, basically, she was just summarizing intersectional identity politics and let's get rid of the old white conservatives.

> WA: It's much easier for Donald Trump. Take shithole countries, you know, shithole countries. And to say, I think they're shithole countries because they have black people in them.

> SH: Well, again, no, I mean, for me, it's not. The question is, are these utterances evidence of the crime?

> WA: So what utterances are ones that fall into that?

> SH: [Using the N word on apprentice tapes]

> WA: So I guess I would be worried about setting the bar at, you have to be a celebrity who's taped on camera repeatedly using the N word.

> SH: That's not the bar. If you're going to refer to shithole countries, as a rich guy, pseudo-billionaire who likes everything in his life gilded, the salient variable there, the necessary variable in order to understand the utterance, is not race, it is squalor and poverty and disease. He's talking about the developing world. And if you could find me a country filled with white people who are as poor and chaotic as what you find in Congo, well, then he's talking about them too, right, or he would be.

> WA: [I think it was race based]

> SH: If statements like that are unequivocal signs of the speaker's racism, then they have to work for other speakers, and then anyone, no matter how blameless their record, if they say something about shithole countries, they're racist, right? And I just don't think that runs through.

> WA: No, I think I think context matters.

> SH: But this is what has given us cancel culture. It's the narrow fixation on the magical power of words, given their worst conceivable interpretation.

> WA: I take your point about how these things can be underdetermined and how, at the very least, like with the example that I started with, [...] if you accuse someone of something and they are very plausibly able to deny it, you lose a point, I get that. On the other hand, the solution cannot be: We will never call anyone a racist unless they say I am a racist.

> SH: But that's a straw man version of my position here. I mean, clearly, there are racists who will answer to the name. Clearly there are people who are, you know, bigoted, and they're so lacking in introspection, or awareness of these issues, that they don't even understand the shape of the dark cloud that they're trailing behind them, right? They're not ideological racists, but they're Archie Bunker types. But then there are people who just are using words in ways that would have been quite normal 20 years ago, and everything is being subjected to a different litmus test.

I see progress toward a more rational space. While I agree it is not immediately helpful to the listener from a factual perspective, I think it's an illustration of people engaging on a controversial topic and moving towards a more rational place. It did not degrade into name calling, threats, or folks dropping out. It's a low bar, but better than most humans would do in this situation.

> Why did you pick it? I was looking for an impressive example with e.g. progress on improving human knowledge, not *sometimes people go from egregiously straw manning each other to not doing that and getting closer to some well known, standard positions*.

Apologies for missing the mark expectation-wise. I picked it because it was a long-form conversation I had listened to recently that was not thoroughly one-sided and where the rational/social mix seemed reasonable.

I hypothesize that not many public conversations exist that improve human knowledge. I think progress more often happens within a single mind, or between people with such a high-quality relationship that they talk too much to record 99% of it for posterity.

I do think people have personal breakthroughs during public conversation (I have had some in this conversation), but that's a pretty low bar considering how ignorant most of us are.

Pushing human knowledge is quite a high bar. I don't think that's the bar for rational conversation. I think it just needs to contain reasons and exploration. I don't even think it needs to make progress. Meaning, I don't think the people having it need to change their views or discover something new. I think they just need to provide reasons and listen to the reasons with an open attitude.

>> gigahurt: [Move on to solutions now?]

> curi: I think it's worth exploring this more first.

Sounds good.

> If you agree with that, the next issue to consider is what things cause conversations to end, and what don't. Where are the limits and what are their causes? And sorting out the rational and irrational limits, e.g. people will stop discussing if you're just flaming but that's OK (there are edge cases where e.g. someone is accused of flaming but doesn't think they're flaming, but never mind, I don't think we need to worry about those currently).

Okay. Here are some initial thoughts:

- Lack of new data. If I have shared everything I know and the other person has as well and the conversation starts to go in circles then the conversation will end.

- Lack of common understanding. If I try to talk to someone about a topic they know nothing about and lack the foundation to learn, then they will end the conversation.

- Lack of trust. If a conversation moves towards an area that makes the person feel insecure then they will end the conversation.

- Lack of priority. If the person has other things in their life that are higher priority then they will end the conversation.

What are your thoughts? What items would you add?

----

Meta: As I mentioned, the kids' school is saturating my time during the week. My family is making adjustments, but it's a work in progress. I am probably going to be less responsive until late October, but I will still do my best.


gigahurt at 5:26 AM on September 19, 2020 | #18069 | reply | quote

#18069 gigahurt, I'm being harassed by someone who has repeatedly made fake accounts and pretended to be a new person who wants to learn from me. They've been stalking, sock puppeting and harassing me on and off for two years, and they're active currently.

I've just been banned from Less Wrong because the sock puppeter followed me there and caused trouble with the goal of preventing me from having productive discussions there. He was recently commenting here as Periergo and then trolling anonymously.

I noticed that:

- your comments show signs of using privacy protection or hacking software which sends false information to my web server

- you didn't want to register a phone number with Discord

- you didn't accept my request to follow you on Twitter and your tweets are not publicly visible

- your Less Wrong account was created recently

- you aren't using your real name

I want to respect your privacy, and I think it's highly unlikely that you're the stalker, but I've been fooled before. For my peace of mind, would you please send a Direct Message to @curi42 from your Twitter account @gigahurt_ to confirm that you control it? That would let you keep your tweets private while linking you to an account created in 2008.


curi at 11:51 AM on September 19, 2020 | #18072 | reply | quote

No worries. I am sorry to hear about the recent ban on Less Wrong. I replied on Twitter. Let me know if there is more I can do.


gigahurt at 5:09 PM on September 19, 2020 | #18077 | reply | quote

OK, I can confirm for everyone that gigahurt is not Andy. Thanks.


curi at 5:26 PM on September 19, 2020 | #18078 | reply | quote

> - Lack of new data. If I have shared everything I know and the other person has as well and the conversation starts to go in circles then the conversation will end.

>

> - Lack of common understanding. If I try to talk to someone about a topic they know nothing about and lack the foundation to learn, then they will end the conversation.

>

> - Lack of trust. If a conversation moves towards an area that makes the person feel insecure then they will end the conversation.

>

> - Lack of priority. If the person has other things in their lives that are higher priority then they will end the conversation.

I agree those happen and that, to a first approximation, they are OK. There can be cases where they’re problematic too. Due to their perceived legitimacy, they’re sometimes used as excuses when actually ending conversations for other reasons.

I think lots of conversations end because someone is biased in favor of “their” side/position/conclusion, and is losing the debate.

I think lots end because people don’t want to answer challenging questions. Often they have no good answer and don’t want to admit it.

I think conversations often end because people don’t want to address certain specifics, and don’t want to admit or explain that preference, and then they take conversational actions to avoid the specifics. If the other person doesn’t take some sort of unclear hint and back off, then the person will leave (often with an excuse).

There are boundaries for what ideas people think are reasonable to consider. If you go outside the boundaries, they shut down conversation.

Lots of people, when it comes down to it, simply will not answer short, simple, direct questions with literally correct answers (when there’s a possibility it leads to them being wrong about something they’d rather not be wrong about).

There are lots of ideas about rationality that people approve of in the abstract but then hate when applied to themselves or to some other specific cases. E.g. lots of people think my Paths Forward is great … until I judge they’re wrong about something that they don’t want to be wrong about, and I want them to have a Path Forward so that, if I’m right, correcting them is possible. They like Paths Forward when they see it as a criticism of how dumb most people are, but leave when they’re asked to change themselves in some way.

The overall situation is: it’s extremely hard to correct people on important points when they don’t want to be corrected on those points. And there are a lot of points people don’t want to be corrected on.

And: People (like Harris) can look somewhat rational in reasonably controlled circumstances, but would quickly reveal substantial irrationality if challenged in certain ways, e.g. being “grilled” with clear, short, specific questions about facts for the purpose of showing how one of their claims is in conflict with uncontroversial facts.

Overall, I think it’s very hard for intellectual controversies to be resolved because of how people do and don’t discuss. E.g. there’s basically no way to get any inductivist to go through Popper’s arguments, and various other arguments, in an organized way, and try to reach a conclusion. Lots of people essentially bet their careers on induction being correct, but ~none of them will answer Popper or endorse any particular answer to Popper that anyone else has provided. This sort of situation repeats with many other topics. Arguments exist to refute some positions, and tons of people ignore those arguments and won’t debate, and there’s a severe shortage of conversation capable of resolving these disagreements.

> Meta: As I mentioned kids school is saturating time during the week. My family is making adjustments, but its a work in progress. I am probably going to be less responsive until late October, but I will still do my best.

ok


curi at 12:28 PM on September 21, 2020 | #18102 | reply | quote

#18102 PS I don't always answer everything you say. If you want an answer to a particular thing, please say so. I don't mind if you bring things up again in a new way or using a quote of what you already said. I'm not trying to avoid them. Not answering something, from me, is not a hint that it crossed some line or is a sensitive subject or something like that.


curi at 12:30 PM on September 21, 2020 | #18103 | reply | quote

Apologies for the long lull. Heading into the end of December I should have some cycles to try and engage again due to time off. Our conversation is not entirely in my RAM anymore. I am going to try and connect back to the main theme and probe a small topic. Two posts ago you summarized our topic and your position as follows.

> The overall situation is: it’s extremely hard to correct people on important points when they don’t want to be corrected on those points. And there are a lot of points people don’t want to be corrected on.

> Arguments exist to refute some positions, and tons of people ignore those arguments and won’t debate, and there’s a severe shortage of conversation capable of resolving these disagreements.

I think the above is fair enough. I don't know if I would contest these anymore after our previous conversation.

I agree there is a shortage of rational conversation partners. At the same time, we need rational conversation partners to operate effectively in the world. Operating from a single perspective and single personal history exposes us to a lot of potential error regarding topics which, properly understood, help us survive and move towards truth. My approach so far has been to convert folks who seem uninterested in rational conversation at first into people who want to engage in it through personal connection and friendship. I have not found a better way. Using that technique, I have 2 or more rational-enough conversation partners in each of my spheres of operation (technical work, people work, personal life, etc). The conversations lead to corrections on my side, which give me confidence I am getting feedback. Between those relationships and books I am mapping the terrain as best I can.

I think personal connection is the only way to create rational conversation partners. I know it is a strong claim, and I am not an expert and may be wrong, but I think it could be good to motivate the conversation. It may be that rational debate in the public sphere is not possible. It may be that the path forward is surrender of that territory and turning towards the private. It may be that the only way to make progress in the public sphere is with some distance between parties, like one person reading a book or paper another person wrote years ago in an open way.


gigahurt at 6:38 AM on December 12, 2020 | #19139 | reply | quote

#19139 What specific causal mechanism would make public discussions fail but would not apply to (some types of?) private discussions? Do you have something in mind? Or are you just looking at some broad results and thinking maybe that's the pattern but the cause is unknown to you?


curi at 11:32 AM on December 13, 2020 | #19151 | reply | quote

gigahurt

I am looking at broad results and thinking there might be a pattern. I suppose one way to decisively disprove my theory would be to point at a single instance of rational public conversation. Any suggestions?


Anonymous at 6:52 AM on December 24, 2020 | #19232 | reply | quote

#19232 I think we have different standards/goals for what is a productive discussion. We judge conversations in different ways. And this makes it hard to analyze this stuff and reach the same conclusions. So e.g. I think I'd reach a different conclusion about some of the private conversations that you're thinking about but I don't have access to.

Some context:

There is a culturally normal way of looking at conversations and rationality. It's pretty broad and vague. People can both think in a normal way and disagree. It's hard to nail down the borders of it. But many things aren't near the borderline, so they can be identified as normal or unusual and most people would agree.

Like most people, I started that way using knowledge I picked up from my culture. Over the years I found it was wrong in some ways and started diverging. I've now diverged quite a bit in how I judge conversations (and, more broadly, thinking, learning, decision making and problem solving).

I've found a lot of things don't work as well as people think. Academia's failings, which you may have some familiarity with, are an example that a significant subculture has a lot of criticisms of. And a lot of what academia does is a conversation of sorts, like publishing papers and then some get critical replies, some get followups expanding on the ideas, etc.

I think standards can be objective. It's not just a choice of how much quality you want. Some things work at achieving goals and some don't. E.g. someone may read a book about quitting smoking or have a back-and-forth conversation about it, with the goal of getting effective help quitting smoking. Then, at the time, they might think "this was great; I learned a lot". But then they fail to quit smoking and the ideas from the book or conversation didn't actually help significantly. So, despite the positive impression the person had, objectively it wasn't effective.

I've also found my bias is consistently toward overestimating effectiveness, not underestimating it. I think I'm still systematically biased that way, both due to cultural intuitions (in other words, my first impressions tend to overrate the effectiveness of conversations, people's competence, and similar things – I've recalibrated that, but less so than my logical analysis) and for a methodological reason. The methodology is basically identifying errors and problems and adding them to the list of errors I know about. If you do that, you start way too positive when you don't know a lot of errors, then you find more and gradually reach more negative conclusions. So in a particular case you might see 2 flaws out of 10 and then a decade later you know about more flaws and can see 6 out of 10. But you aren't going to find every flaw. Requiring a specific, clear, explicit reason for each flaw makes non-flawed the default or null hypothesis, which is a systematic bias.

There's extreme social pressure to have this sort of bias. If you go up to people and say "I randomly sampled 100 people and found a lot of idiocy, therefore I'll assume you're an idiot until proven otherwise" (or some more subtle version of that) you're going to get very negative reactions. Our culture thinks competence is the default, and expects it, and moving the default when you interact with someone is seen as assuming they might be a bad person who is unusually dumb.

BTW, David Deutsch did the same thing. The result was that, in our conversations, whoever had a more negative interpretation of someone and their posts (we'd talk privately about public forum discussions) was right ~100% of the time rather than an unbiased 50% of the time. Because the more negative interpretation was always for a specific reason which could then be explained to the other person, who'd be like "oh, that makes sense, I hadn't thought of that [so I defaulted to a more positive view]".

---

Anyway I didn't get to what to do about this or how to move forward (unfortunately I don't have a simple, easy answer in mind, anyway – I wasn't building up to that) but I think that's enough for this message.


curi at 1:13 PM on December 24, 2020 | #19233 | reply | quote

> I think we have different standards/goals for what is a productive discussion. We judge conversations in different ways. And this makes it hard to analyze this stuff and reach the same conclusions. So e.g. I think I'd reach a different conclusion about some of the private conversations that you're thinking about but I don't have access to.

Am I correct that you are suggesting an even stronger position: that rational conversation does not take place publicly (what I stated) or even privately within a personal relationship? (Which, given the private conversations, you suspect you could demonstrate.) Or is the assertion that perhaps our scales of rational conversation are still out of sync enough that, by my measure, you would disagree across the board, and render the distinction between public and private moot? (Which would be fair enough.)

In this post (and I am not critiquing, only observing) I see a shift in language from rationality to productivity/effectiveness. As you mentioned, reading a book on quitting smoking, having the arguments resonate, but then having no effect on behavior. To me that is a much broader conversation, and it allows things I would not permit in conversation striving to be rational. For instance, rhetoric in general I see as effective communication that appeals not to a person's rational faculty but rather to their emotions. I see rational conversation as fundamentally anti-rhetorical (going out of the way to blunt emotions).

At the same time I am undecided on whether or not rational conversation is strictly superior to rhetoric/propaganda-oriented communication when it comes to productivity and results. In fact it's almost certainly not as effective, at least in the short term. In the long term rational conversation still may not be of benefit, as a function of how much error correction you are forgoing, not gaining through other means, and the harm it causes. (Again, this is through the lens of productivity/effectiveness.) Basically, unless I am using people as a means towards error correction, it may be largely ineffective/counterproductive to engage in rational conversation (as the stereotypical 'boss' type seems to believe). The main evidence that supports this point is how much of our world is not governed by rational conversation. From an emergent perspective I would expect to see more rational conversation if it outperformed other modes.

I hypothesize rational conversation is strictly superior to other forms of conversation when your goal is virtue (that being virtue as defined by Greek Antiquity/Christianity/etc.).

I have more thoughts on virtue and how it relates to rationality, but I will leave it here for now.

I am interested to hear what you think.


gigahurt at 1:13 PM on January 2, 2021 | #19331 | reply | quote

Philosophically, rationality is stuff that *is capable of progress* (error correction, improvement). It doesn't get stuck or refuse to listen to criticism or to new ideas.

Practically, rationality is *the most effective way of approaching stuff*. There are some edge cases (how do you deal with irrational people? The short answer, after considering walking away, is that you rationally take into account the situation & constraints you're dealing with, even when that means doing/saying things that superficially seem irrational. It sometimes may be rational to use rhetoric, but it's bad to be in a life situation where that's common. Like how lying routinely was rational for lots of people in the USSR, but only because they were stuck in a bad situation.)

I think rationality (and morality) are the most practical and effective things. They aren't compromises with downsides like lower effectiveness.

> From an emergent perspective I would expect to see more rational conversation if it outperformed other modes.

It's suppressed by people's hostility to it and their focus on social status hierarchies and dynamics. People also don't know how to do it (they certainly didn't learn it in school).

---

I don't think private conversations make a big difference. There are some advantages and ways they can work better, but basically it's still the same people who aren't very good at rationality.

I think virtually everyone has *bounded rationality*. There are limits on what progress or improvements they're open to. They may think a little outside the box, but only a limited amount. They may listen to some criticism, but only if it's adequately similar to what they already know, doesn't violate various taboos, etc. Even within these boundaries, most people are pretty bad at rationality and accomplish far less than they could.

In some sense, unbounded rationality is the only kind. It's a similar issue to freedom of speech. If there are boundaries on what you can say, then you don't have freedom of speech, even if there is a bunch of stuff that is within bounds. E.g. if you could criticize everything except social justice, that would be majorly different than a situation with actual free speech. One of the reasons is that issues are so interconnected and there's an ongoing pressure towards consistency (e.g. criticism of social-justice-adjacent ideas gets censored, and then also ideas adjacent to those, and so on, because they lead to and imply disallowed criticisms of social justice ... or, in the alternative, those other things are not censored and you move towards a system where anything can be said, because what's the point of maintaining a strict, arbitrary rule against a few specific utterances when you can explain everything related to them, and the ideas are not effectively suppressed whether the last piece of the issue is stated explicitly or not).

---

For a more concrete approach to the matter: IME most people are very resistant to some (not all) corrections of simple, factual matters. Similarly it's sometimes (not always) very hard to get clear, unambiguous, correct answers to simple, direct questions about factual matters. People often do things like contradict themselves *and screw up the discussion enough that it can't be fixed*, or repeatedly provide clarifications that don't clarify a key ambiguity. This stuff is generally not thorough – they do clarify some things, answer some questions successfully, etc. – which may be why it looks to other people like some partial success and rationality, but I see it as something being fundamentally broken and incompatible with unbounded progress (and this results in lots of stuff actually getting stuck – it's not just a theoretical problem).

You might think if someone was stuck on X and Y you could find some workaround, Z, that works even if the X and Y don't get fixed. That does work sometimes. But often it doesn't. One reason is that people are often coming up with excuses to avoid the conclusion that X, Y or Z are all aiming at, so no matter the way of reaching the conclusion, they'll sabotage it. A lot of rationality issues have some kinda generality and power to affect many things.


curi at 1:10 PM on January 8, 2021 | #19416 | reply | quote
