stucchio and Mason
stucchio retweeted Mason writing:
"Everything can be free if we fire the people who stop you from stealing stuff" is apparently considered an NPR-worthy political innovation now, rather than the kind of brain fart an undergrad might mumble as they come to from major dental work https://twitter.com/_natalieescobar/status/1299018604327907328
There’s no substantial objective-world content here. Basically “I disagree with whatever is the actual thing behind my straw man characterization”. There’s no topical argument. It’s ~all social posturing. It’s making assertions about who is dumb and who should be associated with what group (and, by implication, with the social status of that group). NPR-worthy, brain fart, undergrad, mumble and being groggy from strong drugs are all social-meaning-charged things to bring up. The overall point is to attack the social status of NPR by associating it with low status stuff. Generally smart people like stucchio (who remains on the small list of people whose tweets I read – I actually have a pretty high opinion of him) approve of that tribalist social-political messaging enough to retweet it.
Yudkowsky
Eliezer Yudkowsky wrote on Less Wrong (no link because, contrary to what he says, someone did make the page inaccessible. I have documentation though.):
Post removed from main and discussion on grounds that I've never seen anything voted down that far before. Page will still be accessible to those who know the address.
The context is my 2011 LW post “The Conjunction Fallacy Does Not Exist”.
In RAZ, Yudkowsky repeatedly brings up subculture affiliations he has. He read lots of sci fi. He read 1984. He read Feynman. He also refers to “traditional rationality” which Feynman is a leader of. (Yudkowsky presents several of his ideas as improvements on traditional rationality. I think some of them are good points.) Feynman gets particular emphasis. I think he got some of his fans via this sort of subculture membership signaling and by referencing stuff they like.
I bring this up because Feynman wrote a book titled "What Do You Care What Other People Think?": Further Adventures of a Curious Character. It’s the sequel to the better known "Surely You're Joking, Mr. Feynman!": Adventures of a Curious Character.
Yudkowsky evidently does care what people think, and he has provided no indication that he’s aware he’s contradicting one of his heroes, Feynman. He certainly doesn’t provide counterarguments to Feynman.
Downvotes are communications about what people think. Downvotes indicate dislike. They are not arguments. They aren’t reasons it’s bad. They’re just opinions, like conclusions or assertions. Yudkowsky openly presents himself as taking action because of what people think. His comment basically amounts to openly saying “I use power to suppress unpopular ideas”. And he gave no argument himself, nor did he endorse/cite/link any argument he agreed with about the topic.
Yudkowsky is actually reasonably insightful about social hierarchies elsewhere, btw. But this quote shows that, in some major way, he doesn’t understand rationality and social dynamics.
Replies to my “Chains, Bottlenecks and Optimization”
https://www.lesswrong.com/posts/Ze6PqJK2jnwnhcpnb/chains-bottlenecks-and-optimization
Dagon
I think I've given away over 20 copies of _The Goal_ by Goldratt, and recommended it to coworkers hundreds of times.
Objective meaning: I took the specified actions.
Social meaning: I like Goldratt. I’m aligned with him and his tribe. I have known about him for a long time and might merit early adopter credit. Your post didn’t teach me anything. Also, I’m a leader who takes initiative to influence my more sheep-like coworkers. I’m also rich enough to give away 20+ books.
Thanks for the chance to recommend it again - it's much more approachable than _Theory of Constraints_, and is more entertaining, while still conveying enough about his worldview to let you decide if you want the further precision and examples in his other books.
Objective meaning: I recommend The Goal.
Social meaning: I’m an expert judge of which Goldratt books to recommend to people, in what order, for what reasons. Although I’m so clever that I find The Goal a bit shallow, I think it’s good for other people who need to be kept entertained and it has enough serious content for them to get an introduction from. Then they can consider if they are up to the challenge of becoming wise like me, via further study, or not.
This is actually ridiculous. The Goal is the best known Goldratt book, it’s his best seller, it’s meant to be read first, and this is well known. Dagon is pretending to be providing expert judgment, but isn’t providing insight. And The Goal has tons of depth and content, and Dagon is slandering the book by condescending to it in this way. By bringing up Theory of Constraints, Dagon is signaling he reads and values less popular, less entertaining, less approachable non-novel Goldratt books.
It's important to recognize the limits of the chain metaphor - there is variance/uncertainty in the strength of a link (or capacity of a production step), and variance/uncertainty in alternate support for ideas (or alternate production paths).
Objective meaning (up to the dash): Goldratt’s chain idea, which is a major part of your post, is limited.
Social meaning (up to the dash): I’ve surpassed Goldratt and can look down on his stuff as limited. You’re a naive Goldratt newbie who is accepting whatever he says instead of going beyond Goldratt. Also, calling chains a “metaphor” instead of a “model” is a subtle attack meant to lower my status. Metaphors aren’t heavyweight rationality (while models are, and it actually is a model). And Dagon is implying that I failed to recognize limits that I should have recognized.
Objective meaning continued: There’s some sort of attempt at an argument here but it doesn’t actually make sense. Saying there is variance in two places is not a limitation of the chain model.
Social meaning continued: saying a bunch of overly wordy stuff that looks technical is bluffing and pretending he’s arguing seriously. Most people won’t know the difference.
Most real-world situations are more of a mesh or a circuit than a linear chain, and the analysis of bottlenecks and risks is a fun multidimensional calculation of forces applies and propagated through multiple links.
Objective meaning: Chains are wrong in most real world situations because those situations are meshes or circuits [both terms undefined]. No details are given about how he knows what’s common in real world situations. And he’s contradicting Goldratt, who actually did argue his case and knew math. (I also know more than enough math for the discussion so far, and Dagon never continued with enough substance to potentially strain either of our math skill sets.)
Social meaning: I have fun doing multidimensional calculations. I’m better than you. If you knew math so well that it’s a fun game to you, maybe you could keep up with me. But if you could do that, you wouldn’t have written the post you wrote.
It’s screwy how Dagon presents himself as a Goldratt superfan expert and then immediately attacks Goldratt’s ideas.
Note: Dagon stopped replying without explanation shortly after this, even though he’d said how super interested in Goldratt stuff he is.
Donald Hobson
I think that ideas can have a bottleneck effect, but that isn't the only effect. Some ideas have disjunctive justifications.
Objective meaning: bottlenecks come up sometimes but not always. [No arguments about how often they come up, how important they are, etc.]
Social meaning: You neglected disjunctions and didn’t see the whole picture. I often run into people who don’t know fancy concepts like “disjunction”.
Note: Disjunction just means “or” and isn’t something that Goldratt or I had failed to consider.
Hobson then follows up with some math, socially implying that the problem is I’m not technical enough and if only I knew some math I’d have reached different conclusions. He postures about how clever he is and brings up resistors and science as brags.
I responded, including with math, and then Hobson did not respond.
TAG
What does that even mean?
Objective meaning: I don’t understand what you wrote.
Social meaning: You’re not making sense.
He did give more info about what his question was after this. But he led with this, on purpose. The “even” is a social attack – that word isn’t there to help with any objective meaning. It’s there to socially communicate that I’m surprisingly incoherent. It’d be a subtle social attack even without the “even”. He didn’t respond when I answered his question.
abramdemski
There is another case which your argument neglects, which can make weakest-link reasoning highly inaccurate, and which is less of a special case than a tie in link-strength.
Objective meaning: The argument in the OP is incomplete.
Social meaning: You missed something huge, which is not a special case, so your reasoning is highly inaccurate.
The way you are reasoning about systems of interconnected ideas is conjunctive: every individual thing needs to be true.
Objective meaning: Chain links have an “and” relationship.
Social meaning: You lack a basic understanding of the stuff you just said, so I’ll have to start really basic to try to educate you.
But some things are disjunctive: some one thing needs to be true.
Objective meaning: “or” exists. [no statement yet about how this is relevant]
Social meaning: You’re wrong because you’re an ignorant novice.
(Of course there are even more exotic logical connectives, such as implication or XOR, which are also used in everyday reasoning. But for now it will do to consider only conjunction and disjunction.)
Objective meaning: Other logic operators exist [no statement yet about how this is relevant].
Social meaning: I know about things like XOR, but you’re a beginner who doesn’t. I’ll let you save face a little by calling it “exotic”, but actually, in the eyes of everyone knowledgeable here, I’m insulting you by suggesting that for you XOR is exotic.
Note: He’s wrong, I know what XOR is (let alone OR). So did Goldratt. XOR is actually easy for me, and I’ve used it a lot and done much more advanced things too. He assumed I didn’t in order to socially attack me. He didn’t have adequate evidence to reach the conclusion that he reached; but by reaching it and speaking condescendingly, he implied that there was adequate evidence to judge me as an ignorant fool.
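For reference, the objective content at stake here is simple math. On a standard probabilistic reading (LW’s preferred framing, not mine – this is just a sketch of the textbook bounds), conjunction and disjunction work like this:

```latex
% Conjunctions are at most as strong as their weakest element;
% disjunctions are at least as strong as their strongest element.
P(A_1 \land \cdots \land A_n) \le \min_i P(A_i)
\qquad
P(A_1 \lor \cdots \lor A_n) \ge \max_i P(A_i)

% Under the (strong) extra assumption of independence, these tighten to:
P(A_1 \land \cdots \land A_n) = \prod_{i=1}^{n} P(A_i)
\qquad
P(A_1 \lor \cdots \lor A_n) = 1 - \prod_{i=1}^{n} \bigl(1 - P(A_i)\bigr)
```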
Perhaps the excess accuracy in probability theory makes it more powerful than necessary to do its job? Perhaps this helps it deal with variance? Perhaps it helps the idea apply for other jobs than the one it was meant for?
Objective meaning: Bringing up possibilities he thinks are worth considering.
Social meaning: Flaming me with some rather thin plausible deniability.
I skipped the middle of his post btw, which had other bad stuff.
johnswentworth
I really like what this post is trying to do. The idea is a valuable one. But this explanation could use some work - not just because inferential distances are large, but because the presentation itself is too abstract to clearly communicate the intended point. In particular, I'd strongly recommend walking through at least 2-3 concrete examples of bottlenecks in ideas.
This is an apparently friendly reply but he was lying. I wrote examples but he wouldn’t speak again.
There are hints in this text that he actually dislikes me and is being condescending, and that the praise in the first two sentences is fake. You can see some condescension in the post, e.g. in how he sets himself up like a mentor telling me what to do (and note the unnecessary “strongly” before “recommend”). And how does he know the idea is valuable when it’s not clearly communicated? And his denial re inferential distance is actually both unreasonable and aggressive. The “too abstract” and “could use some work” are also social attacks, and the “at least 2-3” is a social attack (it means do a lot) with a confused objective meaning (if you’re saying do >= X, why specify X as a range? you only need one number).
The objective world meaning is roughly that he’s helping with some presentation and communication issues and wants a discussion of the great ideas. But it turns out, as we see from his following behavior, that wasn’t true. (Probably. Maybe he didn’t follow up for some other reason like he died of COVID. Well not that because you can check his posting history and see he’s still posting in other topics. But maybe he has Alzheimer’s and he forgot, and he knows that’s a risk so he keeps notes about stuff he wants to follow up on, but he had an iCloud syncing error and the note got deleted without him realizing it. There are other stories that I don’t have enough information to rule out, but I do have broad societal information about them being uncommon, and there are patterns across the behavior of many people.)
MakoYass
I posted in comments on different Less Wrong thread:
curi:
Are you interested in extended discussion about this, with a goal of reaching some conclusions about CR/LW differences, or do you know anyone who is?
MakoYass:
I am evidently interested in discussing it, but I am probably not the best person for it.
Objective meaning: I am interested. My answer to your question is “yes”. I have agreed to try to have a discussion, if you want to. However, be warned that I’m not very good at this.
Social meaning: The answer to your question is “no”. I won’t discuss with you. However, I’m not OK with being declared uninterested in this topic. I love this topic. How dare you even question my interest when you have evidence (“evidently”) that I am interested, which consists of me having posted about it. I’d have been dumb to post about something I’m not interested in, and you were an asshole to suggest I might be dumb like that.
Actual result: I replied in a friendly, accessible way attempting to begin a conversation, but he did not respond.
Concluding Thoughts
Conversations don’t go well when a substantial portion of what people say has a hostile (or even just significantly different) social (double) meaning.
It’s much worse when the social meaning is the primary thing people are talking about, as in all the LW replies I got above. It’s hard to get discussions where the objective meanings are more emphasized than the social ones. And all the replies I quoted re my Chains and Bottlenecks post were top level replies to my impersonal article. I hadn’t said anything to personally offend any of those people, but they all responded with social nastiness. (Those were all the top level replies. There were no decent ones.) Also it was my first post back after 3 years, so this wasn’t carrying over from prior discussion (afaik – possibly some of them were around years ago and remembered me. I know some people do remember me, but those people mentioned it. Actually TAG said later, elsewhere, to someone else, that he knew about me from being on unspecified Critical Rationalist forums in the past).
Even if you’re aware of social meanings, there are important objective meanings which are quite hard to say without getting offensive social meaning. This comes up with talking about errors people make, especially ones that reveal significant weaknesses in their knowledge. Talking objectively about methodology errors and what to do about them can also be highly offensive socially. Also objective, argued judgments of how good things are can be socially offensive, even if correct (actually it’s often worse if it’s correct and high quality – the harder to plausibly argue back, the worse it can be for the guy who’s wrong).
The main point was to give examples of how the same sentence can be read with an objective and a social meaning. This is what most discussions look like to me on rationalist forums, where explicit knowledge of social status hierarchies is common. It comes up a fair amount on my own forums too (less often than at LW, but it’s a pretty big problem IMO).
Note: The examples in this post are not representative of the full spectrum of social behaviors. One of the many things missing is needy/chasing/reactive behavior where people signal their own low social status (low relative to the person they’re trying to please). Also, I could go into more detail on any particular example people want to discuss (this post isn’t meant as giving all the info/analysis, it’s a hybrid between some summary and some detail).
Update: Adding (on same day as original) a few things I forgot to say.
Audiences pick up on some of the social meanings (which ones, and how they see them, varies by person). They see you answer and not answer things. They think some should be answered and some are ignorable. They take some things as social answers that aren’t intended to be. They sometimes ignore literal/objective meanings of things. They judge. It affects audience reactions. And the perception of audience reactions affects what the actual participants do and say (including when they stop talking without explanation).
The people quoted could do less social meaning. They’re all amplifying the social. There’s some design there; it’s not an accident. It’s not that hard to be less social. But even if you try, it’s very hard to avoid any problematic social meanings, especially when you consider that different audience members will read stuff differently, according to different background knowledge, different assumptions about context, different misreadings and skipped words, etc.
Messages (14)
I haven't decided whether to post this on LW. I think it's important, relevant stuff. But I think most people there would interpret it as a social attack and I'd get a bunch of downvotes and some hostile comments. Anyone who actually wants my stuff could find it here so maybe I shouldn't share it where it's unwanted. Though they don't admit that content of this nature is unwanted when they explain what sort of website LW is, so that's problematic, but I don't really want to post this for the purpose of further revealing their forum as not being what it claims to be. If anyone else wants more evidence along those lines they can post it to LW as a link post or along with some comments.
#17658 I think you should post it. What you call analyzing what they say "socially" looks very subjective to me. Maybe they can criticize your subjective interpretations. I suspect many will be unhappy with your um...less than charitable interpretations.
If you think it is important and relevant stuff, you should post it and not be worried about what they think of you.
I think it is the only way you can get better at interpretation. Comparing what you interpret with reality is important. You'll never get perfect alignment between the state of affairs and your interpretations, but I am sure you can improve substantially.
You should socially analyze this response too, I am curious what you come up with.
> Comparing what you interpret with reality is important.
Their explicit feedback on the meta analysis does not constitute the "reality" of what the original quotes meant. It's not a way to compare with reality.
> less than charitable interpretations.
Do you have a criticism of any particular claim?
Post Updated
I added a couple paragraphs at the end of the post.
https://twitter.com/ESYudkowsky/status/1299847233840373760
More Yudkowsky vs. Paths Forward.
#17663 A response tweet is an example of social lying:
He "only" blocks people in two vague, broad scenarios, which include in cases where he misunderstood something and allowed no opportunity for his error to be fixed. This constitutes him "always" being interested in other povs, even though he gave two exclusions (so just literally not always) and even though people have different viewpoints on what constitutes fighting. He's virtue signaling about how open minded and reasonable he is while not paying attention to what his words mean (he's just thinking about how he expects people with similar social viewpoints to interpret him).
Correcting My Error
I misinterpreted johnswentworth's reply initially (when it was written a few weeks ago). I underestimated how blatantly people will lie, so I took it as positive instead of negative, and I expected him to speak to me again (he never did). His message again with emphasis added:
> *I really like what this post is trying to do. The idea is a valuable one.* But this explanation could use some work - not just because inferential distances are large, but because the presentation itself is too abstract to clearly communicate the intended point. In particular, I'd strongly recommend walking through at least 2-3 concrete examples of bottlenecks in ideas.
I knew it had several red flags but I thought the italic part was so strongly positive that he wouldn't say that if he thought my post wasn't any good. I was wrong. I think he believes it's socially permitted to heavily exaggerate praise to soften the blow of criticism when in a mentoring/teacher role (and, more generally, that teachers can lie to encourage students). I think that's what enabled him to lie more flagrantly than people usually do. I think he expected people in the know to recognize that he was condescending to a beginner who is so much of a newbie that he doesn't even know he needs to give appropriate examples.
There were other interpretation issues but I think that's the main one and I wouldn't have been tricked otherwise. I was inadequately suspicious of the possibility of really overt lying. (And it wasn't brown nosing or anger or some other standard categories of overt lying that I'm more familiar with detecting.)
---
one could claim he was interested and read my followups and then just didn’t feel the need to write a reply. but if the idea is so valuable and he likes it so much, then basically he should either care to say something about the additional info he asked for, or, if the new info clarified the post was worse than he thought, say something about what's wrong with it and why he changed his mind. he doesn't like ideas this much, then change his mind when he gets a few examples, every day – does he? if he's giving out praise like this regularly (so it's no big deal to him and compatible with just going silent and not caring) then he's a chronic liar, and that requires an explanation.
alternatively he may have just opened his notifications, had everything get marked read (it’s a bad system), and lost track of it. is he incompetent enough to lose track of a "valuable" idea that he fully honestly "really like[s]", and of the direct reply providing the followup info he personally asked for? could be, but not the best guess IMO.
so i think my updated explanation is the best.
#17677 I reviewed some of his recent comment history (LW UI is very bad for reviewing comment history – basically just a list of recent things with a load more button at the bottom). He has 880 total comments. I think he knows how to use notifications and can reply when he wants to.
His (recent) comments are full of status signals and avoid much back and forth discussion. He doesn't go deep on stuff but pretends to. He's pretty careless with giving out overly strong praise to other people too. There are some chronic liar/suckup/flatterer elements to his messages. I maintain my position that condescending to a beginner and/or softening criticism helped enable him to lie to me more than I expected. The chronic social lying makes it easier to tell stronger lies when you also have an excuse.
i skimmed this. (this is a preface so that ppl know i didnt read everything before commenting, cuz that might affect my message)
it seems scary that the replies had so much social meaning compared to objective meaning
i wasnt able to look thru the social meaning well, it seems bad that i couldnt see through it well, and that makes me think i could be duped by social messages that try to make me think they are like discussing or doing good things or something, when they actually have social meaning that i missed.
the replies to you seemed like they were treating you as lower social status than themselves.
i think there can be comments where people mention how low social status they are compared to someone else.
i think the reason to treat yourself as low social status/dumb/not knowing much, would be so that: if you are wrong, it wouldnt be as big of a deal
if you make like a really big claim, and you're really excited about it, and you think its like really cool and awesome, but then a high social status person says you were wrong, that might hurt your feelings and social status a lot
so if you preface that you're dumb/low social status, it doesnt have as big of an effect if a high social status person says you were wrong.
#17692 You can share stuff you aren't sure about and ask if there's social stuff in it.
This link no longer works in the stucchio Mason retweet:
https://twitter.com/_natalieescobar/status/1299018604327907328
For context, it was in reference to this NPR book review of "In Defense of Looting":
https://www.npr.org/sections/codeswitch/2020/08/27/906642178/one-authors-argument-in-defense-of-looting
The specific passage Mason was referring to is italicized below:
> **Can you talk about rioting as a tactic? What are the reasons people deploy it as a strategy?**
>
> It does a number of important things. It gets people what they need for free immediately, which means that they are capable of living and reproducing their lives without having to rely on jobs or a wage — which, during COVID times, is widely unreliable or, particularly in these communities is often not available, or it comes at great risk. That's looting's most basic tactical power as a political mode of action.
>
> *It also attacks the very way in which food and things are distributed. It attacks the idea of property, and it attacks the idea that in order for someone to have a roof over their head or have a meal ticket, they have to work for a boss, in order to buy things that people just like them somewhere else in the world had to make under the same conditions. It points to the way in which that's unjust. And the reason that the world is organized that way, obviously, is for the profit of the people who own the stores and the factories. So you get to the heart of that property relation, and demonstrate that without police and without state oppression, we can have things for free.*
>
> Importantly, I think especially when it's in the context of a Black uprising like the one we're living through now, it also attacks the history of whiteness and white supremacy. The very basis of property in the U.S. is derived through whiteness and through Black oppression, through the history of slavery and settler domination of the country. Looting strikes at the heart of property, of whiteness and of the police. It gets to the very root of the way those three things are interconnected. And also it provides people with an imaginative sense of freedom and pleasure and helps them imagine a world that could be. And I think that's a part of it that doesn't really get talked about — that riots and looting are experienced as sort of joyous and liberatory.
without reading curi's analysis here (we talked about it in tutoring max 34), here's my analysis of 3 replies to curi's LW post "Chains, Bottlenecks and Optimization". I didn't do a second pass on stuff I felt like I covered reasonably well the first time.
-------
## Donald Hobson
Donald Hobson's reply
I didn't spot any explicit social stuff on the first pass looking for particular words/phrases that are red flags.
### Second pass and comparing to list of questions about social dynamics:
DH is putting in some effort, though he's making general comments rather than responding to curi particularly. He's not asking Qs to try and understand more. He's expecting a reaction, like "tell me why I'm wrong" rather than "I think there's a problem with your idea".
There's no direct response to what curi wrote, but there could have been. e.g. DH could have quoted "How strong is a chain? How hard can you pull on it before it breaks? It’s as strong as its weakest link." or something, that would have been relevant.
DH also contradicts curi without arguing against curi's case, just stating an alternative. e.g. curi said:
> If you measure the strength of every link in the chain, and try to combine them into an overall strength score for the chain, you will get a bad answer.
And DH replied:
> On the other hand, Cojunctive arguments are weaker than the weakest link. If I have an argument that has 100 steps, and relies on all the steps holding for the argument to hold, then even if each step is 99% certain, the whole argument is only 37% certain.
It sounds like he's contradicting curi but he doesn't say so or say what the contradiction is. This means curi needs to reply questioning DH, instead of DH questioning curi - it's a tactic that shifts DH into a higher social position b/c now curi needs to do the work to figure out what DH meant or point out why DH is wrong (making curi expend more effort).
DH is also not explaining himself fully, giving references, or explaining why curi is wrong. So this is DH giving hints and expecting curi to figure things out.
DH also doesn't look needy, rather he looks calm and somewhat casual. He's a bit loose with some technical references (indicating LoLE and signalling expertise).
DH is re-framing the issue to be "refute DH" rather than "refute curi" which is again a social status play.
### Donald Hobson's full reply
> I think that ideas can have a bottleneck effect, but that isn't the only effect. Some ideas have disjunctive justifications. You might have genetic evidence, and embryological evidence, and studies of bacteria evolving in test tubes, and a fossil record, all pointing towards the theory of evolution. In this case, the chains are all in parallel, so you can add their strength. Even if all the fossil record is completely discounted, you would still have good genetic evidence, and visa versa. If, conditional on the strongest line of evidence being totally discounted, you would still expect the other lines might hold, then the combination is stronger than the strongest.
>
> If I have 10 arguments that X is true, and each argument is 50/50 (I assign equal probability to the argument being totally valid, and totally useless) And if these arguments are totally independant in their validity, then P(not X) < 1/1000. Note that total independance means that even when told with certainty that arguments 0 ... 8 are nonsense, you still assign 50% prob to argument 9 being valid. This is a strong condition.
>
> Disjunctive arguments are stronger than the strongest link.
>
> On the other hand, Cojunctive arguments are weaker than the weakest link. If I have an argument that has 100 steps, and relies on all the steps holding for the argument to hold, then even if each step is 99% certain, the whole argument is only 37% certain.
>
> For those of you looking for a direct numeric analogue to resistors, there isn't one. They have similar structure, resistors with x+y and 1/x, probabilities with x\*y and 1-x, but there isn't a (nice) function from one to another. There isn't f st f(x+y)=f(x)\*f(y) and f(1/x)=1-f(x)
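(Aside: the arithmetic in that reply does check out, given the independence assumptions DH stipulates. spelling it out:)

```latex
% 100-step conjunctive argument, each step 99% certain:
0.99^{100} \approx 0.366 \quad \text{i.e. ``only 37\% certain''}

% 10 independent 50/50 arguments for X; X fails only if all 10 fail:
P(\lnot X) = (1/2)^{10} = 1/1024 < 1/1000
```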
## Dagon
Dagon's reply
curi and I discussed this a bit in t34. I'll mention everything I see tho, but I'm not going to put in as much effort overall as I would on fresh examples.
### first pass - low level
Dagon starts with some statements about himself / self-aggrandisement. There's a weird apology. He then contradicts the first paragraph somewhat.
Dagon then has a few short statements which challenge curi's post but only semi-directly. He doesn't quote anything, and there's some reframing going on. He states some things like they're fact which signals expertise and LoLE.
### second pass
Dagon's first paragraph is setting a social scene (that he's high status and has expertise). he knows lots about goldratt because he's given away "the goal" and wants to keep recommending it or whatever.
Dagon is pointing things out in a teacher-esque sort of way, e.g.
> It's important to recognize the limits of the chain metaphor - there is variance/uncertainty in the strength of a link (or capacity of a production step), and variance/uncertainty in alternate support for ideas (or alternate production paths).
He's communicating that he's an expert and that he's pointing out something curi has missed. This is reframing, and potentially outright dishonest if he read the whole post, and negligently dishonest if he didn't (b/c he's acting like he did).
He's also communicating that he *knows more than Goldratt*, that he's pointing something out that Goldratt got wrong. Which raises the questions: "why does he recommend it? are there not better things to recommend?"
He's expecting curi to respond with more detail or something and on the whole it's a low-effort response (both LoLE and actually LE). The conversation is now about his idea.
(This reframing to talk about your own ideas feels like it's going to be pretty common).
### Dagon's full reply
> I think I've given away over 20 copies of _The Goal_ by Goldratt, and recommended it to coworkers hundreds of times. Thanks for the chance to recommend it again - it's much more approachable than _Theory of Constraints_, and is more entertaining, while still conveying enough about his worldview to let you decide if you want the further precision and examples in his other books.
>
> It's important to recognize the limits of the chain metaphor - there is variance/uncertainty in the strength of a link (or capacity of a production step), and variance/uncertainty in alternate support for ideas (or alternate production paths). Most real-world situations are more of a mesh or a circuit than a linear chain, and the analysis of bottlenecks and risks is a fun multidimensional calculation of forces applies and propagated through multiple links.
## abramdemski
abramdemski's reply
### first pass
immediately: this is a longer reply (the longest I think) and has quotes.
He quotes curi and then acts like he's going to point out a problem (will he?):
>>There are special cases. Maybe the links are all equally strong to high precision. But that’s unusual. Variance (statistical fluctuations) is usual. Perhaps there is a bell curve of link strengths. Having two links approximately tied for weakest is more realistic, though still uncommon.
>
>There is another case which your argument neglects, which can make weakest-link reasoning highly inaccurate, and which is less of a special case than a tie in link-strength.
(note: that's the end of abramdemski's paragraph - it's just that one sentence)
he points out the conjunctive/disjunctive thing. There's a bit of condescension when he says stuff like:
> (Of course there are even more exotic logical connectives, such as implication or XOR, which are also used in everyday reasoning. But for now it will do to consider only conjunction and disjunction.)
it's condescending because it's basic (like wow, implication and XOR, such advanced concepts). this is reframing to a degree, treating curi like he doesn't know what he's talking about (relative status elevation).
> Following the venerated method of multiple working hypotheses, then, we are well-advised to come up with as many hypotheses as we can to explain the data. [...]
he's appealing to the status of some idea and implying that curi is dumb for contradicting it, while associating himself. he's reframing because he's not asking if there's like a contradiction or something, he's just stating it as fact. it's now the responsibility of curi to refute him and the method of multiple working hypotheses (or point out why there's not a contradiction).
he also says we're "well-advised" which is adding extra judgement (implying his judgement is worth something extra) to the idea. this is strictly unnecessary, he could just say "we are advised".
He's also talking in "we"s, which reframes the conversation to 'curi vs the mob' or similar.
abramdemski continues to quote curi, but then offers a false concession/agreement with:
> I can agree that a weighted average is not what is called for in a conjunctive case. It is closer to correct in a disjunctive case.
That's not really substantial; he's not saying curi's right, just that curi's not super wrong, that *some* part is good. he then sort of contradicts himself with the second sentence, but both sentences are similar in the way they treat curi/curi's idea. both are more signalling of expertise and are designed to make abramdemski look better (like more magnanimous or generous or something), while making curi look like he's missed something and is sorta 'on the right track'.
abramdemski says:
> But let us focus on your claim that we should ignore non-bottlenecks. You already praised ideas for having excess capacity, IE, being more accurate than needed for a given application. But now you are criticizing a practice of weighing evidence too accurately.
he puts the word "praised" in curi's mouth. he's also misrepresented curi with "more accurate than needed", which directly contradicts curi's statement: "Excess capacity means the idea is more than adequate to accomplish its purpose." I doubt abramdemski realised he did this - it looks like a mistake ppl who hold his views would make b/c they're interpreting what curi said through their own lense (which ignores curi's statements: "I expect large inferential distance. I don’t expect my intended meaning to be transparent to readers here")
this is designed to reframe the conversation to be about what abramdemski is saying.
like a lot of the comments, abramdemski brings up probability even tho curi didn't mention it.
abramdemski then quotes curi saying "Excess capacity means..." which indicates he is either ignoring the contradiction mentioned above or didn't understand it. he pretends (either intentionally or b/c he's fooling himself) that he understood it, though.
he then finishes with some questions that are either using terms abramdemski introduced, or already answered by curi in the post. he's expecting curi to put more effort in to understand why curi is wrong and implicitly demanding a reframing of the topic to be about what he thinks it is, not what curi posted. He's also implicitly demanding curi reply to those Qs particularly, even tho none of the 3 are good questions.
### abramdemski's full reply
>> There are special cases. Maybe the links are all equally strong to high precision. But that’s unusual. Variance (statistical fluctuations) is usual. Perhaps there is a bell curve of link strengths. Having two links approximately tied for weakest is more realistic, though still uncommon.
>
>There is another case which your argument neglects, which can make weakest-link reasoning highly inaccurate, and which is less of a special case than a tie in link-strength.
>
> The way you are reasoning about systems of interconnected ideas is conjunctive: every individual thing needs to be true. But some things are disjunctive: some one thing needs to be true. (Of course there are even more exotic logical connectives, such as implication or XOR, which are also used in everyday reasoning. But for now it will do to consider only conjunction and disjunction.)
>
> A conjunction of a number of statements is -- at most -- as strong as its weakest element, as you suggest. However, a disjunction of a number of statements is -- at worst -- as strong as its strongest element.
>
> Following the venerated method of multiple working hypotheses, then, we are well-advised to come up with as many hypotheses as we can to explain the data. Doing this has many advantages, but one worth mentioning is that it increases the probability that one of our ideas is right.
>
> Since we don't want to just believe everything, we then have to somehow weigh the different hypotheses against each other. Our method of weighing hypotheses should ideally have the property that, if the correct hypothesis is in our collection, we would eventually converge to it. (Having established that, we might ask to converge as quickly as possible; and we could also name many other desirable properties of such a weighting scheme.)
>
> Concerning such weighting schemes, you comment:
>
>> Don’t update your beliefs using evidence, increasing your confidence in some ideas, when the evidence deals with non-bottlenecks. In other words, don’t add more plausibility to an idea when you improve a sub-component that already had excess capacity. Don’t evaluate the quality of all the components of an idea and combine them into a weighted average which comes out higher when there’s more excess capacity for non-bottlenecks.
>
> I can agree that a weighted average is not what is called for in a conjunctive case. It is closer to correct in a disjunctive case. But let us focus on your claim that we should ignore non-bottlenecks. You already praised ideas for having excess capacity, IE, being more accurate than needed for a given application. But now you are criticizing a practice of weighing evidence too accurately.
>
> I previously agreed that a conjunction has strength at most that of its weakest element. But a probabilistic account gives us more accuracy than that, allowing us to compute a more precise strength (if we have need of it). And similarly, a disjunction has strength at least that of its weakest element -- but a probabilistic account gives us more accuracy than that, if we need it. Quoting you:
>
>> Excess capacity means the idea is more than adequate to accomplish its purpose. The idea is more powerful than necessary to do its job. This lets it deal with variance, and may help with using the idea for other jobs.
>
> Perhaps the excess accuracy in probability theory makes it more powerful than necessary to do its job? Perhaps this helps it deal with variance? Perhaps it helps the idea apply for other jobs than the one it was meant for?
the above is also posted to my site here: https://xertrov.github.io/fi/posts/2020-09-02-analysing-lw-replies-for-social-dynamics/
Discussing this post with Max on stream now (and did some of this stuff on the previous stream):
https://youtu.be/geuRduhbnmU