Criticizing "Against the singularity hypothesis"

I proposed a game on the Effective Altruism forum where people submit texts to me and I find three errors. I wrote this for that game. This is a duplicate of my EA post. I criticize Against the singularity hypothesis by David Thorstad.

Introduction

FYI, I disagree with the singularity hypothesis, but primarily due to epistemology, which isn't even discussed in this article.

Error One

As low-hanging fruit is plucked, good ideas become harder to find (Bloom et al. 2020; Kortum 1997; Gordon 2016). Research productivity, understood as the amount of research input needed to produce a fixed output, falls with each subsequent discovery.

By way of illustration, the number of FDA-approved drugs per billion dollars of inflation-adjusted research expenditure decreased from over forty drugs per billion in the 1950s to less than one drug per billion in the 2000s (Scannell et al. 2012). And in the twenty years from 1971 to 1991, inflation-adjusted agricultural research expenditures in developed nations rose by over sixty percent, yet growth in crop yields per acre dropped by fifteen percent (Alston et al. 2000). The problem was not that researchers became lazy, poorly educated or overpaid. It was rather that good ideas became harder to find.

There are many other reasons for drug research progress to slow down. The healthcare industry, like science in general (see e.g. the replication crisis), is really broken, and some of the problems are newer. Also, maybe researchers are putting a bunch of work into updating existing drugs instead of developing new ones.

Similarly, decreasing crop yield growth (in other words, yields are still increasing, but by smaller percentages) could have many other causes. Also, decreasing yield growth is a different thing than a decrease in the number of new agricultural ideas that researchers come up with – it's not even the right quantity to measure to make his point. It's a proxy for the actual thing his argument relies on, he makes no attempt to consider how good or bad a proxy it is, and I can easily think of some reasons it wouldn't be a very good one.

The comment about researchers not becoming lazy, poorly educated or overpaid is an unargued assertion.

So these are bad arguments which shouldn't convince us of the author's conclusion.

Error Two

Could the problem of improving artificial agents be an exception to the rule of diminishing research productivity? That is unlikely.

Asserting something is unlikely isn't an argument. His followup is to bring up Moore's law potentially ending, not to give an actual argument.

As with the drug and agricultural research, his points are bad because singularity claims are not based on extrapolating patterns from current data, but rather on conceptual reasoning. He didn't even claim, in the section formulating his opponents' position, that they were extrapolating from data. And my pre-existing understanding of their views is that they use conceptual arguments, not extrapolation from existing data/patterns (there is no existing data about AGI to extrapolate from, so they use speculative arguments, which is OK).

Error Three

one cause of diminishing research productivity is the difficulty of maintaining large knowledge stocks (Jones 2009), a problem at which artificial agents excel.

You can't just assume that AGIs will be anything like current software including "AI" software like AlphaGo. You have to consider what an AGI would be like before you can even know if it'd be especially good at this or not. If the goal with AGI is in some sense to make a machine with human-like thinking, then maybe it will end up with some of the weaknesses of humans too. You can't just assume it won't. You have to envision what an AGI would be like, or what many different things it might be like that would work (narrow it down to various categories and rule some things out) before you consider the traits it'd have.

Put another way, in MIRI's conception, wouldn't mind design space include both AGIs that are good or bad at this particular category of task?

Error Four

It is an unalterable mathematical fact that an algorithm can run no more quickly than its slowest component. If nine-tenths of the component processes can be sped up, but the remaining processes cannot, then the algorithm can only be made ten times faster. This creates the opportunity for bottlenecks unless every single process can be sped up at once.

This is wrong due to "at once" at the end. It'd be fine without that. You could speed up 9 out of 10 parts, then speed up the 10th part a minute later. You don't have to speed everything up at once. I know it's just two extra words but it doesn't make sense when you stop and think about it, so I think it's important. How did it seem to make sense to the author? What was he thinking? What process created this error? This is the kind of error that's good to post-mortem. (It doesn't look like any sort of typo; I think it's actually based on some sort of thought process about the topic.)
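For concreteness, here's a minimal sketch (mine, in Python; not from the paper) of the speedup bound being discussed. It shows the factor-of-ten cap when nine-tenths of the work is sped up, and that the cap disappears once the last part is improved too, whenever that happens:

```python
# Speedup bound (Amdahl's-law-style): if a fraction p of the work is sped
# up by factor s and the rest is unchanged, the overall speedup is:
#   1 / ((1 - p) + p / s)

def overall_speedup(p: float, s: float) -> float:
    return 1.0 / ((1.0 - p) + p / s)

# Nine-tenths of the work sped up enormously still caps the total near 10x:
print(overall_speedup(0.9, 1e9))  # ~10.0

# But nothing requires speeding everything up "at once": improve the last
# tenth later (here by 100x) and the cap moves accordingly.
print(overall_speedup(1.0, 100))  # 100.0
```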

Error Five

Section 3.2 doesn't even try to consider any specific type of research an AGI would be doing and claim that good ideas would get harder to find for that and thereby slow down singularity-relevant progress.

Similarly, section 3.3 doesn't try to propose a specific bottleneck and explain how it'd get in the way of the singularity. He does bring up one specific type of algorithm – search – but doesn't say why search speed would be a constraint on reaching the singularity. Whether exponential search speed progress is needed depends on specific models of how the hardware and/or software are improving and what they're doing.

There's also a general lack of acknowledgement of, or engagement with, counter-arguments that I can easily imagine pro-singularity people making (e.g. responding to the good ideas getting harder to find point by saying some stuff about mind design space containing plenty of minds that are powerful enough for a singularity with a discontinuity, even if progress slows down later as it approaches some fundamental limits). Similarly, maybe there is something super powerful in mind design space that doesn't rely on super fast search. Whether there is, or not, seems hard to analyze, but this paper doesn't even try. (The way I'd approach it myself is indirectly via epistemology first.)

Error Six

Section 2 mixes Formulating the singularity hypothesis (the section title) with other activities. This is confusing and biasing, because we don't get to read about what the singularity hypothesis is without the author's objections and dislikes mixed in. The section is also vague on some key points (mentioned in my screen recording) such as what an order of magnitude of intelligence is.

Examples:

Sustained exponential growth is a very strong growth assumption

Here he's mixing explaining the other side's view with setting it up to attack it (as requiring a super high evidential burden due to such strong claims). He's not talking from the other side's perspective, trying to present it how they would present it (positively); he's instead focusing on highlighting traits he dislikes.

A number of commentators have raised doubts about the cogency of the concept of general intelligence (Nunn 2012; Prinz 2012), or the likelihood of artificial systems acquiring meaningful levels of general intelligence (Dreyfus 2012; Lucas 1964; Plotnitsky 2012). I have some sympathy for these worries.[4]

This isn't formulating the singularity hypothesis. It's about ways of opposing it.

These are strong claims, and they should require a correspondingly strong argument to ground them. In Section 3, I give five reasons to be skeptical of the singularity hypothesis’ growth claims.

Again this doesn't fit the section it's in.

Padding

Section 3 opens with some restatements of material from section 2, some of which was also in the introduction. And look at this repetitiveness (bolding mine):

Near the bottom of page 7 begins section 3.2:

3.2 Good ideas become harder to find

Below that we read:

As low-hanging fruit is plucked, good ideas become harder to find

Page 8 near the top:

It was rather that good ideas became harder to find.

Later in that paragraph:

As good ideas became harder to find

Also, page 11:

as time goes on ideas for further improvement will become harder to find.

Page 17

As time goes on ideas for further improvement will become harder to find.

Amount Read

I read to the end of section 3.3 then briefly skimmed the rest.

Screen Recording

I recorded my screen and made verbal comments while writing this:

https://www.youtube.com/watch?v=T1Wu-086frA


Update: Thorstad replied and I wrote a followup post in response: Credentialed Intellectuals Support Misquoting



Critiquing an Axiology Article about Repugnant Conclusions

I proposed a game on the Effective Altruism forum where people submit texts to me and I find three errors. I wrote this for that game. This is a duplicate of my EA post. I criticize Minimalist extended very repugnant conclusions are the least repugnant by Teo Ajantaival.

Error One

Archimedean views (“Quantity can always substitute for quality”)

Let us look at comparable XVRCs for Archimedean views. (Archimedean views roughly say that “quantity can always substitute for quality”, such that, for example, a sufficient number of minor pains can always be added up to be worse than a single instance of extreme pain.)

It's ambiguous/confusing whether by "quality" you mean different quantity sizes, as in your example (substitution between small pains and a big pain), or you actually mean qualitatively different things (e.g. substitution between pain and the thrill of skydiving).

Is the claim that 3 1lb steaks can always substitute for 1 3lb steak, or that 3 1lb pork chops can always substitute for 1 ~3lb steak? (Maybe more or less if pork is valued less or more than steak.)

The point appears to be about whether multiple things can be added together for a total value or not – can a ton of small wins ever make up for a big win? In that case, don't use the word "quality" to refer to a big win, because it invokes concepts like a qualitative difference rather than a quantitative difference.

I thought it was probably about whether a group of small things could substitute for a bigger thing but then later I read:

Lexical views deny that “quantity can always substitute for quality”; instead, they assign categorical priority to some qualities relative to others.

This seems to be about qualitative differences: some types/kinds/categories have priority over others. Pork is not the same thing as steak. Maybe steak has priority and having no steak can't be made up for with a million pork chops. This is a different issue. Whether qualitative differences exist and matter and are strict is one issue, and whether many small quantities can add together to equal a large quantity is a separate issue (though the issues are related in some ways). So I think there's some confusion or lack of clarity about this.

I didn't read linked material to try to clarify matters, except to notice that this linked paper abstract doesn't use the word "quality". I think, for this issue, the article should stand on its own OK rather than rely on supplemental literature to clarify this.

Actually, I looked again while editing, and I've now noticed that in the full paper (as linked to and hosted by PhilPapers, the same site as before), the abstract text is totally different and does use the word "quality". What is going on!? PhilPapers is broken? Also this paper, despite using the word "quality" in the abstract once (and twice in the references), does not use that word in the body, so I guess it doesn't clarify the ambiguity I was bringing up, at least not directly.

Error Two

This is a strong point in favor of minimalist views over offsetting views in population axiology, regardless of one’s theory of aggregation.

I suspect you're using an offsetting view in epistemology when making this statement concluding against offsetting views in axiology. My guess is you don't know you're doing this or see the connection between the issues.

I take a "strong point in favor" to refer to the following basic model:

We have a bunch of ideas to evaluate, compare, choose between, etc.

Each idea has points in favor and points against.

We weight and sum the points for each idea.

We look at which idea has the highest overall score and favor that.

This is an offsetting model where points in favor of an idea can offset points against that same idea. Also, in some sense, points in favor of an idea offset points in favor of rival ideas.
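As a minimal sketch (my illustration, not from the article), that model looks something like this:

```python
# Weighted-sum ("offsetting") idea evaluation, as described above.
# Each point is (weight, value): positive values favor the idea,
# negative values count against it, and they offset by summing.

def score(points: list[tuple[float, float]]) -> float:
    return sum(weight * value for weight, value in points)

idea_a = [(1.0, 5.0), (2.0, -1.0)]  # a strong pro offsetting a weighted con
idea_b = [(1.0, 2.0), (1.0, 2.0)]   # two moderate pros, no cons

# Favor whichever idea has the highest net score. Note that criticisms
# never need to be answered here, only outweighed.
best = max([idea_a, idea_b], key=score)
```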

I think offsetting views are wrong, in both epistemology and axiology, and there's overlap in the reasons for why they're wrong, so it's problematic (though not necessarily wrong) to favor them in one field while rejecting them in another field.

Error Three

The article jumps into details without enough framing about why this matters. This is understandable for a part 4, but on the other hand you chose to link me to this rather than to part 1 and you wrote:

Every part of this series builds on the previous parts, but can also be read independently.

Since the article is supposed to be readable independently, then the article should have explained why this matters in order to work well independently.

A related issue is I think the article is mostly discussing details in a specific subfield that is confused and doesn't particularly matter – the field's premises should be challenged instead.

And another related issue is the lack of any consideration of win/win approaches, discussion of whether there are inherent conflicts of interest between rational people, etc. A lot of the article topics are related to political philosophy issues (like classical liberalism's social harmony vs. Marxism's class warfare) that have already been debated a bunch, and it'd make sense to connect claims and viewpoints to that existing knowledge. I think imagining societies with different agents with different amounts of utility or suffering, fully out of context of imagining any particular type of society, or design or organization or guiding principles of society, is not very productive or meaningful, so it's no wonder the field has gotten bogged down in abstract concerns like the very repugnant conclusion stuff with no sign of any actually useful conclusions coming up.

This is not the sort of error I primarily wanted to point out. However, the article does a lot of literature summarizing instead of making its own claims. So I noticed some errors in the summarized ideas, but that's different than errors in the article itself. To point out errors in the article itself, when it's summarizing other ideas, I'd have to point out that it has inaccurately summarized them. That requires reading the cites and comparing them to the summaries, which I don't think would be especially useful/valuable to do. Sometimes people summarize stuff they agree with, so criticizing the content works OK. But here a lot of it was summarizing stuff the author and I both disagree with, in order to criticize it, which doesn't provide many potential targets for criticism. So that's why I went ahead and made some more indirect criticism (and included more than one point) for the third error.

But I'd suggest that @Teo Ajantaival watch my screen recording (below), which has a bunch of commentary and feedback on the article. I expect some of it will be useful and some of the criticisms I make will be relevant to him. He could maybe pick out some things I said and recognize them as criticisms of ideas he holds, whereas sometimes it was hard for me to tell what he believes because he was just summarizing other people's ideas. (When looking for criticism, consider: if I'm right, does it mean you're wrong? If so, then it's a claim by me about an error, even if I'm actually mistaken.) My guess is I said some things that would work as better error claims than some of the three I actually used, but I don't know which things they are. Also, I think if we were to debate, discussing the underlying premises, and whether this sub-field even matters, would actually be more important than discussing within-field details, so it's a good thing to bring up. My disagreement with the niche the article is working within is more important than some of the within-niche issues.

Offsetting and Repugnance

This section is about something @Teo Ajantaival also disagrees with, so it's not an error by him. It could possibly be an error of omission, if he sees this as a good point that he would have wanted to think of but didn't. To me it looks pretty important and relevant, and problematic to just ignore like there's no issue here.

If offsetting actually works – if you're a true believer in offsetting – then you should not find the very repugnant scenario to be repugnant at all.

I'll illustrate with a comparison. I am, like most people, to a reasonable approximation, a true believer in offsetting for money. That is, $100 in my bank account fully offsets $100 of credit card debt that I will pay off before there are any interest charges. There do exist people who say credit cards are evil and you shouldn't have one even if you pay it off in full every month, but I am not one of those people. I don't think debt is very repugnant when it's offset by assets like cash.

And similarly, spreading out the assets doesn't particularly matter. A billion bank accounts with a dollar each, ignoring some administrative hassle details, are just as good as one bank account with a billion dollars. That money can offset a million dollars of credit card debt just fine despite being spread out.

If you really think offsetting works, then you shouldn't find it repugnant to have some negatives that are offset. If you find it repugnant, you disagree with offsetting in that case.

I disagree with offsetting suffering – one person being happy does not simply cancel out someone else being victimized – and I figure most people also disagree with suffering offsetting. I also disagree with offsetting in epistemology. Money, as a fungible commodity, is something where offsetting works especially well. Similarly, offsetting would work well for barrels of oil of a standard size and quality, although oil is harder to transport than money so location matters more.

Bonus Error by Upvoters

At a glance (I haven't read it yet as I write this section), the article looks high effort. It has ~22 upvoters but no comments, no feedback, no hints about how to get feedback next time, no engagement with its ideas. I think that's really problematic and says something bad about the community and upvoting norms. I talk about this more at the beginning of my screen recording.

Update after reading the article: I can see some more potential reasons the article got no engagement (too specialized, too hard to read if you aren't familiar with the field, not enough introductory framing of why this matters) but someone could have at least said that. Upvoting is actually misleading feedback if you have problems like that with the article.

Bonus Literature on Maximizing or Minimizing Moral Values

https://www.curi.us/1169-morality

This article, by me, is about maximizing squirrels as a moral value, and more generally about there being a lot of actions and values which are largely independent of your goal. So if it was minimizing squirrels or maximizing bison, most of the conclusions are the same.

I commented on this some in my screen recording after the upvoters criticism, maybe 20min in.

Bonus Comments on Offsetting

(This section was written before the three errors, one of which ended up being related to this.)

Offsetting views are problematic in epistemology too, not just morality/axiology. I've been complaining about them for years. There's a huge, widespread issue where people basically ignore criticism – don't engage with it and don't give counter-arguments or solutions to the problems it raises – because it's easier to go get a bunch more positive points elsewhere to offset the criticism. Or if they already think their idea already has a ton of positive points and a significant lead, then they can basically ignore criticism without even doing anything. I commented on this verbally around 25min into the screen recording.

Screen Recording

I recorded my screen and talked while creating this. The recording has a lot of commentary that isn't written down in this post.

https://www.youtube.com/watch?v=d2T2OPSCBi4



Finding Errors in The Case Against Education by Bryan Caplan

I proposed a game on the Effective Altruism forum where people submit texts to me and I find three errors. I wrote this for that game. This is a duplicate of my EA post.

Introduction

I'm no fan of university or academia, so I do partly agree with The Case Against Education by Bryan Caplan. I do think social climbing is a major aspect of university. (It's not just status signalling. There's also e.g. social networking.)

I'm assuming you can electronically search the book to read additional context for quotes if you want to.

Error One

For a single individual, education pays.

You only need to find one job. Spending even a year on a difficult job search, convincing one employer to give you a chance, can easily beat spending four years at university and paying tuition. If you do well at that job and get a few years of work experience, getting another job in the same industry is usually much easier.

So I disagree that education pays, under the signalling model, for a single individual. I think a difficult job search is typically more efficient than university.

This works in some industries, like software, better than others. Caplan made a universal claim so there's no need to debate how many industries this is viable in.

Another option is starting a company. That's a lot of work, but it can still easily be a better option than going to university just so you can get hired.

Suppose, as a simple model, that 99% of jobs hire based on signalling and 1% don't. If lots of people stop going to university, there's a big problem. But if you individually don't go, you can get one of the 1% of non-signalling jobs. Whereas if 3% of the population skipped university and competed for 1% of the jobs, a lot of those people would have a rough time. (McDonalds doesn't hire cashiers based on signalling – or at least not the same kind of signalling – so imagine we're only considering good jobs in certain industries so the 1% non-signalling jobs model becomes more realistic.)
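To put rough numbers on that toy model (my sketch; the percentages are from the paragraph above):

```python
# Toy model: 1% of good jobs don't hire based on signalling. An individual
# skipper can take one of them, but the strategy fails if widely copied.
non_signalling_jobs = 0.01  # share of good jobs not filled via signalling
skippers = 0.03             # share of candidates who skip university

print(non_signalling_jobs / skippers)  # ~0.33: two of three skippers lose out
```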

When they calculate the selfish (or “private”) return to education, they focus on one benefit—the education premium—and two costs—tuition and foregone earnings.[4]

I've been reading chapter 5 trying to figure out if Caplan ever considers alternatives to university besides just entering the job market in the standard way. This is a hint that he doesn't.

Foregone earnings are not a cost of going to university. They are a benefit that should be added on to some, but not all, alternatives to university. Then university should be compared to alternatives for how much benefit it gives. When doing that comparison, you should not subtract income available in some alternatives from the benefit of university. Doing that subtraction only makes sense and works out OK if you're only considering two options: university or getting a job earlier. When there are only two options, taking a benefit from one and instead subtracting it from the other as an opportunity cost doesn't change the mathematical result.

See also Capitalism: A Treatise on Economics by George Reisman (one of the students of Ludwig von Mises) which criticizes opportunity costs:

Contemporary economics, in contrast, continually ignores the vital connection of income and cost with the receipt and outlay of money. It does so insofar as it propounds the doctrines of “imputed income” and “opportunity cost.”[26] The doctrine of imputed income openly and systematically avows that the absence of a cost constitutes income. The doctrine of opportunity cost, on the other hand, holds that the absence of an income constitutes a cost. Contemporary economics thus deals in nonexistent incomes and costs, which it treats as though they existed. Its formula is that money not spent is money earned, and that money not earned is money spent.

That's from the section "Critique of the Concept of Imputed Income" which is followed by the section "Critique of the Opportunity-Cost Doctrine". The book explains its point in more detail than this quote. I highly recommend Reisman's whole book to anyone who cares about economics.

Risk: I looked for discussion of alternatives besides university or entering the job market early, such as a higher effort job search or starting a business. I didn't find it, but I haven't read most of the book so I could have missed it. I primarily looked in chapter 5.

Error Two

The answer would tilt, naturally, if you had to sing Mary Poppins on a full-price Disney cruise. Unless you already planned to take this vacation, you presumably value the cruise less than the fare. Say you value the $2,000 cruise at only $800. Now, to capture the 0.1% premium, you have to fork over three hours of your time plus the $1,200 difference between the cost of the cruise and the value of the vacation.

(Bold added to quote.)

The full cost of the cruise is not just the fare. It's also the time cost of going on the cruise. It's very easy to value the cruise experience at more than the ticket price, but still not go, because you'd rather vacation somewhere else or stay home and write your book.

BTW, Caplan is certainly familiar with time costs in general (see e.g. the last sentence quoted).

Error Three

Laymen cringe when economists use a single metric—rate of return—to evaluate bonds, home insulation, and college. Hasn’t anyone ever told them money isn’t everything! The superficial response: Economists are by no means the only folks who picture education as an investment. Look at students. The Higher Education Research Institute has questioned college freshmen about their goals since the 1970s. The vast majority is openly careerist and materialist. In 2012, almost 90% called “being able to get a better job” a “very important” or “essential” reason to go to college. Being “very well-off financially” (over 80%) and “making more money” (about 75%) are almost as popular. Less than half say the same about “developing a meaningful philosophy of life.”[2] These results are especially striking because humans exaggerate their idealism and downplay their selfishness.[3] Students probably prize worldly success even more than they admit.

(Bold added.)

First, minor point, some economists have that kind of perspective about rate of return. Not all of them.

And I sympathize with the laymen. You should consider whether you want to go to university. Will you enjoy your time there? Future income isn't all that matters. Money is nice but it doesn't really buy happiness. People should think about what they want to do with their lives, in realistic ways that take money into account, but which don't focus exclusively on money. In the final quoted sentence he mentions that students (on average) probably "prize worldly success even more than they admit". I agree, but I think some of those students are making a mistake and will end up unhappy as a result. Lots of people focus their goals too much on money and never figure out how to be happy (also they end up unhappy if they don't get a bunch of money, which is a risk).

But here's the more concrete error: The survey does not actually show that students view education in terms of economic returns only. It doesn't show that students agree with Caplan.

The issue, highlighted in the first sentence, is "economists use a single metric—rate of return". Do students agree with that? In other words, do students use a single metric? A survey where e.g. 90% of them care about that metric does not mean they use it exclusively. They care about many metrics, not a single one. Caplan immediately admits that so I don't even have to look the study up. He says 'Less than half [of students surveyed] say the same [very important or essential reason to go to university] about “developing a meaningful philosophy of life.”' Let's assume less than half means a third. Caplan tries to present this like the study is backing him up and showing how students agree with him. But a third disagreeing with him on a single metric is a ton of disagreement. If they surveyed 50 things, and 40 aren't about money, and just 10% of students thought each of those 40 mattered, then maybe around zero students would agree with Caplan about only the single metric being important (the answers aren't independent, so you can't just use math to estimate this scenario, btw).
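Just to give a feel for the magnitude, here's the arithmetic under an (admittedly unrealistic) independence assumption, using the hypothetical numbers above:

```python
# Hypothetical survey arithmetic (my sketch; assumes independent answers,
# which, as just noted, is not realistic).
p_item_unimportant = 0.90  # 10% of students rate a given non-money item important
non_money_items = 40

# Share of students rating *none* of the non-money items important, i.e.
# who would genuinely use money as their single metric:
money_only = p_item_unimportant ** non_money_items
print(f"{money_only:.3f}")  # ~0.015, close to the "around zero" above
```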

Bonus Error

Self-help gurus tend to take the selfish point of view for granted. Policy wonks tend to take the social point of view for granted. Which viewpoint—selfish or social—is “correct”? Tough question. Instead of taking sides, the next two chapters sift through the evidence from both perspectives—and let the reader pick the right balance between looking out for number one and making the world a better place.

This neglects to consider the classical liberal view (which I believe, and which an economist ought to be familiar with) of the harmony of (rational) interests of society and the individual. There is no necessary conflict or tradeoff here. (I searched the whole book for "conflict", "harmony", "interests" and "classical" but didn't find this covered elsewhere.)

I do think errors of omission are important but I still didn't want to count this as one of my three errors. I was trying to find somewhat more concrete errors than just not talking about something important and relevant.

Bonus Error Two

The deeper response to laymen’s critique, though, is that economists are well aware money isn’t everything—and have an official solution. Namely: count everything people care about. The trick: For every benefit, ponder, “How much would I pay to obtain it?”

This doesn't work because lots of things people care about are incommensurable. They're in different dimensions that you can't convert between. I wrote about the general issue of taking into account multiple dimensions at once at https://forum.effectivealtruism.org/posts/K8Jvw7xjRxQz8jKgE/multi-factor-decision-making-math

A different way to look at it is that the value of X in money is wildly variable by context, not a stable number. Also how much people would pay to obtain something is wildly variable by how much money they have, not a stable number.

Potential Error

If university education correlates with higher income, that doesn't mean it causes higher income. Maybe people who are likely to get high incomes are more likely to go to university. There are also some other correlation isn't causation counter-arguments that could be made. Is this addressed in the book? I didn't find it, but I didn't look nearly enough to know whether it's covered. Actually I barely read anything about his claims that university results in higher income, which I assume are at least partly based on correlation data, but I didn't really check. So I don't know if there's an error here but I wanted to mention it. If I were to read the book more, this is something I'd look into.

Screen Recording

Want to see me look through the book and write this post? I recorded my process with sporadic verbal commentary:

https://www.youtube.com/watch?v=BQ70qzRG61Y



Misquoting Is Conceptually Similar to Deadnaming: A Suggestion to Improve EA Norms

Our society gives people (especially adults) freedom to control many aspects of their lives. People choose what name to go by, what words to say, what to do with their money, what gender to be called, what clothes to wear, and much more.

It violates people’s personal autonomy to try to control these things without their consent. It’s not your place to choose e.g. what to spend someone else’s money on, what clothes they should wear, or what their name is. It’d be extremely rude to call me “Joan” instead of “Elliot”.

Effective Altruism (EA) has written norms related to this:

Misgendering deliberately and/or deadnaming gratuitously is not ok, although mistakes are expected and fine (please accept corrections, though).

I think this norm is good. I think the same norm should be applied to misquoting for the same reasons. It currently isn’t (context).

Article summary: Misquoting is different than sloppiness or imprecision in general. Misquoting puts words in someone else’s mouth without their consent. It takes away their choice of what words to say or not say, just like deadnaming takes away their choice of what name to use.

I’d also suggest applying the deadnaming norm to other forms of misnaming besides deadnaming, though I don’t know if those ever actually come up at EA, whereas misquoting happens regularly. I won’t include examples of misquotes for two reasons. First, I don’t want to name and shame individuals (especially when it’s a widespread problem and it could easily have been some other individuals instead). Second, I don’t want people to respond by trying to debate the degree of importance or inaccuracy of particular misquotes. That would miss the point about people’s right to control their own speech. It’s not your place to speak for other people, without their consent, even a little bit, even in unimportant ways.

I’ll clarify how I think the norm for deadnaming works, which will simultaneously clarify what I think about misquoting. There are some nuances to it. Then I’ll discuss misquoting more and discuss costs and benefits.

Accidents

Accidental deadnaming is OK but non-accidental deadnaming isn’t. If you deadname someone once, and you’re corrected, you should fix it and you shouldn’t do it again. Accidentally deadnaming someone many times is implausible or unreasonable; reasonable people who want to stop having those accidents can stop.

While “mistakes are expected and fine”, EA’s norm is that deadnaming on purpose is not fine nor expected. Misquotes, like deadnaming, come in accidental and non-accidental categories, and the non-accidental ones shouldn’t be fine.

How can we (charitably) judge what is an accident?

A sign that deadnaming wasn’t accidental is when someone defends, legitimizes or excuses it. If they say, “Sorry, my mistake.” it was probably a genuine accident. If they instead say “Deadnaming is not that bad.” or “It’s not a big deal.” or “Why do you care so much?”, or “I’m just using the name on your birth certificate.” then their deadnaming was partly due to their attitude rather than by accident. That violates EA norms.

When people resist a correction, or deny the importance of getting it right, then their mistake wasn’t just an accident.

For political reasons, some people resist using other people’s preferred name or pronouns. There’s a current political controversy about it. This makes deadnaming more common than it would otherwise be. Any deadnaming that occurs in part due to political attitudes is not fully accidental. Similarly, there is a current intellectual controversy about whether misquoting is a big deal or whether, instead, complaining about it is annoyingly pedantic and unproductive. This controversy increases the frequency of misquotes.

However, that controversy about misquotes and precision is separate from the issue of people’s right to control their own speech and choose what words to say or not say. Regardless of the outcome of the precision vs. sloppiness debate in general, misquotes are a special case because they non-consensually violate other people’s control over their own speech. It’s a non sequitur to go from thinking that lower effort, less careful writing is good to the conclusion that it’s OK to say that John said words that he did not say or choose.

People who deadname frequently claim it’s accidental when there are strong signs it isn’t accidental, such as resisting correction, making political comments that reveal their agenda, or being unapologetic. If they do that repeatedly, I don’t think EA would put up with it. Misquoting could be treated the same way.

Legitimacy

Sometimes people call me “Elliott” and I usually say nothing about the misspelling. I interpret it as an accident because it doesn’t fit any agenda. I don’t know why they’d do it on purpose. If I expected them to use my name many times in the future, or they were using it in a place that many people would read it, then I’d probably correct them. If I corrected them, they would say “oops sorry” or something like that; as long as they didn’t feel attacked or judged, and they don’t have a guilty conscience, then they wouldn’t resist the correction.

My internet handle is “curi”. Sometimes people call me “Curi”. When we’re having a conversation and they’re using my name repeatedly, I may ask them to use “curi”. A few people have resisted this. Why? Besides feeling hostility towards a debate opponent, I think some were unfamiliar with internet culture, so they don’t regard name capitalization as a valid, legitimate choice. They believe names should be formatted in a standard way. They think I’m in the wrong by wanting to have a name that starts with a lowercase letter. They think, by asking them to start a name with a lowercase letter, I’m the one trying to control them in a weird, inappropriate way.

People resist corrections when they think they’re in the right in some way. In that case, the mistake isn’t accidental. Their belief that it’s good in some way is a causal factor in it happening. If it was just an accident, they wouldn’t resist fixing the mistake. Instead, there is a disagreement; they like something about the alleged mistake. On the EA forum, you’re not allowed to disagree that deadnaming is bad and also act on that disagreement by being resistant to the forum norms. You’re required to go along with and respect the norms. You can get a warning or ban for persistent deadnaming.

People’s belief that they’re in the right usually comes from some kind of social-cultural legitimacy, rather than being their own personal opinion. Deadnaming and misgendering are legitimized by right wing politics and by some traditional views. Capitalizing the first letter of a name, and lowercasing the rest, is a standard English convention/tradition which some internet subcultures decided to violate, perhaps due to their focus on written over spoken communication. I think misquoting is legitimized primarily by anti-pedantry or anti-over-precision ideas (which is actually a nuanced debate where I think both standard sides are wrong). But viewpoints on precision aren’t actually relevant to whether it’s acceptable or violating to put unchosen words in someone else’s mouth. Also, each person has a right to decide how precise to be in their own speech. When you quote, it’s important to understand that that isn’t your speech; you’re using someone else’s speech in a limited way, and it isn’t yours to control.

When someone asks you not to deadname, you may feel that they’re asking you to go against your political beliefs, and therefore want to resist what feels like politicized control over your speech, which asks you to use your own speech contrary to your values. However, a small subset of speech is more about other people than yourself, so others need to have significant control over it. That subset includes names, pronouns and quotes. When asked not to misquote, instead of feeling like your views on precision are being challenged, you should instead recognize that you’re simply being asked to respect other people’s right to choose what words to say or not say. It’s primarily about them, not you. And it’s primarily about their control over their own life and speech, not about how much precision is good or how precisely you should speak.

Control over names and pronouns does have to be within reason. You can’t choose “my master who I worship” as a name or pronoun and demand that others say it. I’m not aware of anyone ever seriously wanting to do that. I don’t think it’s a real problem or what the controversy is actually about (even though it’s a current political talking point).

Our culture has conflicting norms, but it does have a very clear, well known norm in favor of exact quotes. That’s taught in schools and written down in policies at some universities and newspapers. We lack similarly clear or strong norms for many other issues related to precision. Why? Because the norm against misquoting isn’t primarily about precision. Misquoting is treated differently than other issues related to precision because it’s not your place to choose someone else’s words any more than it’s your place to choose their name or gender.

Misquotes Due to Bias

Misquotes usually aren’t random errors.

Sometimes people make a typo. That’s an accident. Typos can be viewed as basically random errors. I bet there are actually patterns regarding which letters or letter combinations get more typos. And people could work to make fewer typos. But there’s no biased agenda there, so in general it’s not a problem.

Most quotes can be done with copy/paste, so typos can be avoided. If someone has a general policy of typing in quotes and keeps making typos within quotes, they should switch to using copy/paste. At my forum, I preemptively ask everyone to use software tools like copy/paste when possible to avoid misquotes. I don’t wait and ask them to switch to less error-prone quoting methods after they make some errors. That’s because, as with deadnaming, those errors mistreat other people, so I’d rather they didn’t happen in the first place.

Except for typos and genuine accidents, misquotes are usually changed in some way that benefits or favors the misquoter, not in random ways.

People often misquote because they want to edit things in their favor, even in very subtle ways. Tiny changes can make a quote seem more or less formal or tweak the connotations. People often edit quotes to remove some ambiguity, so it reads as an author more clearly saying something than he did.

Sometimes people want their writing to look good with no errors, so they want to change anything in a quote that they regard as an error, like a comma or lack of comma. Instead of respecting the quote as someone else’s words – their errors are theirs to make (or to disagree are errors) – they want to control it because they’re using it within their own writing, so they want to make it conform to their own writing standards. People should understand that when they quote, they are giving someone else a space within their writing, so they are giving up some control.

People also misquote because they don’t respect the concept of accurate quotations. These misquotes can be careless with no other agenda or bias – they aren’t specifically edited to e.g. help one side of a debate. However, random changes to the wordings your debate partners use tend to be bad for them. Random changes tend to make their wordings less precise rather than more precise. As we know from evolution, random changes are more likely to make something less adapted to a purpose rather than more adapted.

If you deadname people because you don’t respect the concept of people controlling their name, that’s not OK. If you are creating accidents because you don’t care to try to get names right, you’re doing something wrong. Similarly, if you create accidental misquotes because you don’t respect the concept of people controlling their own speech and wordings, you’re doing something wrong.

Also, imprecision in general is an enabler of bias because it gives people extra flexibility. They get more options for what to say, think or do, so they can pick the one that best fits their bias. A standard example is rounding in their favor. If you’re 10 minutes late, you might round that down to 5 minutes in a context where plus or minus five minutes of precision is allowed. On the other hand, if someone else is 40 minutes late, you might round that up to an hour as long as that’s within acceptable boundaries of imprecision. People also do this with money. Many people round their budget up but round their expenses down, and the more imprecise their thinking, the larger the effect. If permissible imprecision gives people multiple different versions of a quote that they can use, they’ll often pick one that is biased in their favor, which is different than a fully accidental misquote.

Misquotes Due to Precise Control or Perfectionism

Some non-accidental misquotes, instead of due to bias, are because people want to control all the words in their essay (or book or forum post). They care so much about controlling their speech, in precise detail, that they extend that control to the text within quotes just because it’s within their writing. They’re used to having full control over everything they write and they don’t draw a special boundary for quotations; they just keep being controlling. Then, ironically, when challenged, they may say “Oh who cares; it’s just small changes; you don’t need precise control over your speech.” But they changed the quote because of their extreme desire to exactly control anything even resembling their own speech. If you don’t want to give up control enough to let someone else speak in entirely their own words within your writing, there is a simple solution: don’t quote them. If you want total control of your stuff, and you can’t let a comma be out of place even within a quote, you should respect other people wanting control of their stuff, too. Some people don’t fully grasp that the stuff within quotes is not their stuff even though it’s within their writing. Misquotes of this nature come more from a place of perfectionism and precise control, and lack of empathy, rather than being sloppy accidents. These misquotes involve non-random changes to make the text fit the quoter’s preferences better.

Types of Misquotes

I divide misquotes into two categories. The first type changes a word, letter or punctuation mark. It’s a factual error (the quote is factually wrong about what the person said). It’s inaccurate in a clear, literal way. Computers can pretty easily check for this kind of quotation error without needing any artificial intelligence. Just a simple string comparison algorithm can do it. In this case, there’s generally no debate about whether the quote is accurate or inaccurate. There are also some special rules that allow changing quotes without them being considered inaccurate, e.g. using square brackets to indicate changes or notes, or using ellipses for omitted words.
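As a minimal sketch (my code) of that string comparison idea:

```python
# First-type misquote detection: a quote is literally accurate only if it
# appears verbatim in the source. (The square-bracket and ellipsis
# conventions would need extra handling, omitted here.)

def quote_is_verbatim(quote: str, source: str) -> bool:
    return quote in source

source = "I do not think John is great."
print(quote_is_verbatim("John was great.", source))  # False: altered word
print(quote_is_verbatim("John is great.", source))   # True, yet misleading:
# literal accuracy checks catch the first type of misquote, not the second.
```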

The second type of misquote is a misleading quote, such as taking words out of context. There is sometimes debate about whether a quote is misleading or not. Many cases are pretty clear, and some cases are harder to judge. In borderline cases, we should be forgiving of the person who did it, but also, in general, they should change it if the person being quoted objects. (Or, for example, if you’re debating someone about Socrates’ ideas, and they’re the one taking Socrates’ side, and they think your Socrates quote is misleading, then you should change it. You may say all sorts of negative things about the other side of the debate, but that’s not what quotation marks are for. Quotations are a form of neutral ground that should be kept objective, not a place to pursue your debating agenda.)

Here’s an example of a misleading quote that doesn’t violate the basic accuracy rules. You say, “I do not think John is great.” but I quote you as saying “John is great.” The context included an important “not” which has been left out. I think we can all agree that this counts as misquoting even though no words, letters or punctuation marks were changed. And, like deadnaming, it’s very rude to do this to someone.

Small Changes

Sometimes people believe it’s OK to misquote as long as the meaning isn’t changed. Isn’t it harmless to replace a word with a synonym? Isn’t it harmless to change a quote if the author agrees with the changed version? Do really small changes matter?

First of all, if the changes are small and don’t really matter, then just don’t do them. If you think there’s no significant difference, that implies there’s no significant upside, so then don’t misquote. It’s not like it takes substantial effort to refrain from editing a quote; it’s less work not to make changes. And copy/pasting is generally less work than typing.

If someone doesn’t mind a change to a quote, there are still concerns about truth and accuracy. Anyone in the audience may not want to read things he believes are exact quotes but which aren’t. He may find that misleading (and EA has a norm against misleading people). Also, if you ever non-accidentally use inaccurate quotes, then reasonable people will doubt that they can trust any of your quotes. They’ll have to check primary sources for any quotes you give, which will significantly raise the cost of reading your writing and reduce engagement with your ideas. But the main issue – putting words in someone’s mouth without their consent – is gone if they consent. Similarly, it isn’t deadnaming to use an old name of someone who consents to be called by either their old or new name.

However, it’s not your place to guess what words someone would consent to say. If they are a close friend, maybe you have a good understanding of what’s OK with them, and I guess you could try to get away with it. I wouldn’t recommend that and I wouldn’t want to be friends with someone who thought they could speak for me and present it as a quote rather than as an informed guess about my beliefs or about what I would say. But if you want to quote your friend (or anyone else) saying something they haven’t said, and you’re pretty sure they’d be happy to say it, there’s a solution: ask them to say it and then quote them if they do choose to say it. On the other hand, if you’re arguing with someone, you’re in a poor position to judge what words they would consent to saying or what kind of wording edits would be meaningful to them. It’s not reasonable to try to guess what wording edits a debate opponent would consent to and then go ahead with them unilaterally.

Inaccurately paraphrasing debate opponents is a problem too, but it’s much harder to avoid than misquoting is. Misquoting, like deadnaming, is something that you can almost entirely avoid if you want to.

The changes you find small and unimportant can matter to other people with different perspectives on the issues. You may think that “idea”, “concept”, “thought” and “theory” are interchangeable words, but someone else may purposefully, non-randomly use each of those words in different contexts. It’s important that people can control the nuances of their wordings when they want to (even if they can’t give explicit arguments for why they use words that way). Even if an author doesn’t (consciously) see any significant difference between his original wording and your misquote, the misquote is still less representative of his thinking (his subconscious or intuition chose to say it the other way, and that could be meaningful even if he doesn’t realize it).

Even if your misquote would be an accurate paraphrase, and won’t do a bunch of harm by spreading severe misinformation, there’s no need to put quote marks around it. If you’re using an edited version of someone else’s words, so leaving out the quote marks would be plagiarism, then use square brackets and ellipses. There’s already a standard solution for how to edit quotes, when appropriate, without misquoting. There’s no good reason to misquote.

Cost and Benefit

How costly is it to avoid misquotes or to avoid deadnaming? The cost is low but there are some reasons people misjudge it.

Being precise has a high cost, at least initially. But misquoting, like misnaming, is a specific case where, with a low effort, people can get things right with high reliability and few accidents. Reducing genuine accidents to zero is unnecessary and isn’t what the controversy is about.

When a mistake is just an accident, correcting it shouldn’t be a big deal. There is no shame in infrequent accidents. Attempts to correct misquotes sometimes turn into a much bigger deal, with each party writing multiple messages. It can even initiate drama. This is because people oppose the policy of not misquoting, not because of any cost inherent in that policy. It’s the resistance to the policy, not the policy itself, which wastes time and energy and derails conversations.

Most of the observed conversational cost, that goes to talking about misquotes, is due to people’s pro-misquoting attitudes rather than due to any actual difficulty of avoiding misquotes. This misleads people about how large the cost is.

Similarly, if you go to some right wing political forums, getting people to stop deadnaming would be very costly. They’d fight you over it. But if they were happy to just do it, then the costs would be low. It’s not very hard to very infrequently make updates to your memory about the names of a few people. Cost due to opposition to doing something correctly should be clearly differentiated from the cost of doing it correctly.

To avoid misquotes, copy and paste. If you type in a quote from paper, double check it and/or disclaim it as potentially containing a typo. Most books are available electronically, so typing quotes in from paper is usually unnecessary and more costly. Most cases of misquoting that I’ve seen, or had a conflict over, involved a quote that could have been copy/pasted. Copy/pasting is easy, not costly.

Avoiding misquotes also involves never adding quotation marks around things which are not quotes but which readers would think were quotes. For example, don’t write “John said” followed by a paraphrase with quote marks around it, in order to make it seem more exact, precise, rigorous or official than it is. And don’t put quote marks around a paraphrase because you believe you should use a quote, but you’re too lazy to get the quote, and you want to hide that laziness by pretending you did quote.

Accurate quoting can be more about avoiding bias than about effort or precision. You have to want to do it and then resist the temptation to violate the rules in ways that favor you. For some people, that’s not even tempting. It’s like how some people resist the temptation to steal while others don’t find stealing tempting in the first place. You can get to the point that things aren’t tempting and really don’t take effort to not do. Norms can help with that. Due to better anti-stealing norms, many more people aren’t tempted to steal than aren’t tempted to misquote. Anyway, if someone gives in to temptation and steals, deadnames or misquotes, that is not an accident. It’s a different thing. It’s not permissible at EA to deadname because you gave in to temptation, and I suggest misquoting should work that way too.

What’s the upside of misquoting? Why are many people resistant to making a small effort to change? I think there are two main reasons. First, they confuse the misquoting issue with the general issue of being imprecise. They feel like someone asking them not to misquote is demanding that they be a more precise thinker and writer in general. Actually, people asking not to be misquoted, like people asking not to be deadnamed, don’t want their personal domain violated. Second, people like misquoting because it lets them make biased changes to quotes. People don’t like being controlled by rules that give them less choice of what to do and less opportunity to be flexible in their favor (a.k.a. biased). Many people have a general resistance to creating and following written policies. I’ve written about how that’s related to not understanding or resisting the rule of law.

Another cost of avoiding misquotes is that you should be careful when using software editing tools like spellcheck or Grammarly. They should have automatic quote detection features and warn you before making changes within quotes, but they don’t. These tools encourage people to quickly make many small changes without reading the context, so people may change something without even knowing it’s within a quote. People can also click buttons like “correct all” and end up editing quotes. Or they might decide to replace all instances of “colour” with “color” in their book, do a mass find/replace, and accidentally change a quote. I wonder how many small misquotes in recent books are caused this way, but I don’t think it’s the cause of many misquotes on forums. Again, the occasional accident is OK; perfection is not necessary but people could avoid most errors at a low cost and stop picking fights in defense of misquotes or deadnaming.
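Here's a minimal sketch (my illustration; as noted above, real tools lack this) of what a quote-aware find/replace could look like:

```python
import re

# Refuse a mass find/replace when any match falls inside double quotation
# marks (simplified to straight quotes; nested/curly quotes not handled).

def safe_replace(text: str, old: str, new: str) -> str:
    quoted_spans = [m.span() for m in re.finditer(r'"[^"]*"', text)]
    for m in re.finditer(re.escape(old), text):
        if any(start <= m.start() < end for start, end in quoted_spans):
            raise ValueError(f"refusing to change {old!r} inside a quotation")
    return text.replace(old, new)

book = 'I prefer colour. John wrote, "colour is a word."'
# safe_replace(book, "colour", "color") raises instead of misquoting John.
```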

If non-accidental misquoting is prohibited at EA, just like deadnaming, then it will provide a primary benefit by defending people’s control over their own speech. It will also provide a secondary benefit regarding truth, accuracy and precision. It’s debatable how large that accuracy benefit is and how much cost it would be worth. However, in this case, the marginal cost of that benefit would be zero. If you change misquoting norms for another reason which is worth the cost by itself, then the gain in accuracy is a free bonus.

There are some gray areas regarding misquoting, where it’s harder to judge whether it’s an error. Those issues are more costly to police. However, most of the benefit is available just by policing misquotes which are clearly and easily avoidable, which is the large majority of misquotes. Doing that will have a good cost to benefit ratio.

Another cost of misquoting is that it can gaslight people, especially with small, subtle changes. It can cause them to doubt themselves or create false memories of their own speech to match the misquote. It takes work to double check what you actually said after reading someone quote you, which is a cost. Many people don’t do that work, which leaves them vulnerable. There’s a downside both to doing and to not doing that work. That’s a cost imposed by allowing misquotes to be common and legitimized.

Tables

Benefits and costs of anti-misquoting norms:

| Benefits | Costs |
| --- | --- |
| Respect people’s control over their speech | Avoiding carelessness |
| Accuracy | Resisting temptation |
| Prevent conflicts about misquotes | Not getting to bias quotes in your favor |
| No hidden, biased tweaks in quotes you read | Learning to use copy/paste hotkeys |
| Less time editing quotes | Not getting full control over quoted text like you have over other text in your post |
| Quotes and paraphrases differentiated | Not getting to put quote marks around whatever you want to |
| Filter out persistent misquoters | Lose people who insist on misquoting |
| | Effort to spread and enforce norm |

For comparison, here’s a cost/benefit table for anti-deadnaming norms:

| Benefits | Costs |
| --- | --- |
| Respect people’s control over their name | Avoiding carelessness |
| Accuracy | Resisting temptation |
| Filter out persistent deadnamers | Lose people who insist on deadnaming |
| | Not getting to call people whatever you want |
| | Effort to spread and enforce norm |

Potential Objections

If I can’t misquote, how can I tweak a quote’s wording to fit my sentence? Use square brackets.

If I can’t misquote, how can I supply context for a quote and keep it short? Use square brackets or explain the context before giving the quote.

What if I want to type in a quote but then I make a typo? If you’re a good enough typist that you don’t mind typing extra words, I’m sure you can also manage to use copy/paste hotkeys.

What if I’m quoting a paper book? Double check what you typed in and/or put a disclaimer that it’s typed in by hand.

What if an accident happens? As with deadnaming, rare, genuine accidents are OK. Accidents that happen because you don’t really care about deadnaming or misquoting are not fine.

Who cares? People who think about what words to say and not say, and put effort into those decisions. They don’t want someone else to overrule those decisions. Whether you’re one of those people or not, people who think about what to say are people you should want to have on your forum.

Who else cares? People who want to form accurate beliefs about the world and have high standards don’t want to read misquotes and potentially be fooled by them or have to look stuff up in primary sources frequently. It’s much less work for people to not misquote in the first place than for readers (often multiple readers independently) to check sources.

Is it really that big a deal? Quoting accurately isn’t very hard and isn’t that big a deal to do. If this issue doesn’t matter much, just do it in the way that doesn’t cause problems and doesn’t draw attention to quoting. If people would stop misquoting then we could all stop talking about this.

Can’t you just ignore being misquoted? Maybe. You can also ignore being deadnamed, but you shouldn’t have to. It’s also hard enough to have discussions when people subtly reframe the issues, and indirectly reframe what you said (often by replying as if you said something, without claiming you said it), which is very common. Those actions are harder to deal with and counter when they involve misquotes – misquotes escalate a preexisting problem and make it worse. On the other hand, norms in favor of using (accurate) quotes more often would make it harder to be subtly biased and misleading about what discussion partners said.

Epistemic Status

I’ve had strong opinions about misquoting for years and brought these issues up with many people. My experiences with using no-misquoting norms at my own forum have been positive. I still don’t know of any reasonable counter-arguments that favor misquotes.

Conclusion

Repeated deadnaming is due to choice not accident. Even if a repeat offender isn’t directly choosing to deadname on purpose, they’re choosing to be careless about the issue on purpose, or they have a (probably political) bias. They could stop deadnaming if they tried harder. EA norms correctly prohibit deadnaming, except by genuine accident. People are expected to make a reasonable (small) effort to not deadname.

Like deadnaming, misquoting violates someone else’s consent and control over their personal domain. People see misquoting as being about the open debate over how precise people should be, but that is a secondary issue. They should have more empathy for people who want to control their own speech. I propose that EA’s norms should be changed to treat misquoting like deadnaming. Misquoting is a frequent occurrence and the forum would be a better place if moderators put a stop to it, as they stop deadnaming.

Norms that allow non-accidental misquoting alienate some people who might otherwise participate, just like allowing non-accidental deadnaming would alienate some potential participants. Try to visualize in your head what a forum would be like where the moderators refused to do anything about non-accidental deadnaming. Even if you don’t personally have a deadname, it’d still create a bad, disrespectful atmosphere. It’s better to be respectful and inclusive, at a fairly small cost, instead of letting some forum users mistreat others. It’s great for forums to enable free speech and have a ton of tolerance, but that shouldn’t extend to people exercising control over something that someone else has the right to control, such as his name or speech. It’s not much work to get people’s names right nor to copy/paste exact quotes and then leave them alone (and to refrain from adding quotation marks around paraphrases). Please change EA’s norms to be more respectful of people’s control over their speech, as the norms already respect people’s control over their name.



How I Misunderstood TCS

I saw the blog "Taking Children Seriously" Is Bad. I agree. I’ve thought of more and more flaws with TCS as time has gone on and I’ve written some criticism. I’ve also put warnings/disclaimers on some of my old TCS writing. Also the TCS founders are bad people who are responsible for a harassment campaign against me. Anyway, I wanted to share some thoughts on how/why I didn’t notice TCS’s flaws sooner.


I think I misunderstood TCS for a bunch of reasons, but in a way where the version of TCS in my head was better than what David and Sarah meant. One thing that happened was that DD said there was TCS knowledge on some topics, and I believed him and tried to learn/understand it. Then I actually created some of it myself.

DD often let me talk a lot while making some comments, and he didn’t tell me when I was saying things that were new to him, which was misleading. I often thought I was figuring out things he already knew, with some hints/help, when actually he was hiding his ignorance from me. The best example of this is my method for avoiding coercion, which was part of my attempt to learn (and organize and write down publicly) existing TCS knowledge, but was actually me creating new knowledge. And I’m not sure, to this day, whether DD ever learned my avoiding-coercion method, or agrees with it, or likes it. But without my method, how do you always find common preferences (quickly, not given unbounded time)? TCS has no real, substantive, usable answer – just discuss and try, while trying not to be irrational and not to coerce. TCS also lacks details for how to have a rational discussion. I’ve tried to understand/create rational methods more than TCS (or Popper) ever did, with ideas like Paths Forward, Impasse Chains, decisive arguments, debate trees, and idea-goal-context decision making. TCS never had methods with that level of specificity and usefulness.

DD told me I was really good at drawing out explicit statements of knowledge he already had. But I think a lot of what happened is I brought up issues – via questions, criticism or explanations – which he hadn’t actually thought of. That prompted him to make new explicit statements to address my new ideas.

TCS had very broad, abstract claims like “problems are soluble”, as well as simple examples and naive advice. One example of naive advice is that custody courts and child protective services aren’t very dangerous and you shouldn’t worry about them. Another is saying that child predators are very rare and not really a concern, even while claiming that children are full adults in principle and advocating abolishing age of consent laws. Another example of the lack of substance in TCS advice was DD suggesting to tell teachers to let your child use the phone whenever he wants. If teachers (or babysitters, daycare workers, camp workers, etc.) would actually listen to that kind of request, that would be wonderful. But we don’t live in that world. And if we did, parents would be able to think of the idea “ask them to let my child use the phone whenever he wants” without DD’s help. It’s not a very clever idea; most parents could come up with it themselves (if they had the sort of goals where it’d be a good idea – most TCS-inclined parents would want their kid to be able to phone for help, but some other parents wouldn’t actually want that).

Another thing that happened, from my perspective, is I won a lot of arguments. I criticized a lot of genuine errors. I thought that was important and useful, and would lead to progress. DD encouraged and liked it. It was useful practice for my own intellectual development. Before I found DD/TCS, I was way above average at critical debate, logic, etc. But now I’ve improved a ton compared to my past self. The critical discussions had value for me but weren’t much use for changing the world. They didn’t help people much. People tended not to learn from criticism. And other people in the audience (besides whoever I was directly replying to) tended not to learn much even if they were making very similar mistakes to what I commented on, and they also tended not to learn much from my example about how to debate, think critically, get logic right, etc.

TCS seemed right and important to me because I used ideas related to it and won arguments. That made it seem to me like people were doing worse than TCS and TCS was a clear improvement. While TCS or any sort of gentle parenting has some improvements over mean parenting, I don’t think that was really the issue. I could have won a lot of arguments using other ideas too. The bigger issue is that people are bad at arguing, logic, learning and following ideas correctly, etc. So yeah they wouldn’t get even the basics of TCS right. In some sense, TCS didn’t seem to need more advanced or complex ideas because people weren’t learning and using the main ideas it did say. TCS is like “be way nicer to your kids guys” and then people post about how they’re mean to their kids and blind to it. They needed more practical help. They needed more guidance to actually learn ideas and integrate them into their lives. These are some of the things I’ve been working on with CF. TCS didn’t do that. It wasn’t actually very good.

TCS actually had ideas that were against being organized or methodical, or intentionally following long term goals. It was more like “follow the fun” and “being untidy helps you be creative”, which are just personal irrationalities and errors of DD and SFC, not principles with anything to do with Popperian epistemology. I did OK at learning and making progress despite the lack of structure, but most people didn’t, and I think I would have learned more and faster with more organization and structure. I’ve now imposed more structure on my life and organized things more, and it is not self-coercive for me; I’m fine with it and find it useful. I understand that for DD it would be self-coercive, but many people can do some of it without major downsides, and DD is wrong and should really work on fixing his flaws. TCS never told people to practice anything, but practice is a key part of turning intellectual ideas into something that makes a difference in your daily life (rather than only affecting some decisions that you use conscious analysis for, which often leads to clashes between your conscious and subconscious if you don’t do any practice).

This article itself isn’t very organized, but that’s an intentional choice. I’d rather put organizing and editing effort into epistemology articles for the CF website than into this article. I want to write this article cheaply (in terms of resource use like effort). Similarly, I could write a lot of detailed criticism of TCS and of DD’s books, but I don’t want to because I have other things to do. I’ve made some intentional choices about what to prioritize. My CF site has the stuff I think is most important to put energy into. It avoids parenting, relationships and politics. I think stuff about rationality itself is more important because it’s needed to deal with those other topics well. On a related note, I would like to study math and physics, but I don’t, because I don’t want to take the energy away from my philosophy work. TCS discouraged that kind of resource budgeting choice. But I don’t feel bad or self-coerced about it. I think it’s a good choice. I don’t have time or energy to do everything that would be nice to do. Prioritizing is part of life. If you don’t prioritize in a conscious or intentional way, you’ll still end up doing some things and not others. The difference will be some more important things don’t get done. Unintentionally not doing some things because you run out of time and energy won’t lead to better outcomes than making some imperfect, intentional, conscious decisions.

It’s important not to fight with yourself and suppress your desires with willpower. It’s important not to consciously choose some priorities that your subconscious disagrees with. People don’t live up to this perfectly. It’s a good goal to try to do better at, but don’t get paralyzed or sad about it. Just don’t purposefully suppress your desires with willpower and treat that as a good long-term strategy instead of ever improving.

It’s pretty common to like something subconsciously/emotionally/intuitively and also think it’s important. That’s an achievable, realistic thing. Not everyone is really conflicted about prioritizing whatever their main interest or profession is. Some people like something and prioritize it and that works well for them. It’s not really all that special that I like philosophy, and do it, and I’m OK with deprioritizing math and physics even though those would be fun too. I don’t think DD can do it though, which is part of why he started TCS but later abandoned it – he has poor control over his priorities and they’re unstable. In retrospect, when he wrote over 100 blog posts about politics for his blog Setting the World to Rights, that was a betrayal of TCS. He could and should have written 100 articles about parenting instead (or, if he didn’t want to, he shouldn’t have founded a parenting movement and recruited people to join it in the first place – he should have chosen the politics blog instead).

Also, by singling out parenting as very abusive, monstrous, etc., TCS implied the current state of the rest of the world was better than it is. Saying TCS was practical and immediately achievable also implied the world is better than it is. I didn’t realize how screwed up the world is, and TCS was wrong about it. The world being more screwed up makes TCS thinking less reasonable. (It doesn’t affect abstract principles, but it affects applications.) While TCS portrayed most of the world as better than reality, it said all other parenting is really bad. It’s actually pretty common for people to notice errors in their speciality, think it’s a big problem, and assume other specialties aren’t so screwed up. It’s been said that people reading a newspaper article about their profession often see that it’s full of glaring, basic errors … but then for some reason they believe the same newspaper on every other topic. TCS saw parenting errors but believed the same society was reasonable on other topics. (TCS got some of the errors wrong, but there are plenty of real errors in everything, so when you decide to be a harsh critic you’ll often get some things right. Or put another way, everything has lots of room for improvement. If you just try to point out flaws, then it’s not so hard to be right some of the time. If you try to suggest viable ways to improve things, that’s much harder, because your suggestions will contain flaws too.)

The best parts of TCS were short, abstract, general principles. Their applications of those principles were not so good. The best principles were unoriginal and came from Popper (rationality stuff) or classical liberalism (freedom, cooperative relationships, mutual benefit, win/win solutions). They were open about getting ideas from those two sources. What was more original were the specific applications to parenting, but those weren’t so good…

What happened is I learned TCS by trying to understand and apply the principles myself. I reinvented a lot of the applications while trying to figure out the details, because TCS didn’t have enough details and because I cared much more about the principles than about parenting (so did DD, who, for that reason, should not have founded a parenting movement – it would have been better if he’d made a philosophy blog instead, as I have done). Anyway, when I worked out applications of the principles myself, I came up with a lot of different conclusions without realizing it. That’s a common thing people do when they read something and don’t discuss it much, but I was discussing with DD all the time, and he didn’t tell me that I was coming up with new and different ideas, and he didn’t express disagreement with the stuff I came up with, which was really misleading to me.

An example is that I figured out that TCS implies having only one child (at a time), but DD and SFC didn’t say that and I doubt they believe it, yet I don’t recall DD ever expressing disagreement with that idea. TCS also said a bunch of stuff about getting helpers, but what I figured out is its principles suggest that even having a co-parent is very problematic, because it gets in the way of taking individual responsibility for a very hard, unconventional project you’re doing where you need full control and can’t rely on others to be rational participants. Not having a co-parent is also very problematic, so there’s a hard problem there that TCS doesn’t address at all. (Having only one kid has some problems too, btw. There are downsides to address which TCS hasn’t tried to develop knowledge about.) Having little other help besides a co-parent is reasonably realistic though – much more so than having other helpers who are actually TCS. Thinking you could have lots of TCS helpers is also related to the incorrect adequate-society mindset of TCS.



Some Flaws in Objectivism

I’m a big fan of Objectivism and Ayn Rand. As evidence, I present to you my Learn Objectivism website with an outline of Atlas Shrugged (AS) and detailed analysis of the first chapters. However, there are some parts of Objectivism that I disagree with.

I disagree with the limited communication between characters in AS and The Fountainhead (FH). Rand thinks that appearances, faces, eyes and expressions communicate more effectively than they actually do.

I disagree with some of the ideas about romance and sex, particularly given the limited communication between characters. I also think John Galt acted like a creepy stalker with Dagny Taggart. I say more in podcasts: one and especially two.

I think Galt should have given his big speech ten years earlier instead of waiting until so much harm and damage was done. I understand not giving the world his motor and other practical help. But I think it’s fine to tell them about philosophy. Philosophy is very hard to use and benefit from without actually being rational and understanding it (much harder than math, science, engineering, etc.). So I think it was basically safe to share philosophy ideas with the world. Rand partly agrees with this, since Francisco gave his speech about money at a party and didn’t mind people hearing it.

So I think it was deeply unfair and unjust to destroy the world without explaining first. Explaining both allows more people to agree with Galt and take his side, and allows people the opportunity to give counter-arguments and change Galt’s mind. This is relevant to my own career. I’m concerned that my society is too corrupt for me to help it with e.g. a scientific breakthrough, or to sell my brains to work on other people’s goals (e.g. working for a big company or government in a way that significantly and uniquely helps them, rather than in a job where I’d be easily replaceable with someone else). I’m partially on strike like Galt in AS. But I think my philosophy writing is fine, and basically that bad people won’t be helped by it. It takes too much learning to use it, at which point you’d be a good person. It’s not designed for enabling shortcuts (which wouldn’t work). (Note: I’m from the U.S.A., but I don’t think another country is significantly better.)

Broadly, I think Roark, Galt and others should have shared more ideas and been more open to public debate. I don’t think being willing to debate gives sanction, legitimacy or help to a corrupt society. Actually, I think it makes society look bad if you’re open to debate but society’s representatives or members won’t debate you. Being willing to debate, and winning or being refused, helps reveal society’s inferiority (if that’s true), especially if you debate rationally instead of treating debate as a contest for using rhetorical tricks (and even if debate were a dumb contest, if an outsider wins that contest, then society looks less powerful, smart, etc.). There are exceptions, like I think it was fine for Dagny to refuse to debate the biased question “Is Rearden Metal a lethal product of greed?” on a hostile, unfair radio show. But if people were willing to have fairer debates in more neutral settings, then I think engaging in debate is good (at least until it becomes repetitive). Similarly, I think Galt should have written and attempted to publish a book before striking (if he was unable to get it published, that’s OK – then he did his part by trying).

In real life, Rand debated more than her characters did, but not enough. Popper and other intellectuals I like also didn’t debate enough, and I know it was much harder before the internet. Inadequate debating is a widespread problem rather than something specific to Objectivism. I find that Objectivists online seem about equally willing to debate as Popperians or various other groups. They’ll informally argue a bit, but they lack Paths Forward or rational debate policies, and it’s hard to get any kind of organized, conclusive debate with followups over time.

I don’t think Galt should have worked a menial job. His time is limited and precious. His friends could have easily given him the same amount of money he made from that job so he could live a modest lifestyle and spend more time doing physics research and working on the strike. And if he invented one extra thing in his lifetime, or did a better job leading the strike, that would have been more than worth the money to them.

I don’t think Howard Roark should have refused money from his friends and worked in a quarry, either. He could have lived modestly with their money and done architecture research or taken up a hobby.

I think Rand over-emphasized politics, although she did say that philosophy is more important and that Objectivists shouldn’t try to form a political party or influence elections – the world needs philosophical education not political activism. Many Objectivists have not listened and are overly into politics. I also disagree with some of Rand’s criticism of anarchy, though I agree that current anarchists and libertarians are mostly bad.

I disagree with Rand’s advocacy of induction (the mainstream, conventional philosophical idea allegedly explaining most learning), though she never said much about it and admitted that she didn’t know the details. Some of her followers have emphasized it much more than she did and have attacked the anti-inductivist philosopher Karl Popper (in unfair, unreasonable, ignorant ways).

I don’t like how dramatic, extremist and absolutist Rand’s characters are (which I think reflects on some of her own thinking). I’ll give some examples of what I mean from AS:

I would give my life not to let it be otherwise

I think that I would give my life for just one more year on the railroad

He was the only man—with one exception—to whom I could have given my life!

If I should lose my life, to what better purpose could I give it?

I think I would give the rest of my life for one year as your furnace foreman. But I can’t.

I didn’t care whether either one of us lived afterwards, just to see you this once!

if hell is the price—and the measure—then let me be the greediest of the three of us

Happiness is possible only to a rational man, the man who desires nothing but rational goals, seeks nothing but rational values and finds his joy in nothing but rational actions.

I wouldn’t approach him. The only homage I can still pay him is not to cry for forgiveness where no forgiveness is possible.

I’d give anything now to have him back, but I own nothing to offer in such repayment, and I’ll never see him again, because it’s I who’ll know that there is no way to deserve even the right to ask forgiveness.

The angular planes of his cheeks made her think of arrogance, of tension, of scorn—yet the face had none of these qualities, it had their final sum: a look of serene determination and of certainty, and the look of a ruthless innocence which would not seek forgiveness or grant it.

There’s a character mockingly nicknamed Non-Absolute. After he improves, Rearden says:

You’re a full absolute now, and you know it.

Objectivism tries to be more of a broad, complete philosophy than most of its rivals. This leads to sharing more of Rand’s ideas and therefore sharing more mistakes. Karl Popper wrote about fewer topics (though more than most thinkers) and wasn’t very good on most topics besides epistemology. Eli Goldratt, Thomas Szasz or Ludwig von Mises wrote about fewer topics than Popper, and it’s harder to find flaws in what they did write because they focused on sharing only their best ideas.

When people only write about their specialty, it hides many of their weaknesses. That’s fine. I’m not saying it’s better to cover more or fewer topics; both are reasonable. Just be careful comparing people by the flaws you can find anywhere in their writing; that isn’t fair to people like Rand who covered more topics. A fairer way to compare would be to pick only one topic, which multiple people wrote about, and then look for flaws only on that topic. And if someone made some mistakes, don’t assume they’re no better on any other topic. I don’t think people should be discouraged from writing about many topics even if they can’t keep the quality as high as if they focused only on a couple topics. Overall, I think there’s value to be gained by reading Rand’s ideas about many topics, and I’d be worse off if she’d picked only three to share publicly. I myself do write about many topics rather than only sharing ideas about epistemology.



Fraud By Companies That Don’t Serve Consumers

When I recently wrote Capitalism Means Policing Big Companies, I was thinking about big companies that sell to consumers or about the system as a whole which sells to consumers. I wasn’t thinking about a large dairy farm that sells to a middleman that sells to Kraft that sells to consumers, and then considering the dairy farm or middleman as independent companies who can perhaps say “Whatever Kraft tells consumers is not our responsibility; we disclosed everything about our product appropriately to the people we sold it to.”

I suspect fraud is a bigger problem for how consumers are treated than for how businesses are treated. This is partly because consumers are more vulnerable and unsophisticated. Advertising to consumers is different than advertising to businesses.

But companies don’t like taking the dishonesty or fraud on themselves. For example, trucking companies aren’t happy for their drivers to accurately disclose everything and then have the trucking company itself lie to regulators. Instead, they pressure their drivers to hide all problems and falsify documents so that the trucking company can keep its hands cleaner.

I imagine it’s the same throughout supply chains. Kraft doesn’t want its suppliers to disclose problems to it that it will then have to knowingly, purposefully hide. Kraft would instead pressure its suppliers to do some of the dishonesty themselves instead of pushing it all on Kraft. And that would extend all the way down the chain. The middlemen don’t want all the dishonesty either. They want their suppliers to cover some stuff up so they don’t have to. Each company wants its business partners to lie to it, so they can keep their own hands cleaner, and they will tend to do business with whoever is willing to do that for them. That’s actually similar to how consumers tend to do business with companies that lie to them and tell them what they want to hear.

On a related note, Amazon tries to get out of liability and responsibility for what many of its delivery drivers do by having them superficially work for other companies, even though Amazon exercises a ton of micromanagement and control over their jobs (so their independence seems more like a facade than reality). Amazon puts a lot of work into trying to have a lot of the dirt be on other people’s hands instead of its own. It wouldn’t be happy for its suppliers and service providers to be fully honest and clean, because that would make it clearer that the dirt is Amazon’s.

The facade of the independence of many Amazon drivers reminds me of Uber. From what I saw a while ago, Uber pretends that its workers not only aren’t its employees but also aren’t its contractors. Uber claims it is just a service provider for drivers, and that drivers contract directly with riders. So rather than Uber paying its workers at all, Uber claims that actually the drivers pay Uber for Uber’s services (primarily providing software). So Uber is allegedly like a company that makes an online appointments calendar app that e.g. dentists and hair salons pay for. This facade is not kept up consistently, and I don’t think it’s gotten much media attention.

To double check about Uber, I just did a web search. First thing I clicked:

Uber Driver Agreement updated JAN 2022: defines that Drivers Pay Uber

Uber doesn't pay drivers...
Drivers pay Uber for access to the driver app..

Uber’s contract terms from that post:

We are not hiring or engaging you to provide any service;
You are engaging us to provide you access to our Platform.

Another example is that many companies, like Tesla or Apple, use natural resources which have to be mined, often in countries with less law and order. A lot of that is indirect – they buy e.g. batteries from someone else who bought raw resources like lithium. Or companies like Nike and other clothing brands use sweatshop labor in poorer countries (often indirectly, via some other companies who own the sweatshops). The U.S. companies who sell to consumers like me never want to know about any abuses going on out of their direct sight. They want their suppliers to tell them everything is wonderful and humane (maybe they want it to actually be that way, or maybe not, but they tend to make a lot of decisions based on lower prices, not humaneness). Apple isn’t telling Foxconn, “Don’t worry; just tell us the truth about your abusive practices; we don’t care; we’ll just lie to the public but we want all internal documents to be truthful.” That would result in whistleblowers exposing Apple. Apple and others actually make some inadequate attempts to police their suppliers and get them to clean up their acts. Or Apple will go through several layers of indirection (other companies) to limit their connection to abuses. Sometimes they stop using suppliers, even indirectly, who are caught doing human rights abuses (especially if it gets media attention). But on the whole, they keep profiting off of human rights abuses in foreign countries – my point is that they put work into having deniability on their end.

Another example of wanting dishonesty from companies you do business with is carbon offsets (more). Really offsetting carbon is hard, but companies like Apple want to market themselves as carbon neutral, and are happy to pay for carbon offsets with very low, inadequate standards. If challenged, Apple would blame their suppliers of carbon offsets and say they had no idea that some of them were committing fraud. Apple doesn’t want to know about the fraud in the carbon offset business; they want to be isolated from it and protected from liability, responsibility, or being the ones doing the lying. But actually Apple should know better and should do reasonable due diligence. Apple’s marketing about being carbon neutral appears to be fraudulent and to be an example of the lack of reasonable enforcement of “no initiating force (including fraud)” by the government.

(I have not investigated specifically which carbon offset suppliers Apple uses. Although I’m convinced that the carbon offset industry does a bunch of fraud, there could be a few better providers, and it’s not impossible that Apple in particular sought out and used those providers. Apple is just an example. Throughout this article I’ve named some well known example companies, which I think is useful for readers, but they’re meant to be representative of much wider problems involving many other companies.)

In conclusion, although I have less familiarity with large companies that don’t sell to consumers, I don’t think they’re innocent. I think fraud and force are spread out through the supply chain, not just concentrated in the last step which advertises to consumers. Often the last step in the global supply chain is a U.S. or European company which tries harder than average to appear to have clean hands.



Stable and Unstable Ideas

Debating or discussing people’s unstable ideas tends to be awful.

Stable ideas are believed over time. They tend not to change. People tend to consistently believe them in multiple contexts. People tend not to forget about them. There may be some special cases where they don’t realize a stable idea applies, but those are exceptions to the general pattern of applying it.

Stable ideas are more integrated into people’s thinking and have survived more self-criticism and often more external criticism too.

Unstable ideas are often ad hoc excuses or rationalizations made up during the current conversation. If you try to argue with them, you’ll often find them replaced with some other idea later in the same conversation. Since you don’t have a stable target to argue with, it’s hard to debate. The person often forgets or ignores their own idea, or changes their mind in the middle of trying to make a multi-step argument (often with no acknowledgement that they changed their mind, and no learning; they just start saying new things which are also unstable).

When you get an error correction for a stable idea, that’s useful. Without that error correction, you likely would have made the mistake next month and next year. You already have a history of keeping and using the idea over time, so it’s worth the effort to improve it. And you already had plenty of time to find the error yourself, but you didn’t find it.

With unstable ideas, you could usually find an error in them yourself if you tried. They wouldn’t last a week even with no discussion or debate. They were never going to become integrated into your thinking and used for many things. They’re generally just temporary ideas made up to deal with specific, local, short-term situations. They’re biased and unreasonable excuses and rationalizations. They come from e.g. starting to lose an argument and needing to find a way to disagree with some threatening idea. So you just carelessly throw out some opposing idea you just made up.

It can be tricky to figure out which ideas are unstable, ad hoc, half-baked junk you just made up.

When we have discussions, we always say some things which we didn’t think of before in that exact form. When you respond to someone else, who said something you haven’t heard before, then you have to think of a response which is partly new. Even if what they said is very similar to what you’ve heard before, and your response is similar to something you’ve said before, it will often be partly new.

So what’s the difference between a customized response that you just created and an unstable idea? The customized response is in line with your existing, stable ideas. It’s created by and implied by your stable ideas. It fits with your longer term thinking. It’s similar to things you’ve said and thought before.

It can be OK to share unstable ideas in discussions. They should be labelled as unstable in some way. Generally they’re suitable for less adversarial discussions, not for debates. The most common way to label them is brainstorming. You guys are cooperatively trying to come up with ideas about something, so you throw stuff out there which is lower quality and less thought through. You do brainstorming together. That’s fine as long as everyone knows what it is. You don’t want people to be misled that your brainstormed ideas are your actual ideas which you believe and take seriously.

Only stable ideas are suitable for debating. To debate ideas, you need to actually have some ideas you favor which you can consistently advocate for over a conversation without contradicting yourself or forgetting about the ideas you’re debating for.

It’s common in debates that people make arguments and then seem to forget about them or drop them later. When you recognize that people are saying unstable ideas which they don’t take seriously, you should generally avoid debating those ideas. They can create an unlimited number of unstable, unserious objections to what you’re saying. To make progress, the root cause has to be addressed.

The concept of unstable ideas is similar to ad hoc ideas and some other terms. But I think it’s a useful concept with some differences. Ad hoc means something is done for a particular purpose – e.g. having no answer to an argument, so you make something up. Calling an idea unstable emphasizes that there won’t be any continuity. You can’t debate it because the person who said it won’t be a consistent advocate of it. It has no consistent advocates. Ideas are often both ad hoc and unstable. Unstable contrasts with stable – ideas that you’ve believed over time, which have survived your error correction, and which you’ve integrated into your thinking.

Another way to get unstable ideas is to tell people your ideas. Some people will then say they agree, but it’s a new idea to them which they haven’t thought through yet. Often, it won’t really last. They’ll forget about it, never take it seriously, never think it through, etc. This can cause a lot of trouble in discussions. If they would say their objections, you could explain the idea more. Instead they concede/agree and say no further argument or explanation is needed, but they don’t follow up appropriately, so they’re being dishonest with you (and usually, more importantly, being dishonest with themselves).

There are some ideas you can accept quickly. Sometimes you can change your mind rapidly and be confident right away that you’re going to stick to this new idea over time rather than drop it by tomorrow. Other ideas require taking your time and thinking them over more before you decide whether you accept them. They need to spend some time as a “maybe”. Not giving them “maybe” treatment for a while is a way of sabotaging learning and denying them the attention needed to ever adopt them as stable ideas.

Agreeing to ideas is a common tactic for changing the topic in a discussion. It’s often done because of disagreement, not agreement. Often the disagreement involves some dishonesty, confusion or problem, so it’s problematic to talk about, so the person desires to change the topic and doesn’t want to say why, so they pretend to agree with stuff so the discussion can move on to building on that stuff. Building on stuff you don’t really understand or agree with is not going to work, so now the discussion is a waste of time and gives misleading feedback and confusing results. Their discussion partner may be like, “OK, so you agree with X and Y … I’m really struggling to understand why you don’t accept Z.” It’s because the person didn’t really agree about X. They gave their partner bad data, which the partner is now trying to understand.

Being suspicious of early concessions and fake agreement is socially problematic. People are often touchy about it. They tend to get upset if you suggest that, contrary to what they just said, they don’t really understand and agree yet, and we should continue with the topic further. They just said they’re done with it and that it’s settled, and if you don’t believe them that can be offensive. But improperly agreeing with things is widespread, so suspecting it could be going on and taking steps to check for it is not insulting or offensive – it’s reasonable. People should give arguments and analysis to convince you and themselves that their agreement is proper and complete, rather than just asserting it and expecting the assertion to be accepted.

This is similar to how bias is widespread, but if you see signs of bias in someone’s comments and raise the issue of potential bias, they often get offended. That unreasonable response is an indication you were probably right about the bias, though it’s certainly not definitive. You can have irrational attitudes to bias and be a touchy defensive person but not be biased about a particular topic. People like that are biased about some topics but not every topic. And they might communicate some red flags about bias even when they have an unbiased view.

