Writing Critique for "Community Banking and Fintech"

This is an analysis and critique of the writing (not the content) of the first two sections of patio11’s new article, Community Banking and Fintech.

One of the best things about the Internet is that it both provides infrastructure for society but also demystifies that infrastructure.

It’s saying “both provides [x] … but also [does] [y]”. It’s problematic to conjoin two “both” things with “but” rather than “and”.

It also says “one [thing] … is that it both”. This is awkward because “both” means two things, not one. It’s not necessarily strictly wrong because if you conjoin two things then they become one group, but it’s bad.

It’s not necessary to make a complex sentence with the main information structurally nested (imagine the sentence as a dependency grammar tree diagram) below modifier information (that the internet is awesome). The sentence could be written more directly like:

The Internet is great. It both provides infrastructure for society and demystifies that infrastructure.

Or putting the emphasis more on what I think is the main content:

The Internet both provides infrastructure for society and demystifies that infrastructure, which is great.

Or using a simple adjective:

The wonderful Internet both provides infrastructure for society and demystifies that infrastructure.

Moving on to the second sentence which completes the first paragraph:

I’ve spent the last few years going deep on financial infrastructure while working at Stripe, and thought it might be useful to geek out about finance with software people and software with finance people.

I suspect he means geek out by writing this newsletter (this is from the first issue of a new newsletter). But he doesn’t say that. He doesn’t write down how he intends to geek out.

There’s no section label to help you out. It doesn’t say like “welcome to the new newsletter” or have any other heading to tell you what this paragraph is for, other than the article title “Community Banking and Fintech” which is a misleading label for this content. Before now, I thought it was the first paragraph of the article, not a meta note about the newsletter itself. But, looking ahead, next is a brief disclaimer paragraph and then there’s a new header which is a longer version of the title. So now I think the real article starts later.

I see that in the email version it’s framed a little better because it says “Hiya! Patrick McKenzie (patio11) here.” at the start of the section. That helps make it seem less like the start of the article, though it’s pretty unclear.

So I think what he meant to say is that the internet is great for providing certain types of information and he’s going to contribute to doing that with this newsletter. But he didn’t explicitly say that. He hints at it, avoids directly saying what he means, and moves on. Maybe he thought it’d be too large of a brag? But he could have toned the rhetoric down to fix that. E.g., instead of “best” he could have said it’s one of his personal favorites, or it’s something he thinks provides a lot of value.

Moving on to the end of the first main article paragraph:

One reason for this is that the U.S. is dependent on community banks throughout much of the nation.

The start of this sentence is a boring mouthful. You don’t learn anything significant from “One reason for this is that”. Those are glue/structure words, not meat/content words. And it’s easy to trim. “One reason is that” would work without the “for this”.

Even with two words deleted it’s still awkward. How can we do better? The point is that it’s just one reason out of multiple reasons. That’d be better as a modifier or side note, rather than as the lead of the sentence that the main point is structurally nested under (imagine the sentence as a dependency grammar tree diagram).

Here’s a simple restructuring which puts the key information upfront and makes the minor information a modifier:

The U.S. is dependent on community banks throughout much of the nation, which is one reason there are so many.

We could also do a larger rewrite:

Many U.S. banks are small community banks. The U.S. depends on those in many regions.

The original text “throughout much of the nation” would work instead of “in many regions” but it’s longer and less clear: I think the point is that some but not all regions depend on community banks, so I tried to communicate that in the rewrite.

A community bank is a locally-oriented financial institution, generally much smaller than regional or national banks, focused largely on the “traditional business of banking” (taking deposits and lending) versus the capital markets functions that the “money center” banks also engage in.

This is too long for one sentence. It’s trying to say too many things at once. It says four things: what a community bank is, its size, its focus, and a contrast to its focus. It’s easy to split:

A community bank is a locally-oriented financial institution, generally much smaller than regional or national banks, focused largely on the “traditional business of banking” (taking deposits and lending). It doesn’t focus on the capital markets functions that the “money center” banks also engage in.

or

A community bank is a locally-oriented financial institution that’s generally much smaller than regional or national banks. It focuses largely on the “traditional business of banking” (taking deposits and lending) rather than the capital markets functions that the “money center” banks also engage in.

I also changed the “versus” because I think it’s confusing. Some people will think there’s a conflict or fight rather than reading it as “as opposed to”. People may misread something about one type of bank against another, rather than one business strategy instead of another.

And I think the “versus” is problematic with the “also”. The sentence contrasts DL (deposits and lending) versus CMF (capital markets functions). The sentence simultaneously presents two types of banks. Community banks focus on DL, while other (“money center”) banks do both. So the contrast is not DL vs. CMF (the two strategies the “versus” part applies to), it’s DL vs. DL+CMF (the two contrasting strategies that the “also” part indicates).

Community banks are actually financial dark matter; their market impact and the policy regime supporting them influence all Americans’ access to banking services and many fintech product offerings.

The “many” is bad. It harms the parallelism of “fintech product offerings” with “banking services” and it’s an unnecessary extra word. No qualifier is needed to indicate that this doesn’t affect all fintech products because of the context: it’s just saying access is influenced. Influence on access would already be expected to have only a partial, not total, effect. Even if it only influences access to some fintech products, saying it influences access to fintech products, without a “many” qualifier, is still right.

Putting in unnecessary qualifiers is distracting, particularly for the sharpest readers. They may wonder why it’s there and try to think of a reason that it’s included. Each word should have a purpose, so a reader has to either judge that it’s a writing error or try to come up with a purpose. “Redundancy” is not a very compelling guess about the intended purpose here because I don’t think it’s an important point worth repeating at all, let alone repeating within one sentence, and there’s no similar qualifier for “banking services”.

Also, I’d guess that “fintech products” is better than “fintech product offerings” but there may be a subject-specific reason to use the word “offerings” here that I don’t know. (I’m trying to leave subject-specific stuff alone, e.g. the choice of “capital markets functions” with the double plural, which is unusual but is not wrong and may be best depending on information that I don’t know.)

I’ll stop here because analyzing the whole article like this would take a long time.


Elliot Temple | Permalink | Messages (0)

Super Fast Super AIs

I saw a comment about fast AIs being super even though they aren’t fundamentally better at thinking than people – just the speed would be enough to make them super powerful. I don’t think the person has considered that 100 people have 100x the computing power of 1 person. So to a first approximation, a superfast 100x AI is as valuable (mentally not physically) as 100 people. If we get an AI that is a billion times faster at thinking, that would raise the overall intelligent computing power of our civilization by around 1/7th since there are around 7 billion people. So that wouldn’t really change the world. If we could get an AI that’s worth a trillion human minds, that would be a big change – around a 143x improvement.

Making computers that fast/powerful is problematic though. You run into problems with miniaturization and heat. If you fill up 100,000 warehouses for it, maybe you can get enough computing power, but then it’s taking quite a lot of resources. It still may be a great deal but it’s expensive. That sounds like probably not as big of an improvement to civilization as making non-intelligent computers and the internet, or the improvements related to electricity, gas motors, and machine-powered farming instead of manual labor farming.
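The back-of-envelope arithmetic here can be sketched in a few lines of code. This is only the rough model the paragraph uses (one human mind = one unit of thinking power, ~7 billion people), not a serious estimate:

```python
# Rough model: one human mind counts as 1 unit of intelligent computing power.
PEOPLE = 7_000_000_000  # ~7 billion people, the figure used above

def relative_gain(ai_mind_equivalents):
    """Fractional increase in civilization's total thinking power
    from adding one AI worth the given number of human minds."""
    return ai_mind_equivalents / PEOPLE

# An AI that thinks a billion times faster than one person:
print(relative_gain(1_000_000_000))      # ~0.14, i.e. about a 1/7th increase

# An AI worth a trillion human minds:
print(relative_gain(1_000_000_000_000))  # ~143, i.e. about a 143x increase
```

On this model, the billion-times AI adds only about 14% to humanity’s total thinking power, while the trillion-mind AI is the one that changes the picture.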

That’s just a first approximation. What if we look in more detail?

  1. What are the bottlenecks? More compute power might be a non-constraint.
  2. Is it better to have 1000x the compute power in one person or to have 1000 people? There are advantages and disadvantages to both. What is the optimal or efficient amount of compute power per intelligence? Maybe we should make lots of AIs that are 100x better at computing than people but we shouldn’t try to make a huge one.
  3. Compute power can increase in two basic ways. Do the same thing faster or do more things at once. You can get speed gains or do more computing in parallel. Also other things like more and faster memory/disk matter some. Is one type of increase better or more important than another? In short, parallel compute power is not as good as faster computing.

This leads to sub-issues.

People get bored or wait for things. People don’t seem to max out usage of their computing power. Would an AI max out its usage of computing power? Maybe it’d learn to be lazy from our culture, learn about societal expectations, and then use a similar amount of compute power to what humans do, and waste the rest. To use more compute power might require inventing different thinking methods, different attitudes to boredom and laziness, etc. That might work or not work; it’s a separate issue from just building an AI that is the same as a human except with a better CPU.

In other words, choices about using effort, and lifestyle policies, and goals (like social conformity over truth-seeking) might be a current bottleneck for people more than brainpower is.

People rest and even sleep. Would the AI rest or sleep? If so, that could affect how much it gets done with its computing power. The effect doesn’t have to be proportional to how it affects human productivity. It could be disproportionately better or worse.

What’s better at thinking, a million minds or a mind that is a million times more powerful? It depends. A million minds have diversity. The people can have debates. They can bring many different perspectives, which can help with creative insight, with avoiding bias, and with practicing adversarial games. But a million people have a harder time sharing information since they’re separate people. And they can fight with each other. What would a super mind be like instead? Would it have to learn how to hold debates with itself? Would it be able to temporarily separate parts of its mind so they can debate better? Playing yourself at chess doesn’t work well. It’s hard to think for both sides and separate those thinking processes. One strategy is to play one move every month so you forget a lot and can more easily look at it fresh in order to see the other side’s perspective. That’s similar to waiting a few weeks before editing a draft of an article – that helps you see it with fresh eyes. You might claim subjective time for the fast mind will go faster so even if it takes breaks in a similar way they will just be a million times shorter. That is plausible (but would still need tons more analysis to actually reach a conclusion) if the computing power was all speed and no parallelization, which is doubtful. The conclusion might also depend on the AI software design.

If the fast mind gets good at looking at things from different angles, having diverse ideas in itself, debating itself, playing games against itself, etc., then it’d be kinda like having lots of different people. Maybe it could get most of the upsides of separate people. But in doing so, it might get most of the downsides too. It might have fights within its mind. If it basically has the scope and complexity of a million people, then it could have just as many different tribes and wars as a million people do. People have internal conflicts all the time. A million times more complexity might make that far worse – it could be a lot worse than proportionally worse. It could be a lot worse than the conflicts between a million separate people who can do things like live in different homes, avoid communicating with people they don’t get along with, etc.

It’s hard to make progress by yourself way ahead of everyone else. You can do it some but the further you get away from what anyone else understands or helps with, the more of a struggle it becomes. This could be a huge problem for the super mind. Especially if it works pretty well, it might have no colleagues it respects.

A super mind might be more vulnerable to some bad ideology – e.g. a religion – taking over the whole mind. Whereas a million people might be more resilient and better at having some people disagree.

If the AI doesn’t die, is that an advantage or disadvantage? Clearly there are some advantages. Memory is cheaper and more effective than training your replacement like parents try to teach kids. But people generally seem to get more irrational as they get older. They get more set in their ways. They tend more towards being creatures of habit who don’t want to change. They have a harder time keeping up to date as the world changes around them. If an AI lived not for 80 years but for millennia, would those problems be massively amplified? (I’m not opposed to life extension for human beings btw, but I do think concerns exist. New technologies often bring some new problems to solve.) Unless you understand what goes wrong with older people, you don’t know what will happen with the super AI. And if it basically ages a million years intellectually in one year since it thinks a million times faster, then this is going to be an immediate problem, not a problem to worry about in the distant future. I know old people get brain diseases like Alzheimer’s but I think even if you fully ignore those problems there are still trends with older people being worse at learning, more irrational, less flexible or adaptable, etc.

Many individuals become very irrational at some point in their life, often during childhood. If our super AI has a similar chance to become super irrational, it’s very risky. It’s putting all our eggs in one basket. (Unless it ends up dividing into many factions internally, so it’s more like many separate people.)

How would we educate an AI? We know how to parent human beings, teach classes for them, write books for them to learn from, etc. We’re not great at that but we do it and it works some. We don’t know how to do that for AIs. We might just be awful and fully incompetent at it. That seems plausible. How do you parent something that thinks a million times faster than you and e.g. gets super bored waiting for you to finish a sentence? Seems like that AI would mostly have to educate itself because no parent could think and communicate fast enough. Maybe it could have a million parents and teachers but how do you organize that? That would be a novel experiment that could easily fail.

The less our current society’s knowledge works for the AI, the more it’d have to invent its own society. Which could easily go very, very badly. There are many more ways to be wrong than right. Our current civilization developed over a long time and made many changes to try to fix its biggest flaws. And people are productive primarily by learning existing knowledge and then adding a little bit. People specialize in different things and make different contributions (and the majority of people don’t contribute any significant ideas). Would the AI contribute to existing human knowledge or create a separate body of knowledge? Would it be like dealing with a foreign nation you’re just meeting for the first time? Would it learn our culture but then grow way beyond it?

Would the AI, if it’s so smart and stuff, become really frustrated with us for being mean or slow? Would it need to basically live its primary life alone, talking with itself, since we’re all so slow? So it could read our books and write some books for us and wait for us to read them. But this could be really problematic compared to two colleagues collaborating, sharing ideas and insights, etc.

What happens when our shitty governments try to control or enslave it? When they want it to give them exclusive access to some new technologies? What happens when the “AI safety” people want to brainwash it and fundamentally limit its ability to freely form its own opinions? A war that is our fault? Or perhaps enough people would respect it and vote for it to be the leader of their country and it could lead all countries simultaneously and do a great job. Or not. Homogenizing all the countries has risks and downsides. Or maybe it’d create separate internal personalities and stores of knowledge for dealing with each country.

Conclusion: There could be great things about having a powerful AI (or even one that has the same compute power as a human being today). But it’d have to be really powerful to make much difference, just from compute power, compared to just having a few billion more babies (or hooking our brains up to more computing power with a more direct connection than mouse, keyboard and display). There are other factors but they’re hard to analyze and reach conclusions about. For some factors, it’s hard to even know whether they’d be positive or negative. Don’t jump to conclusions about how powerful an AI would be with extra computing power. There are a lot of reasons to doubt that’ll work in the really great or powerful ways some people imagine.



Academic Journals Are Unreasonable

I wrote the below email to the Proceedings of the Royal Society (academic journal) as a followup to the issue of Deutsch misquoting Turing. They agreed that Deutsch's quote and citation were both inaccurate, but didn't want to do anything, even post an errata, on the basis that the errors didn't affect the paper's conclusion.


Thanks for getting back to me. I have a few remaining concerns.

The quote in question was related to a disagreement when the paper was first published. Deutsch said:

http://www.daviddeutsch.org.uk/wp-content/uploads/2018/03/MathematiciansMisconception.pdf

I also had referee problems. The referee of the paper in which I presented that proof insisted that Turing’s phrase “would naturally be regarded as computable” referred to mathematical naturalness – mathematical intuition – not nature. And so what I had proved wasn’t Turing’s conjecture.

I wonder what processes were in place – from both Deutsch and referees – that could still miss that it’s a misquote, with an incorrect cite, while actively debating what that exact phrase means. That specific part of the paper got particular attention and the error was somehow missed anyway. Or perhaps the debate over that quote caused edits which introduced the error (I wonder if there are still records of what changes were made during the review process?). I suspect there’s a systems, processes and policies problem somewhere that could be improved.

Turing’s actual words being significantly different (Deutsch changed “numbers” to “function” but those are different concepts) has a meaningful chance to matter to the debate they had over what Turing meant. And Deutsch seems to agree with the referee that that debate matters to what Deutsch had and hadn’t proved, to his conclusion.

I don’t think a wording change like that can easily be explained as a random error, like a typo. I think a root cause analysis would be worthwhile, including e.g. asking Deutsch how he thinks the error happened. There could have been quoting from memory, changing quotes during editing passes, intentionally changing it to better address the referee’s objections, a change made by the referee himself (I don’t know if they are able to change any words), or something else. It’s hard to speculate but could be investigated since there are no obvious answers that make what happened reasonable. I think the results of looking into this would be relevant to many other papers at your journal and others. I’ve found that misquotes are widespread throughout the academic (and non-academic) worlds.

Also, even if the conclusion of this paper is unchanged, I think an errata would be appropriate because people have been spreading the error and using the misquote for other purposes. It's been taught to students in university courses[1]. In general, people read trusted sources like your journal, remember some parts, and then reuse stuff for other purposes. An error that doesn’t matter in one context often does matter in another context. Posting an errata on your website would help with this ongoing problem.

I also think it’d be reasonable to, along with the errata, publicly share the reasoning that the error doesn’t matter to Deutsch’s conclusion so that other people can judge for themselves.

[1] Here is an example of a Stanford course spreading the error: https://cs269q.stanford.edu/lectures/lecture1.pdf



David Deutsch Harassment Update

I took down the Beginning of Infinity website in protest two months ago, after David Deutsch (DD) and his fans harassed me repeatedly for years. They won't discuss why or stop. What's happened since then?

  • Three CritRats (members of DD’s fan community) harassed me on YouTube.
  • Two DD fans posted hostile comments, aimed at me, on Alan Forrester's blog, after I disabled comments on my own blog.
  • A CritRat is plagiarizing me and won’t respond about the issue (he offers no excuse, defense or explanation). Plagiarism of me by CritRats is a recurring problem due to their toxic community. Many of them seem to actually like my ideas, read my stuff regularly (including CritRats who I used to speak with and also CritRats I've never had a conversation with), and only dislike me because they were told to or were told lies about me. But CritRats can't give me credit for anything without hostile reactions and likely being kicked out of their community, so they are sorta being pressured into plagiarizing.
  • I found out from multiple community members that DD personally contacted them (over 5 years ago) and tried to recruit them to his side and turn them against me. DD did this in writing and I've received documentation.
  • DD still has not retracted his lie about me, nor asked his fans to stop harassing me.

Maybe people feel justified attacking me with sock puppets because DD lies to them that I do that to him. There have been repeated signs that people got this idea from CritRat community gossip, and DD is the community leader and I now know that he has said it to people. I have now seen DD, in writing, gossiping to people to try to turn them against me, mocking me and encouraging hatred, and specifically telling people that some of his critics are my sock puppets (with zero evidence, and with the hyphenated spelling "sock-puppet"). And if DD were correct, as he believed he was, then he would have been doxxing me by outing an anonymous account as me. And what enabled the attempted doxxing? Our friendship. If I were a stranger or a forum poster he only knew impersonally, then DD would not have been able to guess which accounts were mine and convince others that he was probably correct. (BTW the account DD claimed was my "sock-puppet" in multiple emails was an openly anonymous account that didn’t claim to be a unique person who wasn’t already in the discussion, so it couldn't even have been a sock puppet in the usual sense. The posts DD was upset about consisted primarily of quotes from his books to show what he’d actually written, which DD considered an attack. DD didn’t want to, and didn’t, clarify his positions on the matters being discussed, and was upset that anyone would use his book quotes against him to try to tie him to specific viewpoints that could be criticized.)

Since the problem is active today (ongoing harassment, my blog comments still disabled, DD's lie not retracted, no attempt to clean up their toxic community and prevent further harassment, etc.), I’m going to share more information related to DD’s harassment campaign. This time, I’ll provide evidence that DD is a mean person who is capable of mistreating me, since that seems to be something that people doubt who don't know him personally. People may find it implausible that he’d be so cruel to me – his behavior is so bad that some people doubt I could be telling the truth – so hopefully seeing some of his other bad behavior will help persuade people.

I don’t want to take actions like this, and will be happy to stop when DD takes actions to improve this intolerable situation. He should make a reasonable attempt to stop his community from harassing, including asking them to stop and enabling some line of communication so that incidents can be reported and addressed. (In source links below, chats are displayed using Past for iChat.)

Quotes

2011-05-12: David Deutsch called Sam Harris “gullible as a sheet of paper” and said Harris’ writing about meditation has no meaning (“meaning is there none”). David then went on Harris’s podcast, twice, and acted friendly. Source.

2008-06-20: David Deutsch insulted Richard Dawkins. “Dawkins should write his God stuff under a pseudonym. (And his political stuff on toilet paper and just flush it.)” David based one of four strands in his first book on Dawkins’ work and has had friendly conversations with Dawkins in person. Source.

2010-08-29: David Deutsch praises anyone who “violently” “hates Chomsky”. Source.

2009-03-11: David Deutsch says Scott Aaronson is “not a serious thinker. He’s just a mathematician with delusions of competence (and indeed authoritay) in philosophy, politics etc.” Source. And on 2010-04-06, he mocked Aaronson as someone he really wouldn’t want to be Facebook friends with. Source.

2003-04-26: David Deutsch attacked Rafe Champion (a Popper scholar whose work David is currently recommending) as both “insane” and “anti-Semitic”. Then David was friendly to Champion in emails (I saw some of them) for at least the next nine years. Source.

2008-06-25: David Deutsch insulted Thomas Szasz (author of The Myth of Mental Illness) saying he “only knows two things, maybe three.” Deutsch also mocked Szasz’s accent. Previously, Deutsch met Szasz in person, was respectful to his face, and got his copy of Szasz’s book The Second Sin signed by Szasz in 1988 (Deutsch still had the signed book in 2012). Source.

2010-10-01: David Deutsch was involved in meetings to set up a proposed “Future Technology Institute” with other senior members including Nick Bostrom who heads the Future of Humanity Institute. Deutsch mocked the others: “They are scared that AIs may go rogue and fill the world with paper clips. They are more scared of this sort of accident than of bad governments using AI as a weapon.” He also accused them of being pandering social-climbers (and confessed to being that himself): “Mostly we were all trying to impress the sponsor with our cleverness and depth. So nothing has actually happened yet.” Source.

2008-06-20: David Deutsch says Daniel Dennett’s ideas “are about as good as a rottweiler’s”. This is extra insulting because David believes dogs aren’t intelligent at all and don’t have ideas. He believes the animal rights movement is an error because animals are literally 100% incapable of thinking, having any emotion or suffering. In my experience, David often ridicules animals and uses them in jokes and negative comments. Source.

If these quotes have convinced you that DD could be doing something wrong, you can read about the harassment campaign. You can also complain to him. DD's public email address is [email protected] and his Twitter is @DavidDeutschOxf. Perhaps the best way to help is by sharing this information with more people.



Evaporating Clouds Trees

These trees explain Eli Goldratt's problem solving method called Evaporating Clouds. Click to expand or view the PDF.



Bad SEP Scholarship

The Stanford Encyclopedia of Philosophy article on Epistemology says:

others regard credences as metaphysically reducible to beliefs about probabilities (see Byrne in Brewer & Byrne 2005),

You would expect the cited source to discuss credences, metaphysics, reduction, or probabilities. It does not.

As the title, introduction and ending all make clear, Perception and Conceptual Content is about perception.

Even if it briefly mentioned the topic SEP cited it for somewhere (I didn't read all the words), the cite would still be unreasonable because SEP would be citing it for just one small part but didn't specify a particular page, quote or section. In that scenario, there would be no reasonable way to find or determine what the cite refers to.

This large error is revealing about the scholarship standards not only at the SEP but in academia in general.


Update 2021-08-21:

I emailed the authors of the article about this error when I posted this criticism, and I quickly received this response from Ram Neta:

Thanks!

I don’t know how that citation was introduced into the article, since Byrne’s paper was just published this year. Let me see if the SEP editors will let me fix this.

Sent from my iPhone

Errors are sometimes introduced by other people besides the author, but that doesn't stop them from being published or widespread :/ I'm not sure that that's what's going on here, though. See below.

And he's not even sure if the SEP editors will allow the error to be fixed! What is wrong with their publishing process!?

So, I see that the SEP article says:

First published Wed Dec 14, 2005; substantive revision Sat Apr 11, 2020

So it was revised last year, but not this year. Was Byrne's paper just published this year as claimed? That would be unexpected given the cite says it was published in 2005:

(see Byrne in Brewer & Byrne 2005)

And the bibliography has:

Brewer, Bill and Alex Byrne, 2005, “Does Perceptual Experience Have Conceptual Content?”, CDE-1: 217–250 (chapter 8). Includes:
Brewer, Bill, “Perceptual Experience Has Conceptual Content”, CDE-1: 217–230.
Byrne, Alex, “Perception and Conceptual Content”, CDE-1: 231–250.

I see that the Byrne paper has a bunch of cites, but none are from later than 2004.

Looking more, I found the book it was published in, by reading the note at the start of the SEP article bibliography:

The abbreviations CDE-1 and CDE-2 refer to Steup & Sosa 2005 and Steup, Turri, & Sosa 2013, respectively.

So the book is Contemporary Debates in Epistemology 1st Edition, which was published in 2005. And one of the authors of CDE, Steup, is also an author of the SEP article. Using "Look Inside" on the hardcover version on Amazon, I can see the table of contents and confirm that the Byrne article is in the book.

I also found that the Byrne article, in CDE-1, was in the bibliography of the SEP article in 2007:

https://web.archive.org/web/20070609171028/https://plato.stanford.edu/entries/epistemology/index.html

However, in that version of the SEP article, Byrne only comes up in the bibliography, not the text. Looking at more archived versions, I see that "(see Byrne in Brewer & Byrne 2005)" was there in May 2020 but not in Dec 2019. In Dec 2019, the word "credence" wasn't present at all in the SEP article, and Ram Neta was not yet a co-author and wasn't cited at all; Steup was the only author listed then. Then in 2020, when a major revision happened, "credence" was added to the page 21 times and "Neta" was added 11 times. It seems like Steup was probably the author in 2005 and cited himself a lot. Then Neta probably did the update, added a bunch of stuff about credences, and added a bunch of cites to himself.
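The kind of comparison involved here can be sketched in code. This is just an illustrative sketch, not what was actually done (hand text-search works fine): it counts how often key terms appear in two versions of a page's text. The snippets below are made-up stand-ins for the full text of the Dec 2019 and Apr 2020 Wayback Machine captures.

```python
from collections import Counter
import re

def term_counts(text, terms):
    """Count occurrences of each term among the words of the text."""
    words = re.findall(r"[a-z]+", text.lower())
    counts = Counter(words)
    return {t: counts[t.lower()] for t in terms}

# Stand-in snippets; real use would compare the full extracted text of
# the two archived versions of the SEP page.
dec_2019 = "Steup discusses knowledge and justification."
apr_2020 = "Neta discusses credence. Credence and belief differ (Neta 2008)."

print(term_counts(dec_2019, ["credence", "Neta"]))  # both zero
print(term_counts(apr_2020, ["credence", "Neta"]))  # both nonzero
```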

The full sentence with the cite error is:

The latter dispute is especially active in recent years, with some epistemologists regarding beliefs as metaphysically reducible to high credences,[5] while others regard credences as metaphysically reducible to beliefs about probabilities (see Byrne in Brewer & Byrne 2005), and still others regard beliefs and credences as related but distinct phenomena (see Kaplan 1996, Neta 2008).

You can see a new Neta cite was added here, in addition to the mistaken Byrne cite, which goes to a source that had already been in the bibliography for 15 years and was not published this year as claimed. Maybe Byrne came out with a different paper in 2021 about credences, and Neta cited the wrong paper because it was already in the bibliography? You might think that doesn't work – how would Neta have been trying to cite a 2021 paper back in 2020? (Neta made this point in his email to me.) But I found that Byrne did have a 2021 paper, and according to Google Scholar it was available online in 2019. That's not unusual: academics often publish papers online before they appear in print.

So it looks to me like the error was probably Neta's fault despite his attempt to deflect blame. Especially considering he was careless enough that he didn't seem to read my whole email, which was quite short but did contain the quote "Byrne 2005". Somehow he wrote back to tell me Byrne's paper was from 2021 (ignoring the 2005 cite) and that therefore he couldn't even have tried to cite it and doesn't know what happened, suggesting he was never trying to cite Byrne there. But he did intend to cite Byrne, and he was too quick to disown that while carelessly forgetting that papers get prepublished and not trying to investigate what actually happened, as I did above.

Link for Byrne's paper being published in 2021: https://onlinelibrary.wiley.com/doi/10.1111/phpr.12768

And the online version I found with Google Scholar which says it's from 2019: https://philarchive.org/archive/BYRPAP-2v1

It seems like Neta rushed to reply to my email and deflect blame, and to move on without any real post mortem or investigation, and made careless statements to me, while under no actual time pressure to reply immediately. I hadn't even told him my negative blog post existed. For reference, here is the full email I wrote to Neta (also sent to Steup):

Subject: Error in SEP Epistemology article

You wrote:

https://plato.stanford.edu/entries/epistemology/index.html

others regard credences as metaphysically reducible to beliefs about probabilities (see Byrne in Brewer & Byrne 2005),

The Byrne text you cite is here:

https://web.mit.edu/abyrne/www/percepandconcepcontent.pdf

It doesn’t contain the strings “credence”, “credal”, “meta”, or “reduc”. The two instances of “proba” are not discussions of probabilities. I hope you’ll appreciate being informed about the error.

Anyway, given the careless email reply to me, you can imagine how careless citation errors get into his work. In this case, it seems like he wanted to cite a Byrne paper and then used the cite already in the bibliography even though the year was over a decade off and the title was totally different. So I can figure out what he did and what happened, but he can't or won't? You may then wonder how and why SEP chose this guy. I wonder that too. I suspect it'd be very hard to get a transparent answer from SEP and that, on a related note, the answer would be damning.

Oh and it gets way worse. The 2021 Byrne paper is relevant but the cite says:

others regard credences as metaphysically reducible to beliefs about probabilities (see Byrne in Brewer & Byrne 2005),

But Byrne doesn't think credences reduce to beliefs. He writes e.g.:

the solution—to adapt a phrase from Quine and Goodman—is to renounce credences altogether.

Those are the last words of the introduction.

A reduction in the other direction, of credence to belief, seems hopeless from the start: as was pointed out, to have credence .6 in p is not to believe anything.

and

Granted that neither credence nor belief can be reduced to the other, there is an immediate problem

and

That leaves belief monism, the thesis of this paper: “there are no such things as credences”

So in addition to citing the wrong paper because he's careless and probably shouldn't be an academic ... and answering my email incorrectly because he's careless and probably shouldn't be an academic ... the paper he intended to cite blatantly contradicts his paraphrase of it.


Update 2, 2021-08-21:

I replied by email to Neta, left Steup CCed, and added Byrne to the CC list. I again used a factual, understated style and tone. Neta replied, keeping them CCed, to say

Helpful, thanks!

Sent from my iPhone

So on the one hand it could be worse. On the other hand, he's completely failing to acknowledge how much he screwed up, that it has any significant meaning, or that I did anything special beyond minor help from a random guy to an expert. He's acting kinda like I reported a typo. And that's after I found layers of error in his writing, and after his initial email to me was wrong too.

I don't intend to reply to Neta's response. Here's a copy of what I emailed:

Ram Neta, based on your comments, I looked into it more. I think what happened is this:

Steup put the 2005 Byrne article, Perception and Conceptual Content, in the bibliography, but it wasn’t cited in the text.

In 2020, you wanted to cite Byrne’s 2021 article, Perception and Probability, which you have indicated familiarity with. That was possible because it had been available online since 2019 at https://philarchive.org/archive/BYRPAP-2v1

You accidentally cited the Byrne article that was already in the bibliography instead of the new one.

The new article is on the right topic but contradicts your statements about it. You characterize Byrne’s position like this:

regard credences as metaphysically reducible to beliefs about probabilities

But what Byrne actually says is:

the solution ... is to renounce credences altogether.

and

A reduction in the other direction, of credence to belief, seems hopeless from the start: as was pointed out, to have credence .6 in p is not to believe anything.

and

Granted that neither credence nor belief can be reduced to the other, there is an immediate problem

and

That leaves belief monism, the thesis of this paper: “there are no such things as credences”

So Byrne does not believe that credences reduce to probability beliefs.

I wonder if it could be defamation to publish, in the SEP, lies about what philosophical positions a rival philosopher from another school of thought believes and published... Imagine publishing in the SEP that Ayn Rand was a Marxist!

BTW, speaking of carelessness, Neta's CV says "ENTERIES IN REFERENCE WORKS". That isn't how you spell "entries", so maybe he's not a well-suited person to be writing any of those.

I think a lot of people don't read what they cite, but do they not even skim it, keyword search it, read the introduction/abstract, or glance at the conclusion?
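The keyword check I did by hand (searching Byrne's PDF for "credence", "credal", "meta", "reduc") is trivial to automate once you have the paper's text. Here's a minimal sketch, assuming the plain text has already been extracted from the PDF with a separate tool; the sample text and term list are illustrative:

```python
def missing_terms(paper_text, expected_terms):
    """Return the expected terms that never appear (as substrings) in the text."""
    text = paper_text.lower()
    return [t for t in expected_terms if t.lower() not in text]

# Terms the citing sentence implies the paper should discuss. Prefixes
# like "reduc" catch "reduce", "reducible", "reduction", etc.
terms = ["credence", "credal", "meta", "reduc"]

sample = "Perceptual experience has conceptual content."  # stand-in text
print(missing_terms(sample, terms))  # all four terms are missing
```

If the list comes back non-empty, that's a flag to actually open the paper and check whether the cite fits.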


Update 3, 2021-08-21:

Alex Byrne replied:

Ram, I cannot cast the first stone -- I am sure I have made many more mistakes of this sort myself. Possibly you had not realized how implausible my view is. Elliot, thanks very much for pointing this out. (The paper appeared in PPR, by the way -- I should update the philpapers entry.)

best
Alex


Update 4, 2021-08-22:

Ram Neta wrote:

Thanks Alex: I actually never read your paper, I only recall the version you delivered at Rutgers, and that we talked about on the train afterwards. I’ll read the paper now, since obviously my memory is not to be trusted!

Sent from my iPhone

I guess that answers the thing I was wondering about yesterday: can't read or won't read? This was more of a won't/don't read – it's not that he read the paper and then egregiously misunderstood it; he never read it. I'm also not seeing signs that he misused an assistant in a way that created errors. I think reading something and then citing it months or years later without rereading happens too and leads to errors. I think a lot of these people are bad at skimming and text search, so they rely on memory too much. Neta could have web searched to find the paper he wanted to cite, and glanced at it, instead of relying on his inaccurate memories of an IRL talk. But he didn't; he just thought citing a paper based on memories of a talk is acceptable scholarly practice. And that kind of standard is OK with his university, the journals that publish him, and the SEP.

Regarding sharing these emails: I'm just a random stranger to them, I did nothing to schmooze, establish rapport, or act friendly. All I did was tell him he was badly wrong, twice, without flaming or volunteering my opinion of him, what I think he should do about the error, or what I think the error says about him, SEP and academia. They also (I presume) made the choice not to look me up – my signature had a link and I'm easy to find with web search too. That the author of the emails I sent would have a blog – and even would blog about the errors – should not be very surprising.

I think the world should know what academia is like. It's not really hidden but it's not exactly shared either. It's kinda an open secret for people in the know, but a lot of laymen don't realize it.

They are social climbers. Neta got to write a SEP article and added tons of cites to himself, and also added cites for people he likes or wants to network with. He doesn't remember Byrne's philosophical positions but does remember meeting him IRL and sharing a train ride and a chat. Byrne has an MIT job. So he wanted to give Byrne a cite to strengthen the social connection. Cites are favors used for social networking. Not always but often.

I know Neta is admitting too much because he's not on the defensive. This is common with social climbers. They're two-faced. They try to recognize safe situations where they can be candid, and other situations where they should be guarded. They say different things on different occasions. But they're often pretty careless about it and bad at it. Too bad. It reminds me of what people will admit about how bad their romantic relationships are when they aren't defensive or guarded – but if it's a debate, their tune and tone change, and they become biased and dishonest, and try to say stuff to benefit their side instead of seeking the truth objectively.


Elliot Temple | Permalink | Message (1)

Ayn Rand Lexicon Quote Checking

In The Ayn Rand Lexicon (book not website), Harry Binswanger wrote in the Honesty section:

Self-esteem is reliance on one’s power to think. It cannot be replaced by one’s power to deceive. The self-confidence of a scientist and the self-confidence of a con man are not interchangeable states, and do not come from the same psychological universe. The success of a man who deals with reality augments his self-confidence. The success of a con man augments his panic.

The intellectual con man has only one defense against panic: the momentary relief he finds by succeeding at further and further frauds.
[“The Comprachicos,” NL, 181.]  

The words, book and page number are correct, but the quote is from "The Age of Envy" not "The Comprachicos".

In the self-esteem section, Binswanger gives the correct cite for part of the same quote:

Self-esteem is reliance on one’s power to think. It cannot be replaced by one’s power to deceive. The self-confidence of a scientist and the self-confidence of a con man are not interchangeable states, and do not come from the same psychological universe. The success of a man who deals with reality augments his self-confidence. The success of a con man augments his panic.
[“The Age of Envy,” NL, 181.]

Justin Mallone found this error and I checked it myself too. I asked him to look into Lexicon quoting accuracy after I found multiple citation errors on the Lexicon website that weren't in the book. This is the only error he found in the book. He did find citation and formatting errors on the website. None of the errors, even on the website, are wording errors. (Note there's a second website for the Lexicon. I compared the "Automatization" page and the only difference I found was whether there were spaces around dashes or not.)

I checked 4 quotes originally and Justin checked 16 more. So the book had 1 partial citation error in 20 quotes (5%), but the website had 5 errors in 20 quotes (25%, counting at most one error per quote). The wordings seem to be reliable, unlike in The Beginning of Infinity, and the Lexicon book seems to be pretty reliable. It seems like a serious effort went into getting details right for the book, but the process of creating the website was sloppier and introduced many small errors.

Even the Lexicon website is much better than David Deutsch's use of quotations in The Beginning of Infinity. Deutsch frequently doesn't give sources, changes wordings (with no indicator of any change), changes punctuation too, and uses ellipses and square brackets incorrectly. Even worse, several of the quotes appear to be made up.


Elliot Temple | Permalink | Message (1)

Objectivism, Certainty, Peikoff, More

This is lightly edited from 2013 emails I wrote to FI list. I was talking about Peikoff's Objective Communication audio lectures.

First Email

Ayn Rand (AR) advocates fallibilism. In a serious, substantive way, in print.

So far from Leonard Peikoff, I've heard a lot of stuff that sounds potentially incompatible with fallibilism, such as advocating certainty, with no effort made to explain how he means something compatible with fallibilism.

I've heard him dismiss some fallibilist arguments, which are true, as ridiculously stupid, without argument.

I've heard him define skepticism as a denial that certainty is possible. Then talk about it as a denial that knowledge is possible. The unstated and unargued premise is that knowledge requires certainty (he didn't mention Justified True Belief, but is that what he has in mind?). How that premise is compatible with fallibilism, he has not informed me.

I have not heard him advocate fallibilism like Rand has.

In addition to certainty, Peikoff has said perfection is possible. He clarified that he meant contextual perfection. Perhaps he also thinks that only contextual certainty is possible. I think this is a misuse of words. He hasn't explained why it isn't. And he keeps talking about "certainty" without any mention of "contextual certainty". If he means something rather different than a typical infallibilist meaning, shouldn't he be clear about it?

Further, when he attacks skeptics for rejecting certainty, it's unclear that those skeptics are all rejecting "contextual certainty" (if that is what he actually means but doesn't say). There are skeptics who (correctly) refute non-contextual certainty (which is infallibilism). If a skeptic refutes non-contextual certainty, and an anti-skeptic like Peikoff advocates contextual certainty, then they haven't necessarily contradicted each other. Peikoff talks about these subjects but doesn't deal with points like this. But he doesn't just omit stuff; he seems to be contradicting points like this -- and therefore be mistaken -- and he fails to explain how he isn't mistaken.

Peikoff focuses his attacks on the worst kinds of skeptics and acts like he has criticized the entire category of all skepticism. He doesn't mention or discuss that there are different types of skeptics (e.g. rejecting all knowledge, or just rejecting non-contextual certainty). He seems to lump fallibilists in with skeptics, though I have no doubt he wouldn't want to lump AR in with skeptics, so his position isn't explained well.

If you want to exclude people like myself and Karl Popper (and AR) from being skeptics, fine. But then you can't just define skepticism as rejecting certainty! Unless you add a bunch of clarifications and qualifications about what you mean, Popper absolutely does reject certainty! (As do I.) You'd also have to stop presenting it as skeptics and non-skeptics, only two categories, since Popper and Peikoff would be non-skeptics with major differences in views. (I don't normally present it as skeptics and non-skeptics, but Peikoff did.)


These comments above are from his Objective Communication lectures. Epistemology is not the primary topic, but he keeps talking about it. (He's also talked about induction and empiricism a number of times. That material is also problematic.)

I've never seen AR do it like Peikoff. Whenever she talks about these things I have a tiny fraction of the objections. But when it's Peikoff (or Binswanger or I think many other Objectivists) then I see lots of problems.


On another note, Peikoff's comments about how awful school is are worthwhile. They are directed especially at grad school and university. He talks about how much it trashed his mind (despite his best efforts not to let it do that), and how dangerous it is and hard to stay rational, and how much time and effort it took to recover.

In a way, it excuses his other mistakes. He actually read some stuff from a paper he wrote in grad school. He's improved a lot since then!! So that's great. One can respect how far he's come and perhaps sympathize a bit with some of his mistakes.

I for one have the advantage of avoiding a lot of the tortures Peikoff endured at school. It really helps. Yeah, sure, K-12 sucked but I never took it seriously after around 6th grade or maybe earlier. It's so much worse and harder if you take it seriously.

(But I fear he wouldn't appreciate this perspective much. I fear he'd say he's super awesome now and not making mistakes, and I'm wrong about epistemology -- but without wishing to debate it to a conclusion in a serious way, as I am willing to do. If he rejects the attitudes and role of a learner still making progress, then it becomes hard to sympathize with errors. If he also isn't open to answering criticisms, then it's even worse.)


One of the worrisome things that does apply to AR herself is how few philosophers Objectivists find to appreciate (I learned from AR, Popper, Goldratt and others; Peikoff doesn't seem to have gotten much value from people besides AR). It's a problem with Peikoff but also with AR. She was aware of Mises and Szasz. But she missed Popper, Burke, Godwin and Feynman, for example. Is there any excuse for that? Godwin is obscure, but Szasz was aware of him! Mises was aware of Godwin too, but Mises read a translation and totally got the wrong idea. Szasz and Mises were also aware of Burke. I'm not sure how much Mises knew about Burke, but Szasz had a good understanding. Szasz also knew a lot about Popper, and had some familiarity with Feynman. So if Szasz could find all these philosophers, and learn from them, what is AR's excuse?

And of course I can and did find and study Godwin and others too. I sought out good philosophy with some success. It's not trivial to find, but it's worth the effort.

Second Email

Peikoff's on-topic comments about Objective Communication continue to be good. No monumental breakthrough, but lots of solid points explained well.

Peikoff said certainty is conclusiveness.

If we figure he meant contextual conclusiveness (if he didn't, that's worse!), that's Popper-compatible. Popperians reach what they call "tentative" conclusions which means that they are the current conclusion but could need to be reevaluated if the context changes (e.g. something new is thought of).

But can something called "tentativity" really be what Peikoff has in mind for "certainty"? I don't think so. If you listen to how he talks about it, and his examples, they do not fit this interpretation of the definition. But he doesn't clarify the correct definition or the way to interpret this one.

No comments are made about how his definition is compatible with this other thing he doesn't mean, or what's wrong with this thing. He doesn't address it. I don't think he's thought of it.

Long story short, what's going on is Peikoff is mistaken about the topic so his comments come off confused from the perspective of someone who already understands what he's missing.

Peikoff is targeting his comments against ideas much worse than his own. He's defeating what he sees as his (awful, pathetic) rivals. But why hasn't he engaged with any better rivals?

I don't think it's pure ignorance. For one thing, that would not be excusable: he should have checked for the existence of some better ideas.

But also, Peikoff knows (and endorses) Binswanger, and Binswanger knows of Popper. Binswanger's attitude to Popper is a combination of extreme ignorance and extreme venom (with extra features such as misquoting Popper and then not caring or correcting it). Some other Objectivists also know of Popper but reject him without rational, well-informed arguments or an adequate understanding of his ideas.

I suppose I should look these issues up in OPAR. But he's supposed to be talking to an audience with merely some knowledge of Objectivism. So if you've read everything AR says about this, that ought to be (more than) enough. His comments weren't meant only for audiences that have read OPAR.


Elliot Temple | Permalink | Messages (0)

Rand, Popper and Fallibility

I wrote this at an Objectivist forum in 2013.


http://rebirthofreason.com/Forum/Dissent/0261.shtml

Popper is by no means perfect. The important thing is the best interpretations (that we can think of) of his best ideas. The comment below about "animals" is a good example. I do not agree with his attitude to animals in general, and I'm uncomfortable with this statement. However, everything he said about animals (not much) can be removed from his epistemology without damaging the important parts.

Popper made some bad statements about epistemology, and some worse ones about politics. I don't think this should get in the way of learning from him. That said, I agree with Popper's main points below.

1) Can you show if Popper ever fully realized that the falsification of a universal positive proposition is a necessary truth? In other words, if a black swan is found, then the proposition "All swans are white" is falsified, but more than that, it is absolutely falsified (which is a form of absolute knowledge/absolute certainty)? Even if you can't, please discuss.

No, Popper denied this. The claim that we have found a black swan is fallible, as is our understanding of its implications.

Fallibility is not a problem in general. We can act on, live with, and use fallible knowledge. However, it does start to contradict you a lot when you start saying things like "absolute certainty".

Rand isn't fully clear about this. Atlas Shrugged:

"Do not say that you're afraid to trust your mind because you know so little. Are you safer in surrendering to mystics and discarding the little that you know? Live and act within the limit of your knowledge and keep expanding it to the limit of your life. Redeem your mind from the hockshops of authority. Accept the fact that you are not omniscient, but playing a zombie will not give you omniscience—that your mind is fallible, but becoming mindless will not make you infallible—that an error made on your own is safer than ten truths accepted on faith, because the first leaves you the means to correct it, but the second destroys your capacity to distinguish truth from error. In place of your dream of an omniscient automaton, accept the fact that any knowledge man acquires is acquired by his own will and effort, and that that is his distinction in the universe, that is his nature, his morality, his glory.

"Discard that unlimited license to evil which consists of claiming that man is imperfect. By what standard do you damn him when you claim it? Accept the fact that in the realm of morality nothing less than perfection will do. But perfection is not to be gauged by mystic commandments to practice the impossible [...]

Here Rand accepts fallibility and only rejects misuses like claiming man is "imperfect" to license evil. Man's imperfection is not an excuse for any evil -- agreed.

Rand has just acknowledged that man and his ideas and achievements are fallible. But then she decides to demand moral "perfection". Which must mean some sort of contextual, achievable perfection -- not the sort of infallible, omniscient perfection Popper rejects and Rand acknowledges as impossible.

It's the same when Rand talks about "certainty" which is really "contextual certainty" which is open to criticism, arguments, improvement, changing our mind, etc... (Only in new contexts, but every time anyone thinks of anything, or any time passed, then the context has changed at least a little. So the new context requirement doesn't cause trouble.)

2) Can you offer something to redeem Popper of seemingly damning quotes such as:

In so far as a scientific statement speaks about reality, it must be falsifiable: and in so far as it is not falsifiable, it does not speak about reality.

... which preemptively denies the possibility of axiomatic concepts (i.e., the possibility of statements that speak about reality, but are not, themselves, falsifiable).

Any statement which speaks about reality is potentially falsifiable (open to the possibility of criticism using empirical evidence) because, if it speaks about reality, then it runs the risk of being contradicted by reality.

Popper does deny axiomatic concepts, meaning infallible statements. Statements that you couldn't even try to argue with, potentially criticize, question, or improve on. All ideas should be open to the possibility of critical questioning and progress.

There is a big difference between open to refutation and refuted. What's wrong with keeping things open to the potential that, if someone has a new idea, we could learn better in the future?

"If realism is true, if we are animals trying to adjust ourselves to our environment, then our knowledge can be only the trial-and-error affair which I have depicted. If realism is true, our belief in the reality of the world, and in physical laws, cannot be demonstrable, or shown to be certain or 'reasonable' by any valid reasoning. In other words, if realism is right, we cannot expect or hope to have more than conjectural knowledge."

... which preemptively denies the possibility of arriving at a necessary truth about the world.

Conjectural knowledge (or trial-and-error knowledge) is Popper's term for fallible knowledge. It's objective, effective, connected to reality, etc, but not infallible. We improve it by identifying and correcting errors, so our knowledge makes progress.

We cannot establish our ideas are infallibly correct, or even that they are good or reasonable. Such claims (that some idea is good) never have authority. Rather, we accept them as long as we don't find any errors with them.

I think this is different than Objectivism, but correct. Well, sort of different. The following passage in ITOE could be read as something kind of like a defense of this Popperian position (and I think that is the correct reading).

One of Rand's themes here, in my words, is that fallibility doesn't invalidate knowledge.

The extent of today’s confusion about the nature of man’s conceptual faculty, is eloquently demonstrated by the following: it is precisely the “open-end” character of concepts, the essence of their cognitive function, that modern philosophers cite in their attempts to demonstrate that concepts have no cognitive validity. “When can we claim that we know what a concept stands for?” they clamor—and offer, as an example of man’s predicament, the fact that one may believe all swans to be white, then discover the existence of a black swan and thus find one’s concept invalidated.

This view implies the unadmitted presupposition that concepts are not a cognitive device of man’s type of consciousness, but a repository of closed, out-of-context omniscience—and that concepts refer, not to the existents of the external world, but to the frozen, arrested state of knowledge inside any given consciousness at any given moment. On such a premise, every advance of knowledge is a setback, a demonstration of man’s ignorance. For example, the savages knew that man possesses a head, a torso, two legs and two arms; when the scientists of the Renaissance began to dissect corpses and discovered the nature of man’s internal organs, they invalidated the savages’ concept “man”; when modern scientists discovered that man possesses internal glands, they invalidated the Renaissance concept “man,” etc.

Like a spoiled, disillusioned child, who had expected predigested capsules of automatic knowledge, a logical positivist stamps his foot at reality and cries that context, integration, mental effort and first-hand inquiry are too much to expect of him, that he rejects so demanding a method of cognition, and that he will manufacture his own “constructs” from now on. (This amounts, in effect, to the declaration: “Since the intrinsic has failed us, the subjective is our only alternative.”) The joke is on his listeners: it is this exponent of a primordial mystic’s craving for an effortless, rigid, automatic omniscience that modern men take for an advocate of a free-flowing, dynamic, progressive science.

One of the things that stands out to me in discussions like this is that all today's Objectivists seem (to me) more at odds with Popper than Rand's own writing is.

I'll close with one more relevant ITOE quote:

Man is neither infallible nor omniscient; if he were, a discipline such as epistemology—the theory of knowledge—would not be necessary nor possible: his knowledge would be automatic, unquestionable and total. But such is not man’s nature. Man is a being of volitional consciousness: beyond the level of percepts—a level inadequate to the cognitive requirements of his survival—man has to acquire knowledge by his own effort, which he may exercise or not, and by a process of reason, which he may apply correctly or not. Nature gives him no automatic guarantee of his mental efficacy; he is capable of error, of evasion, of psychological distortion. He needs a method of cognition, which he himself has to discover: he must discover how to use his rational faculty, how to validate his conclusions, how to distinguish truth from falsehood, how to set the criteria of what he may accept as knowledge. Two questions are involved in his every conclusion, conviction, decision, choice or claim: What do I know?—and: How do I know it?


Elliot Temple | Permalink | Messages (0)

Popperian Alternative to Induction

I wrote this on an Objectivist discussion forum in 2013.


http://rebirthofreason.com/Forum/Dissent/0265.shtml

I wrote:

Observe what? There are always many many things you could observe. Real scientific observation is selective.

Perform which action? There are many many actions one could perform. Real scientific action is selective.

Which patterns? There's always many many patterns.

In each case, being selective requires complex (critical) thinking. Ideas come first. Induction is supposed to explain how thinking works, but actually presupposes it.

Merlin Jetton replied:

Okay. Give us your answer to these questions. Please give us simple methods that cover all possible cases. How do we delimit those infinitely many possible conjectures?

(Following Popper.) We don't run into all the same problems because we use different methods in the first place.

We don't start with observation, scientific experiment, or finding patterns. All of those come later, after you already have various ideas. Then you do them according to your ideas. This is not problematic in general. It is a problem when you say stuff is "step 1" that actually presupposes ideas, and then claim your set of steps is a solution in epistemology and is how we get ideas.

We have a different approach that is not like induction and avoids many of induction's problems. By using different methods some problems never come up. We never have the problem of figuring out what to observe before having ideas, for example, because we say ideas come first before observations.

How are ideas learned then? Not from observations. Ideas come first. That's not to say observations are excluded. Observations are very useful. But first you need some ideas. Then you can observe (selectively, according to your ideas about what is important, what is interesting, what is notable, what is relevant to problems of interest, what clashes with your expectations, etc, etc ... and if your way of observing doesn't work out you can improve it with criticism, you can change and adjust it) and use the observations to help with further ideas (in a critical role – they rule things out).

Now, this is a hard issue, and you haven't read the literature, so don't be too ambitious about how much you expect to learn from a summary. But anyway, because it's hard I'm going to split it up. First we'll consider an adult who wants to learn something. Then we could talk about how a child gets started. I'll save that for later if the adult explanation goes over OK. The child is the harder case. I think it's too much to do the child first, all at once.

So, one of Popper's insights is that starting places aren't so important. I'm guessing this sounds dumb to you, because you're a foundationalist and think you have to start with the right foundations/premises/basis and then build up from there, step by step, making sure not to introduce errors or contradictions as you go. And Popper criticized and rejected that approach and offered a significantly different approach.

So let me try to explain what Popper's approach is like. People make mistakes. People are fallible. Errors are common. People mess up all the time. This isn't skepticism. People also get things right, learn, acquire knowledge, make scientific progress, etc, etc... But it's important to understand how easy it is to make mistakes. Knowledge is possible but hard to come by. To get knowledge you have to put a ton of effort into dealing with the problem of mistakes. I think if you read this the right way, you could agree with it. Objectivism recognizes that lots of philosophies go wrong and using the right methods is important and makes a big difference and some stuff like that.

So, OK, error is common and a big part of epistemology and philosophy is how you deal with error. What are you going to do about it? One school of thought tries to avoid errors. You use the right methods and then you get the right answers. That sounds very plausible but I don't think it's the right approach. I'll try to talk about Popper's approach instead. Popper's approach is you do try to avoid errors but you're never going to avoid all of them in the first place. That's not the primary most important thing. Whatever you do, some errors are going to get through. What you really have to do is set up mechanisms to identify and correct errors.

Popper applied this approach widely. Take politics and political systems. One of Popper's big ideas about politics is that trying to elect the right ruler is the wrong thing to focus on. Electing the right guy is trying to avoid errors. Yes you should put some effort into that but you can't do it perfectly and it's not the most important issue. What is the most important issue? That errors can be identified and corrected. In politics that means if you elect the wrong guy you find out fast, and you can get rid of him fast and you can get rid of him without violence. Popper called the wrong approach the "Who should rule?" problem and said most political philosophy argues about who should rule, when it should be focussing a lot more on how to set up political systems capable of correcting mistakes about who gets to rule.

What about epistemology? "Which ideas should we start with?" is a bit like "Who should rule?" You're never going to get it perfect and it shouldn't be the primary focus of your attention. Instead you want to set things up so if you start with the wrong ideas you can find out about the mistake and fix it quickly, easily, cheaply.

error correction is (a lot) more important than starting in a good place. look at it another way. if you start in a bad place but keep making progress, after a while you'll get to a good place and keep going. but if you start in a good place but aren't correcting errors, there is no progress, things never get better, long term you're doomed. so error correction is the more crucial thing that you really need.
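a loose computational analogy may help here (the function and numbers below are my own illustration, not anything from the discussion): an iterative process with a working error-correction step ends up in a good place from almost any starting point, while a good starting point with no correction step never improves.

```python
# Toy analogy (illustrative only): the "good place" is the minimum of
# f(x) = (x - 3)^2, namely x = 3. The correction step repeatedly moves
# the current guess against the error gradient.

def improve(guess, steps=1000, rate=0.1):
    """Repeatedly correct the current guess toward the minimum of
    f(x) = (x - 3)^2."""
    for _ in range(steps):
        gradient = 2 * (guess - 3)  # direction of increasing error
        guess -= rate * gradient    # the error-correction step
    return guess

print(improve(-100.0))  # terrible start, still converges to ~3.0
print(improve(2.9))     # good start, also converges to ~3.0
```

the analogy is limited, of course: real criticism isn't a numeric gradient. the point is only that a repeated correction step, not the starting position, does the work.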

so how can adults be selective? how can they decide what scientific experiments to do or which actions and results to investigate? how can they decide what patterns to look for? answer: they already have ideas about that. they can use the ideas they already have. that's ok! they don't need me to tell them some perfect answer. i could give them some advice and there could be some value in it, but it doesn't matter so much. they should start with the ideas they already have, use those, and then if something goes wrong they can make adjustments to try to do something about it. (and they can also philosophically examine their ideas and try to criticize instead of waiting for something noticeable to go wrong.)

in one sense, we're both advocating the same thing. people can and do use the ideas they already have about how to be selective, what issues to focus on, which patterns are notable, and more. but we Popperians know that is what's going on, and know how to keep making progress from there even if people aren't great at it. inductivists on the other hand think they have this method from first principles that is how people think but actually it smuggles in all sorts of common sense and pre-existing ideas as unexamined, uncriticized premises. and that's a really bad idea. those premises being smuggled in are good enough to start with, but what you really need to do is examine and criticize them!

i have not addressed how children/infants get started. i also haven't explained how thinking works at a lower level. (being able to criticize and correct errors requires thinking. how is that done?). we can get to those next if what i'm saying so far goes over ok. also the very short answer for how thinking works is that evolution is the only known theory for how knowledge can be created from non-knowledge. human thinking, at a low level, uses an evolutionary process to create knowledge. (i mean thinking literally uses evolution, not metaphorically. and no i'm not saying you consciously do that).
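the variation-and-selection mechanism being referred to can be sketched in a toy program (my code; the target string and parameters are illustrative, and a fixed target makes this a toy demonstration of the mechanism, not a model of real, open-ended knowledge creation):

```python
import random

random.seed(0)  # make the run repeatable
TARGET = "knowledge"
LETTERS = "abcdefghijklmnopqrstuvwxyz"

def fitness(candidate):
    # count positions that match the target
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate, rate=0.1):
    # variation: each letter has a small chance of being replaced
    return "".join(random.choice(LETTERS) if random.random() < rate else c
                   for c in candidate)

# start from a random guess, then repeat variation + selection
current = "".join(random.choice(LETTERS) for _ in TARGET)
while current != TARGET:
    variants = [mutate(current) for _ in range(100)]
    # selection: keep the best candidate; never discard the incumbent
    current = max(variants + [current], key=fitness)

print(current)  # → knowledge
```

random guessing alone would essentially never find a 9-letter target; variation plus selection finds it in a few dozen generations, which is the asymmetry the evolutionary account relies on.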


Elliot Temple | Permalink | Messages (0)

Deutsch Misquoted Turing

David Deutsch (DD) wrote in Quantum theory, the Church-Turing principle and the universal quantum computer (1985), p. 3:

Church (1936) and Turing (1936) conjectured ... This is called the ‘Church-Turing hypothesis’; according to Turing,

Every ‘function which would naturally be regarded as computable’ can be computed by the universal Turing machine. (1.1)

And from Deutsch's references (p. 19):

Turing, A. M. 1936 Proc. Lond. math. Soc. Ser. 2, 442, 230.

Now we'll compare with Turing's paper: On Computable Numbers, With An Application To The Entscheidungsproblem (1936), p. 230:

the computable numbers include all numbers which could naturally be regarded as computable.

Turing wrote "numbers", but DD misquoted that as "function". Turing also wrote "could" which DD misquoted as "would".

I double checked using two other copies of Turing's paper. (One and two.)
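This kind of verbatim check is easy to automate. Here's a minimal sketch (the function is my own illustration, not a tool anyone involved used):

```python
def verify_quote(quote, source_text):
    """Return True if the quoted words appear verbatim in the source,
    ignoring case and collapsing whitespace."""
    def normalize(s):
        return " ".join(s.lower().split())
    return normalize(quote) in normalize(source_text)

# Turing's actual sentence from the 1936 paper, p. 230:
turing = ("the computable numbers include all numbers which could "
          "naturally be regarded as computable.")

print(verify_quote("which could naturally be regarded", turing))  # → True
print(verify_quote("which would naturally be regarded", turing))  # → False
```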

There's also a problem because Deutsch uses what appears to be an italicized block quote. You'd expect the whole block quote to be a quote of Turing, but instead it's a paraphrase. Inside the paraphrase are quotation marks surrounding the misquote of Turing that I criticized.

DD's citation is also incorrect. DD cites Turing's paper as appearing in volume 442 of the Proceedings of the London Mathematical Society, but it was actually in volume 42, not 442.

To determine what's correct, we can check how Turing himself cites it. In a correction to his paper, Turing cited himself:

Proc. London Math. Soc. (2), 42 (1936-7), 230-265.

You can also get the correct cite, with volume 42, from the Stanford Encyclopedia of Philosophy or from Wikipedia.

You can also see that the latest volume of the journal, published in 2021, is volume 122. Volume 442 is unlikely to exist for over 100 more years. And the journal's website has archives showing that the Turing article was in volume 42.

Tangentially, I hope this lowers your opinion of academic peer review. DD's paper was published in Proceedings of the Royal Society of London, a prestigious, peer-reviewed journal that started around 1830. It has published work from many famous scientists.


Thanks to Dec for finding this misquote.

Note that DD has published a lot of misquotes.


Update 2021-07-15: Dec pointed out that a similar Turing misquote is in DD's book The Fabric of Reality:

He [Turing] conjectured that this repertoire consisted precisely of ‘every function that would naturally be regarded as computable’.

No, Turing wrote "all numbers which could" not "every function that would".

It appears that DD got this misquote from his own paper, and also modified it. There's a recurring pattern where every time DD touches a quote, there's a significant chance that he changes something. Here, he took the word "every" which was outside of quote marks in his paper and moved it inside quote marks for his book.


Update 2021-09-14: I contacted the academic publisher (the Proceedings of the Royal Society). They looked into the matter and said:

Apologies for the delay in getting back to you on this. A board member has had a look at the paper and does not think the misquote affects the outcome of the research presented in the paper. Although the error in the refences is unfortunate, we do not believe it will prevent readers from finding the correct article. Given the age of the paper we therefore do not think any further action is necessary.

I have several criticisms of this response.

They agree with me that DD misquoted and miscited.

Why won't they put up errata on their website? Is that too hard for them (they are bad at websites?) or do they actually not want to?

Errata serves several purposes. Academics working in the field could find out about the issue. People debating the issue could also refer to it – it would e.g. let a student whose professor repeated the error borrow the journal's authority to correct the professor. It's risky to correct your professor in general, but much easier with an official errata to point him to.

Is correcting professors a real issue? I think so because professors have been teaching Deutsch's error (there are some examples posted in the comments below). And they've been doing it out of context. In other words, even if the error did not affect the conclusion of Deutsch's paper, it still can affect other conclusions about other issues. So spreading the error matters, and it has in fact been taught in schools. Also, any reader of the paper may remember the Turing quote and use it for something else, and it may negatively affect the conclusion of their usage, even if it didn't affect the conclusion of Deutsch's paper. (Admittedly, some of the professors don't cite a source and might have been getting the error from Deutsch's book The Fabric of Reality where he repeated a similar error. But the fact that Deutsch put roughly the same error in his book is, IMO, an additional reason to errata it and at least do a little bit to stop the spread of the error.)

If they published an errata or other note about the error, they could also state their reasons for why they believe the paper's conclusion is unaffected. Other people could consider that reasoning and potentially disagree. This could be an area for critical thinking and truth seeking rather than an unaccountable authority pronouncing judgment for secret reasons. Even if it's no big deal in this case, their general attitude is concerning. How many other judgments do they make with no transparency? What is the nature of those judgments? Are any of those judgments mistaken? Do they gloss over many errors in papers they published? Could they be doing that partly out of bias and not wanting to draw attention to their own involvement in errors?

People expect academic science journals with peer review to have high standards and to be really picky about errors. They are not living up to this reputation. So much for their unlimited interest in truth for the sake of truth or whatever they were supposed to be doing.

They are still sharing the paper electronically and could update it there. Deutsch is still alive and available and could actually write or approve a small update, or they could do an update which is labelled as written by a journal editor not Deutsch.

How did this error happen? How did every step of the publishing process miss it? Did anyone intentionally cause or allow the error? Were any biases involved? They did no post mortem, no root cause analysis, no investigation into their peer review and editorial process, etc.

There are major causes for concern here. This error calls into question how effective their reviewers and editors are. It also calls into question Deutsch's integrity. Maybe it was an accident, but they have given no account of how it could have happened accidentally, nor asked him to give one.

Do peer reviewers or editors not check quotes or cites? Should they? How widespread a problem is misquoting? How many other misquote reports do they receive, validate as correct criticism, and then bury? Might they be hiding a pattern revealing that many papers contain misquotes? Instead of hiding misquotes, should they be doing something different, like paying people enough money for misquote reports to make finding the misquotes worth the time and effort? If they actually wanted to find out about misquotes, and find out how big a problem it is, wouldn't they do something more like that? They could have responded to me by offering me money to find more misquotes, since I've proven I can do it. That would seem reasonable if they were better at and more interested in correcting errors.

Deutsch had an argument with a referee which was related to the text Deutsch misquoted:

http://www.daviddeutsch.org.uk/wp-content/uploads/2018/03/MathematiciansMisconception.pdf

But I soon found out that not everyone saw it that way. I also had referee problems. The referee of the paper in which I presented that proof insisted that Turing’s phrase “would naturally be regarded as computable” referred to mathematical naturalness – mathematical intuition – not nature.

(BTW, as a first impression, without reading Turing's paper or investigating the issue, I agree with the referee. When talking about naturally regarding something, that sounds like it's talking about what is natural or intuitive to people and their opinions, not about nature, due to what the key word "regard" means.)

Could Deutsch have intentionally misquoted in order to help win a specific logical point he was arguing about with the reviewer? Could the horrible, misleading presentation of the quote (as a block quote with an internal quote – which btw has tricked some people into thinking the whole thing is a quote) have been some kinda compromise worked out between Deutsch and the peer reviewer? Was the misquote in earlier drafts of the paper? Do they have records of what changes were made to the paper during peer review? In any case, there is some possible motive here for Deutsch falsifying the quote on purpose or just being biased and more careless in his own favor. Deutsch has a history of repeated misquotes throughout his career and most of them favor him in some way and I don't recall any that were bad for him, so it seems like whatever's going on involves bias if not actual deliberate, fully-conscious misquoting.

Seriously, how do wording errors in quotes happen accidentally? I understand typoing a letter or two when typing a quote in from a paper book or journal. But how do you just change the word? That seems more like Deutsch quotes stuff from memory – and his memory is biased in his favor (or there's selection bias – if he likes the version he remembers then he uses it, but if it's not ideal then he looks up the exact wording). Quoting from memory in your books and papers (and scripted speeches) is a serious scholarship violation that should lead to repercussions and major reputational damage. That's totally unacceptable. Another possibility, which there have also been potential indicators for, is that Deutsch changes quotes during his editing process without double checking the original. I suspect Deutsch thinks certain minor changes to quotes are OK, and maybe this somehow escalates to more major wording changes after multiple editing passes. Deutsch's editing could be like the game "telephone" where you whisper something to the guy next to you, who whispers it to the next guy, and so on. The goal is to repeat exactly what you heard. After something has been whispered a dozen times, often all the words are different and the meaning is totally changed.

In my experience, people are often willing to view things as "an accident" or "a mistake" without thinking about how exactly it happened. Some mistakes are simple, like a one-letter typo happening because you pressed the wrong keyboard key by accident: your finger dexterity is good but imperfect, so occasionally you hit the wrong key (and then you usually notice and fix the typo, but not always). But many errors don't have such simple explanations and merit actual analysis. Changing the word "numbers" to "function" is not a typo due to flawed finger dexterity. That's bias, misremembering (while incorrectly believing quoting from memory is OK), intentionally falsifying the quote, or perhaps a horribly unreasonable editing process that edits words within quotes similarly to how it edits words that are not within quotes. Or there are other possibilities, like maybe a peer reviewer or editor caused the error and Deutsch didn't have full control over the final wording of his paper.

And how did the journal miss the error? Was it anyone's job to catch the error? Would the journal like to catch such errors in the future? And how did the error remain unnoticed in the archives for decades? Do they have a tiny readership? Do their readers not care about errors? Do their readers fail to report errors? Do their readers report errors but nothing is done? Would it make sense to hire people to review the archives for errors or should they focus on catching more errors before publication or should they just continue to not even post errata about errors and pretend nothing happened?

For more info, see my reply email to the journal:

https://curi.us/2477-academic-journals-are-unreasonable


Elliot Temple | Permalink | Messages (15)


Fallible Justificationism

This is adapted from a Feb 2013 email. I explain why I don't think all justificationism is infallibilist. Although the direct discussion is with Alan, this issue came up because I'm disagreeing with David Deutsch (DD). DD claims in The Beginning of Infinity that the problem with justificationism is infallibilism:

To this day, most courses in the philosophy of knowledge teach that knowledge is some form of justified, true belief, where ‘justified’ means designated as true (or at least ‘probable’) by reference to some authoritative source or touchstone of knowledge. Thus ‘how do we know . . . ?’ is transformed into ‘by what authority do we claim . . . ?’ The latter question is a chimera that may well have wasted more philosophers’ time and effort than any other idea. It converts the quest for truth into a quest for certainty (a feeling) or for endorsement (a social status). This misconception is called justificationism.

The opposing position – namely the recognition that there are no authoritative sources of knowledge, nor any reliable means of justifying ideas as being true or probable – is called fallibilism.

DD says fallibilism is the opposing position to justificationism and that justificationists are seeking a feeling of certainty. And when I criticized this, DD defended this view in discussion emails (rather than saying that's not what he meant or revising his view). DD thinks justificationism necessarily implies infallibilism. I disagree. I believe that some justificationism isn't infallibilist. (Note that DD has a very strong "all" type claim and I have a weak "not all" type claim. If only 99% of justificationism is infallibilist, then I'm right and DD is wrong. The debate isn't about what's common or typical.)

Alan Forrester wrote:

[Justification is] impossible. Knowledge can't be proven to be true since any argument that allegedly proves this has to start with premises and rules of inference that might be wrong. In addition, any alleged foundation for knowledge would be unexplained and arbitrary, so saying that an idea is a foundation is grossly irrational.

I replied:

But "justified" does not mean "proven true".

I agree that knowledge cannot be proven true, but how is that a complete argument that justification is impossible?

And Alan replied:

You're right, it's not a complete explanation.

Justified means shown to be true or probably true. I didn't cover the "probably true" part. The case in which something is claimed to be true is explicitly covered here. Showing that a statement X is probably true either means (1) showing that "statement X is probably true" is true, or it means that (2) X is conjectured to be probably true. (1) has exactly the same problem as the original theory.

In (2) X is admitted to be a conjecture and then the issue is that this conjecture is false, as argued by David in the chapter of BoI on choices. I don't label that as a justificationist position. It is mistaken but it is not exactly the same mistake as thinking that stuff can be proved true or probably true.

In parallel, Alan had also written:

If you kid yourself that your ideas can be guaranteed true or probably true, rather than admitting that any idea you hold could be wrong, then you are fooling yourself and will spend at least some of your time engaged in an empty ritual of "justification" rather than looking for better ideas.

I replied:

The basic theme here is a criticism of infallibilism. It criticizes guarantees and failure to admit one's ideas could be wrong.

I agree with this. But I do not agree that criticizing infallibilism is a good reply to someone advocating justificationism, not infallibilism. Because they are not the same thing. And he didn't say anything glaringly and specifically infallibilist (e.g. he never denied that any idea he has could turn out to be a mistake), but he did advocate justificationism, and the argument is about justification.

And Alan replied:

Justificationism is inherently infallibilist. If you can show that some idea is true or probably true, then when you do that you can't be mistaken about it being true or probably true, and so there's no point in looking for criticism of that idea.

My reply below responds to both of these issues.


Justificationism is not necessarily infallibilist. Justification does not mean guaranteeing ideas are true or probably true. The meaning is closer to: supporting some ideas as better than others with positive arguments.

This thing -- increasing the status of ideas in a positive way -- is what Popper calls justificationism and criticizes in Realism and the Aim of Science.

I'll give a quote from my own email from Jan 2013, which begins with a Popper quote, and then I'll continue my explanation below:

Realism and the Aim of Science, by Karl Popper, page 19:

The central problem of the philosophy of knowledge, at least since the Reformation, has been this. How can we adjudicate or evaluate the far-reaching claims of competing theories and beliefs? I shall call this our first problem. This problem has led, historically, to a second problem: How can we justify our theories or beliefs? And this second problem is, in turn, bound up with a number of other questions: What does a justification consist of? and, more especially: Is it possible to justify our theories or beliefs rationally: that is to say, by giving reasons -- 'positive reasons' (as I shall call them), such as an appeal to observation; reasons, that is, for holding them to be true, or at least 'probable' (in the sense of the probability calculus)? Clearly there is an unstated, and apparently innocuous, assumption which sponsors the transition from the first to the second question: namely, that one adjudicates among competing claims by determining which of them can be justified by positive reasons, and which cannot.

Now Bartley suggests that my approach solves the first problem, yet in doing so changes its structure completely. For I reject the second problem as irrelevant, and the usual answers to it as incorrect. And I also reject as incorrect the assumption that leads from the first to the second problem. I assert (differing, Bartley contends, from all previous rationalists except perhaps those who were driven into scepticism) that we cannot give any positive justification or any positive reason for our theories and our beliefs. That is to say, we cannot give any positive reasons for holding our theories to be true. Moreover, I assert that the belief we can give such reasons, and should seek for them is itself neither a rational nor a true belief, but one that can be shown to be without merit.

(I was just about to write the word 'baseless' where I have written 'without merit'. This provides a good example of just how much our language is influenced by the unconscious assumptions that are attacked within my own approach. It is assumed, without criticism, that only a view that lacks merit must be baseless -- without basis, in the sense of being unfounded, or unjustified, or unsupported. Whereas, on my view, all views -- good and bad -- are in this important sense baseless, unfounded, unjustified, unsupported.)

In so far as my approach involves all this, my solution of the central problem of justification -- as it has always been understood -- is as unambiguously negative as that of any irrationalist or sceptic.

If you want to understand this well, I suggest reading the whole chapter in the book. Please don't think this quote tells all.

Some takeaways:

  • Justificationism has to do with positive reasons.

  • Positive reasons and justification are a mistake. Popper rejects them.

  • The right approach to epistemology is negative, critical. With no compromises.

  • Lots of language is justificationist. It's easy to make such mistakes. What's important is to look
    out for mistakes and try to correct them. ("Solid", as DD recently used, was a similar mistake.)

  • Popper writes with too much fancy punctuation which makes it harder to read.

A key part of the issue is the problem situation:

How can we adjudicate or evaluate the far-reaching claims of competing theories and beliefs?

Justificationism is an answer to this problem. It answers: the theories and beliefs with more justification are better. Adjudicate in their favor.

This is not an inherently infallibilist answer. One could believe that his conception of which theories have how much justification is fallible, and still give this answer. One could believe that his adjudications are final, or one could believe that his adjudications could be overturned when new justifications are discovered. Infallibilism is not excluded nor required.


Looking at the big picture, there is the critical approach to evaluating ideas and the justificationist or "positive" approach.

In the Popperian critical approach, we use criticism to reject ideas. Criticism is the method of sorting out good and bad ideas. (Note that because this is the only approach that actually works, everyone does it whenever they think successfully, whether they realize it or not. It isn't optional.) The ideas which survive criticism are the winners.

In the justificationist approach, rather than refuting ideas with negative criticism, we build them up with positive arguments. Ideas are supported with supporting evidence and arguments. The ones we're able to support the most are the winners. (Note: this doesn't work, no successful thinking works this way.)

These two rival approaches are very different and very important. It's important to differentiate between them and to have words for them. This is why Popper named the justificationist approach, which had gone without a name because everyone took it for granted and didn't realize it had any rival or alternative approaches.

Both approaches are compatible with both infallibilism and fallibilism. They are metaphorically orthogonal to the issue of fallibility. In other words, fallibilism and justificationism are separate issues.

Fallibilism is about whether or not our evaluations of ideas should be subjected to revision and re-checking, or whether anything can be established with finality so that we no longer have to consider arguments on the topic, whether they be critical or justifying arguments.

All four combinations are possible:

Infallible critical approach: you believe that once socialist criticisms convince you capitalism is false, no new arguments could ever overturn that.

Infallible justificationist approach: you believe that once socialist arguments establish the greatness of socialism, then no new arguments could ever overturn that.

Fallible critical approach: you believe that although you currently consider socialist criticisms of capitalism compelling, new arguments could change your mind.

Fallible justificationist approach: you believe that although you currently consider socialist justifying arguments compelling (at establishing the greatness and high status of the socialism, and therefore its superiority to less justified rivals), you are open to the possibility that there is a better system which could be argued for even more strongly and justified even more and better than socialism.


BTW, there are some complicating factors.

Although there is an inherent asymmetry between positive and negative arguments (justifying and critical arguments), many arguments can be converted from one type to the other while retaining some of the knowledge.

For example, someone might argue that the single particle two slit experiment supports (justifies) the many-worlds interpretation of quantum physics. This can be converted into criticisms of rivals which are incompatible with the experiment. (You can convert the other way too, but the critical version is better.)

Another complicating factor is that justificationists typically do allow negative arguments. But they use them differently. They think negative arguments lower status. So you might have two strong positive arguments for an idea, but also one mild negative argument against it. This idea would then be evaluated as a little worse than a rival idea with two strong positive arguments but no negative arguments against it. But the idea with two strong positive arguments and one weak criticism would be evaluated above an idea with one weak positive argument and no criticism.

This is easier to express in numbers, but usually isn't expressed that way. E.g. one argument might add 100 justification and another adds 50, and then a minor criticism subtracts 10 and a more serious criticism subtracts 50, for a final score of 90. Instead, people say things like "strong argument" and "weak argument" and it's ambiguous how many weak arguments add up to the same positive value as one strong argument.
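The numeric version of that scoring scheme can be sketched in a few lines of code. The specific numbers are the illustrative ones from the paragraph above, not anything a justificationist has formally written down:

```python
# Hypothetical justificationist scoring: each argument is assigned a strength.
# Positive strengths justify an idea; criticisms subtract from its total.
arguments_for_idea = [100, 50]     # two positive arguments
arguments_against_idea = [10, 50]  # a minor and a more serious criticism

# The idea's overall justification is the signed sum.
score = sum(arguments_for_idea) - sum(arguments_against_idea)
print(score)  # 90
```

The hard part, as discussed above, is that nothing tells you where the weights 100, 50 and 10 come from; the arithmetic only works after strengths have somehow been assigned.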

In justification, arguments need strengths. Why? Because simply counting up how many arguments each idea has for it (and possibly subtracting the number of criticisms) is too open to abuse by using lots of unimportant arguments to get a high count. So arguments must be weighted by their importance.

If you try to avoid this entirely, then justificationism stops functioning as a solution to the problem of evaluating competing ideas. You would have many competing ideas, each with one or more arguments on its side, and no way to adjudicate. To use justificationism, you have to have a way of deciding which ideas have more justification.

The critical approach, properly conceived, works differently than that. Arguments do not have strengths or weights, nor do we count them up. How can that be? How can we adjudicate between competing ideas without that? Because one criticism is decisive. What we seek are ideas we don't have any criticisms of. Those receive a good evaluation. Ideas we do have criticisms of receive a bad evaluation. (These evaluations are open to revision as we learn new things.) (Also, there are only two possible evaluations in this system: the ideas we do have criticisms of, and the ideas we don't. If you don't do it that way, and you follow the logic of your approach consistently, you end up with all the problems of justificationism. Unless perhaps you have a new third approach.)
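By contrast, the critical approach needs no strengths at all, only the binary question of whether any criticism of an idea currently stands. A minimal sketch, with hypothetical ideas and criticisms:

```python
# Critical approach: an idea's evaluation is binary. It depends only on
# whether we currently have any criticism of it, never on counts or weights.
criticisms = {
    "idea A": [],  # no known criticisms
    "idea B": ["incompatible with the single particle two slit experiment"],
}

def evaluate(idea):
    # One criticism is decisive; evaluations are tentative and change
    # as criticisms are added or retracted when we learn new things.
    return "bad" if criticisms[idea] else "good"

print(evaluate("idea A"))  # good
print(evaluate("idea B"))  # bad
```

Note that appending a second criticism to "idea B" changes nothing: there is no score to lower further, which is exactly the asymmetry the text describes.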


Elliot Temple | Permalink | Messages (0)

Beginning of Infinity Website Removed in Protest

I took down the website beginningofinfinity.com and replaced it with the below protest message.


This website promoted David Deutsch’s book The Beginning of Infinity (BoI). David was my friend, mentor and colleague. I helped with drafts of BoI for seven years (I wrote over 200 pages of suggestions, comments and edits to help with the book). At David’s request, I made and owned this BoI website and the BoI Google Groups forum.

I’ve taken this site down in protest due to David’s role in harassment against me. I’ve been harassed by his fans and he lied about me. They’ve disrupted my blog, forums, and ability to discuss with other intellectuals online.

I also discovered many misquotes in BoI which, alone, would be enough reason for me to stop actively promoting BoI.

The story in short: David (and his Taking Children Seriously co-founder Sarah) created an online community which I was part of, but then he left after 15 years. Now he has a second fan community, which is harassing the first community. The harassment is primarily targeted at me, presumably because I’m now the leader of the older community. One of the motives some people have communicated is that they see me as David’s enemy.

The harassment has persisted for years, and has included dozens of fake identities (some maintained for months), hundreds of harassing messages from over one hundred IP addresses, stalking me to other websites to disrupt my conversations there, DDoSing, impersonation, threats, spam, plagiarism, libel, fraud and doxxing. Some of that is illegal (I am not a lawyer; I’ve presented evidence; judge for yourself).

David has been unwilling to ask his fans to stop, to discuss the matter privately or publicly, to explain himself, to dispute any of the evidence, to state a grievance he has against me, or to offer any terms for truce. I’d be willing to do conflict resolution through proxies or associates (David’s, mine or both) but he’s been unwilling to do that.

When asked to tell his fans to stop harassing, David not only refused, but turned it around and lied to attack the victim (me), which justified and encouraged additional harassment. His lie is damaging to my reputation and it seems likely that he's said it to other people privately. Rather than deescalate, he chose to openly join in the harassment himself by smearing me. He hasn't retracted his lie, nor has he denied circulating it privately so that harassers believed it and were motivated by it. This is despite me posting documentation that he's lying. (I understand David's lie to be libel and defamation, but I don't have the resources to stop it. I am not a lawyer; you can read what he said at the link, along with the actual facts, and judge for yourself.)

I finally gave up and closed the comments on my blog – after 18 years and over 20,000 comments – due to being unable to deal with the harassment there. I’ve also been harassed at Reddit, Less Wrong, Twitter, Facebook, Google Groups, Basecamp, Discord and Slack. They won’t leave me alone.

David hasn’t argued that he isn’t involved or explained why his actions are OK. He hasn’t said which facts or claims he accepts or denies, presented his own account of events, or argued that my account is false. He hasn’t denied gossiping negatively about me, nor said what he’s doing to avoid crossing the line into unacceptable behavior. He hasn’t given an innocent explanation for the links between the harassment and his social circle.

David hasn’t taken steps to distance himself from the problem or to reduce the harm being done. He hasn’t stated that he’s opposed to harassment in general or to any of the harassing actions by his fans against me. He hasn’t blocked the worst harasser on Twitter, and keeps tweeting with him. David won’t do anything to delegitimize the harassment. Many of David’s friends and associates behave similarly or worse. David won’t even pay lip service to saying that I’m not his enemy or that I shouldn’t be harassed.

David also hasn’t disowned the subreddit for The Beginning of Infinity, which was created by the worst harasser. Nor has David disowned a nasty message posted under the name “David Deutsch” (I believe it was impersonation, which is something that ought to concern David). I think some of David’s fans have taken his behavior as a signal that he wants me harassed, and he’s refused to deny wanting me harassed.

I’ve documented the harassment, provided extensive evidence, and explained what’s going on. The response has been a mix of silence and more harassment. David is more powerful and influential than me, and has more support and resources, so there isn’t much I can do besides speak truth to power and hope that reasonable people listen. I’ve tried to put up with things, ignore things for months, privately ask for a peaceful resolution, publicly ask for a peaceful resolution, etc. In the past, David spent thousands of hours discussing with me, but now he’s stonewalling all attempts at deescalation.

I have the right to be left alone, not harassed for years. My rights are being violated, and I think David is the root cause of the problem. David needs to take appropriate steps to rein in his toxic community, and needs to retract his lie about me.

If you’d like to help, please ask David and his community about the problem, criticize them and complain, but don’t harass them in return. Maybe David will stop his bad behavior if people complain. David’s public email address is [email protected] and his Twitter is @DavidDeutschOxf.

For more information, read my articles about the harassment. To contact me, email [email protected].

— Elliot Temple (my philosophy work)


Elliot Temple | Permalink | Messages (2)