
Critical Rationalism Epistemology Explanations

I discussed epistemology in a recent email:

I really enjoyed David Deutsch's explanation of Popper's epistemology and since reading Fabric of Reality I've read quite a bit of Popper. I've become convinced that Deutsch's explanation of Popper is correct, but I can also see why few people come away from Popper understanding him correctly. I believe Deutsch interprets Popper in a way that is much easier to understand.

Yes, I agree. DD refined and streamlined Critical Rationalism, and he's a better writer than Popper was. Popper made the huge breakthrough in the field and wrote a lot of good material about it, but there's still more work to do before most people get it.

Plus, I think he actually adds some ideas to Popper that matter and make it less misleading. Popper was struggling himself to understand his own theories, so it's understandable that he struggled to explain some parts of them.

I agree. I don't blame Popper for this, since he had very original and important ideas. He did more than enough!

(For example, it was problematic to refer to good theories as 'improbable' rather than 'hard to vary.' In context, I feel Popper meant the same thing, but the words he chose made it harder to convey the meaning to others.)

So I've been wondering if it's possible to boil Popper's epistemology (with additions and interpretations from Deutsch) down to a few basic principles that seem 'self evident' and then to draw necessary corollaries. If this could be done, it would make Popper's epistemology much easier to understand.

Here is what I've come up with so far. (I'm looking for feedback from others familiar with Popper's epistemology as interpreted and adjusted by Deutsch to point out where I got it wrong or am missing things.)

Criteria for a Good Explanation:

1. We should prefer theories that are explanations over those that are not.

This is an approximation.

The point of an idea is to solve a problem (or multiple problems). We should prefer ideas which solve problems.

Many interesting problems require explanations to solve them, but not all. Whether we want an explanation depends on the problem being addressed.

In general, we want to understand things, not just be told answers to trust on authority. So we need explanations of how and why the answers will work, so that we can think for ourselves, recognize what sorts of situations would be exceptions, and potentially fix errors or make improvements.

But some problems don't need explanations. I might ask my friend, who is good at cooking, "How long should I boil an egg?" and just want to hear a number of minutes without any explanation. Finding out the number of minutes solves my cooking problem. I didn't want to try to understand how cooking eggs works, and I didn't want to debate the matter or check my friend's ideas for errors; I just wanted it to come out decently. It can be reasonable to prioritize which issues I investigate more and which I don't.

2. We should prefer explanations that are hard to vary over ones that can easily be adjusted to fit the facts because a theory that can be easily adjusted to fit any facts explains every possible world and thus explains nothing in the actual world.

Hard to vary given what constraints?

Any idea is easy to vary if there are no constraints. You can vary it to literally any other idea, arbitrarily, in one step.

The standard constraint on varying an idea is that it still solves (most of) the same problems as before. To improve an idea, we want to make it solve more and better problems than before with little or no downside to the changes.

The problems ideas solve aren't just things like "explain the motion of balls" or "help me organize my family so we don't fight". Another important type of problem is understanding how ideas fit together with other ideas. Our knowledge has tons of connections where we understand ideas (often from different fields) to be compatible, and we understand how and why they are compatible. Fitting our knowledge together into a unified picture is an important problem.

The more our knowledge is constrained by connections to problems and other ideas, the more highly adapted it is to that problem situation, and therefore the harder it is to vary while keeping the same or greater level of adaptation. The more ideas are connected to other problems and ideas, the less wiggle room there is to make arbitrary changes without breaking anything.

Fundamentally, "hard to vary" just means "is knowledge". Knowledge in the CR view is adapted information. The more adapted information is, the more chance a random change will make it worse instead of better (worse and better here are relative to the problem situation).

There are many ways to look at knowledge that are pretty equivalent. Some ways are: ideas adapted to a problem situation, ideas that are hard to vary, non-arbitrary ideas, and ideas that break symmetries (that give you a way to differentiate things, prefer some over others, evaluate some as better than others, etc.). You can imagine that, by default, there are tons of ideas and they all look kinda equally good. And when two ideas disagree with each other, by default that is a symmetric situation: either one could be mistaken and we can't take sides. Knowledge lets us take sides; it helps us break the symmetry of "X contradicts Y, therefore also Y contradicts X" and helps us differentiate ideas so they don't all look the same to us.

3. A theory (or explanation) can only be rejected by the existence of a better explanatory theory.

Ideas should be rejected when they are refuted. A refutation is an explanation of how/why the idea will not solve the problem it was trying to solve. (Sometimes an idea is proposed as a solution to multiple different problems. In that case, it may be refuted as a solution to some problems while not being refuted as a solution for others. In this way, criticism and refutation are contextual rather than universal.)

You don't need a better idea in order to decide that an idea won't work – that it fails to solve the problem you thought it solved. If it simply won't work, it's no good, whether you have a better idea or not.

These are fairly basic and really do seem 'self evident.' But are they complete? What did I miss?

I then added a number of corollaries that come out of the principles to explain the implications.

1. We should prefer theories that are explanations over those that are not.
a. Corollary 1-1: We should prefer theories that explain more over those that explain less. In other words, we should prefer theories that have fewer problems (things it can’t explain) over ones that have more problems.

Don't judge ideas on quantity of explanation. Quality is more important. Does it solve problems we care about? Which problems are important to solve? Which issues are important to explain and which aren't?

Also, we never need to prefer one idea over another when they are compatible. We can have both.

When two ideas contradict each other, then at least one is false. We can't determine that one is false by looking at their positive virtues (how wonderful are they, how useful are they, how much do they explain). Instead, we have to deal with contradictions by figuring out that an idea is actually wrong, we have to look at things critically.

b. Corollary 1-2: We should prefer actual explanations over pseudo-explanations (particularly explanation spoilers) disguised as explanations.
c. Corollary 1-3: If the explanatory power of a theory comes by referencing another theory, then we prefer the other theory because it’s the one that actually explains things.
2. We should prefer explanations that are hard to vary over ones that can easily be adjusted to fit the facts because a theory that can be easily adjusted to fit any facts explains every possible world and thus explains nothing in the actual world.
a. Corollary 2-1: We should prefer explanations that have survived the strongest criticisms or tests we have currently been able to devise.

Criticisms don't have strengths. A criticism either explains why an idea fails to solve a problem, or it doesn't.

See: https://yesornophilosophy.com and http://curi.us/1595-rationally-resolving-conflicts-of-ideas and especially http://curi.us/1917-rejecting-gradations-of-certainty

Popper and DD both got this wrong, despite DD's brilliant criticism of weighing ideas in BoI. The idea of arguments having strengths is really ingrained in common sense in our culture.

b. Corollary 2-2: We should prefer explanations that are consistent with other good explanations (that makes it harder to vary), unless it violates the first principle.
3. A theory (or explanation) can only be rejected by the existence of a better explanatory theory.
a. Corollary 3-1: We should prefer theories (or explanations) that suggest tests that the previously best explanation can’t pass but the new one can. (This is called a Critical Test.)
b. Corollary 3-2: It is difficult to devise a Critical Test of a theory without first conjecturing a better theory.
c. Corollary 3-3: A theory that fails a test due to a problem in a theory and a theory that fails a test due to some other factor (say experimental error) are often indistinguishable unless you have a better theory to explain which is which.

Yes, after a major existing idea fails an experimental test we generally need some explanatory knowledge to understand what's going on, and what the consequences are, and what we should do next.


Elliot Temple on July 17, 2018

Messages (41)

I read this article on Kindle, via the Send to Kindle app, and I couldn't differentiate the quotes from normal text; they appear the same.


Anonymous at 2:36 PM on July 19, 2018 | #10332 | reply | quote

Sounds like Kindle is completely broken. They are standard html blockquote tags.


Anonymous at 2:39 PM on July 19, 2018 | #10333 | reply | quote

Refutation

>Ideas should be rejected when they are refuted. A refutation is an explanation of how/why the idea will not solve the problem it was trying to solve.

> You don't need a better idea in order to decide that an idea won't work – that it fails to solve the problem you thought it solved.

By "rejected", do you mean "thrown away"? Throwing away ideas for how to solve a problem just because they don't work disables you from modifying them to find a solution.

Throwing away ideas only if there's a strictly better idea doesn't have that problem, because whatever improvements could be made to the worse ones could be made to the better one.


Evan at 3:58 PM on October 20, 2018 | #11304 | reply | quote

Quality vs quantity of explanation

>>a. Corollary 1-1: We should prefer theories that explain more over those that explain less. In other words, we should prefer theories that have fewer problems (things it can’t explain) over ones that have more problems.

>Don't judge ideas on quantity of explanation. Quality is more important. Does it solve problems we care about? Which problems are important to solve? Which issues are important to explain and which aren't?

What is there to quality except problems which are supersets and subsets of other problems? E.g. the solution to the problem of AGI solves the problem of aging and the problem of nuclear disarmament, so solving AGI is a more important problem. That's quantity in the sense the emailer was talking about.


Evan at 5:43 PM on October 20, 2018 | #11305 | reply | quote

#11304

> By "rejected", do you mean "thrown away"? Throwing away ideas for how to solve a problem just because they don't work disables you from modifying them to find a solution.

no you don't throw it away, and for the reason you said. you can try to fix it and, failing that keep it in the pile of false ideas which you can look through for inspiration or metaphorical spare parts (or try again to fix, later, after learning something new). (minor tangent: this is what Popper said, and Deutsch said, and me. i thought you had read a fair amount of them, so i don't know where the throwing away misinterpretation is coming from, since we all contradict it and we never advocate it *and* you already understood why it's wrong and what is better, in line with Popper.)

rejected means: you 1) think it's false (your best judgment). 2) you *do not act on it*, b/c you are *rejecting it as a solution to your problem*. (that does not mean rejecting a *different idea*, which is similar, if you can invent one, nor does it mean rejecting the idea as a solution to a *different* problem like the problem of having a flawed intellectual starting point to try to build on, which can be useful, which is the point in the prior paragraph.)

throwing away ideas is bad even if you have a strictly better one *in the usual sense* – strictly better at solving the original problem – b/c the worse idea could still contain some "spare parts" that the better idea doesn't have. however "strictly better" could be interpreted broadly enough to mean strictly better as spare parts in addition to strictly better at its primary function, in which case you can safely throw away strictly worse ideas, sorta, except the thing is the better idea will contain the worse idea and more and that's how it managed to be strictly better in the stronger sense.

> What is there to quality except problems which are supersets and subsets of other problems? E.g. the solution to the problem of AGI solves the problem of aging and the problem of nuclear disarmament, so solving AGI is a more important problem. That's quantity in the sense the emailer was talking about.

problems are not countable without a measure (a method of counting) and no reasonable measures are known. even if we found some measures, we should not expect them to actually be good for the purpose of judging idea quality – we'd need a particular measure designed to that purpose. but then we're building knowledge of idea quality (like which problems are more important to solve, which is not what would normally be considered a quantity issue) into the measure itself, so we're still basically judging by idea quality.

also it's debatable that building an AGI will lead in some kind of direct or immediate way to solving the aging problem or the problem of preventing nuclear war (which i take as the real problem being referred to there). i don't think it would, nor does ET nor DD.

but anyway, the *reason* you think that solving AGI will be so beneficial (and btw i do agree it'd be really valuable) is your understanding of the value of that achievement – it's your judgment relating to a quality issue (value of achievement, what the achievement would do), not a quantity issue, that is leading you to think it would solve aging and nuclear war. you think it'd solve those because of your understanding of what the accomplishment would be, what sorta benefits it'd have, etc, not b/c of anything that looks like a quantity measurement.


Dagny at 5:59 PM on October 20, 2018 | #11306 | reply | quote

#11305

>>>a. Corollary 1-1: We should prefer theories that explain more over those that explain less. In other words, we should prefer theories that have fewer problems (things it can’t explain) over ones that have more problems.

>>Don't judge ideas on quantity of explanation. Quality is more important. Does it solve problems we care about? Which problems are important to solve? Which issues are important to explain and which aren't?

>What is there to quality except problems which are supersets and subsets of other problems?

Elliot said what is there to quality:

>Does it solve problems we care about? Which problems are important to solve? Which issues are important to explain and which aren't?

>E.g. the solution to the problem of AGI solves the problem of aging and the problem of nuclear disarmament, so solving AGI is a more important problem. That's quantity in the sense the emailer was talking about.

More important than what? Than an idea that only solves nuclear disarmament? As Elliot said, if both ideas don't contradict each other then we can have both; if not, we shouldn't choose one on the basis of "solves more problems". If they contradict, we should create another idea that will solve the contradiction. The solution will show that one or both ideas were wrong.

Also, why do you think that the solution to AGI solves aging and the problem of nuclear disarmament?


Guilherme at 3:39 PM on October 21, 2018 | #11310 | reply | quote

> Also, why do you think that the solution to AGI solves aging and the problem of nuclear disarmament?

He presumably thinks they will be superintelligent and rapidly self-improving and rapidly solve virtually all problems. That kinda view.

---

Also, regarding the conversation, supersets are rarely a useful way to compare because of how rarely you can actually compare interesting things as supersets. E.g. suppose the rapid AGI development thing was correct and the AGIs would find a cure for aging a week after AGI is developed. Solving aging a week later is *not* a superset of solving aging right now, it's a different thing with some but not all of the same benefits.


Anonymous at 3:45 PM on October 21, 2018 | #11311 | reply | quote

>When two ideas contradict each other, then at least one is false. We can't determine that one is false by looking at their positive virtues (how wonderful are they, how useful are they, how much do they explain). Instead, we have to deal with contradictions by figuring out that an idea is actually wrong, we have to look at things critically.

All those positive virtues do not deal with the content of the contradicting ideas and how they relate to one another. It's like ignoring the contradiction and choosing by following some authority.


Guilherme at 4:17 PM on October 21, 2018 | #11312 | reply | quote

Quality vs quantity of explanation

>i don't know where the throwing away misinterpretation is coming from

Deutsch's use of the phrase "refuted and abandoned" in BoI (p341), my mis-remembering his use of the pencil+paper+waste paper basket analogy in BoI (p355), and the use of 'discard' here (1st instance): https://conjecturesandrefutations.com/?s=discard.

>>>>a. Corollary 1-1: We should prefer theories that explain more over those that explain less. In other words, we should prefer theories that have fewer problems (things it can’t explain) over ones that have more problems.

>>>Don't judge ideas on quantity of explanation. Quality is more important. Does it solve problems we care about? Which problems are important to solve? Which issues are important to explain and which aren't?

>> What is there to quality except problems which are supersets and subsets of other problems? E.g. the solution to the problem of AGI solves the problem of aging and the problem of nuclear disarmament, so solving AGI is a more important problem. That's quantity in the sense the emailer was talking about.

>problems are not countable without a measure (a method of counting) and no reasonable measures are known. even if we found some measures, we should not expect them to actually be good for the purpose of judging idea quality – we'd need a particular measure designed to that purpose. but then we're building knowledge of idea quality (like which problems are more important to solve, which is not what would normally be considered a quantity issue) into the measure itself, so we're still basically judging by idea quality.

Maybe not all problems, but problems of explaining the unexplained seem to be orderable into sets and subsets. Like, it's conceivable that solving the problem of how to make money is a sub-problem of someone's problem of how to do stuff that requires a stove. This is how I understand the importance of philosophy - philosophical problems are sub-problems of many other problems - it's hard to do things correctly and without making huge mistakes unless you understand philosophy. See page 449 of BoI about some problems being more fundamental than others. To me that seems to imply subsets and supersets.

>also it's debatable that building an AGI will lead in some kind of direct or immediate way to solving the aging problem or the problem of preventing nuclear war (which i take as the real problem being referred to there). i don't think it would, nor does ET nor DD.

The AGI could read all the research on aging much more quickly than a human could today. It would be good at making arguments and explaining what is preventing uncooperative states from approaching discussions rationally.

I said nuclear disarmament because I agree with Deutsch that problems shouldn't be prevented but solved and didn't want to solve the apparent contradiction - can you? The criterion of nuclear war not happening soon should be met - is that a counterexample to the "solutions, not preventions" idea? Problems are inevitable - that's a good argument for why ultimately the strategy should be like "nuclear-war-proof-cities and buildings". I guess a solutions-focused way to phrase disarmament would be "discard the vehicles for the execution of government employees' genocidal ideas, because they've been morally refuted" - or "turn them into safer vehicles for the execution of those impulses" (by disabling the nukes, thereby turning them into effectively toy replicas or whatever).


Evan at 10:28 PM on October 21, 2018 | #11314 | reply | quote

Refutation

>Ideas should be rejected when they are refuted. A refutation is an explanation of how/why the idea will not solve the problem it was trying to solve.

> You don't need a better idea in order to decide that an idea won't work – that it fails to solve the problem you thought it solved.

Doesn't DD disagree with this (p8 of https://arxiv.org/pdf/1508.02048.pdf requires a rival theory for a theory to be refuted)? Are there any written arguments about this online (esp involving Deutsch) that I could read? Anyone know why he disagrees?


Evan at 10:49 PM on October 21, 2018 | #11315 | reply | quote

> Like, it's conceivable that solving the problem of how to make money is a sub-problem of someone's problem of how to do stuff that requires a stove.

I don't think cooking and money making stuff offer strict subset/superset relationships. To make money, you have to take extra actions to sell stuff which aren't part of cooking. There is *partial* overlap.

Philosophy is powerful because it offers methods of thinking, and thinking is needed in other fields. But it doesn't tell you everything you need to know about those other fields (e.g. it doesn't tell you how to cook, only certain things about how to learn to cook) – it's not a superset, it just has partial overlap.

> I said nuclear disarmament because I agree with Deutsch that problems shouldn't be prevented but solved and didn't want to solve the apparent contradiction - can you?

You seem to be bringing some political ideas into this which you haven't explained. Disarming is a type of prevention, whereas creating a world where no one wants to hurt each other would be more of a full solution IMO (cuz if you disarm while still wanting to kill each other, that's only a partial solution).

I'm no doubt bringing in some political ideas too: I (and ET and DD) believe we live in a world where nuclear disarmament is an ongoing activism issue by certain anti-American types who want to disarm us – which would foolishly lead to more violence.

> The AGI could read all the research on aging much more quickly than a human could today. It would be good at making arguments and explaining what is preventing uncooperative states from approaching discussions rationally.

That depends on the details of the AGI. Not all possible AGIs would be faster than humans at reading.

And why would it be good at making arguments? You are bringing in some ideas about AGI that ET/DD/etc disagree with. They think AGIs would be universal explainers, just like humans are, rather than having some fundamental advantage. They think AGIs would need an education, and we'd be just as capable of doing education badly as we currently do with human students.


Dagny at 10:56 PM on October 21, 2018 | #11316 | reply | quote

#11315

DD quit philosophy around the time I figured out https://yesornophilosophy.com (years before creating that educational material). I asked him to engage with it but he never engaged much. He chose not to learn and respond to it, I think for similar reasons to why he quit philosophy in general. There were some related debates, like about whether justificationism is inherently and necessarily infallibilist (I said no). You can search the BoI archives (available on the google group or at http://curi.us/ebooks ) for some of the later discussions with DD and also especially i remember there was a debate on FoR list. i found a post from it. i think it spanned weeks and many subject lines, but here's (at the end of this comment) a sample to start finding stuff.

It's certainly not the only disagreement that DD chose not to address. Another was my criticism of "hard to vary" which I shared before BoI was published (a version of it is in the blog post at the top). you can also find some debate about mirror neurons on FoR list, which I don't think DD adequately addressed. DD also dissented from Szasz in some major ways, shortly before quitting, and didn't explain that adequately or address Szasz's (or my) arguments that i had previously thought him to be an advocate of.

as to the linked paper, that's long after DD quit philosophy. he no longer thinks well about issues like that. he's intentionally distanced himself from the epistemology community that he had helped create. he doesn't want to know what we think, persuade us, be persuaded by us, answer or ask questions, give or receive criticism, etc. there are no paths forward and so his comments no longer matter. his 2 books predate his quitting philosophy, but for a 2016 paper *he is no longer the same person who thought of the ideas in FoR and BoI*, nor is he now the kind of person who debates and seriously studies such things.

---

Begin forwarded message:

From: Elliot Temple <[email protected]>

Subject: Solidity is degree of justification (was: Mirror neurons at autopsy)

Date: December 22, 2012 at 11:44:40 AM PST

To: [email protected]

On Dec 18, 2012, at 2:53 PM, David Deutsch <[email protected]> wrote:

> (Well, I say we know this -- but that explanation is not nearly as solid as the ones above.

This is justificationism.

-- Elliot Temple

http://fallibleideas.com/

---

From: David Deutsch <[email protected]>

Subject: Re: Criticism and explanation versus justification

Date: January 1, 2013 at 5:53:16 PM PST

To: <[email protected]>

Reply-To: [email protected]

---

From: Anonymous Person <[email protected]>

Subject: Justificationism

Date: December 31, 2012 at 12:05:42 PM PST

To: [email protected]

Reply-To: [email protected]

---

From: Elliot Temple <[email protected]>

Subject: Re: Solidity is degree of justification (was: Mirror neurons at autopsy)

Date: December 25, 2012 at 3:58:41 PM PST

To: [email protected]


curi at 11:15 PM on October 21, 2018 | #11317 | reply | quote

Ranking problems

>>E.g. the solution to the problem of AGI solves the problem of aging and the problem of nuclear disarmament, so solving AGI is a more important problem. That's quantity in the sense the emailer was talking about.

>More important than what? Than an idea that only solves the nuclear disarmament?

More important than either solving aging or solving nuclear war.

>As Elliot said, if both ideas don't contradict each other then we can have both, if not, we shouldn't choose one on the basis of "solves more problems".

I agree you shouldn't choose one already-instantiated solution over another, but it seems like e.g. it's worse to solve problems in, say, video games for years (like, as a character in the game) and then die at age 80 from aging than it is to solve aging first, and then be able to play video games or do whatever else for much longer. There's an apparently normative ordering of problems to solve. I give below an explanation of this ordering using subsets/supersets.

---

>Also, regarding the conversation, supersets are rarely a useful way to compare because of how rarely you can actually compare interesting things as supersets. E.g. suppose the rapid AGI development thing was correct and the AGIs would find a cure for aging a week after AGI is developed. Solving aging a week later is *not* a superset of solving aging right now, it's a different thing with some but not all of the same benefits.

>I don't think cooking and money making stuff offer strict subset/superset relationships. To make money, you have to take extra actions to sell stuff which aren't part of cooking. There is *partial* overlap.

Ah, good point.

>Philosophy is powerful because it offers methods of thinking, and thinking is needed in other fields. But it doesn't tell you everything you need to know about those other fields (e.g. it doesn't tell you how to cook, only certain things about how to learn to cook) – it's not a superset, it just has partial overlap.

Maybe people should choose to work on the most fundamental problems first because their solutions can be used to solve less fundamental problems faster. It would be too much boring work to solve them without fundamental knowledge (e.g. it would be too boring to solve aging without an AGI, or factorize a large number without a quantum computer, or cook toast without a stove). Solving AGI first is more fun. That is because the explanation of how it works is more fundamental than explanations of specific things people can do. The repertoires (in terms of physical transformations) of the classes of objects that the theories explain can be ordered into supersets and subsets.


Evan at 6:12 PM on October 27, 2018 | #11323 | reply | quote

>> I said nuclear disarmament because I agree with Deutsch that problems shouldn't be prevented but solved and didn't want to solve the apparent contradiction - can you?

>You seem to be bringing some political ideas into this which you haven't explained. Disarming is a type of prevention

What do you think of my attempted non-prevention, solutions-focused phrasing then? "Preventing death" isn't preventing problems, or preventing anything really; it's disabling things that destroy problems. Like creating interventions for aging. We want to retain the freedom to explore states of mind like Alzheimer's or whatever and have whatever fun there is to be had there (not "preventing" Alzheimer's, like "preventing" kids from playing video games), but only if the technology exists to get us back to a non-aging state.

>> The AGI could read all the research on aging much more quickly than a human could today. It would be good at making arguments and explaining what is preventing uncooperative states from approaching discussions rationally.

>That depends on the details of the AGI. Not all possible AGIs would be faster than humans at reading.

Doesn't how much you can read and understand increase quickly with working memory, which the AGI could have more of with hardware add ons?

>And why would it be good at making arguments? You are bringing in some ideas about AGI that ET/DD/etc disagree with. They think AGIs would be universal explainers, just like humans are, rather than having some fundamental advantage. They think AGIs would need an education, and we'd be just as capable of doing education badly as we currently do with human students.

I don't think they'd have a fundamental advantage over humans at creating arguments, they'd just be good at it like humans. Isn't figuring out education necessary to figuring out AGI?


Evan at 6:40 PM on October 27, 2018 | #11324 | reply | quote

Disagreements with DD

>It's certainly not the only disagreement that DD chose not to address. Another was my criticism of "hard to vary" which I shared before BoI was published (a version of it is in the blog post at the top).

Is that this?:

>Hard to vary given what constraints?

>Any idea is easy to vary if there are no constraints. You can vary it to literally any other idea, arbitrarily, in one step.

>The standard constraint on varying an idea is that it still solves (most of) the same problems as before. To improve an idea, we want to make it solve more and better problems than before with little or no downside to the changes.

Because I don't see how that's a criticism of hard to vary: Deutsch usually gives the qualification "while still solving the same problems it was created in order to solve". Is the last sentence I quoted a criticism of that standard constraint?

I agree it's an unintuitive and clunky phrase. "Hard to safely vary" would be good.

>There were some related debates, like about whether justificationism is inherently and necessarily infallibilist (I said no). You can search the BoI archives (available on the google group or at http://curi.us/ebooks ) for some of the later discussions with DD and also especially i remember there was a debate on FoR list. i found a post from it. i think it spanned weeks and many subject lines, but here's (at the end of this comment) a sample to start finding stuff.

How do I search the FoR list?


Evan at 7:20 PM on October 27, 2018 | #11325 | reply | quote

> I don't think they'd have a fundamental advantage over humans at creating arguments, they'd just be good at it like humans. Isn't figuring out education necessary to figuring out AGI?

Oh, then why would they solve aging or nuclear war? Wouldn't that be more like solving pregnancy, then? Presumably building the hardware would be more convenient than pregnancy. But then you build it and you just have another person. And we have billions of those already. And it has no legs, arms or eyes, unless you build those too, which increases the expense.

And not *all possible* AGIs have a design that allows plugging in extra memory or CPUs. You want an AGI from a limited subset of all designs which has certain characteristics that you see advantages in. Perfectly reasonable of you, but be careful with strong phrases like *all possible*.

I don't think reading speed is primarily constrained by CPU or RAM. I think *human reading is not constrained by those things*. People don't use all their brainpower while reading (or anything else), they leave some idle. People waste large amounts of computational resources. And what we really care about is *thoughtful* reading, yes reading books off an SSD is faster than using eyes to read off a screen or page, but the bottleneck will be thinking related rather than on inputting the information. (I think human speed readers may be able to hit bottlenecks related to eyes or something more in that ballpark when reading very easy books, but most people do not come near that bottleneck, and it's not a relevant bottleneck when reading books that require more thought. I've read some very easy material at ~1200 wpm before and that number is increasable with more practice and skill, but people normally read at more like 200 wpm for material that isn't even very hard, and I rarely read at anywhere near 1200 wpm b/c i slow down for material requiring more thought.)

Also, more importantly, if you take a person and let him read 1000 more books ... most people wouldn't even want to. They dislike reading. And even if they did read 1000 books, they'd still be an idiot with awful ideas who didn't understand much of what they read. It wouldn't solve their problems. Similarly if you let someone live an extra thousand years, so he had more time to learn and think, i think he still wouldn't do much – it's not an issue of resources like time or computing power. Like how you can give a person a week to make an important decision and, often, he will barely do better than if he had a day.

Maybe, as part of figuring out the education problem better, we'll make much better books. But then is AGI really the key breakthrough? Or is it the books that enable billions of human beings to be much better thinkers! If AGIs are just people, *possibly* scaled by computational resources (but debatably not because people waste tons of computational resources and i'm not even convinced it's a proportion rather than just all the extra no matter how much that is) ... then the educational improvements are the breakthrough and the AGIs won't do so much. If you build a million AGIs with a thousand times the compute power of a human, it's like increasing the population by a billion, which we've already tried without it accomplishing very much intellectually. *Is your plan to build so many AGIs with so much compute power that it's like increasing the population by 100x or more?* Have you done any math on that? My wild guess is that currently all the compute power of all computing devices in existence is less than the compute power of all human brains in existence, so building compute power that's a large multiple of 7 billion human brains sounds quite hard, but I haven't really investigated the math on this. Have you and this is your plan, or is there something else to it? I figured you just thought AGIs would magically be super smart and design better AGIs which would design better AGIs which would quickly lead to god-like brilliance (as the majority of AGI fans seem to think), but if it's not that then you better give more explanation of how you get from building the first AGI to solving aging.

> Deutsch usually gives the qualification "while still solving the same problems it was created in order to solve"

That text is not in BoI. The shorter phrase, "solving the same problem", also isn't in BoI. No "while solving" either.

There are some somewhat similar things, e.g. search for "vary while" and there's 5 results, such as:

> Shaffer and Wheeler were describing the same attribute: being hard to vary while still doing the job.

But anyway all knowledge is infinitely variable while still solving the same problems or doing the same job: just add some extra, irrelevant crap. So that doesn't adequately address the variation rules.

And as ET said:

>> There are many ways to look at knowledge that are pretty equivalent.

DD doesn't know that. He thinks his particular way of looking at knowledge (hard to vary) is special and superior. That's his mistake. The way of looking at knowledge is OK. Not super useful because when you flesh out the details you don't end up with anything more actionable than the older idea of critical thinking. But worth knowing, *just like all the others* – e.g. that knowledge is adapted information or that knowledge is problem solving information or that (Elliot's original idea) knowledge is information which breaks the default symmetry of a contradiction (by default you have no way to take sides when two ideas contradict, and you have to do something to break that symmetry – knowledge is ways of taking sides instead of merely saying "in a contradiction, at least one must be false").

---

Search FoR group at https://groups.yahoo.com/neo/groups/fabric-of-reality/info


Dagny at 8:19 PM on October 27, 2018 | #11326 | reply | quote

Dagny I would have said re:AGI's aren't *fundamentally* better at criticism (they wouldn't sue a different epistemology) that AGI's will just be better at compute power, but I have to criticize your argument that people waste compute power today. Most people do, but we (AGI developers) know why they do. People that exist today get distracted by fields other than aging research, nuclear risk reduction, etc because of a coercive education system and prevalent bad philosophy.

>And not *all possible* AGIs have a design that allows plugging in extra memory or CPUs.

>be careful with strong phrases like *all possible*.

Right, you said "all possible" and I kind of ignored that, because I thought the implications of *some* AGIs being better at thinking than humans for this argument were obvious.

>I don't think reading speed is primarily constrained by CPU or RAM. I think *human reading is not constrained by those things*. People don't use all their brainpower while reading (or anything else), they leave some idle. People waste large amounts of computational resources.

Right - doing what?

>Maybe, as part of figuring out the education problem better, we'll make much better books. But then is AGI really the key breakthrough? Or is it the books the enable billions of human beings to be much better thinkers!

Interesting! Better books would make reading fun even for someone with bad philosophy.

>I figured you just thought AGIs would magically be super smart and design better AGIs which would design better AGIs which would quickly lead to god-like brilliance (as the majority of AGI fans seem to think)

>but if it's not that then you better give more explanation of how you get from building the first AGI to solving aging.

Building more Aubrey de Greys at least, that would be significant.

I think working memory is something creating detailed criticism depends on. People could run VR experiments in their head for, e.g., anti-aging technologies better if they could just hold more conditions in their mind at once. It would allow side effects to be identified much quicker. Like how chess and go engines today can see side effects of a chess move way down the line.

>all knowledge is infinitely variable while still solving the same problems or doing the same job: just add some extra, irrelevant crap. So that doesn't adequately address the variation rules.

It seems to me like adding crap doesn't produce a variant of the theory unless the crap contradicts parts of the theory. Adding the existence of carbon to general relativity doesn't create a different version of general relativity than the one you see in textbooks because it doesn't contradict it.

*Changing* crap (like changing the laws of thermodynamics to say that there do exist perpetual motion machines) does count as varying a theory.

>And as ET said:

>> There are many ways to look at knowledge that are pretty equivalent.

>DD doesn't know that.

At 13:30 here, Deutsch says he's gone through different definitions. "Pretty equivalent" is a vague label, but I don't see what he fails to understand about knowledge in that regard.

Does he disagree with any of the other definitions you gave?

https://www.stitcher.com/podcast/the-ted-interview/e/56853300?autoplay=true&fbclid=IwAR1XUgN36irzl-5Kw9qwMMgu4zMeuC1IuhBZznb9RdVnEaLJ3BOUoyZZdos


Evan at 11:43 AM on October 28, 2018 | #11327 | reply | quote

Have vs sue

>they wouldn't sue a different epistemology

I meant *have* a different epistemology, sorry.


Evan at 11:45 AM on October 28, 2018 | #11328 | reply | quote

> I have to criticize your argument that people waste compute power today. Most people do, but we (AGI developers) know why they do. People that exist today get distracted by fields other than aging research, nuclear risk reduction, etc because of a coercive education system and prevalent bad philosophy.

Distractions or focusing on a less optimal area are not the type of waste of compute power I had in mind. I think people never use all their compute power, even when focused on an important topic and doing their best. Also, this isn't about most people, it's literally everyone: I don't think *I* ever use all my compute power. I just don't think compute power is currently the bottleneck. I'm sure extra computing resources (that are more directly accessible than my iPhone) would be useful sometimes, but I don't think it'd rapidly change the world. Would it change the world when combined with much, much better educational ideas? That's hard to evaluate because I think the educational ideas would change the world, by themselves, without the AGI or computing stuff.

> Building more Aubrey de Greys at least, that would be significant.

Do you mean cloning his mind into computers, or just making AdG caliber people? If we were better at education, we could make millions of people who are more productive and smarter than him; and if we aren't better at education, AGI won't help. So I don't think AGI (and its potential to create some people with some extra compute power) is the issue. BTW if compute power were the main issue, it may turn out to be easier to hook computers up to human beings than to invent AGI. No law of physics stops human hardware from being augmented with a more direct interface than a keyboard and mouse, touch screen, or eye tracking! Making a person in silicon sounds harder to me than connecting silicon to a person, though it's really hard to know which is harder to invent because there are a bunch of unknowns involved in each project.

> At 13:30 here, Deutsch says he's gone through different definitions. "Pretty equivalent" is a vague label, but I don't see what he fails to understand about knowledge in that regard.

He thinks "hard to vary" has some sort of special status, some particular importance, rather than just being a slight rephrasing (with slightly different emphasis) of various other things he could have said instead. He thinks he came up with something really special and innovative. He's mistaken about that, and his book fails to argue the point and compare. Everything else he says in the book could be rewritten, pretty easily, without talking about "hard to vary" – but he thinks "hard to vary" is essential to the book and to epistemology. He doesn't discuss or work out how equivalent it is to other ways to looking at knowledge, or what all the qualifiers it needs are and how they change it from what the short version sounds like. He's paying selective attention to one thing of many because he incorrectly deems it more important – he has some familiarity with lots of the others, but he thinks his version is better in some way that is not explained or correct.

> It seems to me like adding crap doesn't produce a variant of the theory unless the crap contradicts parts of the theory.

DD would never agree with you. Logically, "X" and "X or Y" are different theories. He's made similar points himself (no specific source, sorry). Y can be any of infinitely many different things (errors or trivial truths are easy enough to come by, e.g. form statements by iterating N through the integers, from 1 to positive infinity, and have Y be "N = N" or "N = N + 1" (substitute in the current specific number that N is to get another statement)).
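Here's a minimal Python sketch of that construction (illustrative only; the variant function and the example claim are made up for this comment). Since "N = N" is trivially true for every integer N, each disjunction is a distinct statement that still "works" whenever the original does:

    # Generate distinct "X or N = N" variants of a claim X.
    # Each added disjunct "N = N" is trivially true, so every variant still
    # solves whatever problem X solved, yet each one is a different statement.
    def variant(x, n):
        return f"({x}) or ({n} = {n})"

    base = "eat something to stop being hungry"
    for n in range(1, 4):
        print(variant(base, n))
    # ...and so on for every integer N: infinitely many distinct variants.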

BTW that reminds me that DD has never seriously, rigorously addressed what *is* a variant of a theory vs. a non-variant. I don't think it's a very important problem to make more precise, in general, however it's elevated to crucial importance if you think "hard to vary" is super important. (I'm confident DD would agree that "X or Y" is a variant of "X", but I have no idea how he'd draw the line in other, less clear cases, and I think he doesn't know either – I think it's an unsolved problem.)

> Right - doing what?

Not using it. Leave it idle. You know how bad software on your Mac will be slow while leaving a lot of compute power idle? You can see idle compute power in Activity Monitor (like Task Manager on PC). Even good software never gets consistent 100% utilization over time. How much compute power is used depends on software design – software has to actually know how to use compute power or it won't use it, it's not used automatically.

It's kinda like that with people *and* then, on top of that, they generally don't like thinking. People dislike analyzing things, going into detail on things, etc, and avoid it. (Cuz of their awful educations.) Most people actually want to be bored a lot (I'd call it bored, they would use other terms) because bored means not doing things they hate, so they like it better. In school, they hate having to do "work" and think about what the teacher told them to, and if they finish early they don't ask for more to do, and they will get in trouble for socializing, so often they just sit there and waste time. Some people want more to do and do things – take out a book, move on to the next chapter – but most people prefer not to do that, they don't try to optimize their life for using a lot of their compute power.

> Interesting! Better books would make reading fun even for someone with bad philosophy.

Sure, but that's a separate problem from AGI. AGI just means creating a person in silicon, even if he's just as irrational as other people. Which would be great in lots of ways, just not automatically as great as some people seem to think.


Dagny at 1:20 PM on October 28, 2018 | #11329 | reply | quote

Also general-purpose hardware compute speed for single-threaded code isn't increasing very fast now. So if you think AGIs can run shitty software and it'll just go 10x faster b/c of a 10x faster CPU ... I don't think that'll get us huge wins when we're having trouble getting much past 4 GHz chips.

Will AGIs get huge benefits from having more CPUs with more cores? It's non-obvious how to take huge advantage of that.

And even just fast single thread performance doesn't automatically mean you can do the same stuff faster. For example, if the code calls a SLEEP or WAIT function (wait 500 milliseconds then continue), a faster cpu won't speed that up. Why would it do that? Well people do things like put food in the microwave then wait 3 minutes instead of trying to think during that time. People wait in line, or wait on hold on the phone, or whatever, and they could read a book or something productive but they just don't. People sing in the shower instead of thinking about anything important – and they don't sing as fast as possible, they match the song tempo. Or they have some success at something and then they stop thinking about it and have a celebration dinner, instead of continuing to do it more. They don't even try to do it more until later – so there's a large time window where they aren't working on it and more CPU wouldn't help. And people don't think much while playing sports or at the gym or during sex or lots of other times. People watch TV in a thoughtless way instead of trying to give much thought to what they watch. I think they actually waste tons of their time in ways kinda like this, where they aren't trying to think much, they have set most of their compute power to be idle, on purpose, by design, similar to just calling a sleep function in code.
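To make the SLEEP/WAIT point concrete, here's a tiny Python sketch (illustrative only; the function names are made up, and the half-second wait comes from the 500 millisecond example above). The total time is dominated by the sleep call, so a faster CPU barely changes it:

    import time

    def compute(n):
        # stand-in for actual thinking: sum the first n integers
        return sum(range(n))

    def answer_after_wait(n):
        # idle by design, like waiting on the microwave; a faster CPU speeds up
        # compute() but does nothing to the 0.5 seconds spent sleeping
        time.sleep(0.5)
        return compute(n)

    start = time.time()
    answer_after_wait(1_000_000)
    print(f"took {time.time() - start:.2f}s")  # ~0.5s, mostly spent sleeping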

If you gave people more CPU speed, some of them would be more painfully bored, others would drink more. I think lots of people would find it a problem – it makes all the ways they waste time be worse for them because they are wasting even more than before.

(Also, to give credit, I learned lots of what I'm saying to you from ET. In prior comments too.)


Dagny at 1:53 PM on October 28, 2018 | #11330 | reply | quote

AGI/Education and what is a variant of a theory

I agree education would be extremely good and basically as good as AGI, but genetically created people would still have to waste time going to the bathroom, sleeping, eating, making money, and buying living supplies, until those problems are automated. An AGI wouldn't have to.

And crucially I think that figuring out human learning is the same problem as building AGI. The knowledge we'll use to build an AGI program (that remembers unexplained observations and works until it has explained them) will also be able to correct the coercion in our education system, by allowing kids to use resources (e.g. time, or resources contained in e.g. laboratories, libraries, or the minds of teachers) to solve their problems.

>> It seems to me like adding crap doesn't produce a variant of the theory unless the crap contradicts parts of the theory.

>DD would never agree with you. Logically, "X" and "X or Y" are different theories. He's made similar points himself (no specific source, sorry). Y can be any of infinitely many different things (errors or trivial truths are easy enough to come by, e.g. form statements by iterating N through the integers, from 1 to positive infinity, and have Y be "N = N" or "N = N + 1" (substitute in the current specific number that N is to get another statement)).

I don't see where I used the idea that "X" and "X or Y" are the same theory. And adding some n=n+1 would prevent a theory from solving the problems it solves without the n=n+1 detail, so it's excluded by Deutsch's statement of the criterion.


Evan at 7:59 PM on November 1, 2018 | #11337 | reply | quote

> I don't see where I used the idea that "X" and "X or Y" are the same theory. And adding some n=n+1 would prevent a theory from solving the problems it solves without the n=n+1 detail, so it's excluded by Deutsch's statement of the criterion.

I don't know what you're talking about. You didn't say that, I did. I am telling you that "X or Y" is a variant of "X". And it solves the same problems. I'm hungry. What is a *correct* answer to how to solve that problem? "Eat McDonalds". And also, "Eat McDonalds or play Overwatch" is a true answer. And also "Eat McDonalds or 3=3+1" is also a correct answer. It's not wrong. One or the other will work, as it says. And it works if you change the 3's to 4's or 5's or 6's.

I can assure you that DD knows this kinda stuff about logic, and agrees with these logical facts.


Dagny at 8:24 PM on November 1, 2018 | #11341 | reply | quote

Wait, I got a bit lost. I was saying they are variants. You said:

> It seems to me like adding crap doesn't produce a variant of the theory unless the crap contradicts parts of the theory.

This seems to be a claim that "X or Y" is not a new or different theory – therefore it's the same theory.

If you add one thing and you don't get a variant, aren't you saying it's still the original thing, unchanged? Or are you claiming that adding one thing just makes a new non-variant? If so, you'll really have to clarify what you think the rules are for what counts as a variant (something DD ought to do as well, but has not done).


Dagny at 8:28 PM on November 1, 2018 | #11342 | reply | quote

Ideas of the form "X or Y" where X and Y contradict seem like they should be ruled out as explanations (and therefore ruled out as variants of an explanation). If we can't solve conflicts between a set of N conflicting ideas, their union doesn't seem like one explanation. It's the union of at most N explanations.

Aren't ideas of the form "X or Y" where X and Y are consistent and Y is something irrelevant we already know (like 8=8) usually just labeled "X" by a person with background knowledge that includes Y? E.g. Einstein's field equations contain an 8, and presumably anyone who understands them also knows and has no criticism of "8=8".


Evan at 9:07 PM on November 4, 2018 | #11347 | reply | quote

>If you add one thing and you don't get a variant, aren't you saying it's still the original thing, unchanged? Or are you claiming that adding one thing just make a new non-variant? If so, you'll really have to clarify what you think the rules are for what counts as a variant (something DD ought to do as well, but has not done).

So I just gave one rule, but tentatively, I'll say a variant is any permutation of any of the information variables that instantiate the explanation. Analogous to a variant of a gene.


Evan at 9:10 PM on November 4, 2018 | #11348 | reply | quote

The claims you're making are not DD's claims and they don't interest me much. I don't think they constitute a thought-out system, and I don't see what problem they are even trying to solve that I don't already have a better solution to.


Dagny at 10:33 PM on November 4, 2018 | #11349 | reply | quote

They refute your argument against "hard to vary" that every idea is infinitely variable.


Evan at 11:10 PM on November 22, 2018 | #11389 | reply | quote

If you want my attention, quote the thing you think you're refuting and clearly state why it's false. Try harder – more serious, more organized, clearer, etc. Don't just tell me that somewhere above, in a bunch of stuff I didn't agree with, thought parts were wrong, etc, is a refutation where you won the argument. I disagree. Give some kinda explanation showing how you won, telling the whole story, cuz apparently I missed it.


Dagny at 12:16 AM on November 23, 2018 | #11391 | reply | quote

*You* are the one that gave a nonspecific criticism of an unquoted thing ("the claims you're making"), you self-unaware piece of garbage. In the comment you replied to on Nov 4th, I *did* quote the thing I was refuting.


Evan at 12:54 PM on November 26, 2018 | #11407 | reply | quote

Evan, please don't post here unless you dramatically change your attitude. What you're doing is unwelcome.


curi at 1:04 PM on November 26, 2018 | #11408 | reply | quote

#11407

Evan is mad. Apparently because he thinks Dagny is self-unaware (made a mistake).


Anonymous at 1:07 PM on November 26, 2018 | #11409 | reply | quote

> Evan is mad. Apparently because he thinks Dagny is self-unaware (made a mistake).

I don't think that's why Evan is mad. But I can't articulate very well what I think Evan is mad about. It has to do with social stuff. Maybe Evan thinks Dagny is trying to make Evan look bad or to put Evan down and thereby gain relative social status over him or something.


a new anonymous at 8:57 AM on November 27, 2018 | #11412 | reply | quote

#11412

i didn't explain my comment well. i tried to use the word "Apparently" to convey that i'm speaking of only what evan said (rather than addressing his hidden premises). and evan only talked about dagny making a mistake and being self-unaware. he didn't mention his (hidden) premises.

example hidden premise that evan has (whether consciously or not): dagny is trying to win at a social game. the social game has winners and losers. the winner wins over the audience. the audience concludes that the winner is the smart one and his arguments are right, and the loser is the dumb one and his arguments are wrong. in this game, the one trying to win is not truth-seeking.

and i think evan thinks that he's not playing this social game. but his actions seem to contradict this. he insulted dagny ("piece of garbage") which is a way to try to win at such a social game.


the same anon as before, now i'm 44783 at 3:21 PM on November 27, 2018 | #11426 | reply | quote

>and i think evan thinks that he's not playing this social game. but his actions seem to contradict this. he insulted dagny ("piece of garbage") which is a way to try to win at such a social game.

you're using "contradict" incorrectly. My actions are consistent with your paranoid theory about me, yes. They are also consistent with the negation of that paranoid theory. Elliot you are emotionally troubled, petty, and inauthentic. And an uninteresting person because you waste all your time doing that.


Evan at 9:20 PM on November 27, 2018 | #11433 | reply | quote

#11433

Evan, I’m not sure I understand correctly, but are you thinking that I’m Elliot? I’m asking cuz you seemed to be replying to me and you addressed me as Elliot. Or maybe you didn’t mean to do that.


44783 at 5:05 AM on November 28, 2018 | #11435 | reply | quote

no I don't think you're elliot


Evan at 11:33 PM on November 28, 2018 | #11441 | reply | quote

https://medium.com/@thecratway/how-does-knowledge-grow-839256058ce

> We seek out criticisms to our guess and look for methods to falsify it. If we falsify the guess then we guess something different and repeat. If we fail to falsify the guess, we accept it but only tentatively.

In CR, the term "falsify" commonly refers to *empirical* refutation only. To avoid ambiguity and miscommunication, it's best to avoid the term (alone) entirely, and use "empirically falsify". And use something else like "criticize" or "refute" to speak generically.

> According to Popper, we can disprove a theory but never fully prove it.

You can't *fully* disprove a theory. Your criticism could be wrong.


curi at 5:08 PM on July 30, 2019 | #13180 | reply | quote

https://medium.com/@thecratway/a-practice-in-refutation-9a060d9abdc9

> Indeed. I will also add that we don't reject a theory just from 1 failed observation. We must also have a better theory in place. One that explains what the previous theory successfully explained, and accounts for the mismatch in observation.

If it's a universal theory (X), and you (tentatively) *accept* one failed observation, and accept the arguments about why it's a counter-example, then you must reject the theory, immediately. It is false. You may temporarily accept a replacement, e.g. "X is false but I will keep using it as an approximation in low-risk, low-consequence daily situations for now until I figure out a better replacement." A replacement could be a new theory in the usual sense, but could also e.g. be a new combination of X + additional info which more clearly specifies the boundaries of when X is a good approximation and when it's not.

For a non-universal theory Y which applies to a domain D, the same reasoning applies to one failed *relevant* observation – a counter-example within D.
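
A toy sketch of the logic, using an invented universal claim (not anything from the article under discussion):

```python
# Invented universal theory X: "every observed value is positive".
def theory_X(observation):
    return observation > 0

observations = [3, 7, 12, -1]  # the last one is an accepted counter-example

# One accepted counter-example is enough: the universal claim is false, immediately.
print(all(theory_X(o) for o in observations))  # False

# A quick replacement: "X holds within domain D (the low-risk cases we checked)."
# This meta/replacement claim has no known refutation so far.
D = [o for o in observations if o != -1]
print(all(theory_X(o) for o in D))  # True
```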

> The rest of the author of the article goes on to restate (without giving credit) The Dunhem-Quine thesis, of which Popper was aware of and dealt with.

Popper published *Logik der Forschung* in 1934, which explains and addresses that problem. Quine's *Two Dogmas of Empiricism* came out in 1951. As a result, some Popperians think it ought to be called the Duhem-Popper-Quine thesis. Rather than this issue being a refutation of Popper, he deserves credit as an originator of the idea (in addition to credit for creating CR, the philosophy which best addresses the problem).

> Logically the thesis states that when an experiment is at odds with a theory, it does not necessarily mean the theory is false, there could be something wrong with the experiment.

There could also be something wrong with the theoretical framework being used to interpret both the experiment and the theory.


curi at 1:55 PM on August 12, 2019 | #13292 | reply | quote

TheRatWay wrote on the FI Discord re https://curi.us/2124-critical-rationalism-epistemology-explanations#c13292 :

> Curi I am a bit confused at your response here.

>> If it's a universal theory (X), and you (tentatively) accept one failed observation, and accept the arguments about why it's a counter-example, then you must reject the theory, immediately. It is false.

> As I understood it before, we don't reject it until we have a better explanation. Like for the theory of relativity, we have "failed observations" at the quantum level right? But we don't reject it because we don't have another better theory yet. What am I missing?

If you know something is false, you should never accept it, because it's false.

The theory of relativity is accepted *as true* by (approximately) no one. Call it R. What people accept is e.g. "R is a good approximation of the truth (in context C)." This meta theory is not known to be false. I call it a meta theory because it contains R within it, plus additional commentary governing the use of R.

This meta theory, which has no known refutation, is better than R, which we consider false.

KP and DD did not make this clear. I have.

If you believe a theory is false, you must find a variant which you don't know to be false. You should never act on known errors. Errors are purely and always bad and *known* errors are always avoidable and best to avoid. Coming up with a *great* variant can be hard, but a quick one like "Use theory T for purposes X and Y but not otherwise until we know more." is generally easy to create and defend against criticism (unless the theory actually shouldn't be used at all, in any manner).

This is fundamentally the same issue as fixing small errors in a theory.

If someone points out a criticism C of theory T and you think it's small/minor/unimportant (but not wrong), then the proper thing to do is create a variant of T which is not refuted by C. If the variant barely changes anything and solves the problem, then you were correct that C was minor (and you can see that in retrospect). Sometimes it turns out to be harder to create a viable variant of T than you expected (it's hard to accurately predict how important every criticism is before you've come up with a solution; that can be done only approximately, not reliably).

It's easy to make a variant if you allow arbitrary exceptions. "Use T except in the following cases..." That is in fact better than "Always use T" for a T with known exceptions. It's better to state and accept the exceptions than accept the original theory with no exceptions. (It's a different matter if you are doubtful of the exceptions and want to double check the experiments or something. That's fine. I'm just talking from the premise that you accept the criticism/exception.)

You can make exceptions for all kinds of issues, not just experiments. If someone criticizes a writing method for being bad for a purpose, let's say when you want to write something serious, then you can create the variant theory consisting of the writing method plus the exception that it shouldn't be used for serious writing. You can take whatever the criticism is about and add an exception that the theory is for people in situations where they don't care about that issue.
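
A toy sketch of that "theory plus exception" pattern (the method and situation names here are invented for illustration):

```python
# Original rule of thumb T: always use writing method M.
def T(situation):
    return "use writing method M"

# Variant: the same rule plus an explicit exception covering the accepted criticism.
def T_with_exception(situation):
    if situation == "serious writing":  # the case the criticism applies to
        return "don't use writing method M here"
    return T(situation)

print(T_with_exception("casual email"))     # use writing method M
print(T_with_exception("serious writing"))  # don't use writing method M here
```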

Relativity is in the situation, or context, where we know it's not universally true but it works great for many purposes, so we think there's substantial knowledge in it. No one currently has a refutation of that view of relativity – the meta theory which contains relativity plus that commentary.


curi at 2:02 PM on August 14, 2019 | #13300 | reply | quote

Anonymous at 10:33 AM on November 12, 2019 | #14299 | reply | quote

#14299 I tried speaking to that guy. He doesn't want to discuss.

https://twitter.com/curi42/status/1193355932937244673

Is there a reason you think that post is important or some specific ideas in it that you want comments on?


curi at 11:36 AM on November 12, 2019 | #14300 | reply | quote
