> You’re telling me that that’s not the right way to make a decision, but I’m still not seeing the details of the alternative approach you recommend. Can you please spell it out in similar terms - specifically, in terms that make clear how it can be performed in a chosen amount of time (say, a week)?

This can't be answered completely directly because part of the point is to think about epistemology in a different way. Creative thinking does not follow a specific formula. (Or at least, the formula is complicated enough that we don't know all the exact details – or we'd have AGI already.)
Making decisions requires creative thought. The structure of creative thought is: solve problems using the method of guesses and criticism, which leads to a new situation with new problems.
(Guesses and criticism is the only method which creates knowledge. It's literally evolution, which is the only solution ever figured out to the problem of creating knowledge. I'm hoping you have some familiarity with this already from Popper and DD, or I could go into more detail.)
This structure is not a series of steps to be done in order. For example, guesses come before criticism to have something to criticize, but also afterwards to figure out how to deal with the criticism. And criticisms are themselves guesses. And criticisms need their own criticism to find and improve mistakes, or they'll be dumb.
And as one works on this, his understanding of the problem may improve. At which point he's in a new situation which may raise new problems already, before the original problem is resolved.
One can list options like, in response to criticism of a guess: revise understanding of that guess, make brand new alternative guesses, adjust the existing guess not to be refuted, criticize the criticism, or revise understanding of the problem.
But there's no flowchart saying which to do, when. One does one's best. One thinks and uses judgment. But some methods are bad and there's criticisms of them.
The important thing, like Popper explained about democracy, is not so much what one is doing right now, but if and how effectively mistakes are being found and improved.
Everyone has to start where they are. Use the best judgment one has. But improve it, and keep improving it. It's progress that's key. Methods shouldn't be static. Keep a lookout for problems, anything unsatisfactory, and then make adjustments. If that's hard, it's OK, exchange criticism with others whose set of blind spots and mistakes doesn't exactly overlap with one's own.
What if one misses something? That's why it's important to be open to discussion and to have some ways for ideas from the public to reach you. So if anyone doesn't miss it, you can find out. (http://fallibleideas.com/paths-forward) What if everyone misses something? It can happen. Actually it does happen, routinely. There's nothing to be done but accept one's fallibility and keep trying to improve. Continual progress, forever, is the only good lifestyle.
While there isn't a rigid structure or flowchart to epistemology, there is some structure. And there are some good tips. And there are a bunch of criticisms that one should be familiar with, and then avoid doing anything they refute.
The win/win arbitration model provides a starting point with some structure. People have an idea of how arbitration works. And they have an idea of how a win/win outcome differs from a compromise or win/lose outcome.
Internal to the arbitration, creative thought (which means guesses and criticism) must be used. How do arbitrations end in time? Participants identify the problem that it might not, guess how to finish in time, and improve those ideas with criticism. That is, in a pretty fundamental way, the basic answer to everything. Whatever the problem is, guess at the solution and improve the guesses with criticism.
This raises questions like:
- what if one can't think of any guesses for something?
- what if one has some bad guesses, but can't think of any criticisms?
- what if one has several guesses and gets stuck deciding between them?
- what if different sides in an arbitration disagree strongly and get stuck?
- what if no one has any ideas for what would be a win/win solution?
- what if the sides in the arbitration keep fighting instead of discussing rationally?
- what if the arbitration runs into resource limits?
- what if there are one or more issues no one has an answer to – how can arbitration work around those?
Rather than a flowchart, epistemology offers answers to all of these questions. Does that make sense? Would you agree that the loose method above, plus answers to all questions like this (and all criticisms) would be sufficient and satisfactory?
If you agree with the approach of addressing those questions (plus you can add some), and it would persuade you, then I'll do that next. Part of the reason the discussion is tricky is because we're starting with different ideas of what the goalposts should be.
I would also like to give more in the way of concrete examples but that's very hard. I can tell you why it's hard and try some examples.
People use these methods, successfully, hundreds of times per day. They get win/win solutions in mental arbitrations, routinely. Most of these are individual, and some are in small groups, and it isn't routine in large groups.
Examples of these come off as trivial. I'll give some soon.
People also get stuck sometimes. And what they really want are examples of how to solve the problems they find hard, get stuck on, and are irrational about. But I can't provide one-size-fits-all generic examples that address whatever individual readers are stuck on. And even if only talking to one person, I'd have to find out what their problems are, and solve them, to provide the desired examples.
If I wasn't concerned about privacy, I could give examples of problems that I had a hard time with, and solved. But it wouldn't do any good. People will predictably react by thinking my solution wouldn't work for them because they are different (true), or that problem I struggled with was always easy for them (common), or knowing my solution to my problem won't solve their problems (true).
Here are some examples of routine win/win arbitrations:
Guy is hungry but doesn't want to miss TV show. Decides to hit pause. Solved. (Other people would grab some food during a commercial. The important thing is the person doing it fully prefers it for their life.)
People want to eat together, but want different types of food. Go to a food court with multiple restaurants. Solved.
Person wants to buy something but hesitates to part with their money. Thinks about how awesome it would be, changes mind, happily buys. Solved.
Person wants to buy something but hesitates to part with their money. Estimates the value and decides it's not actually worth it. Changes mind about wanting it, happily doesn't buy. Solved.
Person wants to find their keys so they can leave the house, but doesn't feel like searching. Thinks about how great the sushi will be, finds he now wants to search for the keys, does so happily. Solved.
Person wants to get somewhere in car but is in unwanted traffic, some part of his personality wants to get mad. He thinks about how getting mad won't help, doesn't get mad. Solved.
All life is creative problem solving, and people do it routinely. And people change their mind about things, even emotions, routinely, in a win/win way without regrets or compromise. But people don't find these examples convincing, because they see these examples as unlike whatever they find hard and therefore notable. Or they find some of these hard, e.g. they hate looking for their keys, or have "road rage" problems.
Here's a more complex hypothetical example.
I want to borrow my child's book, which is in the living room, but he's not home. I have conflicting ideas about wanting the book now, but not wanting to disturb his things. While I want to respect his property, that doesn't feel concretely important, so I'm not immediately satisfied. I resolve this by remembering he specifically asked me never to disturb his things after a previous mistake. I don't want to violate that, so I change my attitude and am concretely satisfied that I shouldn't borrow his book, and I'm happy with this result.
I go on to brainstorm what to do instead. I could read a different book. I could buy the ebook from Amazon instantly (many people would consider this absurd, but books are very very cheap compared to the value of getting along slightly more smoothly with one's family). I could write an email instead of reading. I could phone my kid and ask permission.
Here is where examples can get tricky. Which of those solutions do I do? Whichever one I'm happy with. It depends on the exact details of my ideas and preferences. But whichever option works for me might not work so well for a reader imagining themselves in a similar situation. Their problem situation is different than mine, and needs its own creative problem solving applied to it.
And what if I don't like any of these options, can't think of more, and get stuck? Well, WHY? There is some reason I'm getting stuck, and there is information about what the problem is and why I'm stuck. What I should do depends on why I'm stuck. And why you would be stuck in a similar situation won't be the same as why I got stuck. You won't identify with my way of getting stuck, nor with what solutions work to get me unstuck.
So, I decide that phoning is easy, and I don't like giving up without trying when trying is cheap. So I phone.
9/10 times in similar situations with similarly reasonable requests, kid says yes. This time, kid says no.
9/10 scenarios kinda like this where kid says no, I HAPPILY accept this and move on to figuring out what else to do. This is easy to be happy to go along with because I respect (classical) liberal values, and I know there are great options available in life which don't violate them, so I'm not losing out.
1/10 times, I tell my kid how I'm really eager to read the book, and there's no electronic version for sale.
Then, 9/10 times, kid says "oh ok, then go ahead". 1/10s he still says no.
If he still says no, 9/10 I accept it because I care about respecting his preferences for his property, and I have plenty of alternative ways to have a good day. I want both a good day and to respect his property, and I can have both. And I don't want to be pushy and intrude on his life over something minor – it's not even worth the transaction costs of making a big deal out of – so I won't.
And 1/10 times I say "i'm sorry to bug you about this, but i ran out of stuff to do and was actually kinda sad, and then i thought of this one thing i wanted to do, which is read this book, and i got excited, and i'm really dreading going back to my problem of being bored and sad. so, please? what's the big downside to you?"
And then 9/10 times kid agrees, but 1/10 times he says "still no, sorry, but i wrote private notes in the margins of that book, do not open it".
And the pattern continues, but additional steps get exponentially rarer. The pattern is that at each step, usually one finds a way to prefer that outcome, and sometimes one doesn't and continues. Note how at each step it gets harder to continue asking; it takes more unusual reasons.
DD persuaded me of the rule of thumb that approximately 90% of interpersonal conflicts, dealt with rationally, get resolved per step trying to resolve. I know this isn't intuitive in a world where people routinely fight with their families.
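That rule of thumb is what makes additional steps exponentially rarer. As a purely illustrative calculation (the 90% figure is the rule of thumb just mentioned, not measured data), the chance a conflict is still unresolved after n resolution attempts is (1 - 0.9)^n:

```python
# Illustrative only: ~90% per-step resolution rate is the rule of thumb
# from the discussion above, not measured data.
p_resolve = 0.9

# Probability a conflict remains unresolved after n steps: (1 - p)^n
for n in range(1, 5):
    still_stuck = (1 - p_resolve) ** n
    print(f"still unresolved after {n} step(s): {still_stuck:.4f}")
```

So reaching a third or fourth round of asking happens only about once in a thousand or ten thousand conflicts, matching the pattern in the book example above.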
If you disagree, it's not so important. If someone's methods are wrong, and it causes any problems, and someone else knows better, that's no big deal. Methods can be criticized and changed. Correct or not, the approach in the example is – like many others – just fine as a starting point.
All of life can and should go smoothly with problem solving and progress. It often doesn't because of irrationality, because of not understanding the right epistemology, because of bad values, because of anti-rational memes, because of deeply destructive parenting and education practices. All of those are solvable problems which change people's intuitions about what lifestyles work, but which do not change what epistemology is true.
As a final example, let's take cryonics. Here is something I can say about it: I have given some arguments which you have not criticized and I have not found refutations for anywhere else. On the other hand, if you tell me any arguments against my position, I will either refute ALL of them or change my mind in some way to reach an uncriticized position. (Note refuting includes not just saying why the argument is false, but also for example why it's true but doesn't actually contradict my position.)
You create a 10% estimate in a vague way, which you describe as a subjective estimate of a feeling. This hides your actual reasoning, whatever it is, from criticism – not just criticism by me but also by yourself.
You gather arguments on all sides, but you don't analyze them individually and judge what's true or not and why. I do. That is a very key thing – to actually go through the arguments and sort out what's right and wrong, to learn things, to figure the subject out. It's only by doing that, not just kinda making up an intuitive conclusion, that progress and problem solving happen.
You see the situation as many arguments on both sides and want a method for how to turn those many arguments into one conclusion.
I see the situation as many arguments, which can be analyzed and dealt with. Many are false, and one can look through them and figure things out. My current position is that literally every known pro-cryonics-signup argument is false in the context of my situation, and most people's situations.
(Context is always a big deal. People in different situations can correctly reach different conclusions specific to their situation. For example a rich person with a strongly pro-cryonics wife might find signing up increases marital harmony, and has no downsides that bother him, even though he doesn't believe it can work.)
It's this critical analysis of the specific arguments by which one learns, by which progress happens, etc. It always comes down to critical challenges: no matter how great some side seems, if there is a criticism of it, that criticism is a challenge that must be answered, not in any way glossed over.
If the criticism cannot be refuted (today), one must change his mind to something no longer incompatible with the point (pending potential new ideas). It's completely irrational and destructive of problem solving to carry on with any idea which has any criticism one can't address.
There are many ways to deal with criticisms one can't directly refute. And these methods are themselves open to criticism. We could talk more about how to do this. But the key point is, any method which doesn't do this is very bad. Such as justificationism, and the specific version of it you outlined, which allow for acting contrary to outstanding unanswered criticisms.
> The first may be only a point of clarification. While I certainly agree that we rationally choose which correlations to pay attention to on the basis of explanations, I think we have a problem that those explanations themselves emerge from analysis of other correlations, which were paid attention to because of other explanations, and so on, right back to correlations that we arbitrarily decide we don’t need to explain, such as that every time we measure the fundamental physical constants we get the same answers. This seems to me to tell us that explanations can’t be viewed as inherently better than correlations - they are part and parcel of a single process, just as science proceeds by an alternation between hypothesis formation and hypothesis testing. What am I missing?

Explanations come from brainstormed guesses in relation to problems. (And they are improved with criticism for error-correction, or else the quality will be awful.)
There is no process which starts with correlations and outputs explanations (or more generally, knowledge).
Most correlations are due to coincidence. They are not important.
A correlation matters when referred to in an explanation. It has no special interest otherwise. Just like dust particles, blades of grass, mosquitos, copper atoms. There's dust all over the place, most is not important, but some can be when mentioned in an explanation.
The issue of getting started with learning is not serious, because it doesn't really matter where one starts. Start somewhere and then make improvements. The important thing is the process of improvement, not the starting point. One can start with bad guesses, which are not hard to come by.
Also we do have an explanation of why different experiments measuring the speed of light in a vacuum get the same answer. Because they measure the same thing. Just like different experiments measuring the size of my hand get the same answer. No big deal. The very concepts of different photons all being light, and of them all having the same speed, are explanatory ideas which make better sense out of the underlying reality.
> The second one is possibly also just something I’m misunderstanding. For any pioneering technology that we have not yet perfected - SENS, cryonics, whatever - there are always explanations for why it is feasible (or, in the case of cryonics, why part of it has already been achieved even though we won’t know that for sure until the rest of it also has) and other explanations for why it isn’t. I think what you’re saying is that the correct thing to do is to debate these explanations and eventually come up with an agreed winner, and that in the meantime the correct thing to do is to triage, by debating explanations for what we should do in the absence of an agreed winner between the first set of explanations, and act on the basis of an agreed winner between that second set of explanations. But I don’t see how that can work in practice, because the second debate will typically come down to the same issues as the first debate, so it will take just as long. No?

A second debate on the topic, "given the context of issues X, Y, Z being unresolved, now what?" cannot come down to the same issues as the first debate, because they're specifically excluded.
It may be helpful to look at it in terms of what IS known. Part of the context is people do know some things about SENS, cryo, or whatever topic. So there is an issue of, given that known stuff, what does it make sense to do about it?
When discussions get stuck in practice, it's not because of ignorance. If no one knows X yet, that doesn't make two people disagree, since that's the same for both of them, it's a point in common. The causes of disagreements between people are things like irrationality or different background knowledge like values or goals; perhaps someone has a lifetime of tangled thinking that's hard to sort out. The solution to those things are (classical) liberal values like tolerance, individualism, leaving people alone, and only interacting for mutual (self-perceived) benefit.
Take for example:
http://www2.technologyreview.com/sens/
The reason those debates didn't resolve your differences is that those people directed their creativity towards attacking SENS, not truth-seeking. Rational epistemology only works for people who choose to use it. The debate format was also deeply unsuited to making progress because it allowed very little back-and-forth to ask questions and clear up misunderstandings. It wasn't set up for creating mutual understanding; none of your opponents wanted to understand SENS; the results were predictable. But that has nothing to do with what's possible. (BTW, awful as this sounds, it isn't such a big deal, since they aren't going to use violence against you. Not even close. So you can just go on with SENS and work together with some better people.)
BTW notice the key thing about that debate: you could answer all of their criticisms. ALL. Specifically, not vaguely.
And I think you know that if you couldn't, that'd be a serious problem for SENS.
Take the claim, "even though these [SENS] categories are sometimes so general as to be almost meaningless, they still omit many age-related changes that contribute to senescence, including age-related increases in oxidative damage and changes in gene expression."
If you had no answer to that, SENS would be in trouble. It only takes one criticism to refute something. But you had the answer. And not in some vague way like, "I feel SENS is 10% likely to work, down from 20% before hearing that argument". But specifically you had an actual answer that makes the entire difference between SENS being refuted and SENS coming out completely fine.
This is a good example of how things can actually get resolved in debates. Like the claim about oxidative damage, that can be resolved, you knew how to resolve it. Progress can be made, things can be figured out. (Though not for those who aren't doing truth-seeking.)
Challenges like the oxidative damage argument can routinely be answered and discussions can resolve things. What you said should have worked. It only didn't because the other guy was not using anything resembling a rational epistemology, and did not want progress in the discussion.
> The third one is where I’m really hanging up, though. You say a lot about good and bad explanations, but for the life of me I can’t find anything in what you’ve said that explains how you’re deciding (or are claiming people should decide) HOW good an explanation needs to be to justify a particular course of action.

Answer: that is the wrong question.
There is no such thing as how epistemologically good an explanation is.
The way to judge explanations I'm proposing is: refuted or non-refuted. Is there a criticism pointing out any flaw whatsoever? Yes or no?
No criticism doesn't justify anything. It just makes more sense to act on ideas with no known flaws (non-refuted) over ideas with known flaws (refuted).
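This binary judgment can be pictured, as a rough sketch only (the ideas and criticisms listed are hypothetical placeholders, not a summary of anyone's actual arguments), as a filter rather than a scoring function:

```python
# Sketch of binary (refuted / non-refuted) judgment, in contrast to
# assigning ideas degrees of goodness. All entries are hypothetical.
ideas = {
    "act on plan A": ["unanswered criticism X"],  # refuted: has a known flaw
    "act on plan B": [],                          # non-refuted: no known flaw
}

def non_refuted(ideas):
    # An idea is refuted if it has at least one unanswered criticism;
    # otherwise it remains a live option to act on.
    return [idea for idea, criticisms in ideas.items() if not criticisms]

print(non_refuted(ideas))  # prints ['act on plan B']
```

Note the result is a list, not a score: the goal, per the surrounding text, is to end up with exactly one non-refuted idea, e.g. by fixing minor flaws or by criticizing the criticisms.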
One common concern is criticisms pointing out minor flaws, e.g. a typo, or that a wording is unclear. The answer is: if the criticism really is minor, then it will be easy to fix, so fix it. Create a new idea (a slight modification of the old idea) to which the criticism doesn't apply.
Or explain why a particular thing that seems like a flaw in some vague general way is not a flaw in this specific context (problem situation). Meaning: it seems "bad" in some way, but it won't prevent this approach from working and solving the problem in question.
For example, someone might say, "It'd be nice if the instruments on the space shuttle were 1000x more accurate. It's bad to have inaccurate instruments. That's my criticism." But a space shuttle has limited, finite goals; it's not supposed to be perfect and do everything. It's only supposed to do specific things, such as bring supplies to the space station, deploy a satellite, or complete specific experiments. Whatever the particular mission is, if it can be completed with the less accurate instruments, then the "inaccurate instruments are bad" criticism doesn't apply.
> In the case of cryonics, you’ve read a bit about where the practice of cryonics is today and you’ve come to the conclusion that it doesn’t currently justify signing up, because you prefer the arguments that say the preservation isn’t good enough to the ones that say it is. But you don’t say where the analysis process should stop.

Stop when there is exactly one non-refuted idea. I am unaware of any non-refuted criticisms of my position on the matter.
This has nothing to do with preferring some arguments. I am literally unaware (despite looking) of any argument to sign up with Alcor or CI, that I can't refute right now today. (Though as I mentioned above, I have in mind my situation or most situations, but not all people's situations. In unusual situations, unusual actions can make sense.)
In your method you talk about gathering arguments for both sides. I have tried to do that for cryonics, but I've been unable to find any arguments on the pro-cryonics side that survive criticism. Why do you give it a 10% chance to work? What are the arguments? And meanwhile I've given arguments against signing up which you have not individually, specifically refuted. E.g. the one about how organizations that are bad at things don't solve hard problems, because problems are inevitable, so without ongoing problem solving it won't work.
I think a lot of the reason debates get stuck is specifically because of justificationist epistemology. People don't feel the need to give specific arguments and criticisms. Instead they do things like create arbitrary justification/solidity/goodness scores that are incapable of resolving the disagreements between the ideas.
> For example, you say: "percentage of undamaged brain cells could be tried in a measure because we have an explanatory understanding that more undamaged cells is better. And we might modify the measure due to the locations of damaged cells, because we have some explanatory understanding about what different regions of the brain do and which regions are most important." We might, yes, or we might not. How do you decide whether to do so?

Creative thinking. Guess whether it's a good idea and why. Improve this understanding with criticism.
> And if you decide that we should take account of location, why stop there? Suppose that someone has proposed a reason why neurons with more synaptic connections to other neurons matter more. It might be a really really hand-wavey explanation, something totally abstract concerning the holographic nature of memory for instance, but it might be consistent with available data and it might also be really hard to falsify by experiment.

Almost all refutation is by argument, not experiment. (See the section about the grass cure for the cold in FoR, where DD explains that even most ideas which are empirical and could be dealt with by experiment, still aren't.)
Since you call it "hand-wavey", what you mean is you have a criticism of it. The thing to do is state the criticism more clearly, and challenge the idea: either it answers the criticism or it gets thrown out.
> So, should we take it into account and modify our measure of damage accordingly? What’s worse, we don’t even know whether we have even heard all the relevant explanations that have been proposed, even ignoring all the ones that will be proposed in the future. There might be ones that we don’t know that conflict with the ones we do know, and that we might eventually decide are better than the ones we do know. Shouldn’t we be taking account of that possibility somehow?

Yes. One should make reasonable efforts to find out about more ideas, and not to block off other people telling one ideas (http://fallibleideas.com/paths-forward).
You will ask what's reasonable, how much is enough. Answer: creative thinking on that point. Guess what's the right amount of effort to put into these things (given limits like resource constraints) and refine the guess with some critical thinking until it seems unproblematic to one. Then, be open to criticism about this guess from others, and try to notice if things aren't going well and one should reconsider.
> This seems to bring one inexorably back to the probabilistic approach. Spelling it out in more detail, the probabilistic approach seems to me to consist of the following steps:
>
> - Gather, as best one can in the time one has decided to spend, all the arguments recommending either of the alternative courses of action (such as, sign up with Alcor or don’t);

How? This vague step hides a thousand problems in its details.

> - Subjectively estimate how solid the two sets of arguments feel;
>
> - Estimate how often scientific consensus has, in the past, changed its mind between explanations that initially were felt to differ in solidity by that kind of amount, and how often it hasn’t (with some kind of weighting for how long the prevailing view has been around);

This has a "future will resemble the past" element without a clear explanation of what will be the same and what context it depends on.

And it glosses over the details of what happened in the various cases, and the explanations of why.

It also gives far too much attention to majority opinion rather than substantive arguments.

It's also deeply hostile to large innovations in early stages. Those frequently start with a large majority disagreeing and feeling the case for the innovation has very low solidity.

If you look at the raw odds that a new idea is a brilliant innovation, they suck. There are more ways to be wrong than right. You need more specific categories like, "new ideas which no one has any non-refuted criticism of" – those turn out valuable at much higher rates.

> - Use that as one’s estimate of one’s likelihood of being right that the seemingly more solid of the two sets of explanations is indeed the correct set, hence that the course of action that that set recommends is the correct course;

This approach involves no open-ended creative thinking and does not actually answer the many specific criticisms and arguments. Nor does it come up with an explanation of the best way to proceed. It does not create knowledge.

> - Decide what probability cutoffs motivate each of the three possible ways forward (sign up and focus on something else until some new item of data is brought to one’s attention, don’t sign up and focus on something else until some new item of data is brought to one’s attention, or decide to spend more time now on the question than one previously wanted to), and act accordingly.
This proposed justificationist method does not even try to resolve conflicts between ideas. It doesn't try to figure out what's right, what's wrong, or why. There's no part where anything gets figured out, anything gets solved, anyone learns anything about reality. It's kind of like a backup plan, "What if rational thinking fails? What if progress halts? Under that constraint, what could we do?" Which is a bad question. It's never a good idea to use irrational methods as a plan B when rational methods struggle.
One of the weirder things about discussing justificationism is, I know you frequently don't use the method you propose. It's only to the extent that you don't use this method that you get anywhere. Like at http://www2.technologyreview.com/sens/
You didn't present your subjective feeling of the solidity of SENS, or estimates about how often a scientific consensus has been right, or anything like that. You did not gather all the anti-SENS arguments and then estimate their solidity and give them undeserved partial credit without figuring out which are true and which false. Instead, you gave specific and meaningful arguments, including refuting ALL their criticisms of SENS. Then you concluded in favor of SENS not on balance – you didn't approach it that way – but because the pro-SENS view is the one and only non-refuted option available for answering the debate topic.