Big Hero 6 Comments

Big Hero 6 movie comments. You should watch the movie before reading this.

SPOILERS

the movie starts with the main character (Hiro) getting arrested for peaceful activities that should be legal. no one objects to this state of affairs, including Hiro himself, who thinks the illegal activities are good things to do.

within the first 10 minutes the movie is telling HUGE HUGE HUMONGOUS GIGANTIC SUPER BIG lies about what university is like, by presenting a completely fake school lab scene that’s waaaay cooler than real ones. the lab is also extremely expensive, even after you tone it down from sci-fi to realism.

so then after the utopia-paradise convinces the main character to go to school, he wants to make something awesome to get in. so he ... pulls out a pencil, a pencil sharpener, and paper. umm wait what? this is a sci-fi world with advanced robotics technology, and this guy is super into technology ... but he doesn’t use an iPad-like device or even a computer keyboard for writing, brainstorming, etc? he handwrites?? c’mon. wtf.

so then hiro makes awesome amazing future tech in a short time period, PRIOR TO attending school, in hopes of getting in. note his ability to make this indicates he very much does not need school, even the super awesome fake school.

so then he gets 2 choices: get into school, or sell out to a capitalist guy criticized for caring about “self-interest”. Hiro turns down a fortune at age 14 without really thinking about it or finding out the details of his options. the professor mentor dude and his elder brother both treat this as wise.

so after the professor and his brother die, why doesn’t he consider going with the capitalist option at that point?

chasing baymax scene is kinda ridiculous. the SLOW MOVING robot keeps being in sight a little ways away, but hiro is constantly running full speed and just missing him and not catching up.

no, low battery does not make robots act drunk.

Hiro lies to his parent figure when leaving the house to chase Baymax, and lies more when returning. the movie treats this as needing no significant explanation. plenty of kids watching understand the necessity of heavily lying to parents...

movie plays a lot on the theme of an outsider (the robot) who doesn’t understand cultural stuff that normal people take for granted. this lack of understanding is supposed to be humorous. one concrete example is when the robot doesn’t understand fist bumping.

the policeman completely ignores his report of major violence and danger. he treats robots like a fantasy story, even though this sci-fi world has significant robotics and Hiro is giving the police report with a very impressive robot standing next to him. that’s really bad. i think it’s a bit unrealistic. i don’t think police are quite that bad. at least for adults. maybe they are when a kid is doing the report. i don’t know. in any case, i think it comes off as identifiable to the children in the audience – the authorities in their lives (parents and teachers primarily) repeatedly won’t believe them, ignore reasonable requests, make them try to deal with stuff on their own. that theme is very realistic for kids.

so why doesn’t Baymax save any photographs or videos from the camera it uses for eyesight? that sure would have been convenient at the police station. also Hiro should have taken a picture or video with his smartphone or something.

the friend group in the movie are all very strong personality archetypes. this isn’t very realistic. most people are more mild, with a bit of some archetypes but also a lot of mild-mannered normalcy, compromises, etc. there’s general pressure on people not to be strong outliers. the strong extremes of the archetypes are a bit rare, but more entertaining and striking for movies.

after they go to the mansion and get upgrades, Hiro does a few grand in property damage while having Baymax show off his new rocket fist. no one takes notice of this.

when Hiro goes flying around on Baymax, he almost dies a few times. people don’t take brushes with death nearly seriously enough. they are too focused on the actual outcome instead of something more like the set of possible outcomes and their probabilities.

they also ignore the issue of acceleration forces acting on Hiro while he rides (and he’s frequently only attached to Baymax at like 2 points on his feet or knees, which would put a ton of strain on those points). going high speed then changing direction very abruptly to go high speed another way requires a better setup, or you black out or die.
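
to give a rough sense of scale, here’s a quick centripetal-acceleration estimate (the speed and turn radius are my guesses for illustration, not numbers from the movie):

```python
# Centripetal acceleration for an abrupt high-speed turn: a = v^2 / r.
# Speed and turn radius are invented for illustration, not taken from the movie.
speed = 50.0    # m/s, roughly 110 mph
radius = 20.0   # m, a very tight turn at that speed
g = 9.81        # m/s^2, standard gravity

accel = speed ** 2 / radius    # centripetal acceleration in m/s^2
print(f"{accel / g:.1f} g")    # ~12.7 g; sustained ~9 g is blackout territory even for trained pilots
```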

also they fly around the city for all to see, which is really stupid given their intent to fight someone using this technology. better if it’s a secret, keep the element of surprise.

now i figure they have enough evidence to get help from the cops or military. like they have Baymax’s medical scan of the badguy. Baymax has some data. and their car got trashed and they got chased through the streets, so some stuff must have gotten on camera and had witnesses. but they don’t consider that at all, even though the micro-bot army is VERY VERY dangerous and serious and bringing in the military really is called for. it’s extremely reckless and stupid for them to go after the guy themselves, and also to do it without leaving full data and notes behind in case they die, so the military at least has their info in order to fight if necessary.

the girls in the friend group are real thin.

for the fight after watching the teleporter video, they mostly fight in a sort of one-at-a-time way that is really convenient for showing what’s happening more easily i guess, but pretty damn lame if you think about it.

so Hiro himself, the protagonist a lot of the audience is meant to identify with, becomes kinda murderous pretty abruptly. that’s treated as just how even the best people are.

@Baymax tests and creation: so the first time it gets past saying Hi, on attempt 84 (a very low number, presented as a very high number), the medical scan works perfectly on the first try of that subsystem. that’s completely ridiculous.

so the capitalist dude doesn’t turn out to be the badguy. also he actually spends huge piles of government money.

the professor guy is pretty dumb. his daughter participated in the test voluntarily. now he wants to be a murderer. he also doesn’t seem to mind doing millions of dollars of property damage that hurts people other than his target, and he doesn’t seem to mind trying to kill Hiro and friends, whom he has no grudge against.

i don’t think the intended moral of the story is that irrational emotional family attachments are one of the more dangerous forces remaining internal to peaceful Western society. yet that’s kinda there.

Hiro and friends have massively higher tolerance for danger and brushes with death than most people. also they are wrong and it’s bad. and it doesn’t even occasion comment in the movie.

so remember how the acceleration was really unrealistic when flying earlier? it’s a lot worse now. when they are through the portal, Hiro climbs onto the pod thing the girl is in and hangs on to that while Baymax flies around pushing it. so now he doesn’t have the special attachment points between his suit and Baymax that kept him from falling off before. but he doesn’t fall off. cuz ... no reason. he barely even tries to hold on, sometimes letting go with his hands and just kinda crouching on it.

shooting the rocket fist to get them out is pretty stupid. cuz he could just shoot it directly away from them and that’d work too, and then Baymax would also be saved. it’s not like there was a big hurry, they waste time saying bye. #physics
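
a back-of-the-envelope momentum sketch of that idea (all the masses and the launch speed are made-up numbers, just to show the direction of the effect):

```python
# Conservation of momentum: launching the fist one way gives the rest of the
# system an equal and opposite momentum. All numbers are invented for illustration.
m_fist = 15.0     # kg, guess for the detachable rocket fist
m_group = 150.0   # kg, guess for Baymax + Hiro + the pod
v_fist = 80.0     # m/s, guess for the fist's launch speed

v_group = m_fist * v_fist / m_group         # recoil speed of everyone else, m/s
print(f"recoil speed: {v_group:.0f} m/s")   # ~8 m/s back toward the exit, Baymax included
```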

don’t they have backup copies of the robot’s memory cards, design schematics, etc, etc???

the news broadcast indicates the heroes don’t get credit. why? and how do they manage anonymity after all the public displays?

Elliot Temple | Permalink | Comments (0)

Two Firefight Errors

This post contains large spoilers for the book Firefight by Brandon Sanderson. He is my favorite fantasy author. In this post, I explain two errors I noticed in the book.

1) Is Regalia a High Epic?

“Abigail isn’t a High Epic,” Tia said.

“What?” Exel said. “Of course she is. I’ve never met an Epic as powerful as Regalia. She raised the water level of the entire city to flood it. She moved millions of tons of water, and holds it all here!”

“I didn’t say she wasn’t powerful,” Tia said. “Only that she isn’t a High Epic—which is defined as an Epic whose powers prevent them from being killed in conventional ways.”
That made sense to me, but it's contradicted when David kills Regalia:
“No!” My arms trembled. I shouted, then brought the blade down.

And killed my second High Epic for the day.
The page with the ISBN number also calls Regalia a High Epic:
Summary: “David and the Reckoners continue their fight against the Epics, humans with superhuman powers, except they may have met their match in Regalia, a High Epic who resides in Babylon Restored, the city formerly known as the borough of Manhattan”— Provided by publisher.

2) Geometry

[diagrams omitted: the left diagram shows the overlapping circles from Regalia’s appearances; the right-hand diagrams show two possible placements of a new circle]

The left diagram shows how they were trying to find Regalia’s base. That part makes sense. Every time she appears, you know she’s within a certain radius of that point, so you draw a circle. The overlap of all the circles is where her base could be. (You can also draw circles for places she didn’t appear, which then rule out those areas.)

To narrow it down further, you need a new circle which overlaps part of the remaining possible city area (not none of it, and not all of it). As you can see in the right-hand diagrams, this new circle could be near an existing circle, or off in a new area. Both work. But the book says:
From what I eventually worked out, my points had helped a lot, but we needed more data from the southeastern side of the city before we could really determine Regalia’s center base.
That doesn’t make sense. The key thing is the placement of the data points, so that the new circles overlap part of the remaining area where the base could be. That can be done from any direction, not just the southeast.
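
Here’s a toy sketch of that reasoning (all coordinates and radii are made up, not from the book). A new circle narrows the search exactly when it contains some, but not all, of the remaining candidate points, and that can happen for a circle in any direction:

```python
import math

# Toy model: each appearance gives a disk the base must lie inside. The candidate
# region is the intersection of those disks, approximated here on a grid of points.
def in_disk(p, center, radius):
    return math.dist(p, center) <= radius

appearances = [((0, 0), 5.0), ((3, 0), 5.0)]   # (center, radius) per sighting, made up
grid = [(x / 2, y / 2) for x in range(-20, 21) for y in range(-20, 21)]
candidates = [p for p in grid if all(in_disk(p, c, r) for c, r in appearances)]

def narrows_search(center, radius):
    """A new disk helps iff it contains some, but not all, of the candidates."""
    inside = sum(in_disk(p, center, radius) for p in candidates)
    return 0 < inside < len(candidates)

# Direction doesn't matter; what matters is cutting the remaining candidate region.
print(narrows_search((1.5, 4.0), 5.0))    # True: a sighting to the "north" helps
print(narrows_search((1.5, -4.0), 5.0))   # True: so does one to the "south"
```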

Elliot Temple | Permalink | Comments (0)

Mindless or Perfect?

https://news.ycombinator.com/item?id=8859974
They're not wrong in that people often rationalize their 'instinctual' choices, but to imply that nobody is cognizant of any of their thoughts or biases and that we're all slaves to our lizard brains is a bit of a stretch.
This is a massive false dichotomy.

On the one hand: people are biased in ways they aren't aware of, aren't in control of their lives, and are therefore slaves to their genes.

On the other hand: since people aren't animal slaves, they aren't super biased, super unaware of their biases, and so on.

I think the truth is pretty straightforward here: people are hugely biased, unaware of it, and bad at controlling their lives. But that doesn't mean they are slaves. They could do something about it. But evading and dismissing the issue, or accepting that's just how people are, won't fix it.

Control over your life is possible but not automatic. It shouldn't be treated as either both possible and automatic, or neither. That's the false dichotomy again.

What will fix this problem is philosophy. Read Ayn Rand and Karl Popper, among others. Or come discuss this matter at the Fallible Ideas Discussion Group. Learn better ideas and integrate them into your life so you actually live by them.

The problem of bias (and more generally mistakes) is real, but also solvable. You don't have to choose from a false dichotomy of denying the problem exists (or downplaying it heavily) or else accepting the problem as a negative feature of human life. You can recognize the problem exists and then take steps to deal with it. A lot is actually known about how to handle this, but most people don't bother learning it.

Elliot Temple | Permalink | Comments (0)

Lying About Poker

http://www.europokerpro.com/games/fixed-limit-holdem-rooms/
5: When you start steaming give up for the day and save your bankroll.
this is super bad advice if you’re trying to make money (rather than looking at it as paying money for fun). i think most of the audience is trying to make money (some of the other tips look clearly oriented to people trying to make money, e.g. 2 and 4).

if you’re trying to make money at poker, you cannot be a player who steams (plays angry, tilted poker after losing). if you’re a steamer, you should expect to be a losing player.

the article tip implies it’s OK to be a steamer and try to make money at poker. this is a dirty and nasty lie, encouraging losing players to lose more money (disguised as tips to help them win money).
The edges are really small in all the fixed limit games. For an example you can rarely expect to earn more than 5Big bets/100 hands on average even in the good games
and this is lying about what people can reasonably expect to win. 5 big bets per 100 hands is not “really small”. it sounds pretty great.

http://www.thepokerbank.com/strategy/other/winrate/
• 1 – 4 bb/100 = Great. A solid winrate if you can sustain it.
• 5 – 9 bb/100 = Amazing. This is a very high winrate at any level. Consider moving up.
• 10+ bb/100 = Immense. Very, very few have a winrate like this. You probably have a small sample size though.
i don’t know fixed limit holdem that well, but a big bet in it (they were talking 5 big bets/100 hands) is (always?) double the big blind. so they might have just called 10 big blinds/100 a “really small” edge!?!?
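
to make the comparison concrete, here’s the conversion plus a rough earnings estimate (the stake and hands-per-hour figures below are my assumptions, not from the article):

```python
# In fixed limit holdem, 1 big bet = 2 big blinds, so a big-bet winrate doubles
# when stated in big blinds. The stake and hands/hour are assumed for illustration.
big_bets_per_100 = 5                      # the winrate the article calls "really small"
big_blinds_per_100 = big_bets_per_100 * 2
print(big_blinds_per_100)                 # 10 bb/100, i.e. "Immense" on the scale quoted above

big_bet = 20.0                            # dollars, e.g. an assumed $10/$20 fixed limit game
hands_per_hour = 80                       # assumed, roughly one table's worth of hands
hourly = big_bets_per_100 / 100 * big_bet * hands_per_hour
print(f"${hourly:.0f}/hour")              # $80/hour under these assumptions, hardly "really small"
```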

let’s check more info by searching fixed limit info in particular:

http://forumserver.twoplustwo.com/17/small-stakes-limit/current-expected-ceiling-earning-flh-hu-shorthanded-1486606/
I think 3.5 big bets per 100 hands (after rake, but before rakeback and bonuses?) is not even close to attainable online right now. I'm not sure what is, but more HU and shorthanded targeting weaker players would obviously boost your winrate. But the best I seem to hear of is around 1.5 BB/100 if avg players at table was like 4 or something. I'm not even 100% on those figures bc it's all pretty much anecdotal.
and
People that won 3 BB/100 were massive bumhunters and/or constantly buttoning people which is basically cheating. They were not playing regularly and winning 3 BB/100.
http://www.pokerstrategy.com/forum/thread.php?threadid=37634
In Fixed Limit Holdem we know that good player will have average income 1-2 BB (Big Bets) per 100 hands.
so the article was super fucking lying about 5 big bets per 100 hands being “really small”. completely and utterly lying to people about what sort of income poker might offer. to lure in suckers, i guess.

btw, who are the suckers who are fooled? they must be dumb. maybe it'd help if they were better philosophers. i don't play poker, but i catch lies like this because i have good thinking methods.

Elliot Temple | Permalink | Comments (0)

Understanding Objectivism Comments

Comments on excerpts from Understanding Objectivism by Leonard Peikoff.
Q: Ayn Rand once said that the attribute that most distinguished her was not intelligence but honesty. Could she have been referring to a concept that subsumes the virtue of honesty, and the lack of any innocent dishonesty such as rationalism?

A: “Innocent dishonesty” strikes me as a self-contradiction. If a person is dishonest, then he’s guilty; if he’s innocent, he tried his best, then he was honest. So you probably mean the correct method—she was not only honest, but had the correct method of thinking. And was that simply the result of honesty on her part? I have to respectfully disagree with Miss Rand’s self-assessment. I never agreed with that, and I argued with her for decades on that point. I regarded myself as thoroughly honest, and I never came anywhere near coming up with her philosophy on my own. I think that to explain the origination of an actual new philosophy, honesty is a valuable and necessary condition, but does not go the whole way. You cannot get away from the fact that you have to be a genius on top of being honest.
Peikoff regards himself as 100% honest and uses that as a premise to reach conclusions about reality (including ones contrary to Ayn Rand). This is an arrogant and ridiculous method. You should not assume your own greatness as the starting point of your arguments, and then see what the world is like only on that assumption.

The specifics here are terrible too. Who is 100% "thoroughly honest"? No one. No one is literally perfect on honesty. Which is great – it means there is room for improvement, there are opportunities to be better (for those who look for them).

Rather than emphasizing self-improvement maximally, Peikoff starts attributing success to vague, inborn(?), magical "genius".
Let me take a different situation. In college, should you let the professor in the class know you disagree every time you do? Is your silence a sanction of evil? If it is, I certainly sanctioned a tremendous amount of evil in my fourteen years as a college student, so I have to come out as the world’s greatest monster, next to Kant. This was my policy: Sometimes I spoke up, and sometimes I didn’t, according to a whole constellation of factors. I realized that I couldn’t speak up every time, because I disagreed with everything. And since it was philosophy, every disagreement was vital, so I would have had to have equal time with the professor and the rest of the class, which would be ludicrous. So I simply had to accept from the outset, “An awful lot is going to go by that is irrational, horrendous, depraved, and I will be completely silent about it. I have no choice about that.”
This has the same kind of issue as the previous quote. Here Peikoff's reasoning starts with the premise that he's pretty good, and doesn't consider that he may be making a big mistake. Rather than argue the substance of the issue, he just kinda treats any criticism of himself as an absurdity to reject.

Peikoff's argument here is careless too. If he sanctioned a lot of evil, it wouldn't make him the second worst person, after Kant. Lots of people have done that, and Kant (and others) did a lot of other stuff wrong too.

As I read this, Peikoff basically confesses to huge sins and flaws, makes a half-hearted defense, and then says it can't be that bad because then life would be too hard and too demanding, and surely morality can't demand anyone be more than Peikoff is.

There's so much wrong here. Peikoff did not have to go to college. He did sanction college by going and then keeping mostly quiet and going along with it. You know Roark got expelled from school? And Roark says he should have quit earlier. Peikoff acts completely unlike Roark, and makes himself fit in enough to pass, but can't conceptualize that he may have done something wrong. Isn't that weird? He isn't taking Roark or Rand seriously. Rand is pretty clear about the moral sanction issue, and Peikoff's attitude is that a particular version of refusing to sanction evil (getting equal speaking time to his professors) is "ludicrous" and anyway he can't be so bad, so let's just start compromising principles.
Now and then I would speak up; much of the time I kept silent. And I do not regard that as a sanction of evil, and I could certainly not have survived fourteen years of university if every moment had to be total war against every utterance.
Why go to fourteen years of evil university? There are alternatives. He did sanction it. He describes himself sanctioning it and his only defenses consist of thinking there were no better alternatives (while hardly considering any) and thinking he can't be too bad.
For instance, once in a very early phase I tried to be a thorough intellectualist—that is, I was going to function exclusively cerebrally, without the aid of emotions (I was young at the time). And I remember very clearly that I went to a movie with the idea of having absolutely no emotions—that is, they would have nothing to do with my assessment of the movie and were just going to be pushed aside. I was going to try to judge purely intellectually as the movie went by. And I had a checklist in advance, certain criteria: I was going to judge the plot, the theme, the characters, the acting, the direction, the scenery, and so on. And my idea was to formulate to myself in words for each point where I thought something was relevant, how it stood on all the points on my checklist. And to my amazement, I was absolutely unable to follow the movie; I did not know what was going on. I needed to sit through it I can’t remember how many times, and I discovered that what you have to do is simply react, let it happen, feel, immerse yourself in it. And what happens is that your emotions give you an automatic sum. You just simply attend to it with no checklist, no intellectualizing, no thought, just watch the movie, like a person.
Peikoff used a bad intellectual method of movie watching; it didn't work, and then he gave up completely instead of trying other intellectual methods. That's dumb. It's perfectly possible to watch movies a lot better than passively, and to think during them, while still following the plot. His checklist approach sounds pretty bad, but it's not that or nothing. That's a false dichotomy.
Q: What were Ayn Rand’s reasons for not wanting to be a mother?

A: Primarily I would say because she was committed from a very early age to a full-time career as a novelist and writer. She did not want to divert any of her attention to anything else. She wanted to pursue that full-time, and it was simply not worth it to her to divert any time from that goal, by her particular hierarchy of interests and values. Beyond that, she had no interest in teaching. She was very different, for instance, from me in that regard. She was not interested in taking someone and bringing them along step by step, which is essential to being a parent. She wanted a formed mind that she could talk to on the level as an equal. She had more of the scientific motivation, rather than the pedagogical motivation. So it was as simple as that.
I thought this was notable. For one thing, Rand did a lot of teaching, including with Peikoff personally. As Peikoff describes it (in various places), Rand helped him along a great deal, incrementally, over years and years. I think Rand couldn't find enough equals or peers; I think it's interesting Peikoff reports she wanted that.

"So it was as simple as that" is a bad comment to end with. It doesn't add anything. And it's wrong – the issues aren't simple. Maybe Peikoff hasn't noticed there's lots of interesting stuff here?
Sometimes you have to expect to be momentarily overcome with the sheer force of the evil in a given situation. I want to speak for myself, from this point on, from my own experience, because I’m not prepared to make a universal law out of this, so I offer it to you for what it’s worth. There are times and situations where, despite my knowledge of philosophy, I feel overwhelmed by the evil in the world—I feel isolated, alienated, lonely, bitter, malevolent—and this is, to me, inescapable at times in certain contexts. I’ll give you an example. A few weeks ago, I went to a debate at a large university, on the subject of the nuclear freeze. One of the debaters, my friend, was eloquent, but it was a hopeless situation. The audience of college students was closed, irrational, hostile, dishonest by every criterion outlined tonight. They wouldn’t listen for a moment, they were rude—they were real modern hooligans—and when they did speak up, it was utterly without redeeming features—a whole array of out-of-context questions, sarcasm, disintegrated concretes—it was a real modern spectacle in the worst sense of the term. After a couple of hours of this, I was angry, I was resentful, I was hostile. And I felt (and I underscore the word “felt”), “This is the way the world is. What is the point of fighting it? They don’t want to know. I’m going to retire and stop lecturing and let the whole thing blow up, and to hell with it.” I was depressed. And of course, once I was in that mood, I was more negative about everything, so when I saw the headlines in the Times the next day, I felt worse. Even the long lines at the bank were further evidence that the world is rotten.

The point here is that I don’t think that I made a mistake. I think you have to react to concretes.
How is this not a mistake!? Again I read it as Peikoff describing another big mistake he made. But then he doesn't see himself as having done anything wrong. Why? He tells us why this isn't good, it's clear enough why it'd be better not to handle it this way, and yet somehow this non-ideal behavior isn't a mistake, for some unstated reason.

Elliot Temple | Permalink | Comments (0)

Leonard Peikoff Betrays Israel

From Understanding Objectivism, the book version. This is a student talking:
Now consider three examples, a couple of them historical. One, Adolf Hitler announces that he’s going to take over as much of the world as possible. And in each instance, when his armies march into another country, he is ready with a pretext: that he is only acting in defense of the German people. The second example would be, in 1967, the State of Israel, anticipating, by means of military intelligence, that they were about to be attacked on all sides, attacked first and blew up the opposing air forces, and claimed that they did that in self-defense.
Never mind what he's talking about; it doesn't matter to my point. Just note the Israel example. Now for Leonard Peikoff's comments on it, which I'll show you out of order. Second, he says:
This was certainly not a rationalistic presentation. He started with some examples—leaving aside the Israel one, which was inappropriate for the reasons we just mentioned—but Hitler is certainly a good example on the topic of force;
OK, now what are the reasons Peikoff says not to use the Israel example?
So the rule here is, do not pick controversial concretes. Pick a nice range of concretes. But when you’re trying to understand, the examples should be simple and straightforward. Then, when you grasp it, you can take up trickier cases. So it is a bad idea to combine concretizing with devil’s advocacy. I learned this teaching elementary logic, and I thought I’d get two birds for the price of one, and in illustrating a certain fallacy, which is a simple fallacy to understand, I threw in an argument for political isolationism, and the example aroused the class, and it became so controversial that they began to challenge the logical point involved, because they disagreed with the political point. And I lost the entire logical issue on the example. I learned from that that examples cannot be controversial; they have to be illuminating.
First, an aside, Peikoff is wrong here. He should have learned the lesson that his class had this big misconception, this big hole in their rationality. When they "began to challenge the logical point involved, because they disagreed with the political point", that was a huge thinking and methodology mistake the students were making. Which Peikoff correctly recognized. But then instead of teaching them how to think better, he thought in the future he should avoid this issue coming up. Instead of teaching the students to be good at this, his plan is to avoid this thing where their flaws come out. :(

Now about Israel, there's a really big problem here. The issue is, the Israel example shouldn't be controversial. By acting as if it's controversial, he is deferring to anti-semites. He's treating their disagreement as a legitimate controversy, and sanctioning it. Instead of Roark's "don't think of them" type approach, he's treating evil as this important thing and saying to change your actions according to the demands of evil. You have to choose different examples because some anti-semites will get offended by the Israel example, you have to think of them and give their anti-semitic concerns due consideration.

He's letting anti-semites drag the conversation into a distorted reality where Israel's right to exist is a controversy. That's granting them way too much. There are some issues where you can say it's controversial and respect the other side, but there are other issues where you must not grant them the sanction of being legitimate opponents whose dissent constitutes an intellectual controversy. Anti-semitism hasn't made the issue of Israel's self-defense intellectually or rationally controversial.

Peikoff went out of his way to try to stop a student from acting according to reality – a reality in which Israel's self-defense is a clear cut example similar to the World War II example. Peikoff demanded the student show greater respect to anti-semites, and help them fake reality by pretending there is an intellectual controversy where there isn't one. This is a betrayal of Israel.

PS In fairness, I want to add that I mostly like the book so far. I think it's pretty good, and you could learn some things from it.

Elliot Temple | Permalink | Comments (0)

Aubrey de Grey Discussion, 24

I discussed epistemology and cryonics with Aubrey de Grey via email. Click here to find the rest of the discussion. Yellow quotes are from Aubrey de Grey, with permission. Bluegreen is me, red is other.
Did you receive my email?
Hi - yes, I got it, but I couldn’t think of anything useful to say.
If you want to stop talking, or adjust the terms of the conversation (e.g. change the one message at a time back and forth style), please say so directly because silence is ambiguous.

But see my comments below. I don't think we're at an impasse. I think what you said here was particularly productive.
We have reached an impasse in which you insist on objecting to my failure to address various of your points, but I object to your failure to address my main point, namely that there is no objective measure of the rebuttability of a position. I am grateful for your persistence, since it has certainly helped me to gain a better understanding of the rationality IN MY OWN TERMS, i.e. the internal consistency, of my position, and in retrospect it is only because of my prior lack of that understanding that I didn’t zero in sooner on that main point as the key issue. But still it is the main point. I don’t see why I should take the time to read things to convince me of something that I’m already conceding for sake of argument, i.e. that Aubreyism is epistemologically inferior to Elliotism. And I also don’t see why I should take the time to work harder to convince you of the value of cryonics when you haven’t given me any reason to believe that your objections (i.e. your claim that Alcor’s arguments are rebuttable in a sense that my arguments for SENS are not, and moreover that that sense is the correct one) are objective.

Also, for a lot of the things I haven’t replied to it’s because I’m bemused by your wording. To take the latest case: when I’ve asked you for examples where science could have gone a lot faster by using CR rather than whatever else was used, and you have cited cases that I think are far more parsimoniously explained by sociological considerations, you’ve now come back with the suggestion that "Lots of the sociological considerations are explained by the philosophical issues I'm talking about”. To me that’s not just a questionable or wrong statement, it’s a nonsensical one. My point has nothing whatever to do with the explanations for the sociological considerations - it is merely that if you accept that other issues than the CR/non-CR question (such as the weight that rationalists give to the views of irrationalists, because they want to sleep with them or whatever) slow things down, you can’t argue that the CR/non-CR question slowed them down.
When I say something you think is nonsense, if you ignore that and try to continue the rest of the conversation, we're going to run into problems. I meant what I said, and it's important to my position, so please treat it seriously. When you ignore those statements, it ends up being mysterious to me why you disagree, because you aren't telling me your biggest objections! They won't go away by themselves, because they are what I think, not random accidents.

In this case, there was a misunderstanding. You took "explained" to mean, "make [a situation] clear to someone by describing it in more detail". But I meant, "be the cause of". (Both of those are excerpts from a dictionary.) I consider bad epistemology the cause of the sociological problems, and CR the solution. I wasn't talking about giving abstract explanations with no purpose in reality. I'm saying philosophy is the key issue behind this sociological stuff.

I regard this sort of passing over large disagreements as a methodological error, which must affect your discussions with many people on all topics. And it's just the sort of topic CR offers better ideas about. And I think the outcome here – a misunderstanding that wouldn't have been cleared up if I didn't follow up – is pretty typical. And misunderstandings do happen all the time, whether they are getting noticed and cleared up or not.


I think the sex issue is a great example. Let's focus on that as a representative example of many sociological issues. You think CR has nothing to do with this, but I'll explain how CR has everything to do with it. It's a matter of ideas, and CR is a method of dealing with ideas (an epistemology), and such a method (an epistemology) is necessary to life, and having a better one makes all the difference.

Chart:

Epistemology -> life ideas -> behavior/choices

What does each of those mean?

1) Epistemology is the name of one's **method of dealing with ideas**. That includes evaluating ideas, deciding which ideas to accept, finding and fixing problems with ideas, integrating ideas into one's life so they are actually used (or not), and so on. This is not what you're used to from most explicit epistemologies, but it's the proper meaning, and it's what CR offers.

2) Life ideas determine one's behavior in regard to sex, and everything else. This is stuff like one's values, one's emotional makeup, one's personality, one's goals, one's preferences, and so on. In this case, we're dealing with the person's ideas about sex, courtship and integrity.

3) Behavior/choices is what you think it means. I don't have to explain this part. In this case it deals with the concrete actions taken to pursue the irrational woman.


You see the sex example as separate from epistemology. I see them as linked, one step removed. Epistemology is one's method of dealing with (life) ideas. Then some of those (life) ideas determine sexual behavior/choices.

Concretizing, let's examine some typical details. The guy thinks that sex is a very high value, especially if the woman is very pretty and has high social status. He values the sex independent of having a moral and intellectual connection with her. He's also too passive, puts her on a pedestal, and thinks he'll do better by avoiding conflict. He also thinks he can compromise reason in his social life and keep that separate from his scientific life. (Life) ideas like these cause his sexual behavior/choices. If he had different life ideas, he'd change his behavior/choices.

Where did all these bad (life) ideas come from? Mainly from his culture including parents, teachers, friends, TV, books and websites.

Now here's where CR comes in. Why did he accept these bad ideas, instead of finding or creating some better ideas? That's because of his epistemology – his method of dealing with ideas. His epistemology let him down. It's the underlying cause of the cause of the mistaken sexual behavior/choices. A better epistemology (CR) would have given him the methods to acquire and live by better life ideas, resulting in better behavior/choices.

Concretizing with typical examples: his epistemology may have told him that checking those life ideas for errors was unnecessary because everyone knows they're how life works. Or it told him an ineffective method of checking the ideas for errors. Or it told him the errors he sees don't matter outside of trying to be clever in certain intellectual discussions. Or it told him the errors he sees can be outweighed without addressing them. Or it told him that life is full of compromise, win/lose outcomes are how reason works, and so losing in some ways isn't an error and nothing should be done about it.


If he'd used CR instead, he would have had a method that is effective at finding and dealing with errors, so he'd end up with much better life ideas. Most other epistemologies serve to intellectually disarm their victims and make it harder to resist bad life ideas (as in each of the examples in the previous paragraph). Which leads to the sociological problems that hinder science.


Everyone has an epistemology – a method of dealing with ideas. People deal with ideas on a daily basis. But most people don't know what their epistemology is, and don't use an epistemology that can be found in a book (even if they say they do).

The epistemologies in books, and taught at universities, are mostly floating abstractions, disconnected from reality. People learn them as words to say in certain conversations, but never manage to use them in daily life. CR is not like that, it offers something better.

Most people end up using a muddled epistemology, fairly accidentally, without much control over it. It's full of flaws because one's culture (especially one's parents) has bad ideas about epistemology – about the methods of dealing with ideas. And one is fallible and introduces a bunch of his own errors.

The only defense against error is error-correction – which requires good error-correcting methods of dealing with ideas (epistemology) – which is what CR is about. It's crucial to learn about what one's epistemology is, and improve it. Or else one will – lacking the methods to do better – accept bad ideas on all topics and have huge problems throughout life.

And note those problems in life include problems one isn't aware of. Thinking your life is going well doesn't mean much. The guy with the bad approach to sex typically won't regard that as a huge problem in his life, he'll see it a different way. Or if he regards it as problematic, he may be completely on the wrong track about the solution, e.g. thinking he needs to make more compromises with his integrity so he can have more success with women.


PS This is somewhat simplified. Epistemology has some direct impact, not just indirect. And I don't regard the sociological problems as the only main issue with science, I think bad ideas about how to do science (e.g. induction) matter too. But I think it's a good starting place for understanding my perspective. Philosophy dominates EVERYTHING.

Continue reading the next part of the discussion.

Elliot Temple | Permalink | Comments (0)

Aubrey de Grey Discussion, 23

I discussed epistemology and cryonics with Aubrey de Grey via email. Click here to find the rest of the discussion. Yellow quotes are from Aubrey de Grey, with permission. Bluegreen is me, red is other.
Elliot, you seem to be missing a very fundamental point here, namely: you seem to be working from the assumption that it’s my job to refute your position to your satisfaction. That is no more my job than it is yours to refute mine to my satisfaction.
If you care about reason, that requires dealing with criticism, to resolution. Reason requires that criticisms not be ignored; they have to be addressed (not necessarily by you personally – there must be answers you endorse, whoever writes them). This is crucial to reason so that you don't continue with bad ideas indefinitely even though better ones are known. It allows error correction instead of entrenching errors.

It is your right and privilege to live a different lifestyle than this. But then you wouldn't be a rational intellectual.
If you think that Alcor’s or CI's refutations of concerns about cryonics (the ones you’ve definitely already found, because they are in their FAQs) are less compelling than mine about SENS, you’re entitled to your opinion, but my sincere opinion is that they are every bit as compelling. I put it to you that the evaluation of how compelling an argument is is an EXTREMELY subjective thing, both to you and to me, arising essentially from how immediately a refuttion of it comes to mind. So, it is hopeless to try to agree whether this or that argument is more compelling than the other argument: each of us must make his own judgement on that, and then act on that judgement in the indicated way - by seeking more information, or by accepting a particular conclusion a likely enough to be right that further investigation is not a priority.
I don't know why you're speaking to me at all when you hold the irrationalist position that reaching agreement in truth-seeking discussion is hopeless. (I also don't know why you are sufficiently satisfied with irrationalism that you are unwilling to read the books refuting it and offering a better way.)
Again I repeat my bottom line: you have not given me the slightest reason to believe that people’s failure to adhere to CR (or to Elliotism) is appreciably slowing the progress of science and technology.
I gave you examples and explanations, which you largely didn't reply to. Then you state I gave you no reason. That's unreasonable on your part.
Maybe I can explain what kind of reason I would accept as valid evidence for that. Arguably, when quantum theory and relativity supplanted classical physics, they did so by taking seriously the incompatibility between wave-theoretic and particle-theoretic descriptions of light, and such like, which had been basically swept under the carpet for ages. My impression is that that isn’t actually an item of evidence for your position, because (a) it was a long time ago, when many fewer people were any good at science; (b) it hadn’t really been all that swept under the carpet - it was just that no one had come up with a resolution; and (c) even to the extent that it had been, the key point is that there was clear data that needed to BE so swept, whereas in the case of Copenhagen versus Everett (which I’m not sure is the same as Schrödinger versus Heisenberg, but I don’t think that matters for present purposes) there is no such data, since both theories make the same predictions. If I’m wrong, and the lack of a widespread adoption of a CR-like method of reasoning back then seems likely to have substantially delayed the arrival of modern physics, persuade me.
I have tried to persuade you (in a way in which I could find out I'm mistaken, too), but you are taking steps to prevent persuasion. I cannot persuade you unilaterally. What you have done includes:

- Not replying to many points and questions.

- Not giving appropriate feedback on initial statements so we can iterate to the point of you understanding what I'm saying. Miscommunication and misunderstanding are to be expected and there has to be iteration of an error-correcting process for effective communication of ideas. (Communication being necessary to me persuading you.)

- Not being willing to read things, study issues, put enough effort into learning the topics.

A specific detail: I can't reasonably be expected to persuade you about the history of science first, as you propose. What needs to happen first is you understand what is a CR-like method of reasoning, so you can accurately evaluate which scientists did that and which didn't. But you don't want to read the texts explaining what is a CR-like method of reasoning, or ask the questions to understand it. You aren't finding out from existing material or from a heavy back-and-forth process adequate to cover a large topic.
Or take another example from the past. If you’re right that science is so slowed by this, how can it be so hard to identify an example (one that isn’t far more parsimoniously explained by sociological considerations such as I outlined in my last email)?
Lots of the sociological considerations are explained by the philosophical issues I'm talking about. Because you don't know what CR is, you can't tell what is a consequence of CR or non-CR.

We have, for example, an educational theory. Where do short-term thinking, bias, egos, etc., come from? Significantly, from bad educational practices. Education is fairly directly an epistemology issue, and CR offers some better ideas about what educational techniques work or not.

Regarding statistics, yes scientists believe they should be done right, and sometimes there are time and money issues. But lots of people don't know what doing them right means. There are philosophical misconceptions about how to use statistics correctly which would be problems even with more time and money. (An example is the inductivist misconception that correlations hint at causation, which isn't a funding issue.)

The underlying problem is you don't understand where I'm coming from and what the world would look like if I'm right. That can't be settled by looking at examples. I gave you initial statements of Elliotism. The rational way to proceed is to iterate on that (you give feedback, ask questions, I reply, etc, understanding is iteratively created) in order to understand what I'm saying.
And remember, what I really mean here is not “science” in the DD sense, i.e. the improved “understanding” (whatever that is) of nature, but technology, i.e. the practical application of science. Computers today rely absolutely on the fact that we no longer adhere to classical physics, but they rely not at all on the fact that most people work with Copenhagen rather than Everett. The passage you quote from BoI totally doesn’t help, because it stops at “understanding”, “knowledge”, “explanations” etc, which in my book are simply smoke and mirrors until and unless they translate into practical consequences for technology. Not even implemented technology - technological proposals, like SENS, would be fine.
You have an anti-philosophical outlook and don't understand the perspective of DD, me, Popper, etc. If you want to understand and address such matters, there are ways you can, which we could focus on. I've tried to indicate how that can happen, e.g. with iterative discussion of how CR works. If you'd rather simply leave critics unanswered, just tell me you don't want to talk.
I read FoR, but I don’t think I ever read BoI. Perhaps part of why is that I found FoR to be fatally flawed on about page 4, as I think I mentioned earlier. DD is a great thinker, whom I hugely admire, but that doesn’t mean I think all his thinking is correct or relevant to my own priorities. And you haven’t given me any new motivation to read BoI.
I don't think you mentioned that. And I just searched the discussion and I'm not finding it.

If you would say your criticism of FoR, that'd be great. When people share criticisms in public, then progress can be made. I know DD wrote the book partly in hopes of receiving such criticism so human knowledge could advance. But you and many others with similar methods withhold criticism and dodge lots of discussion and then human knowledge creation is slowed.

Sharing your FoR criticism could help advance our discussion, too. It's topical and I've been trying to get direct criticisms from you. If you tell me what is unacceptable to you, then I could address it or concede. And if I address ALL issues you have with my view, that's how persuasion would happen. Since you already accepted your view has flaws, if you had NO objections you'd accept mine.

If you're right about FoR being flawed, you have an important insight that others could learn from. If you're mistaken, by sharing your criticism you would expose it to criticism and you could learn about your error from others. If you'd prefer to retreat from rational discussion instead, that is your choice.

Continue reading the next part of the discussion.

Elliot Temple | Permalink | Comments (0)

Aubrey de Grey Discussion, 22

I discussed epistemology and cryonics with Aubrey de Grey via email. Click here to find the rest of the discussion. Yellow quotes are from Aubrey de Grey, with permission. Bluegreen is me, red is other.
I don't agree with your look-at-scientific-contributions method, but in any case you don't have the input data to use it. Yet somehow you think you've gotten a conclusion from it.
I guess I was too abbreviated: what I meant was that if disproportionate scientific progress were made by those with a minority view about how to reason, it wouldn’t be the minority view for long (at least not within science), and that hasn’t happened.
This claim, dealing with a field you don't want to study, brings up dozens of difficult issues which I think you don't want to discuss to resolution. I don't know what to do with this. Do you?

I'll mention a few example issues:

- If everyone thinks like this, who will try stuff in the first place? Who will be the early adopters? Is your plan to rely on people who *disagree with you* about this matter to be the ones to find, test, and then persuade you of innovations?

- The cause of success is something people disagree about, e.g. someone might attribute DD's success to him being an outlier genius, rather than to his philosophy.

- Small sample size. And many people don't know which scientists were Popperians. Take a hypothetical scientist who admires 100 scientists who were especially effective. 70 of them might be Popperians without him knowing.

- Judging which scientists actually are Popperians is difficult and requires philosophical skill to do accurately.

- You're proposing people would do something because it makes sense to do. But sometimes people are irrational and act in ways that don't make sense.

- It's a bit like asking if capitalism is so much better, why doesn't it dominate the whole world yet? There are many things that can block the uptake of good ideas other than the idea being mistaken.

Because there are many reasons things might not work out as you propose, you shouldn't rely on that way of looking at it. Instead, the only reasonable thing to do is look at the actual merits and content of CR arguments, not the unargued reactions of others. Look at the substantive ideas and arguments, not the opinions of others.

Either you personally should consider CR ideas, or (preferably since it's not your field) others should and you could read some summary work and be persuaded by that and reference it if challenged. So CR arguments get answered (or accepted), and there is a way for you to find out about new ideas (via the work you endorse, which provides targets for criticism, being refined or refuted). But you don't want to take responsibility for this, and nor do *lots* of other people, and so the march of progress is dramatically delayed.
Your arguments in your books about topics like mitochondria are much more detailed and rigorous than what you said to me about cryonics.
Um sure, but that’s becaue I referred you to alcor.org and cryonics.org. I deny that the arguments given there are much (indeed any) less detailed and rigorous than those I give about mitochondria etc.
You didn't want to refer me to specific material, and I was unable to find material in the same league as your stuff. I wrote to you explaining problems with some material I found (I didn't find equivalent problems in your books). If I misjudged it, or they offer better material, you could tell me.

You do things like consider all the challenges SENS has to deal with to work, and address each. Where is the equivalent cryonics material?

There is a great deal of detailed scientific knowledge about mitochondria (which you carefully studied and learned). Where is the equivalent cryonics material?
Scientific progress is much slower than it could be, today. This can be seen by surveying scientific fields. I've already given you some examples like the social sciences and medical retractions. You didn't give alternative interpretations or criticisms. Now you deny it after leaving those points unanswered, without exposing your reasoning to criticism.
Apologies again for over-brevity. Of course there are many reasons why scientific progress is much slower than it could be, but my contention is that inferiority of scientific method is not a significant one of them. Rather, the reasons are lack of funding from public sources beholden to the public (who certainly don’t reason well), self-serving short-termist competition between scientists fomented by that lack of funding, egos, that sort of thing. Theer is also a big contribution from poor interpretation (for example, poor use of statistics), but again that is not because scientists don’t believe statistics should be done right, it’s because they find it more important to publish than to be correct.
Here you bring up complex and controversial philosophical issues, including freedom and capitalism. What do you think I should do? Try to explain a bunch of philosophy when you have one foot out the door, while previous attempts to explain other philosophy are unresolved? Ask why you're confident in your judgments of these issues even though your philosophy is under-specified and under-studied, and you've chosen not to read a lot of the material on these topics? Guess that you might not recognize your paragraph as bringing up a bunch of complex and controversial philosophical issues, and guess what your reasoning might be, and try to preemptively answer it? Tell you that your perspective here contains mistakes relevant to SENS funding, so our philosophical differences do matter? Any suggestions?

I would know how to handle these things if we were both using my preferred methods. But you use your own methods in the discussion, and I don't know how to work with those. I don't know how issues like these are to be resolved with your discussion methods.


You deal with philosophy issues routinely, but you don't want to study it, and nor do you want to outsource that and endorse the conclusions in some specific writing. So you end up doing a mix of reinventing half of the wheel badly, plus outsourcing-by-accident to people whose names you don't even know so there's no accountability. You're accepting a bunch of ideas (e.g. induction) that you picked up somewhere and you don't know clearly who to hold accountable, which books are involved, where to look up details of their reasoning if I question it, etc. You're outsourcing philosophy thinking third-hand: some people have ideas and others decide they were successful and still others are impressed and spread the ideas through the culture to you.
Concerning quantum physics, I am not a specialist, but my understanding is that the Copenhagen and Everett interpretations make exactly the same predictions about observable data, and thus cannot be experimentally distinguished. My question then is, who cares which is correct? The passage you quote from topics/24387 seems to me to acknowledge this: it says that the only real problem with the Copenhagen model is that it’s nonsensical. What exactly is wrong with “shut up and calculate” if it works?
Did you read The Beginning of Infinity? Do you or anyone else have answers to it? Do you want me to rewrite it with less editing? Quote it? Will you be pleased with a reference to it, telling you where to get answers?

I also don't think it makes sense to drop the random sampling topic (for example) and take up this new one – won't we run into the same discussion problems again on this new topic? I expect to; do you disagree?

BoI:
Although Schrödinger’s and Heisenberg’s theories seemed to describe very dissimilar worlds, neither of which was easy to relate to existing conceptions of reality, it was soon discovered that, if a certain simple rule of thumb was added to each theory, they would always make identical predictions. Moreover, these *predictions* turned out to be very successful.

With hindsight, we can state the rule of thumb like this: whenever a measurement is made, all the histories but one cease to exist. The surviving one is chosen at random, with the probability of each possible outcome being equal to the total measure of all the histories in which that outcome occurs.

At that point, disaster struck. Instead of trying to improve and integrate those two powerful but slightly flawed explanatory theories, and to explain why the rule of thumb worked, most of the theoretical-physics community retreated rapidly and with remarkable docility into instrumentalism. If the predictions work, they reasoned, why worry about the explanation? So they tried to regard quantum theory as being *nothing but* a set of rules of thumb for predicting the observed outcomes of experiments, saying nothing (else) about reality. This move is still popular today, and is known to its critics (and even to some of its proponents) as the ‘shut-up-and-calculate interpretation of quantum theory’.

This meant ignoring such awkward facts as (1) the rule of thumb was grossly inconsistent with both theories; hence it could be used only in situations where quantum effects were too small to be noticed. Those happened to include the moment of measurement (because of entanglement with the measuring instrument, and consequent decoherence, as we now know). And (2) it was not even *self*-consistent when applied to the hypothetical case of an observer performing a quantum measurement on another observer. And (3) both versions of quantum theory were clearly describing *some* sort of physical process that *brought* about the outcomes of experiments. Physicists, both through professionalism and through natural curiosity, could hardly help wondering about that process. But many of them tried not to. Most of them went on to train their students not to. This counteracted the scientific tradition of criticism in regard to quantum theory.

Let me define ‘bad philosophy’ as philosophy that is not merely false, but actively prevents the growth of other knowledge. In this case, instrumentalism was acting to prevent the explanations in Schrödinger’s and Heisenberg’s theories from being improved or elaborated or unified.
To understand what this means more fully, it's important to read the whole book and engage with its ideas, e.g. by asking questions about points of confusion or disagreement, criticizing parts you think may be mistaken, and discussing those things to resolution. Or if you don't do that, I think you should say more "I don't know"s instead of e.g. making the philosophical claims that shut up and calculate works, Aubreyism works, etc.

I think you want to neither answer the points in BoI and elsewhere (including by endorsing someone else's answer for use as your own), nor defer to them, nor be neutral. Isn't that irrational?

Continue reading the next part of the discussion.


Aubrey de Grey Discussion, 21

I discussed epistemology and cryonics with Aubrey de Grey via email. Click here to find the rest of the discussion. Yellow quotes are from Aubrey de Grey, with permission. Bluegreen is me, red is other.
Hi Elliot - thanks again - I sincerely wish I could allocate more time to this, but I’m just not seeing the value. Yes I know that until only one or two hundred years ago essentially everyone was so bad at the scientific method that progress was much slower than it could have been, but I’m not seeing that that’s the case any more. If you’re saying no, we’re still going a lot slower than we could because we’re reasoning poorly, and if you’re right, then you or others who are following your methods (such as DD, presumably) should be contributing very disproportionately to scientific progress, but I’m not seeing that happening. Cryonics is part of biology, so I’m not getting why you say I approach biology in a better way than I approach cryonics, but in any event I claim I approach all aspects of biology (including cryonics) in the same way.
DD has contributed very disproportionately to scientific progress. But that's a tiny sample size. I'm not a scientist, by choice. I don't agree with your look-at-scientific-contributions method, but in any case you don't have the input data to use it. Yet somehow you think you've gotten a conclusion from it. You're making a mistake which defends other mistakes (they can pile up like that).

Your arguments in your books about topics like mitochondria are much more detailed and rigorous than what you said to me about cryonics.

Scientific progress is much slower than it could be, today. This can be seen by surveying scientific fields. I've already given you some examples like the social sciences and medical retractions. You didn't give alternative interpretations or criticisms. Now you deny it after leaving those points unanswered, without exposing your reasoning to criticism.

Let's look at one field more closely. Quantum physics is screwed up. DD has explained:

http://vimeo.com/5490979 (First 15 minutes.)

DD says progress with Everett's theory was slow over last 50 years. He speaks to the irrational philosophy of Everett-dissenting physicists. Then he proposes philosophical mistakes *by Everett people* as the thing to change to improve the field's progress. It's like, "Most quantum physicists are using irrational philosophies and wasting their careers. But even in that context, the philosophical mistakes of the pro-Everett physicists are big enough to focus on instead."

In 2012, answering in a physics context, "What would it look like that would be different to the way things are at the moment?", DD wrote:

https://groups.yahoo.com/neo/groups/Fabric-of-Reality/conversations/topics/24387
For instance, there'd be:

In theoretical physics: Work on the structure of the multiverse, its implications for the theory of probability, deeper explanations of various quantum algorithms, deeper understanding of the Heisenberg Picture....

In philosophy: Work on things like personal identity, the relationship between multiple universes and multiple copies in a single universe, morality in the multiverse...

In theoretical physics, experimental physics and philosophy: Cessation of work whose only interest is in the context of believing nonsensical 'interpretations'...

In physics teaching: Excision of anti-rational ideologies such as positivism or shut-up-and-calculate from physics classes.
Physicists are spending a great deal of effort on the philosophical equivalent of denying dinosaurs existed (as DD explains in the video and in BoI), rather than doing productive work on issues like those above. That slows progress dramatically.

In BoI, in "A Physicist’s History of Bad Philosophy", DD writes:
READER: But then why is it that only a small minority of quantum physicists agree?

DAVID: Bad philosophy.
DD spends the chapter explaining. No one has refuted his arguments.

Here is an example, specifically, of a bad pro-Everett paper which goes wrong epistemologically (because of justificationism not CR): http://users.ox.ac.uk/~everett/docs/Wallace%20epistemology.pdf

If examples like this would change your mind, more could be provided. Or if detailed criticism of this paper would change your mind, that could be provided.

So when you say science isn't going slow (and philosophy issues lack big consequences), without addressing the problems with any scientific fields, I think you're mistaken. And you're doing it in such a way that, if you are mistaken, you won't find out.

Continue reading the next part of the discussion.


Aubrey de Grey Discussion, 20

I discussed epistemology and cryonics with Aubrey de Grey via email. Click here to find the rest of the discussion. Yellow quotes are from Aubrey de Grey, with permission. Bluegreen is me, red is other.
OK look, one more time. I’m all about practicalities. I’m starting from the position that I make decisions in what’s really close to the optimal way,
This claim of being close to the limits of progress is completely contrary to _The Beginning of Infinity_, which neither you nor any writing you endorse has offered criticism of.
when taking into account the need to limit the time to make them. The challenges you give to my position seem to me to be no more than dancing around the practicalities - arguing that other methods are better without addressing the trade-off between quality and speed, or without addressing the magnitude of the difference (how often would you come to a better view than me because of a better reasoning method? Once every million years?).
The typical person mistakenly accepts win/lose non-solutions on a daily basis.

The magnitude of the difference is: it's such a big issue it's qualitative, not quantitative. It's a more important difference than merely 100x better. It's John Galt vs. Jim Taggart. It's reason vs. irrationality.

The idea of a quality/speed tradeoff or compromise is a misconception. And an excuse for arbitrary irrationality. It's the kind of thing that blights people's lives on a daily basis, as well as hindering scientific progress.

There do exist quality/speed tradeoffs in some sense of the term. But NOT in the sense of ever requiring acting on arbitrary ideas, win/loses, non-solutions, or known-to-be-refuted ideas. Which is what you say you do, on a regular basis. Every time you do that it's a big mistake that Elliotism would have handled differently by finding a non-refuted non-arbitrary idea in a timely manner and using that.
When I look back at history and I see people making mistakes, I see those mistakes arising from lack of information, or from prejudice, etc - I can’t think of a single case where the mistake arose from using induction or justificationism rather than CR.
The mistakes don't arise from lack of information. Even deep space has lots of information, like _The Beginning of Infinity_ discusses.

How did Louis Pasteur refute the spontaneous generation theory? He did experiments in which he looked at the conditions under which food and wine would spoil. They wouldn't spoil unless germs got in. Why didn't anybody do those experiments before? People knew food spoiled before Louis Pasteur came along. Microscopes had been around since the 17th century. So why did it take until the mid 19th century? People weren't looking for an explanation or for a solution to the relevant problems. They had methodology problems. Huge scientific opportunities are routinely passed over, for decades (or much longer) because people are bad at philosophy, bad at thinking, bad at science.

Most inductivists have had unproductive careers, never figuring out anything very important. I'm guessing you treat it as natural that most people aren't geniuses, and miss lots of stuff. You sort of expect the status quo. But what you're used to is caused by deeply irrational thinking methods. Rational methods open up unbounded human potential.

Prejudice, etc, are epistemology-methodology issues too.
I'd very much have expected you to raise such an example by now.
I gave several such examples in my previous email, e.g. the explanationless correlation studies in the social sciences. That's a bunch of justificationists wasting their careers using justificationist methods that will never work.

You apparently didn't understand what was being said (typically both our faults, communication is hard) and didn't ask for more explanation (your fault, big methodology error that really messes up communication, discussion and learning).
It’s true that I’m pretty unsure whether I’m elaborating a good justification for my own methods, because after all I am making it up as I go along - but conversely I still claim that there’s a good chance that my methods do indeed withstand scrutiny (again, measured in terms of practicalities), simply because I’m unaware of any substantive changes having occurred in my methods for a good few decades.
Not having learned anything major in philosophy in the last few decades is a terrible argument that your ideas are good enough and you can stop worrying about learning.

And if you aren't having many problems in practice, it could be because you're actually doing an unrefined version of Elliotism. It's not an argument that any of the philosophy you're advocating is any good.

Your stated methods don't withstand scrutiny. Early on I criticized them. You conceded they have big flaws. Then you claimed they are practical anyway, basically because you assume better isn't possible. More recently I also pointed out (for example) that the random sampling stuff doesn't work at all, a topic you dropped without ever saying a way to do it.

There is a better way to think, you aren't at the limits of progress. So I explained it, and you said it wouldn't work in a timely fashion. Why? What's the criticism of my position? You didn't understand it well enough to answer, and also didn't ask questions and give feedback to find out more about it. And we've been kind of stuck there, plus going on some tangents to discuss some other misconceptions.

You haven't understood Elliotism's way of getting timely non-refuted non-arbitrary ideas to act on because of the very thinking methodology errors you believe are harmless. That includes e.g. being unwilling to read things explaining how to do it, which really messes up your ability to learn anything complex. Then, somehow, you blame me or my ideas when you straight up refused to make the effort required to learn something like Elliotism. If you're busy, fine, but that isn't a flaw of Elliotism or a failure on my part. It'd be you choosing not to find time to learn about something important enough to make a reasonable judgment about it.
At the bottom line: why do you think we still, after all this discussion, disagree about cryonics?
Primarily because we're both more interested in epistemology and discussed that more. And a major feature of the cryonics part of the discussion was your epistemology view (and mine to the contrary) that it'd take too long to work out the cryonics issues in the amount of detail I think is needed to correctly judge that sort of complex issue. (An amount of detail which I think you exceed in your biology thinking.)

Secondarily because you didn't answer a lot of what I said about cryonics and resisted giving arguments (which I kept asking for) either directly criticizing my position or explaining and arguing yours. This is a result of your methodology which doesn't pay enough attention to individual precise ideas and criticisms, and instead jumps from a vague understanding to an arbitrary conclusion.

(I suspect you approach biology in a different, significantly better way. But if you understood the correct thinking methodology, and what you actually did in biology, that'd enable you to compare and make valuable refinements. So philosophy still matters.)

Continue reading the next part of the discussion.


Alex Epstein Attacks Liberty

Alex Epstein, founder of the Center for Industrial Progress, has a new book out, The Moral Case for Fossil Fuels. Some parts are good, important, and not being said by many others. It's getting promoted by Objectivists and others.

When the book was a draft, I urged Alex to change some parts of it which were incompatible with the cause of liberty. Here's part of my explanation to Alex, from April, of why he shouldn't attack the tobacco industry:
cigarettes are a good thing. the tobacco industry is ... industry. it sells things to voluntary customers that they value more than what they pay (even despite large taxes and other regulatory hindrances). we should have generally positive opinions of tobacco companies.
Alex does need to pick his battles. It's not his job to write a tangent defending the tobacco industry. So I suggested the best way to handle the topic was not to bring it up, and focus on fossil fuels. Instead, Alex kept the same approach as in the draft of the book: he unnecessarily brings up the tobacco industry, then attacks it.

Alex presents himself as an Objectivist thinker, a strong advocate of capitalism, and a champion of industrial progress. But here he's an attacker of industry, who has chosen to persist with his attack after the draft was criticized (this isn't an issue he overlooked, he's doing this on purpose). He writes:
To leaders of the fossil fuel industry:

Here’s a typical communications plan of yours to win over the public.

“We will explain to the public that we contribute to economic growth.”

“We will explain to the public that we create a lot of jobs.”

“We will link our industry to our national identity.”

“We will stress to the public that we are addressing our attackers’ concerns—by lowering the emissions of our product.”

“We will spend millions on a state-of-the-art media campaign.”

Why doesn’t it work? Well, imagine if you saw the same plan from a tobacco company. It would tie increased tobacco sales to economic growth, to job creation, to national identity, to reducing tar. Would you be convinced that it would be a good thing if Americans bought way more tobacco?

I doubt it, because none of these strategies does anything to address the industry’s fundamental problem, the fact that use of the industry’s core product, tobacco, is viewed as a self-destructive addiction. So long as that is true, the industry will be viewed as an inherently immoral industry. And so long as that is true, no matter what the industry does, its critics will always have the moral high ground.

You might say that it’s offensive to compare the fossil fuel industry to the tobacco industry—and you’d be right. But in the battle for hearts and minds, you are widely viewed as worse than the tobacco industry.

Your attackers have successfully portrayed your core product, fossil fuel energy, as a self-destructive addiction that is destroying our planet, and characterized your industry as fundamentally immoral. In a better world, the kind of world we should aspire to, they argue, the fossil fuel industry would not exist.
Alex wrote this so it could easily be read as saying that in a better world, the tobacco industry would not exist. He says it's offensive to the fossil fuel industry to compare it to the tobacco industry (implication: because the fossil fuel industry is good, but the tobacco industry is bad). Alex, as an intentional tactic, used weasel words to create some deniability about his meaning. But it's clear enough what he's saying (siding with leftist anti-smoking ideas) and how it will be taken, and he knows that.

Tobacco is an industry which brings pleasure to millions of people on a fully voluntary basis. Any capitalist thinker should be pleased. Instead, Alex suggests cigarette smokers are mentally ill (setting up a reinterpretation of free voluntary trade for mutual benefit as somehow being a bad thing like feeding an addiction). Alex sounds like a leftist attacking people for having hobbies he disapproves of.

Alex is setting a precedent of making exceptions to freedom and capitalism. He's communicating that if you disagree with the customers of an industry, go ahead and attack it. And he isn't merely attacking it as a mistake people are free to make, a personal lifestyle choice that he personally disagrees with. Talk of concepts like "addiction" is saying the customers aren't actually making a free choice, but are being controlled by sinister addictive forces. That is extraordinarily dangerous, because it's saying that when certain issues come up (like addiction) then free trade is a bad idea that doesn't work. This opens the door for limiting free trade in more industries (e.g. gambling, freemium games, MMOs, candy, alcohol, coffee, anything people like a lot, etc) and ultimately the entire economy.

South Park defended smokers well. Alex not only won't defend them, he won't even leave the topic alone. On this issue, Alex is going out of his way to take the other side, the side of limiting freedom, and destroying liberty with exceptions.

What kind of Objectivist would compromise with evil on even one issue? What sort of principled industrial capitalist thinker would make an exception for a particular industry he doesn't like? How can he take psychiatry, one of the major tools being used to excuse limits on freedom, and push its agenda? (Which, FYI, Alex has repeatedly done elsewhere as well.) Objectivism explains that these compromises benefit evil. Example compromisers were Friedrich Hayek and Milton Friedman, who Ayn Rand argued were especially harmful to the cause of freedom. Alex Epstein is putting himself in the same category as them.

Whatever you personally think of the tobacco industry, you should be able to see that it's a capitalist industry which this supposed champion of capitalism and industry has gone out of his way to attack. Whatever you think of psychiatry, you should be able to see that, rightly or wrongly, it's used as a reason to limit freedom in some situations. That's incompatible with being a champion of total freedom like Ayn Rand.

Alex may not take Ayn Rand seriously. But I do. So I'm pronouncing moral judgment. Alex Epstein is dangerous and immoral.


Aubrey de Grey Discussion, 19

I discussed epistemology and cryonics with Aubrey de Grey via email. Click here to find the rest of the discussion. Yellow quotes are from Aubrey de Grey, with permission. Bluegreen is me, red is other.
Hi Elliot - I’m in a busy phase right now so apologies for brevity. To me the purpose of our debate is to answer the question “Is Aubrey coming to substantively incorrect conclusions about what to do or say (such as about cryonics) as a result of using epistemologically invalid methods of reasoning?”. I’m not interested in the question “Is Aubrey’s method of reasoning epistemologically invalid?” except insofar as it can be shown that I would come to different conclusions (but in the same amount of time) if I adopted a different strategy. Similarly, I’m not interested in the question "Is Aubrey coming to incorrect conclusions about what to do or say (such as about cryonics) as a result of having incomplete information/understanding about things OTHER than what method of reasoning is best?” (which seems to be what happened to you in relation to recycling,
Sort of. If I'd had a better approach to reasoning, I could have found out about recycling sooner. If I hadn't already been learning a better method of reasoning, I might have stayed in favor of recycling after seeing those articles, as many other people have done. I think you're trying to create a distinction I disagree with, where you don't give reasoning methods credit in most of life, even though they are involved with everything.
and was also what happened to me in relation to my career change from computer science), because such examples consist only in switching to triage at a point that turned out to be premature (I could have discovered in my teens that biologists were mostly not interested in aging, which is all I needed to know in order to decide that I should work on aging rather than AI, but I didn’t consider that possibility), not in having a triage step per se. I’m quite sure that epistemology is hard, but I’m not interested in what’s epistemologically valid unless there is some practical result for my choices.
OK I see where you're coming from better now.
It’s the same as my attitude to the existence of God: I am agnostic, not because I’ve cogitated a lot and decided that the theist and atheist positions are too close to call, but because I know I’m already doing God’s work for reasons unrelated to my beliefs, hence it makes no difference to my life choices what my beliefs are. I’m perfectly happy to believe that induction can be robustly demonstrated to be epistemologically invalid - in fact, as I said before, I already think it seems to be - but why should I care? - you haven’t told me.
Because misunderstanding how knowledge is created (in science and more generally) blocks off ways of making progress. It makes it harder to learn anything. It slows down biology and every other field. More below.
I’m surprised at your statement about random sampling - I mean, clearly the precision of the fairness will be finite, but equally clearly the precision can be arbitrarily good, so again I don’t see why it bothers you - but again, I also don't see why I should care that I don’t see, because you haven’t given me a practical reason to care, i.e. a reason to suspect that continuing the debate may lead to my coming to different conclusions about what to do or say in the future (about cryonics or anything else).
I don't know how you propose to do arbitrarily good sampling, or anything that isn't terrible. That isn't clear to me at all, nor to several people I asked. I think it's a show-stopper problem (one of many) demonstrating the way you actually think is nothing like your claims.

I don't know how many steps I can skip for this and still be understood. You seem bored with this issue, so let's try several. I think you're assuming you have a fair ordering, and that arbitrarily fair/accurate information occurs early in the ordering. And you decide what's a fair ordering by knowing in advance what answer you want, so the sampling is pointless.
I’ll just answer this specific point quickly:
We're trying to decide what to get for dinner. I propose salmon sushi or tuna sushi. You propose pizza. We get sushi with 67% odds. Is that how it's supposed to work? (Note I only know the odds here because I have a full list of the ideas.)

But wait. I don't care what God's favorite natural number is; that's irrelevant. So there's infinite sushi variants like, "Get salmon sushi, and God's favorite natural number is 5" (vary the number).

Now what? Each idea just turned into infinite variants. Do we now say there are 2*infinity variants for sushi, and 1*infinity for pizza? And get sushi with what odds?
Sorry for over-brevity there. What we do is we put the numbers in some order, and for each number N we double the number of variants for each of sushi and pizza by adding “God’s favourite number is N” and “God’s favourite number is not N” - so the ratio of numbers of variants always stays at 2. I can’t summon myself to care about the difference between countably and uncountably infinite classes, in case that was going to be your next question.
I think you missed some of the main issues here, e.g. that getting sushi with 67% odds is a stupid way to handle that situation. It doesn't deal with explanations or criticism (why should we get which food? does anyone mind or strongly object? stuff like that is important). And it's really really arbitrary, like I could mention two more types of sushi and now it's 80% odds? Why should the odds depend on how many I mention like that? That's a bad way of making decisions. I was trying to find out what you're actually proposing to do that'd be more reasonable.

Also sampling in the infinite case is irrelevant here because you knew you wanted a 67% result beforehand (and your way of dealing with infinity here consists of just doing something with it that gets your predetermined answer).

I do think the different classes of infinity matter, because your approach implies they matter. You're the one who wanted numbers of variants to be a major issue. That brings up issues like powersets, like it or not. I think the consequences of fixing your approach to fully resolve that issue are far reaching, e.g. no longer looking at numbers of ideas. And then trying to figure out what to do instead.
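
Here's a minimal sketch of the ordering problem (my own illustration in Python, not anything you've proposed): the measured "density" of sushi variants depends entirely on which enumeration order is chosen for the infinite variant lists.

from itertools import islice
from collections import defaultdict

def interleave(pattern):
    """Enumerate the two infinite variant lists ("...and God's favorite number
    is N") by repeating the given interleaving pattern of labels."""
    counters = defaultdict(int)
    while True:
        for label in pattern:
            yield (label, counters[label])  # the counters[label]-th variant of this option
            counters[label] += 1

def sushi_fraction(pattern, k=9999):
    """Fraction of sushi variants among the first k enumerated variants."""
    first_k = islice(interleave(pattern), k)
    return sum(1 for label, _ in first_k if label == "sushi") / k

print(sushi_fraction(["sushi", "sushi", "pizza"]))  # ~0.67: a 2:1 interleaving
print(sushi_fraction(["sushi", "pizza"]))           # ~0.50: a 1:1 interleaving
print(sushi_fraction(["sushi"] + ["pizza"] * 9))    # ~0.10: a 1:9 interleaving
# Every one of these orderings eventually lists the same infinite sets of sushi
# and pizza variants; only the interleaving differs, and the measured "density"
# is whatever the chosen interleaving builds in.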
More generally, you’re absolutely right that I’m making this up as I go along - I’m figuring out why what I do works. What do I mean by “works”? - I simply mean, I’ve found over the years that I rarely (though certainly not never) make decisions or form opinions that I later revise, and that as far as I can see, that’s not because I’m not open to persuasion or because I move to triage too soon, but because I have a method for forming opinions that really truly is quite good at getting them right, and in particular that it’s a good balance (pretty much as good as it can be) between reliability of the decision and time to make it.
From my perspective, you're describing methods that couldn't work. So whether you were a good thinker or a bad one, you wouldn't be describing what you actually do. This matters to the high-value possibility of critical discussion and improvement of your actual methods.

BTW here is another argument that you don't think the way you claim: What you're claiming is standard stuff, not original. But we agree you think better than most people. So wouldn't you be doing something different than them? But your statements about how you think don't capture the differences.
Take this debate. I’ve given you ample opportunity to come up with reasons why my advocacy for signing up for cryopreservation is mistaken. Potential reasons fall into two classes: data that I didn’t have (or didn’t realise I had) that affects the case, and flaws in my reasoning methods that have resulted in my drawing incorrect conclusions from the data I did have. You’ve been focusing me on the latter, and I’ve given you extended opportunity to make your case, because you’re (a) very smart and articulate and fun to talk to and (b) aligned with someone else I greatly admire. But actually all you’ve ended up doing is being frustrated by the limited amount of time I’m willing to allocate to the debate (even though for someone as busy as me it wasn’t very limited at all). That’s not actually all you’ve done, of course - from my POV, the main thing you’ve done is reinforce my confidence that the way I make decisions works well, by failing to show me a practical case where it doesn’t.
I'm not frustrated. I like you. I'm trying to speak to important issues unemotionally.

If I were to be frustrated, it would not be by you. I talk to a lot of people. I bet you can imagine that most are much more frustrating than you are.

Suppose I were to complain that people don't want to learn to think better, don't want to contribute to philosophy, don't want to learn the philosophy that would let them go be effective in other fields, don't want to stop approximately destroying the minds of approximately all children, etc.

Would I be complaining about you? No, you'd be on the bottom of the list. You're already doing something very important, and doing it well enough to make substantial progress. For the various non-SENS issues, others ought to step up.

Further, I don't know that talking with me will help with SENS progress. On the one hand, bad philosophy has major practical consequences (more below). But on the other hand, if you see things more my way, it will give you less common ground with your donors and colleagues. One fights the war on aging with the army he has, now not later. If the general changes his worldview, but no one else does, that can cause serious problems.

Maybe you should stay away from me. Reason is destabilizing (and seductive), and maybe you – rightly – have higher priorities. While there are large practical benefits available (more below), maybe they shouldn't be your priority. People went to space and built computers while having all sorts of misconceptions. If you think current methods are enough to achieve some specific SENS goals, perhaps you're right, and perhaps it's good for someone to try it that way.

So no I'm not frustrated. I can't damn you, whatever you do. I don't know what you should do. All I can do is offer things on a voluntary basis.

The wrong way of thinking slows progress in fields. Some examples:

The social sciences keep doing inadequately controlled, explanationless, correlation studies because they don't understand the methods of making scientific progress. They're wasting their time and sharing false results.

Quantum physicists are currently strongly resisting the best explanation (many worlds). Then they either try to rationalize very bad explanations (like Copenhagen theory) or give up on explanations (i.e. shut up and calculate). This puts them in a very bad spot to improve physics explanations.

AI researchers don't understand what intelligence is or how knowledge can be created. They don't understand the jump to universality, conjectures and refutations, or the falseness of induction and justificationism. They're trying to solve the wrong problems and the field has been stuck for decades.

Philosophers mostly have terrible ideas and make no progress. And spread those bad ideas to other fields like the three examples above.

Feynman offers some examples:

http://neurotheory.columbia.edu/~ken/cargo_cult.html
I explained to her that it was necessary first to repeat in her laboratory the experiment of the other person--to do it under condition X to see if she could also get result A, and then change to Y and see if A changed. Then she would know that the real difference was the thing she thought she had under control.

She was very delighted with this new idea, and went to her professor. And his reply was, no, you cannot do that, because the experiment has already been done and you would be wasting time. This was in about 1947 or so, and it seems to have been the general policy then to not try to repeat psychological experiments, but only to change the conditions and see what happened.
Repeating experiments is wasting time? What a stupid field that isn't going to figure anything out (and indeed it hasn't). And Feynman goes on to discuss how someone figured out how to properly control rat maze running by putting in sand so they can't hear their footsteps – and that got ignored and everyone just kept doing inadequately controlled rat studies.


What about medicine or biology? I don't know the field very well but I've seen articles saying things like:

http://articles.mercola.com/sites/articles/archive/2012/07/12/drug-companies-on-scientific-fraud.aspx
Former drug company researcher Glenn Begley looked at 53 papers in the world's top journals, and found that he and a team of scientists could NOT replicate 47 of the 53 published studies—all of which were considered important and valuable for the future of cancer treatments!
Stuff like this worries me that perhaps current methods are not good enough for SENS to work. But somehow despite problems like this, tons of medicine does work. Maybe it's OK, somehow. More on this below.

http://www.ahrp.org/cms/content/view/846/94/
Many journals don’t even have retraction policies, and the ones that do publish critical notices of retraction long after the original paper appeared—without providing explicit information as to why they are being retracted.
The article has various unpleasant stats about retractions.
It is worth noting that the results of *most negative clinical trials are never published*—neither are they disclosed anywhere, except in sponsors’ confidential files and FDA marketing submissions.
95% confidence is useless if there were 19 unpublished failures. Even one unpublished negative result matters a lot. Not publishing negative results is a huge problem.
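
To put a number on that (a minimal sketch with made-up but standard figures, not data from the article): even if a treatment does nothing at all, a trial run at the usual 5% significance level still "succeeds" by chance about 5% of the time, so one published success alongside 19 unpublished failures is roughly what chance alone predicts.

# Probability that at least 1 of 20 independent trials of a useless treatment
# reaches p < 0.05 purely by chance.
alpha = 0.05   # per-trial false-positive rate
trials = 20    # e.g. 19 unpublished failures plus 1 published "success"
p_some_false_positive = 1 - (1 - alpha) ** trials
print(round(p_some_false_positive, 2))   # ~0.64: better-than-even odds of a chance "success"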

http://www.retractionwatch.com/2014/11/03/shigeaki-kato-up-to-28-retractions-with-three-papers-cited-nearly-700-times/
Former University of Tokyo researcher Shigeaki Kato has notched his 26th, 27th, and 28th retractions, all in Nature Cell Biology. The three papers have been cited a total of 677 times.
Note how much work is built partly on top of falsehoods. Lots more retraction info on that blog; it's not pretty.

Note that all of these examples are relevant to fighting aging, not just the medical stuff.

You never know when a physics breakthrough will have an implication for chemistry which has an implication for biology.

You never know when progress in AI could lead to uploading people into computers and making backup copies.

Better social sciences or psychology work could have led to better ways to handle the pro-aging trance or better ways to deal with people to get large donations for SENS.

So many academic papers are so bad. I've checked many myself and found huge problems with a majority of them. And there's the other problems I talked about above. And the philosophy errors I claim matter a lot.

So, how does progress happen despite all this?

How come you're making progress while misunderstanding thinking methods? Does it matter?

Here's my perspective.

Humans are much better, more awesome, powerful and rational things than commonly thought. Fallible Gods. Really spectacular. And this is why humans can still be effective despite monumental folly. Humans are so effective that even e.g. losing 99% of their effectiveness to folly (on average, with many people being counterproductive) leaves them able to make progress and even create modern civilization.

And it's a testament to the human spirit. So many people suffer immensely, grit their teeth, and go on living – and even producing – anyway. Others twist themselves up to lie to themselves that they aren't suffering while somehow not knowing they're doing this, which is hugely destructive to their minds, and yet they go on with life too.

I think it's like Ayn Rand wrote:
"Don't be astonished, Miss Taggart," said Dr. Akston, smiling, "and don't make the mistake of thinking that these three pupils of mine are some sort of superhuman creatures. They're something much greater and more astounding than that: they're normal men—a thing the world has never seen—and their feat is that they managed to survive as such. It does take an exceptional mind and a still more exceptional integrity to remain untouched by the brain-destroying influences of the world's doctrines, the accumulated evil of centuries—to remain human, since the human is the rational."
John Galt is a normal man. That is what's possible. You fall way short of him. Philosophy misconceptions and related issues drop your effectiveness by a large factor, but you lack examples of people doing better so the problem is invisible to you. Most people are considerably worse off than you.

The world doesn't have to be the way it is. So much better is possible. BoI says the same thing in several ways, some subtle, I don't know if you would have noticed.

People do so much stuff wrong, drop their effectiveness massively, and then have low expectations about what humans can do.

It's important to understand that if you have problems, even huge ones, you won't automatically or presumably notice them. And actually you should expect to have all sorts of problems, some huge, some unnoticed – you're fallible and only at the beginning of infinity (of infinite progress). This makes it always important to work on philosophy topics like how problems are found and solved. It should be a routine part of every life to work on that kind of thing, because problems are part of life.

Here's a specific example. The Mitochondrial Free Radical Theory of Aging, by Aubrey de Grey, p 85:
In gerontology, as in any field of science, the development of a hypothesis involves a perpetual oscillation between creative and analytical thinking. Advances of understanding are rarely achieved by purely deductive analysis of existing data; instead, scientists formulate tentative and incomplete generalisations of that data, which allow them to identify which questions are useful to ask by further observation or experiment. ...

The above is, in fact, so universally accepted as a cornerstone of the scientific method that some may wonder why I have chosen to belabor it. I have three reasons.
This is all wrong. Tons of errors despite being short and – as you say – widely accepted.

Does it matter? Well, you wouldn't have written it if you didn't think it mattered.

Since your current concern is whether my claims matter, I'm going to focus on why they do, rather than arguing why they are true. So let's just assume I'm right about everything for a minute. What are the consequences of that?

One mistake in the passage is the deduction/data false dichotomy for approaches. This has big practical consequences because people look for progress in two places, both wrong. That they figure anything out anyway is a testament – as above – to how amazing humans are.

It also speaks to how much people's actual methods differ from their stated methods. People routinely do things like say they are doing induction, like you imply in the passage. Even though induction is impossible and has never been used to figure anything out a single time in human history. So then what you must actually do is think a different way, get an answer, and then credit induction for the answer.

Is this harmless? No! Lots of times they try to do induction or some other wrong method and end up with no answer. There are so many times they didn't figure anything out, but could have. People get stuck on problems all the time. Not consciously or explicitly understanding how to think is a big aspect of these failures.

Knowing the right philosophy for how to think allows one to better compare what one is doing to the right way. Everyone deviates some and there's room for improvement. Most people deviate a lot, so there's tons of room for improvement.

And understanding what you're doing exposes it to criticism better. The more thinking gets done in a hidden and misunderstood way, the more it's shielded from criticism.

Understanding methods correctly also allows a much better opportunity to come up with potentially better methods and try different things out. You could improve the state of the art. Or if someone else makes a breakthrough, then understanding what's going on puts you in a much better position to use his innovation.

You have an idea about a pro-aging trance. It's a sort of philosophical perspective on society, far outside your scientific expertise. How are you to know if it's right? By doing all the philosophy yourself? That'd be time consuming, and you've acknowledged philosophy is a serious and substantive field and you don't have as much expertise to judge this kind of question as you could. Could you outsource the issue? Consult an expert? That's tough. How do you know who really is a philosophy expert, and who isn't, without learning the whole field yourself? Will you take Harvard or Cambridge's word for it? I really wouldn't recommend that. Many prestigious philosophers are terrible.

What if you asked me? I think whether I said you're right or wrong about the pro-aging trance, either way, you wouldn't take my word for it. That's fine. This kind of thing is really hard to outsource and trust an answer without understanding yourself. Whatever I said, I could give some abbreviated explanations and it's possible you'd understand, but also quite possible you wouldn't understand my abbreviated explanations and we'd have to discuss underlying issues like epistemology details.

And the issue isn't just whether your pro-aging trance idea is right or not. Maybe it's a pretty good start but could be improved using e.g. an understanding of anti-rational memes.

And if it's right, what should be done about it? Maybe if you read "How Does One Lead a Rational Life in an Irrational Society?" by Ayn Rand, you'd understand that better. (Though that particular essay is hard to understand for most people. It clashes with lots of their background knowledge. To understand it, they might need to study other Rand stuff, have discussions, etc. But then when one does understand all that stuff, it matters, including in many practical ways.)

I think millions of people won't shift mindsets as abruptly as you hope. One reason is anti-life philosophies, which you don't address. And I don't think you know what those are, as I mean them.

One aspect of this is that lots of people don't like their lives. They aren't happy, they aren't having a good time. Most of them won't admit this and lie about it. And it's not like they only dislike their lives, they like some parts too, it's mixed. Anyway they don't want to admit this to themselves (or others). Aging gives them an excuse, a way out, without having to face that they don't like their lives (and also without suicide, which is taboo, and it's hard for people to admit they'd rather be dead).

There's other stuff too, which could be explained much faster if you had certain philosophical background knowledge I could reference. The point for now is there's a bunch of philosophical issues here and getting them right matters to SENS. You basically say people are rationalizing not having effective anti-aging technology, and that does happen some, but there's other things going on too. Your plan as you present it is focused on addressing the doubts that anti-aging technology is ready, but not other obstacles.

Does it matter if you're right about the pro-aging trance? Well, you think so, or you wouldn't bring it up. One reason it matters is because if the pro-aging trance doesn't end, it could prevent large-scale funding and effort from materializing. And some other things besides doubts about SENS effectiveness may also need to be addressed.

For example, there's bad parenting. This does major harm to the minds of children, leaving them less able to want and enjoy life, less able to think rationally, and so on. Dealing with these problems – possibly by Taking Children Seriously, or something focused on helping adults, or a different way – may be important to SENS getting widespread acceptance and funding. It's also important to the quality of scientists available to keep working on SENS, beyond the initial stages, as each new problem at later ages is found.

Part of what the pro-aging trance idea is telling people is that there's this one major issue which people are stuck on and have a coping strategy for. And you even present this coping as a legitimate, reasonable way to deal with a tough situation. This underplays how irrational people are, which is encouraging to donors by being optimistic. As mentioned earlier, sometimes people succeed at stuff, somehow, despite big problems, so SENS stuff could conceivably work anyway. But it may be that some of the general irrationality issues with society are going to really get in the way of SENS and need more addressing.

(And people learning epistemology is a big help in dealing with those. If people understand better how they are thinking, and how they should think, that's a big step towards improving their thinking.)

Ending Aging by Aubrey de Grey:
The most immediately obvious actions would be to lobby for more funding for rejuvenation research, and for the crucial lifting of restrictions on federal funding to embryonic stem cell research in the United States, by writing letters to your political representatives, demanding change.
The very questionable wisdom of government science is a philosophical issue with practical consequences like whether people should actually do this lobbying. Perhaps it'd help more to lobby for lower taxes and for government+science separation instead. Or maybe it'd be better to create a high quality Objectivist forum which can teach many people about the virtues of life, of science, of separating the government from science, and more.

This is an example of a philosophical issue important to SENS. Regardless of whether you're right in this case, getting philosophical issues like this correct at a higher rate is valuable to SENS.
I’ve had a fairly difficult time convincing my colleagues in biogerontology of the feasibility of the various SENS components, but in general I’ve been successful once I’ve been given enough time to go through the details. When it comes to LEV, on the other hand, the reception to my proposals can best be described as blank incomprehension. This is not too surprising, in hindsight, because the LEV concept is even further distant from the sort of scientific thinking that my colleagues normally do than my other ideas are: it’s not only an area of science that’s distant from mainstream gerontology, it’s not even science at all in the strict sense
Here you're trying to use philosophical skills to advance SENS. You're trying to do things like understand why people are being irrational and how to deal with it. Every bit of philosophical skill could help you do this better. Elliotism contains valuable ideas addressing this kind of problem.

OK, so, big picture. The basic thing is if you know the correct thinking methods, instead of having big misconceptions about how you think, you can think better. This has absolutely huge practical consequences, like getting more right answers to SENS issues. I've gone through some real life examples. Here are some simplified explanations to try to get across how crucially important epistemology is.

Say you're working on some SENS issue and the right thinking method in that situation involves trying five different things to get an answer. You try three of them. Since you don't know the list of things to do, you don't realize you missed two. So, assuming each of the five is the one that gets the answer about equally often, 40% of the time you get stuck on the issue instead of solving it.

Later you come up with a bad idea and think it over and look for flaws. You find two but don't recognize them as flaws due to philosophy misconceptions. You miss another flaw because you don't try a flaw-finding method you could have. Even if you knew that method, you still might skip it because you don't understand how thinking works, how you're thinking about an issue, and when to use that method.

Meanwhile, whenever you think about stuff, you spend 50% of your time on induction, justificationism, and other dead ends. Only half your thinking time is productive. That could easily be the case. The ratio could easily be worse than that.

And you have no experiences which contradict these possibilities. How would you know what it's like to think way more effectively, or that it's possible, from your past experiences? That you've figured out some stuff tells you nothing about what kind of efficiency rate you're thinking at. Doing better than some other people also does not tell you the efficiency rate.

These problems are the kinds of things which routinely happen to people. They can easily happen without being noticed. Or if some of the negative consequences are noticed, they can be attributed to the wrong thing. That's common. Like if a person believes he does thinking by some series of false and irrelevant steps, he'll try to figure out which of those steps has the problem and try some adjustments to those steps. Whereas if he knew how he actually thought, he'd have a much better opportunity to find and fix his actual problems.

You may find these things hard to accept. The point is, they are the situation if I'm right about philosophy. So it does matter.

Continue reading the next part of the discussion.


Aubrey de Grey Discussion, 18

I discussed epistemology and cryonics with Aubrey de Grey via email. Click here to find the rest of the discussion. Yellow quotes are from Aubrey de Grey, with permission. Bluegreen is me, red is other.
Why are ideas with more variants better, more likely to be true, or something like that? And what is the Aubreyism thing to say there, and how does that concept work in detail?
Because they have historically turned out to be. Occam’s Razor, basically.
How do you know what happened historically? How does that tell you what will work in a particular case now?

What you wrote is a typical inductivist statement. The idea is there are multiple observations of history supporting the conclusion (that ideas with more variants turn out to be better). Then add an inductive principle like "the future is likely to resemble the past". Meanwhile no explanation is given for why this conclusion makes sense. Is induction what you mean?
Yes it is what I mean. I agree, we have no explanation for why the future has always resembled the past, and thus no basis for the presumption that it will continue to do so. So what? - how does Elliotism depart from that? And more particularly, how do you depart from it in your everyday life?
Popper (and DD) refuted induction. How do you want to handle this? Do you want me to rewrite the content in their books? I don't think that's a good approach.

Do you think the major points you're contradicting of Popper's (and DD's) work have been refuted, by you or someone else? If not, why reject them?

My friend thinks I should copy/paste BoI passages criticizing induction and ask if you have criticism. But I think that will encourage ad hoc replies out of context. And it's hard to judge which text to include in a quote for someone else. And I don't think you want to read from books. And I haven't gotten a clear picture of what you want to know or what would convince you, or e.g. why you think induction works. What do you think?
Also that isn't Occam's Razor, which is about favoring simpler ideas. More variants isn't simpler. At least I don't think so. Simpler is only defined vaguely, which does allow arbitrary conclusions. (There have been some attempts to make Occam's Razor precise, which most people aren't familiar with, and which don't work.)
Ah, I see the answer now. More variants is simpler, yes, because there’s a fixed set of things that can vary, each of which is either relevant or irrelevant to the decision one is trying to make. So, having more variants is the consequence of having more things that can vary be irrelevant to the decision one is trying to make - which is the same as having fewer be relevant. Which is also the same as being harder to vary in the DD sense, if I recall it correctly.
- The coin flipping procedure wouldn't halt. So what good is it?
I’m not with you. Why wouldn’t it halt? It’s just a knockout tournament starting with 2^n players. Ah, are you talking about the infinite case? There, as I say, one indeed doesn’t do the flipping, one uses the densities. A way to estimate the densities would be just to sample 100 ideas that are in one of the two competing groups and see how many are in which group.
Yes I meant the infinite case. By sample do you mean a random sample? In the infinite case, how do you get a random sample or otherwise make the sample fair?
Yes I mean random. I don’t understand your other question - why does it matter what randomisation method I use?
The random sampling you propose is impossible to do. There is no physical process that randomly samples from an infinite set with equal probability.

Even setting infinity aside, I don't think your proposal was to enumerate every variant on a numbered list and then do the random sample using the list. Because why sample to estimate when you already have that list? But without a list of the ideas (or equivalent), I don't know how you suggest to do the sampling, without infinity, either.

This would be easier to comment on if it was more clear what you were proposing. And I prefer not to assume people are proposing impossible nonsense, rather than asking what they mean (whereas you think Elliotism's timeliness is impossible, and prefer to claim that without specifics, over asking more about how Elliotism works). And I won't be surprised if you now say you actually meant something that's unlike what I think sampling is, or say you don't care if the sampling is unfair or arbitrary (which I tried to ask about but didn't get a direct reply to).

It seems like your position is ad hoc and you hadn't figured out in advance how it works (e.g. working out the issues with sampling), figured out what the problems in the field to be addressed are, or researched previous attempts at similar positions or alternatives (and you don't want to research them, preferring to reinvent the wheel for some reason?).
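
For what it's worth, here's a minimal sketch of the knockout-tournament procedure in the finite case (my own code, assuming fair coin flips and a hypothetical list of 2^n ideas; not something you've specified). Each entrant wins with probability (1/2)^n, so the tournament is just a uniform random draw from whatever list of variants got written down, which is the arbitrariness I'm pointing at.

import random
from collections import Counter

def coin_flip_tournament(ideas):
    # Single-elimination tournament over 2^n ideas, each match decided by a
    # fair coin flip. Returns the surviving idea.
    assert len(ideas) & (len(ideas) - 1) == 0, "needs 2^n entrants"
    while len(ideas) > 1:
        ideas = [random.choice(pair) for pair in zip(ideas[0::2], ideas[1::2])]
    return ideas[0]

entrants = ["sushi"] * 6 + ["pizza"] * 2   # hypothetical: 6 sushi variants, 2 pizza variants
wins = Counter(coin_flip_tournament(entrants) for _ in range(100000))
print(wins["sushi"] / 100000)   # ~0.75, i.e. exactly the entrants' head count (6/8)
# The outcome distribution only reflects how many variants of each option were
# listed, so the listing step is doing all the work.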
Also, could you provide an example of using your method?
I think I’ve answered that above, by my explanation of why seeking the alternative with more close variants is the same as Occam’s razor.
I mean an example like:

We're trying to decide what to get for dinner. I propose salmon sushi or tuna sushi. You propose pizza. We get sushi with 67% odds. Is that how it's supposed to work? (Note I only know the odds here because I have a full list of the ideas.)

But wait. I don't care what God's favorite natural number is; that's irrelevant. So there's infinite sushi variants like, "Get salmon sushi, and God's favorite natural number is 5" (vary the number).

Now what? Each idea just turned into infinite variants. Do we now say there are 2*infinity variants for sushi, and 1*infinity for pizza? And get sushi with what odds?

Should we have a sort of competition to see who can think up the most variants for their dinner choice to increase its odds? Will people who are especially clever with powersets win arguments, since they can better manufacture variants?

Or given your comments above about hard to vary, should I perhaps claim that there are fewer types of sushi than of pizza, so sushi is the better meal?


Could you adjust the example to illustrate how your approach works? I don't know how to use it.
Or perhaps you'll explain to me there's a way to live with a bunch of unanswered questions – and a reason to want to.
I think that’s exactly what I’m doing - Aubreyism is precisely that.
But you just attempted to give answers to many questions, rather than tell me why those questions didn't need answers.
Um, sure - my answers were an explanation for why a bunch of OTHER questions don’t need answers.
What are some example questions that don't need answers?
Excessive rumination is something you – but not me – think is a consequence of Elliotism. A consequence of what specific things, for what reason, I'm unclear on. Tell me.
Well, for example, I think caring about what randomisation method to use (above) is excessive rumination.
I think you're dramatically underestimating the complexity of epistemology and the importance of details, and treating epistemology unlike you treat biology. In science, I think you know that details matter, like what sampling method is used in an experiment. And you know in general that seemingly minor details can change the results of experiments and can't just be ignored.

I think you see epistemology as a field where smart amateurs can quickly make stuff up that sounds about right and reasonably expect to do as well as anyone, whereas you wouldn't treat biology that way. You don't treat epistemology like a rigorous science.

This is common. Many scientists make statements straying into epistemology and other areas of philosophy (and sometimes even politics), and claim their scientific expertise still applies (and many people in the audience seem to accept this). They don't recognize field boundaries accurately, or recognize that there is a lot to learn about philosophy (or politics) that wasn't in their science education. This happens routinely.

A good example: Estep and other scientists wrote a criticism of SENS that discussed a bunch of philosophy of science (which is a sub-field of epistemology). No one writing it even claims philosophy credentials. Yet they act like they're writing within their expertise, not outside it. It was then judged by expert judges, none of whom were selected for having philosophy expertise. And it's presented as expert discussion even though there's a bunch of philosophy discussion but no philosophy experts. Look at their own summary:

http://www2.technologyreview.com/sens/docs/estepetal.pdf
1) SENS is based on the scientifically unsupported speculations of Aubrey de Grey, which are camouflaged by the legitimate science of others; 2) SENS bears only a superficial resemblance to science or engineering; 3) SENS and de Grey’s writings in support of it are riddled with jargon-filled misunderstandings and misrepresentations; 4) SENS’ notoriety is due almost entirely to its emotional appeal; 5) SENS is pseudoscience. We base these conclusions on our extensive training and individual and collective hands-on experience in the areas covered by SENS, including the engineering of biological organisms for the purpose of extending life span.
2,4,5 are primarily philosophy issues. 1 and 3 are more of a mix because they partly raise issues of whether some specific scientific SENS arguments are correct. Then after making mostly philosophy claims, they say they base their conclusions on their scientific expertise. (Note: how or whether to base conclusions is an epistemology issue too.)

Then you thought I'd have to rely on your answer to Estep to find fault with his paper, even though philosophy is my field.

Do you see what I'm talking about? My position is that philosophy is a real field, which has knowledge and literature that matter. And you won't understand it if you don't treat it that way. What do you think?

I think my interest in the sampling method is a consequence of my mathematical knowledge, not of Elliotism.

It won't have been excessive even if I'm mistaken, because if I'm mistaken (and you know better) then I'll learn something. Or do you think it would be somehow excessive to want to learn about my mistake, if I'm wrong?

I don't see how I could use Aubreyism (on purpose, consciously) without knowing how to do the sampling part. That strikes me as pretty important, and I don't understand how you expect to gloss it over. I also don't see why I should find Aubreyism appealing without having an answer to my arguments about sampling (and some other arguments too).

Regardless, if there was a reason not to question and ruminate about some category of things, I could learn that reason and then not do it. So excessive rumination would not be built into Elliotism. It wouldn't be a problem with Elliotism, only potentially a problem with my ignorance of how much to ruminate about what.

Elliotism says that "how much to ruminate about what" is a topic open to knowledge creation. How will making the topic open to critical thinking lead to the wrong answer? What should be done instead?

So I ask again: why is excessive rumination a consequence of Elliotism? Which part of Elliotism causes or requires it? (And why don't you focus more on finding out what Elliotism is, before focusing on saying it's bad?)
I wrote about how the amount of time (and other resources) used on an arbitration is tailored to the amount of time one thinks should be used. I'm not clear on what you objected to. My guess is you didn't understand, which I would have expected to prompt more clarifying questions.
Maybe I don’t understand, but what you’ve seemed to be saying about that is identical to what I’m saying I do - triaging what you elsewhere describe as Elliotism, by reaching a point where you’re satisfied not to have answers.
I think you don't understand, and have been trying to teach me induction (among other things), and arguing with me. Rather than focusing on the sort of question-asking, misunderstanding-and-miscommunication-clearing-up, and other activities necessary to learn a complex philosophy like CR or Elliotism.

This is something I don't know how to handle well.

One difficulty is I don't know which parts of my explanations you didn't understand, and why. I've tried to find out several times but without much success. Without detailed feedback on my initial explanations, I don't know what to change (e.g. different emphasis, different details included, different questions and criticisms answered) for a second iteration to explain it in a way more personalized to your worldview. Communicating about complex topics and substantial disagreements typically requires many iterations using feedback.

I did try explaining some things multiple ways. But there are many, many possible ways to explain something. Going through a bunch semi-randomly without feedback is a bad approach.

I think there's also confusion because you don't clearly and precisely know what your position is, and modify it ad hoc during the discussion – often trying to incorporate points you think are good without realizing how they contradict other aspects of your position (e.g. incorporating DD's epistemology regarding "hard to vary", while using Occam's razor, which is contradicted by DD's epistemology). Above you say, "Ah, I see the answer now," (regarding redefining Occam's Razor after introducing it) indicating that you're working out Aubreyism as you go along and it's a moving target. This nebulous and changing nature makes Aubreyism harder to differentiate from other positions, and also serves to partially immunize it from criticism by not presenting clear targets for criticism. (And it's further immunized because you accept things like losing, arbitrariness and subjectivity – so what's left to criticize? Even induction, which Popper says is an impossible myth, becomes possible again if you're willing to count reaching arbitrary conclusions as "induction".)

By contrast, my epistemology position hasn't changed at all during this discussion, and has targets for criticism such as public writing.

Also your figure-stuff-out-as-you-go approach makes the discussion much longer than if you knew the field and your position when we started. I don't mind, but it becomes unfair when you blame the discussion length on me and complain about it. You think I ask too many questions. But I don't know what you think I should do instead. Make more assumptions about what your positions are, and criticize those?

An example is you say you use some CR. But CR is a method of dealing with issues, of reaching conclusions. So what's left to do after that? Yet you, contrary to CR, want to have CR+triage. (And this while you don't really know what CR is.) And then you advocate justificationism and induction, both of which contradict the CR you claim to be (partly) using. I don't know what to make of this without asking questions. Lots of questions, like to find out how you deal with these issues. I could phrase it more as criticism instead of questions, but questions generally work better when a position is vague or incomplete.

(Why didn't I mention all of these things earlier? Because there's so many things I could mention, I haven't had the opportunity to discuss them all.)

Perhaps I should have written more meta discussion sooner, more like I've done in this email, rather than continuing to try in various ways to get somewhere with substantive points. DD for one would say I shouldn't be writing meta discussion even now. There are a bunch of ways meta discussion is problematic. Perhaps you'll like it, but I'm not confident.

One of DD's common strategies would be to delete most of what you write every email and ask a short question about a point of disagreement, and then repeat it (maybe with minor variations, or brief comments on why something isn't an answer) for the next three emails, without explaining why it matters. Usually ends badly. Here's an example of how I could have replied to you, in full:
On Nov 2, 2014, at 9:22 AM, Aubrey de Grey wrote:
On 28 Oct 2014, at 02:39, Elliot Temple wrote:
In the infinite case, how do you get a random sample or otherwise make the sample fair?
why does it matter what randomisation method I use?
Do you believe that all possible sampling methods would be acceptable?

If not, then in the infinite case, how do you get a random sample or otherwise make the sample fair?
This approach controls the discussion, avoids meta discussion, and is short. If you want me to write to you in this style, I can do that. But most people don't like it. It also needs a larger number of iterations than is necessary with longer emails.

I instead (in broad strokes) tried to explain where I was coming from earlier on, and now have been trying to explain why your position is problematic, and throughout I've tried to answer your questions and individual points you raise. Meanwhile you do things like ask what would persuade me, but don't answer what would persuade you. And you talk about how Aubreyism works while not asking many questions about how Elliotism works. And you make claims (e.g. about Elliotism having a timeliness flaw) and I respond by asking you questions to try to find out why you think that, so I can answer, so then you talk about your ideas more instead of finding out how Elliotism works.

I let this happen. I see it happening, see problems with it, but don't know how to fix it. I'm more willing than you to act like a child/learner/student, ask questions and not control discussion. And I have more patience. I don't think this discussion flow is optimal, but I don't know what to do about it. I don't know how to get someone to ask more questions and try to learn more. Nor do I know how to explain something to someone, so that they understand it, without adequate feedback and questions regarding my initial explanation, to give me some indication of where to go with iteration 2 (and 3 and 4). When the feedback is vague or non-specific, or sometimes there is none, then what is one to say next? Tough problem.

Big picture, one can't force a mind, and one can't provide the initiative or impetus for someone to learn something. People make their own choices. I think it's mostly out of my hands. Sometimes I try to explain to people what methods they'll have to use if they want to learn more (e.g. ask more questions), but it usually goes badly, e.g. b/c they say "Well maybe you should learn more" (I'm already trying to, very hard, and they aren't, and they're trying to lie about this reality) or they just don't do it and don't tell me what went wrong.
Why do you think Elliotism itself is lacking, rather than the lacking being in your incomplete understanding of Elliotism?
I could equally ask "Why do you think Elliotism itself is not lacking, rather than the lacking being in your incomplete understanding of Elliotism?"
I'm open to public debate about this, with all comers. I've been taking every reasonable step I can figure out to find out about these things, while also being open to any suggestions from anyone about other steps to take.

Additionally, I have studied the field. In addition to reading things like Popper, I've also read about other approaches. And have sought out discussion with many people who disagree. I've made an extensive effort to find out what alternative views there are, and what's good about them, and what criticisms they have relevant to CR and Elliotism.

This includes asking people if they know anything to look into more, anyone worth talking to, etc. And looking at all those leads. It also includes work by others besides myself. There has been a collaborative effort to find any knowledge contrary to Popper.

E.g. an Australian Popperian looked over the philosophy books being taught in the Australian universities to check for anything good. He later checked over 200 university philosophy curriculums, primarily from the US, using their websites. Looking for new ideas, new leads, material not already refuted by Popper, material that may answer one of Popper's arguments, anything unexpected, and so on. (Nothing good was found.)


This is not to say Elliotism is perfect, but I've made an extensive effort to find and address flaws, and continue to make such an effort. If there are any flaws, no one knows them, or they're keeping the information to themselves. (Or in your case, we can consider the matter pending, but so far you haven't presented any new challenge to CR or Elliotism.)

What I've found is there are a lot of CR and Elliotism arguments which no one has refutations of. But e.g. there are no unanswered inductivist arguments.


A more parallel question to ask me is why I think induction is lacking, rather than the lacking being with my understanding of induction. The reason is because I've made every effort to find out about induction and how it works and what defenses of it exist for the criticisms I have.

Induction could be better than I know – but in that case it's also better than any inductivist knows, too. It's better in some unimagined way which no one knows about. (Or maybe some hermit knows and hasn't told anyone.)

The current state of the debate – which I've made every effort to advance, and which anyone may reply to whenever they want – is that induction faces many unanswered questions and criticisms, while CR/Elliotism don't. Despite serious and responsible effort, I have been unable to find any inductivist or writing with information to the contrary.

Whereas with Elliotism, you're just initially encountering it and don't know much about it (or much about the rest of the field), so I think you should have a more neutral undecided view.


None of these things would be a major issue if you wanted to simply debate some points, in detail, to a conclusion. But they become major issues when you consider giving up on the discussion, try to form an opinion without answering some of my arguments, think questioning aspects of your position is excessive rumination, don't want to read some arguments relevant to your claims (which is like a form of judging ideas by source instead of content: you treat the sources "written in a book by Popper" or "written on a website by Elliot" differently than the source "written in an email by Elliot"), etc.
Recall: my claim is that you actually perform Aubreyism, you just don’t realise it. It could be that I understand Elliotism better than you, just as it could be that you understand it better than I. Right?
Elliotism is not defined by what I actually do.

For example, if what I actually do involves any induction ever, then Elliotism is false. In that case, you'd be right about that and I'd be wrong. But that wouldn't mean you understand what Elliotism is better than me.
How could we know? Using Aubreyism, we’d know by looking at how you and I have actually made decisions, changed our minds etc in the past, and comparing those actions with the descriptions of Aubreyism and Elliotism. Using Elliotism as you describe it, I’m not sure how we would decide.
If you could find any counter-example to Elliotism from real life, that would refute it.

By a counter-example I mean something that contradicts Elliotism, not merely something Elliotism says is unwise. If I or anyone else did something Elliotism says is impossible, Elliotism would be false.

If it turned out that I wasn't very good at doing Elliotism, but did nothing that contradicts what Elliotism claims about reality, then it could still be the case that people can and should do exclusively Elliotism.

What I (and you) personally do has little bearing on the issues of what epistemology is true.


A different way to approach these things is critical discussion focusing on what explanations and logic make sense. What should be done, and why? What's possible to do? What plans about what to do are actually ambiguous and ill-defined?

For example, induction is a lot like saying, "Take a bunch of data points. Plot them on a graph. Now draw a curve connecting them and continue it along the paper too. Now predict that additional data points will (likely) fall on that curve." But there are infinite such curves you could draw, and induction doesn't say which one to draw. That ambiguity is a big non-empirical problem. (Some people have tried to specify which curve, but there are problems with their answers.)
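To make the infinitely-many-curves point concrete, here's a minimal sketch (my own illustration with made-up data points, not anyone's published example):

    # Two curves that agree on every observed data point but disagree about the
    # next one. You can manufacture as many such curves as you like.
    import numpy as np

    xs = np.array([0.0, 1.0, 2.0, 3.0])   # observed data points
    ys = np.array([1.0, 2.0, 5.0, 10.0])

    # Curve A: the unique cubic through the four points.
    base = np.poly1d(np.polyfit(xs, ys, deg=len(xs) - 1))

    # Curve B: add any multiple of (x-0)(x-1)(x-2)(x-3), which is zero at every
    # observed x, so it still passes through all the data.
    bump = np.poly1d(np.poly(xs))
    alt = base + 3.0 * bump   # the 3.0 is arbitrary; each choice gives another curve

    for x in list(xs) + [4.0]:
        print(x, round(base(x), 3), round(alt(x), 3))
    # Identical at x = 0 through 3; at x = 4 they predict different values.

The data alone can't tell you which curve to extrapolate from; that's the non-empirical gap.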

Note this initial argument about induction, like all initial arguments, doesn't cover everything in full. Because I don't know which additional details are important to your thinking, and there's far too many to include them all indiscriminately. The way to get from initial statements of issues to understanding generally involves multiple rounds of clarifying questions.
What about the win/win vs win/lose issue?
I go with arbitrary win/lose, i.e. coin flips.
Do you understand that that doesn't count as a "solution" for BoI's "problems are soluble"? By a solution DD means only a win/win solution. But you're trying to make losing and non-solutions a fundamental feature of epistemology, contrary to BoI. Do you have some criticisms of BoI? Do you think DD was mistaken not to include a chapter about how most problems will never be solved and you have to find a way to go through life that copes with losing in regard to most issues that come up?


Or instead of asking questions, should I simply state that you're contradicting BoI, have no idea what you're talking about, and ought to reread it more carefully? And add that I've seen the same misconceptions with many other beginners. And add that people who read books quietly on their own often come away with huge misunderstandings, so what you really need to do is join the Fallible Ideas discussion group and post public critical analysis as you go along (not non-specific doubts after finishing the book). It's important to discuss the parts of BoI you disagree with – using specific quotes while having the context fresh in memory – and it's important to do this with BoI's best advocates who are willing to have public discussions (they can be found on FI list, which was created by merging BoI list, TCS list, and a few others). If I was more pushy like this, would that help? I'm capable of a variety of styles and approaches, but have had difficulty soliciting information about what would actually be helpful to you, or what you want. This style involves less rumination, drawn-out discussion, etc. I'm guessing you won't appreciate it or want to refute its claims. What would you like? Tell me.
You might want to read Popper's essay "The Myth of the Framework".
I might, but on the other hand I might consider the time taken to do so to be a case of excessive rumination.
What would it take to persuade you of Elliotism or interest you in reading about epistemology? What would convince you Aubreyism is mistaken?

For example, will the sampling issue get your attention? Or will you just say to sample arbitrarily using unstated (and thereby shielded from criticism) subjective intuition? You've already recommended doing things along those lines and don't seem to mind, so what would you mind?
You could tell me which things you considered false from what I said, and why. I don't know which are Aubreyism-compatible and which contradict Aubreyism. And you could tell me how you think persuasion should work. It takes more communication.
Quite - maybe, excessively more.
How am I supposed to answer your objections if you don't tell them to me? Or if I'm not to answer them, what do you expect or want to happen?
What I was asking was, can you concisely summarise a particular, concrete thing about which your mind was changed? - a specific question (ideally a yes/no question) that you answer differently now than you did before you encountered DD and his ideas. And then can you summarise (as concisely as possible) how you came to view his position as superior to yours. I’m presuming that the thing will be a thing about how to make decisions, so your answer to the second question needs to be couched in terms of the decision-making method that you favoured prior to changing your mind.
Yes/no question: Is recycling a good idea? The typical residential stuff where you sort your former-trash for pickup.

My old position: yes.

DD's position: no.

What happened? A few arguments, like pointing out the human cost of the sorting. Links to some articles discussing issues like how much energy recycling plants use and how some recycling processes are actually destroying wealth. Answers to all questions and criticisms I had about the new position (I had some at the time, but don't remember them now).

Another thing I would do is take an idea I learned and then argue it with others who don't know it. Then sometimes I'd find I could win the argument no problem. But other times I'd run into some further issue to ask DD about.

In other words: arguments and discussion. That's it. There's no magic formula. You seem to think there are lessons to be learned from my past experience and want to know what they are. But I already incorporated them into Elliotism (and into my explanation of how persuasion can happen) to the extent that I know what they are. To the extent I missed something, I will be unable to tell you that part of my experience, even if I remember it, because I don't know it's important and I can't write everything down including every event I regard as unimportant.

If you want raw data, so you can find the parts you think are important, there are archives available. But if you want summary from me, then it's going to contain what I regard as the important parts, basically discussion, answering all criticisms and questions, reading supplementary material, etc, all the stuff I've been talking about.

The story regarding epistemology is similar to above, except spread out over many questions and over years. And it involves a lot of mixing of issues, rather than going one topic at a time. E.g. discussing parenting and education, or politics. Epistemology has implications for those fields, *and vice versa*.

One thing I can add, that I think was really helpful, is reading lots of stuff DD wrote (anywhere, to me or not). That provided good examples and showed what level of precise answering of all the issues is reasonably achievable. Though not fully at first – it takes a lot of skill not to miss 95% of what he's doing and getting right. And it takes skill to ask the right questions or otherwise find out more than his initial statement (there's always much more, though many people don't realize that). Early on, even if one isn't very good at this, one can read discussions he had with others and see what questions and counter-arguments they tried and see what happened, and see how DD always has further answers, and see what sorts of replies are productive, and so on. One can gradually get a better feel for these things and build up skill.


By an effort, people can understand each other and reality better. There's no shortcut. That's the principle, and it's my history. If you want to learn philosophy, you can do that. If you'd rather continue with ideas about how life is full of losing in arbitrary ways and induction, which are refuted in writing you'd rather skip reading, you can do that instead.

Continue reading the next part of the discussion.
