Tim Cook vs Freedom

Tim Cook is gay. He decided to tell the world and use it as an opportunity to campaign against freedom – while invoking the names of Dr. Martin Luther King and Robert F. Kennedy. Cook writes:
The world has changed so much since I was a kid. America is moving toward marriage equality, and the public figures who have bravely come out have helped change perceptions and made our culture more tolerant. Still, there are laws on the books in a majority of states that allow employers to fire people based solely on their sexual orientation. There are many places where landlords can evict tenants for being gay, or where we can be barred from visiting sick partners and sharing in their legacies. Countless people, particularly kids, face fear and abuse every day because of their sexual orientation.
In context, it's clear he's saying it's bad that people can be fired or evicted for being gay.

Cook opposes free trade. He opposes freedom of association. If I don't want to hire someone, with my money, isn't that an issue of freedom of association? Isn't it an issue of freedom not to spend my money on things I don't want? (And isn't it the same issue if I have hiring authority as a proxy for someone else?)

An employer should be able to fire people for no reason at all. Cook wants to make a list of government-approved and government-disapproved reasons for firing, so that we can live in a totalitarian country.

Cook doesn't want a free market where landlords use whatever criteria they deem best for deciding who to rent to. He wants the government to step in and control privately owned buildings. I advocate people interacting only for freely chosen mutual benefit, when they voluntarily want to. Cook advocates that I not be allowed to think for myself about homosexuality issues (is homosexuality so simple there's no room for diversity of opinion?). Cook wants his intolerance of some opinions to be enforced by the government, using guns if necessary.

Cook doesn't want free choice and free thought. He doesn't want freedom. He wants the government to decide how people should act, and make them. He's an authoritarian who wants to force his vision of utopia on everyone else, even though we don't want it.

And Cook is so blind to issues like freedom that it doesn't occur to him to comment on them. He doesn't bother trying to tell us how he isn't destroying freedom. He's so immersed in authoritarian thinking that he doesn't see any legitimate concerns about freedom. He hasn't noticed the issue of freedom and figured out a way to get what he wants while preserving freedom. Freedom isn't on his mind. Diversity of thought isn't on his mind. He's busy demanding "tolerance" of what's already tolerated (tolerance doesn't require liking something or trading with someone), but doesn't consider his own intolerance.

And all this is being said in a tone of moral righteousness. By attacking the American value of freedom, he thinks he's a moral crusader, standing up for justice. Cook values his privacy, but he thought trying to destroy the future of civilization was just so important he had to sacrifice his privacy for the cause.

And Cook is an altruist.
At the same time, I believe deeply in the words of Dr. Martin Luther King, who said: “Life’s most persistent and urgent question is, ‘What are you doing for others?’ ” I often challenge myself with that question, and I’ve come to realize that my desire for personal privacy has been holding me back from doing something more important. That’s what has led me to today.

Elliot Temple | Permalink | Messages (0)

Aubrey de Grey Discussion, 14

I discussed epistemology and cryonics with Aubrey de Grey via email. Click here to find the rest of the discussion. Yellow quotes are from Aubrey de Grey, with permission. Bluegreen is me, red is other.
If all you do is partial CR and have two non-refuted options, then they have equal status and should be given equal probability.

When you talk about amounts of sureness, you are introducing something that is neither CR nor dice rolling.
I think you answer this with this:
A reason "strong refutation" seems to make sense is because of something else. Often what we care about is a set of similar ideas, not a single idea. A refutation can binary refute some ideas in a set, and not others. In other words: criticisms that refute many variants of an idea along with it seem "strong".
That’s basically what I do. I agree with all you go on to say about closeness of variants etc, but I see exploration of variants (and choice of how much to explore variants) as coming down to a sequence of dice-rolls (or, well, coin-flips, since we’re discussing binary choices).
I don't know what this means. I don't think you mean you judge which variants are true, individually, by coin flip.

Maybe the context is only variants you don't have a criticism of. But if several won their coin flips, but are incompatible, then what? So I'm not clear on what you're saying to do.


Also, are you saying that amount of sureness, or claims criticisms are strong or weak (you quote me explaining how what matters is which set of ideas a criticism does or doesn't refute), play no role in what you do? Only CR + randomness?
Also, if you felt 95% sure that X was a better approach than Y – perhaps a lot better – would you really want to roll dice and risk having to do Y, against your better judgment? That doesn't make sense to me.
It makes sense if we remember that the choice I’m actually talking about is not between X and Y, but between X, Y and continuing to ruminate. If I’ve decided to stop ruminating because X feels sufficiently far ahead of Y in the wiseness stakes, then I could just have a policy of always going with X, but I could equally step back and acknowledge that curtailing the rumination constitutes dice-rolling by proxy and just go ahead and do the actual dice-roll so as to feel more honest about my process. I think that makes fine sense.
I think you're talking about rolling dice meaning taking risks in life - which I have no objection to. Whereas I was talking about rolling dice specifically as a decision making procedure for making choices. And that was in the context of making an argument which may not be worth looking up at this point, but there you have a clarification if you want.

To try to get at one of the important issues, when and why would you assign X a higher percent (aka strength, plausibility, justification, etc) than Y or than ruminating more? Why would the percents ever be unequal? I say either you have a criticism of an option (so don't do that option), or you don't (so don't raise or lower any percents from neutral). What specifically is it that you think lets you usefully and correctly raise and lower percents for ideas in your decision making process?

I think your answer is you judge positive arguments (and criticisms) in a non-binary way by how "solid" arguments are. These solidity judgments are made arbitrarily, and combined into an overall score arbitrarily. Your defense of arbitrariness, rather than clearly explained methods, is that better isn't possible. If that's right, can you indicate specifically what aspects of CR you consider sometimes impossible, in what kinds of situations, and why it's impossible?

(Most of the time you used the word "subjective" rather than "arbitrary". If you think there's some big difference, please explain. What I see is a clear departure from objectivity, rationality and CR.)
The ways to deal with fallibilism
Do you mean something different here than “fallibility”?
I meant fallibilism, but now that you point it out I agree "fallibility" is a clearer word choice.
are doing your best with your current knowledge (nothing special), and also specifically having methods of thinking which are designed to be very good at finding and correcting mistakes.
Sure - and that’s what I claim I do (and also what I claim you in fact do, even though you don’t think you do).
I do claim to do this. Do you think it's somehow incompatible with CR?

I do have some different ideas than you about what it entails. E.g. I think that it never entails acting on a refuted idea (refuted in the actor's current understanding). And never entails acting on one idea over another merely because of an arbitrary feeling that that idea is better.
You've acknowledged your approach having some flaws, but think it's good enough anyway. That seems contrary to the spirit of mistake correction, which works best when every mistake found is taken very seriously.
Oh no, not at all - my engagement in this discussion is precisely to test my belief that my approach is good enough.
Yes, but you're arguing for the acceptance of those flaws as good enough.
I realize you also think something like one can't do better (so they aren't really flaws since better isn't achievable). That's a dangerous kind of claim though, and also important enough that if it was true and well understood, then there ought to be books and papers explaining it to everyone's satisfaction and addressing all the counter-arguments. (But those books and papers do not exist.)
Not really, because hardly anyone thinks what you think. If CR were a widely-held position, there would indeed be such books and papers, but as far as I understand it CR is held only by you, Deutsch and Popper (I restrict myself, of course, to people who have written anything on the topic for public consumption), and Popper’s adherence to it is not widely recognised. Am I wrong about that?
I think wrong. Popper is widely recognized as advocating CR, a term he coined. And there are other Critical Rationalists, for example:

http://www.amazon.com/Critical-Rationalism-Metaphysics-Science-Philosophy/dp/0792329600

This two volume CR book has essays by maybe 40 people.

CR is fairly well known among scientists. Friendly, familiar examples include Feynman, Wheeler, Einstein and Medawar.

And there's other people like Alan Forrester ( http://conjecturesandrefutations.com ).

I in no way think that ideas should get hearings according to how many famous or academic people think they deserve hearings. But CR would pass that test.


I wonder if you're being thrown off because what I'm discussing includes some refinements to CR? If the replies to CR addressed it as Popper originally wrote it, that would be understandable.

But there are no quality criticisms of unmodified-CR (except by its advocates who wish to refine it). There's a total lack of any reasonable literature addressing Popper's epistemology by his opponents, and meanwhile people carry on with ideas contradicting what Popper explained.

I wonder also if you're overestimating the differences between unmodified CR and what I've been explaining. They're tiny if you use the differences between CR and Justificationism as a baseline. Like how the difference between Mac and Windows is tiny compared to the difference between a computer and a lightbulb.


Even if Popper didn't exist, any known flaws of Justificationism that are to be accepted ought to be carefully documented by people in the field. They should write clear explanations about why they think better is impossible in those cases, and why not to do research trying for better since it's bound to fail in ways they already understand, and the precise limits for what we're stuck with, and how to mitigate the problems. I don't think anything good along these lines exists either.
Since we agreed some time ago that mathematical proofs are a field in which pure CR has a particularly good chance of being useful,
I consider CR equally useful in all fields. Substitute "CR" for "reason" in these sentences – which is my perspective – and you may see why.
Sorry, misunderstanding - what I meant was “Since mathematical proofs are a field in which I have less of a problem with a pure CR approach than with most fields, because expert consensus nearly always turns out to be rather rapidly achieved”
I don't think lack of expert consensus in a field is problematic for CR or somehow reduces the CR purity available to an individual.

There are lots of reasons expert consensus isn't reached. Because they don't use CR. Because they are more interested in promotions and reputation than truth. Because they're irrational. Because they are judging the situation with different evidence and ideas, and it's not worth the transaction costs to share everything so they can agree, since there's no pressing need for them to agree.

What's the problem for CR with consensus-low fields?
This is a general CR approach: do something with no proof it will work, no solidity, no feeling of confidence (or if you do feel confidence, it doesn't matter, ignore it). Instead, watch out for problems, and deal with them as they are found.
Again, I can’t discern any difference in practice between that and what I already do.
Can you discern a difference between it and what most people do or say they do?
I don’t think our disparate conclusions with regard to the merits of signing up with Alcor arise from you doing the above and me doing something different; I think they arise from our having different criteria for what constitutes a problem. And I don’t think this method allows a determination of which criterion for what constitutes a problem is correct, because each justifies itself: by your criteria, your criteria are correct, and by mine, mine are. (I mentioned this bistability before; I’ve gone back to your answer - Sept 27 - and I don’t understand why it’s an answer.)
Criteria for what is a problem are themselves ideas which can be critically discussed.

Self-justifying ideas which block criticism from all routes are a general category of idea which can be (easily) criticized. They're bad because they block critical discussion, progress, and the possibility of correction if they're mistaken.
And here is a different answer: You cannot mitigate all the infinite risks that are logically possible. You can't do anything about the "anything is possible" risk, or the general risks inherent in fallibility. What you can do is think of specific categories of risks, and methods to mitigate those categories. Then because you're dealing with a known risk category, and known mitigation methods – not the infinite unknown – you can have some understanding of how big the downsides involved are and the effectiveness of time spent on mitigation. Then, considering other things you could work on, you can make resource allocation decisions.
Same answer - I maintain that that’s what I already do.
Do you maintain that what I've described is somehow not pure CR? The context I was addressing included e.g.:
It seems to me that the existence of cases where people can be wrong for a long time constitutes a very powerful refutation of the practicality of pure CR, since it means one cannot refute the argument that there is a refutation one hasn’t yet thought of.
You were presenting a criticism of CR, and when I talked about how to handle the issues, you've now said stuff along the lines of that's what you already do, indicating some agreement. Are you then withdrawing that criticism of CR? If so, do you think it's just you specifically who does CR (for this particular issue), or most people?

Or more precisely, the issue isn't really whether people do CR - everyone does. It's whether they *say* they do CR, whether they understand what they are doing, and whether they do it badly due to epistemological confusion.

Continue reading the next part of the discussion.


Aubrey de Grey Discussion, 13

So here’s an interesting example of what I mean. I woke up this morning and realised that there is indeed a rather strong refutation of my binary chop argument below, namely “don’t bother, just use X+Y - one doesn’t need to take exactly the minimum amount of time needed, only enough”.
I object to the concept of a "strong refutation". I don't think there are degrees or quantities of refutation.

A reason "strong refutation" seems to make sense is because of something else. Often what we care about is a set of similar ideas, not a single idea. A refutation can binary refute some ideas in a set, and not others. In other words: criticisms that refute many variants of an idea along with it seem "strong".

People have some ability to guess whether it will be easy or hard to proceed by finding a workable close variant of the criticized idea. And they may not understand in detail what's going on, so it can seem like a hunch, and be referred to in terms of strong or weak criticism.

But:

  • Refuting more or fewer variant ideas is different than degrees of strength. Sometimes the differences matter.
  • Hunches only have value when actually there's some reasonable underlying process being done that someone doesn't know how to put into words. Like this. And it's better to know what's going on so one can know when it will fail, and try to improve one's approach.
  • People can only kinda estimate the prospects for CLOSE variants handling the criticism and continuing on similar to before. This gives NO indication of what may happen with less close variants.
  • This stuff is pretty misleading because either you're aware of a variant idea that isn't refuted, or you aren't. And you can't actually know in advance how well variants you aren't aware of will work.
But consider: yesterday I came up with the binary chop argument and it intuitively felt solid enough that I thought I’d spent enough time looking for refutations of it by the time I sent the email. I was wrong - and for sure I’ve been wrong in the same way many times in the past. But was I wrong to be sure enough of my argument to send the email? I’d say no. That’s because, as I understand your definition of a refutation, I can’t actually fix on a finite Y, because however large I choose Y to be I can always refute it by a pretty meaningful argument, namely by reference to past times when I (or indeed whole communities) have been wrong for a long time.
There are never any guarantees of being correct. Feeling sure is worthless, and no amount of that can make you less fallible.

We should actually basically expect all our ideas to be incorrect and one day be superseded. We're only at the BEGINNING of infinity.

The ways to deal with fallibilism are doing your best with your current knowledge (nothing special), and also specifically having methods of thinking which are designed to be very good at finding and correcting mistakes.

You've acknowledged your approach having some flaws, but think it's good enough anyway. That seems contrary to the spirit of mistake correction, which works best when every mistake found is taken very seriously.

I realize you also think something like one can't do better (so they aren't really flaws since better isn't achievable). That's a dangerous kind of claim though, and also important enough that if it was true and well understood, then there ought to be books and papers explaining it to everyone's satisfaction and addressing all the counter-arguments. (But those books and papers do not exist.)
Since we agreed some time ago that mathematical proofs are a field in which pure CR has a particularly good chance of being useful,
I consider CR equally useful in all fields. Substitute "CR" for "reason" in these sentences – which is my perspective – and you may see why.
I direct you to the example of the “Lion and Man” problem, which was incorrectly “solved” for 25 years. It seems to me that the existence of cases where people can be wrong for a long time constitutes a very powerful refutation of the practicality of pure CR, since it means one cannot refute the argument that there is a refutation one hasn’t yet thought of. Thus, we can only answer “yes stop now” in finite time to "Have I done enough effort? Should I do more effort or stop now?” if we’ve already made a quantitative (non-boolean), and indeed subjective and arbitrary, decision as to how much risk we’re willing to take that there is such a refutation.
The possibility of being mistaken is not an argument to consider thinking about an issue indefinitely and never act. And the risk of being mistaken, and consequences, are basically always unknown.

What one needs to do is come up with a method of allocating time, with an explanation of how it works and WHY it's good, and some understanding of what it should accomplish. Then one can watch out for problems, keep an ear open for better approaches known to others, and in either case consider changes to one's method.

This is a general CR approach: do something with no proof it will work, no solidity, no feeling of confidence (or if you do feel confidence, it doesn't matter, ignore it). Instead, watch out for problems, and deal with them as they are found.


And here is a different answer: You cannot mitigate all the infinite risks that are logically possible. You can't do anything about the "anything is possible" risk, or the general risks inherent in fallibility. What you can do is think of specific categories of risks, and methods to mitigate those categories. Then because you're dealing with a known risk category, and known mitigation methods – not the infinite unknown – you can have some understanding of how big the downsides involved are and the effectiveness of time spent on mitigation. Then, considering other things you could work on, you can make resource allocation decisions.

It's only partially understood risks that can be mitigated against, and it's that partial understanding that allows judging what mitigation is worthwhile.

Continue reading the next part of the discussion.


Aubrey de Grey Discussion, 12

Just mentioning a quantity in some way doesn't contradict CR.
Fully agreed - but:
The question is, "Have I done enough effort? Should I do more effort or stop now?" That is a boolean question.
Not really, because the answer is a continuum. If X effort is not enough and X+Y effort is enough, then maybe X+Y/2 effort is enough and maybe it isn’t. And, oh dear, one can continue that binary chop forever, which takes infinite time because each step takes finite time. I claim there’s no way to short-circuit that that uses only yes/no questions.
"Is infinite precision useful here? yes/no."

"Is one decimal enough precision for solving the problem we're trying to solve? yes/no"

You don't have to use only yes/no questions, but they play a key role. After these two above, you might use some method to figure out the answer to adequate precision. Then there'd be some more yes/no questions:

"Was that method we used a correct method to use here?"

"Is this answer we got actually the answer that method should arrive at, or did we follow the method wrong?"

"Have we now gotten one answer we're happy with and have no criticism of? Can we, therefore, proceed with it?"
Plus, in the real world, at some point in that process one will in fact decide either that both the insufficiency of X and the sufficiency of X+Y are rebutted, or that neither of them is (which of the two depending on one’s standard for what constitutes a rebuttal) - which indeed terminates the binary chop, but not usefully for a pure-CR approach.
Rebuttals are useful because they have information about the topic of interest. What to do next would depend on what the rebuttals are. Typically they provide new leads. When they don't, that is itself notable and can even be thought of as a lead, e.g. one might learn, "This is much more mysterious than I previously thought, I'll have to look for a new way to approach it and use more precision" – which is a kind of lead.


The standard of a rebuttal, locally, is: does this flaw pointed out by criticism prevent the idea from solving the problem we're trying to solve? yes/no. If no, it's not a criticism IN CONTEXT of the problem being addressed.

But the full standard is much more complicated, because you may say, "Yes that idea will solve that problem. However it will cause these other problems, so don't do it." In other words, the context being considered may be expanded.
Why not roll dice to decide between those remaining ideas? That would be some CR, and timely. Do you think that's an equally good approach? Perhaps better because it eliminates bias.
Actually I’m fine with that (i.e., I recognise that the triage is functionally equivalent to that). In practice I only roll the dice when I think I’m sure enough that I know what the best answer is - so, roughly, I guess I would want to be rolling three dice and going one way if all of them come up six and the other way otherwise - but that’s still dice-rolling.
There's a big perspective gap here.

I had in mind rolling dice with equal probability for each result.

If all you do is partial CR and have two non-refuted options, then they have equal status and should be given equal probability.

When you talk about amounts of sureness, you are introducing something that is neither CR nor dice rolling.

Also, if you felt 95% sure that X was a better approach than Y – perhaps a lot better – would you really want to roll dice and risk having to do Y, against your better judgment? That doesn't make sense to me.
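
To put a number on the dice proposal: the rule Aubrey describes – go against the favored option only when three dice all come up six – overrides the favored option with probability (1/6)³ = 1/216, under half a percent, which is very different from the equal-odds roll I had in mind. A quick sketch of the arithmetic (plain Python, assuming nothing beyond the stated rules):

```python
from fractions import Fraction

# Aubrey's rule: roll three fair dice, go against the favored
# option only if all three come up six.
p_override = Fraction(1, 6) ** 3

print(p_override)         # 1/216
print(float(p_override))  # roughly 0.0046

# The equal-probability roll: two non-refuted options, each
# chosen half the time.
p_equal = Fraction(1, 2)
print(p_equal)            # 1/2
```

So the three-dice version almost always goes with the favored option; it is barely a dice roll at all.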

Continue reading the next part of the discussion.


Aubrey de Grey Discussion, 11

I wouldn't draw a distinction there. If you don't know more criticisms, and resolved all the conflicts of ideas you know about, you're done, you resolved things. Whether you could potentially create more criticisms doesn't change that.
OK, of everything you’ve said so far that is the one that I find least able to accept. Thinking of things takes time - you aren’t disputing that. So, if at a given instant I have resolved all the conflicts I know about, but some of what I now think is really really new and I know I haven’t tried to refute it, how on earth can I be “done”?
As you say, you already know that you should make some effort to think critically about new ideas. So, you already have an idea that conflicts with the idea to declare yourself done immediately.

If you know a reason not to do something, that's an idea that conflicts with it.
Ah, but hang on: what do I actually know, there? You’re trying to make it sound boolean by referring to “some” effort, but actually the question is how much effort.
The question is, "Have I done enough effort? Should I do more effort or stop now?" That is a boolean question.

Just mentioning a quantity in some way doesn't contradict CR.
What I know is my past experience of how long it typically took to come up with a refutation of an idea that (before I tried refuting it) felt about as solid as the one I'm currently considering feels. That’s correlation, plain and simple. I’m solely going on my hunch of how solid what I already know feels, or conversely how likely it is that if I put in a certain amount of time trying to refute what I think I will succeed. So it’s quantitative. I can never claim I’m “done” until I’ve put in what I feel is enough effort that putting in a lot more would still not bring forth a rebuttal. And that estimated amount of effort again comes from extrapolation from my past experience of how fast I come up with rebuttals.

To me, the above is so obvious a rebuttal
I think your rebuttal relies on CR being incompatible with dealing with any sort of quantity – a misconception I wasn't able to predict. Otherwise why would a statement of your approach be a rebuttal to CR?

It's specifically quantities of justification – of goodness of ideas – that CR is incompatible with.
of what you said that it makes no sense that you would not have come up with it yourself in the time it took you to write the email. That’s what I meant about your answers getting increasingly weak.
We have different worldviews, and this makes it hard to predict what you'll say. It's especially hard to predict replies I consider false. I could try to preemptively answer more things, but some won't be what you would have said, and longer emails have disadvantages.
I mean that it’s becoming easier and easier to come up with refutations of what you’re saying, and it seems to me that it’s becoming harder and harder for you to refute what I say - not that you’re finding it harder, but that the refutations you're giving are increasingly fragile. To my ear, they’re rapidly approaching the “that’s dumb, I disagree” level. And I don’t know what situation there would be that would make them sound like that to you too. You said earlier on that "It's hard to keep up meaningful criticism for long” and I said "That’s absolutely not my experience” - this is what I meant.
Justificationists always sneak in an ad hoc, poorly specified, unstated-and-hidden-from-criticism version of CR into their thinking, which is why they are able to think at all.
This is what you were doing when you clarified Aubreyism step 1 to include creative and critical thinking.
Yes, absolutely. I don’t think I know what pure justificationism is, but for sure I agree (as I have since the start of our exchange) that CR is a better way to proceed than just by hunches and correlations.

Proceed by which correlations? Why those instead of other ones? How do you get from "X correlates with Y [in Z context]" to "I will decide A over B or C [in context D]"? Are any explanations involved? I don't know the specifics of your approach to correlations.

We've discussed correlations some, but our perspectives on the matter are so different that it wasn't easy to create full mutual understanding. It'll take some more discussion. More on this below.
Thus, indeed Aubreyism is a hybrid between the two - it uses CR as a way to make decisions, but with a triage mechanism so that those decisions can be made in acceptable time. I’m fine with the idea that the triage part contributes no value in and of itself, because what it does do, instead, is allow the value from the CR part to manifest itself in real-world actions in a timely fashion.
Situation: you have 10 ideas, eliminate 5-8 with some CR tools, and run out of time to ponder.

You propose deciding between the remaining ideas with hunches. You say this is good because it's timely. You say the resulting value comes from CR + timeliness.

Why not roll dice to decide between those remaining ideas? That would be some CR, and timely. Do you think that's an equally good approach? Perhaps better because it eliminates bias.

I suspect you'll be unwilling to switch to dice. Meaning you believe the hunches have value other than timeliness. Contrary to your comments above.

What do you think?
More generally, going back to my assertion that you do in fact make decisions in just the same way I do, I claim that this subjective, quantitative, non-value-adding evaluation of how different two conflicting positions feel in their solidity, and thus of how much effort one should put into further rebutting each of them, is an absolutely unavoidable aspect of applying CR in a timely fashion.
In my view, I explained how CR can finish in time. At this point, I don't know clearly and specifically why you think that method doesn't work, and I'm not convinced you understand the method well enough to evaluate it. Last email, I pointed out that some of your comments are too vague to be answerable. You didn't elaborate on those points.

Bigger picture, let's try to get some perspective.

Epistemology is COMPLEX. Communication between different perspectives is VERY HARD.

When people have very different ideas, misunderstandings happen constantly, and patient back-and-forth is needed to correct them. Things that are obvious in one perspective will need a lot of clarification to communicate to another perspective. An especially open minded and tolerant approach is needed.

We are doing well at this. We should be pleased. We've gotten somewhere. Most people attempting similar things fail spectacularly.

You understand where I'm coming from better now, and vice versa. We know outlines of each other's positions. And we have a much more specific idea of what we do and don't agree about. We've discovered timely CR is a key issue.

People get used to talking to similar people and expect conversations to proceed rapidly. Less has to be communicated, because only differences require much communication. People often omit details, and the other person, sharing many premises, fills in the blanks the same way. People also commonly gloss over disagreements to be polite.

So people often experience communication as easy. Then when it isn't, they can get frustrated and give up in the face of misunderstandings and disagreements.

And justificationism is super popular, so epistemology conversations often seem to go smoothly. Similar to how most regular people would smoothly agree with each other that death from aging is good. Then when confronted with SENS, problems start coming up in the discussion and they don't have the skills to deal with those problems.

Talking to people who think differently is valuable. Everyone has some blind spots and other mistakes, and similar people will share some of the same weaknesses. A different person, even if worse than you, could lack some of your weaknesses. Trading ideas between people with different perspectives is valuable. It's a little like comparative advantage from economics.

But the more different someone is, the more difficult communication is. Attitudes to discussion have to be adjusted.

We should be pleased to have a significant amount of successful communication already. But the initial differences were large. There's still a lot of room to understand each other better.

I think you haven't discussed some details so far (including literally not replying to some points) – and then are reaching tentative conclusions about them without full communication. That's fine for initial communication to get your viewpoint across. It works as a kind of feeling out stage. But you shouldn't expect too much from that method.

If you want to reach agreement, or understand CR more, we'll have to get into some of those details. We now have a better framework to do that.

So if you're interested, I think we may be able to focus the discussion much more, now that we have more of an outline established. To start with:

Do you think you have an argument that makes timely CR LITERALLY IMPOSSIBLE, in general, for some category of situations? Just a yes or no is fine.

Continue reading the next part of the discussion.

Elliot Temple | Permalink | Messages (0)

Leftist Lying: The Issue Is Never the Issue

From Take No Prisoners: The Battle Plan for Defeating the Left by David Horowitz, on leftist lying:
Dishonesty is endemic to the progressive cause because its radical goals cannot be admitted; the dishonesty is a cultural inheritance, instinctive and indispensable. It is no coincidence that Barack Obama, a born-and-bred leftist, is the most compulsive and brazen liar ever to occupy the White House. His true agenda is radical and unpalatable, and therefore he needs to lie about it. What other presidential candidate could have successfully explained away his close association for twenty years with an anti-American racist, Jeremiah Wright, and an anti-American terrorist, William Ayers? Who but the ignorant and the progressively blind could have believed him?

The radical sixties were something of an aberration in that its activists were uncharacteristically candid about their goals. A generation of “new leftists” was rebelling against its Stalinist parents, who had pretended to be liberals to hide their real beliefs and save their political skins. New leftists despised what they thought was the cowardice behind this camouflage. As a “New Left,” they were determined to say what they thought and blurt out their desires: “We want a revolution, and we want it now.” They were actually rather decent to warn others about what they intended. But when they revealed their goals, they set off alarms and therefore didn’t get very far.

Those who remained committed to leftist goals after the sixties learned from their experience. They learned to lie. The strategy of the lie became the new progressive gospel. It is what Alinsky’s Rules for Radicals is really about. Alinsky understood the mistake sixties radicals had made. His message at the time, and to the generations who came after, is easily summarized: Don’t telegraph your goals; infiltrate the Democratic Party and other liberal institutions and subvert them; treat moral principles as dispensable fictions; and never forget that your political agenda is not the achievement of this or that reform but political power to achieve the socialist goal. The issue is never the issue. The issue is always power—how to wring power out of the democratic process, how to turn the political process into an instrument of control, how to use that control to fundamentally transform the United States of America, which is exactly what Barack Obama, on the eve of his election, warned he would do.
I recommend the book.

Though beware, some of the scholarship is flawed. Justin brought this passage to my attention:
In the fifth year of Obama’s rule, forty-seven million Americans were on food stamps and a hundred million were receiving government handouts, while ninety-three million Americans of working age had given up on finding a job and left the work force.
The ninety-three million statistic is given without a source. I investigated a bit and I don't think it's accurate.

But I still think it's a great book.

Elliot Temple | Permalink | Message (1)

Bad Correlation Study

Here is a typical example of a bad correlation study. I've pointed out a couple of flaws, which are typical of such studies.

http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3039704/
Chocolate Consumption is Inversely Associated with Prevalent Coronary Heart Disease: The National Heart, Lung, and Blood Institute Family Heart Study
These data suggest that consumption of chocolate is inversely related with prevalent CHD in a general population.
Of 4,679 individuals contacted, responses were obtained from 3,150 (67%)
So they started with a non-random sample: the two-thirds of people who responded were a self-selected group, not a random subset of those contacted.

This non-random sample they studied may have some attribute, X, much more than the general population. It may be chocolate+X interactions which offer health benefits. This is a way the study conclusions could be false.
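This interaction worry can be simulated. Below is a toy sketch — all the numbers are invented for illustration, nothing here comes from the study — in which chocolate only lowers heart disease risk for people with a hidden attribute X, and people with X respond to the survey far more often. The self-selected sample then overstates chocolate's benefit relative to the general population:

```python
import random

random.seed(0)

def simulate(n=100_000):
    pop, responders = [], []
    for _ in range(n):
        x = random.random() < 0.5           # hidden attribute X
        choc = random.random() < 0.5        # eats chocolate
        # Chocolate only lowers risk when X is present (an interaction).
        risk = 0.10 - (0.06 if (choc and x) else 0.0)
        chd = random.random() < risk
        person = (choc, chd)
        pop.append(person)
        # People with X respond far more often, skewing the sample.
        if random.random() < (0.9 if x else 0.4):
            responders.append(person)
    return pop, responders

def chd_rate(sample, choc_value):
    group = [chd for choc, chd in sample if choc == choc_value]
    return sum(group) / len(group)

pop, resp = simulate()
# How much lower is the CHD rate among chocolate eaters?
gap_pop = chd_rate(pop, False) - chd_rate(pop, True)
gap_resp = chd_rate(resp, False) - chd_rate(resp, True)
# gap_resp comes out larger than gap_pop: the responders make chocolate
# look more protective than it is for the population as a whole.
```

And even the population-wide gap is driven entirely by the chocolate+X interaction, so "eat chocolate" would be bad advice for anyone without X.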

They used a "food frequency questionnaire". So you get possibilities like: half the people reporting they didn't eat chocolate were lying (but very few of the people admitting to eating chocolate were lying). And liars overeat fat much more than non-liars, and this fat eating differential (not chocolate eating) is the cause of the study results. This is another way the study conclusions could be false.

They say they "used generalized estimating equations", but do not provide the details. There could be an error there that makes their conclusions false.

They talk about controls:
adjusting for age, sex, family CHD risk group, energy intake, education, non-chocolate candy intake, linolenic acid intake, smoking, alcohol intake, exercise, and fruit and vegetables
As you can see, this is nothing like a complete list of every possible relevant factor. There are many things they did not control for. Some of those may have been important, so this could ruin their results.

And they don't provide details of how they controlled for these things. For example, take "education". Did they lump together high school graduates (with no college) as all having the same amount of education, without factoring in which high school they went to and how good it was? Whatever they did, there will be a level of imprecision in how they controlled for education, which may be problematic (and we don't know, because they don't tell us what they did).
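The coarse-control worry can also be illustrated. In this toy sketch (invented numbers again, not the study's data), chocolate has no causal effect at all: a fine-grained confounder — stand it in for true education quality — drives both chocolate consumption and CHD risk, and "controlling" for a coarse two-bucket version of it fails to remove the spurious association:

```python
import random

random.seed(1)

def simulate(n=200_000):
    rows = []
    for _ in range(n):
        edu = random.random()                      # true, fine-grained quality
        choc = random.random() < 0.1 + 0.8 * edu   # higher edu -> more chocolate
        chd = random.random() < 0.2 - 0.16 * edu   # higher edu -> less CHD
        rows.append((edu, choc, chd))
    return rows

def adjusted_gap(rows, buckets=2):
    """CHD rate gap (non-eaters minus eaters) after stratifying on a
    coarse version of education. With chocolate causally inert, a
    perfect control would drive this to ~0."""
    gaps, weights = [], []
    for b in range(buckets):
        stratum = [r for r in rows if int(r[0] * buckets) == b]
        eaters = [chd for _, choc, chd in stratum if choc]
        non = [chd for _, choc, chd in stratum if not choc]
        gaps.append(sum(non) / len(non) - sum(eaters) / len(eaters))
        weights.append(len(stratum))
    return sum(g * w for g, w in zip(gaps, weights)) / sum(weights)

rows = simulate()
# Within each coarse bucket, chocolate eaters still skew toward higher
# true education, so chocolate still "looks" protective after the control.
residual = adjusted_gap(rows, buckets=2)
```

The residual gap stays well above zero, so a reader of the adjusted result would wrongly conclude chocolate helps. Whether the study's actual controls have this problem depends on details they didn't publish.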


This is just a small sample of the problems with studies like these.


People often reply something like, "Nothing's perfect, but aren't the studies pretty good indications anyway?" The answer is: if the studies are pretty good anyway, their authors ought to understand these weaknesses, write them down, and then write down why their results are good indications despite them. Then that reasoning would be exposed to criticism. One shouldn't assume the many weaknesses of the research can be glossed over without actually writing them down thoroughly, explaining in full why they don't matter, and then seeing whether there are criticisms of that analysis.

Elliot Temple | Permalink | Messages (0)

Front Page Magazine DOES NOT Censor Comments [Updated]

I posted the comment below at Front Page Magazine. It went in the moderation queue and then was soon deleted. Blog comment censorship is super lame.
What do you mean US fought Cold War "without equivocation"???

http://spectator.org/articles/38080/jimmy-carter-chronicles

In the immediate days after the Soviets invaded Afghanistan in late December 1979, President Carter responded with shock and a sense of deep, palpable betrayal. After all, he and Leonid Brezhnev, just six months earlier, at the Vienna Summit, had literally hugged and kissed. Why would the Soviets do this?

...

The Democratic president had long lamented America's "inordinate fear of communism," from which he had hoped to unshackle the nation.

...

... 1978 press conference, "We want to be friends with the Soviets."
Update: The comment went up eventually. The user interface is misleading because it first showed the comment as pending, and then later as gone. Perhaps it was a caching issue. In any case, the comment did get posted. So I take everything back.

Elliot Temple | Permalink | Messages (0)