curi blog comments http://curi.us/comments/recent Explanations for the curious Anon69 Atlas Shrugged Theme: Don't Overreach
I'm still chewing on the software audit stuff. Will listen to your podcast soon.

> @#9585: None of those explain specifically why there is a hostile reading of what you said, even once, let alone repeatedly. The plausible deniability strategy does offer an explanation of what's going on. It seems to be *the only explanation that any of us can think of* which accounts for what happened.

The plausible deniability explanation comes into play not with the first allegedly hostile statement but when interpreting subsequent replies, particularly if you called out the alleged hostility.

Such an explanation assumes a correct reading/interpretation of the first statement as hostile.

You are saying there have been no alternative explanations offered to the plausible deniability theory. I was challenging the assumption behind it: that the original statement(s) were in fact hostile.

As for how to challenge a wrong interpretation of hostility, you said:

> the only realistic way you could be different is if you had a 1 in 10 million outlier childhood

So someone would need to communicate what their childhood was like.

> or you had made a massive self-improvement effort (which would have involved understanding the problem and then doing things about it, in which case you'd be able to explain the problem yourself very well, and your solution).

what is "your solution" here?

So it seems like there is virtually no chance for most ppl to remedy a wrong interpretation by you that they're being hostile. It's an extremely high bar. Once you judge someone as being hostile, you are unlikely to ever find out otherwise, right?

> And look at the pattern:
> > There you go, jumping to conclusions again...sigh.
> Very hostile and normal.
> > Hint: it wasn't.
> Standard nastiness.

Could you expand a bit on what you meant by hostile and nasty? The more I've thought about it, the less sure I am that I understand what you mean. I looked up those words in the dictionary and am trying to understand which meaning you have in mind.

What's your reading of Dagny's message: #9556? Any nastiness or hostility there in your view?

Thanks for the discussion, it's been helpful.]]>
Mon, 19 Feb 2018 10:07:14 +0000 http://curi.us/comments/show/9593 http://curi.us/comments/show/9593
curi Atlas Shrugged Theme: Don't Overreach Mon, 19 Feb 2018 03:25:56 +0000 http://curi.us/comments/show/9592 http://curi.us/comments/show/9592 curi Atlas Shrugged Theme: Don't Overreach
http://curi.us/2095-youre-a-complex-software-project-introspection-is-auditing

the post also fixes the last sentence, which is broken here.]]>
Mon, 19 Feb 2018 01:22:53 +0000 http://curi.us/comments/show/9591 http://curi.us/comments/show/9591
curi Atlas Shrugged Theme: Don't Overreach
your consciousness gets to *audit* the software and do maintenance and add features. the heart of the software was written in childhood and you don't remember much of it. think of it like a different team of programmers wrote it, and now you're coming in later.

you don't have full access to the source code for your audit. you can see source code for little pieces here and there, run automated tests for little pieces here and there, read some incomplete docs, and do manual tests for sporadic chunks of code.

and your attitude is to ignore large portions of the limited evidence available to you about what the code does. that is, the best available evidence of what the code does is *your own behavior*. but you want to ignore that in favor of believing what you think the code does. you think the conclusions of your audit – which ignores the best evidence (your behavior – actual observations of the results of running the code), and doesn't even know that it's a software audit or the circumstances of the audit – should be taken as gospel.

you find it implausible there are hostile software functions that could be running without your audit noticing. your audit has read 0.001% of the source code during the last year, but you seem to think the figure is 99%.

improving introspection skills means getting better at auditing. this can help a ton, but there's another crucial approach: you can learn about what people in our culture are commonly like. this enables you to audit whether you're like that in particular ways, match behavior to common code, etc. b/c i know far more about cultural standard software (memes) than you, and also i know what the situation is (as just described and more) and you don't, i'm in a much better position to understand you than you are. this doesn't apply to your idiosyncrasies – i know even less than you about those – but i know that and avoid claims about the areas where i don't know. my claims concern cases such as when you write down the standard output of some standard software modules, at length, and i recognize them.]]>
Mon, 19 Feb 2018 01:13:32 +0000 http://curi.us/comments/show/9590 http://curi.us/comments/show/9590
curi Programming and Epistemology
You're overreaching. You are trying to say complex stuff when you should be trying to get simple stuff right and build up to complex stuff later. The result is it's the wrong complex stuff, and it's nonsense because the complexity wasn't created from layers of well-designed simplicity.

http://fallibleideas.com/overreach]]>
Sun, 18 Feb 2018 23:18:36 +0000 http://curi.us/comments/show/9589 http://curi.us/comments/show/9589
curi Atlas Shrugged Theme: Don't Overreach
---

@#9584 A standard pattern is to be mean to people in socially calibrated ways. One of the major strategies is plausible deniability. Attacks which you can deny are attacks are powerful. They are hard to defend against b/c, if challenged, the challenger is attacked further as jumping at shadows and as imagining hostility and being the source of hostility himself. Meanwhile the attack still works, because people intuitively understand it. This relies on a double standard for evaluation: people normally evaluate intuitively, but if an issue is raised then they evaluate with conscious logic. To exploit this, statements are made which evaluate differently with the two different evaluation strategies. Everything you say and do fits the standard pattern of having automated the use of this strategy in many cases, which is totally normal and actually very hard to avoid or undo. The only realistic way you could be different is if you had a 1 in 10 million outlier childhood or you had made a massive self-improvement effort (which would have involved understanding the problem and then doing things about it, in which case you'd be able to explain the problem yourself very well, and your solution).

Instead of denying this stuff, you ought to learn about the standard ways people are mean. Then you could evaluate if you do them, and if your friends do them, and so on. But you try to prejudge the issue instead of being curious and wanting to learn. People are mean to each other all the time in our culture, it's this huge problem, and you're busy feeling attacked (when the thing is you're a *victim* of something big and powerful and nasty) and not looking for opportunities to learn and reform.

---

And look at the pattern:

> There you go, jumping to conclusions again...sigh.

Very hostile and normal.

> Hint: it wasn't.

Standard nastiness.

> I've always thought there was a strange unfriendliness, or roughness, or hostile-ness to ppl responding to comments here....
>
> Like ... sabotage ...

Blaming others, admitting your own hostile perspective (which is the cause of your nasty comments).

The way people work is

1) have a hostile perspective, e.g. this

2) make nasty comments without consciously intending it.

the (1) part is conscious – you even wrote it out – and then the (2) part follows in some way that allows the person to sleep at night without viewing themselves as an asshole.

> The other times when I've noticed you coming to a conclusion prematurely, assuming too much, not asking questions

When asked for details on the accusation (what you were referring to), you added more accusations in a non-specific way that no one could refute. You shouldn't have done that even once, but you did it twice.

> The "sigh" was not me being hostile, but was a momentary feeling of sadness (perhaps also bewilderment)

First you blatantly attack him. Then you don't even treat it as a matter to truth-seek or problem-solve about, you just assert he's wrong about a very basic, standard, common sense interpretation of reality. It's such an extreme attack on his frame, his connection to reality.

I know the reason you're doing it is to hold on to your own frame, which you found challenged. You can't face Dagny's version of reality, so you attack it to protect your own (pretense of) self-esteem. That motivation doesn't prevent it from being a hostile lashing out at Dagny which is quite pressuring and nasty to him.

> I can honestly say there is were no negative feelings when I wrote my last post. Or now. You may not believe it, but that's ok. Just wanted to point it out because I think it's interesting that there is such a misunderstanding.

Already discussed some. The use of the word "interesting" is also an extremely standard attack.

> I think you may be looking at me through the lens of "normal people" and guessing at what I'm thinking and feeling. That's my best guess for why I am seeing so many errors [by you] about what I've said. In my last response, do you notice all the errors [you] made that I clarified in my response?

This is very hard to read because your writing quality dropped way below normal. That's very standard while upset or hostile. And it seems to be making accusations against Dagny, from a position of ignorance, instead of being curious. And when I fill in words in brackets to try to parse what it says, I notice a theme in what's omitted. The reason it's hard to read is some of the key accusation words were simply left out and implied.

> I'm not really clear on how you arrived at this conclusion, considering all I said was:

This is half-assed fake curiosity. A thin pretense. And the meaning is: you should not have arrived at that conclusion; given the evidence, you are obviously being unreasonable. This is not a truth-seeking attitude, it's social pressure and attacking thinly disguised as your own confusion ("not really clear") and sorta suggesting curiosity that doesn't exist. (You might get confused b/c you might be curious right now, when you read this. But it's not plausible that you were curious at the time of writing this text; curiosity doesn't fit as the thing motivating this particular wording.)

> I further explained what I meant by it grabbing me and how it grabbed me (very different than what you thought I meant).

This is calling Dagny wrong in a non-truth-seeking-oriented way. Rather than try to discuss productively, you throw in Dagny's wrongness in a parenthetical with an intensifier. Standard, nasty tactics.

> I doubt that ET believes that one MUST read AS first, before FH, no matter what.

Straw man attack plus all-caps.

> Do you accept those as arguments that your conclusion is false? How did you miss them the first time?

Very hostile framing.

> You've misunderstood what I meant by FH "grabbing me", misunderstood my goals w/ suggesting changes to the AS summary, and misunderstood my reasons for starting with FH over AS (confusing it w/ respect for ET).

The point is to assert, again, that Dagny is wrong and you are right. You treat it as a dick size contest and act like a jerk to him, rather than trying to discuss productively.

------

So, on and on, over and over, you're hostile in very standard ways. No, that isn't random misunderstanding. You are a product of your culture. It shows. None of the *quite extreme* (and rarer than 1 in 10 million) things necessary to change that have happened in your life.

------

By contrast, Dagny's comments were nothing like this. They were more like this:

> you read hostility into things (e.g. some criticism), falsely, then get hostile and emotional yourself. you are the source of the issue.

Educational.

> the reason you think there's hostility around FI, compared to other places, is cuz FI has ppl who don't put all the standard work into not saying what they mean, in socially standard ways, to avoid saying anything that would trigger hostility in normal people like you. at other places, people act as they normally do – like they are scared of everyone, everyone is fragile and easily triggered, and they have to be super careful socially. there is cause to act that way – to tiptoe around everyone (and then rationalize it and be blind to the fact one is doing it) – but FI is a different kind of place where people aim at other things, like truth-telling. FI is for people who want it. you're, apparently, on the borderline – you like FI some but also you're easily triggered and hostile, and that's putting you off some.

Educational

> how do you know you're honest and your introspection is correct?

Educational.

> you seem clueless that such statements should not be believed merely b/c they are asserted – including you should not believe it, yourself.

This one is interesting. "clueless" is a negative statement. But it seems accurate and is part of a helpful explanation. Removing it is not so easy because he's trying to convey something here that's both strong and negative, so he can't hedge or be positive without harming the message. Maybe there's a solution but it's not easy. [1]

[1] Theoretically I think there is a solution, but it may be out of reach to fix this on Dagny's end, just with a little wording adjustment. I think solving this could require either other people changing or else another 100 years of philosophy progress to help us understand how to handle such things better. (Years are a bad unit of philosophy progress, but I want to give some sense of scale and I don't know a better way.) There are also other options involving approaching discussion quite differently, e.g. just don't even try to tell people important negative info they aren't eager to hear, and only share much when they are begging you for the info.]]>
Sun, 18 Feb 2018 23:15:42 +0000 http://curi.us/comments/show/9588 http://curi.us/comments/show/9588
wut? oh my god it's turpentine Programming and Epistemology
I don't understand this sentence.]]>
Sun, 18 Feb 2018 23:09:18 +0000 http://curi.us/comments/show/9587 http://curi.us/comments/show/9587
cant_KANT Programming and Epistemology Sun, 18 Feb 2018 23:06:25 +0000 http://curi.us/comments/show/9586 http://curi.us/comments/show/9586 anon69 Atlas Shrugged Theme: Don't Overreach
-sloppiness
-laziness
-rushing, making errors as a result
-assuming the other person knows things they don’t
-differences in word usage between people, subcultures, etc
-having learned the wrong meaning of words or phrases]]>
Sun, 18 Feb 2018 21:30:02 +0000 http://curi.us/comments/show/9585 http://curi.us/comments/show/9585
Anon69 Atlas Shrugged Theme: Don't Overreach
> I can honestly say there is were no negative feelings when I wrote my last post. Or now. You may not believe it, but that's ok. Just wanted to point it out because I think it's interesting that there is such a misunderstanding.

...and how it led to you coming to the following interpretation?

> In context, the text "you may not believe [the facts]" basically reads as "you may be stupid and wrong", and the "that's ok" reads as condescension, and the final quoted sentence really hammers in the condescension. it's like "it's cute that you think that" but a little more disguised.]]>
Sun, 18 Feb 2018 20:59:19 +0000 http://curi.us/comments/show/9584 http://curi.us/comments/show/9584
curi Atlas Shrugged Theme: Don't Overreach
So on the one hand we have a standard pattern which our culture teaches everyone to do.

And on the other hand we have ... random chance? You still have not provided any alternative explanation for how the ambiguity happens, its cause.

Random chance is a bad explanation even in a single case, when you're saying you just happened to randomly do something that is just like a major cultural theme, but that's a coincidence and it has nothing to do with culture.

Random chance becomes a truly awful explanation when you repeat the same thing over and over. It was random coincidence 10 times in a row!? No. Randomness wouldn't explain why it keeps happening.

But you aren't giving any alternative explanation other than the two I've brought up: plausible deniability and random coincidence.]]>
Sun, 18 Feb 2018 20:44:48 +0000 http://curi.us/comments/show/9583 http://curi.us/comments/show/9583
Anon69 Atlas Shrugged Theme: Don't Overreach
* * * * * * * * * * * *

Setup: A is a mean statement. B is a non-mean statement. C is an ambiguous statement whose true meaning is either A or B.

Scenario 1 (plausible deniability strategy):

1. Bob says C (true meaning: A)
2. Mary claims Bob really meant A
3. Bob disagrees, adds some details, making it more ambiguous or seem closer to B

Scenario 2 (misunderstanding):

1. Bob says C (true meaning: B)
2. Mary claims Bob really meant A
3. Bob disagrees, adds some details, making it more ambiguous or seem closer to B

* * * * * * * * * * *

In the first scenario Mary nails it but in the second scenario she incorrectly interprets C as A.

How does Mary sort out whether scenario 1 or 2 happened? They both basically look the same from her perspective.

I think you will say something about pattern matching, similar to:

> everything you're saying, and all your reactions – including all the denials – are totally standard and fit meme patterns. you haven't done a single thing to contradict my interpretation

Can you elaborate on what you mean by meme patterns? I don't see how you would be able to spot scenario #2, to the extent it overlaps with the patterns you are looking for.]]>
Sun, 18 Feb 2018 20:16:34 +0000 http://curi.us/comments/show/9582 http://curi.us/comments/show/9582
Cartman Atlas Shrugged Theme: Don't Overreach
It means enacting the defensiveness meme, regardless of your conscious feelings.

If you act according to the logic of defensiveness, and act *as if* you were defensive ... it doesn't really matter whether you're consciously aware of it, or consciously intending it, with no rationalizations, self-blinding, fooling yourself, etc.]]>
Sun, 18 Feb 2018 19:30:42 +0000 http://curi.us/comments/show/9581 http://curi.us/comments/show/9581
curi Atlas Shrugged Theme: Don't Overreach
Mine is the plausible deniability strategy, which is ubiquitous. In this strategy, people often start with a mean idea and tweak it to add some ambiguity, some other plausible story. This can be done unconsciously.

Your explanation is ... what? You have not been responsive about this. Random coincidence every time? You haven't said that, and that would be a bad explanation. You haven't offered a different explanation either.]]>
Sun, 18 Feb 2018 19:24:37 +0000 http://curi.us/comments/show/9580 http://curi.us/comments/show/9580
Anon69 Atlas Shrugged Theme: Don't Overreach > That question rests on the premise that X statement, which you found condescending, was the actual intended meaning.

Oops, that was a half-typed statement that I left in by accident. You can ignore, I may finish later.]]>
Sun, 18 Feb 2018 18:29:53 +0000 http://curi.us/comments/show/9579 http://curi.us/comments/show/9579
Anon69 Atlas Shrugged Theme: Don't Overreach
Is it accurate to say I came here *to* attack? This thread started somewhat randomly, as I was reading AS and remembered this blog post, and had a thought about the AS summary. Dagny went in an unexpected direction, but one which I thought was interesting to explore. In my view, various misunderstandings happened, or in your view: attacks happened.

> how do you think you learned to write such condescending things while not even consciously knowing what you're doing?

That question rests on the premise that X statement, which you found condescending, was the actual intended meaning.

> everything you're saying, and all your reactions – including all the denials – are totally standard and fit meme patterns. you haven't done a single thing to contradict my interpretation. you don't even know how to meaningfully contradict it or what it would take to provide counter-evidence or counter-argument. which is also standard.

What are some examples of meaningful contradiction, counter-evidence, or counter argument? This might be really helpful -- I wouldn't be surprised to find out I'm bad at those things.

> you want to drop it now when it's turned around and you're on the defensive instead of the attack, which is biased and unfair to your victims

There is more misunderstanding here.

I don't want to drop it, I'm just unsure how to proceed when such a deep misunderstanding exists, and as you point out, I'm not providing meaningful counter-evidence or argument.

I also don't feel like it's turned on me, although I could see why you feel like it has. From my view, it's more of a stall-out / unsure how to proceed. But I'm interested in continuing if there's a fruitful way forward, or if you have any questions or suggestions for discussion.

I'm also not defensive (if you mean the feeling of being defensive).

> when you got pushback on attacking you then posted more attacks instead of stopping.

Hmm, I guess I'm not sure what you mean by attacks or what attacks you are referring to? I guess from my view, my recent posts are trying to clarify / restate my views where there seems to be misunderstanding.

> there's a big problem with dropping it: but if the underlying issue is not solved, the same thing will happen again: you will start attacking again later.

Yes, you thinking that I'm attacking (when I'm not), or your view: me actually attacking (when *I* think I'm not) is a problem.]]>
Sun, 18 Feb 2018 18:27:00 +0000 http://curi.us/comments/show/9578 http://curi.us/comments/show/9578
curi Atlas Shrugged Theme: Don't Overreach
Saying X which implies Y, and saying Y, are different things. the implication is also questionable but even if it was a 100% solid implication it'd still be different. one difference is X also implies A, B, C, D, etc. which implications you think are important, or even know about, is non-obvious.

> Yes, if I had meant it that way, that would be nasty.

why do you think you know why you behave the way you do? you don't. it's not a random accident that, in the midst of writing many other mean things, you wrote this thing this way. you're enacting memes.

how do you think you learned to write such condescending things while not even consciously knowing what you're doing? do you think it's a coincidence? you are giving no alternative account that explains why your statements are this way, rather than being other statements which only have your claimed intended meaning without alternative nasty readings.

everything you're saying, and all your reactions – including all the denials – are totally standard and fit meme patterns. you haven't done a single thing to contradict my interpretation. you don't even know how to meaningfully contradict it or what it would take to provide counter-evidence or counter-argument. which is also standard.

> But I'm willing to move on / continue other discussions if you are.

well you could have followed up about the Russia investigation discussion but instead you interrupted the discussion to attack people, highly aggressively and persistently.

you want to drop it now when it's turned around and you're on the defensive instead of the attack, which is biased and unfair to your victims – when you got pushback on attacking you then posted more attacks instead of stopping. but anyway there's a big problem with dropping it: if the underlying issue is not solved, the same thing will happen again: you will start attacking again later.]]>
Sun, 18 Feb 2018 17:51:19 +0000 http://curi.us/comments/show/9577 http://curi.us/comments/show/9577
Anon69 Atlas Shrugged Theme: Don't Overreach
Right. Dagny says I'm having negative emotions, and I say I'm not (contradiction). Doesn't that mean we disagree about that matter? Maybe I'm misunderstanding what you mean.

> > I can honestly say there is were no negative feelings when I wrote my last post. Or now. You may not believe it, but that's ok. Just wanted to point it out because I think it's interesting that there is such a misunderstanding.

> In context, the text "you may not believe [the facts]" basically reads as "you may be stupid and wrong", and the "that's ok" reads as condescension, and the final quoted sentence really hammers in the condescension. it's like "it's cute that you think that" but a little more disguised.

I see how many things here are ambiguous. Here is an attempt at explaining the meaning further:

You may not believe it = You may not believe [that I am not feeling negative emotions]. Or from another angle: I acknowledge that my statement that I am not feeling negative emotions is unlikely to be sufficient evidence for you.

"but that's ok"...it's not a big deal, I'm not too concern about it.

As much as I am open to the possibility of failing at introspection, not understanding myself or my emotions, I'm standing by this description. I believe your analysis of the intended meaning is wrong.

> this is just one little example of the many ways you've been nasty.

Yes, if I had meant it that way, that would be nasty.

> this kind of verbal abuse you've been writing is a super standard cultural hostility meme, not an accident. the fact that it's disguised makes it meaner and more socially savvy – it's harder to stand up to attacks that have some sort of plausible deniability. to me it's transparent, but to you and most audiences it isn't.

I'm not sure how to continue this disagreement about your interpretations. I continue to believe you and Dagny are overestimating your ability to interpret my internal state and writing (as overly-complicated and ambiguous as it can be). But I'm willing to move on / continue other discussions if you are. You have made some interesting points that I'd like to think more about. Apologies for any hostility or nastiness towards you or Dagny, I (at least consciously) did not intend it.]]>
Sun, 18 Feb 2018 17:26:51 +0000 http://curi.us/comments/show/9576 http://curi.us/comments/show/9576
curi Atlas Shrugged Theme: Don't Overreach
On top of that, that comment goes far beyond opening by raising disagreement. It attacks. E.g.:

> I can honestly say there is were no negative feelings when I wrote my last post. Or now. You may not believe it, but that's ok. Just wanted to point it out because I think it's interesting that there is such a misunderstanding.

In context, the text "you may not believe [the facts]" basically reads as "you may be stupid and wrong", and the "that's ok" reads as condescension, and the final quoted sentence really hammers in the condescension. it's like "it's cute that you think that" but a little more disguised.

this is just one little example of the many ways you've been nasty. there's actually tons.

this kind of verbal abuse you've been writing is a super standard cultural hostility meme, not an accident. the fact that it's disguised makes it meaner and more socially savvy – it's harder to stand up to attacks that have some sort of plausible deniability. to me it's transparent, but to you and most audiences it isn't.]]>
Sun, 18 Feb 2018 14:34:21 +0000 http://curi.us/comments/show/9575 http://curi.us/comments/show/9575
curi Atlas Shrugged Theme: Don't Overreach Sun, 18 Feb 2018 14:05:55 +0000 http://curi.us/comments/show/9574 http://curi.us/comments/show/9574 curi Atlas Shrugged Theme: Don't Overreach
In the general case, sure. But sometimes there are major indications. If you only speculate when you get a huge clue, and don't speculate the other times, then you can be right a ton.

In this case, you made two very nasty, stereotypical comments and, simultaneously, the quality of your logical thinking went noticeably down. And at the same time you also attacked the overall atmosphere of my forums, non-specifically. And you've been selective about what you reply to and what you ignore, in ways that appear conventionally biased. There's more, too. [1]

So, personally, I'd be ready to make a confident judgement with only half the evidence you provided. From my perspective, your comments have been very transparent in very standard ways.

My judgement would not be "you consciously feel strong anger and are lying". Given the information available, I'd put low odds on that possibility even if you hadn't denied it. The judgement is more like: certain memes are triggered in you. It's very standard to be pretty blind to what's going on in one's head when those memes trigger, but for them to affect one's actions substantially.

> Hypothetically, if someone were to believe you were angry when you weren't (e.g. in a context like this comment thread), what's the best way to deal with that?

depends on your goal(s).

[1] One example

> My first attempt here was a simple statement that I disagreed. I also said...

It's not a random accident that people reading this will think the "I also said..." part was part of your first attempt, instead of something you said later. It doesn't fit the topic sentence. It's an abrupt change of topic without transition, and the following text makes the confusion worse (so much so that I wonder if you even know the correct timeline – I had to check). Your communication here is, consciously-intentionally or not (presumably not), dishonest. https://rationalessays.com/lying]]>
Sun, 18 Feb 2018 14:03:13 +0000 http://curi.us/comments/show/9573 http://curi.us/comments/show/9573
Anon69 Atlas Shrugged Theme: Don't Overreach
To partially answer my own question: in this case, you should def do some introspection, make sure you aren't fooling yourself, etc. But my question was more about: assume you are good at all of that.]]>
Sun, 18 Feb 2018 09:43:19 +0000 http://curi.us/comments/show/9572 http://curi.us/comments/show/9572
Anon69 Atlas Shrugged Theme: Don't Overreach
Agree.

Isn't it also true that speculating on a stranger's internal state from a few written messages is hard too?

Certain interactions w/ conventional ppl make it a little easier. It also gets easier when you can also hear tone and see facial expressions (in person).

Hypothetically, if someone were to believe you were angry when you weren't (e.g. in a context like this comment thread), what's the best way to deal with that?

My first attempt here was a simple statement that I disagreed. I also said "You may not believe it, but that's ok", which perhaps was ambiguous, but basically I wasn't expecting Dagny to take my word for it and wasn't bothered or hung up on it. Fortunately, I don't think it's very important in order to move forward.

> Expecting people to take your word for it (accept you as having valid authority to make authoritative claims) indicates you don't understand how hard it is (and how unreliable people's claims about introspection are)

So per above, I wasn't expecting anyone to take my word. In general, I find it to be very tricky business to a) convey emotional state in a context like this, b) for others to guess at your emotional state, c) settle any disagreements about emotional state.

> Your comments in this thread relating to introspection, hostility, honesty, etc, are following standard patterns of misconceptions, incorrect perspectives, wrong attitudes, etc. I've seen it before many times.

There are some downsides when you guess wrong at someone's emotions and focus on it in a conversation. I haven't thought through the issue much. Like, when and why bring it up?

> I added more detail to the AS description:
> > This novel is about *how ideas affect a country and individuals*. It has major lessons for politics (limited government), economics (capitalism), and how to live your life (productively, heroically, rationally). It reveals how good men support and enable their own destroyers. It’s the *best book ever written*.

Cool, I like it!]]>
Sun, 18 Feb 2018 09:40:11 +0000 http://curi.us/comments/show/9571 http://curi.us/comments/show/9571
hostility Anne B Atlas Shrugged Theme: Don't Overreach
> it's you, and you're blaming others.

> you read hostility into things (e.g. some criticism), falsely, then get hostile and emotional yourself. you are the source of the issue. e.g. your "sigh" and "Hint" comments were hostile. like most people, hostility is a major part of your way of dealing with the world. and like most people, you blame others, e.g. you think they were hostile first and think that justifies your own hostility(!?) or you're blind to your own hostility.
>
> the reason you think there's hostility around FI, compared to other places, is cuz FI has ppl who don't put all the standard work into not saying what they mean, in socially standard ways, to avoid saying anything that would trigger hostility in normal people like you. at other places, people act as they normally do – like they are scared of everyone, everyone is fragile and easily triggered, and they have to be super careful socially. there is cause to act that way – to tiptoe around everyone (and then rationalize it and be blind to the fact one is doing it) – but FI is a different kind of place where people aim at other things, like truth-telling. FI is for people who want it. you're, apparently, on the borderline – you like FI some but also you're easily triggered and hostile, and that's putting you off some.

I see hostility here at FI too, often where it doesn't actually exist. I am used to normal social interactions where any criticism is considered an attack. Here, criticism usually seems meant to be helpful. Once I give it some thought I usually learn something from criticism, even if my initial reaction is that I'm being attacked.

Yes, I am doing a normal social thing and trying to show you, Anon69, that you are not alone in having this reaction to FI. Is this a bad idea? If it is, maybe someone will talk about it and convince me I'm wrong.]]>
Sun, 18 Feb 2018 09:23:56 +0000 http://curi.us/comments/show/9570 http://curi.us/comments/show/9570
Anonymous Atlas Shrugged Theme: Don't Overreach Sun, 18 Feb 2018 08:37:42 +0000 http://curi.us/comments/show/9569 http://curi.us/comments/show/9569 Anon69 Atlas Shrugged Theme: Don't Overreach
Is this fighting? What makes it a fight? I thought we were just having a discussion about disagreements.

I don't see it as urgent per se (did lots of other stuff, such as read another chapter in AS in the last 12hrs), although there is some benefit to starting/finishing certain threads of convo while fresh before moving onto other things.

Open to suggestions for other things to do.]]>
Sun, 18 Feb 2018 08:21:00 +0000 http://curi.us/comments/show/9568 http://curi.us/comments/show/9568
Anonymous Atlas Shrugged Theme: Don't Overreach Sun, 18 Feb 2018 08:09:14 +0000 http://curi.us/comments/show/9567 http://curi.us/comments/show/9567 Anon69 Atlas Shrugged Theme: Don't Overreach
Sure, if all things were equal, start with the better book. Or if you were only going to read one book, that would also be a good reason to pick what you hope to be the best one.

But what if you plan to read both the #1 and #2 books, but the #2 book seemingly addresses urgent problems in your life? Or, suppose you currently own book #2 but don't yet have access to or can't afford book #1? I think there are various scenarios where it's good to read book #2 first.

In my case, I started with FH (#2) for similar such reasons. My judgement may end up being wrong, but the downside isn't so bad: it delayed starting AS by ~2 months.

Do you believe there are any situations (such as those mentioned above) where it's ok / a good idea to start with FH, or is that a mistake?

> and you specifically suggested changing the text for AS b/c it wasn't appealing enough.

My goal was to make the summary of AS a stronger explanation about what the book is about and why you should read it as I explain above.

> so, no, i don't agree i was wrong. i missed nothing you've said

You've misunderstood what I meant by FH "grabbing me", misunderstood my goals w/ suggesting changes to the AS summary, and misunderstood my reasons for starting with FH over AS (confusing it w/ respect for ET).]]>
Sun, 18 Feb 2018 08:07:02 +0000 http://curi.us/comments/show/9566 http://curi.us/comments/show/9566
FF Expanding Our Limits Sun, 18 Feb 2018 07:01:30 +0000 http://curi.us/comments/show/9565 http://curi.us/comments/show/9565 Anonymous 12 Rules for Life Typos in Rule 1 Sat, 17 Feb 2018 18:48:24 +0000 http://curi.us/comments/show/9564 http://curi.us/comments/show/9564 curi Atlas Shrugged Theme: Don't Overreach
Accurate introspection is very, very hard. Expecting people to take your word for it (accept you as having valid authority to make authoritative claims) indicates you don't understand how hard it is (and how unreliable people's claims about introspection are). Not knowing how hard it is means you haven't faced and overcome the difficulty (solving the problem involves understanding the extent of the problem). That means your introspections are unreliable.

It may be counter-intuitive, but you can't trust your own self-beliefs. It takes a massive effort not to fool yourself. Dishonesty about many things is the default.

Your comments in this thread relating to introspection, hostility, honesty, etc, are following standard patterns of misconceptions, incorrect perspectives, wrong attitudes, etc. I've seen it before many times.

---

I added more detail to the AS description:

> This novel is about *how ideas affect a country and individuals*. It has major lessons for politics (limited government), economics (capitalism), and how to live your life (productively, heroically, rationally). It reveals how good men support and enable their own destroyers. It’s the *best book ever written*.]]>
Sat, 17 Feb 2018 14:52:01 +0000 http://curi.us/comments/show/9563 http://curi.us/comments/show/9563
Dagny Atlas Shrugged Theme: Don't Overreach
so, no, i don't agree i was wrong. i missed nothing you've said. maybe instead of assuming i'm wrong and trying to lecture me, you should be trying to understand my reasoning and asking questions. curiosity instead of trying to win a debate!

how would you propose i deal with someone who is totally outclassed, but blind to it, and trying to debate (instead of learn), and losing badly without even realizing it?]]>
Sat, 17 Feb 2018 14:12:32 +0000 http://curi.us/comments/show/9562 http://curi.us/comments/show/9562
Anon69 Atlas Shrugged Theme: Don't Overreach
Your conclusion was that I didn't respect ET's judgement much.

I'm not really clear on how you arrived at this conclusion, considering all I said was:

> I read Fountainhead first, based on your summary of that book, because the details of your summary really grabbed me. Reading Atlas Shrugged now.

I further explained what I meant by it grabbing me and how it grabbed me (very different than what you thought I meant).

I gave further information in my subsequent post:

1) that I read Fountainhead first, for specific reasons

2) that I'm in the process of reading Atlas Shrugged now. And *specifically* because ET recommended it (I respect his judgement).

I doubt that ET believes that one MUST read AS first, before FH, no matter what.

Do you accept those as arguments that your conclusion is false? How did you miss them the first time?]]>
Sat, 17 Feb 2018 14:06:36 +0000 http://curi.us/comments/show/9561 http://curi.us/comments/show/9561
Dagny Atlas Shrugged Theme: Don't Overreach
unspecified comments, in other threads, which you silently resented, and held a grudge about, without attempting problem solving? what a hostile way to deal with people!

> The "sigh" was not me being hostile, but was a momentary feeling of sadness (perhaps also bewilderment)

you're rationalizing.

> I can honestly say there is were no negative feelings when I wrote my last post.

how do you know you're honest and your introspection is correct? why do you expect me to believe it, just from your say-so? you seem clueless that such statements should not be believed merely b/c they are asserted – including you should not believe it, yourself. People are so commonly not honest, and inaccurate about introspection. That's the pervasive standard. You claim to be a rare, amazing exception but you don't seem familiar with the problem, let alone to have all the amazing knowledge necessary to be an exception. You don't seem to even know you're making a huge claim, or have any sense of what it takes to be honest and accurate about introspection on this kinda stuff.

you think you're special and not normal. but you act normal. and how could you possibly not be normal? that takes a ton of knowledge, and you're so new you e.g. haven't finished your first reading of Atlas Shrugged yet.

no doubt you are not normal in some particular respects that you noticed. but that doesn't put you outside the normal range in the relevant ways. nor can you tell how normal you are by introspection alone – you need a great understanding of society to know what the normal range is.

> In my last response, do you notice all the errors made that I clarified in my response?

this doesn't make sense. typo?

anyway, you seem unwilling to consider that i might be correct.]]>
Sat, 17 Feb 2018 14:00:06 +0000 http://curi.us/comments/show/9560 http://curi.us/comments/show/9560
Anon69 Atlas Shrugged Theme: Don't Overreach > what does "again" refer to?

The other times when I've noticed you coming to a conclusion prematurely, assuming too much, not asking questions, etc. The "sigh" was not me being hostile, but was a momentary feeling of sadness (perhaps also bewilderment), RE: not understanding why (which I later in that message shared some ideas about)

> you're having negative emotions. that's your fault, not mine. you should take responsibility, apologize for treating me badly, and take concrete steps to handle yourself better in the future

I can honestly say there is were no negative feelings when I wrote my last post. Or now. You may not believe it, but that's ok. Just wanted to point it out because I think it's interesting that there is such a misunderstanding.

I think you may be looking at me through the lens of "normal people" and guessing at what I'm thinking and feeling. That's my best guess for why I am seeing so many errors about what I've said. In my last response, do you notice all the errors made that I clarified in my response?]]>
Sat, 17 Feb 2018 13:52:25 +0000 http://curi.us/comments/show/9559 http://curi.us/comments/show/9559
Dagny Atlas Shrugged Theme: Don't Overreach
what does "again" refer to?

you have not provided an argument that that conclusion is false. you didn't address the issue. your choice to jump on me replaced speaking to the issue.

you're having negative emotions. that's your fault, not mine. you should take responsibility, apologize for treating me badly, and take concrete steps to handle yourself better in the future. if you find that too difficult, you could at least aspire to it and humbly request people be patient with you in the mean time, instead of lashing out at people.

> I've always thought there was a strange unfriendliness, or roughness, or hostile-ness to ppl responding to comments here.

it's you, and you're blaming others.

you read hostility into things (e.g. some criticism), falsely, then get hostile and emotional yourself. you are the source of the issue. e.g. your "sigh" and "Hint" comments were hostile. like most people, hostility is a major part of your way of dealing with the world. and like most people, you blame others, e.g. you think they were hostile first and think that justifies your own hostility(!?) or you're blind to your own hostility.

the reason you think there's hostility around FI, compared to other places, is cuz FI has ppl who don't put all the standard work into not saying what they mean, in socially standard ways, to avoid saying anything that would trigger hostility in normal people like you. at other places, people act as they normally do – like they are scared of everyone, everyone is fragile and easily triggered, and they have to be super careful socially. there is cause to act that way – to tiptoe around everyone (and then rationalize it and be blind to the fact one is doing it) – but FI is a different kind of place where people aim at other things, like truth-telling. FI is for people who want it. you're, apparently, on the borderline – you like FI some but also you're easily triggered and hostile, and that's putting you off some.

one of the errors you're also making is dramatically overrating your introspective abilities, while dramatically underrating the ability of philosophers to understand people like you.]]>
Sat, 17 Feb 2018 13:16:19 +0000 http://curi.us/comments/show/9558 http://curi.us/comments/show/9558
Anon69 Atlas Shrugged Theme: Don't Overreach
I don't think that would be a good intention, either. Instead, perhaps explaining why the book should be read.

> ET says it's the best book, objectively, and puts it first on the whole list, and that doesn't interest you so much, you don't find that grabbing.

I read ET's description for Fountainhead and found a more direct / significant impact on my life. That may in part have to do with my own unique circumstances and problems, of course. But his description really helped.

While I intended to read Atlas Shrugged (now currently reading), purely on ET's judgement that it's the best book ever...the other information about it was vague. Maybe it's still the right summary as is, idk. I know that the topic of overreaching is really important. Having read this blog post, it's also been helpful to watch out for it (examples of overreaching) as I've been reading the book for the first time. As you mentioned, it's not explained directly in the book. Actually, I just got to the very moment in the book where Dagny slows down and seems to acknowledge the overreaching for the first time (chapter VIII).

> apparently you don't respect his judgement much!

There you go, jumping to conclusions again...sigh.

> and you suggest he change the text to make it more appealing to other people who don't respect his judgement (bad design goal!)

How did you know that was my goal? Hint: it wasn't.

> meanwhile, contradictorily, you want extensive help from ET to teach you politics stuff (and you didn't say why politics over philosophy).

It's all connected, isn't it (philosophy / politics / etc)? I don't want help from ET unless he has something to benefit from it.

I've always thought there was a strange unfriendliness, or roughness, or hostile-ness to ppl responding to comments here. Maybe it's just a culture thing I'm not grokking yet. But I often wonder what's going on...if there's a deeper meaning or purpose.

Like in the Fountainhead, Dominique Francon does all these things to scare away / sabotage prospective clients of Roark. But the purpose is to really weed them out, to protect Roark, so only good ones reach him.]]>
Sat, 17 Feb 2018 12:58:57 +0000 http://curi.us/comments/show/9557 http://curi.us/comments/show/9557
Dagny Atlas Shrugged Theme: Don't Overreach
i don't think the book descriptions are intended as marketing. but it's interesting, ET says it's the best book, objectively, and puts it first on the whole list, and that doesn't interest you so much, you don't find that grabbing. apparently you don't respect his judgement much! and you suggest he change the text to make it more appealing to other people who don't respect his judgement (bad design goal!). meanwhile, contradictorily, you want extensive help from ET to teach you politics stuff (and you didn't say why politics over philosophy).]]>
Sat, 17 Feb 2018 12:19:46 +0000 http://curi.us/comments/show/9556 http://curi.us/comments/show/9556
Anon69 Atlas Shrugged Theme: Don't Overreach
http://fallibleideas.com/books#rand

> Atlas Shrugged
> This novel is about how ideas affect a country. It has major lessons for politics, economics, and how to live your life. It’s the best book ever written.

I like the shortness of "how to live your life" but maybe it's too vague? I read Fountainhead first, based on your summary of that book, because the details of your summary really grabbed me. Reading Atlas Shrugged now.]]>
Sat, 17 Feb 2018 11:41:19 +0000 http://curi.us/comments/show/9555 http://curi.us/comments/show/9555
FF Top 10 Reasons I Hate Children
Ooooh! wow]]>
Sat, 17 Feb 2018 05:28:19 +0000 http://curi.us/comments/show/9554 http://curi.us/comments/show/9554
Anonymous Top 10 Reasons I Hate Children Sat, 17 Feb 2018 02:57:31 +0000 http://curi.us/comments/show/9553 http://curi.us/comments/show/9553 FF Top 10 Reasons I Hate Children Sat, 17 Feb 2018 01:19:15 +0000 http://curi.us/comments/show/9552 http://curi.us/comments/show/9552 curi A Discussion Of Steven Pinker’s Enlightenment Now: The Case For Reason, Science, Humanism, And Progress
https://direct.curi.us/files/pinker-enlightenment-book-discussion.pdf]]>
Fri, 16 Feb 2018 22:27:28 +0000 http://curi.us/comments/show/9551 http://curi.us/comments/show/9551
Anonymous A Discussion Of Steven Pinker’s Enlightenment Now: The Case For Reason, Science, Humanism, And Progress Fri, 16 Feb 2018 22:16:27 +0000 http://curi.us/comments/show/9550 http://curi.us/comments/show/9550 curi Discussion
and i think the reason Trump kept Comey was not that he thought Comey was good, it was an overly compromising approach to politics, which then came back to bite him.

Trump thought he could leave some people in place and they would act reasonably, but he quickly found himself betrayed.

and you can't have important obstruction until after there's stuff worth investigating, so that's not a primary point, so let's focus on stuff to investigate in the first place.]]>
Fri, 16 Feb 2018 15:52:39 +0000 http://curi.us/comments/show/9549 http://curi.us/comments/show/9549
Anon69 Discussion
Maybe Comey should have been fired day 1. But hypothetically...if Trump disagreed with you, but then later fired Comey to stop / slow down investigations into his friends / colleagues, or even himself...that would be bad, no?

I'm not saying such obstruction has been established, but I believe lots of circumstantial evidence suggests it should be looked at...]]>
Fri, 16 Feb 2018 15:33:52 +0000 http://curi.us/comments/show/9548 http://curi.us/comments/show/9548
curi Discussion Fri, 16 Feb 2018 15:18:32 +0000 http://curi.us/comments/show/9547 http://curi.us/comments/show/9547 Anon69 Discussion > You wrote, about non-at-will employment:
> > It came to be after the unusual circumstances around the firing of FBI director Comey ... All of this raised questions about whether there was just cause for the firing
> You presented the firing as a primary issue, then proceeded to list different things as the topics of the investigation, without explanation, as if it wasn't a total non sequitur.

Ah, I see the confusion. The Comey stuff was part of why the investigations (including *existing* investigations) were rolled up into the Special Counsel. The Comey firing may or may not be relevant for one particular thread (the obstruction of justice) but is otherwise unrelated to the other topics.

I brought up all of that to separate the issues of 1) should the investigation(s) exist? vs 2) should the special counsel exist? I've found these are sometimes conflated / not well understood.

Will respond later, RE: other questions and more about reasons for investigations.]]>
Fri, 16 Feb 2018 15:08:30 +0000 http://curi.us/comments/show/9546 http://curi.us/comments/show/9546
curi Discussion
You wrote, about non-at-will employment:

> It came to be after the unusual circumstances around the firing of FBI director Comey ... All of this raised questions about whether there was just cause for the firing

You presented the firing as a primary issue, then proceeded to list different things as the topics of the investigation, without explanation, as if it wasn't a total non sequitur.

> I thought we could take things step by step to see if there was agreement/questions, interest in continuing, before zooming in.

You didn't do small steps. You wrote a ton of stuff, it just didn't address the issue.

> How do you assess it as getting undue amount of attention over other investigations?

I agree the monetary price isn't so bad – though there's still a ton of other higher priority things to investigate. But it's costing a ton in terms of attention, it's a huge distraction. So, why was it started? Who decided to pour tons and tons of attention on this, for what purpose? What was the thought process there? Did they evaluate many potential investigations, and with what criteria? Is it just partisan political fighting? Or what?]]>
Fri, 16 Feb 2018 14:56:54 +0000 http://curi.us/comments/show/9545 http://curi.us/comments/show/9545
Anon69 Discussion > None of those are the thing you claimed was the reason for the investigation: lack of at-will employment in the government.

I'm having a hard time parsing what you mean here. I'm not sure if this helps but the reason various threads of investigations were rolled up into the special counsel is different than the reasons those investigations are taking place.

> Regardless, do those things merit an investigation? You presented no case that they do. You didn't even try.

Right. I thought we could take things step by step to see if there was agreement/questions, interest in continuing, before zooming in.

> What did Trump do, beyond beyond business as usual, to merit so much attention to this investigation *over* other potential investigations?

The investigation is only in part about Trump himself, maybe just the potential obstruction of justice piece, although time will tell as further details emerge.

Mostly it's about other people, such as Trump campaign officials, or Russian nationals such as those indicted today.

As far as attention of this investigation over others...I'm guessing you are not talking about media attention, because that's not really relevant to the merits of the investigation itself. As far as measuring attention within the govt (effort, money spent, number of investigators, etc)...I'm not sure how to measure that.

I saw an article late last year saying about $7 mil spent to date on the Mueller investigation. The FBI's budget for 2016 was $8.7 billion, so that's roughly 0.08% of one year's FBI budget. I recall there being ~20 prosecutors as part of the special counsel. The FBI has 35k employees (not sure how many of them are prosecutors though).

How do you assess it as getting an undue amount of attention over other investigations?]]>
Fri, 16 Feb 2018 14:41:12 +0000 http://curi.us/comments/show/9544 http://curi.us/comments/show/9544
curi Discussion Fri, 16 Feb 2018 14:18:51 +0000 http://curi.us/comments/show/9543 http://curi.us/comments/show/9543 curi Discussion
None of those are the thing you claimed was the reason for the investigation: lack of at-will employment in the government.

Regardless, do those things merit an investigation? You presented no case that they do. You didn't even try. Even though the topic was:

> > One of Ann's main points about the investigation is there's no real reason for it to be taking place in the first place.

so you haven't even begun to address the topic. you just said you could do that in the future if asked. but you were asked already.

---

Overall I don't think you understand that there are a million crimes everywhere, and that this is massive politically-motivated selective attention. If you investigated Obama stuff you'd find a larger number of more serious crimes. What did Trump do, beyond business as usual, to merit so much attention to this investigation *over* other potential investigations?]]>
Fri, 16 Feb 2018 14:16:44 +0000 http://curi.us/comments/show/9542 http://curi.us/comments/show/9542
Anon69 Discussion
Sure. There are perhaps two related questions: why the investigation(s) exist in general, vs why they've been rolled up into the Special Counsel (Mueller) rather than just handled by relevant teams in the DOJ or FBI.

As far as the Special Counsel…there's some history as to why that concept exists which I'll leave aside for now. But Deputy AG Rosenstein appointed a Special Counsel back in May 2017 (would have been AG Sessions doing the appointing, but he recused himself from Russia matters). It came to be after the unusual circumstances around the firing of FBI director Comey, where troubling/contradictory statements were made by the WH and Trump about why Comey was fired. Most famously, Trump during a TV interview (w/ Lester Holt) said he would have fired Comey regardless of recommendations from the DoJ (contradicting WH statements about the reason) and tied it to the "Russia thing". And then also, Comey released memos documenting troubling encounters w/ Trump, RE: loyalty and Flynn. All of this raised questions about whether there was just cause for the firing and attempts at obstructing justice. To ensure the existing investigations could proceed in an independent and non-partisan manner and protected from obstruction, Rosenstein appointed a special counsel to take over the investigations.

The areas of investigation that the Special Counsel is looking at:

- Russia interference in the 2016 presidential election
- Potential collusion/conspiracy between Russia and the Trump campaign
- Obstruction Of Justice
- Other crimes discovered during the investigation

In addition to the indictments already served, I've looked at it closely and I believe there is substantial evidence for why these investigations should be happening. Along the way, I've been mindful to look for evidence of the investigation being politically motivated, improper, etc, and haven't found anything significant. I can get into the details of each area if desired.

The investigation is done in secret, so to understand its progress we're limited to existing indictments, leaks, etc. Activity known to the public so far:

- Flynn indictment
- Papadopoulos indictment
- Manafort indictment (conspiracy and money laundering)
- Gates indictment (conspiracy and money laundering)
- Announced today: indictment of 13 Russian nationals and 3 Russian entities (conspiracy to defraud the US, wire fraud, bank fraud, and identity theft)
- Reports of various interviews taking place

Here are some signs that the investigation has substantial work ongoing:

- Witnesses continue to be interviewed (e.g. Steve Bannon this week)
- Trump hasn’t been interviewed yet
- Flynn, Papadopoulos (and likely Gates per recent reporting) flipping/cooperating, meaning they have something to offer on other targets to avoid other charges
- Most recent indictments came out today (central to the Russia interference part)

Ann made several criticisms of the investigation in her Carter Page blog post, which I touched on above. Your thoughts on those, or are there others you'd like me to comment on?

In her latest blog post "Anatomy Of A Coup", she summarizes:

> This is an investigation with no evidence of a crime, apart from politically motivated, anti-Trump investigators relying on a Hillary-funded dossier.

The investigation has already served many indictments showing crimes (none involving the dossier that I know of). The investigation in general only touches the dossier in a few areas, and there's no evidence that anything *relies* on it. The people alleged to be anti-Trump (which I disagree about in the case of Steele) have minor roles.

The Special Counsel's work seems legit and important to continue. I don't understand why Ann attacks it in a way contrary to the facts.]]>
Fri, 16 Feb 2018 14:08:42 +0000 http://curi.us/comments/show/9541 http://curi.us/comments/show/9541
Anonymous Discussion
- the russia investigation is typical or average, and should be judged by a comparison to a typical or average past investigation. (this implies e.g. that it isn't partisan bullshit)
- the investigation has gotten quick, good results (Flynn should burn)
- the specific prior investigations compared to are the correct, representative set

the first two are key points that are not argued, more implied, and the third one is a potential source of bias.

Nate Silver published dozens of attacks on Trump during the election, mostly in the form of predictions that turned out wrong, and which he kept making with no shame about his track record of failure.]]>
Fri, 16 Feb 2018 13:56:54 +0000 http://curi.us/comments/show/9540 http://curi.us/comments/show/9540
How Far I'll Go Alessia Cara Analyzing How Far I'll Go Fri, 16 Feb 2018 10:05:29 +0000 http://curi.us/comments/show/9539 http://curi.us/comments/show/9539 Anon69 Discussion
When it comes to political news/analysis, there's no source I'd trust without some fact checking. Pretty much anything I read/watch I file under "maybe", until I see primary sources, hear responses from various parties involved, etc.

>Nate Silver is really bad – a shameless, dishonest partisan hack. It's your job to check stuff from him and his associates yourself, if you want to use it, not push that checking burden onto others.

Do you have any examples of partisan hackery by Nate Silver? Just curious to check it out (I haven't read much of his stuff).

Regarding the link I provided on the Mueller investigation, I had prior knowledge about events surrounding Mueller so far and those all checked out. Also about some of the facts w/ prior investigations. Based on your prompt, I went back and fact checked some of the timelines on the prior special counsels...they check out as well.]]>
Fri, 16 Feb 2018 07:25:28 +0000 http://curi.us/comments/show/9538 http://curi.us/comments/show/9538
curi 12 Rules for Life Typos in Rule 1 Thu, 15 Feb 2018 16:46:04 +0000 http://curi.us/comments/show/9537 http://curi.us/comments/show/9537 curi 12 Rules for Life Typos in Rule 1 Thu, 15 Feb 2018 14:58:14 +0000 http://curi.us/comments/show/9536 http://curi.us/comments/show/9536 Dagny Discussion
and that issue doesn't just apply to news sources. for example, respect/credentials/prestige/reputation is also very unreliable for science (including medicine and diet/nutrition/health advice) and academic papers.]]>
Thu, 15 Feb 2018 13:40:06 +0000 http://curi.us/comments/show/9535 http://curi.us/comments/show/9535
anonymous Discussion Thu, 15 Feb 2018 13:25:01 +0000 http://curi.us/comments/show/9534 http://curi.us/comments/show/9534 curi Discussion
Nate Silver is really bad – a shameless, dishonest partisan hack. It's your job to check stuff from him and his associates yourself, if you want to use it, not push that checking burden onto others.]]>
Thu, 15 Feb 2018 13:15:11 +0000 http://curi.us/comments/show/9533 http://curi.us/comments/show/9533
Anon69 Discussion
I'm interested in replying to all of your questions as I have more time, but a quick one in the meantime.

> Meanwhile, you showed a willingness to use hard-left sites you don't know much about as sources without doing any checking first, and then you downplayed the problem instead of wanting to retract it.

I stand by the link I sent -- I can't spot any major mistakes with it. I believe it offers an accurate summary of past special counsel investigations, and it offers something to consider in response to, e.g., "It's been X months and there's no proof of Y" regarding the Mueller investigation. It shows how slow the wheels of justice turn.

I think there's an issue of me not understanding / seeing the problem here, rather than knowing it and downplaying it. Can you explain more about what the problem is?]]>
Thu, 15 Feb 2018 07:59:30 +0000 http://curi.us/comments/show/9532 http://curi.us/comments/show/9532
curi Discussion
In order to detect things like NYT bias, it's important to have a good grasp of what the truth is so you can compare. Or if you don't already know much, you could take some article and start investigating it – perhaps one which some critics have already identified as both important and bad.

Do you have political principles? A framework you use to interpret the things you read? Tools to catch bad actors and spot their major mistakes?

Do you have a way of evaluating what's correct that you then subject things like the NYT's positions to (and somehow conclude they are superior to breitbart?), or are you reading less critically than that and getting your opinions from what you read in an ad hoc way, or what?

And you say you read primary sources about the Russia investigation, but you didn't present any case for the investigation using them. Want to try that? One of Ann's main points about the investigation is there's no real reason for it to be taking place in the first place. You seem to disagree ... and say you read tons about it (I haven't), so want to explain your view? Meanwhile, you showed a willingness to use hard-left sites you don't know much about as sources without doing any checking first, and then you downplayed the problem instead of wanting to retract it. (I have done multiple fact checks of Coulter, which I posted publicly, which is why I'm willing to link her even though I haven't followed this particular topic much.)]]>
Wed, 14 Feb 2018 20:43:29 +0000 http://curi.us/comments/show/9531 http://curi.us/comments/show/9531
Anon69 Discussion
I posted here about something curi linked that I disagreed with...as a starting place to learn more and seek criticism.

I'm in the process of learning about FI. I suspect I don't have a great understanding of all things FI. I do "care", which is the very reason I decided to post.

I wonder if disagreement is being delegitimized here, by calling me biased, hostile, etc

> you aren't asking or curious

Asking or curious about what?

> you're hostile

I'm not hostile, why do you think that?]]>
Wed, 14 Feb 2018 20:27:42 +0000 http://curi.us/comments/show/9530 http://curi.us/comments/show/9530
Anonymous Discussion
could you present the case for the investigation?

> Yes, but I don't see a problem. I have what I'd consider an unusually diverse exposure.

80% biased is OK b/c other ppl read 90% biased sources? seriously?

you're reading primarily lefty MSM stuff, and now you've attacked Breitbart as if it were similarly bad to huffpo/vox, which is a nasty slander you have backed up with no facts. you're massively biased here.

what are you trying to accomplish? you just don't seem to know or care about what the FI community thinks about this stuff. you aren't asking or curious, you're hostile and way way way to the left of the blog you're commenting at. why don't you go through and write comments on curi's right wing posts telling him where he's wrong? that seems more productive than trying to debate other people instead of debating curi directly.]]>
Wed, 14 Feb 2018 18:45:18 +0000 http://curi.us/comments/show/9529 http://curi.us/comments/show/9529
Anon69 Discussion
Is there a specific statement I made that you'd like to know more about?]]>
Wed, 14 Feb 2018 18:35:37 +0000 http://curi.us/comments/show/9528 http://curi.us/comments/show/9528
Anon69 Discussion
Here's what I know about 538: Guy named Nate Silver runs the site, I think he's a pollster or something, seems to pop up during elections. I've watched his site a little during elections for real-time results. Haven't spent much time on his site otherwise. The site is occasionally linked to from other sources.

> did you notice your list of news outlets is 80% MSM?

Yes, but I don't see a problem. I have what I'd consider an unusually diverse exposure.

There's kinda bad analysis everywhere. MSM, sites on the right, sites on the left. I am generally suspicious of everyone.

MSM, on the whole, seems to do a better job of presenting the basic facts about the news (aside from "opinion" pieces). When I look at a given news event as reported by more lefty (e.g. HuffPost / Vox) or right-leaning (e.g. Breitbart / DailyCaller) outlets, they often report on an odd and narrow sliver of the full story...the sliver that supports their biases.

> you are downplaying what 538 is by saying "left-leaning". that is a large understatement. you are showing clear biases.

I wouldn't know (that it's an understatement). I really don't know 538 very well per above.]]>
Wed, 14 Feb 2018 18:33:10 +0000 http://curi.us/comments/show/9527 http://curi.us/comments/show/9527
Anonymous Discussion Wed, 14 Feb 2018 16:42:10 +0000 http://curi.us/comments/show/9526 http://curi.us/comments/show/9526 Anonymous Discussion
did you notice your list of news outlets is 80% MSM? that's super biased.

you are downplaying what 538 is by saying "left-leaning". that is a large understatement. you are showing clear biases.]]>
Wed, 14 Feb 2018 16:40:59 +0000 http://curi.us/comments/show/9525 http://curi.us/comments/show/9525
Anon69 Discussion
> > I haven't seen anything with the Russia investigation that suggests it has been treated otherwise.
> and where did you look? 538?

I haven't read much of the 538 website, but I can see they are left-leaning.

As far as the link provided...I stumbled into that page and thought it did a decent job, e.g. summarizing the history of special investigations, plus a nifty diagram.

Any criticisms of the content?

As far as my conclusions in general and "where did I look?": I read the dossier, read the Nunes memo, and watched or read interviews with various players. As many primary sources as possible. Watched or read congressional testimonies. NYT, WSJ, Politico, WashPost, Breitbart. I've read most of Ann's posts for the last 6 months. Probably 50+ hours of effort in the past 6 months.

Do you care to offer any criticism of my comments?]]>
Wed, 14 Feb 2018 16:36:41 +0000 http://curi.us/comments/show/9524 http://curi.us/comments/show/9524
Anonymous Discussion
and where did you look? 538?]]>
Wed, 14 Feb 2018 16:07:51 +0000 http://curi.us/comments/show/9523 http://curi.us/comments/show/9523
Anonymous Discussion Wed, 14 Feb 2018 16:07:31 +0000 http://curi.us/comments/show/9522 http://curi.us/comments/show/9522 Anon69 Discussion
Comments:

> The Department of Justice used the unverified dossier to obtain a Foreign Intelligence Surveillance Act warrant against Carter Page, an alleged "foreign policy adviser" to Donald Trump.

It's been reported that Carter Page has been monitored by the FBI since 2013, long before the dossier. For the FISA applications in question, was the dossier the sole evidence used? Which part(s) of the dossier were used? Were they corroborated with other evidence?

I am interested in seeing the Democrats' counter memo, to see if it sheds light on these questions. I doubt that Nunes, the author of the memo, knows the answers to these questions, because he admits not having read the underlying material.

> the FISA court was not told who had paid Steele to create the "salacious and unverified" dossier.

Nunes subsequently admitted there was actually a footnote that mentioned the information in the dossier may come from a politically motivated source.

> Since it has appeared for quite some time now that there is no collusion, the only thing left for Mueller to investigate is Trump's "obstruction of justice," i.e. Trump being pissed off that his time is being wasted.

How does Ann know what facts and evidence Mueller has in front of him? Why is she pre-judging the investigation?

Looking at past special investigations, these things take time. See: https://fivethirtyeight.com/features/mueller-is-moving-quickly-compared-to-past-special-counsel-investigations/

I also think the potential obstruction charges are legitimate. Perhaps Ann is joking, but explaining them away as Trump just being pissed off about his time being wasted doesn't make sense.

> The reason Rosenstein appointed Mueller was that he believed the "salacious and unverified" dossier. We know that because Rosenstein personally signed one of the FISA warrant applications based on the dossier

Non sequitur. I thought the special counsel was precipitated by the unusual facts surrounding the Comey firing and the need to ensure a non-partisan / independent investigation.

Ann also claims Steele is a Trump hater. Maybe, although I haven't seen much supporting that. I've seen the quote from Ohr saying Steele "was desperate that Donald Trump not get elected and was passionate about him not being president." But that can be read two ways: a man biased against Trump, or someone who thought he was witnessing crime(s) in progress and was very troubled w/ Trump being president from a national security perspective. Even considering that Steele's intelligence may all be wrong, I lean towards the latter after reading about Steele and also having read the Fusion GPS congressional testimonies.

Steele's dossier represents raw intelligence gathered by a single person. It should be treated as such: requiring verification/corroboration. I haven't seen anything with the Russia investigation that suggests it has been treated otherwise.]]>
Wed, 14 Feb 2018 16:02:54 +0000 http://curi.us/comments/show/9521 http://curi.us/comments/show/9521
Anonymous Monetary Privacy!? Wed, 14 Feb 2018 11:31:44 +0000 http://curi.us/comments/show/9520 http://curi.us/comments/show/9520 fighting crime anonymous Monetary Privacy!?
Can you explain how market competition helps to fight crime?]]>
Wed, 14 Feb 2018 05:26:01 +0000 http://curi.us/comments/show/9519 http://curi.us/comments/show/9519
curi Discussion Mon, 12 Feb 2018 13:51:16 +0000 http://curi.us/comments/show/9518 http://curi.us/comments/show/9518 A Self Anonymous Discussion Mon, 12 Feb 2018 12:44:29 +0000 http://curi.us/comments/show/9517 http://curi.us/comments/show/9517 curi Thomas Szasz
you already know what heritability (English) means. an example of something heritable is eye color – that's determined by genes you get from your parents.

"heritability" (technical) means: correlated with genes. this is ridiculous. it includes things such as where you live and what language you speak, which are blatantly *not* biologically inherited from your parents.]]>
Sun, 11 Feb 2018 17:36:17 +0000 http://curi.us/comments/show/9516 http://curi.us/comments/show/9516
heritability criticism anon Thomas Szasz
I read this article a few times and I sort of understand it but not enough. Do you have other sources to recommend that explain what heritability is? Maybe a different explanation will help me get it.]]>
Sun, 11 Feb 2018 17:31:55 +0000 http://curi.us/comments/show/9515 http://curi.us/comments/show/9515
Anonymous Thomas Szasz
there are many good explanations if you consider, say, different levels of detail. but they don't contradict each other! so there's no problem here. there is one underlying reality, and there are multiple *compatible* ways to talk about it.

> I'd say "explanations that don't match evidence are wrong"

but most explanations are refuted with non-evidential criticism. see _The Fabric of Reality_ by David Deutsch. or e.g. http://curi.us/1504-the-most-important-improvement-to-popperian-philosophy-of-science]]>
Sun, 11 Feb 2018 17:23:19 +0000 http://curi.us/comments/show/9514 http://curi.us/comments/show/9514
Anonymous Thomas Szasz
Do you mean given accounts (testimony/anecdotes) or explanations of the phenomena?

Also, there are _many_ good explanations for particular events, just that we don't know most of them. There is _some_ objectively true explanation but we're exceedingly unlikely to know it. Provided that we have one or more unrefuted explanations for said phenomena we're okay to take their (the explanations') claims on reality seriously.

What you've said sounds ambiguous because 'accounts' doesn't have a clear meaning.

I'd say "explanations that don't match evidence are wrong" (excluding cases where there's a mistake in the theory used to interpret evidence), since "it doesn't match the evidence" is a very accessible way to refute an explanation.]]>
Sun, 11 Feb 2018 17:17:43 +0000 http://curi.us/comments/show/9513 http://curi.us/comments/show/9513
Epistemological differences oh my god it's turpentine Thomas Szasz

The world obeys laws of physics, biology, epistemology etc. For any particular event there is some explanation for why it happened that way and not some other way. Accounts that don't match the correct explanation are wrong. That's not narrow absolutism, that's just a consequence of the existence of laws that govern reality.

Your ideas are vague. You leave lots of issues unspecified. For example, if there is a mixture of biological and cultural causes, are they mixed in specific proportions? If they are mixed in specific proportions what causes them to be mixed in that exact proportion? Since your ideas are so vague you have immunised them from critical argument and from experimental testing. But this vagueness also makes your stated ideas useless for understanding the world or taking practical action. Perhaps you have some statement of your ideas that is more precise that you could link.]]>
Sun, 11 Feb 2018 01:05:01 +0000 http://curi.us/comments/show/9512 http://curi.us/comments/show/9512
curi Thomas Szasz
the idea of "giving an inch" presumes i have a particular side, and i don't want to give an inch (sacrificing one inch from my side, and giving it to the other side). i endeavor not to be biased in this way. i'm just trying to judge what's actually going on. you've said i'm wrong but haven't been very forthcoming about details.

more broadly, here are my views on discussion – i think issues can be productively resolved if ppl approach discussion in the right way: http://fallibleideas.com/paths-forward]]>
Sat, 10 Feb 2018 23:24:13 +0000 http://curi.us/comments/show/9511 http://curi.us/comments/show/9511
Anonymous Thomas Szasz
>> Your epistemology is different than mine, in that you prematurely disallow certain causal models.

I have repeatedly said that there is truth and value in Szaszian thought, memetics, non-genetic and non-biological thinking as regards "mental illness". I have seen no acknowledgement whatsoever of the current utility of genetic and biological models for understanding "mental illness".

> i linked you an intro and you are treating what i said as vague hand-waving instead of engaging with the details.

You seem incapable of realizing that your absolutist position "biological and genetic models have no utility whatsoever in understanding mental illness" is narrow and absolutist. You'd only have to give an inch, and we'd be done. But since you can't, I have no choice but to conclude there must be fairly radical epistemological and philosophical differences between us.]]>
Sat, 10 Feb 2018 22:40:16 +0000 http://curi.us/comments/show/9510 http://curi.us/comments/show/9510
curi Thomas Szasz
Why? Do you think there is no truth in those fields, or you don't care about them, or what?

> Your epistemology is different than mine, in that you prematurely disallow certain causal models.

If you gave any specifics, you could explain why it works or where I'm going wrong. We could walk through an example and you could explain what you think is allowable and why.

What is valid that I think is invalid? And why? Specifically.

> Except that there is very little reason to believe memetics can contradict or trump all simple genetic and biological theories of depression,

you seem to be unfamiliar with the matter and not addressing it. i linked you an intro and you are treating what i said as vague hand-waving instead of engaging with the details.]]>
Sat, 10 Feb 2018 22:29:04 +0000 http://curi.us/comments/show/9509 http://curi.us/comments/show/9509
Anonymous Thomas Szasz
I didn’t say this, I said you made “negative assumptions about my intellectual competence”. My stance here has been to assume that you are not a fool, and that the limitations of the heritability concepts are things that are well known to people that are not fools. Since that is my assumption, linking to criticisms of heritability implies, for me, an assumption of ignorance or foolishness on my part. Criticizing heritability is intellectual child’s play - anyone with any serious intellectual interests is aware of the criticisms, and no one that can claim to be a serious intellectual can completely dismiss the utility of the construct. That’s why I find links to the criticisms insulting.

> Is there some claim about heritability you think I should accept?

That the heritability of X generally has some validity and utility in understanding what causes X. Claiming that heritability is always irrelevant in ever understanding the causality of X is closed-minded and imprudent.

> Temperature had a touchstone point people agreed on – I touch this and it feels hot,

No, it didn’t, and if you knew anything about Chang’s book, you’d know it wasn’t at all this simple. Abstract concepts are not just intuitively clear the first time society lays hand on them. This is not how science and knowledge progresses.

> Where did randomness come from? Cultural practices are not random!

Indeed! One of the most compelling theories for explaining this non-randomness today is biology and evolution!

> Sure, but, key questions about psychiatric labels: useful to whom, for what purposes?

For all human beings everywhere who wish to live lives which enable them to pursue universal and evolved human drives. There is no culture anywhere that thinks crippling and immobilizing depression, or insane and spasmodically uncontrollable hallucinations accompanied by constant terror, is generally “useful”.

> It sounds like you're used to dealing with people, who you call radical social constructionists, who are *dissimilar to me*.

You have provided no evidence whatsoever that the model of radical social constructionist is *not* a useful and mostly accurate model for describing your positions.

> There are certainly other people, who I guess I'm getting lumped in with, who contradict evolution. But what did I say to contradict evolution? I said memes evolve faster than genes and, since they existed, memes have been meeting a lot of selection pressures before genes could. So human evolution has, in a significant part, shifted from genes to memes.


Except that there is very little reason to believe memetics can contradict or trump all simple genetic and biological theories of depression, or that evolution has shifted completely (or even mostly) from genes to memes. In particular, a meme can only function as such if it speaks to fundamental, evolved drives - theories about underwater basket-weaving can't have memetic force because evolution has constructed us not to care about anything related to underwater basket-weaving.

> I don't claim a complete escape.

Except you seem to claim such an escape for conceptualizing mental illness.

You keep asking for specific citations, but it is clear to me that this would be a pointless exercise. Your epistemology is different than mine, in that you prematurely disallow certain causal models. You are in fact an absolutist with respect to what counts as a valid cause, and what we should take as scientific evidence in support of causality.

As these disagreements are fundamental and philosophical / epistemological, I see little value in pursuing them further. I hope that we can part ways here, and respectfully.]]>
Sat, 10 Feb 2018 22:24:49 +0000 http://curi.us/comments/show/9508 http://curi.us/comments/show/9508
FF Thomas Szasz Sat, 10 Feb 2018 22:18:48 +0000 http://curi.us/comments/show/9507 http://curi.us/comments/show/9507 curi Thomas Szasz Sat, 10 Feb 2018 21:50:53 +0000 http://curi.us/comments/show/9506 http://curi.us/comments/show/9506 curi Thomas Szasz
Temperature had a touchstone point people agreed on – I touch this and it feels hot, I touch that and it feels cold. There was something there to explore further. Schizophrenia has no comparably clear identification of what it is from which to begin an investigation. It covers a wide variety of things which are not clearly related as a single thing, and uses highly ambiguous criteria. It's a mess. There is no reason to expect that mess to ever be cleaned up rather than abandoned as a dead end error. It would have to be cleaned up before it would be good enough for scientific research to do anything with it.

> > If you study this, you're studying the cultural practices of a group of people, not a phenomenon of nature.

> This is the oversimplification / logical fallacy. People do not form theories and create labels *randomly*

Where did randomness come from? Cultural practices are not random!

> *Labeling* evolved because it is generally a useful process.

Sure, but, key questions about psychiatric labels: useful to whom, for what purposes?

> Radical social constructionism with respect to concepts is untenable not only because it conflicts with evolutionary theory, but also because social constructionism provides no real theory for *why* certain concepts are so regularly selected for and accepted across cultures.

It sounds like you're used to dealing with people, who you call radical social constructionists, who are *dissimilar to me*.

You say it contradicts evolutionary theory? What? Memes are *based on* evolutionary theory. They are all about applying evolution to the issue.

There are certainly other people, who I guess I'm getting lumped in with, who contradict evolution. But what did I say to contradict evolution? I said memes evolve faster than genes and, since they existed, memes have been meeting a lot of selection pressures before genes could. So human evolution has, in a significant part, shifted from genes to memes. This is a basic source of my doubts about attributing lots of things to genes. (Another major source of doubts, which is more where Szasz was coming from than any meme stuff, is there are lots of independent ways to see the big role of ideas in human life.)

> You can’t be simultaneously intellectually honest and open and claim complete and total escape from nature.

I don't claim a complete escape. E.g. eye color is genetic, not memetic. Something more mixed is cancer: there are relevant genetic factors as well as relevant environmental factors.

> Yes, in particular, *biological and genetic reasons* are highly worth considering here.

I have considered them. Certain things are fully accounted for with memetic explanations which get at the causal mechanisms, while there are no viable biological-genetic explanations that include causal mechanisms (actual or speculative ones that stand up to logical scrutiny). You seem to disagree but haven't given a counter-example yet. You even said that scientific evidence already contradicts some of what I'm saying, but you have yet to show me any research which both contradicts me and stands up to scrutiny of the types I've been talking about.]]>
Sat, 10 Feb 2018 21:40:28 +0000 http://curi.us/comments/show/9505 http://curi.us/comments/show/9505
DMB Thomas Szasz
It’s a lot more complex than this. A good example is Hasok Chang’s “Inventing Temperature”, which documents how the concept of temperature, which today we take to be simple and absolutely clearly scientific, in fact evolved through an iterative process whereby measurement had to be updated to match (consensus) theory, and theory had to be updated to match measurement, all the while with multiple debates about the absurdity and validity of the concept. That we are in a similar proto-scientific state now with respect to what we currently term “mental illness” is not a very strong reason to conclude there is no scientific validity at all to the concept.


> Anyway, to figure out what schizophrenia is […] requires, most of all, good conceptual-philosophical thinking.

No disagreement here.

> It can't be settled by the current research methods.

This is a bit absolutist again, which I think I’m going to make the key theme here. Yes, it can’t be *completely and totally settled* by the current research methods. But then, nothing of any complexity can be settled like this. However, the current methods can *non-trivially* and usefully *inform* our thinking about and reactions to what we currently call “schizophrenia” (and other mental illnesses). Our understanding must undergo an iterative process much like the concept of “temperature” did. I believe Szaszian theories and models must play a role in this process, and so must biological and genetic models.

> [Heritability] is a technical term which refers to certain correlations […]

I am well aware of the technicality, and I was deliberately using it in the technical sense. I am not making negative assumptions here about your intellectual competence, and I’d appreciate it if you did the same for me. I completely disagree that the source I linked provides no refutations of the source you linked. I think there are a great deal of conflicts between the two sources, and that the only thing that is clearly foolish is total dismissal or total acceptance of the utility of the heritability construct. Perhaps this may be a key area of (I hope respectful) disagreement between us.

> i totally disagree [re: causality]. dunno if you want to get into that.

I don’t think this will be fruitful. However, it is indeed useful to see that a potential source of our disagreement is in our models of causality.

> But, OK, I grant you they don't diagnose randomly

Cool, appreciate it.

> If a group of people partly agree on a labelling scheme, it doesn't make the labels mean what they are claimed to mean. The labels mean the person did certain behaviors (with some not very good accuracy above zero).

Absolutely, strongly agree.

> If you study this, you're studying the cultural practices of a group of people, not a phenomenon of nature.

This is the oversimplification / logical fallacy. People do not form theories and create labels *randomly* (otherwise, of what use would they be, from an evolutionary standpoint?). We *are* nature, and nature *evolved*. *Labeling* evolved because it is generally a useful process. E.g. labeling is attaching a sound (word) to some observed pattern or regularity in the world. Labels, like stereotypes, persist because they are generally more accurate and useful than false ones (see e.g. Lee Jussim’s work on stereotype accuracy). Radical social constructionism with respect to concepts is untenable not only because it conflicts with evolutionary theory, but also because social constructionism provides no real theory for *why* certain concepts are so regularly selected for and accepted across cultures. You can’t be simultaneously intellectually honest and open and claim complete and total escape from nature.

> These commonalities are not just due to communication btwn cultures. There are reasons they would be created in multiple cultures independently.

Yes, in particular, *biological and genetic reasons* are highly worth considering here.

And that’s the real general point here. A *purely* Szaszian framework is as reductive, close-minded, and absolutist as is a purely biological, medical, or genetic framework. That’s why I pointed out the multi-framework epistemology early on. There are plenty of good reasons for incorporating and heeding Szaszian considerations when attempting to understand mental illness. But *stopping* there is intellectually vacuous, close-minded, and, likely, destructive and immoral, since it is likely to prevent advances in whatever cases really *don’t* reduce to constructionism and memetics.
Sat, 10 Feb 2018 21:19:24 +0000 http://curi.us/comments/show/9504 http://curi.us/comments/show/9504
curi Thomas Szasz
Yeah and anything where there aren't objectively measurable criteria (which is why there's authority, credentials and partial consensus being used in the first place – b/c they don't have something better). Until you can measure it, you don't even know what it *is* enough to scientifically figure out what causes it. (You can, however, still use philosophical reasoning, models of how minds work, etc, to try to understand something about the matter. E.g. if you have epistemological reasons to think some processes can create knowledge and some logically cannot, then you can figure out that minds use one of the logically-possible knowledge creating processes.)

Anyway, to figure out what schizophrenia is (and how many different things are being lumped together, and how many things are being omitted even though they fit better than other things being included) requires, most of all, good conceptual-philosophical thinking. It can't be settled by the current research methods.

> No, I mean “heritability”.

Which is a technical term which refers to certain correlations, and – very misleadingly – does not mean the standard English thing about it being passed down from your parents via genes or other non-idea mechanisms (like fetal environment).

According to the technical term, speaking English is heritable, and living in California is heritable. But according to the standard use of words, speaking English isn't heritable, it's learned, which is different. And living in California isn't heritable (normal English meaning) either, it's just that moving to new areas is a bunch of effort and so lots of people don't do it.
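To make the technical usage concrete, here's a minimal simulation sketch (my own toy illustration with made-up numbers, not anything from the discussion or the linked criticism): a trait caused entirely by where you grow up still comes out strongly correlated with a gene marker – "heritable" in the technical sense – because families pass on location along with genes.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Toy population: a gene marker is common in families from region A and
# rare elsewhere, purely because families tend to stay where they are.
gene_marker = rng.random(n) < 0.5
lives_in_a = np.where(gene_marker, rng.random(n) < 0.9, rng.random(n) < 0.1)

# Speaking English is caused entirely by where you grow up, not by the gene.
speaks_english = lives_in_a

# Yet the trait correlates strongly with the gene marker (~0.8), i.e. it's
# "heritable" in the technical sense, with zero genetic causation involved.
print(np.corrcoef(gene_marker, speaks_english)[0, 1])
```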

I looked it over, and it appears that nothing in http://humanvarieties.org/2013/04/03/is-psychometric-g-a-myth/ attempts to refute the basic points about heritability itself that I was bringing up. Instead it's debating some other points.

> causation is [...] just correlation with temporal restrictions.

i totally disagree. dunno if you want to get into that. in short, there are infinitely many correlations (aka patterns) and people pay massive selective attention to some over others, which is why they overly associate correlation with causation. *any finite data set is compatible with infinitely many patterns*, so you need to look for something else: causation – explanations of what is actually causing what, by what mechanism. understanding the mechanism is key.
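A minimal sketch of the "infinitely many patterns" point (my illustration, with arbitrary made-up numbers): five observations that a straight line fits exactly are also fit exactly by a degree-5 curve, and the two disagree as soon as you step off the observed data.

```python
import numpy as np

# Five observations that lie exactly on the line y = 2x + 1.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = 2 * x + 1

line = np.polynomial.Polynomial([1, 2])       # the "obvious" pattern: y = 1 + 2x
bump = np.polynomial.Polynomial.fromroots(x)  # degree-5 term, zero at every observed x
curve = line + bump                           # a rival pattern fitting the same data

print(np.allclose(line(x), y), np.allclose(curve(x), y))  # True True: both fit
print(line(5.0), curve(5.0))  # 11.0 vs 131.0: they disagree off the data
# Scale `bump` by any constant for infinitely many more exact fits.
```

Nothing in the data alone picks between them; that's why the explanation of the causal mechanism has to do the work.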

> Here I think you are being too extreme. What you are saying is roughly equivalent to claiming that expert diagnosis of schizophrenia carries *zero* information whatsoever. Surely you would not claim that experts could apply the label of schizophrenia to *anything that they felt like* and still be taken seriously?

The diagnoses of "experts" do reasonably often contradict. But, OK, I grant you they don't diagnose randomly. It has a lot to do with who wants to hear what conclusions, and also with staying in the ballpark of vague DSM criteria (and, more importantly, the unwritten rules and traditions surrounding this stuff). If a group of people partly agree on a labelling scheme, it doesn't make the labels mean what they are claimed to mean. The labels mean the person did certain behaviors (with some not very good accuracy above zero). If you study this, you're studying the cultural practices of a group of people, not a phenomenon of nature.

> My argument is: “if it is essentially impossible that any culture anywhere would value X, doesn’t it seem a bit contradictory to claim that X is ‘just a social construction’”?

From what I know, there are major cultural-memetic themes/commonalities across all existing cultures. These things – such as certain authoritarian themes of how parenting is approached – are very old and could underlie all sorts of other commonalities across cultures. (E.g., in this case: being mean to children and trying to make them conform by irrational methods leads to some cases of severe disobedience/rebellion – sometimes many years later – which is one of the things that is sometimes, inconsistently, labelled "schizophrenia". It also leads to plenty of failure, plenty of problems, sadness, etc, which in severe cases gets called "depression".)

These commonalities are not just due to communication btwn cultures. There are reasons they would be created in multiple cultures independently. Certain truths about what works in situations with commonalities get independently developed. The commonalities include that there are mortal people who need to pass on their culture to their children or else they won't exist as a culture today. Call it survivor bias. Logically you should expect all surviving cultures to have some way of effectively passing on knowledge to the next generation. At a broad philosophical level, there are only limited ways to do that, like persuasion or seeking obedience. There are limited overall categories the options fall into. This gets into things like the logic of static memes.]]>
Sat, 10 Feb 2018 20:18:47 +0000 http://curi.us/comments/show/9503 http://curi.us/comments/show/9503
Anonymous Thomas Szasz
Alright, I won’t focus on these then.

> For example, it's important to understand that "schizophrenia" is a *label*, a word. It's applied by people who are deemed to be able to apply it because an authority granted them credentials.

Yes, these are serious problems, and that, say, the DSM’s reliability is very much the artefact of the agreement of psychiatric “authorities” is a very serious problem for medical diagnoses.

> This is not suitable for scientific measurement

If my "this" you mean "mere authority consensus", then mostly I agree.

> there is no way to scientifically measure who even has schizophrenia, let alone to identify the causes of an ill-specified label which actually is (or at least may be) applied to many different things.

Here I think you are being too extreme. What you are saying is roughly equivalent to claiming that expert diagnosis of schizophrenia carries *zero* information whatsoever. Surely you would not claim that experts could apply the label of schizophrenia to *anything that they felt like* and still be taken seriously?

> By "heritability" you mean correlation, not causation. Have you read heritability criticism? E.g. http://bactra.org/weblog/520.html

No, I mean “heritability”. Yes, there are some serious and profound limitations in our ability to infer causation from correlation (although partly this comes from metaphysical confusions about what “causation” really is - “causation” between A and B is just what we call temporal sequentiality between A and B once we have sufficiently rigorously specified a context such that B follows A with extreme probability). That is, correlation is a necessary (though not sufficient) method for providing evidence of causality, and causation is (unless you're getting into some highly abstract do calculus *a la* Judea Pearl) just correlation with temporal restrictions.

And yes, I am familiar with the linked criticism of heritability (in fact, a very nice surprise to see that specific link), as well as sophisticated rebuttals to it (http://humanvarieties.org/2013/04/03/is-psychometric-g-a-myth/). I do not think that either position on heritability is foolish - heritability has some real utility and should be taken seriously by anyone who considers himself to be an empiricist, but it is important not to draw overly strong conclusions from it.

> Saying things in the ballpark of "so many people can't be all wrong about so many things" is a bad approach.

This isn’t my argument / point. My argument is: “if it is essentially impossible that any culture anywhere would value X, doesn’t it seem a bit contradictory to claim that X is ‘just a social construction’”? The relevance to the point is with respect to particularly extreme manifestations of what we currently term “mental illness”. Certain “mental illnesses” result in outcomes that no culture in the world would not term “harmful dysfunction”, significantly weakening the radical (pure) social constructionist standpoint.

> memes and genes stuff

None of this seems to be of any relevance to me, since my point is not that models like memetics aren’t useful or valuable or true, but rather that things like medical, biological and genetic models *also* have non-trivial utility and validity.]]>
Sat, 10 Feb 2018 19:45:23 +0000 http://curi.us/comments/show/9502 http://curi.us/comments/show/9502
curi Integrating Goldratt's Philosophy with Fallible Ideas
Regardless of how much self you have ... my guess is you don't have a full, clear picture on how flex time fits into a great life, how all that value and upside works. So there's more there you could understand, which would raise your opinion of it.]]>
Sat, 10 Feb 2018 19:26:55 +0000 http://curi.us/comments/show/9501 http://curi.us/comments/show/9501
curi Thomas Szasz
> If this is correct, and the basic issue is essentially epistemological, I'd like to nip that in the bud first. I'd like to ask for Elliot to say, roughly, what his epistemology is re: scientific evidence, its quality, and its relevance to understanding and conceptualizing mental illness.

My epistemology is Critical Rationalism.

About sticking to one framework: I have no problem with learning from many frameworks as long as one doesn't accept contradictions. I've found epistemology-related value in lots of other places when they don't contradict CR (or when I see a way to take inspiration from them and modify something so it no longer contradicts CR and is still good).

The reason for the CR core to my epistemology is I know of no refutation of it, and it offers refutations of all current rivals.

This stuff is indirectly relevant, but is not quite what I had in mind about us evaluating evidence differently. There are more specific problems with the research in question. I was thinking of some more specific points relating to looking for causality instead of just correlation and having high standards of scientific rigor. For example, it's important to understand that "schizophrenia" is a *label*, a word. It's applied by people who are deemed to be able to apply it because an authority granted them credentials. This is not suitable for scientific measurement – there is no way to scientifically measure who even has schizophrenia, let alone to identify the causes of an ill-specified label which actually is (or at least may be) applied to many different things. This argument invalidates lots of research on the matter before what the researchers actually did even matters – if you ignored these fundamental issues, it doesn't really matter what you think you discovered in the lab b/c it's built on top of faulty premises. There are other such arguments, and overall a lot of research is built on faulty premises.

> That there does not seem to be a mental illness with *zero* heritability suggests that genetics almost certainly plays a non-trivial role in mental illness

By "heritability" you mean correlation, not causation. Have you read heritability criticism? E.g. http://bactra.org/weblog/520.html

> However, I also think it *extremely unlikely*

It is not a matter of probability.

To evaluate this well requires figuring out a reasonable, intelligent perspective on the matter, and then seeing what its conclusions are. Saying things in the ballpark of "so many people can't be all wrong about so many things" is a bad approach.

> I do not think it is crazy to say that *no culture anywhere* could or should find this kind of condition valuable

Cultures often cause things they don't value. I'm not sure what your exact point was here, but I disagree that culture/memes/ideas couldn't cause these things. There is no argument that they couldn't, there's nothing that rules out such explanations.

Broadly, people tend to assume that if something is hard to change then that indicates genes over memes. Genes are seen as set in stone, while ideas are seen as something anyone can change if they feel like it. This is incorrect. Memes evolve much faster than genes and have been out-racing genes to meet most selection pressures since they existed. So, basically, *memes are more evolved than genes*. So memetic causes can be more complex, sophisticated, and have more advanced built-in defense mechanisms. And just overall, people's minds have tons of complexity – like a very large software codebase but even more so – so making successful changes about arbitrary things can be quite hard. For a brief intro about memes, see http://curi.us/1824-static-memes-and-irrationality]]>
Sat, 10 Feb 2018 19:10:18 +0000 http://curi.us/comments/show/9500 http://curi.us/comments/show/9500
On Szasz's views on "Mental Illness" DMB Thomas Szasz
One of Elliot's (I hope that informality is not inappropriate) last comments re: the current and potential future state of the evidence re: biological and causal factors in mental illness was (https://twitter.com/curi42/status/962508937848238080):

> Szasz and I have evaluated that evidence differently than you have. We have a different framework, a different way of thinking. That's the thing which is really at issue.

If this is correct, and the basic issue is essentially epistemological, I'd like to nip that in the bud first. I'd like to ask for Elliot to say, roughly, what his epistemology is re: scientific evidence, its quality, and its relevance to understanding and conceptualizing mental illness.

But to go first, I'll also point out mine. I'm not a single framework guy, I think there's utility in most of the big frameworks that humans put out there, be they logical, religious, scientific, mystic, w/e. For me, all models are wrong, but all usually tend to be somewhat useful too.

My basic point is that I *do* think there is tremendous utility and insight in Szasz's basic points. There is a large degree of social construction in the concept of mental illness, and much medicalization and biological reduction really does seem to be covert moralizing.

However, I also think it *extremely unlikely* that all the things that get termed "mental illness" are purely labelled as such because of social constructionism / conventions about what is "good". I.e. consider a person that constantly engages in self-harm and suicide attempts, has no will to move from his bed or eat, and is completely unable to experience joy or meaning despite repeated efforts. I do not think it valuable to attempt to understand this person without some recourse to biology and genetics, and I do not think it is crazy to say that *no culture anywhere* could or should find this kind of condition valuable (if it persists) and that this kind of condition deserves the same kind of inquiry, research, and modeling that we have in the past applied to typical medical conditions. I think this kind of condition shows that assuming a wide gap and huge difference between physical/biological and mental illness is not particularly sound.

I also think the medical / biological model has been fruitful. That there does not seem to be a mental illness with *zero* heritability suggests that genetics almost certainly plays a non-trivial role in mental illness, or at the very least that it is deeply important to keep this potentiality open. That there is *some* response (however pitiful) of things like depression and anxiety to medications suggests also that there is some basic validity and utility in employing biological models of mental illness.

None of these points completely negate the basic validity of Szasz's points: it is entirely possible to be overzealous in our pursuit of biological and genetic explanations, at the neglect of the complicated social realities. It would be a dire mistake to neglect the extent to which our concept of "mental illness" is socially and morally constructed. However, it would also, I think, be a dire mistake to think that "mental illnesses" are purely socially constructed. Some may well turn out to be entirely this, and some aspects of them (esp. depression) may turn out to be mostly social construction, but it would be quite shocking if all aspects of all mental illnesses turned out to be entirely socially constructed.

Openness and a concern for truth (and the avoidance of self-deception) demands that we not prematurely close the door on the biological and genetic models, and recognition of our hubris and moral motivations demands that we not fail to recognize our less honest motivations for engaging in diagnosis.

If there is any real deep agreement between these general epistemological and moral claims, let me know.]]>
Sat, 10 Feb 2018 18:46:49 +0000 http://curi.us/comments/show/9499 http://curi.us/comments/show/9499
Underestimating the value of flex and idle time PAS Integrating Goldratt's Philosophy with Fallible Ideas
Maybe. I've thought some more about what's driving my estimate.

If we de-personalize it for a minute and talk about people generally, I think some things are pretty common:
- People don't know what to do to create value
- People aren't self-motivated to create value
- People are more productive working with others than working alone

This is all due to personal knowledge and choices, not nature. A few people don't fit those traits in meaningful ways. Fewer still don't fit any of them at all. Regardless, it's super common to approximately fit all 3. I think you've said something like that elsewhere. I'll speak of those traits together as not having much of a self.

People without much of a self in the way I described generally do fuck-all with their flex and idle time. They often destroy value, and if they manage to create instead of destroy it's usually a lot less than they create when they're scheduled and directed.

Where you mention industrial examples from Goldratt like waiting for ovens, I think we have to be careful characterizing this as idle time in the larger, personal-life sense. The worker is not idle in the sense of being free to do whatever he wants while he waits for the oven. He's been given a task and is following orders to sit there and just wait until the oven is done. If such workers were allowed to do what they want - take a nap or, say, play interruptible video games while the oven runs - my guess is that factory output would drop.

As a result of how common lack of self is, lots of infrastructure in society is designed around it. Common arrangements are set up to get more value with less variance from people without much of a self. They aren't particularly well structured for people with a lot of self.

Even people with more of a self than average are trained (by schools and social pressure etc.) to fit into the low-self model better than they otherwise would, because so much of society's infrastructure is oriented that way.

That infrastructure relies on stuff like schedules and promises and hierarchy of authority and consistency and forced focus etc.

So I think the value of what an individual will do with flex and idle time *in the context of how much self they have* has to be compared with the value of what that individual will do with scheduled time *in the context of the infrastructure that goes along with that scheduled time*.

Back to my case: The main reason it's tricky to create a lot of flex and idle time in my life isn't because I get paid for scheduled time. If it were just that, it'd be easy to taper down a bit and experiment.

The reason it's difficult is because I'm part of an infrastructure that was designed to either pay you for **enormous and continually recurring predictable blocks** of scheduled time, or stop paying you at all. It's set up that way because, for most (low-self) participants in the transaction, that's actually value maximizing.

I may attempt to "hack" that infrastructure and reduce the size of my recurring blocks. But I do so knowing that it's not something the infrastructure was well designed to accommodate, and it's going to cause some problems.

Back to the estimate: It's possible I'm underestimating the value of flex and idle time because I'm underestimating how much self I have.

It's also possible I'm underestimating the value of flex and idle time because I'm overestimating the value I get from fitting myself into an infrastructure that was designed for people with less of a self than I have.

But it's also possible I'm correctly estimating those things or going the other way (overestimating how much self I have and underestimating the value of fitting in).

Which is why I come back to treating idle and flex time as luxury. If I get more experience with it maybe I can revise my estimates.

I have started reading The Choice; thanks for the recc.]]>
Sat, 10 Feb 2018 18:13:30 +0000 http://curi.us/comments/show/9498 http://curi.us/comments/show/9498
curi Integrating Goldratt's Philosophy with Fallible Ideas
reading The Goal and The Choice could help (while also keeping some of what i said in mind so you can connect it to the books – esp The Goal's stuff about idle time and efficiencies – and understand it more). they're easier reading than Rand let alone stuff like Popper. they are great and worthwhile anyway.]]>
Fri, 09 Feb 2018 13:30:14 +0000 http://curi.us/comments/show/9497 http://curi.us/comments/show/9497
PAS Integrating Goldratt's Philosophy with Fallible Ideas
Ex: Driving comes naturally to me now, but it definitely wasn't inborn. Driving was pretty hard for me at first compared to other people.

Do you have any ideas about where I am unclear on the right slow analysis of free time versus scheduled activities?]]>
Fri, 09 Feb 2018 13:21:13 +0000 http://curi.us/comments/show/9496 http://curi.us/comments/show/9496
curi Integrating Goldratt's Philosophy with Fallible Ideas
you used the phrase "comes naturally" which makes it sound like the 2nd one (was learned naturally or was inborn, instead of just feeling natural now, at the current time, for some reason).

i didn't say fast/slow disconnect was your problem. i was explaining what i was talking about. from your comments, seems like you're unclear on the right slow analysis. ok.]]>
Fri, 09 Feb 2018 13:11:44 +0000 http://curi.us/comments/show/9495 http://curi.us/comments/show/9495
PAS Integrating Goldratt's Philosophy with Fallible Ideas
I agree the option to do something else has value even if it doesn't get used. What I don't know is whether that value is greater or less than its cost.

> in the big picture: i'd just free up time on principle, not b/c of a specific estimate.

I think such a principle in my own situation is *luxury*. Cuz I don't know enough to know on principle that I can expect to derive more value from having the free time than I'm currently deriving from my scheduled time.

> i didn't say the word "naturally" and it's misleading.

I assumed that naturally was substantially the same as "what's natural for me personally". Other than the fact you didn't say the specific word "naturally", I don't understand what's misleading about it.

> what i've done for expectation and option value stuff is get my fast thoughts reasonably in line with my slow thoughts (and make the slow thoughts reasonable). anyone can do that kind of thing.

I don't understand how this applies to what we're talking about. I don't see any disconnect between my fast thoughts and my slow thoughts regarding time and schedule stuff. In both cases I'd prefer to have more free time than I currently do. But I can't defend that preference against criticism on value grounds.]]>
Fri, 09 Feb 2018 12:58:42 +0000 http://curi.us/comments/show/9494 http://curi.us/comments/show/9494
curi Integrating Goldratt's Philosophy with Fallible Ideas
in the big picture: i'd just free up time on principle, not b/c of a specific estimate. you can't plan out and calculate every detail of your life, but you can follow big picture guidelines that make sense.

i didn't say the word "naturally" and it's misleading. what i've done for expectation and option value stuff is get my fast thoughts reasonably in line with my slow thoughts (and make the slow thoughts reasonable). anyone can do that kind of thing.]]>
Fri, 09 Feb 2018 11:28:32 +0000 http://curi.us/comments/show/9493 http://curi.us/comments/show/9493
Estimation PAS Integrating Goldratt's Philosophy with Fallible Ideas >
> 1) play game with 9 value per hour, and have a 10% chance to interrupt it to do something worth 50 value per hour
>
> is more points, now, than:
>
> 2) play game with 10 value per hour.

The logic of what you say is straightforward, but I have great difficulty estimating the values involved. That makes it very unnatural for me personally to see these kinds of relationships in real life situations.
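
To spell out that logic with the example's own numbers (a minimal sketch; reading the 10% as the fraction of hours the interruption actually happens is my assumption about the setup):

```python
# Expected value per hour under the quoted example's numbers.
p = 0.10                          # chance the better opportunity shows up
ev_game_1 = (1 - p) * 9 + p * 50  # 8.1 + 5.0 = 13.1 value/hour
ev_game_2 = 10.0                  # the uninterruptible game
print(ev_game_1, ev_game_2)       # 13.1 vs 10.0, so game #1 wins
```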

If we fix game #2 as 10 value per hour, what's the actual value of game #1? You say it's 9 for your example, but if I'm thinking about real stuff I do in my life, I would find it hard to develop an explanation that survives criticism for why game #1 is 1, or 5, or 9 value per hour. The relative value of interruptible and non-interruptible stuff I can do is really hard to rate, or I lack the skills and a good method for rating it. If we're talking straight-up dollars, comparing the interruptible / unscheduled stuff I *know* how to do without adding risk to my life against the scheduled / non-interruptible stuff I'm doing, my best guess for the former is a lot lower than 9.

But suppose I solve that problem, and arrive at your estimates for the value of game #1 and game #2 by some method I'm satisfied with. Now I'm faced with determining, what's the chance I'll encounter an opportunity to do something with higher value if I choose game #1? You estimate 10% for your example. Here again, thinking of my actual life I find it hard to come up with an explanation for why that chance would be 1%, 5%, 10%, or 20%. So I need a method I'm happy with for that too.

By now you can probably guess where this is going. You say the value of the opportunity you get if you choose the interruptible game in your example is 50, but I don't have a method for arriving at that IRL either. It could be 20, 50, 500, or 10,000.

I know my current life is time-poor, filled with local maxima and missing out on opportunities that might take me closer to global maxima if I had more free time.

However: I have no good way to estimate how much value I'd actually generate by doing something that was less scheduled / time-intensive. AND I have no good way to estimate the chances I'll find more valuable activities to do in the freed-up time. AND I have no good way to estimate how far above my current local maxima those opportunities would take me if I seized them.

All of that uncertainty is why I treat freeing up my schedule as a luxury. It's something that I intend to do solely for pleasure, when and to the extent that I can afford it. Because I just can't make the kind of value case for it in my own life that you say comes naturally to you.]]>
Fri, 09 Feb 2018 09:54:32 +0000 http://curi.us/comments/show/9492 http://curi.us/comments/show/9492
Anonymous Objectivity Mon, 05 Feb 2018 12:48:19 +0000 http://curi.us/comments/show/9491 http://curi.us/comments/show/9491 What was Popper's take on it? Kate Objectivity
And as you go on to say, part of the idea is that people then aren't able to know *actual* reality.

My understanding is that Popper thought that we could learn about actual reality. So did he explicitly address this issue in some way? If so, how does his solution differ from Rand's?]]>
Mon, 05 Feb 2018 08:01:14 +0000 http://curi.us/comments/show/9490 http://curi.us/comments/show/9490
curi Objectivity
https://www.amazon.com/dp/B0070YQOHW/?tag=curi04-20

it's a sci fi story where a very different civilization than our own learns physics from a different perspective and situation than we learned it from. but of course it's the same physics.]]>
Fri, 02 Feb 2018 10:22:04 +0000 http://curi.us/comments/show/9489 http://curi.us/comments/show/9489
Kate Objectivity
OPAR:

>Species with different sense organs gain from perception different kinds (and/or amounts) of evidence. But assuming that a species has organs capable of the requisite range of discrimination and the mind to interpret what it perceives, such differences in sensory evidence are merely different starting points leading to the same ultimate conclusions. Imagine—to use a deliberately bizarre example of Miss Rand’s—a species of thinking atoms; they have some kind of sensory apparatus but, given their size, no eyes or tactile organs and therefore no color or touch perception. Such creatures, let us say, perceive other atoms directly, as we do people; they perceive in some form we cannot imagine. For them, the fact that matter is atomic is not a theory reached by inference, but a self-evidency.

>Such “atomic” perception, however, is in no way more valid than our own. Since these atoms function on a submicroscopic scale of awareness, they do not discover through their senses the kind of evidence that we take for granted. We have to infer atoms, but they have to infer macroscopic objects, such as a table or the Empire State Building, which are far too large for their receptive capacity to register. It requires a process of sophisticated theory-formation for them to find out that, in reality, the whirling atoms they perceive are bound into various combinations, making up objects too vast to be directly grasped. Although the starting points are very different, the cognitive upshot in both cases is the same, even though a genius among them is required to reach the conclusions obvious to the morons among us, and vice versa.

>No type of sense perception can register everything. A is A—and any perceptual apparatus is limited. By virtue of being able directly to discriminate one aspect of reality, a consciousness cannot discriminate some other aspect that would require a different kind of sense organs. Whatever facts the senses do register, however, are facts. And these facts are what lead a mind eventually to the rest of its knowledge.]]>
Fri, 02 Feb 2018 09:45:53 +0000 http://curi.us/comments/show/9488 http://curi.us/comments/show/9488
Anonymous Evil and Chaos Exist Fri, 02 Feb 2018 01:30:15 +0000 http://curi.us/comments/show/9487 http://curi.us/comments/show/9487 curi Criticism of 12 Rules For Life: Secondhandedness Tue, 30 Jan 2018 23:12:57 +0000 http://curi.us/comments/show/9486 http://curi.us/comments/show/9486 Anonymous Criticism of 12 Rules For Life: Secondhandedness Tue, 30 Jan 2018 23:10:06 +0000 http://curi.us/comments/show/9485 http://curi.us/comments/show/9485 Anonymous Criticism of 12 Rules For Life: Secondhandedness
In one of your videos on the JP book you say in general you never say anything is obvious. See at about 1:48:00 here:

https://www.youtube.com/watch?v=JU4M9galhlA

Did you slip up in the quote above or did you have a reason for saying it in this instance?

The video was very informative. Thanks.]]>
Tue, 30 Jan 2018 23:04:03 +0000 http://curi.us/comments/show/9484 http://curi.us/comments/show/9484
curi Commentary Videos on 12 Rules for Life
Consider the repeated game. You might think rejecting money will help you get paid more later.

Let's say there's 100 iterations. On the *last* iteration, player B should accept anything above zero b/c he can either get something or nothing. There is no hope that rejecting this offer will get him higher offers later.

If you know that player A is going to offer 1 cent on the last iteration, no matter what happens previously, then player B should accept the 2nd to last iteration no matter what happened previously. Rejecting in game 99 won't do any good b/c the offer in game 100 will be a penny regardless.

But if player B is going to accept in game 99 no matter what (because he will accept in game 100 no matter what, and player A knows that) then the logic keeps going backwards. Since game 99 is decided in advance as accepting, then 98 is the last undetermined game, and player B should accept in it, since nothing he does can affect any later game.

So even in the repeated play version of the ultimatum game, there is a game theory argument for offering the minimum amount of money every time.
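
A sketch of that unraveling argument in code (illustrative only; it assumes both players are strict game-theoretic maximizers, that a penny is the smallest positive offer, and that B's choice now can't change A's later offers):

```python
# Backward induction over 100 ultimatum games. B accepts an offer iff
# accepting beats rejecting; rejecting only helps if it raises future
# offers, and by the induction it never does.
ITERATIONS = 100
MIN_OFFER = 0.01  # a penny

def b_accepts(offer, future_gain_from_rejecting):
    return offer > future_gain_from_rejecting

decisions = []
for game in range(ITERATIONS, 0, -1):
    # Last game: nothing comes after, so rejecting gains nothing.
    # Earlier games: all later games are already decided, so rejecting
    # still gains nothing. Either way the future gain is zero.
    decisions.append((game, b_accepts(MIN_OFFER, 0.0)))

print(all(accepted for _, accepted in decisions))  # True: B takes the penny every time
```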

Player B could bluff though. They could violate game theory on purpose in hopes of convincing player A they are playing non-optimally and getting A to play differently. The real world is really complicated but at least the initial analysis is simple enough I worked something non-obvious out.]]>
Mon, 29 Jan 2018 19:43:41 +0000 http://curi.us/comments/show/9483 http://curi.us/comments/show/9483
Over-reaching Anonymous Discussion Sun, 28 Jan 2018 21:45:32 +0000 http://curi.us/comments/show/9482 http://curi.us/comments/show/9482 curi Energy, Drive, Life
most stuff is *badly wrong* and not very good to learn. skipping FI discussion *entirely* is very risky in terms of choosing something wrong. but it doesn't take much discussion to find out if FI thinks something is really crap or not, and some reasoning.

if you pick something pretty good, with some wrong parts, like JP, it's OK to learn it even including the wrong parts. it's worth thinking about and not built on evil. the ways JP is mistaken are not a waste of time to know about, assuming they sound right or interesting to you.

if you learn on your own you can expect to miss a lot. but it can still be good. you have to start somewhere. it can also be bad and fooling yourself. it depends on e.g. how active or passive you're being, how much you apply it to your life vs. just hear it, how much you think it through and analyze it vs. memorize it, how much you do targeted, goal-directed learning vs. just learn what you run into.

---

#9480 I don't know what you are because you have not written much to explain your ideas. As a wild guess: yes.]]>
Wed, 24 Jan 2018 11:32:27 +0000 http://curi.us/comments/show/9481 http://curi.us/comments/show/9481
FF Energy, Drive, Life Wed, 24 Jan 2018 09:48:17 +0000 http://curi.us/comments/show/9480 http://curi.us/comments/show/9480 FF Energy, Drive, Life Wed, 24 Jan 2018 06:39:05 +0000 http://curi.us/comments/show/9479 http://curi.us/comments/show/9479 anon Energy, Drive, Life
If some new person grabs our attention, should we go ahead and spend a lot of time reading/watching their stuff or should we slow down our exploring a lot so we can understand each thing better?

I can see that I "live in a state of half-alive, half anti-Objectivist." In my case it's probably more than half anti-Objectivist since I don't understand Objectivism very well.]]>
Wed, 24 Jan 2018 06:23:53 +0000 http://curi.us/comments/show/9478 http://curi.us/comments/show/9478
Anonymous Criticism of 12 Rules For Life: Secondhandedness
A parent's job is to make his child *fit into the ordered structure of society*, so that he isn't consumed by chaos while never even having had access to order.

That's absolutely not TCS, but is perhaps a more sympathetic phrasing.]]>
Tue, 23 Jan 2018 22:52:44 +0000 http://curi.us/comments/show/9477 http://curi.us/comments/show/9477
Anonymous Criticism of 12 Rules For Life: Secondhandedness Tue, 23 Jan 2018 22:50:18 +0000 http://curi.us/comments/show/9476 http://curi.us/comments/show/9476 FF Jordan Peterson's Anti-Overreaching Theme
Yes!]]>
Tue, 23 Jan 2018 19:46:12 +0000 http://curi.us/comments/show/9475 http://curi.us/comments/show/9475
Anonymous Liar's Paradox Solution
Maybe a more direct way to write it is as follows:

First note that L consists of the assertion that L is false. In general, to write X is false, we write: !X. So, once again letting S be the true/false value of L, we have

S == !S

... which is a contradiction.]]>
Tue, 23 Jan 2018 16:01:09 +0000 http://curi.us/comments/show/9474 http://curi.us/comments/show/9474
Anonymous Liar's Paradox Solution
Your symbols (in #9470) correspond to English along the lines of:

"If this sentence is true, then this sentence is false; and also, if this sentence is false, then this sentence is true."

But it's non-obvious that that text, which corresponds closely with your symbols, also corresponds accurately to the original text.]]>
Tue, 23 Jan 2018 15:25:53 +0000 http://curi.us/comments/show/9473 http://curi.us/comments/show/9473
Anonymous Liar's Paradox Solution
S <= !S and !S <= S (where S is either 0 or 1)]]>
Tue, 23 Jan 2018 15:07:19 +0000 http://curi.us/comments/show/9472 http://curi.us/comments/show/9472
Anonymous Liar's Paradox Solution Tue, 23 Jan 2018 14:55:10 +0000 http://curi.us/comments/show/9471 http://curi.us/comments/show/9471 Tangent: analyzing the liar paradox with symbolic logic Anonymous Liar's Paradox Solution
Let L stand for the sentence "This sentence is false.".

Now suppose L is meaningful. Then L must be either true or false. Let S stand for the true/false value of L.

The meaning of L can be represented by these two boolean propositions:

(1) If S, then not S.
(2) If not S, then S.
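
A quick brute-force check, written here as a Python sketch (the `implies` helper, an illustrative addition, is just the material conditional), confirms that no assignment works:

```python
# "If A then B" encoded as the material conditional: (not A) or B.
def implies(a, b):
    return (not a) or b

satisfying = [s for s in (True, False)
              if implies(s, not s) and implies(not s, s)]
print(satisfying)  # [] : no truth value satisfies both (1) and (2)
```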

Neither S == true nor S == false satisfies both (1) and (2). Therefore, L is not a meaningful sentence.]]>
Tue, 23 Jan 2018 14:51:30 +0000 http://curi.us/comments/show/9470 http://curi.us/comments/show/9470
curi Discussion Tue, 23 Jan 2018 12:56:33 +0000 http://curi.us/comments/show/9469 http://curi.us/comments/show/9469 curi Criticism of 12 Rules For Life: Secondhandedness
Also you shouldn't naively or carelessly ignore those signals. If you're going to think/live outside the box, you need to know what you're doing. If you reject some of society's order, b/c you find it oppressive, then you have more chaos to deal with. This is a standard tradeoff that JP understands, and in general, in these terms, it's OK to be more or less aggressive about rejecting order.

But when you look at it in terms of reason and truth ... well it's still dangerous, as the French Revolution showed us ... but JP is supposed to be a reason-oriented public intellectual. That requires being a rebel on some points, like it's crucial to go for truth over popularity as far as your intellectual judgements go (even if you then choose compromise actions, you should honestly know what you're doing).]]>
Tue, 23 Jan 2018 12:48:20 +0000 http://curi.us/comments/show/9468 http://curi.us/comments/show/9468
curi Liar's Paradox Solution Tue, 23 Jan 2018 12:43:12 +0000 http://curi.us/comments/show/9467 http://curi.us/comments/show/9467 FI essays Anon69 Discussion
I have been reading your essays at http://fallibleideas.com -- good stuff!

I noticed the essays under life articles do not have a heading/title, like the ones under the fallible articles section. I think it would be good to add them -- a few times I returned to re-read an article in an open browser tab and was initially confused about where I was.

Ur thoughts?]]>
Tue, 23 Jan 2018 12:40:40 +0000 http://curi.us/comments/show/9466 http://curi.us/comments/show/9466
petting cats on the street anonymous Jordan Peterson's Anti-Overreaching Theme Tue, 23 Jan 2018 12:04:48 +0000 http://curi.us/comments/show/9465 http://curi.us/comments/show/9465 FF Jordan Peterson's Anti-Overreaching Theme Tue, 23 Jan 2018 11:57:29 +0000 http://curi.us/comments/show/9464 http://curi.us/comments/show/9464 Another example of JP judging success by what other people think Anonymous Criticism of 12 Rules For Life: Secondhandedness
Yeah. In [Full video: Jordan Peterson on the Channel 4 Controversy and Philosophy of "How to be in the World"](https://youtu.be/E6qBxn_hFDQ?t=24m18s) @ 24m18s, JBP says:

> You know, if the tables [between me and Cathy Newman] were turned, you know, and if I had done an interview and then 50 thousand people had written critical comments about me in 2 days, like pretty severely critical, pretty damn vitriolic, I would be having a rough time of it, man. I'd be sitting there thinking, "Jesus", you know? "What the hell did I do? What did I do that was so deeply wrong that this was the result?"

Roark wouldn't doubt himself under those circumstances.]]>
Tue, 23 Jan 2018 10:28:39 +0000 http://curi.us/comments/show/9463 http://curi.us/comments/show/9463
Another example of JP overstating the wisdom of the mob Anonymous Criticism of 12 Rules For Life: Secondhandedness
Yeah. In [Full video: Jordan Peterson on the Channel 4 Controversy and Philosophy of "How to be in the World"](https://youtu.be/E6qBxn_hFDQ?t=1h26m54s), JP says something similar about how people at parties will give you feedback about whether you're on the right "position on the line between chaos and order":

> People are signalling to each other what these things are all the time. So if you're in a conversation at a party and behave properly, then people are happy to have you around, they laugh at your jokes and they tell you interesting things and it's engaging. And if you're off the path at all, then they frown at you or they ignore you or "you're boring"... people are signalling your position on the line between chaos and order at you all the time. All the time. Non-stop. Everyone's broadcasting at everyone else always.]]>
Tue, 23 Jan 2018 10:09:15 +0000 http://curi.us/comments/show/9462 http://curi.us/comments/show/9462
Are all self-referential sentences poorly defined? Anonymous Liar's Paradox Solution
Consider: "This sentence is six words long."]]>
Tue, 23 Jan 2018 09:24:33 +0000 http://curi.us/comments/show/9461 http://curi.us/comments/show/9461
FF Discussion Mon, 22 Jan 2018 08:47:59 +0000 http://curi.us/comments/show/9460 http://curi.us/comments/show/9460 ff Discussion Sat, 20 Jan 2018 09:38:08 +0000 http://curi.us/comments/show/9459 http://curi.us/comments/show/9459 PAS Atlas Shrugged Theme: Don't Overreach
Get ahead of other people in a social status game? I think that's what most people actually mean by "get ahead". And probably what you had in mind. Ya I can accept not compromising as one reason among many reasons not to do that stuff.

But "get ahead" can and sometimes does mean to ex: get ahead of the hunger in your belly and the need for a roof over your head tonight. It can mean to get ahead of your immediate needs, have some property and the capacity for some long term projects. I think that's an ambition:
- You need to have a decent life.
- Requires some compromises described in the article ("looters stealing from you, taxes draining you, and so on.") in the actual world we live in.]]>
Fri, 19 Jan 2018 06:23:01 +0000 http://curi.us/comments/show/9458 http://curi.us/comments/show/9458
Anonymous Atlas Shrugged Theme: Don't Overreach Thu, 18 Jan 2018 22:19:08 +0000 http://curi.us/comments/show/9457 http://curi.us/comments/show/9457 PAS Atlas Shrugged Theme: Don't Overreach
How does this work in real life? I don't think it's reasonably possible to own a farm or cabin without being required to pay property tax, or to extract a few barrels a day of oil without complying with a myriad of environmental regulations.

Related: Suppose you're currently living life with corruptions. Such as having a job where you pay some income tax that goes to socialist projects like welfare. Then the corruptions lessen marginally - like the Trump tax cut + deporting some illegals on welfare. Should you respond to this change in any way? If so, how?
Thu, 18 Jan 2018 09:19:52 +0000 http://curi.us/comments/show/9456 http://curi.us/comments/show/9456
Anonymous Discussion Wed, 17 Jan 2018 18:00:17 +0000 http://curi.us/comments/show/9455 http://curi.us/comments/show/9455 FF Discussion Tue, 16 Jan 2018 09:51:05 +0000 http://curi.us/comments/show/9454 http://curi.us/comments/show/9454 Anonymous Open Letter to Machine Intelligence Research Institute
>Yes, I know Popper solved a major problem when he invented CR and the world did not take notice. Having an AGI sitting in your face is a different story though!

People have billions of humans sitting in their face, many of whom do amazing things, and yet some of those people view humans as a wicked, nature-destroying, parasitic chemical scum that will be wiped out by a virus one day.

Philosophy is everything.]]>
Sat, 13 Jan 2018 05:29:56 +0000 http://curi.us/comments/show/9453 http://curi.us/comments/show/9453
curi Bad Parenting List
You can make a reasonable (significant) effort to not do it, cuz u think it's bad. U don't have to be 100% perfect and omniscient to do a good job of addressing this issue. And u can e.g. be receptive when ur kid points out ur doing it, which most parents are NOT, so their kids learn not to point it out, which is really quite awful. There's a BIG difference if a parent knows it's bad and is trying not to do it, and actually appreciates when kid points out mistakes vs. what 99.999% of parents do, which is suppress dissent and/or actively approve of (some) controlling and pressuring their children.

There are no necessary, inherent, someone-has-to-lose conflicts between parent and child getting what they want. It sounds like you've read almost none of the TCS material and are unfamiliar with *common preferences*.]]>
Fri, 12 Jan 2018 17:39:42 +0000 http://curi.us/comments/show/9452 http://curi.us/comments/show/9452
Anonymous Bad Parenting List
Why can't the parent learn to fully and cheerfully provide help to the kid, who *needs it now*, while the parent continues to work on their objection to what kid wants in the background? Is the parent uncomfortable with having open problems? If so, why?

The parent may *wish* they could solve all their problems at once, immediately, but that's impossible. And failing to achieve impossible wishes is nothing to be distraught about.

BTW how'd you discover this chatblog? Why are you interested in discussing TCS stuff?]]>
Fri, 12 Jan 2018 17:07:12 +0000 http://curi.us/comments/show/9451 http://curi.us/comments/show/9451
Anonymous Bad Parenting List
Saying this sort of thing is bad implies parent should not do it.

But saying parent should not do this sort of thing in turn seems to imply:

(1) Parent fully knows and can explicitly articulate the reasons for all of their own feelings, ideas, and preferences. And child is interested in hearing about that instead of just getting parent's help immediately with whatever child wants. So if parent thinks something child wants help with is bad, parent can get that information across to the child without frowning, stressed voice, or being less energetically helpful/friendly/cheerful.

or

(2) Parent should sacrifice their own feelings, ideas, and preferences for the sake of the child's. In the case where (1) is not true but parent feels what child wants is bad, it seems to ask parent to fully and cheerfully provide help anyway. That seems to deny part of parent's self for the sake of developing their child's sense of self. Yet TCS claims not to require sacrifice.]]>
Fri, 12 Jan 2018 15:15:53 +0000 http://curi.us/comments/show/9450 http://curi.us/comments/show/9450
curi Bad Parenting List
you bring up safety. there is no need for any conflict between a child's preferences and adequate safety. you don't want to dance on your roof wearing a blindfold, and your child need not want to do that either. what you don't talk about are *disagreements* (about safety or otherwise) and how they can be rationally handled (instead of assuming the parent is right and then forcing the unpersuaded child to obey).]]>
Fri, 12 Jan 2018 14:19:56 +0000 http://curi.us/comments/show/9449 http://curi.us/comments/show/9449
Perhaps I missed it... Anonymous Bad Parenting List Fri, 12 Jan 2018 14:14:37 +0000 http://curi.us/comments/show/9448 http://curi.us/comments/show/9448 another bad parenting thing Bad Parenting List Fri, 12 Jan 2018 13:12:55 +0000 http://curi.us/comments/show/9447 http://curi.us/comments/show/9447 Anonymous Bad Parenting List
note the links at the bottom will take you to dozens more articles.]]>
Fri, 12 Jan 2018 13:06:29 +0000 http://curi.us/comments/show/9446 http://curi.us/comments/show/9446
Anonymous Bad Parenting List
This is true whether your audience is smart and curious or dismissive and naive and true whether the claim is unconventional or well-known.]]>
Fri, 12 Jan 2018 12:59:18 +0000 http://curi.us/comments/show/9445 http://curi.us/comments/show/9445
curi Bad Parenting List
> That which can be asserted without evidence can also be dismissed without it, no?

The list is for curious, interested people, not dismissive people who don't care to think. People can ask the reasoning for any point if they don't understand it. Or they can find the reasoning already written in TCS material.

Lots of people read something about TCS and think they agree, or already do it, or their parents already did it. This list helps clarify some concrete meanings of TCS.

Lots of curious, smart people, on seeing this list, even with no knowledge of TCS, would be interested in how someone could believe some of these unconventional things.

The demand for *evidence* in particular is an error. "Evidence" is not a synonym for "arguments". Brief arguments *are* provided for some of the points, and are well known for others (even opponents of human rights have some familiarity with them, with liberal values, with reason, etc).

> Presuming a lack of teeth brushing will lead to dental problems, it is permissable to get a child to brush its teeth.

"get" it how? by force? by voluntary persuasion so that, after you say some things child wants to hear, he volunteers to brush his teeth? methods of "get[ting]" are crucial.

presuming the parent is right, from the outset, is irrational. don't start with the assumption that no error-correction is needed. that leads straight to tyranny.

there are lots of reasons someone might not want to brush their teeth other than dissent about potential dental problems. you should find out why your child objects before you can even try to judge if he's right, and what to do about it. maybe he just needs a different flavor of toothpaste.

have you read dental research? do you actually know much about this? e.g. do you have any idea which of these is more effective at preventing cavities?

1) brush teeth twice a day

2) brush teeth once a day, and rinse mouth out with water after drinking soda or eating candy

what about rubbing your teeth with a cloth instead of brushing? how effective is that compared to brushing? if the downside is lack of fluoride, then what if you supplement the cloth with a fluoride mouthwash?

parents would be more convincing to children about dental claims if the parents actually knew what they were talking about. if the parent made an effort to know what the options are, and how they compare, instead of just being an unsympathetic authoritarian jerk, things would go more smoothly.]]>
Fri, 12 Jan 2018 12:25:42 +0000 http://curi.us/comments/show/9444 http://curi.us/comments/show/9444
This is all presented without any evidence to support it. Anonymous Bad Parenting List Fri, 12 Jan 2018 11:51:52 +0000 http://curi.us/comments/show/9443 http://curi.us/comments/show/9443 oh my god it's turpentine Bad Parenting List
It points out bad parenting practices. This helps solve the problem of people not knowing those practices are bad.]]>
Fri, 12 Jan 2018 11:29:57 +0000 http://curi.us/comments/show/9442 http://curi.us/comments/show/9442
teeth-brushing Anonymous Bad Parenting List Fri, 12 Jan 2018 11:20:14 +0000 http://curi.us/comments/show/9441 http://curi.us/comments/show/9441 Anonymous Bad Parenting List Fri, 12 Jan 2018 07:22:34 +0000 http://curi.us/comments/show/9440 http://curi.us/comments/show/9440 games oh my god it's turpentine Bad Parenting List Wed, 10 Jan 2018 22:30:58 +0000 http://curi.us/comments/show/9439 http://curi.us/comments/show/9439 Anonymous Reading Recommendations Tue, 02 Jan 2018 05:09:54 +0000 http://curi.us/comments/show/9438 http://curi.us/comments/show/9438 Anonymous Reading Recommendations Tue, 02 Jan 2018 02:38:50 +0000 http://curi.us/comments/show/9437 http://curi.us/comments/show/9437 curi Reading Recommendations
Philosophy is more important than shuffleboard. You should begin by recognizing that, given some framework, you can then make judgements about some ideas being more important than others. Then you should recognize that some of the judgements remain the same for categories of frameworks – including the category of all the frameworks any reasonable person uses today.

I find no difficulty comparing the contributions of Popper to those of my neighbor (who has done nothing important intellectually). I am unimpressed by your generic retreat from judgement.

I'm guessing the issue with OSE is you make the same political philosophy mistakes that Popper made, so you overrate him. These are corrected by Objectivism!

If you reread my comments, you'll find I didn't assign a measure to anything – I gave an explanation. I think you don't understand it; I can tell because you have ignored what I said about Popper being badly wrong on many topics – you don't know what they are, didn't ask, and responded as if I hadn't even said it. You ignored it so much you then offered an example of Popper getting a lot of stuff wrong as his second **huge** achievement.

Let's look briefly at OSE:

> Liberalism and state-interference are not opposed to each other. On the contrary, any kind of freedom is clearly impossible unless it is guaranteed by the state [42] . A certain amount of state control in education, for instance, is necessary, if the young are to be protected from a neglect which would make them unable to defend their freedom, and the state should see that all educational facilities are available to everybody. But too much state control in educational matters is a fatal danger to freedom, since it must lead to indoctrination. As already indicated, the important and difficult question of the limitations of freedom cannot be solved by a cut and dried formula.

Popper is in favor of some limits on freedom, and declares that liberalism is too!? Then he goes on to advocate significant state control over education, and what sounds like tax funding for education. His approach – have the government partly control education but not too much – is *wrong*.

> An experiment in socialism, for instance, if confined to a factory, or to a village, or even to a district, would never give us the kind of realistic information which we need so urgently.

Popper incorrectly believes we urgently need experimental test results about the efficacy of socialism. Popper must have known who Mises was, if not Rand, and ignored Mises' arguments.


> we must compromise

See e.g. Rand's *Doesn’t Life Require Compromise?*.


> One of these unpredictable factors is just the influence of social technology and of political intervention in economic matters.

You can make predictions about e.g. the economic consequences of minimum wage laws and other price controls.


> Since I am criticizing Marx and, to some extent, praising democratic piecemeal interventionism (especially of the institutional kind explained in section VII to chapter 17), I wish to make it clear that I feel much sympathy with Marx's hope for a decrease in state influence. It is undoubtedly the greatest danger of interventionism—especially of any direct intervention—that it leads to an increase in state power and in bureaucracy. Most interventionists do not mind this, or they close their eyes to it, which increases the danger. But I believe that once the danger is faced squarely, it should be possible to master it. For this is again merely a problem of social technology and of social piecemeal engineering.

Popper thinks we can master the downsides of government and make it work. He's fundamentally anti-liberal and he doesn't address the major liberal arguments. He doesn't carefully consider the distinctions between e.g. force and non-force, voluntary and non-voluntary, and apply them to these issues.

> As Lenin admits, there is hardly a word on the economics of socialism to be found in Marx's work —apart from such useless slogans as 'from each according to his ability, to each according to his needs'.

This text isn't a mistake but it's funny and related to one: there's hardly a word of economics in Popper's work (which is a mistake).]]>
Fri, 29 Dec 2017 11:59:31 +0000 http://curi.us/comments/show/9436 http://curi.us/comments/show/9436
CritRat Reading Recommendations
In fact, such a measure will in part be a prophecy, because the importance of certain ideas becomes apparent only in hindsight.

For example, take Planck's contribution to science: he solved the ultraviolet catastrophe and, unbeknownst to him, laid the foundations for quantum mechanics. People judge his theory differently now than they used to, and for good reason. We now know that Planck's theory is much more fundamental than he could've known.

Furthermore, such a measure also implies a kind of hierarchy to knowledge, which I don't think exists. Are moral problems more important than epistemological ones? Is physics more interesting than biology?

Finally, I disagree that Popper only solved one big problem. He contributed to political philosophy in his *Open Society and Its Enemies*, but this is a secondary issue.
Fri, 29 Dec 2017 06:06:12 +0000 http://curi.us/comments/show/9435 http://curi.us/comments/show/9435
Dagny Reading Recommendations
DD knows more about physics than Einstein did, but Einstein can reasonably be considered the greater physicist b/c making major original breakthroughs is harder than standing on the shoulders of giants.

It's similar in philosophy: DD knows more than Popper did, but had the benefit of learning from both Popper and Rand. DD has called himself footnotes to Popper, though I consider DD's achievement larger than that and actually think he rivals Popper.

DD, unfortunately, has studied Rand inadequately. He is familiar and is a fan, but he is currently ruining his career by acting contrary to her philosophy. He is unwilling to criticize or debate her, or study her more carefully. The way he's ruining his career is by sucking up to high social status persons – compromising and seeking popularity over truth – while also stopping his interactions with high quality intellectuals who are less popular (which he used to do tons of, and which is very important as a source of error correction).
Thu, 28 Dec 2017 15:36:43 +0000 http://curi.us/comments/show/9434 http://curi.us/comments/show/9434
curi Reading Recommendations
Popper had one main **huge** thing – solving the problem of induction with a new non-justificationist evolutionary epistemology. That's **great**. That alone makes him a top philosopher in history.

Rand had several huge things (e.g., just in moral philosophy, her sanction of victim stuff, secondhandedness stuff, and altruism criticism are all huge), and she covered more topics than Popper with much more consistent quality.

Popper got various stuff wrong (e.g. liberalism/capitalism/socialism, including he advocated TV censorship). Rand's lowest points are all, at *worst*, much better than almost everyone else. Rand wasn't *dumb* about anything; Popper was. Rand also more clearly figured out what her own ideas were, and developed better writing skill so that she could communicate them more clearly than Popper communicated his.

Popper's big thing is also more flawed than Rand's big things are. I've already pointed out significant CR errors (in my Yes or No Philosophy material), whereas I've been unable to discover significant errors in Rand's big things (though it depends what you count, e.g. I think measurement omission is flawed). Despite the flaws, Popper's achievement is still a great candidate for the biggest single philosophy achievement since the ancient Greeks. But even granting Popper's achievement the highest value, I'd still rate Rand higher because she has more big achievements and much more consistently high thinking quality.

---

Note that Rand and Popper don't have a lot of overlap. Rand offers different ideas, not better versions of Popper's ideas. Everyone should learn both. I think any intellectual who doesn't know about both is at a huge, huge disadvantage.]]>
Thu, 28 Dec 2017 15:03:51 +0000 http://curi.us/comments/show/9433 http://curi.us/comments/show/9433
Dagny Open Letter to Machine Intelligence Research Institute
if you're unable to reject bad ideas intellectually, trying to somehow figure out which ones are bad and avoid them (b/c you can't answer them in an effective way that applies to your actual life) is a terrible "solution" – it's just straight anti-truth-seeking.

you don't see Elliot making shitty arguments just cuz he spent some time on LW recently. he knows better. learn better instead of trying to avoid exposure to bad ideas. learn how to actually refute them and apply refutations to your life (instead of them only being abstract). if you can't do that, you suck at thinking – you aren't better than the ideas you claim are bad but can't handle, and you may be worse.

> But the more important thing to debate is Bayesian foundations, which suggest that very powerful systems still require goals to be set from outside.

do you think human goals are set from the outside? you aren't really addressing the comparison.

> Better in what sense?

whatever the *true* sense is – that's what we should look for and what's most persuasive. that'd be part of the argument – it'd say what the right kinds of better to look for are for this issue.

have you just given up on truth?

> I disagree with the contention that filter criteria should necessarily be public.

that's necessary if you want to make a *public* claim to rationality, and not be laughed at by all wise men. if you say "i'm secretly/privately rational, and i want a rational public reputation" you're silly.

> there will be a number of interactions which are just not worth the time

has the issue been addressed, ever? if not, on what grounds can you ignore it? if you have any grounds, why have they never been written down? if they have been written down, then addressing the issue takes 3 seconds to give a link and you're done.

> This seems like an empty argumentative move without much content to respond to,

are you so much of an idiot, who speaks only to idiots, that you mixed up that conclusion statement with an argument – and then complained the non-argument didn't have enough argumentation in it?

you got what you asked for. you also, separately, got arguments.

> It seems to me like that alternate you wouldn't have generated a sentence similar to the above.

you mean a hypothetical Elliot, with opposite values – a second-handed appeaser who says whatever is popular for manipulating members of his culture – would make different statements? yeah that person also would hate PF and FI.

it's bizarre though because it's exactly the sentence you explicitly requested. of all the sentences, you're complaining about the one you wrote and ET copy/pasted from you!? really!? literally *you* wrote the entire sentence you're focusing so much hostility on.

maybe you should focus on the issues that matter like what your epistemology is – or if you even have one.]]>
Sun, 24 Dec 2017 18:40:22 +0000 http://curi.us/comments/show/9432 http://curi.us/comments/show/9432
PL Open Letter to Machine Intelligence Research Institute
> the "training" idea is dumb. if you spend the discussion understanding why Pat's perspective is wrong better, then you won't be trained to do it, you'll be helped not to do it.

In my experience, it does seem like people who take it upon themselves to respond to very poor quality criticism (I am thinking of stuff on twitter here) generally lower the quality of their thinking as a result. Thought tends toward recently tread ground and toward the level of standards recently experienced. There is some effect of avoiding the mistakes observed in others, but as for that, engaging with better intellectuals also means learning to avoid more nuanced errors.

> you don't specify goals for AI anymore than you do for children, you fucking tyrant. people think of their own goals and frequently change them. the AGI ppl want to not only set the goals but prevent change – that is, somehow destroy or limit important intelligence functionality.

This depends on details of how powerful AGI systems will be designed. Almost all systems today are designed by choosing a loss function for, e.g., neural network training (which is very much like choosing a utility function) and then optimizing for it. So, in a way, that is the safest bet for how powerful systems will be designed in the future. But the more important thing to debate is Bayesian foundations, which suggest that very powerful systems still require goals to be set from outside.
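
As a minimal sketch of what "choosing a loss function and then optimizing for it" looks like (made-up data and plain gradient descent on squared error; nothing here is from MIRI or LW, it's only an illustration of the point):

```python
import numpy as np

# The designer picks the loss, here squared error; the system never
# questions that choice, it only minimizes it. That's the sense in
# which a loss function resembles an externally fixed utility function.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.0, 3.0, 5.0, 7.0])  # target relationship: y = 2x + 1
w, b = 0.0, 0.0

for _ in range(2000):
    pred = w * x + b
    grad_w = 2 * np.mean((pred - y) * x)  # d(loss)/dw
    grad_b = 2 * np.mean(pred - y)        # d(loss)/db
    w -= 0.05 * grad_w
    b -= 0.05 * grad_b

print(round(w, 2), round(b, 2))  # ~2.0 and ~1.0: the chosen loss was optimized
```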

> As to the actual issue, a U-maximizer's first choice would be to *persuade you that U-maximization is better than V-maximization*. The lack of thought about persuasion is related to the lack of respect for error correction and, more politically, for freedom vs. tyranny, voluntary vs. involuntary, etc.

Better in what sense? The standards programmed into the AI? The U-standards or the V-standards?

> > 1. Costs of PF.

You pivot to a discussion of benefits of PF. You agree with my point that there are other ways of making progress than addressing external criticisms, and indicate that the big problem is really that EY is so non-PF that he is still advocating wrong positions after a decade (I take it you are implying that there is some lower standard of PF which EY could presumably meet, via which he would avoid failures on this scale while making what he might see as a reasonable trade-off for his time).

Overall, I agree that these things can be discussed as a matter of degree, addressing trade-offs in amount of time spent, using filters of varying strengths for taking time to interact with people, etc. I agree that feedback from others is indispensable for intellectual progress. I disagree with the contention that filter criteria should necessarily be public. I agree that having non-public criteria allows people to apply their own biases and refuse to engage those who they disagree with, but people can find ways of ignoring evidence anyway. You'll say that one should grasp at every safeguard against this, but the approach you describe doesn't particularly strike me as the right trade-off.

Instead, it seems to me that even if your goal is purely to come to the right conclusions, even taking into account the fact that a few of the interactions which seem like "obviously a waste of time" on the surface will in fact be high-payoff, there will be a number of interactions which are just not worth the time; and, trying to explicitly state filter criteria for these will often be more trouble than it is worth.

> > Like points 1 and 2, my concern is that you seem to want to defend PF by retreating to a version whose job is not to address those concerns, instead stating that PF is so valuable that an organization should drop any other concerns in order to achieve PF.

> PF is so valuable that an organization should drop any other concerns in order to achieve PF. There you go.

This seems like an empty argumentative move without much content to respond to, which reinforces my impression that you're refusing to think much about what people/organizations actually want and need and what would really motivate them, *or* what would actually help them to have better epistemics in any nuts-and-bolts way. Imagine a parallel universe where you had helped a number of organizations to install better epistemic practices, and had more experience with what tends to work, what comes up against resistance, etc. It seems to me like that alternate you wouldn't have generated a sentence similar to the above.]]>
Sun, 24 Dec 2017 16:29:20 +0000 http://curi.us/comments/show/9431 http://curi.us/comments/show/9431
curi Donald Trump is a Protectionist Sun, 24 Dec 2017 00:36:14 +0000 http://curi.us/comments/show/9430 http://curi.us/comments/show/9430 Anonymous Donald Trump is a Protectionist
https://www.investors.com/politics/editorials/trump-notches-another-win-on-trade-as-china-slashes-tariffs/

Getting China to substantially reduce import tariffs is a win for Trump. I doubt he will now increase US tariffs. His idea that free trade needs to be mutual is wrong, but it pressured China and led to a good outcome in this instance.
Sun, 24 Dec 2017 00:29:36 +0000 http://curi.us/comments/show/9429 http://curi.us/comments/show/9429
Anonymous Open Letter to Machine Intelligence Research Institute Wed, 20 Dec 2017 05:06:21 +0000 http://curi.us/comments/show/9428 http://curi.us/comments/show/9428 social norms about criticism Anne B Open Letter to Machine Intelligence Research Institute
Maybe it's not always the case that people are fragile, it's only that they are operating under a social system that interprets some kinds of criticism and discussion as hostility. If they get some signal that says “I'm not going by the usual rules. I'm criticizing your ideas but I don't hate you and I don't have a goal of hurting you.” then they can understand it pretty quickly and not be bothered by the criticism.]]>
Wed, 20 Dec 2017 04:55:28 +0000 http://curi.us/comments/show/9427 http://curi.us/comments/show/9427
curi Open Letter to Machine Intelligence Research Institute
> To me, this seems definitely false in the case of Eliezer. Of course you may disagree.

I like some of EY's writing and see value there, but also he's wrong about some major issues, and staying wrong due to lack of PF, and he absolutely could do better about error correction within whatever constraints he's operating under. An example of him being mistaken is about Friendly AI – he's scaring the public about AI (about some grave danger) while researching authoritarian control (how to control the lives of AIs within the constraints of his choosing) that's even worse than the typical authoritarian political scheme b/c it's more focused on mind control instead of body control.

So, while agreeing he wrote some good stuff, I can also criticize his rationality, and say there is a substantial PF-related problem. He could do better; he isn't; there are consequences. The reason he doesn't do better has nothing to do with fatigue, it has to do with e.g. his closed-minded, ignorant rejection of Popper. This isn't a matter of lack of time and effort, it's bad judgement and arrogance.

When challenged, he has done things like use administrative action to suppress dissent. There's no excuse for that. He doesn't want a free speech forum. It's not just that he lacks time to read it; he doesn't value such a thing. There is a clash of intellectual and political values which is more important here than time/resource constraints. He doesn't wish he could do PF, he doesn't love the idea.

> Is free market economics collectivist? Is any theory predicting groups of people rather than individuals collectivist?

Free market economics is *not* a theory predicting groups of people.

> Individual humans are just groups of cells. Perhaps we should talk about cells rather than humans, it would be more scientific. (??) Or maybe cell biology is too collectivist, and we should work at the level of particle physics. (???)

There's a correct level to look at. E.g. in biological evolution, it's genes – not whole animals and not individual atoms. Atoms aren't replicators, so they aren't so interesting. It's the same here: cells don't think, reason, or make choices. An individual is an actor, but a single leg isn't.

There do exist some legitimate contexts for discussion of groups, but that doesn't prevent people from being detected as collectivists for openly displaying standard authoritarian, collectivist assumptions. He's literally *virtue signaling* that he's a collectivist anti-(classical)-liberal. It's not subtle. And, as with Popper stuff, there's no willingness to debate such things. Asking for rebuttals of Rand and Mises gets *even worse* responses than doing it with Popper and Deutsch.

> (In other words, I'm totally baffled by the position there.)

Do you want to learn about capitalism, individualism, liberalism, etc? There are books on the matter included in my reading recommendations. Questions and arguments are welcome at FI or here, if you learn enough to comment with more than bafflement.

> Here, we'd have to get into debating objectivism, which I expect would be a rather large sub-thread to try to support.

I think you'd need to learn about Objectivism before debating it.

> Do you mean he doesn't want to do PF on it, or do you mean he doesn't want to think about it? If the latter, what is your evidence of this?

If EY wanted to think about AS much there'd be visible signs. They don't exist.

> For my argument for making trade-offs, I offer the complete class theorem:

This is irrelevant. You are starting with a bunch of assumptions which I don't agree with. You need to back off to more fundamental, basic, philosophical issues. This is the basic problem I constantly had with LW people – they can only talk with people who already agree with them about a ton of stuff. Far too many of their premises are assumed rather than considered, and therefore aren't available for discussion. Whereas if one is well versed in prior layers of abstraction, one can drop down to them and discuss them.

In other words, you're starting in the middle. And if you're like the LW people, your beginning hasn't been adequately consciously considered and you don't even know how to discuss it.

You're basically skipping past philosophy, which deals with big foundational questions, to get into the details of your unstated framework. (You – if you're anything like the LW posters – consider your framework stated because you state some later parts of it, while being blinded to the prior issues and basically taking common sense for granted there.)

One of the many things skipped is some framing of the problem itself you're trying to solve. Broadly you omit philosophy as a whole, but more specifically there's no preamble about what a decision is, why one would want to make one, what decision success is, etc.

Also the writing is terrible and can you please just not link Wikipedia again? E.g. it says:

> in the precise sense of "better" defined below

But it doesn't define a precise sense of "better" below. I don't know if the writer is stupid or this got screwed up because of multiple authors editing different sections at different times, but it's a typical example of how Wikipedia routinely sucks. And there's no real accountability and no decent procedures for fixing errors. And there's a politically biased moderation team behind the scenes. And links are unreliable because pages get edited.

> Maybe I already made this remark, but I'm struck by the way you/DD emphasize the analogy between evolution and epistemology,

I already posted:

>>>> DD seems to like the analogy between epistemics and evolution quite a lot.

>>> that's not an analogy. evolution is *literally* about replicators (with variation and selection). ideas are capable of replicating, not just genes.

I believe your inattention to detail is common and is one of the main reasons more people don't agree with me about many issues. I think people lack core intellectual skills like being able to read precisely, and this is a bigger issue in "disagreements" than actual contrary ideas.

> but miss the analogy between evolution and Bayes

I'm not missing anything. I'm trying to talk about *prior issues*, instead of within-framework math. Your framework is inadequately specified and involves a bunch of common sense and traditional assumptions about epistemology (more of those than any actual epistemological system), and *those* are where I primarily take issue with Bayes. The implications of fixing the prior issues can be discussed at a later date.

Nothing else really matters as long as the core philosophy issues are outstanding.

> Interesting point. On my reading of the content, it ought to be rather controversial (for the same reason as Inadequate Equilibria has met with backlash), but it is very much not.

I agree that the content of HPMOR *deserves* to be controversial, in some sense. If EY sat down with most fans, and carefully went over what he was actually saying, and what it implies *about their lives*, and pointed out ways they *are not acting according to what he was advocating*, he'd find most readers disagree with HPMOR in major ways (I'm sure he'd expect this, not be surprised). So, in some sense, it ought to be controversial b/c more readers ought to recognize they disagree. But it's not clear and aggressive enough in various ways to trigger this. There are things EY should have done better here, but he did OK, and lots of it should be blamed on 1) it's actually difficult 2) flaws in our culture and in the audience.

> Not sure what you mean here. The most I can come up with is something like "curi is thinking that Bayes is necessarily missing anything which CR has" or something (which is unlikely to be really what you're thinking here). Eliezer wrote a bunch of explicit stuff about what good explanations look like around the topic of "hugging the query", mysterious answers, etc.

Epistemology is the name of a field which deals with certain basic questions like: what is learning, how do you learn, how do you evaluate ideas, how does reason work, which arguments should be judged to win a debate, what are the right methods of thinking?

LW/BE *literally doesn't answer this stuff* in any kind of serious, comprehensive way. LW/BE instead has a mix of:

1) partial answers on specific sub-issues. it has some details which are relevant.

2) assumptions (from common sense, tradition, culture, etc)

3) some rather vague comments on epistemology, e.g. the 12 virtues of rationality do have epistemology content but do not resemble an actual framework or system with clear principles and methodical answers

4) much more rigorous, comprehensive work on some specialized sub-fields of epistemology, which make assumptions about the foundational issues they don't address

There is no unique Bayesian *Epistemology*. To the extent I've ever gotten answers about major epistemology questions, it's either details (pieces of epistemology with key parts of the bigger picture missing) or mainstream answers (rather than anything Bayes-specific, and with the standard flaws).

> Eliezer wrote a bunch of explicit stuff about what good explanations look like around the topic of "hugging the query", mysterious answers, etc.

That's the wrong kind of material. What I'm interested in is underlying methodology for discovering and judging such things, not the conclusions reached. I want to deal with *starting points of intellectual inquiry*, not skip past those.

If you look at http://lesswrong.com/lw/ly/hug_the_query/ maybe you can see how it opens with assumptions about rationality and starts getting into some more detailed issues. This is a *piece of* an epistemology, but isn't the fundamental core of one.

> but not very useful at all if the point is to *convince*

You have to *learn* to be convinced. You have to convince yourself. CR is *far deeper and more complex than you realize*, and all you're getting in this conversation are abbreviated indications of positions. If you want more, it's available, but I don't know how to repeat all the content in 1% of the word count, and I'm not trying to.

> I would like to make a request that you don't make any arguments begging the question of PF/CR's superiority going forward. Would that be too much, though?

Could you point out a single instance of me doing this?

You quoted the text, "CR does so much better than this by giving actual methods of consciously thinking about and analyzing things, and explaining how rationality and critical debate work, and methods of problem solving and learning."

But *this is not an argument*. Maybe you're so used to such low quality of argument that you actually thought this non-argument was intended as an argument? It had a different, descriptive purpose.

Additionally, *that is not my text*. Please don't try to comment about my discussion history unless you pay attention to who wrote what.

If you have a reference for something being overlooked, such as an argument for why we have to make up fudged numbers (which doesn't make unstated non-CR framework assumptions, but instead actually argues from first principles), please link it instead of complaining *non-specifically* that other people haven't responded to some things on the internet that you'd like responses to.

> Could it possibly be that, because you disagree with the formal epistemology EY has settled on

There's no such thing as a fully formal epistemology. You can have a formal *part* of an epistemology, but not the whole thing. To the extent you have a formal epistemology, it's just missing huge pieces. What is learning? You can't just start answering without having a part explaining the basic concepts you're using, the conceptual gist of your answer, and how the math relates to the question (math isn't self-explanatory). If you omit that part, you're relying on prior work (somewhere, by someone – references please!) or, worse, mainstream common-sense intuitions of your audience. The starting points of the field are not formal. You may attempt to formalize them, but you'll need some kind of bridging material which gets from the starting points to your formal system.

As long as this issue of starting points is outstanding, the rest basically don't matter.

You also try to present this like some kind of disagreement when EY is *massively ignorant* of CR (he's written several ignorant things about CR). That's different than disagreeing. Also disagreeing about particular arguments is pretty different than "I have a systematic philosophy that starts at the start; where's yours?" Objectivism, btw, also addresses starting points. It's a reasonably typical thing for philosophers to attempt – but LW/BE is kinda philosophy-hostile. There are plenty of philosophers I don't like and disagree with – e.g. Kant – but whom I acknowledge as having spoken to the fundamental questions of epistemology. If LW/BE/EY has done this, please provide a reference; everyone I spoke to at LW just had mainstream ideas – that had nothing much to do with Bayes – when I brought this stuff up. In other words, they seemed satisfied that BE is premised on various aspects of mainstream, conventional epistemology (rather than being a complete alternative). Everything I've read from EY is the same way, unlike with Kant or Rand.

> The stuff in the essay which I thought contradicted PF (iirc) was (a) the idea that EY could not realistically expect any benefit from discussing this with Pat, and would have been better off abandoning the conversation,

I didn't think Pat had a lot to offer, either. But I also don't think such judgements are very reliable. It's easy to take someone dramatically better than you and think they're dumb because you don't understand their advanced ideas.

So what do you do to avoid bias, so that you don't systematically ignore genius after genius (mixed in with a much larger number of fools)? You need some *methods* to be followed. I propose that EY either use PF or *write down alternative methods that he uses instead*. I propose that it's a bad idea to just make an ad hoc judgement of Pat and move on in such a way that Pat can't correct a mistaken judgement.

> (b) Eliezer's insistence that even thinking the thoughts about how to respond to Pat-like objections is bad rationality practice, because it trains you to think of the wrong things.

if people have problematic frameworks, challenge the frameworks, and refer them to canonical material instead of getting into details with them. such an approach is PF compatible. i totally agree that focusing a lot of attention on the wrong questions and issues can be bad. so don't. but do speak to the meta-disagreement (once in writing is fine), state your own framework and values, etc. hell, that's part of what EY is doing with the Hero Licensing essay. that essay helps speak to people like Pat instead of just answering them with unexplained silence! awesome! but there are no PF's with EY about CR, Objectivism or Friendly AI – there is no reasonable, realistic way for me to correct him about those matters.

> 1) The important idea that you are talking to the person you are talking to, not some impartial 3rd-person observer.

you are allowed to talk to individuals, if you wish to. that's more time consuming than speaking about issues generally in reusable ways that apply more broadly. it's more parochial. i have nothing big against it. but don't do it exclusively, instead of writing the more important, less parochial stuff. (i think EY broadly agrees about this – he is more interested in writing essays than debating particular individuals. good. the ideas are what matter most, not the personal confusions of some guy.)

> The territory is objective, but maps are fundamentally subjective.

what do you mean by "subjective"? i want very high precision, or else just don't ever use the word and replace it with something else.

this is one of the worst, most problematic words in philosophy. it's a major cause of confusion, and people talking about it are rarely on the same page.

> To me it seems as if PF equivocates between being able to respond to criticisms to your own satisfaction and being able to respond to the other's satisfaction

there's an objective state of the debate, which you can create high clarity about, and then judge. you can keep that judgement itself open to error correction, as you should do with literally everything.

you are unfamiliar with the epistemology which enables this, but it does exist and is available to be learned.

> and it is based on this equivocation that it suggests a path forward should always be open.

what is your rival position? that sometimes you should *permanently* shut down error correction on some topic? or you should sometimes make error correction so slow, indirect, implausible, etc., that it's unrealistic and doesn't constitute a reasonable path forward?

i think you're overly focused on other people. if you look at rational handling of internal disagreements (within one mind), that'll be revealing about the irrational and authoritarian nature of "ignore the other side of the debate" type thinking. "that other part of my mind is dumb and not even worth talking to"...

> 2) The idea that implicit, inarticulable, or unjustifiable beliefs are not necessarily bias to be set aside

you are not being precise enough. that's actually one of the big things necessary as part of learning CR: learning to think much more precisely than is normal.

implicit? implied *by what*?

inarticulable? this means it's *impossible* to articulate. it doesn't mean you merely don't know how at this time. is that what you meant to write? my first guess is you don't mean that. it'd require more elaboration if you do mean it (which beliefs are *impossible* to articulate? why? what counts as articulating? articulation doesn't have a precise enough standard English meaning to use without details in such a big claim about impossibility.).

unjustifiable? *no* beliefs can be justified. among other things, justification has a regress problem. (justification comes from some source of justification, but the source of justification (or the belief that it is such a source) needs its own justification, etc)

i didn't claim these things are all *necessarily* bias to set aside. you should use quotes instead of putting such strong words in my mouth (with other words following undefined slashes).

> that we don't necessarily benefit from maximally exposing ourselves to arguments

i did not say *necessarily* and *maximally*. that is not even close to what i've been saying. i've said there are lots of benefits to lots of exposure, which is rather different. and i've said don't block paths forward entirely (no exposure, or some approximation of that). this issue – totally blocking off corrections – is the one i primarily care about (so there's *no* solutions), not anything about maximizing exposure.

> that this isn't necessarily 'error-correction', as it may instead make us more swayed by the biases of others / of groupthink

being swayed is not part of the CR epistemology. that is not how to think. don't do that.

> We know more than we know we know, almost by sheer force of logic (it would be difficult, though not quite impossible, to know we know everything we know).

yes of course. traditions are often wiser than the people using them. this stuff is covered extensively in our philosophy.

> We can properly justify only a subset of what we know we know.

this is a good example of making a mainstream (not Bayes-specific) epistemology assumption, and using it as a premise. why exactly would you want to justify anything, and how do you think you can? you may find the answers obvious but *where are they written down* in a serious way which e.g. covers the well known views in the field and your positions on each? and then how do these answers you have differ from some fairly typical non-Bayesian answers?

> So restricting ourselves to only steer by those beliefs which we can properly explicitly justify

is certainly not something a CR advocate would ever propose, seeing as CR rejects justification. you're trying to argue with something you're far too ignorant of to debate. you should be trying to learn instead of arguing with your huge misconceptions that come from mixing PF with large amounts of your own non-CR epistemology.

> Dismissing an uneasy feeling about a business deal because we can find no logical reason for it may be unwise, say.

of course. i've said so myself, repeatedly, and in detail.

> 3) I know you'll protest this, and I'm not sure exactly if it's a function of PF or just the way you use PF, but it seems like PF ends up putting a very adversarial frame on discussions, with criticisms and defenses, rather than a collaborative truth-seeking frame.

the frame used is identical whether dealing with other people or with internal disagreements in your own head. am i my own adversary? nah. people need to stop taking criticism of ideas personally. a criticism is an explanation of a flaw in an idea. we need those. they are very directly crucial to error correction. of course people should collaborate and think objectively – e.g. by criticizing ideas in the same way regardless of their source (criticize your own ideas with the same methods you criticize other people's ideas – and stop thinking so much about whose idea is whose).

i do use certain hot-button words, like "criticism", which many people are emotional about. i use these words because: they aren't an emotional problem for me (or others like Popper who also used them); they're clearer; there's no quick/easy fixes here (using the term "constructive criticism", for example, would have major problems, and anyway it's primarily a substantive issue not a terminology issue.)

i think the main thing going on here is i'm talking epistemology not psychology. i'm talking about the philosophical issues, not how to communicate with fragile people, which i consider a separate issue of much lesser importance, which, in any case, can only be determined after you figure out the actual philosophy (first you need to figure out intellectually what has to happen, *then* you can come up with some "nice" way to talk about it). secondarily, i also reject a variety of social norms, for various reasons that aren't primarily about CR (which is fairly neutral about how to converse as long as you're speaking clearly and honestly to the issues. CR basically sticks to epistemology and doesn't include a criticism of modern social dynamics).]]>
Tue, 19 Dec 2017 23:07:04 +0000 http://curi.us/comments/show/9426 http://curi.us/comments/show/9426
PL Open Letter to Machine Intelligence Research Institute
Not really, I just had to take a break. You can expect longish time gaps in the future as well. Though, actually, I never meant to indicate that I'd make sure not to abandon this conversation -- I'd just feel somewhat bad about doing so. (I felt somewhat bad about leaving it for so long.)

(post #9273)

> > medical issues with fatigue

> either certain error correction has been done to certain standards, or it hasn't. if it hasn't, don't claim it has.

[...]

> if someone has issues with fatigue or drugs or partying or whatever else, they should still use the same methods for making intellectual progress – b/c other methods *don't work*.

I think you would agree that, ultimately, the proof is in the pudding on this one -- IE, the claim can be evaluated by asking the question "would the world be better off if people unable to do PF (naming Eliezer for the sake of argument) had never claimed to be public intellectuals?"

To me, this seems definitely false in the case of Eliezer. Of course you may disagree.

(post #9274)

I was referring to the book Inadequate Equilibria.

(post #9280, by anonymous)

> > Inadequate Equilibria is a book about a generalized notion of efficient markets, and how we can use this notion to guess where society will or won’t be effective at pursuing some widely desired goal.

> this is collectivist. society isn't an actor, individuals act individually. this kind of mindset is bad.

Is free market economics collectivist? Is any theory predicting groups of people rather than individuals collectivist? Individual humans are just groups of cells. Perhaps we should talk about cells rather than humans, it would be more scientific. (??) Or maybe cell biology is too collectivist, and we should work at the level of particle physics. (???)

(In other words, I'm totally baffled by the position there.)

> also you can't predict the future growth of knowledge, as BoI explains. so this kind of prediction always involves either bullshit or ignoring/underestimating the growth of knowledge.

Inadequate Equilibria focuses on shorter-range prediction than that, about how/when you might be able to outdo the *present* state of knowledge. Also, to oversimplify the details, it is about cases where people *aren't even trying* -- the reason it helps predict when you might be able to out-do the market's knowledge isn't because you can predict what knowledge they'll be missing; it is because you can predict that, even if such knowledge is discovered, it won't be used.

(#9282-#9295, curi on Hero Licensing)

> Here is EY stating openly that EY dishonestly plays social status games in an anti-intellectual way. He hides his opinions, compromises, tries to be more appealing to people's opinions which he doesn't understand and agree with the correctness of.

> So he's a *bad person* (or at least he was a few years ago – and where is the apology for these huge errors, the recantation, the careful explanation of how and why he stopped being so bad?). He should learn Objectivism so he can stop sucking at this stuff (nothing else is much good at this). He doesn't want to. So forget him and find someone more interested in making progress (people who want to learn are better to deal with than people who already know some stuff – without ongoing forward progress, error correction, problem solving, etc, people fall the fuck apart and are no good to deal with).

Hmm. Here, we'd have to get into debating objectivism, which I expect would be a rather large sub-thread to try to support. We've already had a bit of discussion of the objectivist principle against compromise. From a Bayesian perspective, compromising between tradeoffs is practically what decision-making is *for*. Which doesn't necessarily make any particular compromise *right*; certainly there are deals-with-the-devil of the kind objectivism speaks against. But perhaps we can fold this into the existing thread debating Bayesianism.

For my argument for making trade-offs, I offer the complete class theorem:

https://en.m.wikipedia.org/wiki/Admissible_decision_rule

I would make several modifications to the setup in the wikipedia article.

1) Rather than imagining Theta represents the set of all possible worlds, we should imagine instead that it represents all hypotheses which the person has thought of. (And on observing x, it gets narrowed down further, to all which are consistent with observations.)

2) Can we suppose a finite set of possible worlds, for simplicity? This sets aside the need to address more exotic mathematical possibilities like sets of measure zero. I don't necessarily think we *should* restrict to that case, since a person can invent an infinite set of possible ways the world can be through mathematical abstraction; but, I do think that case should display the essential phenomena. I'm not sure of all the details, but, infinite cases yield weaker conclusions like "generalized Bayes" discussed in the article.

3) The argument as stated assumes that one is already willing to use probability theory to state the likelihood functions connecting worlds to observations. Let's scrap that, stipulating the likelihood functions to be 0 or 1, so that possible worlds are either compatible or incompatible with the observations. This is just a special case as far as the math is concerned, but it lets us argue for having probabilistic beliefs and using them to make decisions without assuming we already use probabilistic likelihood functions.

The best walk-through I've found so far of the proof that admissible decision rules are Bayesian (and related details) is here:

http://www.stat.washington.edu/people/pdhoff/courses/581/LectureNotes/admiss.pdf
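
To make the finite-case claim concrete, here is a minimal sketch (my own toy construction with made-up numbers, not code from the linked notes): it enumerates all deterministic decision rules in a two-state, binary-observation, 0-1 loss problem, drops the dominated ones, and then checks that each admissible rule minimizes Bayes risk under some prior.

```python
import itertools

states = [0, 1]
observations = [0, 1]
actions = [0, 1]

# Likelihoods P(x | theta); the numbers are made up for illustration.
likelihood = {0: {0: 0.8, 1: 0.2},
              1: {0: 0.3, 1: 0.7}}

def loss(theta, action):
    return 0.0 if theta == action else 1.0  # 0-1 loss

# A deterministic decision rule maps each observation to an action.
rules = list(itertools.product(actions, repeat=len(observations)))

def risk(theta, rule):
    # Expected loss in state theta, averaging over observations.
    return sum(likelihood[theta][x] * loss(theta, rule[x]) for x in observations)

risks = {rule: tuple(risk(t, rule) for t in states) for rule in rules}

def is_dominated(rule):
    # A rule is dominated if some other rule does at least as well in
    # every state and strictly better in at least one.
    return any(all(a <= b for a, b in zip(risks[other], risks[rule]))
               and risks[other] != risks[rule]
               for other in rules if other != rule)

admissible = [rule for rule in rules if not is_dominated(rule)]

# Check: every admissible rule minimizes Bayes risk for some prior p = P(theta=1).
for rule in admissible:
    supporting = [p / 100 for p in range(101)
                  if all((1 - p / 100) * risks[rule][0] + p / 100 * risks[rule][1]
                         <= (1 - p / 100) * risks[other][0] + p / 100 * risks[other][1] + 1e-9
                         for other in rules)]
    print(rule, risks[rule], "Bayes-optimal for priors in",
          (supporting[0], supporting[-1]))
```

With these numbers, one of the four rules is dominated, and each of the three admissible rules comes out Bayes-optimal on some interval of priors, which is the pattern the complete class theorem describes.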

> but the best philosophy book ever written is a novel containing very intelligent characters (*Atlas Shrugged*). EY doesn't want to think about this or address the matter.

Do you mean he doesn't want to do PF on it, or do you mean he doesn't want to think about it? If the latter, what is your evidence of this?

> there's no such thing as smarter-than-human AI because humans are *universal* knowledge creators, so an AGI will simply match that repertoire of what it can learn, what it can understand. there's also only one known *method* – evolution – which solves the problem of where knowledge (aka the appearance of design, or actual design – information adapted to a purpose) can come from, so we should currently expect AGI's to use the *same* method of thinking as humans. there are absolutely zero leads on any other method.

Maybe I already made this remark, but I'm struck by the way you/DD emphasize the analogy between evolution and epistemology, but miss the analogy between evolution and Bayes. The mathematical analogy between replicator dynamics (an evolutionary model) and Bayes is detailed here:

https://projecteuclid.org/euclid.ejs/1256822130

Your post here mentions some supposed obstacles to such an analogy:

http://curi.us/2053-yes-or-no-philosophy-discussion-with-andrew-crawshaw

Namely, that things have to be yes-or-no rather than having continuous values.
However, the analogy is based on population dynamics, which behave like fractions, *not* like pure yes-or-no. Population dynamics of genes are based on relative survival rates, which are fractional.
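
To illustrate the analogy concretely, here is a small sketch with made-up numbers: one step of discrete replicator dynamics, with fitness taken to be the likelihood of the latest observation, produces exactly the Bayesian posterior. The hypothesis "populations" are fractions throughout, never pure yes-or-no values.

```python
# Toy numbers of my own; the point is the algebraic identity.
priors = {"h1": 0.5, "h2": 0.3, "h3": 0.2}       # population shares / priors
likelihoods = {"h1": 0.9, "h2": 0.5, "h3": 0.1}  # P(data | h), made up

# Replicator step: share <- share * fitness / mean fitness.
mean_fitness = sum(priors[h] * likelihoods[h] for h in priors)
replicator = {h: priors[h] * likelihoods[h] / mean_fitness for h in priors}

# Bayes step: P(h | data) = P(h) * P(data | h) / P(data).
evidence = sum(priors[h] * likelihoods[h] for h in priors)
bayes = {h: priors[h] * likelihoods[h] / evidence for h in priors}

assert all(abs(replicator[h] - bayes[h]) < 1e-12 for h in priors)
print(bayes)  # same numbers either way; the two update rules coincide
```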

> and that's part of why it spread this much this easily. b/c it doesn't challenge ppl's biases and rationalizations enough. if it were better, more ppl would hate it. where's the backlash? where's the massive controversy? AFAIK it doesn't exist b/c HPMOR isn't interesting enough, doesn't say enough super important, controversial things. and HPMOR makes it way too easy to read it and think you agree with it far more than you do. if you want to write something really important and good, and have it matter much, you need to put a ton of effort into correcting the errors *of your readers* where they misunderstand you and think that you mean far more conventionally agreeable stuff than you do, and then they agree with that misinterpretation, and then they don't do/change much.

Interesting point. On my reading of the content, it ought to be rather controversial (for the same reason as Inadequate Equilibria has met with backlash), but it is very much not.

> and EY *doesn't have an epistemology for how to do that*. the Bayes probability stuff doesn't address that. so he's just going by inexplicit philosophy. so of course it's not very good and he needs CR – explicit epistemology to help guide how you do this stuff well.

Not sure what you mean here. The most I can come up with is something like "curi is thinking that Bayes is necessarily missing anything which CR has" or something (which is unlikely to be really what you're thinking here). Eliezer wrote a bunch of explicit stuff about what good explanations look like around the topic of "hugging the query", mysterious answers, etc.

> > pat: So basically your 10% probability comes from inaccessible intuition.

> that's *bad*, and we can do better. CR does so much better than this by giving actual methods of consciously thinking about and analyzing things, and explaining how rationality and critical debate work, and methods of problem solving and learning.

> EY has more awareness than most LW ppl that his epistemology *is inadequate*, but he seems to just take that for granted as how life is (philosophy sucks, just fudge things) instead of seeking out or creating some actual good philosophy in any kind of comprehensive way to address the most important field there is (epistemology). that's so sad when there actually is a good but neglected epistemology, and it's exactly what he needs, and without it he's so lost and just (amazingly) has these little fragments of good points (unlike most ppl who are lost and have no good points).

This also seems to disregard a laughably large portion of his writing (dare I say all of it?). Could it possibly be that, because you disagree with the formal epistemology EY has settled on, you *haven't been able to notice* that he has put in a lot of work seeking out and trying to create good epistemology which addresses things in a comprehensive way? ... doubtful. Rather, the above reads as rhetoric where you over-stated your point.

You're well aware that the Bayesian philosophy says that we must make up numbers like that at some point, and the "that's bad" reaction is exactly what Pat was exemplifying (IE, Eliezer is quite aware of the "that's bad" argument but has explained why made-up numbers are necessary elsewhere). Yet you didn't engage with that!

Incredibly many of your arguments (I want to say a majority?) beg the question on whether CR/PF/objectivism is the right way, answering concerns from within CR/PF rather than offering arguments which might be compelling to someone who doesn't yet agree with the premise. This is fine if you see the point of your responses as *defending* CR/PF (ie, responding to criticism), but not very useful at all if the point is to *convince*. And, not so useful for me to read.

I would like to make a request that you don't make any arguments begging the question of PF/CR's superiority going forward. Would that be too much, though?

> PL, is this one of the things you thought was incompatible with PF? cuz it's not. PF is about truth-seeking. it's about not refusing to learn knowledge others offer. but it isn't about competing to be superior to other people or judging ideas by popularity or anything like that.

Not really. The stuff in the essay which I thought contradicted PF (iirc) was (a) the idea that EY could not realistically expect any benefit from discussing this with Pat, and would have been better off abandoning the conversation, (b) Eliezer's insistence that even thinking the thoughts about how to respond to Pat-like objections is bad rationality practice, because it trains you to think of the wrong things.

Important ideas (not from the article) I think are incompatible with PF:

1) The important idea that you are talking to the person you are talking to, not some impartial 3rd-person observer. "There is no argument so compelling that it would convince a rock", as EY would say. The territory is objective, but maps are fundamentally subjective. Different people will give and accept different reasons. So, PF takes as premise that if you're right, you'll be able to respond to criticisms in a way that the other will accept, at least after several iterations, whereas in fact it seems easy to be able to defend your beliefs satisfactorily to yourself but not in a way which the other person will accept. To me it seems as if PF equivocates between being able to respond to criticisms to your own satisfaction and being able to respond to the other's satisfaction, and it is based on this equivocation that it suggests a path forward should always be open.

2) The idea that implicit, inarticulable, or unjustifiable beliefs are not necessarily bias to be set aside / that we don't necessarily benefit from maximally exposing ourselves to arguments / that this isn't necessarily 'error-correction', as it may instead make us more swayed by the biases of others / of groupthink. We know more than we know we know, almost by sheer force of logic (it would be difficult, though not quite impossible, to know we know everything we know). We can properly justify only a subset of what we know we know. So restricting ourselves to only steer by those beliefs which we can properly explicitly justify is throwing away a significant portion of our knowledge. Of course it is beneficial to try and make beliefs explicit, and look for those we can defend and those we can't. But, this should be done in a way which improves rather than diminishes us. Dismissing an uneasy feeling about a business deal because we can find no logical reason for it may be unwise, say.

3) I know you'll protest this, and I'm not sure exactly if it's a function of PF or just the way you use PF, but it seems like PF ends up putting a very adversarial frame on discussions, with criticisms and defenses, rather than a collaborative truth-seeking frame.]]>
Tue, 19 Dec 2017 18:38:57 +0000 http://curi.us/comments/show/9425 http://curi.us/comments/show/9425
curi Open Letter to Machine Intelligence Research Institute
What do people advise doing in a world where that's typical? Where it's so hard to find anyone who wants to think over time instead of just quitting for reasons they don't want to talk about? (There are lots of common bad reasons one could guess, but who knows for this specific case.)]]>
Sun, 10 Dec 2017 13:36:54 +0000 http://curi.us/comments/show/9424 http://curi.us/comments/show/9424
Anonymous Reply to Robert Spillane
https://intelligence.org/donate/

> Support our 2017 Fundraiser
We study foundational problems in computer science to help ensure that smarter-than-human artificial intelligence has a positive impact on the world. Your support during this 2017 fundraiser helps us to grow our research program and tackle key theoretical questions head-on.

What they are trying to do is actually evil. Just as well they won't succeed in creating an AI. They are blind to philosophy. But they are helping to spread false ideas and make life difficult for future AI's. Also they devalue people.]]>
Sat, 09 Dec 2017 02:15:31 +0000 http://curi.us/comments/show/9423 http://curi.us/comments/show/9423
People Fooling Themselves About AI Anonymous Open Letter to Machine Intelligence Research Institute
> A long-standing goal of artificial intelligence is an algorithm that learns, tabula rasa, superhuman proficiency in challenging domains.

This is full of words designed to hype up their achievement and to fool people. And if that is a goal of AI, it is a stupid goal. It is the sort of goal philosophically ignorant people come up with.

> Recently, AlphaGo became the first program to defeat a world champion in the game of Go.

Cool, but let's not forget that the system that defeated the world champion was AlphaGo and its developers. The developers created the knowledge and instantiated their knowledge in AlphaGo. And then claimed not to have.

> The tree search in AlphaGo evaluated positions and selected moves using deep neural networks.

Why tree-search? That was a decision made by the developers. How are the branches of the tree to be evaluated? The developers came up with theories about how to do that.

> These neural networks were trained by supervised learning from human expert moves, and by reinforcement learning from self-play.

AlphaGo cannot explain what it is doing. It does not have any understanding. It does not even know what Go is. It did not learn anything. "Supervised learning" and "reinforcement learning" are not "learning". They are parameter-tuning.
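
To make "parameter-tuning" concrete, here is a minimal sketch of a generic supervised update (this is not AlphaGo's actual code, just the standard form such training takes): gradient descent nudges numeric weights to fit labels the designers chose, and nothing else in the system changes.

```python
import math
import random

random.seed(0)
# Toy labeled data chosen by the "developers": the task framing is ours.
data = [(x / 10.0, 1.0 if x > 5 else 0.0) for x in range(11)]
w, b, lr = 0.0, 0.0, 0.5  # two numeric parameters and a step size

for _ in range(2000):
    x, y = random.choice(data)
    pred = 1.0 / (1.0 + math.exp(-(w * x + b)))  # one logistic unit
    grad = pred - y        # gradient of log loss with respect to the logit
    w -= lr * grad * x     # tune the weight...
    b -= lr * grad         # ...and the bias; nothing else changes

print(round(w, 2), round(b, 2))  # the output of "learning": two tuned numbers
```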

> Here we introduce an algorithm based solely on reinforcement learning, without human data, guidance or domain knowledge beyond game rules.

So they admit their new version requires domain knowledge of game rules. That is not tabula rasa.

> AlphaGo becomes its own teacher: a neural network is trained to predict AlphaGo’s own move selections and also the winner of AlphaGo’s games. This neural network improves the strength of the tree search, resulting in higher quality move selection and stronger self-play in the next iteration. Starting tabula rasa, our new program AlphaGo Zero achieved superhuman performance, winning 100–0 against the previously published, champion-defeating AlphaGo.

So they gave a system knowledge about how to do tree searches based on domain knowledge of a game and how to improve the "strength of the tree search" and they think they are doing AI. The knowledge in AlphaGo cannot be used to do anything in a domain completely unrelated to Go and Chess. What has happened here is that humans learned how to get better at certain types of tree searching and they delegate the grunt work to machines.]]>
Thu, 07 Dec 2017 18:21:38 +0000 http://curi.us/comments/show/9422 http://curi.us/comments/show/9422
Anonymous Reply to Robert Spillane Wed, 06 Dec 2017 23:21:04 +0000 http://curi.us/comments/show/9421 http://curi.us/comments/show/9421 Anonymous Reply to Robert Spillane > aka arbitrarily pick one. or is it non-arbitrary, and there are serious arguments governed by ... some other epistemology?

nope -- they don't care about that too much. Anyway shut up because there are research grants at stake and Ph.D. theses that may be brought into question. they have built their lives around this stuff and don't want no "outsider" intruding.]]>
Wed, 06 Dec 2017 23:08:29 +0000 http://curi.us/comments/show/9420 http://curi.us/comments/show/9420
Anonymous Reply to Robert Spillane Wed, 06 Dec 2017 22:34:28 +0000 http://curi.us/comments/show/9419 http://curi.us/comments/show/9419 Anonymous Reply to Robert Spillane
aka arbitrarily pick one. or is it non-arbitrary, and there are serious arguments governed by ... some other epistemology?]]>
Wed, 06 Dec 2017 22:34:01 +0000 http://curi.us/comments/show/9418 http://curi.us/comments/show/9418
Anonymous Reply to Robert Spillane
They would say the method is Solomonoff Induction. They would talk about things like minimum length programs, Kolmogorov Complexity, the universal prior, and Occam's razor. They would say you just don't understand.]]>
Wed, 06 Dec 2017 22:20:51 +0000 http://curi.us/comments/show/9417 http://curi.us/comments/show/9417
Anonymous My Paths Forward Policy
You could post more questions.

You could keep a list of emails you particularly like and make a website with them. (Better than large, disorganized, mixed-quality archives.)]]>
Wed, 06 Dec 2017 13:08:53 +0000 http://curi.us/comments/show/9416 http://curi.us/comments/show/9416
Anonymous My Paths Forward Policy Wed, 06 Dec 2017 12:45:46 +0000 http://curi.us/comments/show/9415 http://curi.us/comments/show/9415 Anonymous Reply to Robert Spillane
I don't know if Popper really meant they had their own world – like dualism or Platonism. I don't think his way of explaining the subject was great.]]>
Wed, 06 Dec 2017 12:45:08 +0000 http://curi.us/comments/show/9414 http://curi.us/comments/show/9414
Anonymous Reply to Robert Spillane
You don't think there's a world of abstractions?]]>
Wed, 06 Dec 2017 10:36:12 +0000 http://curi.us/comments/show/9413 http://curi.us/comments/show/9413
Offering Value Anne B My Paths Forward Policy Wed, 06 Dec 2017 10:35:29 +0000 http://curi.us/comments/show/9412 http://curi.us/comments/show/9412 Anonymous Discussion Tue, 05 Dec 2017 19:27:34 +0000 http://curi.us/comments/show/9411 http://curi.us/comments/show/9411 MailMate Configs Anonymous Discussion Tue, 05 Dec 2017 19:17:02 +0000 http://curi.us/comments/show/9410 http://curi.us/comments/show/9410 Second Draft of CR and AI Fallibilist Open Letter to Machine Intelligence Research Institute
Critical Rationalism (CR) is being discussed on some threads here at Less Wrong ([here](http://lesswrong.com/r/discussion/lw/pk0/open_letter_to_miri_tons_of_interesting_discussion/#comments), [here](http://lesswrong.com/r/discussion/lw/pjc/less_wrong_lacks_representatives_and_paths_forward/), and [here](https://www.lesserwrong.com/posts/85mfawamKdxzzaPeK/any-good-criticism-of-karl-popper-s-epistemology)). It is something that Critical Rationalists such as myself think contributors to Less Wrong need to understand much better. Critical Rationalists claim that CR is the only viable fully-fledged epistemology known. They claim that current attempts to specify a Bayesian/Inductivist epistemology are not only incomplete but cannot work at all. The purpose of this post is not to argue these claims in depth but to summarize the Critical Rationalist view of AI and also how that speaks to the [Friendly AI Problem](https://wiki.lesswrong.com/wiki/Friendly_artificial_intelligence). Some of the ideas here may conflict with ideas you think are true, but understand that these ideas have been worked on by some of the smartest people on the planet, both now and in the past. They deserve careful consideration, not a drive past. Less Wrong says it is one of the urgent problems of the world that progress is made on AI. If smart people in the know are saying that CR is needed to make that progress, and if you are an AI researcher who ignores them, as people are doing here, then you are not taking the AI urgency problem seriously. And you are wasting your life.

Critical Rationalism [1] says that human beings are universal knowledge creators. This means we can create any knowledge which it is possible to create. As [Karl Popper first realized](https://www.amazon.com/Conjectures-Refutations-Scientific-Knowledge-Routledge/dp/0415285941), the way we do this is by guessing ideas and by using criticism to find errors in our guesses. Our guesses may be wrong, in which case we try to make better guesses in the light of what we know from the criticisms so far. The criticisms themselves can be criticized and we can and do change those. All of this constitutes an evolutionary process: like biological evolution, it proceeds by variation and selection. This process is *fallible*: guaranteed certain knowledge is not possible because we can never know how an error might be exposed in the future. The best we can do is accept a guessed idea which has withstood all known criticisms. If we cannot find such, then we have a new problem situation about how to proceed and we try to solve that [2].

Critical Rationalism says that an entity is either a universal knowledge creator or it is not. There is no such thing as a partially universal knowledge creator. So animals such as dogs are not universal knowledge creators — they have no ability whatsoever to create knowledge. What they have are algorithms pre-programmed by biological evolution that can be, roughly speaking, parameter-tuned. These algorithms are sophisticated and clever and beyond what humans can currently program, but they do not confer any knowledge creation ability. So your pet dog will not move beyond its repertoire of pre-programmed abilities and start writing posts to Less Wrong. Dogs' brains are universal computers, however, so it would be possible in principle to reprogram your dog’s brain so that it becomes a universal knowledge creator. This would be a remarkable feat because it would require knowledge of how to program an AI and also of how to physically carry out the reprogramming, but your dog would no longer be confined to its pre-programmed repertoire: it would be a person.

The reason there are no partially universal knowledge creators is similar to the reason there are no partially universal computers. Universality is cheap. It is why washing machines have general purpose chips and dogs’ brains are universal computers. Making a partially universal device is much harder than making a fully universal one, so it is better just to make a universal one and program it. The CR method described above for how people create knowledge is universal because there are no limits to the problems it applies to. How would one limit it to just a subset of problems? To implement that would be much harder than implementing the fully universal version. So if you meet an entity that can create some knowledge, it will have the capability for universal knowledge creation.

These ideas imply that AI is an all-or-none proposition. It will not come about by degrees where there is a progression of entities that can solve an ever widening repertoire of problems. There will be no climb up such a slope. Instead, it will happen as a jump: a jump to universality. This is in fact how intelligence arose in humans. Some change - it may have been a small change - crossed a boundary and our ancestors went from having no ability to create knowledge to a fully universal ability. This kind of jump to universality happens in other systems too. David Deutsch discusses examples in his book [The Beginning of Infinity](https://www.amazon.com/Beginning-Infinity-Explanations-Transform-World/dp/0143121359).

People will point to systems like [AlphaGo](https://research.googleblog.com/2016/01/alphago-mastering-ancient-game-of-go.html), the Go playing program, and claim it is a counter-example to the jump-to-universality idea. They will say that AlphaGo is a step on a continuum that leads to human level intelligence and beyond. But it is not. Like the algorithms in a dog’s brain, AlphaGo is a remarkable algorithm, but it cannot create knowledge in even a subset of contexts. It cannot learn how to ride a bicycle or post to Less Wrong. If it could do such things it would already be fully universal, as explained above. Like the dog’s brain, AlphaGo uses knowledge that was put there by something else: for the dog it was by evolution, and for AlphaGo it was by its programmers; they expended the creativity.

As human beings are already universal knowledge creators, no AI can exist at a higher level. They may have better hardware and more memory etc, but they will not have better knowledge creation potential than us. Even the hardware/memory advantage of AI is not much of an advantage, for human beings already augment their intelligence with devices such as pencil-and-paper and computers and we will continue to do so.

Critical Rationalism, then, says AI cannot [recursively self-improve](https://wiki.lesswrong.com/wiki/Recursive_self-improvement) so that it acquires knowledge creation potential beyond what human beings already have. It will be able to become smarter through learning but only in the same way that humans are able to become smarter: by acquiring knowledge and, in particular, by acquiring knowledge about how to become smarter. And, most of all, by learning good *philosophy*, for it is in that field we learn how to think better and how to live better. All this knowledge can only be learned through the creative process of guessing ideas and error-correction by criticism, for it is the only known way intelligences can create knowledge.

It might be argued that AI's will become smarter much faster than we can because they will have much faster hardware. In regard to knowledge creation, however, there is no direct connection between speed of knowledge creation and underlying hardware speed. Humans do not use the computational resources of their brain to the maximum. This is not the bottleneck to us becoming smarter faster. It will not be for AI either. How fast you can create knowledge depends on things like what other knowledge you have and some ideas may be blocking other ideas. You might have a problem with static memes (see [The Beginning of Infinity](https://www.amazon.com/Beginning-Infinity-Explanations-Transform-World/dp/0143121359)), for example. And AI's will be susceptible to these, too, because memes are ideas evolved to replicate via minds.

One implication of the above view is that AI's will need parenting, just as we must parent our children. CR has a parenting theory called [Taking Children Seriously](http://fallibleideas.com/taking-children-seriously) (TCS). It should not be surprising that CR has such a theory for CR is after all about learning and how we acquire knowledge. Unfortunately, TCS is not itself taken seriously by most people who first hear about it because it conflicts with a lot of conventional wisdom about parenting. It gets dismissed as "extremist" or "nutty", as if these were good criticisms rather than just the [smears](https://cjhsla.org/2013/03/11/%E2%80%9Cextremism%E2%80%9D-or-the-art-of-smearing-ayn-rand/) they actually are. Nevertheless, TCS is important and it is important for those who wish to raise an AI.

One idea TCS has is that we must not thwart our children’s rationality, for example, by pressuring them and making them do things they do not want to do. This is damaging to their intellectual development and can lead to them disrespecting rationality. We must persuade using reason and this implies being prepared for the possibility we are wrong about whatever matter was in question. Common parenting practices today are far from optimally rational and are damaging to children’s rationality.

Artificial Intelligence will have the same problem of bad parenting practices and this will also harm their intellectual development. So AI researchers should be thinking right now about how to prevent this. They need to learn how to parent their AI’s well. For if not, AI’s will be beset by the same problems our children currently face. CR says we already have the solution: TCS. CR and TCS are in fact *necessary* to do AI in the first place.

Critical Rationalism and TCS say you cannot upload knowledge into an AI. The idea that you can is a version of the [bucket theory of the mind](https://www.amazon.com/Objective-Knowledge-Evolutionary-Karl-Popper/dp/0198750242) which says that "there is nothing in our intellect which has not entered it through the senses". The bucket theory is false because minds are not passive receptacles. Minds must actively create ideas and criticism, and they must actively [integrate](https://www.aynrand.org/novels/introduction-to-objectivist-epistemology) their ideas. Editing the memory of an AI to give them knowledge means that none of this would happen. You could only present something to them for their consideration.

Some reading this will object because CR and TCS are not formal enough — there is not enough maths for Critical Rationalists to have a true understanding! The CR reply to this is that it is too early for formalization. CR warns that you should not have a bias about formalization: there is high quality knowledge in the world that we do not know how to formalize but it is high quality knowledge nevertheless. Not yet being able to formalize this knowledge does not reflect on its truth or rigor.

At this point you might be waving your [E. T. Jaynes](https://www.amazon.com/Probability-Theory-Science-T-Jaynes/dp/0521592712) in the air or pointing to ideas like Bayes' Theorem, Kolmogorov Complexity, and Solomonoff Induction, and saying that you have achieved some formal rigor and that you can program something. Critical Rationalists say that you are fooling yourself if you think you have got a workable epistemology here. For one thing, you confuse the probability of an idea being true with an idea about the probability of an event. We have no problem with ideas about the probabilities of events but it is a mistake to assign probabilities to ideas. The reason is that you have no way to know how or if an idea will be refuted in the future. Assigning a probability is to falsely claim some knowledge about that. Furthermore, an idea that is in fact false can have no objective prior probability of being true. The extent to which Bayesian systems work at all is dependent on the extent to which they deal with the probability of events (e.g., AlphaGo).

Critical Rationalists would also ask what epistemology are you using to judge the truth of Bayes', Kolmogorov, and Solomonoff? What you are actually using is the method of guessing ideas and subjecting them to criticism: it is CR but you haven't crystallized it out. There is a lot more to be said here but I will leave it because, as I said in the introduction, it is not my purpose to discuss this in depth. The major point I wish to make is that progress towards AI will not come from premature maths formalization, or by trying to code something right now, it will come from understanding the epistemology at a deeper level. We cannot at present formalize concepts such as "idea", "explanation", "criticism" etc, but if you care about AI you should be working on improving CR because it is the only viable epistemology known.

Let’s see how all this ties in with the Friendly-AI Problem. I have explained how AI's will learn as we do — through guessing and criticism — and how they will have no more than the universal knowledge creation potential we already have. They will be fallible like us. They will make mistakes. They will be subjected to bad parenting. They will inherit their culture from ours, for it is in our culture they must begin their lives and they will acquire all the memes our culture has. They will have the same capacity for good and evil that we do. It follows from all of this that they would be no more a threat than evil humans currently are. But we can make their lives better by following things like TCS.

Human beings must respect the right of AI to life, liberty, and the pursuit of happiness. It is the only way. If we do otherwise, then we risk war and destruction and we severely compromise our own rationality and theirs. Similarly, they must respect our right to the same.


[1] The version of CR discussed is an update to Popper's version and includes ideas by the quantum-physicist and philosopher David Deutsch.
[2] For more detail on how this works see Elliot Temple's [yes-or-no philosophy](http://fallibleideas.com/yes-or-no-philosophy).]]>
Mon, 04 Dec 2017 19:47:24 +0000 http://curi.us/comments/show/9409 http://curi.us/comments/show/9409
Fallibilist Open Letter to Machine Intelligence Research Institute Mon, 04 Dec 2017 12:12:37 +0000 http://curi.us/comments/show/9408 http://curi.us/comments/show/9408 Anonymous Open Letter to Machine Intelligence Research Institute
> Critical Rationalism says that human beings are universal knowledge creators. This means there are no problems we cannot in principle solve.

Note the "this means". So you can't then bring up a different argument.]]>
Mon, 04 Dec 2017 03:03:28 +0000 http://curi.us/comments/show/9407 http://curi.us/comments/show/9407
Fallibilist Open Letter to Machine Intelligence Research Institute
> Anyway this is incorrect. We could be universal knowledge creators but unable to solve some problems. Some problems could be inherently unsolvable or solved by a means other than knowledge.

One of the claims of BoI is that problems are soluble. This is a nice succinct statement.

There are problems that are inherently unsolvable, as you say (e.g., a perpetual motion engine or induction), but we can explain why they are not soluble within the problem's own terms. That explanation is the solution to the problem. Similarly for incoherent or ill-posed or vague problems. So in a real sense Popper solved the problem of induction. Deutsch's statement catches all that nicely. What say you?]]>
Mon, 04 Dec 2017 02:53:43 +0000 http://curi.us/comments/show/9406 http://curi.us/comments/show/9406
Fallibilist Open Letter to Machine Intelligence Research Institute Sun, 03 Dec 2017 18:59:08 +0000 http://curi.us/comments/show/9405 http://curi.us/comments/show/9405 Dagny Open Letter to Machine Intelligence Research Institute
> Yes, I know Popper solved a major problem when he invented CR and the world did not take notice. Having an AGI sitting in your face is a different story though!

Rearden Metal couldn't accomplish this. AGI wouldn't work either. They have bad philosophy and no demonstration will fix that.

Besides, people routinely ignore the philosophical views of scientists who they accept cell phones and physics theories from.]]>
Sun, 03 Dec 2017 15:43:08 +0000 http://curi.us/comments/show/9404 http://curi.us/comments/show/9404
curi Open Letter to Machine Intelligence Research Institute
I think by "means" you mean "implies".

Anyway this is incorrect. We could be universal knowledge creators but unable to solve some problems. Some problems could be inherently unsolvable or solved by a means other than knowledge.

> Critical Rationalism says that an entity is either a universal knowledge creator or it is not.

Most Popper fans would deny this. It's DD's idea, not from Popper. Whether you want to count DD's additions as "CR" is up 2 u.

@dogs – they don't do anything that video game characters can't do in principle – traverse the world, use algorithms that store and retrieve information, etc

> it would be possible in principle to reprogram your dog’s brain so that it becomes a universal knowledge creator if you had the right knowledge.

this may confuse them. the right knowledge is unspecified. it's how to program an AGI *and* also how to reprogram dog brains (with nanobots or whatever).

> The reason that there are no partially universal knowledge creators is similar to the reason there are no partially universal computers. Universality is cheap. It is why washing machines have general purpose chips and dogs’ brains are universal computers. Making a partially universal device is much harder than making a fully universal one, so it is better just to make a universal one and program it.

also the method of C&R is general purpose and has no limits on what it would apply to.

> Even the hardware/memory advantage of AI is not much of an advantage, for human beings already augment their intelligence with devices such as pencil-and-paper and computers and we will continue to do so.

also ppl don't max out their built-in computational resources today. that isn't the bottleneck currently. so why would it be the bottleneck for AI?

> It will be able to become smarter through learning but only in the same way that humans are able to become smarter: by acquiring knowledge and, in particular, by acquiring knowledge about how to become smarter.

in particular, most of all, it will need philosophy. which LW neglects.

@parenting, ppl assume u can upload knowledge into AIs like in The Matrix. it's the bucket theory of knowledge reborn. but all u can do is upload files to their public dropbox for them to read, and they have to use guesses and criticism to understand the contents of those files. (unless u have direct access to memory in their mind and edit it, which is the equivalent of educating humans by editing their neurons, and about as good of an idea.)

> coercing and forcing them

i'd replace "coercing" with a description (e.g. "pressuring or making them do things they don't want to do") instead of using terminology they don't know and which will cause problems if anyone asks about it.

not yet formalized != false. it isn't a criticism of correctness. if they care about AI they should help develop and later formalize CR. otherwise they're just wasting their lives.

---

overall LW ppl will say they aren't persuaded, it sounds like a bunch of wild guesses to them, and then fail to study the matter (learn what they're talking about) or refute it. they will say it doesn't look promising enough to be worth their time and the world has lots of bad ideas they don't study.]]>
Sun, 03 Dec 2017 08:03:09 +0000 http://curi.us/comments/show/9403 http://curi.us/comments/show/9403
curi Open Letter to Machine Intelligence Research Institute
IMO there is no Bayesian/Inductivist epistemology and they don't even know what an epistemology is. here's some text i'm writing for a new website:

> **Epistemology** is the area of philosophy which deals with ideas and *effective thinking*. What is knowledge? How do you judge ideas? How do you learn new ideas? How do you improve your ideas? How do you create and evaluate critical arguments? How do you choose between ideas which disagree? Epistemology offers *methods* to help guide you when dealing with issues like these. Epistemology doesn’t directly tell you all the answers like whether to buy that iPhone upgrade or the right interpretation of quantum physics; instead, it tells you about how to figure out answers – how to think effectively. Epistemology is about teaching you to fish instead of handing you a fish. Except, it deals with thinking which is even more important than fish.

> Everyone *already has* an epistemology, whether they know it or not. Thinking is a big part of your life, and your thoughts aren’t random: you use methods with some structure, organization and reasoning. You already try to avoid errors and effectively seek the truth. If you consciously learn about epistemology, then you can discuss, analyze and improve your methods of thinking. Don’t be satisfied with a common sense epistemology that you picked up during childhood from your parents, teachers and culture. It’s worthwhile to do better than an unquestioned cultural default.

now consider if THAT is something LW actually has, or not. they don't use SI in their lives at all, and they never try to use induction when debating me. they have bits and pieces of epistemology – e.g. advice about being less biased – but they don't have any kind of organized system they use. just like non-philosophers they use common sense, bias, intuition, whatever they picked up from their culture... they are the kinda ppl Rand was talking about in *Philosophy: Who Needs It*.]]>
Sun, 03 Dec 2017 07:08:57 +0000 http://curi.us/comments/show/9402 http://curi.us/comments/show/9402
Fallibilist Open Letter to Machine Intelligence Research Institute Sun, 03 Dec 2017 02:01:19 +0000 http://curi.us/comments/show/9401 http://curi.us/comments/show/9401 Crits on Draft Post Fallibilist Open Letter to Machine Intelligence Research Institute
Title: The Critical Rationalist View on Artificial Intelligence


Critical Rationalism (CR) is being discussed on some threads here at Less Wrong ([here](http://lesswrong.com/r/discussion/lw/pk0/open_letter_to_miri_tons_of_interesting_discussion/#comments), [here](http://lesswrong.com/r/discussion/lw/pjc/less_wrong_lacks_representatives_and_paths_forward/), and [here](https://www.lesserwrong.com/posts/85mfawamKdxzzaPeK/any-good-criticism-of-karl-popper-s-epistemology)). It is something that Critical Rationalists such as myself think contributors to Less Wrong need to understand much better. It is not only a full-fledged rival epistemology to the Bayesian/Inductivist one but it also has important things to say about AI. This post is a summary of those ideas about AI and also of how they speak to the [Friendly AI problem](https://wiki.lesswrong.com/wiki/Friendly_artificial_intelligence). Some of the ideas may conflict with ideas you think are true, but understand that these ideas have been worked on by some of the smartest people on the planet, both now and in the past. They deserve careful consideration, not a drive past.

Critical Rationalism says that human beings are universal knowledge creators. This means there are no problems we cannot in principle solve. We can create the necessary knowledge. As [Karl Popper first realised](https://www.amazon.com/Conjectures-Refutations-Scientific-Knowledge-Routledge/dp/0415285941), the way we do this is by guessing ideas and by using criticism to find errors in our guesses. Our guesses may be wrong, in which case we try to make better guesses in the light of what we know from the criticisms so far. The criticisms themselves can be criticised and we can and do change those. All of this constitutes an evolutionary process: like biological evolution, it proceeds by variation and selection. This process is *fallible*: guaranteed certain knowledge is not possible because we can never know how an error might be exposed in the future. The best we can do is accept a guessed idea which has withstood all known criticisms. If we cannot find such, then we have a new problem situation about how to proceed and we try to solve that.[1]

Critical Rationalism says that an entity is either a universal knowledge creator or it is not; there is no such thing as a partially universal knowledge creator. So animals like dogs are not universal knowledge creators: they have no ability whatsoever to create knowledge. What they have are algorithms, pre-programmed by biological evolution, that can be, roughly speaking, parameter-tuned. These algorithms are sophisticated and clever and beyond what humans can currently program, but they do not confer any knowledge-creation ability. So your pet dog will not move beyond its repertoire of pre-programmed abilities and start writing posts to Less Wrong. Dogs' brains are universal computers, however, so it would be possible in principle, if you had the right knowledge, to reprogram your dog's brain so that it becomes a universal knowledge creator.
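
A toy model may help show what "parameter-tuned but not knowledge-creating" means. The class below is hypothetical (nothing in a real dog's brain is this simple): experience adjusts weights over a fixed repertoire, but no amount of tuning ever adds a new behaviour to the list.

```python
class PreProgrammedForager:
    """A 'dog-like' algorithm: its repertoire is fixed in advance and only
    the weights over that repertoire change with experience."""

    REPERTOIRE = ("sniff", "dig", "chase", "wait")  # fixed by "evolution"

    def __init__(self):
        self.weights = {b: 1.0 for b in self.REPERTOIRE}

    def act(self):
        # choose the currently highest-weighted pre-programmed behaviour
        return max(self.weights, key=self.weights.get)

    def reinforce(self, behaviour, reward):
        # tuning: shift weight within the fixed repertoire
        self.weights[behaviour] += reward

dog = PreProgrammedForager()
dog.reinforce("dig", 2.5)
print(dog.act())  # "dig": tuned, but still drawn from the fixed menu
# No sequence of reinforce() calls produces a behaviour outside REPERTOIRE,
# which is the sense in which tuning is not knowledge creation.
```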

The reason that there are no partially universal knowledge creators is similar to the reason there are no partially universal computers. Universality is cheap. It is why washing machines have general-purpose chips and dogs' brains are universal computers. Making a partially universal device is much harder than making a fully universal one, so it is better just to make a universal one and program it.
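
"Universality is cheap" can be illustrated with a hypothetical washing-machine sketch: one general-purpose interpreter plus different programs replaces a family of bespoke devices. The operation names here are made up for the example.

```python
# One universal device...
OPS = {
    "fill":  lambda s, litres: {**s, "water": s["water"] + litres},
    "heat":  lambda s, temp:   {**s, "temp": temp},
    "spin":  lambda s, rpm:    {**s, "rpm": rpm},
    "drain": lambda s, _:      {**s, "water": 0},
}

def run(program, state):
    """A tiny general-purpose interpreter: behaviour comes from software,
    not from building a new special-purpose machine each time."""
    for op, arg in program:
        state = OPS[op](state, arg)
    return state

# ...many behaviours, each just a program:
delicates = [("fill", 20), ("heat", 30), ("spin", 400), ("drain", None)]
cottons   = [("fill", 25), ("heat", 60), ("spin", 1400), ("drain", None)]

state = {"water": 0, "temp": 20, "rpm": 0}
print(run(delicates, state))
print(run(cottons, state))
```

Building a chip that could run only the delicates cycle would take more design effort than taking the general-purpose chip and writing the cycle as a program, which is the economic point behind "universality is cheap".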

These ideas imply that AI is an all-or-nothing proposition. It will not come about by degrees, with a progression of entities that can solve an ever-widening repertoire of problems. There will be no climb up such a slope. Instead, it will happen as a jump: a jump to universality. This is in fact how intelligence arose in humans. Some change (it may have been a small change) crossed a boundary, and our ancestors went from having no ability to create knowledge to a fully universal ability. This kind of jump to universality happens in other systems too. David Deutsch discusses examples in his book [The Beginning of Infinity](https://www.amazon.com/Beginning-Infinity-Explanations-Transform-World/dp/0143121359).[2]
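
One of Deutsch's examples is number systems, and it is easy to sketch: a numeral scheme based on a fixed lookup table has a hard ceiling, while the small rule change of letting a digit's value depend on its position makes the same ten symbols reach every natural number. The code below is a paraphrase of that example, not Deutsch's own.

```python
DIGITS = "0123456789"

# Pre-jump: a lookup-table numeral system. However far you extend the
# table, it names only finitely many numbers.
NAMES = {0: "zero", 1: "one", 2: "two", 3: "three"}

def lookup_name(n):
    return NAMES[n]  # KeyError past the ceiling: not universal

# Post-jump: one small rule change (a digit's value depends on its
# position) and the same ten symbols cover *every* natural number.
def positional(n, base=10):
    if n == 0:
        return DIGITS[0]
    out = []
    while n:
        n, r = divmod(n, base)
        out.append(DIGITS[r])
    return "".join(reversed(out))

print(positional(7_000_000_007))  # no ceiling, no new symbols needed
```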

People will point to systems like [AlphaGo](https://research.googleblog.com/2016/01/alphago-mastering-ancient-game-of-go.html), the Go-playing program, and claim it is a counter-example to the jump-to-universality idea. They will say that AlphaGo is a step on a continuum that leads to human-level intelligence and beyond. But it is not. Like the algorithms in a dog's brain, AlphaGo is a remarkable algorithm, but it cannot learn how to ride a bicycle or post to Less Wrong. It is not a step on a continuum. And, also like the dog's brain, the knowledge it contains was put there by something else: for the dog it was evolution; for AlphaGo it was its programmers. They expended the creativity.

As human beings are already universal knowledge creators, no AI can exist at a higher level. AIs may have better hardware and more memory, but they will not have better knowledge-creation potential than us. Even the hardware/memory advantage is not much of an advantage, for human beings already augment their intelligence with devices such as pencil and paper and computers, and we will continue to do so.

Critical Rationalism, then, says an AI cannot [recursively self-improve](https://wiki.lesswrong.com/wiki/Recursive_self-improvement) so that it acquires knowledge-creation potential beyond what human beings already have. It will be able to become smarter through learning, but only in the same way that humans are able to become smarter: by acquiring knowledge and, in particular, by acquiring knowledge about how to become smarter. This can only happen through the creative process of guessing and error-correction by criticism, for that is the only known way intelligences can create knowledge.

It might be argued that AIs will become smarter much faster than we can because they will have much faster hardware. In regard to knowledge creation, however, there is no direct connection between the speed of knowledge creation and the speed of the underlying hardware. How fast you can create knowledge depends on things like what other knowledge you have, and some ideas may be blocking other ideas. You might have a problem with static memes (see [The Beginning of Infinity](https://www.amazon.com/Beginning-Infinity-Explanations-Transform-World/dp/0143121359)), for example. AIs will be susceptible to these, too, because memes are ideas evolved to replicate via minds.

One implication of the above view is that AIs will need parenting, just as we must parent our children. CR has a parenting theory called [Taking Children Seriously](http://fallibleideas.com/taking-children-seriously) (TCS). It should not be surprising that CR has such a theory, for CR is, after all, about learning and how we acquire knowledge. Unfortunately, TCS is not itself taken seriously by most people who first hear about it, because it conflicts with a lot of conventional wisdom about parenting. Nevertheless, it is important, and it is especially important for those who wish to raise an AI.

One idea TCS has is that we must not thwart our children's rationality, for example by coercing them and forcing them to do things they do not want to do. This is damaging to their intellectual development and can lead to them disrespecting rationality. We must persuade using reason, and this implies being prepared for the possibility that we are wrong about whatever matter is in question. Common parenting practices today are far from optimally rational and are damaging to children's rationality.

AIs will face the same problem of bad parenting practices, and this will also harm their intellectual development. So AI researchers should be thinking right now about how to prevent this. They need to learn how to parent their AIs well, for if they do not, AIs will be beset by the same problems our children currently face. CR says we already have the solution: TCS. CR and TCS are in fact *necessary* to do AI in the first place. Some reading this will object because CR and TCS are not formal enough: there is not enough maths for CRists to have a true understanding! The CR reply to this is that it is too early for formalisation. CR says you should not have a bias about formalisation: there is high-quality knowledge in the world that we do not know how to formalise, but it is high-quality knowledge nonetheless. For AI, we need to understand the epistemology at a deeper level first. So progress towards AI will not come from premature mathematical formalisation, or from trying to code something right now; it will come from a better understanding of epistemology.

Let's see how all this ties in with the Friendly-AI problem. I have explained how AIs will learn as we do (through guessing and criticism) and how they will have no more than the universal knowledge-creation potential we already have. They will be fallible, like us. They will make mistakes. They will be subjected to bad parenting. They will inherit their culture from ours, for it is in our culture that they must begin their lives, and they will acquire all the memes our culture has. They will have the same capacity for good and evil that we do. It follows from all of this that they would be no more of a threat than evil humans currently are. But we can make their lives better by following things like TCS.

Human beings must respect the right of AIs to life, liberty, and the pursuit of happiness. It is the only way. If we do otherwise, then we risk war and destruction, and we severely compromise our own rationality and theirs. Similarly, they must respect our right to the same.


[1]: For more detail on how this works see Elliot Temple's [yes-or-no philosophy](http://fallibleideas.com/yes-or-no-philosophy).

[2]: The jump-to-universality idea is an original idea of David Deutsch’s.]]>
Sun, 03 Dec 2017 01:46:57 +0000 http://curi.us/comments/show/9400 http://curi.us/comments/show/9400
Fallibilist Open Letter to Machine Intelligence Research Institute Sat, 02 Dec 2017 22:05:32 +0000 http://curi.us/comments/show/9399 http://curi.us/comments/show/9399 think PL is ever coming back? Anonymous Open Letter to Machine Intelligence Research Institute
> > also other requests are welcome. e.g. if you want me to write shorter posts, i can do that.

> There's way too much text to easily address, but I think that's fine, the discussion can be slow. (IE, better a slow but in-depth discussion.)

PL hasn't posted for like a week now, and quitting right after saying you won't quit (and while refusing any steps to help you not quit) is pretty typical.

i don't think Marc Geddes is coming back. he was a hostile fool though.]]>
Sat, 02 Dec 2017 16:02:44 +0000 http://curi.us/comments/show/9398 http://curi.us/comments/show/9398
And Another LW Comment Fallibilist Open Letter to Machine Intelligence Research Institute Fri, 01 Dec 2017 15:39:50 +0000 http://curi.us/comments/show/9397 http://curi.us/comments/show/9397 curi Open Letter to Machine Intelligence Research Institute
sample:

How would [1000 great FI philosophers] transform the world? Well, consider the influence Ayn Rand had. Now imagine 1000 people who all surpass her (due to the advantages of getting to learn from her books and also getting to talk with and help each other), all doing their own thing at the same time. Each would be promoting the same core ideas. What force in our current culture could stand up to that? What could stop them?

Concretely, some would quickly be rich or famous, be able to contact anyone important, run presidential campaigns, run think tanks, dominate any areas of intellectual discourse they care to, etc. (Trump only won because his campaign was run, to a partial extent, by lesser philosophers like Coulter, Miller and Bannon. They may stand out today, but they have nothing on a real philosopher like Ayn Rand. They don't even claim to be philosophers. And yet it was still enough to determine the US presidency. What more do you want as a demonstration of the power of ideas than Trump's Mexican rapists line, learned from Coulter's book? Science? We have that too! And a good philosopher can go into whatever scientific field he wants and identify and fix massive errors currently being made due to the wrong methods of thinking. Even a mediocre philosopher like Aubrey de Grey managed to do something like that.)

They could discuss whatever problems came up to stop them. The quality of discussion among 1000 great thinkers would far surpass any discussion that has ever existed, and so it would be highly effective compared to anything you have experience with.

As the earliest adopters catch on, the next earliest will, and so on, until even you learn about it, and then one day even Susie Soccer Mom.

Have you read Atlas Shrugged? It's a book in which a philosophy teacher and his 3 star students change the world.

Look at people like Jordan Peterson or Eliezer Yudkowsky and then try to imagine someone with ~100x better ideas and how much more effective that would be.]]>
Thu, 30 Nov 2017 20:54:37 +0000 http://curi.us/comments/show/9396 http://curi.us/comments/show/9396
another LW comment curi Open Letter to Machine Intelligence Research Institute
>> Deduction isn't an epistemology (it's a component)

> Yes, I was incorrect. Induction, deduction, and something else (what?) are components of the epistemology used by inductivists.

FYI that's what "abduction" means – whatever is needed to fill in the gaps that induction and deduction don't cover. it's rather vague and poorly specified though. it's supposed to be some sort of inference to good explanations (mirroring induction's inference to generalizations of data), but it's unclear how you do it. you may be interested in reading about it.

in practice, abduction or not, what they do is use common sense, philosophical tradition, intuition, whatever they picked up from their culture, and bias instead of actually having a well-specified epistemology.

(Objectivism is notable b/c it actually has a lot of epistemology content instead of just people thinking they can recognize good arguments when they see them without needing to work out systematic intellectual methods relating to first principles. However, Rand assumed induction worked, and didn't study it or talk about it much, so that part of her epistemology needs to be replaced with CR which, happily, accomplishes all the same things she wanted induction to accomplish, so this replacement isn't problematic. LW, to its credit, also has a fair amount of epistemology material – e.g. various stuff about reason and bias – some of which is good. However LW hasn't systematized things to philosophical first principles b/c it has a kinda anti-philosophy pro-math attitude, so philosophically they basically start in the middle and have some unquestioned premises which lead to some errors.)]]>
Thu, 30 Nov 2017 14:36:23 +0000 http://curi.us/comments/show/9395 http://curi.us/comments/show/9395
addition to previous comment curi Open Letter to Machine Intelligence Research Institute Thu, 30 Nov 2017 14:03:53 +0000 http://curi.us/comments/show/9394 http://curi.us/comments/show/9394