
Open Letter to Machine Intelligence Research Institute

I emailed this to some MIRI people and others related to Less Wrong.


I believe I know some important things you don't, such as that induction is impossible, and that your approach to AGI is incorrect due to epistemological issues which were explained decades ago by Karl Popper. How do you propose to resolve that, if at all?

I think methodology for how to handle disagreements comes prior to the content of the disagreements. I have writing about my proposed methodology, Paths Forward, and about how Less Wrong doesn't work because of the lack of Paths Forward:

http://curi.us/1898-paths-forward-short-summary

http://curi.us/2064-less-wrong-lacks-representatives-and-paths-forward

Can anyone tell me that I'm mistaken about any of this? Do you have a criticism of Paths Forward? Will any of you take responsibility for doing Paths Forward?

Have any of you written a serious answer to Karl Popper (the philosopher who refuted induction – http://fallibleideas.com/books#popper )? That's important to address, not ignore, since if he's correct then lots of your research approaches are mistakes.

In general, if someone knows a mistake you're making, what are the mechanisms for telling you and having someone take responsibility for addressing the matter well and addressing followup points? Or if someone has comments/questions/criticism, what are the mechanisms available for getting those addressed? Preferably this should be done in public with permalinks at a venue which supports nested quoting. And whatever your answer to this, is it written down in public somewhere?

Do you have public writing detailing your ideas which anyone is taking responsibility for the correctness of? People at Less Wrong often say "read the sequences" but none of them take responsibility for addressing issues with the sequences, including answering questions or publishing fixes if there are problems. Nor do they want to address existing writing (e.g. by David Deutsch – http://fallibleideas.com/books#deutsch ) which contains arguments refuting major aspects of the sequences.

Your forum ( https://agentfoundations.org ) says it's topic-limited to AGI math, so it's not appropriate for discussing criticism of the philosophical assumptions behind your approach (which, if correct, imply the AGI math you're doing is a mistake). And it states ( https://agentfoundations.org/how-to-contribute ):

> It’s important for us to keep the forum focused, though; there are other good places to talk about subjects that are more indirectly related to MIRI’s research, and the moderators here may close down discussions on subjects that aren’t a good fit for this forum.

But you do not link those other good places. Can you tell me any Paths-Forward-compatible other places to use, particularly ones where discussion could reasonably result in MIRI changing?

If you disagree with Paths Forward, will you say why? And do you have some alternative approach written in public?

Also, more broadly, whether you will address these issues or not, do you know of anyone that will?

If the answers to these matters are basically "no", then if you're mistaken, won't you stay that way, despite some better ideas being known and people being willing to tell you?

The (Popperian) Fallible Ideas philosophy community ( http://fallibleideas.com ) is set up to facilitate Paths Forward (here is our forum which does this http://fallibleideas.com/discussion-info ), and has knowledge of epistemology which implies you're making big mistakes. We address all known criticisms of our positions (which is achievable without using too much resources like time and attention, as Paths Forward explains); do you?


Elliot Temple on November 9, 2017

Comments (28)

I find the idea interesting.

In some sense, it seems like it would be nice for organizations to be as transparently accountable as possible. For example, in many cases, the government is contradictory in its behavior -- laws cannot be easily interpreted as arising from a unified purpose. It would be nice if there were a way to force the government either to produce an explanation for the seeming contradiction or to change. This would be really difficult for a large organization, especially one which is not run by a single individual; but, in some sense it does seem desirable.

On the other hand, this kind of accountability seems potentially very bad -- not only on an organizational level, but even on the level of individuals, whom we can, in theory, reasonably expect to provide justifications for their actions.

The ability to force someone to give a justification in response to a criticism, or otherwise change their behavior, is the ability to bully someone. It is very appropriate in certain contexts. For example, it is important to be able to justify oneself to funders. It is important to be able to justify oneself to strategic allies. And so on.

However, even then, it is important not to be beholden to anyone in a way which warps your own standards of evidence, belief, and the good -- or, not too much. Every party you need to justify yourself to adds constraints of understandability to your actions, meaning that eventually you need to be justifiable to the lowest common denominator. MIRI is a special place in that its staff are more free to go after what it thinks is the right direction as compared with an academic department.

Nonetheless, I say the idea is interesting, because it seems like transparently accountable organizations would be a powerful thing if it could be done in the right way. I am reminded of prediction markets. An organization run by open prediction markets (like a futarchy) is in some sense very accountable, because if you think it is doing things for wrong reasons, you can simply make a bet. However, it is not very transparent: no reasons need to be given for beliefs. You just place bets based on how you think things will turn out.

I am not suggesting that the best version of Paths Forward is an open prediction market. Prediction markets still have a potentially big problem, in that someone with a lot of money could come in and manipulate the market. So, even if you run an organization with the help of a prediction market, you may want to do it with a closed prediction market. However, prediction markets do seem to move toward the ideal situation where organizations can be run on the best information available, and all criticisms can be properly integrated. Although it is in some ways bad that a bet doesn't come with reasons, it is good that it doesn't require any arguments -- there's less overhead.
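
To make the market-maker mechanics concrete, here is a minimal sketch of Hanson's logarithmic market scoring rule, the standard automated market maker behind many prediction markets (the liquidity parameter, outcome labels, and numbers are purely illustrative, not anything MIRI or Paths Forward specifies):

```python
import math

def lmsr_cost(shares, b=100.0):
    """Cost function for Hanson's logarithmic market scoring rule (LMSR).
    shares: outstanding shares per outcome; b: liquidity parameter."""
    return b * math.log(sum(math.exp(q / b) for q in shares))

def lmsr_price(shares, i, b=100.0):
    """Current market probability of outcome i."""
    total = sum(math.exp(q / b) for q in shares)
    return math.exp(shares[i] / b) / total

def cost_to_buy(shares, i, amount, b=100.0):
    """Price a bettor pays to buy `amount` shares of outcome i,
    plus the market probability after the trade."""
    new_shares = list(shares)
    new_shares[i] += amount
    paid = lmsr_cost(new_shares, b) - lmsr_cost(shares, b)
    return paid, lmsr_price(new_shares, i, b)

# Example: a two-outcome market ("org's claim holds" / "doesn't") starting at 50/50.
shares = [0.0, 0.0]
paid, prob = cost_to_buy(shares, 0, 50.0)
print(f"bettor pays {paid:.2f} to move P(claim holds) from 0.50 to {prob:.2f}")
```

The point is just that a single bet both moves the published probability and costs the bettor something, which is where the accountability-without-arguments comes from.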

I may be projecting, but the tone of your letter seems desperate to me. It sounds as if you are trying to force a response from MIRI. It sounds as if you want MIRI (and LessWrong, in your other post) to act like a single person so that you can argue it down. In your philosophy, it is not OK for a collection of individuals to each pursue the directions which they see as most promising, taking partial but not total inspiration from the sequences, and not needing to be held accountable to anyone for precisely which list of things they do and do not believe.

So, I state as my concrete objection that this is an OK situation. There is something like an optimal level of accountability, beyond which creative thinking gets squashed. I agree that building up a canon of knowledge is a good project, and I even agree that having a system in place to correct the canon (to a greater degree than exists for the sequences) would be good. Arbital tried to build something like that, but hasn't succeeded. However, I disagree that a response to all critics should be required. Something like the foom debate, where a specific critic who is seen to be providing high-quality critiques is engaged at a deep level, seems appropriate. Other than that, more blunt instruments such as a FAQ, a technical agenda, a mission statement, etc., which deal with questions as appropriate to a given situation, seem fine.

I would change my mind on this if I thought it was feasible to map out and address all arguments on a subject (as Arbital dreamed), **and** if I thought that such a system wasn't likely to turn on participants (as public discourse often does) by turning arguments into demands. You want to make sure you don't punish an organization for trying to be accountable.

PL at 3:00 AM on November 11, 2017 | #9239
There's no force involved here. It's just some comments on how reason works. Paths Forward is no more forceful than these suggestions (which I mostly agree with) that you should do these 12 things or you're not doing reason correctly: http://yudkowsky.net/rational/virtues/

PF explains how it's bad to stay wrong when better ideas are already known and people are willing to tell/help you. It talks about error correction and fallibilism. And it says how to implement this stuff in life to avoid the bad things.

People who don't want to do it ought to have some alternative which deals with issues like fallibility and correcting errors. How will they become Less Wrong?

The typical answer is: they have a mix of views they haven't really tried to systemize or write down. So their answer to how to correct error is itself not being exposed to critical scrutiny.

And what happens then? Bias, bias and more bias. What's to stop it?

Bias is a fucking hard problem and it takes a lot – like Paths Forward or something else effortful and serious – to do much to deal with bias.

MIRI doesn't do Paths Forward and *also has no serious alternative that they do instead*. So errors don't get corrected very well.

The practical consequence is: MIRI is betting the bulk of their efforts on Popper being wrong, but have not bothered to write any serious reply to Popper explaining why they are willing to bet so much on his ideas being mistaken and saying what's wrong with his ideas. MIRI should be begging for anyone to tell them something they're missing about Popper, if anyone knows it, to help mitigate the huge risk they are taking.

But MIRI doesn't want to think about that risk and acknowledge its meaning. That's a big deal even if you don't think they should address *all* criticisms (which they should via methods like criticisms of categories of bad ideas – and if you run into a "bad" idea that none of your existing knowledge can criticize, then you don't know it's bad!)

> Every party you need to justify yourself to adds constraints of understandability to your actions, meaning that eventually you need to be justifiable to the lowest common denominator. MIRI is a special place in that its staff are more free to go after what it thinks is the right direction as compared with an academic department.

This is incorrect. You can simply tell lay people that they need to learn some background knowledge to deal with various issues. You can then direct them to e.g. some reading recommendations and a discussion forum for learners. Then there's a path forward: they can learn the necessary expertise and then read your difficult material and then comment. Most people won't, and that's fine.

There are two important points here for rationality:

1) if someone *does* read your not-dumb-down-at-all material and point out a mistake, you don't just ignore them b/c of e.g. their lack of official credentials. you don't just say "i think you're a layman" whenever someone disagrees with you. you don't gate your willingness to deal with criticism on non-truth-seeking things like having a PhD or being popular.

2) it's possible that you're mistaken about the background knowledge required to understand a particular thing, and that can itself be discussed. so there's still no need to dumb anything down, but even if someone agrees they don't have a particular piece of background knowledge which you thought was relevant, it's still possible for them to make a correct point which you shouldn't reject out of hand.

> I would change my mind on this if I thought it was feasible to map out and address all arguments on a subject

I explain how to do this. You don't quote me and point out where I made a mistake. If you want more explanation we have reading recommendations, educational material, a discussion forum, etc.

> if I thought that such a system wasn't likely to turn on participants (as public discourse often does) by turning arguments into demands.

if "demands" are bad, write a criticism of them (or of a sub-category of them) and then reject all the criticized stuff by reference to the criticism. (i agree that *some* demands are bad. it depends on more precision.) if you gave a more clear example of how an organization would be "punished" for setting up mechanisms of error correction, rather than sticking to disorganized haphazard bias, perhaps we could discuss how to handle that situation and also what the alternatives are and whether they're better. (what are *you* advocating instead of Paths Forward? is it written down, substantive, good at dealing with bias, etc?)

curi at 11:55 AM on November 11, 2017 | #9242
> > Every party you need to justify yourself to adds constraints of understandability to your actions, meaning that eventually you need to be justifiable to the lowest common denominator. MIRI is a special place in that its staff are more free to go after what it thinks is the right direction as compared with an academic department.

>This is incorrect. You can simply tell lay people that they need to learn some background knowledge to deal with various issues. You can then direct them to e.g. some reading recommendations and a discussion forum for learners.

This doesn't seem right to me. For example, take the recent controversy in which someone lost their job for admitting what they believe about feminism or racism. (All the significant details have slipped my mind.) Of course this isn't exactly the concern with MIRI. However, it's relevant to the general argument for/against Paths Forward. You can make important decisions based on carefully considered positions which you wouldn't want to state publicly. For an extended argument that there are important ideas in this category, see Paul Graham's essay "What You Can't Say":

http://www.paulgraham.com/say.html

It does seem to me that if you commit to being able to explain yourself to a particular audience, you become biased toward ideas which are palatable to that audience.

> Then there's a path forward: they can learn the necessary expertise and then read your difficult material and then comment. Most people won't, and that's fine.

I don't see how that's fine in your system. If you are offering the Paths Forward material as your criticism to MIRI, then according to Paths Forward methodology, MIRI needs to understand Paths Forward. Similarly, the hypothetical layperson who criticizes you in a way which makes you point to some expert knowledge they'd have to learn isn't following Paths Forward if they walk away, right? They still have their broken view, which you claim to have a solid criticism of.

This is part of what makes me skeptical. It seems like the policy of responding to all criticism requires one to read all criticism. Reading criticism is useful, but must be prioritized like all other things, which means that you have to make an estimate of how useful it will be to engage a particular critic before deciding whether to engage.

> if "demands" are bad, write a criticism of them (or of a sub-category of them) and then reject all the criticized stuff by reference to the criticism.

I meant the sort of demand-created-by-social-consensus-and-controversy mentioned above, rather than the kind which you can respond rationally to.

> (what are *you* advocating instead of Paths Forward? is it written down, substantive, good at dealing with bias, etc?)

I don't have any substantive solution to the problem of group epistemology, which is why the approach you advocate intrigues me. However, there are some common norms in the LessWrong community which serve a similar purpose:

1) If you disagree, either reach an agreement through discussion or make a bet. This holds you accountable for your beliefs, makes you more likely to remember your mistakes, aims thinking toward empirically testable claims, and is a tax on BS. Betting one-on-one does not create the kind of public accountability you advocate.
I mentioned betting markets earlier. Although I think there are some problems with betting markets, they do seem to me like progress in this direction. I would certainly advocate for their more widespread use.

2) The double crux method:

http://lesswrong.com/lw/o6p/double_crux_a_strategy_for_resolving_disagreement/

PL at 3:13 AM on November 12, 2017 | #9243
> This doesn't seem right to me.

I think you mean: that's correct as far as it goes, regarding being able to write expert level material instead of making everything lowest-common-denominator accessible. But it doesn't speak to some other issues like taboos. If you wanna link that PG essay and end a conversation, go ahead – and then there is a Path Forward b/c someone can refute the PG essay (if they know how).

I'm not expecting perfection. You could be lying and dealing with something non-taboo and just say it's taboo. Whatever. People will do shitty stuff. And some won't. I'm proposing a methodology to help the people who want to be rational. It also does a good job of catching a lot of irrational people and pointing out what they're doing wrong - a few of whom may appreciate that and reconsider some things.

If you don't want to talk about a taboo issue, you can say that, and have that position itself be open to criticism (the criticism will be difficult because you don't give much to the critic to use – but unless he proposes a solution to that problem, so be it). Similarly, the military doesn't talk about lots of things, and I have no objection to that.

More broadly, you're welcome to raise problems with doing stuff. "I would like to facilitate error correction in that way if it were unproblematic, but..." The problems themselves should be open to error correction and solutions. You need something at some level which is open to criticism. Or else say you aren't a public intellectual and don't make public claims about ideas.

You can also say you aren't interested in something, or you don't think it's worth your time. PF doesn't eliminate filters, it exposes them to criticism. At the very least you can publicly say "I'm doing a thing I have reasons I don't think I can talk about." And *that* can be exposed to criticism. E.g. you can direct them to essays covering how to talk about broad categories of things and ask if the essays are relevant, or ask a few questions. You may find out that e.g. they don't want to talk about the boundaries of what they will and won't talk about, b/c they think that'd reveal too much. OK. Fair. I don't have a criticism of that at this time. But someone potentially might. It can be left at that, for now, and maybe someone else will figure out a way forward at some point. Or not. At least, if someone knows the way forward, they aren't being blocked from saying so. Their way forward does have to address the meta issues (like privacy, secrecy, taboo, etc) not just the object issue.

---

If people would just **state their filters** it'd do so much. People filter on "I think he's dumb" all the time. And on "he is unpopular" and on "he doesn't have a PhD". And they do this *inconsistently*. They filter Joe cuz no PhD, but then talk to Bob a ton who also doesn't have a PhD.

What's going on? Bias and lack of accountability. Often they don't even consciously know what filters they are using or why. This is really bad. This is what LW and MIRI and most people are like. They have unstated, unaccountable gating around error correction, which allows for tons of bias. And they don't care to do anything about that.

You want to filter on "convince my assistant you have a good point that my assistant can't answer, and then i will read it"? Say so. You want to let some people skip that filter while others have to go through it? Say the criteria for this.

People constantly use criteria related to social status, prestige, etc, and lie about it to themselves (let alone others), and they do it really inconsistently with tons of bias. This is sooooooo bad and really ruins Paths Forward. Half-assed PF would be a massive improvement.

I'm not trying to get rid of gating/filters/etc. Just state them and prefer objective ones and have the filters themselves open to criticism. If you're too busy to deal with something, just say so. If your critics have no solution to that, then so be it. But if they say "well i have this Paths Forward methodology which actually talks about how to deal with that problem well" then, well, why aren't you interested in a solution to your problem? and people are certainly not being flooded with solutions to this problem, and they don't already have one either.

the problems are bias, dishonesty, irrationality, etc, as usual, and people's desire to hide those things.

yeah some people have legit reasons to hide stuff. but most people – especially public intellectuals – could do a lot more PF without much trouble, if they wanted to.

btw Ayn Rand, Richard Feynman, David Deutsch, Thomas Szasz and others of the best and brightest answered lots of letters from random strangers. they did a ton to be accessible.

---

No one on LW or from MIRI offered me any bets or brought up using double cruxes to resolve any disagreements.

Those things are not equivalent to Paths Forward b/c they aren't methodologies. They're individual techniques. But there's no clear specifications about when and how to use what techniques to avoid getting stuck.

I don't know what bets they could/should have offered. Double crux seems more relevant and usable for philosophy issues, but no one showed interest in using it.

The closest thing to a bet was someone wanted money to read Paths Forward, supposedly b/c they didn't have resources to allocate to reading it in the first place. I agreed to pay them the amount they specified without negotiating price, b/c it was low. They admitted they predicted that asking for a tiny amount of money would get me to say "no" instead of "yes". But neither they nor anyone else learned from their mistake. They were surprised that I would put money where my mouth is, surprised I have a budget, surprised I'm not poor, etc, but learned nothing. Also they were dishonest with me and then backed out of reading Paths Forward even though I'd offered the payment (which they said matched what they are paid at work). So apparently they'd rather *go to work* (some $15/hr job so presumably nothing too wonderful) than read philosophy. I thought that was revealing.

Someone else tried to use Paths Forward to control and pressure me. They didn't get far at all. After they said some nonsense, and I replied briefly, they wrote more non sequiturs. They wanted to use PF to make me talk with them an unlimited amount – I don't know if they were trying to prove PF is impractical or just an idiot, I guess a mix. They started complaining basically that I had to answer everything they said or I'm not doing PF – and they seemed to think (contrary to PF) that that meant answering all the details and pointing out every mistake, rather than just one mistake. I said if they had a serious criticism they could write it somewhere public with a permalink. That was enough of a barrier to entry that they gave up (despite the fact that the person already has a blog with a lot of public posts with permalinks). Such things are typical. Very low standards – which are themselves valuable/productive and can easily be *objectively* judged – will filter out most critics.

---

> Similarly, the hypothetical layperson who criticizes you in a way which makes you point to some expert knowledge they'd have to learn isn't following Paths Forward if they walk away, right? They still have their broken view, which you claim to have a solid criticism of.

If the layperson says "oh, you have to learn all that? well i'd rather do this other thing instead. i think it fits me better." that is OK. i don't have a criticism of that. MIRI doesn't have a criticism of that. MIRI and I don't think everyone should become an expert on our stuff. Not everyone has to learn our fields and discuss with us. No problem. (well i do think a lot more people should take a lot more interest in philosophy and being better parents and stuff. so in some cases i might have a criticism, which wouldn't be a personal matter, but would be the kind of stuff i've already written generic/impersonal public essays about.)

curi at 8:54 AM on November 12, 2017 | #9244
MIRI puts a meaningful amount of their effort into scaring the public into thinking AGI is dangerous – b/c they think it’s dangerous. They frame this as: AGI is in fact dangerous and they’re working to make it safer. But the practical effect is they are spreading the (false!) opinion that it’s dangerous and therefore shooting themselves (and the whole field) in the foot.

Anonymous at 7:17 PM on November 12, 2017 | #9247
> Someone else tried to use Paths Forward to control and pressure me. They didn't get far at all. After they said some nonsense, and I replied briefly, they wrote more non sequiturs. They wanted to use PF to make me talk with them an unlimited amount – I don't know if they were trying to prove PF is impractical or just an idiot, I guess a mix.

To some extent I'm doing the same kind of testing the waters, seeing how much you stick to your guns and provide substantive responses to issues raised. Overall the effect has been to raise my plausibility estimate of PF, although I think you may have exceptionally much time to respond to things as compared to other people (or maybe you're assigning my remarks high importance?).

> I'm not expecting perfection. You could be lying and dealing with something non-taboo and just say it's taboo. Whatever. People will do shitty stuff. And some won't. I'm proposing a methodology to help the people who want to be rational. It also does a good job of catching a lot of irrational people and pointing out what they're doing wrong - a few of whom may appreciate that and reconsider some things.

> If you don't want to talk about a taboo issue, you can say that, and have that position itself be open to criticism (the criticism will be difficult because you don't give much to the critic to use – but unless he proposes a solution to that problem, so be it). Similarly, the military doesn't talk about lots of things, and I have no objection to that.

To summarize the state of the discussion so far, as I see it:

- I suggested that there is something like an optimal level of accountability, beyond which it will stifle one's ability to come up with new ideas freely and act freely. I said that I'd change my mind on this if I thought it was possible to map all the arguments and if I thought such a system wouldn't end up creating gotchas for those who used it by exposing them to scrutiny.

- You responded that your literature provides ways to handle all the arguments without much burden, and that there are no gotchas because you can always tell people why their demands which they try to impose on you are wrong.

- I didn't read about the system for handling all the arguments easily yet. I didn't find the "there are no gotchas" argument very compelling. I had a concern about how being publicly accountable for your ideas can increase the risk of criticism of the more forceful kind, where violation of taboo or otherwise creating a less-than-palatable public face can have a lot of negative strategic implications; and how it is therefore necessary to craft a public image in a more traditional way rather than via PF, even if, internal to the organization, you try to be more rational.

- You responded by saying that of course there will be some things which you have reason not to say, but you can at least explain that it doesn't make sense to answer questions of a certain sort.

I think this addresses my concern to a significant degree. You create a sort of top level of the system at which PF can be followed, so that in principle a criticism could result in the hidden parts being opened up at some future time. I suspect we have remaining disagreements about the extent of things which it will make sense to open up to critique in this kind of setup. Maintaining any hidden information requires maintaining some plausible deniability about where the secret may be, since knowing exactly which questions are answerable and not answerable tells you most of a secret. And so it follows that if you want to maintain the ability for taboo thoughts to guide significant strategy decisions, you must maintain some plausible deniability about what is guiding strategy decisions at all times. This strategy itself may be something you need to obfuscate to some degree, because confessing that you have something to hide can be problematic in itself... well, hopefully the intellectual environment you exist in isn't **that** bad. If it is, then PF really does seem inadvisable.

> If people would just **state their filters** it'd do so much. People filter on "I think he's dumb" all the time. And on "he is unpopular" and on "he doesn't have a PhD". And they do this *inconsistently*. They filter Joe cuz no PhD, but then talk to Bob a ton who also doesn't have a PhD.

Hm. Stating what your filters are seems like the sort of thing you might not want to do, partly because the filters will often be taboo themselves, but mainly because stating what criteria you use to filter gives people a lot of leverage to hack your filter. Checking for an academic email address might be an effective private filter, but if made public, could be easily satisfied by anyone with a school email address that is still active.

As mentioned in part of Yudkowsky's recent book, the problem of signaling that you have a concern which is worthy of attention is a lemons market (https://www.lesserwrong.com/posts/x5ASTMPKPowLKpLpZ/moloch-s-toolbox-1-2, search page for "lemons market"). It's an asymmetric information problem. The person with the meaningful concern can't meaningfully differentiate themselves from others by loudly saying "This one really **is** important!" because everyone can say that. Private tests make sense in a lemons market, because you can often get some information with fallible heuristics which would stop being as good if you made them public.
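
For readers who haven't seen the term: "lemons market" refers to Akerlof's asymmetric-information model, where buyers can't tell quality apart, the price reflects only average quality, and the good sellers drop out. A minimal sketch of that unraveling, with made-up numbers chosen purely for illustration:

```python
# Akerlof-style "market for lemons": sellers know quality, buyers only see the average.
# Quality is uniform on [0, 1]; a buyer values quality q at 1.5 * q.
# At price p only sellers with q <= p will sell, so the average quality offered is p / 2,
# and buyers will then only pay 1.5 * (p / 2) = 0.75 * p -- the market unravels.

p = 1.0
for step in range(1, 9):
    avg_quality = p / 2        # quality of the goods actually offered at price p
    p = 1.5 * avg_quality      # what buyers are willing to pay for that average
    print(f"round {step}: price falls to {p:.3f}")
```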

Granted, I see the upside of filter criteria which anyone sufficiently motivated can meet. I agree that in many respects, a widespread standard of this kind of PF in public figures and institutions would be a big improvement.

> > Similarly, the hypothetical layperson who criticizes you in a way which makes you point to some expert knowledge they'd have to learn isn't following Paths Forward if they walk away, right?

> If the layperson says "oh, you have to learn all that? well i'd rather do this other thing instead. i think it fits me better." that is OK. i don't have a criticism of that.

This seems (and this is the reason I brought up the scenario) analogous to MIRI and Popper. It seems sort of true in principle that MIRI should have a response to Popper beyond the brief remarks made by Yudkowsky, but it doesn't concretely feel very interesting / like it would go anywhere, and there are a lot of other paths to understanding the problems MIRI is interested in which seem more interesting / more likely to go somewhere. I've read Black Swan, which is very Popperian by its own account at least, and although I found the book itself interesting, it didn't seem to contain any significant critique of Bayesian epistemology by my lights -- indeed, I roughly agree with Yudkowsky's remark about the picture being a special case of Bayes (though of course Taleb denies this firmly). It doesn't seem particularly worthwhile, all things considered, for me to even write up my thoughts on Taleb's version of Popper.

Somehow it seems like the Popperian project and the Bayesian project are so different that I'm just not optimistic that the Popperian one even connects with the Bayesian one in terms of what kind of arguments for and against epistemic methodology seem compelling. For me, a mathematical picture like information theory, and the close connection between information and probability provided by the coding theorem, is very compelling and makes me feel more strongly that Bayesianism gets close to what is really going on. I'm not aware of anything approaching this on the Popperian side, to the point where I feel like Popperians aren't interested in playing the same ball game. At least PAC learning, the MDL principle, and other alternatives to Bayesianism have corresponding machine learning algorithms and learning-theory results to show their efficacy.
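
(To spell out the connection I mean, as a minimal sketch with a made-up distribution: Shannon's source coding theorem says an optimal code assigns an outcome of probability p a codeword of roughly -log2(p) bits, so probabilities and description lengths are two views of the same quantity.)

```python
import math

# Source-coding view of probability: an optimal code assigns an outcome of
# probability p a codeword of about -log2(p) bits, so the expected code length
# equals the Shannon entropy of the distribution.

distribution = {"a": 0.5, "b": 0.25, "c": 0.125, "d": 0.125}  # made-up example

ideal_length = {x: -math.log2(p) for x, p in distribution.items()}
entropy = sum(p * ideal_length[x] for x, p in distribution.items())

for x, p in distribution.items():
    print(f"P({x}) = {p:<6}  ideal codeword length = {ideal_length[x]:.1f} bits")
print(f"expected code length = entropy = {entropy:.2f} bits")
```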

I suppose precisely that statement is the kind of thing you're saying MIRI should be able to produce at least.

But, to reiterate, my point here is to say that there seems to be a parallel with the layperson who says there are better uses of time than to learn all the technical details required for the discussion.

Another thing I wanted to mention is that it seems like the ability to articulate one's thinking necessarily falls behind the thinking itself, sometimes far behind. Articulating the arguments behind one's position is often a major project, a book that takes many years to write. In these cases, it seems like a response to criticism may end up being only a promissory note to articulate arguments at some later time. Is that a sufficient PF?

PL at 1:32 AM on November 13, 2017 | #9248
> although I think you may have exceptionally much time to respond to things as compared to other people (or maybe you're assigning my remarks high importance?).

i write a lot faster than you're estimating, with less energy/effort used. this is a skill i developed over time b/c i was a heavy writer/discusser from the beginning when i got into philosophy, so i've developed relevant supporting skills. i'm also a fast reader when i want to be (i can control reading speed based on the level of detail i want to get from something). techniques include RSVP, skimming, sped up TTS/audio/video (audio stuff also allows multitasking).

i also set up my life to have a lot of time for thinking/writing/discussing, on purpose. i've been kinda horrified to find that most think tank and public intellectual types seem not to have done this. (but some of the very best, like David Deutsch, did do it.)

i'm also prioritizing this more than you may expect because people willing to talk about philosophy with me and be reasonable are a scarce commodity. this may not be your experience about what the world is like (you may find plenty of people to talk with in your experience), but for me they're very rare. why? because my method of discussion and ideas are plenty adequate to filter out most people!

oh also, **i enjoy discussions like this**. this isn't painful work for me. this isn't draining or hard. Put another way: I've been playing some Mario Odyssey recently, but to me this discussion is *more fun than video games*.

> partly because the filters will often be taboo themselves

I don't think the kind of filters I'm in favor of are taboo. but it's possible some good ones that i haven't thought of are. i don't really mind breaking most taboos anyway.

> stating what criteria you use to filter gives people a lot of leverage to hack your filter

that's the kind of filter i think is bad. the sort of filter i think is good will work just as well even if people know what it is.

i don't think you should filter on .edu email addresses b/c people with .com email addresses can be correct. if you do that, you're blocking lots of ways someone could correct you. furthermore, you're wasting people's time who see your public email and contact you from a .com and then you ignore them.

good filters involve knowledge or skills (so if the person "hacks" the filter by developing the knowledge or skills, then you're glad and that's fine), or involve clear, objective criteria people can meet (and you're *glad* if they do, b/c the criteria are *useful* instead of just a filter) such as formatting quotes correctly on FI, or actually have to do with the content instead of prestige/credentials/networking/social/authority.

my purpose in asking people to say their filters isn't just to prevent the biased application of filters, and double standards. it's also to get them to stop using irrational filters (that they don't want to admit they use) and also to get them to stop using filters on prestige/social-status/etc type stuff. stick to filtering on:

1) content. like pointing out a mistake, and then if they have no answer, ok, done.

or

2) you can put up other barriers *if you tell them* and the barriers are reasonable asks (and they can either meet the barrier or criticize it). this needs to be shared to work well. if FI required a certain post format but didn't tell people what it was, it'd be really nasty to filter on it!

i don't want people to use approximate filters that semi-accurately find good people. i want them to use filters that actually don't block progress. i think this is very important b/c a large amount of progress comes from **outliers**. so if you block outlier stuff (over 90% of which is *especially bad*), then you're going to block a lot of great, innovative stuff.

> - You responded by saying that of course there will be some things which you have reason not to say, but you can at least explain that it doesn't make sense to answer questions of a certain sort.

yeah, you can always go to a higher meta level and still say something. the technique of going to a meta level comes up a lot in philosophy. e.g. it's also used here: http://fallibleideas.com/avoiding-coercion

> It's an asymmetric information problem. The person with the meaningful concern can't meaningfully differentiate themselves from others by loudly saying "This one really **is** important!" because everyone can say that.

I actually have credentials which stand out. But I don't usually bring them up, even though I know people are often filtering on credentials. Why?

Because of the terrible social dynamics which are anti-name-dropping and anti-bragging. If you just objectively start stating credentials, people respond with

1) social distaste, and the assumption that you're low status

2) thinking you're trying to appeal to authority (and not admitting they are filtering on the very kinds of "authority" you're talking about)

3) they debate which credentials are impressive. lots of idiots have PhDs. lots of idiots have spent over 10,000 hours on philosophy (as i have). few people, idiots or not, have written over 50,000 philosophy related things, but that isn't the kind of credential people are familiar with evaluating. i have a very high IQ, but i don't have that *certified*, and even if i did many people wouldn't care (and high IQ is no guarantee of good philosophy, as they would point out, forgetting they are supposedly just trying to filter out the bottom 80% of riff raff). i have associations with some great people, but some of them have no reputation, and as to David Deutsch he's a Popperian. if they wouldn't listen to him in the first place (and they don't), they won't listen to me due to association. Thomas Szasz particularly liked me and was a public author of dozens of especially great books, but most people hate him.

i have great accomplishments, but most of the important ones are in philosophy and are the very things at issue. some other accomplishments are indirect evidence of intelligence and effective learning, and stand out some, but anyone who doesn't want to listen to me still isn't going to. (e.g. at one point i was arguably the best Hearthstone player in the world – i did have the best tournament results – and i wrote Hearthstone articles with considerably more views (5-6 figures per article) than most people have for any content ever. that was just a diversion for me. i had fun and quit. anyway, from what i can tell, this is not a way to get through people's filters. and really i think social skill is the key there, with just enough credentials they can plausibly accept you if they want to.)

So it's *hard*. And I'm *bad* at social networking kinda stuff like that (intentionally – i think learning to be good at it would be intellectually destructive for me).

> This seems (and this is the reason I brought up the scenario) analogous to MIRI and Popper. It seems sort of true in principle that MIRI should have a response to Popper beyond the brief remarks made by Yudkowsky

note that his remarks on Popper are blatantly and demonstrably false, as i told him many years ago. they are false in very objective, clear ways (just plain misstating Popper's position in ways that many *mediocre* Popperians would see is wrong), not just advanced in terms of subtle advanced nuances.

Yudkowsky's opinions of Popper are basically based on the standard hostile-to-Popper secondary sources which get some basic facts wrong and also focus on LScD while ignoring Popper's other books.

> it doesn't concretely feel very interesting

Popper's philosophy explains why lots of what MIRI does is dead ends and can't possibly work. I don't see how lack of interest can be the issue. The issue at stake is basically whether they're wasting their careers and budgets by being *utterly wrong* about some of the key issues in their field. Or more intellectually, the issue is whether their work builds on already-refuted misconceptions like induction.

That's a big deal. I find the lack of interest bizarre.

> I've read Black Swan, which is very Popperian by its own account at least

I'm not familiar with this particular book (I'd be happy to look at it if the author or a fan was offering a Paths Forward discussion). Most secondary sources on Popper are really bad b/c they don't understand Popper either.

> Somehow it seems like the Popperian project and the Bayesian project are so different that I'm just not optimistic that the Popperian one even connects with the Bayesian one in terms of what kind of arguments for and against epistemic methodology seem compelling. For me, a mathematical picture like information theory, and the close connection between information and probability provided by the coding theorem, is very compelling and makes me feel more strongly that Bayesianism gets close to what is really going on.

Are you aware that David Deutsch – who wrote the best two Popperian books – is a physicist who has written papers relating to information flow and probability in the multiverse? He even has an AI chapter in his second book (btw I helped with the book). http://beginningofinfinity.com

The reason CR connects to Bayesian Epistemology (BE) stuff in the big picture is simple:

CR talks about how knowledge can and can't be created (how learning works). This says things like what methods of problem solving (question answering, goal achieving) do and don't work, and also about how intelligence can and can't work (it has to do something that *can* learn, not *can't*). CR makes claims about a lot of key issues BE has beliefs about, which are directly relevant to e.g. AGI and induction.

To the extent there are big differences, that's no reason to give up and ignore it. CR criticizes BE but not vice versa! We're saying BE is *wrong*, and *its projects will fail*, and we know why and how to fix it. And the response "well you're saying we're wrong in a big way, not a small way, so i don't want to deal with it" is terrible.

CR explained why over 2000 years of tradition in epistemology was *disastrously wrong*. And BE, like almost everyone, isn't updating and is ignoring the breakthrough and continuing with the same old errors. BE thinks it's clever b/c it has some new math tools and some tweaks, but from the CR perspective we see how BE keeps lots of the same old fundamental errors.

Why do BE ppl want to bet their careers on CR being false, just b/c some secondary sources said so (and while having no particular secondary source challenging CR that they will actually reference and take responsibility for, and reconsider if that source is refuted)?

It makes no sense to me. I think it's b/c of bad philosophy – the very thing at issue. That's a common problem. Systems of bad ideas are often self-perpetuating. They have mechanisms to keep you stuck. It's so sad, and I'd like to fix it, but people don't want to change.

> I'm not aware of anything approaching this on the Popperian side, to the point where I feel like Popperians aren't interested in playing the same ball game. At least PAC learning, the MDL principle, and other alternatives to Bayesianism have corresponding machine learning algorithms and learning-theory results to show their efficacy.

My opinion is we aren't ready to start coding AGI, and no one has made any progress whatsoever on coding AGI, and the reason is b/c they don't even understand what an AGI is and therefore can't even judge what is and isn't progress on AGI. things like Watson and AlphaGo *are not AGI and are not halfway to AGI either, they are qualitatively different things* (i think they're useful btw, and good non-AGI work).

you need to have some understanding of what you're even trying to build before you build it. how does intelligence work? what is it? how do you judge if you have it? do animals have it? address issues like these before you start coding and saying you're succeeding.

no one is currently coding anything with a generic idea data structure that can handle explanations, emotions, poetry, moral values, criticism, etc. they aren't even working on the right problems – like how to evaluate disagreements/criticism in the general case. instead they are making inductivist and empiricist mistakes, and trying to build the kind of thing they incorrectly believe is how human thinking works. and they don't want to talk about this forest b/c they are focused on the trees. (or more accurately there's more than 2 levels. so they don't wanna talk about this level b/c they are focused on multiple lower levels).

> Is that a sufficient PF?

it's far more than sufficient, as long as there are followups in the future. iterations can be quite small/short. i commonly recommend that to people (doing a larger number of shorter communications – that way there's less opportunity to build on errors/misunderstandings/etc before feedback).

> Another thing I wanted to mention is that it seems like the ability to articulate one's thinking necessarily falls behind the thinking itself, sometimes far behind. Articulating the arguments behind one's position is often a major project, a book that takes many years to write. In these cases, it seems like a response to criticism may end up being only a promissory note to articulate arguments at some later time.

This is partly a real issue, and that's fine – you can say "I know criticism would be very valuable, and I'll get it just as soon as I'm able to formulate what I'm thinking adequately. And the reason I judge this to be productive to work on more is..."

But I think it's partly that people structure their learning and research the wrong way. They could have more discussion, from the start, and be less fragile about criticism. They could get better at saying initial versions of things to get some initial feedback about major issues they're missing. This can proceed in stages as they keep working and adding levels of detail, and then getting feedback at that level of detail again.

And if there is no feedback, ok, proceed. It's worth a try b/c sometimes someone knows something important and is willing to say so (and this would happen a lot more if Paths Forward were more common. i know people who are very smart, know tons of stuff ... and basically don't like people and don't try to share their knowledge much b/c no one does Paths Forward. i strongly suspect there are many more such people i do not know who had some bad experiences and gave up discussion.) And, also, formulating your thoughts *in writing* at each stage is *helpful to your own thinking*. you should be getting away from "i know what i mean" to actually writing down what you developed so far (so it'd be understandable to a stranger with the right background knowledge, e.g. the reader has to already know calculus, or some science stuff, or even already know Objectivism if you're doing work building on Objectivism. but what the reader doesn't have to do is read your mind or know your quirks. this is how books work in general).

i find i often articulate rough drafts and pieces of things early on, and more later. like with Paths Forward and Yes or No Philosophy, there was discussion of some elements long before i developed them more fully. and i don't think going into isolation to think about them alone would have been the best way to develop them more.

i think people who write books usually shouldn't, but i will accept sometimes they should. most people who write books do not have adequate experience writing shorter things – and exposing them to Paths Forward style criticism to check if they are any good. i think most books are bad (i bet you agree), and this could be avoided if people would try to actually write one little essay that isn't bad, first, and then put a lot of Paths Forward type work into getting the essay criticized and addressing criticisms from all comers and not making any excuses and not ignoring any issues. then they'd see how hard it is to actually do good work. but instead of doing a small project to a really high standard, and then repeating a few times, and then doing some medium projects to a really high standard, and THEN doing a book ... they go do a big project to a much lower standard. meh :/

this relates to my ideas about powering up and making progress. the focus should be on learning and self-improvement, not output like books. ~maximize your own progress. then what happens is you generate some outputs while learning, and also some types of outputs become easy for you b/c you've greatly surpassed them in skill level. so you can *very cheaply* output that stuff. if you keep this up enough, you learn enough your cheap/easy outputs end up being better than most people's super-effortful-book-writing. so this is a much much more efficient way to live. don't divert a bunch of effort into writing a book you can maybe just barely write, with a substantial failure chance. put that effort into progress until you can have the same book for a much lower effort cost with a much lower risk of it being bad/wrong.

> But, to reiterate, my point here is to say that there seems to be a parallel with the layperson who says there are better uses of time than to learn all the technical details required for the discussion.

the rational lay person is welcome to say that *and then have no opinion of the matter*. e.g. he says "i'm going to go do plumbing" and then is *neutral* about whether BE or CR is right. he's not involved, and he knows his ignorance. and that's fine b/c neither BE nor CR is saying he's doing plumbing all wrong – we agree he can be a decent plumber without learning our stuff. (he may have some problems in his life that philosophy could help with, such as destroying his children's minds with authoritarian parenting, and ultimately i'd like to do something about that too. but still, the general concept that you can take an interest in X and recognize your ignorance of Y, and not everyone has to be highly interested in Y, is fine.)

curi at 11:52 AM on November 13, 2017 | #9250
> although I think you may have exceptionally much time to respond to things as compared to other people (or maybe you're assigning my remarks high importance?).

two more comments on this to add to what i said above.

1) my comments to you have no editing pass. i'm not even doing much editing as i go, this is near max speed writing. over the years i've put a lot of effort into being able to write without it being a burden, and into making my initial thoughts/writing good instead of getting stuff wrong then fixing it in editing later. i think this is really important and also unusual. (you should fix mistakes in your writing policies themselves instead of just having a separate editing policy to fix things later – and if you can't do that something is wrong. it makes more sense this way. editing has a place but is super overrated as a crutch for ppl who just plain think lots of wrong and incoherent thoughts all the time and aren't addressing that major issue.)

2) this is public, permalinkable material which i can re-use. i will link people to this in the future. it's a good example of some things, and has some explanations i'll want to re-use. i'm not just writing to you. everyone on the FI forum who cares is reading this now, and others will read it in the future.

curi at 12:47 PM on November 13, 2017 | #9252
As Ayn Rand would say: check your premises.

Avoiding debates about your premises is so dumb.

Anonymous at 2:32 PM on November 13, 2017 | #9253
> that's the kind of filter i think is bad. the sort of filter i think is good will work just as well even if people know what it is.

> i don't think you should filter on .edu email addresses b/c people with .com email addresses can be correct. if you do that, you're blocking lots of ways someone could correct you. furthermore, you're wasting people's time who see your public email and contact you from a .com and then you ignore them.

I used a fake example of a taboo filter. I don't have some explicit filter policy which is taboo, but I can imagine that if I had one I wouldn't want to say it, and furthermore if I did say it, I wouldn't be able to defend it in conversation precisely because it would depend on assumptions I don't want to publicly defend. I can imagine that I might be contacted by someone who I would automatically know I should filter, in a knee-jerk kind of way, and my justification for this would be taboo. Suppose I do work which has serious public policy implications, but I expect those implications to be misconstrued by default, with serious negative consequences. If people in the government, or campaigning politicians, etc contact me, my best response would be either no response or something to throw them off the trail. I might be fine with talking about things in an academic journal read only by other experts, but if a reporter cornered me I would describe things only in the most boring terms, etc.

(I'll likely make a more substantive reply later.)

PL at 6:17 PM on November 13, 2017 | #9254
I'm not very concerned about edge cases. If 5% of intellectuals claim some exceptions, and 95% do Paths Forward, that sounds just fine for now. I will tentatively accept some rare edge cases, and we can investigate them more carefully at some later date if it matters.

On the other hand if you think the taboo case is a good excuse for ~100% of people not to do paths forward -- the current situation -- then we can debate it now. but if you're only trying to offer excuses for less than 20% of people, and agree with me about the other 80+%, i'll take it for now.

curi at 7:39 PM on November 13, 2017 | #9255
(Still may not get to a more proper reply today, but I'll reply to the most recent point.)

> I'm not very concerned about edge cases. If 5% of intellectuals claim some exceptions, and 95% do Paths Forward, that sounds just fine for now. I will tentatively accept some rare edge cases, and we can investigate them more carefully at some later date if it matters.

Suppose that 5% of intellectuals have good reasons not to do PF, along the lines I described. Then, if 95% of intellectuals do PF, this creates a reason for those 5% of intellectuals to be looked down on and excluded in various ways (funding, important positions, etc). The 5% will of course be unable to explain their specific reasons for not participating in PF, at least publicly; which means that even if they can describe them privately, those descriptions can't be included in the official reasons for decisions (about funding, appointment of positions, etc). So it creates a feedback loop which punishes others for taking those private reasons into account. Even if this problem itself is widely understood (and I'm skeptical that it would be, even given the improved intellectual discourse of PF world), it may make sense as a policy to use the standards of PF in those decisions (funding, appointments, etc) because it seems important enough and good enough an indicator of other good qualities.

This trade-off may even be worth it. But it's not at all clear that "95% of intellectuals could use PF" is a good enough justification to meet my objection there.

PL at 4:26 PM on November 14, 2017 | #9257
> (Still may not get to a more proper reply today, but I'll reply to the most recent point.)

There is no hurry on my account. I think the only hurry is if you're losing interest or otherwise in danger of dropping the topic due to time passing.

---

Should I and the other 95% not pursue our own work in the most rational way for fear that some other people would be *incorrectly* attacked for not using all the same methods we're using? No.

I'm happy to grant there are legitimate concerns about working out the details if it catches on. While my answer is a clear "no" in the previous paragraph, I might still be willing to do *something else other than give up PF* to help with that.

I do see the issue that if a lot of people are more public about some matters, that makes it harder for the people with something (legitimate) to hide. I like privacy in general – but not much when it comes to impersonal ideas which ought to be discussed a bunch to help improve them.

Also I'm not directly interested in things like public reputations, appointments or funding. I care about how truth-seeking should/does work. I understand the nature of reason has implications to be worked out for e.g. funding policies and expectations. But I'm not very concerned – I'll try to speak the truth and I expect that'll make things better not worse.

curi at 5:10 PM on November 14, 2017 | #9258
> Should I and the other 95% not pursue our own work in the most rational way for fear that some other people would be *incorrectly* attacked for not using all the same methods we're using? No.

I don't feel quite right about either agreeing or disagreeing with this. I feel like you're making a logically correct response to my surface-level point but not making much effort to see the deeper intuitions which made me generate the point, and as a result, your response does little to shift my intuitions. So I'm going to state some messy intuitions. The hypothetical where PF is adopted widely is far enough from our world that it isn't really well-defined. My mental image is coming from brief experiences hanging out with the sort of person who makes suggestions whenever they see you could be doing something better, doesn't let you get away with excuses for ignoring their advice if those excuses aren't real reasons, communicates their own preferences and reasons very clearly, and engages happily in debates about them. At first the experience may be invigorating and enjoyable -- indeed, such a person holds more truly to the intellectual ideal in many ways. However, after a while, it somehow gets nerve-wracking. I have the irrational feeling that they are going to bite my head off if I do something inconsistent. I obsess about the list of stated preferences they gave, and fall into weird dysfunctional social patterns.

So, when I imagine a PF society, I imagine a sense of a big eye looking at you all the time and watching for your inconsistencies, whether or not you're trying to play the PF game yourself. It's just the way the world works now -- there's an implicit *assumption* that you're interested in feedback. And it's hard to say no.

Why do I have this reaction to "that kind of person"? Part (only part) of this is due to the exhausting nature of needing to explain myself all the time. I don't think it's exhausting in the sense that mental labor is exhausting. It feels "exhausting" because it feels like a constant obstacle to what I want to do. This is paradoxical, of course, because really this kind of person I am describing is only trying to help; they are offering all these suggestions about why what you're doing could be done better! And it's not like it takes *that* long to explain a reason, or to change plans. So, why should it feel like a constant barrier?

I think it has something to do with this LW post on bucket errors:

http://lesswrong.com/lw/o2k/flinching_away_from_truth_is_often_about/

IE, sometimes we just wouldn't make the right update, and some part of us is aware of that, and so refuses to update. And it's not *just* that we're genuinely better off not updating in those cases, until that time at which we have a better way to update our view (IE, a better view to update *to* than the one we can currently explicitly articulate). Because even if that's *not* the case, it's *still* a fact about the human motivational framework that it finds itself in these situations where it sometimes feels attacked in this way, and people can need to disengage from arguments and think about things on their own in order to maintain mental stability. (I am literally concerned about psychotic breaks, here, to some extent.)

In general, as a matter of methodology, when you have a criticism of a person or a system or a way of doing things or an idea or an ideology, I think it is very important to step back and think about why the thing is the way that it is, and understand the relevant bits of the system in a fair bit of detail. This is partly about Chesterton's Fence:

https://en.wikipedia.org/wiki/Wikipedia:Chesterton%27s_fence

IE, if it's true that people sometimes don't want to respond to arguments, and you think this is a wrong reflex, isn't it worth having a lot of curiosity about why that is, what motivations they might have for behaving in this way, so that you can be sure that your reasons in favor of PF outweigh the reasons against, and your methodology for PF addresses the things which can go wrong if you try to respond to all criticisms, which (perhaps) have been involved in people learning *not* to respond to all criticisms based on their life experience, or at least not naturally learning *to* respond to all criticisms?

And similarly, and perhaps as importantly, understanding in detail why the person/system/idea is the way it is seems necessary for attempting to implement/spread a new idea in a way that doesn't just bash its head against the existing incentives against that way of doing things. You want to understand the existing mechanics that put the current system in place in order to change those dynamics effectively. This is partly about errors vs bugs:

http://celandine13.livejournal.com/33599.html

And partly about the point I made earlier, concerning understanding the deeper intuitions which generate the person's statements so that you can respond in a way that has some chance of shifting the intuitions, rather than responding only to surface-level points and so being unlikely to connect with intuitions.

(This "surface level vs deep intuitions" idea has to do with the "elephant and the rider" metaphor of human cognition. (I don't think it's a particularly *good* metaphor, but I'll use it because it is standard.) It's a version of the system 1 / system 2 model of cognitive bias, where system 2 (the explicit/logical/conscious part of the brain) serves mostly as a public relations department to rationalize what system 1 (the instinctive/emotional/subconscious brain) has already decided; IE, the rider has only some small influence on where the elephant wants to go. So, to a large extent, changing someone's mind is a matter of trying to reach the intuitions. This picture in itself might be a big point of disagreement about how useful something like Paths Forward can be. Part of my contention would be that most of the time, the sort of arguments I don't want to respond to are ones which *obviously* from my position don't seem like they could have been generated for the reasons which the arguer explicitly claims they were generated, for example, couldn't have been generated out of concern for me. In such cases, I strongly expect that the conversation will not be fruitful, because any attempt on my part to say things which connect with the intuitions of that person which actually generate their arguments will be rejected by them, because they are motivated to deny their real reasons for thinking things. This denial itself will always be plausible; they will not be explicitly aware of their true motivations, and their brain wouldn't be using the cover if it didn't meet a minimal requirement of plausible deniability. Therefore no real Paths Forward conversation can be had there. If you're curious about where I got this mental model of conversations, I recommend the book The Righteous Mind.)

And partly about cognitive reductions, described in point 5 and 6 in this post:

https://agentfoundations.org/item?id=1129

IE, *in general, when you are confused about something*, particularly when it's a cognitive phenomenon, and especially when you may be confused about whether it is a cognitive phenomenon because it's related to a map-territory confusion, a great way of resolving that confusion is to figure out what kind of algorithm would produce such a confusion in the first place, and for what purpose your brain might be running that kind of algorithm. And, if you seemingly "resolve your confusion" without doing this, you're likely still confused. The same statement holds across different minds.

So, to try to make all of that a bit more coherent: I would like for you to put more effort into understanding what might motivate a person to do anything other than Paths Forward other than stupidity or bias or not having heard about it yet or other such things. If you can put yourself in the shoes of at least one mindset from which Paths Forward is an obviously terrible idea, I think you'll be in a better position both to respond constructively to people like me, and to revise Paths Forward itself to actually resolve a larger set of problems.

> CR explained why over 2000 years of tradition in epistemology was *disastrously wrong*. And BE, like almost everyone, isn't updating and is ignoring the breakthrough and continuing with the same old errors. BE thinks it's clever b/c it has some new math tools and some tweaks, but from the CR perspective we see how BE keeps lots of the same old fundamental errors.

What's a priority ordering of things I should read to understand this? Is Beginning of Infinity a good choice? I remember that my advisor brought David Deutsch's essay on why the Bayesian approach to AGI would never work to a group meeting one time, but we didn't really have anywhere to go from it because all we got out of it was that DD thought the Bayesian approach was wrong and thought there was something else that was better (IE we couldn't figure out why he thought it was wrong and what it was he thought was better).

> you need to have some understanding of what you're even trying to build before you build it. how does intelligence work? what is it? how do you judge if you have it? do animals have it? address issues like these before you start coding and saying you're succeeding.

You'll get no disagreement about that from the Bayesian side. And of course the latest MIRI approach, the logical inductor, is non-Bayesian in very significant ways (though there's not a good non-technical summary of this yet). And logical inductors are still clearly not enough to solve the important problems, so it's likely there are still yet-more-un-Bayesian ideas needed. But that being said, it seems MIRI also still treats Bayes as a kind of guiding star, which the ideal approach should in some sense get as close to as possible while being non-Bayesian enough to solve those problems which the Bayesian approach provably can't handle. (In fact MIRI would like very much to have a version of logical inductors which comes closer to being Bayesian, to whatever extent that turns out to be possible -- because it would likely be easier to solve other problems with more Bayes-like logical uncertainty).

> This is partly a real issue, and that's fine – you can say "I know criticism would be very valuable, and I'll get it just as soon as I'm able to formulate what I'm thinking adequately. And the reason I judge this to be productive to work on more is..."

> But I think it's partly that people structure their learning and research the wrong way. They could have more discussion, from the start, and be less fragile about criticism. They could get better at saying initial versions of things to get some initial feedback about major issues they're missing. This can proceed in stages as they keep working and adding levels of detail, and then getting feedback at that level of detail again.

I strongly agree with this and the several paragraphs following it (contingent in some places on Paths Forward not being a net-bad idea).

> the rational lay person is welcome to say that *and then have no opinion of the matter*. e.g. he says "i'm going to go do plumbing" and then is *neutral* about whether BE or CR is right.

I strongly disagree with this principle. I might agree on a version of this principle which instead said *and then have no public opinion on the matter*, IE *either* be neutral on BE vs CR *or* not claim to be a public intellectual, *provided* I was furthermore convinced that PF is an on-net-good protocol for public intellectuals to follow. I find it difficult to imagine endorsing the unqualified principle, however. It is quite possible to simultaneously, and rationally, believe that X is true and that having a conversation about X with a particular person (who says I have to learn quantum mechanics in order to see why X is false) is not worth my time. To name a particular example, the quantum consciousness hypothesis seems relevant to whether AI is possible on a classical computer, but also seems very likely false. While I would be interested in a discussion with an advocate of the hypothesis for curiosity's sake, it seems quite plausible that such a discussion would reach a point where I'd need to learn more quantum mechanics to continue, at which point I would likely stop. At that point, I would be *wrong* to change my opinion to a neutral one, unless the argument so far had swayed my opinion in that direction.

This goes back to my initial claim that PF methodology seems to hold the intellectual integrity of the participant ransom, by saying that you must continue to respond to criticisms in order to keep your belief. While this might be somewhat good as a social motivation to get people to respond to criticism, it seems very bad in itself as epistemics. It encourages a world where the beliefs of the person with the most stamina for discussion win out.

Overall, I continue to be somewhat optimistic that there could be some variant of PF which I would endorse, though it might need some significant modifications to address my concerns. At this point I feel I may assign positive expectation to the proposition of MIRI starting to follow PF, on the whole, but am uneasy about certain aspects of how that might play out, including the time investment which might be involved. I've now read your document on how to do PF without it taking too much time, but it seems like MIRI in particular would attract a huge number of trolls. Part of the reason Eliezer moved much of his public discussion to Facebook rather than LW was that there were a number of trolls out to get him in particular, and he didn't have enough power to block them on LW. Certainly I *wish* there were *some* good way to solve the problems which PF sets out to solve.

PL at 4:12 PM on November 20, 2017 | #9261
yay, you came back.

> but not making much effort to see the deeper intuitions which made me generate the point, and as a result, your response does little to shift my intuitions.

It's hard for me to get in your head. I don't know you. I don't know your name, your country, your age, your profession, your interests, how you spend your time, your education, or your background knowledge. If you have a website, a blog, a book, papers, or an online discussion history (things I could skim to get a feel for where you're coming from), I don't know where to find it.

And speculating on what people mean can cause various problems. I also don't care much for intuitions, as against reasoned arguments.

> My mental image is coming from brief experiences hanging out with the sort of person who makes suggestions whenever they see you could be doing something better, doesn't let you get away with excuses for ignoring their advice if those excuses aren't real reasons, communicates their own preferences and reasons very clearly, and engages happily in debates about them. [...] However, after a while, it somehow gets nerve-wracking. I have the irrational feeling that they are going to bite my head off if I do something inconsistent. I obsess about the list of stated preferences they gave, and fall into weird dysfunctional social patterns.

That sounds a lot like how most people react to Objectivism (my other favorite philosophy). And it's something Objectivism addresses directly (unlike CR which only indirectly addresses it).

Anyway, I've never tried to make PF address things like an "irrational feeling". It's a theory about how to act rationally, not a solution to irrationality. I don't expect irrational people to do PF.

Separately, what can be done about irrationality? I think learning Objectivism and CR is super helpful to becoming rational, though most people find that inadequate, and I don't have anything better to offer there. I know a lot about *why and how* people become irrational – authoritarian parenting and static memes – but solutions that work for many adults do not yet exist. (Significant solutions for not making your kids irrational in the first place exist, but most adults, being irrational, don't want them. BTW, DD is a founder of a parenting/education philosophy called Taking Children Seriously, which applies CR and (classical) liberal values to the matter. And FYI static memes are a concept DD invented and published in BoI. Summary: http://curi.us/1824-static-memes-and-irrationality )

Back to your comments, there are two sympathetic things I have to say:

1) I don't think the people biting your head off are being rational. I think most of the thing you don't like which you experienced *is actually bad*. Also a substantial part of the issue may be misunderstandings.

2) I think you have rational concerns mixed in here, and I have some fairly simple solutions to address some of this.

To look at this another way: I don't have this problem. I have policies for dealing with it.

Part of this is my own character, serene self-confidence, ability to be completely unfazed by people's unargued judgements, disinterest in what people think of me (other than critical arguments about ideas), and my red pill understanding of social dynamics and unwillingness to participate in social status contests (which I don't respect, and therefore don't feel bad about the results of).

Partly I know effective ways to stand up to arguments, demands, pressures, meanness, etc. (Note that I mean rationally effective, in my view – and my own judgement is what governs my self-esteem, feelings, etc. I don't mean *socially effective*, which is a different matter which doesn't really concern me. I think seeking popularity in that way is self-destructive, especially intellectually.)

So if people give me philosophy arguments, I respond topically, and I'm not nervous, I'm fully confident in my ability to address the matter – either by refuting it or by learning something. This confidence was partly built up from a ton of experience doing those things (in particular, I lost every major argument with DD for the first ~5 years, so I have a lot of experience changing my mind successfully), and also I had some of this attitude and confidence since early childhood.

What if people argue something I don't care about? What if they want me to read some book and I think I have better things to do? What if they want to tell me how Buddhism predicted quantum physics? What if they think homeopathy works and I should study it and start using it? What if they're recommending meditation or a fad diet? What if it's some decent stuff that I don't care about because I already know more advanced stuff?

I just state the situation as I see it. I always have some kind of reasoning for why I don't want to look into something more – or else I'd be interested. There are common patterns: I already know stuff about it, or I don't think it fits my interests, or I don't think it looks promising enough to investigate at all (due to some indicator I see and could explain). Each of these lends itself to some kind of response comment.

I investigate tons of stuff a little bit because I'm able to do it quickly and efficiently, and I'm curious. I want to form a judgement. I often find plenty of info from Amazon reviews, for example, to form an initial judgement. Then the person can either tell me something I've gotten wrong, or I consider my judgement good enough.

And I know some of the broadest knowledge which is most applicable to many fields (epistemology – in every field you have to learn whatever the field is about, so the philosophy of learning is relevant; and you need to evaluate ideas in every field). So I find it's usually easy to point out mistakes made in other fields – they are using the wrong philosophy methods, which is why they are getting the wrong answers, and that's why I disagree (I can frequently give some specifics after 15 minutes of research). Often I bring the issue back to epistemology – I will ask the person if the thing they are recommending is Popperian, and if not, how can it be any good?

This is all optional. If I think something is promising I can look at it all I want. But if I think something is bad then I have minimal options like these. There are also more alienating things I can do that get rid of most people very fast while being scrupulously rational, but I'm not sure how to give generic examples that you'll understand (I talk about this a bit more at the end of this section regarding how the FI forum works). But in short I find that by asking for high standards of rationality, I can get anyone to stop talking to me very quickly.

If you use techniques like this, you can quickly find yourself in a philosophy argument (rather than an argument about the specific thing they were bringing up). That isn't a problem for me. I *want* more philosophy discussions, and I also regard it as extremely important to keep Paths Forward open regarding philosophy issues, and I also know all the philosophy arguments I need offhand (it's my speciality, and debate was a major part of my philosophy learning).

So this is convenient for me, but what about other people with other interests? I think *everyone needs philosophy*. Everyone has a philosophy, whether they think about it or not. Everyone tries to learn things and judge ideas – which are philosophy issues. There's no getting away from philosophy, so you ought to study it some and try to be good at it. Currently, all competent philosophers are world class, and it's the field which most badly needs talent, so people in all fields ought to become world class philosophers in order to do their own field well. Our culture isn't good enough at philosophy for the stuff you pick up here and there to be decent, so you can't just focus on your own field, sorry.

Oh and what if people are mean and socially pushy? It doesn't get to me. I just think they're being dumb and immoral, so why would I feel bad? And I don't mind standing up to meanness. I'm willing to call it out, explicitly, and to name some of the social dynamics they are using to pressure me. Most people don't like to do this. E.g. when I ask people why they're being mean, they usually respond by being even more mean, and really trying hard to use social dynamics to hurt me. And they do stuff that would in fact hurt most other people, but doesn't hurt me. Learning to deal with such things is something I'd recommend to people even if they don't do PF.

Also I designed my life not to need social status – e.g. I'm not trying to get tenure, get some prestigious academic gatekeepers to publish me, or impress a think tank boss (who gets taken in by social games and office politics) enough to keep paying me. I'm not reliant on a social reputation, so the issue for me is purely if the jerks can make me feel bad or control me (they can't). Such a life situation is more or less necessary for a serious intellectual so they can have intellectual freedom and not be pressured to conform (sorry academia). If PF doesn't work well for someone b/c they are under a bunch of social pressures, I understand the issue and advise them to redesign their life if they care about the truth (I don't expect them to do this, and I'd be thrilled to find a dozen more people who'd do PF).

BTW the Fallible Ideas (FI) yahoo discussion group is public and mostly unmoderated. This has never been a big problem. I will ask problem people questions, e.g. why they're angry, why they're at FI, or what they hope to accomplish by not following FI's ethos. I find that after asking a few times and not saying anything else, they either respond, try to make serious comments about the issues, or leave – all of which are fine. Other regular posters commonly ask similar questions or ignore the problem people, instead of engaging in bad discussion. Sometimes if I think someone is both bad and persistent, and others are responding to them in a way I consider unproductive, I reply to the other people and ask why they're discussing in that way with a person with flaws X, Y and Z, and how they expect it to be productive. Stuff like this can be harsh socially (even with no flaming), and is effective, but is also fully within the rules of PF and reason (and the social harshness isn't the intent, btw, but I don't know how to avoid it, and that's why tons of other forums have way more trouble with this stuff, b/c they try to be socially polite, which doesn't solve the problem, and then they don't know what to do (within the normal social rules) so they have to ban people).

The first line of defense, though, is simply asking people to format their posts correctly. This is super objective (unlike moderating by tone, style, flaming, quality, etc) and is adequate to get rid of most bad posters. Also, if people don't want to deal with the formatting rules they can always use my blog comments instead where basically anything goes short of doxing and automated spam bots (though I might ask someone to write higher quality posts or use ">" for quoting in blog comments, but I never moderate over it. In blog comments, I don't even delete false, profanity-laced sexual insults against named FI people – whereas those actually would get you put on moderation on the yahoo group).
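To show how objective a formatting filter is compared with moderating by tone or quality, here's a minimal sketch in Python. The specific rules below (start quote lines with ">", no HTML, keep lines under 80 characters) are hypothetical stand-ins for illustration, not the actual FI posting rules:

import re

# Hypothetical formatting rules (illustration only, not the real FI rules):
# 1) quoted material goes on lines starting with ">"
# 2) post plain text, no HTML tags
# 3) keep lines under 80 characters

def format_problems(post):
    problems = []
    for i, line in enumerate(post.splitlines(), start=1):
        if line.lstrip().startswith(">") and not line.startswith(">"):
            problems.append("line %d: quote marker should start the line" % i)
        if re.search(r"<[a-zA-Z/][^>]*>", line):
            problems.append("line %d: contains HTML, post plain text" % i)
        if len(line) > 80:
            problems.append("line %d: longer than 80 characters" % i)
    return problems

# Either the list is empty (the post passes) or the poster gets an exact,
# impersonal list of what to fix -- no judgement calls about tone or quality.
print(format_problems("a reply\n<b>bold</b> text\n  > an indented quote"))

The point of the sketch is just that every rule is mechanical, so enforcing it can't turn into a popularity contest the way tone or quality moderation does.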

> Eliezer moved much of his public discussion to Facebook

I consider Facebook the worst discussion forum I've used. I've found it's consistently awful – even worse than twitter.

> At this point I feel I may assign positive expectation to the proposition of MIRI starting to follow PF

I doubt it because only one MIRI person replied to my email, and his emails were brief, low quality, and then he promptly went silent without having said anything about Paths Forward or providing any argument which addresses Popper.

> > you need to have some understanding of what you're even trying to build before you build it. how does intelligence work? what is it? how do you judge if you have it? do animals have it? address issues like these before you start coding and saying you're succeeding.

> You'll get no disagreement about that from the Bayesian side.

I found *everyone* I spoke with at LW disagreed with this.

> And of course the latest MIRI approach, the logical inductor, is non-bayesian in very significant ways (though there's not a good non-technical summary of this yet).

FYI I can deal with technical stuff, and I'd be willing to if I thought there were Paths Forward, and I thought the technical details mattered (instead of being rendered irrelevant by prior philosophical disagreements). I'm an experienced programmer.

> What's a priority ordering of things I should read to understand this?

I'm not sure what essay you're referring to, but DD's books explain what he thinks. I recommend DD before Popper. I have reading selections for both of them here:

https://fallibleideas.com/books#deutsch

DD's books cover multiple topics so I listed the epistemology chapter numbers for people who just want that part. Popper wrote a lot so I picked out what I think is the best (which is absolutely *not* _The Logic of Scientific Discovery_, which is what tons of Popper's critics focus on. LScD is harder to understand, and has less refined ideas, than Popper's later work.)

There's also my own material such as:

https://yesornophilosophy.com

I consider that (which improves some aspects of CR) and Paths Forward to be my largest contributions to philosophy. If you want it but the price is a problem, I can give you a large discount to facilitate discussion.

https://curi.us/1595-rationally-resolving-conflicts-of-ideas

https://fallibleideas.com

https://reasonandmorality.com

There are other resources, e.g. our community (FI) has ~100,000 archived discussion emails – which are often linked to when people ask repeat questions. There's currently a project in progress to make an ebook version of the BoI discussion archives. There hasn't been a ton of focus on organizing everything because Popper and DD's books, and some of my websites, are already organized. And because we don't yet have a solution to how to persuade people of this stuff, so we don't know what the right organization is. And because people have mostly been more interested in discussing and doing their own learning, rather than making resources for newcomers. And it's a small community.

And I don't consider FI realistically learnable without a lot of discussion, and I think the current resources are far more than adequate for someone who really wants to learn – which is the only kind of person who actually will learn. So I don't think 50% better non-discussion educational resources would change much.

People read books and *do not understand much of it*. This is the standard outcome. Dramatically overestimating how much of the book they understood is also super common. What needs to happen is people discuss stuff as issues come up, but I've had little success getting people to do this. People like to read books in full, then say they disagree without being specific – instead of stopping at the first paragraph they disagree with, quoting it, and saying the issue.

I don't know how to make material that works well for passive audiences – and no one else does either. Every great thinker in history has been horribly misunderstood by most of their readers and fans. This applies to DD's and Popper's books too. E.g. I think DD is the only person who ever read Popper and understood it super well (without the benefit of DD's help, as I and some others have had).

It took me ~25,000 hours, including ~5,000 hours of discussions with DD, to know what I know about philosophy. That doesn't include school (where I learned relevant background knowledge like reading and writing). And I'm like a one in a hundred million outlier at learning speed. There are no fast solutions for people to be good at thinking about epistemology; I've made some good resources but they don't fundamentally change the situation. DD mostly stopped talking with people about philosophy, btw (in short b/c of no Paths Forward, from anyone, anywhere – so why discuss if there's no way to make progress?); I'm still trying. Popper btw did a bunch of classics work b/c he didn't like his philosophy colleagues – b/c the mechanisms for getting disagreements resolved and errors corrected were inadequate.

> So, when I imagine a PF society, I imagine a sense of a big eye looking at you all the time and watching for your inconsistencies, whether or not you're trying to play the PF game yourself. It's just the way the world works now -- there's an implicit *assumption* that you're interested in feedback. And it's hard to say no.

Sounds good to me. But we can start with just the public intellectuals. (In the long run, I think approximately everyone should be a public intellectual. They can be some other things too.)

> It feels "exhausting" because it feels like a constant obstacle to what I want to do. This is paradoxical, of course, because really this kind of person I am describing is only trying to help; they are offering all these suggestions about why what you're doing could be done better! And it's not like it takes *that* long to explain a reason, or to change plans.

There's some big things here:

1) You need lots of reusable material to deal with this which deals with all kinds of common errors. Even giving short explanations of common errors, which you know offhand, can get tiring. Links are easier.

2) If you get popular, you need some of your fans to field questions for you. Even giving out links in response to inquiries is too much work if you have 50 million fans. But if your fanbase is any good, then it should include some people willing to help out by fielding common questions (mostly using links) and escalating to you only when they can't address something. Also if you have that many fans you should be able to make a lot of money from them, so you can pay people to answer questions about your ideas. (Efficiently (link heavy) answering questions from people interested in your ideas is a super cost efficient thing to spend money on if you have a large, monetized fanbase. Some of the links can even be to non-free material, so answering questions directly helps sell stuff in addition to indirectly helping you be more popular, spreading your ideas, etc, and thus indirectly selling more stuff.)

3) When you're new to PF there's a large transition phase as you go from all your ideas being full of mistakes to trying to catch up to the cutting edge on relevant issues – the current non-refuted knowledge. But what is the alternative? Being behind the cutting edge, being wrong about tons of stuff and staying wrong! That won't be an effective way to make progress on whatever work you're trying to do – you'll just do bad work that isn't valuable b/c it's full of known flaws. (This is basically what most people currently do – most intellectual work to create new ideas of some kind, including throughout science, is *bad* and unproductive.)

4) Stop Overreaching. http://fallibleideas.com/overreach This also helps mitigate the issue of 3. overreaching is trying to claim more than you know, and do overly ambitious projects, so you're basically constantly making lots of mistakes, and the mistakes are overwhelming. you should figure out what you do know and can get right, less ambitiously, and start there, and build on it. then criticism won't overwhelm you b/c you won't be making so many mistakes. it's important to limit your ambition to a level where you're only making a manageable amount of mistakes instead of so many mistakes you think fixing all your mistakes (ala PF) is hopeless and you just give up on that and ignore tons of problems.

> It is quite possible to simultaneously, and rationally, believe that X is true and that having a conversation about X with a particular person (who says I have to learn quantum mechanics in order to see why X is false) is not worth my time.

that's not the case of being a plumber who knows nothing about X and knows he knows nothing about X. that's the case of thinking you know something about X.

and in that case, you should do PF. you should say why you think it's not worth your time, so that if you're mistaken about that you could get criticism (not just criticism from the other guy, btw, but from the whole public, e.g. from the smart people on the forums you frequent.)

> To name a particular example, the quantum consciousness hypothesis seems relevant to whether AI is possible on a classical computer, but also seems very likely false.

That's exactly the kind of thing I know enough about to comment on publicly, and would debate a physicist about if anyone wanted to challenge me on it.

It's crucial to get that right if you wanna make an AGI.

It's not crucial to know all the details yourself, but you ought to have some reasoning which someone wrote down. And e.g. you could refer someone who disagrees with you on this matter to the FI forum to speak with me and Alan about it (Alan is a real physicist, unlike me). (You don't have to answer everything yourself! That is one of many, many things we're happy to address if people come to our forum and ask. Though you'd have to agree with us to refer people to our answers – if you disagree with our approach to the matter then you'll have to use some other forum you agree with, or if your viewpoint has no adequate forums then you'll have to find some other option like addressing it yourself if you care about AGI and therefore need to know if classical computers can even do AGI.)

This issue is easy btw. Brains are hot and wet which causes decoherence. Quantum computers require very precise control over their components in order to function. Done. I have yet to encounter a counter-argument which requires saying much more than this about the physics involved. (If they just want to say, "But what if...? You haven't given 100% infallible proof of your position!" that is a philosophy issue which can be addressed in general purpose ways without further comment on the physics.)

> While I would be interested in a discussion with an advocate of the hypothesis for curiosity's sake, it seems quite plausible that such a discussion would reach a point where I'd need to learn more quantum mechanics to continue, at which point I would likely stop. At that point, I would be *wrong* to change my opinion to a neutral one, unless the argument so far had swayed my opinion in that direction.

If you don't know what's right, then you should be neutral. If you don't know how to address the matter, you should find out (if it's relevant to you in some high priority way, as this issue is highly relevant to people trying to work on AGI b/c if they're wrong about it then tons of their research is misguided). If no one has written down anything convenient, or made other resources, to make this easy for you ... then you shouldn't assume your initial position is correct. You shouldn't just say "I don't want to learn more QM, so I'll assume my view is right and the alternative view is wrong". You need to either give some reasoning or stop being biased about views based on their source (such as being your own view, or the view you knew first).

> This goes back to my initial claim that PF methodology seems to hold the intellectual integrity of the participant ransom, by saying that you must continue to respond to criticisms in order to keep your belief.

Yes, that's more or less the point – people who don't do that are irrational, and everyone can and should do that. You can and should always act on non-refuted ideas. There are ways to deal with criticism instead of ignoring it and then arbitrarily taking a biased side (e.g. favoring the view you already believed over the one you didn't answer).

> While this might be somewhat good as a social motivation to get people to respond to criticism, it seems very bad in itself as epistemics. It encourages a world where the beliefs of the person with the most stamina for discussion win out.

I think it's problematic socially (people feel pressure and then get defensive and stupid), but good epistemics. Socially it's common that people participate in discussions when they don't want to in order to try to answer criticisms, defend their view, show how open-minded they are, be able to say their views can win debates instead of needing to shy away from debate, etc. But when people don't want to discuss for whatever reason (think it's a waste of time, emotionally dislike the other guy, think the rival views are so dumb they aren't worth taking seriously, etc), they discuss badly. So that sucks. I wish people would just stop talking instead of pretending to be more interested in discussion than they are. I don't want to socially motivate people to discuss more, they'll just do it really badly. People only discuss well when they have good, intellectual motivations, not with gritted teeth and trying to get it over with.

You don't need stamina to win with PF. What you need instead are principles. You need to get criticisms of tons of common bad ideas – especially major categories – written down. And you need to be able to ask some key questions the other guy doesn't have answers to – like what is their answer to Popper, who wrote several books explaining why they're wrong about X. To win with PF, you need to know a lot of things, and have sources (so you don't have to write it all out yourself all the time).

This is what we want epistemologically – people who don't just ignore issues and instead have some answers of some kind written down somewhere (including by other authors, as long as you take responsibility for the correctness of your own sources). And if the answer is challenged in a way which gets past the many, many pre-existing criticisms of categories of bad ideas, then you have an interesting challenge and it should be addressed. (I discuss this as having "libraries of criticism" in Yes or No Philosophy – a stockpile of known criticisms that address most new ideas, especially bad ones, and then the few new ideas that make it past those pre-existing pre-written criticisms are worth addressing: there is something novel there which you should investigate, at least briefly – either it's got some good new point or else you can add a new criticism to your library of criticisms.) And btw you need to do something sorta like refactoring criticisms (refactoring being a programming concept) – when you get 3 similar criticisms, then you figure out the pattern involved and replace them with a general criticism which addresses the pattern. That's what making more principled and general purpose criticisms is about – finding broader patterns to address all at once instead of having tons and tons of very specific criticisms.

So the point is, when you use methods like these, whoever has the most stamina doesn't win. A much better initial approximation is that whoever knows more – whoever has the best set of references to pre-existing arguments – usually wins. But if you know something new that their existing knowledge doesn't address, then you can win even if you know less total stuff than them. (Not that it's about winning and losing – it's about truth-seeking so everyone wins.)
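To make the library-of-criticisms and refactoring ideas a bit more concrete, here's a minimal sketch in Python. Matching criticisms to ideas by keyword is a big simplification I'm using only for illustration (real criticisms address patterns of ideas, not words), and the example criticisms are placeholders:

# A minimal sketch of a "library of criticisms". Keyword matching is a
# hypothetical simplification -- real criticisms address patterns of ideas.

class CriticismLibrary:
    def __init__(self):
        # each entry: (keywords indicating the pattern, the criticism text)
        self.criticisms = []

    def add(self, keywords, text):
        self.criticisms.append((set(keywords), text))

    def check(self, idea):
        # return every stored criticism that applies to the new idea
        words = set(idea.lower().split())
        return [text for keywords, text in self.criticisms if keywords & words]

    def refactor(self, keywords, general_text):
        # replace several narrow criticisms with one general criticism that
        # covers the same pattern (like refactoring duplicated code)
        keywords = set(keywords)
        self.criticisms = [(k, t) for k, t in self.criticisms if not (k & keywords)]
        self.add(keywords, general_text)

library = CriticismLibrary()
library.add(["induction"], "induction is impossible; see Popper")
library.add(["induction", "probability"], "probabilistic induction fails the same way")
# two similar criticisms accumulated, so consolidate them into one general one
library.refactor(["induction", "probability"],
                 "all variants of induction fail for the same underlying reason")

hits = library.check("my AGI design learns by induction over observations")
if hits:
    print("already answered:", hits)  # reuse the pre-written criticism, no new work
else:
    print("novel idea -- investigate it, or add a new criticism to the library")

The check step is what lets most new (bad) ideas be answered by reference instead of by writing fresh replies, and the refactor step is what keeps the library from growing into tons and tons of very specific criticisms.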

> but it seems like MIRI in particular would attract a huge number of trolls.

what's a troll, exactly? how do you know someone is a troll? i take issue with the attitude of judging people trolls, as against judging ideas mistaken. i take issue more broadly with judging ideas by source instead of content.

also the kind of people you attract depends on how you present yourself. i present myself in a way that most bad people aren't interested in, and which is targeted to appeal to especially good people. (there are lots of details for how to do this, particularly from Objectivism. the short explanation is don't do social status games, don't do socially appealing things, those are what attract the wrong people. present as more of a pure intellectual and most dumb people won't want to talk with you.)

and wouldn't MIRI attract a reasonable number of knowledgeable people capable of answering common, bad points? especially by reference to links covering common issues, and some standard (also pre-written and linkable) talking points about when and why some technical knowledge is needed to address certain issues.

btw Eliezer considered *me* a troll and used admin powers against me on LW, years ago, rather than say a single word to address my arguments, which were mostly about Popper (he also ignored my email pointing out that his public statements about Popper are factually false in basic ways). and what was his reasoning for using admin powers against me? that I was unpopular. that is not a rational way to handle dissent (nor is the mechanism of preventing people from posting new topics if they have low karma, and then downvoting stuff you disagree with instead of arguing, oh and also rate limiting people to 1 comment per 10 minutes at the same time that they are facing a 20-on-1 argument and trying to address many people).

what's going on here is: LW uses downvotes and other mechanisms to apply social pressure to suppress dissent, on the assumption no one can resist enough social pressure escalations, and then they use ad hoc unwritten rules on any dissenters who don't bow to social pressure. this lets them avoid having written rules to suppress dissent.

more recently an LW moderator ordered me to limit my top level posts, including links, to 1 per week. i asked if he could refer me to the written rules. he said there aren't any, it's just people doing whatever they want, with no codified rules, and no predictability for the people who get punished with no warning. the moderator's argument for why he limited my posting is that my posts didn't have enough upvotes – at a time when a lot of other people's posts also barely had any upvotes b/c, apparently, the site doesn't have a lot of traffic. http://lesswrong.com/lw/56m/the_conjunction_fallacy_does_not_exist/3wf5

i have a lot of experience at moderated forums and stuff like this is completely typical. there are always unwritten rules used to suppress dissent without answering the arguments. they maybe try to do something like PF with people that have mild disagreements with them, but they don't want to think about larger disagreements that question some of the ideas they are most attached to. they just want to call that dissent "crazy", "ridiculous", etc. (That's the kind of thing Popper spent his life facing, and he also explained why it's bad.)

> Certainly I *wish* there were *some* could way to solve the problems which PF sets out to solve.

Although I explained the concept more recently, the FI community has basically been doing PF for over 20 years, and it works great IME. Our current discussion forum is at: http://fallibleideas.com/discussion-info

> At that point, I would be *wrong* to change my opinion to a neutral one, unless the argument so far had swayed my opinion in that direction.

getting back to this: arguments shouldn't *sway*. either you can answer it or you can't. so either it refutes your view (as far as you currently know) or it doesn't. see Yes or No Philosophy.

the reason PF thinks issues can be resolved is b/c you can and should act on non-refuted ideas and should judge things in a binary (refuted or non-refuted) way. if you go for standard, vague stuff about *sway* and the *weight* of arguments, then you're going to have a lot of problems. those are major epistemological errors related to justificationism and induction. the FI epistemology is not like the mainstream one. you might be able to adapt something similar to PF to work with a traditional epistemology, but i haven't tried to do that.

> if it's true that people sometimes don't want to respond to arguments, and you think this is a wrong reflex, isn't it worth having a lot of curiosity about why that is, what motivations they might have for behaving in this way, so that you can be sure that your reasons in favor of PF outweigh the reasons against, and your methodology for PF address the things which can go wrong if you try to respond to all criticisms, which (perhaps) have been involved in people learning *not* to respond to all criticisms based on their life experience, or at least not naturally learning *to* respond to all criticisms?

yes i've been very curious about that and looked into it a great deal. i think the answers are very sad and involve, in short, people being very bad/irrational (this gets into static memes and bad parenting and education, mentioned above – i think people are tortured for the first 20 years of their life and it largely destroys their ability to think well, and that's the problem – parents and school are *extremely destructive*). i don't like the answers and have really, really looked for alternatives, but i haven't been able to find any alternatives that i don't have many decisive criticisms of. i have been doing PFs with my answers for a long time – addressing criticism, seeking out contrary ideas to address, etc. mostly what i find is people don't want to think about how broken various aspects of the world (including their own lives) are.

unfortunately, while i know a lot about what's wrong, i haven't yet been able to create anything like a complete solution (nor has anyone else, like DD, Rand, Popper or whoever you think is super smart/wise/etc). i know lots of solutions that would help with things if people understood them and did them, but people block that in the first place, and have emotional problems getting in the way, and various other things. exactly as one would expect from static meme theory.

side note: as mentioned above, i disagree with reasons/arguments outweighing anything (or having weights).

> elephant and the rider

yeah i've heard this one. FI has several ways of talking about it, too.

> So, to a large extent, changing someone's mind is a matter of trying to reach the intuitions.

I think changing someone's mind is a matter of *them wanting to learn and doing 90% of the work*. E.g. they need to do most of the work to bridge the gap from your arguments to their intuitions/unconscious. You can help with this in limited ways, but they have to be the driving force. (BTW CR and Objectivism both converged on this position for different reasons.)

> Part of my contention would be that most of the time, the sort of arguments I don't want to respond to are ones which *obviously* from my position don't seem like they could have been generated for the reasons which the arguer explicitly claims they were generated, for example, couldn't have been generated out of concern for me.

Dishonesty is a big problem. But I say what I think so that the person can answer me or e.g. someone from FI can point out if they think the person wasn't dishonest and I'm wrong. See also Objectivism's advocacy of pronouncing moral judgements as the key to dealing with an irrational society. An alternative way to address it is to say what they could do (what kind of material to send you) that you'd be interested in and which could change your mind about your public claims – explicitly tell them a Path Forward that'd be acceptable to you. (In the first case, the PF is they could point out a way you misinterpreted them. Miscommunication is super common, so going silent is really risky about *you being wrong*. So if they say "I see how you took that as dishonest, let me clarify..." then that's good, and if they say "fuck you, you ad hominem user." and think talking about dishonesty is off limits, then, oh well, they are blocking the possibility of resolving an issue between you. And yes I agree that dishonest people are dishonest *with themselves most of all*, rather than just trying to trick you. If they want help with that, cool. If they don't want to think about it, OK bye. If they propose a way to proceed which is mutually acceptable given the outstanding disagreement about their dishonesty, ok, cool. If you mutually agree to go your separate ways, that's OK too.)

---

this is already long, so i'm not going to answer the rest now. in particular i haven't answered all of the section of your post with 4 links plus commentary. perhaps you'll be able to use things i said above to address some of those matters without me commenting directly. please let me know which of those you still want me to answer, if any.

also other requests are welcome. e.g. if you want me to write shorter posts, i can do that. i can focus on one or two key issues at a time. i can ask questions and debate logically in brief ways, or i can write at length and explain lots of related thoughts (hopefully helpfully). there are many options. i'm pretty flexible. (btw making such requests is an important part of managing your time with PF – e.g. you can ask for a short version of something, or to stick to one point at a time. you can do discussion-management techniques. i find people often want the discussion to be organized a certain way, but don't tell me, and it's hard to guess. what people want varies quite a lot. and also they're frequently hostile to meta discussion, methodology discussion, etc. if you find some problem with continuing the discussion, or the way i'm discussing, just say so and i'll either suggest some way to address it or else i'll agree that we can go our separate ways without accusing you of unwillingness to discuss and PF failure.)

> I would like for you to put more effort into understanding what might motivate a person to do anything other than Paths Forward other than stupidity or bias or not having heard about it yet or other such things. If you can put yourself in the shoes of at least one mindset from which Paths Forward is an obviously terrible idea, I think you'll be in a better position both to respond constructively to people like me, and to revise Paths Forward itself to actually resolve a larger set of problems.

I have tried very hard to do this. But I don't think such a mindset exists with no stupidity, bias, irrationality or ignorance of PF. In other words, I don't think PF is wrong. I'm well aware of many objections, but I think they all fall into those categories. I think this is to be expected – if I had been able to find any more rival perspectives that I couldn't refute, then I would already have changed PF accordingly. So we could talk about lots of standard things people are like, but in each case I have an understanding of the matter, which you may find rather negative, but which I nonetheless consider mandatory (this gets into static memes, bad parenting, etc, again. The world is not very rational. I don't know anywhere besides FI to find any fully rational perspectives, and even one contradiction or bit of irrationality can have massive implications. Broadly I think there is stuff which is compatible with unbounded progress – which btw is what the title of _The Beginning of Infinity_ refers to and the book talks about this – and there is stuff which is **not** compatible with unbounded progress. And more or less everyone has some ideas which block unbounded progress, and that makes PF unappealing to them b/c it's all about unbounded progress and doesn't allow for their progress-blocking ideas.).

curi at 7:59 PM on November 20, 2017 | #9262

PF Example

http://lesswrong.com/lw/pjc/less_wrong_lacks_representatives_and_paths_forward/dydt

I decided not to reply to this comment (even though the answer is only one word, "yes").

here are my thoughts on how my silence is PF compatible:

1) they didn't say why it's important, why they're asking, what this has to do with anything i care about, why this matters, etc

2) their comment is an ambiguous insult, and they didn't say anything to try to clarify they didn't intend it as an insult

3) i don't think it's necessary to answer everything the first time it's said. if it's a big deal, and they really wanna tell you, they can repeat it. this is contextual. sometimes saying nothing suggests you'll never say anything. but on LW i've recently written lots of comments and been responsive, plus i wrote PF, plus i'm super responsive on the FI forum, my blog comments and direct email, so anyone who really wants an answer from me can use one of those (as i tell anyone who asks or seems like they might want to know that – this guy has not communicated to me he'd have any interest in that).

4) if he wants a Path Forward he can ask. i will explicitly follow up on non-responses by asking for and linking PF, or by asking why they didn't reply and saying it's ambiguous, etc. but often i first try again in a non-meta way by asking my question a different way or explaining my point a different way – often plus some extra explanation of why i think it matters (or sometimes just the extra explanation of why it's important).

i think, in this way, if he has something important to tell me, it's possible for him to do it and be heard and addressed. there are ways he could proceed from here, including cheap/ez ones, which i would be responsive to. but i don't think replying to this is necessary. (and i already said a lot of stuff about Popper on LW, gave references, made websites, etc, which he has chosen not to engage with directly. if this is step 1 in engaging with that, he can say so or try step 2 or something.)

curi at 8:10 PM on November 20, 2017 | #9263
here's a short comment i just wrote. i don't think it was necessary to reply (i already said some of this, wrote PF, etc) but i try to make things like this super clear and unambiguous.

http://lesswrong.com/lw/pjc/less_wrong_lacks_representatives_and_paths_forward/dydu

i don't think any amount of stamina on his part would get me to use much more time on the discussion. to get more time/attention from me, he'd have to actually start speaking to the issues (in this case, PF itself), at which point i'd actually be interested in discussing! to the extent he's willing to meet strong demands about PF stuff, then i do want to discuss; otherwise not; and in this way i don't get overwhelmed with too much discussion. (and btw i can adjust demandingness downwards if i want more discussion. but in this case i don't b/c i already talked Popper with him for hours and then he said he had to go to bed, and then he never followed up on the many points he didn't answer, and then he was hostile to concepts like keeping track of what points he hadn't answered or talking about discussion methodology itself. he was also, like many people, hostile to using references. i absolutely don't consider it mandatory, for PF, to talk to anyone who has a problem with the use of references. that's just a huge demand on my time with *no rational purpose*. admittedly i know there are some reasonable things they hope to accomplish with the "no references" rule, but there are better ways to accomplish those things, and they don't want to talk about that methodology issue, so that's an impasse, and it's their fault, oh well.)

curi at 8:19 PM on November 20, 2017 | #9264
another example:

http://lesswrong.com/lw/pjc/less_wrong_lacks_representatives_and_paths_forward/dydp

i think replying to him was totally optional in terms of PF. but i did anyways b/c i like the added clarity, and i find issues like this interesting (ways people are mean, irrational, refuse to think, etc). i have an audience of other FI people who will read my comments like this, who also want to learn about people's irrationality, discussion-sabotaging, cruelty, etc, and sometimes discussions of comments like this happen on FI.

here's another one i didn't answer:

http://lesswrong.com/lw/pjc/less_wrong_lacks_representatives_and_paths_forward/dydl

it's too repetitive. i already wrote PF (and there's tons of epistemology writing by me/DD/Popper) explaining how i think matters are resolved. he knows this. he isn't trying to make progress in the discussion and address my written view (e.g. by quoting something i said and pointing out a mistake). he's just attacking me.

the pattern thing is weird. i talked with several ppl who disagree with me in similar ways, and the same pattern repeated: 1) disagree about Popper, 2) run into a methodology problem, 3) they don't want to do PF

yeah, so what? that doesn't make me wrong. he doesn't explain why i'm wrong. he accuses me of accusing ppl of bad faith but he doesn't give any quotes/examples so that's dumb.

this particular person actually just 100% refused to

1) let me use any references

2) discuss the use of references

and i discussed some with him anyway, to the extent i wanted to. and then he was being dumb and lazy so i suggested we stop talking unless he objected, and he did not object, but now he's trying to disrupt my conversations with other ppl.

curi at 8:27 PM on November 20, 2017 | #9265
btw i don't think he can disrupt my conversations with anyone good, if i just ignore him (and if anyone else thinks i'm doing PF wrong or that i need to answer some point of his, they're welcome to say so – at which point i can refer them to some very brief summary of the impasse). he may disrupt conversations with bad ppl (who are fooled or distracted by him) but that doesn't really matter. he may have a lot of stamina, but that won't get him anywhere with me b/c i just point out the impasse he's causing and then i'm done (unless i want to talk more for some reason, but there's no PF requirement left).

the only way PF would tell me to talk more with him (or anyone) is if he wasn't creating impasses to block PF and unbounded progress – that is, if he was super rational – in which case i'd be fucking thrilled to have met him, and would be thrilled to talk with him (though possibly primarily by referring him to references and resources so he can learn enough to catch up to me, and in the meantime not putting much time into it myself – i've put a lot of effort into creating some better Paths for people who want to learn FI/CR/Objectivism stuff, but i'm not required to give them a bunch of personal, customized help). and if he knew enough that our conversations didn't constantly and quickly become "my reason is X, which u don't know about, so read this" then that'd take more time for me and be GREAT – i would love to have people who are familiar enough with the philosophy i know to discuss it (alternatively they can criticize it, or criticize learning it, instead of learning it all. a criticism of it is important, i would want to address that if it said something new that i haven't already addressed. and a criticism of learning it would interest me but also possibly let us just go our separate ways if he has a reason not to learn it, i might just agree that's fine and he goes and does something else, shrug).

curi at 8:35 PM on November 20, 2017 | #9266

PF

you can ask ppl, at the outset, things like:

- how much effort will this take, if you're right?

- why is it worth it, for me?

- is there a short version?

- i don't want to allocate that much effort to X b/c i think Y is more important to work on b/c [reason]. am i missing something?

this is much easier for them to speak to (and with less work for you) if you make public some information like:

- your views (so someone could point out a mistake you made, and then you can see the importance of figuring out the truth of that issue)

- your current intellectual priorities, and why (and someone could point out some issue, like that issue X undermines your project, so someone better address X instead of just betting your career on X being false without investigating it cuz ur too busy doing the stuff that depends on X being false.)

- your policies for discussion. what kinds of conversations are you looking for or not looking for? do you have much time available? are there any forums you use? what are efficient formats and styles to contact you with? (ppl can then either follow this stuff or else point out something wrong with it. as an example, if someone really wants my attention, they can use the FI forum, my blog comments, or email me. and they can quote specific stuff they think is wrong, write clearly why it's wrong, and usually also broadly say where they're coming from and what sorta school of thought they are using premises from. i think those are reasonable requests if someone wants my attention, and i have yet to have anyone make a case that they aren't. note that links are fine – you can post a link to the FI forum which goes to your own blog or forum post or something. and you can keep writing your replies there and only sending me links; that's fine with me. but i often don't like to use other forums myself b/c of my concerns about 1) the permalinks not working anymore in 20 years 2) moderator interference. for example on Less Wrong a moderator deleted a thread with over 100 comments in it. and a public facebook group decided to become private and broke all the permalinks. and at reddit u can't reply to stuff after 6 months, so the permalinks technically still work but are broken in terms of further discussion.)

i routinely try to ask ppl their alternative to PF. u don't want to do PF, ok, do u have written methodology (by any author, which you use) for how you approach discussion? the answer is pretty damn reliable: no, and they don't care to lay out their approach to discussion b/c they want to allow in tons of bias. they don't want to have written policies which anyone could expect them to follow, point out ways they don't follow them, or point out glaring problems with them. just like forum moderators hate general written policies and just wanna be biased. our political tradition is, happily, much better than this – we write out laws and try to make it predictable in advance what is allowed or not, and don't retroactively apply new laws. this reduces bias, but it's so hard to find any intellectuals who are interested in doing something like that. Is PF too demanding? OK, tell me an alternative you want to use. I suggest it should have the following property:

if you're wrong about something important, and i know it, and i'm willing to tell you, then it should be realistic for your error to be corrected. (this requires dealing with critics who are themselves mistaken, b/c you can't just reliably know in advance which critics are correct or not. and in fact many of the best critics seem "crazy" b/c they are outliers and outliers are where a substantial portion of good ideas come from. plenty of outliers are also badly wrong, but if you dismiss all outliers that initially seem badly wrong to you then you will block off a lot of good ideas.)

people use methodology like deferring to other people's judgement: if it's so good, someone else will accept it first, and then i'll consider it when it's popular. this is so common that it's really hard for great new ideas to become popular b/c so many ppl are waiting for someone else to like them first. and they don't want to admit they use this kind of approach to ideas. :/

curi at 9:09 PM on November 20, 2017 | #9267
>> but not making much effort to see the deeper intuitions which made me generate the point, and as a result, your response does little to shift my intuitions.

> It's hard for me to get in your head. I don't know you. I don't know your name, your country, your age, your profession, your interests, how you spend your time, your education, or your background knowledge. If you have a website, a blog, a book, papers, or an online discussion history (things I could skim to get a feel for where you're coming from), I don't know where to find it.

> And speculating on what people mean can cause various problems. I also don't care much for intuitions, as against reasoned arguments.

This likely constitutes a significant disagreement. In retrospect I regret the way I worded things, which sort of implied that I *expect* you to get into my head as a matter of good conversational norms, or something like that. However, I hope it is clear how that point connected to a lot of other points in what I wrote. I do indeed think the point of communication is to help get into the other person's head, and while I agree with your point about the trouble which can come from speculating about what people mean (and the corresponding cleanliness of just responding to what they literally say), I think we disagree significantly about the tradeoff there. I say again that "The Righteous Mind" is the best book I can think of to convey the model in my head, there, although it doesn't spell out many of the implications for having good conversations.

What I said was insufficiently charitable toward you because I was attempting to make something clear, and didn't want to put in qualifications or less forceful statements which (even if accurate) might make it unclear. I'm going to do that again in the following paragraph (to a more extreme degree):

What I hear you saying is "I'm not aware of any underlying 'feelings' which guide my logical arguments to motivated-cognition in specific directions, so I obviously don't have any of those. I agree that people who have those can't use PF. But why worry about those people? I only want to talk to rational people who don't have any hidden underlying feelings shaping their arguments."

My model of how the brain works is much closer to everything coming from underlying intuitions. They aren't *necessarily* irrational intuitions. And the explicit reasoning isn't *useless*. But, there's a lot under the surface, and a lot of the stuff above the surface is "just for show" (even among *fairly* rational people).

Consider a mathematician working on a problem. Different mathematicians work in different ways. The explicit reasoning (IE, what is available for conscious introspection) may be visual reasoning, spatial reasoning, verbal reasoning, etc. But, there has to be a lot going on under the surface to guide the explicit reasoning. Suppose you're doing conscious verbal reasoning. Where do the words come from? They don't just come from syntactic manipulation of the words already in memory. The words have *meaning* which is prior to the words themselves; you can tell when a word is on the tip of your tongue. It's a concept in a sort of twilight state between explicit and implicit; you can tell some things about it, but not everything. Then, suddenly, you think of the right word and there's a feeling of having full access to the concept you were looking for. (At least, that's how it sometimes works for me -- other times I have a feeling of knowing the concept exactly, just not the word.) And, even if this weren't the case -- if the words just came from syntactic manipulation of other words -- what determines which manipulations to perform at a given time? Clearly there's a sort of "mathematical intuition" at work, which spits out things for the explicit mind to consider.

And, even among trained mathematicians, this "mathematical intuition" can shut off when it is socially/politically inconvenient. There was an experiment (I can try to dig up the reference if you want) in which math majors, or professional mathematicians (?), were asked a math question and a politically charged version of the same math question, and got the politically charged version wrong at a surprisingly high rate (something like 20%).

This doesn't make the situation hopeless. The apparent 'logical reasoning' isn't as rational as it seems, but on the other hand, the intuitions themselves aren't just dumb, either.

Imagine a philosopher and a chemist who keep having arguments about certain things in philosophy being impractical. One way to go about these arguments would be to stay on-subject, looking at the details of the issues raised and addressing them. However, the philosopher might soon "get the idea" of the chemist's arguments, "see where they are coming from". So, one day, when the chemist brings up one point or another, the philosopher says "Look, I could address your points here, but I suspect there's a deeper disagreement behind all of the disagreements we have..." and starts (gently) trying to solicit the chemist's intuitions about the role of thinking in life, what a chain of reasoning from plausible premises to implausible conclusions can accomplish, or what-have-you. At the end of the hour, they've created an explicit picture of what was driving the disagreement -- that is to say, what is actually motivating the chemist to come argue with the philosopher time after time. Now they can try and address *that*.

... Having written that, I suddenly become aware that I haven't spent much of this conversation trying to solicit your intuitions. I suppose I sort of take PF at face value -- there's a clearly stated motivation, and it seems like a reasonable one. But, you've likely had a number of conversations similar to this one in which you don't end up changing PF very much. So I should have reason to expect that you're not very motivated by the kinds of arguments I'm making / the kinds of considerations I'm bringing up and personally motivated by, and perhaps I should take the time to wonder why that might be. ... but, for the most part it seems like pushing the conversation along at the object level and seeing your responses is the best way to learn about that.

I'm more directly in the dark about what motivates you with the CR stuff. What motivated you to write an open letter to MIRI? To what degree are you concerned about AI risk? Is your hope to fix the bubble of new epistemological norms which has manifested itself around Eliezer?

> Also I designed my life not to need social status – e.g. I'm not trying to get tenure, get some prestigious academic gatekeepers to publish me, or impress a think tank boss (who gets taken in by social games and office politics) enough to keep paying me. I'm not reliant on a social reputation, so the issue for me is purely if the jerks can make me feel bad or control me (they can't). Such a life situation is more or less necessary for a serious intellectual so they can have intellectual freedom and not be pressured to conform (sorry academia). If PF doesn't work well for someone b/c they are under a bunch of social pressures, I understand the issue and advise them to redesign their life if they care about the truth (I don't expect them to do this, and I'd be thrilled to find a dozen more people who'd do PF).

This seems to sum up a lot, though perhaps not all, of my concerns regarding PF.

Even MIRI, which is in many ways exceptionally free of these kinds of attachments, has to worry somewhat about public image and such. So, although I think I understand why you would like MIRI to use PF, can you explain why you think MIRI should want to use PF?

> BTW the Fallible Ideas (FI) yahoo discussion group is public and mostly unmoderated. This has never been a big problem. I will ask problem people questions, e.g. why they're angry, why they're at FI, or what they hope to accomplish by not following FI's ethos. I find that after asking a few times and not saying anything else, they either respond, try to make serious comments about the issues, or leave – all of which are fine. Other regular posters commonly ask similar questions or ignore the problem people, instead of engaging in bad discussion. Sometimes if I think someone is both bad and persistent, and others are responding to them in a way I consider unproductive, I reply to the other people and ask why they're discussing in that way with a person with flaws X, Y and Z, and how they expect it to be productive. Stuff like this can be harsh socially (even with no flaming), and is effective, but is also fully within the rules of PF and reason (and the social harshness isn't the intent, btw, but I don't know how to avoid it, and that's why tons of other forums have way more trouble with this stuff, b/c they try to be socially polite, which doesn't solve the problem, and then they don't know what to do (within the normal social rules) so they have to ban people).

*likes this paragraph*

>> Eliezer moved much of his public discussion to Facebook

> I consider Facebook the worst discussion forum I've used. I've found it's consistently awful – even worse than twitter.

I agree, and I'm sad about EY moving there. It had the one thing he wanted, I guess.

Your practice of using an old-fashioned mailing list and a fairly old-fashioned looking website is very aesthetically appealing in comparison. I suspect a lot of bad potential dynamics are warded off just by the lack of shiny web2.0.

>> At this point I feel I may assign positive expectation to the proposition of MIRI starting to follow PF

> I doubt it because only one MIRI person replied to my email, and his emails were brief, low quality, and then he promptly went silent without having said anything about Paths Forward or providing any argument which addresses Popper.

Ah, I meant more like "I think I might like to see it happen". IE, if LW followed PF, although there may be problems, it could help create a good dynamic among related AI safety and X-risk organizations and also the wider EA community. In the fantasy world where Eliezer starts following PF, he loses a lot of time replying to stuff on LW, which is likely bad; but, this helps bootstrap the new LW to the levels of quality of the old LW and the even older days on Overcoming Bias, which would be pretty good in many respects.

(If nothing else, PF seems like a good methodology for producing a lot of interesting text to read!)

> > > you need to have some understanding of what you're even trying to build before you build it. how does intelligence work? what is it? how do you judge if you have it? do animals have it? address issues like these before you start coding and saying you're succeeding.

> > You'll get no disagreement about that from the Bayesian side.

> I found *everyone* I spoke with at LW was disagreeable to this.

... huh.

Well, the MIRI agent foundations agenda certainly agrees strongly with the sentiment.

-----
(More some time later.)

PL at 11:02 PM on November 20, 2017 | #9268
> I'm more directly in the dark about what motivates you with the CR stuff. What motivated you to write an open letter to MIRI? To what degree are you concerned about AI risk? Is your hope to fix the bubble of new epistemological norms which has manifested itself around Eliezer?

I have zero concern about AI risk. I think that research into that is counterproductive – MIRI is spreading the idea that AI is risky and scaring the public. AI risk does not make sense given CR/FI's claims.

I like some of the LW ideas/material, so I tried talking to LW again. I don't know anywhere better to try. It's possible I should do less outreach; that's something I'm considering. I wrote a letter to MIRI to see if anyone there was interested, b/c it was an easy extension of my existing discussions, so why not. I didn't expect any good reply, but I did expect it to add a bit more clarity to my picture of the world, and, besides, my own audience likes to read and sometimes discuss stuff like that.

i care about AGI, i think it's important and will be good – and i think that existing work is misguided b/c of errors refuted by CR. that's the kind of thing i'd like to fix – but i don't think there's any way to. but sometimes i try to do stuff like that anyway. and maybe by trying i'll meet someone intelligent, which would be nice.

AGI is not the only field i consider badly broken and would love to fix. anti-aging is another high priority one. i talked with Aubrey de Grey at length – the guy with the good approach – but there's absolutely no Paths Forward there. (his approach to the key science issues is great and worthwhile, but then his approach to fundraising and running SENS is broken and sabotaging the project and quite possibly will cause both you and me to die. also he's wrong about how good cryonics currently is (unfortunately it's crap today). despite my life being at stake, AdG eventually convinced me to give up and go do other things... sigh.)

to me, Eliezer/LW/etc looks like a somewhat less bad mainstream epistemology group (the core ideas are within the overall standard epistemology tradition) with an explicit interest in reason. and as a bonus they know some math and programming stuff. there's not many places i can say even that much about. lots of philosophy forums are dominated by e.g. Kantians who hate reason, or talk a lot of nonsense. LW writing largely isn't nonsense, i can understand what it says and see some points to it, even if i think some parts are mistaken. i like most of Eliezer's 12 rationality virtues, for example. and when i talked with LW ppl, there were some good conversational norms that are hard to find elsewhere.

I think CR is true and extremely important, and in general no one wants to hear it. BTW I got into it because I thought the argument quality was high, and I liked that regardless of the subject matter (I wasn't a philosopher when I found it, I changed fields for this).

> Even MIRI, which is in many ways exceptionally free of these kinds of attachments, has to worry somewhat about public image and such. So, although I think I understand why you would like MIRI to use PF, can you explain why you think MIRI should want to use PF?

They should use PF so their mistakes can get fixed – so they can stop being completely wrong about epistemology and wasting most of their time and money on dead ends.

Next to the problem of betting the productivity of most of their work on known mistakes, I don't think reputation management is a major concern. Yes it's a concern, but a lesser one. What's the point of having a reputation and funding if you aren't able to do the intellectually right stuff and be productive?

And I think lots of reputation concerns are misguided. The best material on this is the story of Gail Wynand in _The Fountainhead_. And just, broadly, social status dynamics do not follow reason and aren't truth-seeking. One of my favorite Rand quotes, from _The Virtue of Selfishness_, is:

>>> The excuse, given in all such cases, is that the “compromise” is only temporary and that one will reclaim one’s integrity at some indeterminate future date. But one cannot correct a husband’s or wife’s irrationality by giving in to it and encouraging it to grow. One cannot achieve the victory of one’s ideas by helping to propagate their opposite. One cannot offer a literary masterpiece, “when one has become rich and famous,” to a following one has acquired by writing trash. If one found it difficult to maintain one’s loyalty to one’s own convictions at the start, a succession of betrayals—which helped to augment the power of the evil one lacked the courage to fight—will not make it easier at a later date, but will make it virtually impossible.

any time you hold back what you think is the truth, b/c you think something else will be better for your reputation, you are *compromising*. getting popular and funded for the wrong reasons is such a bad idea – whether you compromise or whether you try to con the public and the funders. consistency and principles are so powerful; sucking up to fools so they'll like you better just destroys you intellectually.

> In the fantasy world where Eliezer starts following PF, he loses a lot of time replying to stuff on LW, which is likely bad;

To the extent issues have already been addressed, it should take little time to give out a few links and get some of his fans to start doing that.

I think the time sink is the issues he hasn't addressed, but should have – and that isn't a bad thing or a loss, dealing with stuff like Popper's arguments is the epitome of what being an intellectual is about, it's truth-seeking. People should stop assuming conclusions in disputes they haven't addressed! That's bad for them because it often means assuming errors.

---

> My model of how the brain works is much closer to everything coming from underlying intuitions. They aren't *necessarily* irrational intuitions. And the explicit reasoning isn't *useless*. But, there's a lot under the surface, and a lot of the stuff above the surface is "just for show" (even among *fairly* rational people).

We have significantly different models, but with some things in common like a large role for unconscious/subconscious thought.

I agree with parts of what you're saying, like the philosopher and the chemist thing.

> What I hear you saying is "I'm not aware of any underlying 'feelings' which guide my logical arguments to motivated-cognition in specific directions, so I obviously don't have any of those. I agree that people who have those can't use PF. But why worry about those people? I only want to talk to rational people who don't have any hidden underlying feelings shaping their arguments."

I have lots of unconscious thinking processes which play a huge role in my life. They are much less oriented towards emotion or intuition than most people's.

Two of the main things I think are going on here are:

1) a mental structure with many layers of complexity, and conscious thought primarily deals with the top several layers.

2) automating policies. like habits but without the negative connotations. life is too complicated to think everything through in real time. you need to have standard policies you can use, to let you act in a good way in many scenarios, which doesn't take much conscious attention to use. the better you can set up this unconsciously-automated thinking, the more conscious attention is freed up to make even better policies, learn new things, etc. the pattern is you figure out how to handle something, automate it, and then you can build on it or learn something else.

lots of this is set up in early childhood so people don't remember it and don't understand themselves.

however, it's possible to take an automated policy and set it to manual, and then act consciously, and then adjust the policy to fix a problem. people routinely do this with *some* things but are super stuck on doing it with other matters. similarly it's possible to direct conscious attention to lower level layers of your mind, and make changes, and people do this routinely in some cases, but get very stuck in other cases.

i think being good at this stuff, and at introspection, is pretty necessary for making much progress in philosophy. i don't see any way to lower my standards and still be effective. i think most people are not even close to being able to participate effectively in philosophy without making some major changes to how they live/think/etc.

i have exposed my own thinking processes to extensive criticism from the best people i could find and the public. if there's bias there, no one knows how to spot it or explain it to me. by contrast, i routinely and easily find massive biases that other exceptional people have. (this was not always the case. i changed a ton as i learned philosophy. i still change but it's largely self-driven, with some help from some dead authors, and indirect little bits of help from others who e.g. provide demonstrations of irrationality and sometimes answer questions about some details for me.) regarding "best people", the search methods have been rather extensive b/c e.g. DD has access to various prestigious people that i haven't sought the credentials to access, and he's sadly not found much value there. basically all the best people were found either via DD's books, Taking Children Seriously, or my writing, or else their own public writing let us find them.

i'm aware that i'm claiming to be unbelievably unusual. but e.g. i'm extremely familiar with standard stuff like ppl getting defensive or angry, and being biased for "their" side of the discussion, and that is in fact not how i discuss. and i have my current beliefs because i enjoyed losing argument after argument for years. i prefer losing arguments to winning them – you learn more that way. i really want to lose more arguments, but i was too good at it and it gets progressively harder the more you do it.

so no it's nothing like "not aware ... so I obviously don't have any of those." i've studied these things extensively and i'm good at recognizing them and well aware of how common they are. i don't know how to fix this stuff (in a way people will accept, want, do) and i don't know how to lower standards and still have stuff be very intellectually productive. put another way, if someone is nowhere near DD's quality of argument, then what do they matter? i've already developed my ideas to the higher standard, and people need to catch up if they want to say anything. this isn't a hard limit – sometimes a person has a bunch of bad ideas, uses bad methods, and still has one great idea ... but that's quite rare. if someone is too far from the cutting edge then i can learn more by rereading top thinkers than talking to them, so i mostly talk to them to get reminders of what people are confused about, learn about psychology, and try a few new ideas to help them. and also, just in case they're great or have a good idea, and b/c i like writing arguments as part of my thinking process. but the standards for things like objectivity are dictated by the requirements of making intellectual progress beyond what's already known, and can't be lowered just b/c most ppl are nowhere near meeting the standards.

> ... Having written that, I suddenly become aware that I haven't spent much of this conversation trying to solicit your intuitions. I suppose I sort of take PF at face value -- there's a clearly stated motivation, and it seems like a reasonable one. But, you've likely had a number of conversations similar to this one in which you don't end up changing PF very much. So I should have reason to expect that you're not very motivated by the kinds of arguments I'm making / the kinds of considerations I'm bringing up and personally motivated by, and perhaps I should take the time to wonder why that might be. ... but, for the most part it seems like pushing the conversation along at the object level and seeing your responses is the best way to learn about that.

the more you do PF, the harder it is for anyone to say anything new to you that you haven't already addressed. you keep building up more and more existing answers to stuff and becoming familiar with more and more uncommon (and some rare) ideas.

i'm interested in the kinds of things you bring up, but have already analyzed them to death, read many perspectives on them and translated those to CR and Objectivist terms, etc. it's an ongoing unsolved problem, it's very hard, and if someone really wants to help with it they should learn CR or Objectivism, or preferably both, b/c basically those are the good philosophies and all the other ones suck. unfortunately there are only a double-digit number of people who are good at either, and a single-digit number good at both, and there's not much interest. and so people keep dying and having all kinds of miseries that are so unnecessary :/

> I do indeed think the point of communication is to help get into the other person's head, and while I agree with your point about the trouble which can come from speculating about what people mean (and the corresponding cleanliness of just responding to what they literally say), I think we disagree significantly about the tradeoff there.

i am interested in people's thought processes. actually that's a common way i get people to stop speaking to me – by asking them questions about their thought processes (or offering my analysis).

however, the larger the perspective gap, the harder it is to accurately get inside someone's head, so i have to be careful with it. it's also super awkward to try to talk with them about their thought processes when they're dishonest with themselves about those thought processes (so pretty much always). it's also difficult when you have different models of how thinking works (different epistemologies) – that can lead to a lot of misunderstandings.

curi at 12:59 AM on November 21, 2017 | #9269
(I very much liked a lot of that reply.)

> > > In the fantasy world where Eliezer starts following PF, he loses a lot of time replying to stuff on LW, which is likely bad;

> To the extent issues have already been addressed, it should take little time to give out a few links and get some of his fans to start doing that.

> I think the time sink is the issues he hasn't addressed, but should have – and that isn't a bad thing or a loss, dealing with stuff like Popper's arguments is the epitome of what being an intellectual is about, it's truth-seeking. People should stop assuming conclusions in disputes they haven't addressed! That's bad for them because it often means assuming errors.

This very recent post of his illustrates the size of the gap between your thinking and Eliezer's on engaging critics & other key PF ideas:

https://www.lesserwrong.com/posts/dhj9dhiwhq3DX6W8z/hero-licensing

It's pretty long, but I don't think I can single out a piece of it that illustrates the point well, because there are little bits through the whole thing and they especially cluster toward the end where there's a lot of dependence on previous parts. Worse, there's a lot of dependence on the book he recently released. Long story short, he illustrates how he personally avoids conversations which he perceives will be unproductive, and he also talks at length about why that kind of conversation is worse than useless (is actively harmful to one's rationality). I think it's fair to say that he is currently trying to push the LW community in a direction which, while not directly opposite PF, is in many ways opposed to PF.

I agree with most of what's in there, and while I'm not proposing it as a path forward on our conversation, I would be interested to hear your replies to it if you found the time.

But closer to the object level -- I suspect that engaging with people in this way is very costly to him due to medical issues with fatigue, and due to the way he processes it (in contrast to the way you process it). So it's not just philosophical issues which he might some day change his mind on; he's also a particularly bad candidate for engaging in PF for unrelated reasons.

PL at 2:36 AM on November 21, 2017 | #9270
most people don't even think they have something important to say. so they won't claim it, and won't take your time that way. you just have to ask something like "do you think you have something genuinely super important to say, and you're a world class thinker?" and they won't even want to claim it.

Anonymous at 11:38 AM on November 21, 2017 | #9271
sometimes i ask people to write a blog post, and they refuse. if you don't think your point is even worth a blog post, why is it super important that i pay attention to it!?

Anonymous at 2:17 PM on November 21, 2017 | #9272
> medical issues with fatigue

either certain error correction has been done to certain standards, or it hasn't. if it hasn't, don't claim it has.

the reason it hasn't happened doesn't fundamentally matter. fatigue, stupidity, zero helpers, poverty, insanity ... too bad. life's hard sometimes. get help, work while tired, find a way. or don't, and be honest about that.

if you want to develop and spread inadequately error-corrected ideas, b/c error correction is hard (generally, or for you personally) ... that sounds like a bad idea.

arguing for different standards of error correction (generally, or for a particular field/speciality/issue) is fine. but the argument for that will have nothing to do with fatigue.

you can use your fatigue to make an argument about what you should do with your life but you can't use fatigue to argue about the quality of particular ideas and what methodology has been used to develop them.

if someone has issues with fatigue or drugs or partying or whatever else, they should still use the same methods for making intellectual progress – b/c other methods *don't work*. do PF more slowly or whatever instead of just skimping on error correction and then doing stuff that's *wrong*.

mistakes are common and inevitable. a lot of error correction is absolutely necessary or your ideas are going to be crap. make it happen, somehow, regardless of your limitations, or else accept that your ideas are crap and treat them accordingly.

Anonymous at 7:54 PM on November 21, 2017 | #9273

Anonymous at 7:59 PM on November 21, 2017 | #9274
those shitty errors shouldn't have been made in the first place. ET took him up on those years ago. Yudkowsky is a fool and an appalling scholar.

Anonymous at 11:36 PM on November 21, 2017 | #9275
