
Open Letter to Machine Intelligence Research Institute

I emailed this to some MIRI people and others related to Less Wrong.


I believe I know some important things you don't, such as that induction is impossible, and that your approach to AGI is incorrect due to epistemological issues which were explained decades ago by Karl Popper. How do you propose to resolve that, if at all?

I think methodology for how to handle disagreements comes prior to the content of the disagreements. I have writing about my proposed methodology, Paths Forward, and about how Less Wrong doesn't work because of the lack of Paths Forward:

http://curi.us/1898-paths-forward-short-summary

http://curi.us/2064-less-wrong-lacks-representatives-and-paths-forward

Can anyone tell me that I'm mistaken about any of this? Do you have a criticism of Paths Forward? Will any of you take responsibility for doing Paths Forward?

Have any of you written a serious answer to Karl Popper (the philosopher who refuted induction – http://fallibleideas.com/books#popper )? That's important to address, not ignore, since if he's correct then lots of your research approaches are mistakes.

In general, if someone knows a mistake you're making, what are the mechanisms for telling you and having someone take responsibility for addressing the matter well and addressing followup points? Or if someone has comments/questions/criticism, what are the mechanisms available for getting those addressed? Preferably this should be done in public with permalinks at a venue which supports nested quoting. And whatever your answer to this, is it written down in public somewhere?

Do you have public writing detailing your ideas which anyone is taking responsibility for the correctness of? People at Less Wrong often say "read the sequences" but none of them take responsibility for addressing issues with the sequences, including answering questions or publishing fixes if there are problems. Nor do they want to address existing writing (e.g. by David Deutsch – http://fallibleideas.com/books#deutsch ) which contains arguments refuting major aspects of the sequences.

Your forum ( https://agentfoundations.org ) says it's topic-limited to AGI math, so it's not appropriate for discussing criticism of the philosophical assumptions behind your approach (which, if correct, imply the AGI math you're doing is a mistake). And it states ( https://agentfoundations.org/how-to-contribute ):

> It’s important for us to keep the forum focused, though; there are other good places to talk about subjects that are more indirectly related to MIRI’s research, and the moderators here may close down discussions on subjects that aren’t a good fit for this forum.

But you do not link those other good places. Can you tell me any Paths-Forward-compatible other places to use, particularly ones where discussion could reasonably result in MIRI changing?

If you disagree with Paths Forward, will you say why? And do you have some alternative approach written in public?

Also, more broadly, whether you will address these issues or not, do you know of anyone that will?

If the answers to these matters are basically "no", then if you're mistaken, won't you stay that way, despite some better ideas being known and people being willing to tell you?

The (Popperian) Fallible Ideas philosophy community ( http://fallibleideas.com ) is set up to facilitate Paths Forward (here is our forum which does this http://fallibleideas.com/discussion-info ), and has knowledge of epistemology which implies you're making big mistakes. We address all known criticisms of our positions (which is achievable without using too much resources like time and attention, as Paths Forward explains); do you?


Elliot Temple on November 9, 2017

Comments (13)

I find the idea interesting.

In some sense, it seems like it would be nice for organizations to be as transparently accountable as possible. For example, in many cases, the government is contradictory in its behavior -- laws cannot be easily interpreted as arising from a unified purpose. It would be nice if there were a way to either force the government to produce an explanation for the seeming contradiction or to change its behavior. This would be really difficult for a large organization, especially one which is not run by a single individual; but, in some sense it does seem desirable.

On the other hand, this kind of accountability seems potentially very bad -- not only on an organizational level, but even on the level of individuals, whom we can, in theory, reasonably expect to provide justifications for their actions.

The ability to force someone to give a justification in response to a criticism, or otherwise change their behavior, is the ability to bully someone. It is very appropriate in certain contexts. For example, it is important to be able to justify oneself to funders. It is important to be able to justify oneself to strategic allies. And so on.

However, even then, it is important not to be beholden to anyone in a way which warps your own standards of evidence, belief, and the good -- or, not too much. Every party you need to justify yourself to adds constraints of understandability to your actions, meaning that eventually you need to be justifiable to the lowest common denominator. MIRI is a special place in that its staff are more free to go after what it thinks is the right direction as compared with an academic department.

Nonetheless, I say the idea is interesting, because it seems like transparently accountable organizations would be a powerful thing if the accountability could be done in the right way. I am reminded of prediction markets. An organization run by open prediction markets (like a futarchy) is in some sense very accountable, because if you think it is doing things for wrong reasons, you can simply make a bet. However, it is not very transparent: no reasons need to be given for beliefs. You just place bets based on how you think things will turn out.

I am not suggesting that the best version of Paths Forward is an open prediction market. Prediction markets still have a potentially big problem, in that someone with a lot of money could come in and manipulate the market. So, even if you run an organization with the help of a prediction market, you may want to do it with a closed prediction market. However, prediction markets do seem to move toward the ideal situation where organizations can be run on the best information available, and all criticisms can be properly integrated. Although it is in some ways bad that a bet doesn't come with reasons, it is good that it doesn't require any arguments -- there's less overhead.
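To make the pricing mechanism concrete, here is a minimal sketch (my own toy illustration in Python, using Hanson's logarithmic market scoring rule; not anything MIRI or an existing futarchy actually runs) of how a single bet moves the market's probability without any argument attached:

```python
import math

# Toy logarithmic market scoring rule (LMSR) for a yes/no question.
# Illustrative only; real markets add fees, position limits, and
# anti-manipulation safeguards.

B = 100.0  # liquidity parameter: larger B means prices move less per share

def cost(q_yes, q_no):
    """Market maker cost function C(q) = B * ln(e^(q_yes/B) + e^(q_no/B))."""
    return B * math.log(math.exp(q_yes / B) + math.exp(q_no / B))

def price_yes(q_yes, q_no):
    """Current market probability of the YES outcome."""
    e_yes, e_no = math.exp(q_yes / B), math.exp(q_no / B)
    return e_yes / (e_yes + e_no)

q_yes, q_no = 0.0, 0.0
print(round(price_yes(q_yes, q_no), 3))              # 0.5: market starts undecided
trade_cost = cost(q_yes + 80, q_no) - cost(q_yes, q_no)
q_yes += 80                                          # a dissenter buys 80 YES shares
print(round(trade_cost, 2))                          # what the bet costs the bettor
print(round(price_yes(q_yes, q_no), 3))              # ~0.69: the market has updated
```

The update comes from the trade itself; no reasons accompany it, which is exactly the low-overhead upside and the transparency downside described above.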

I may be projecting, but the tone of your letter seems desperate to me. It sounds as if you are trying to force a response from MIRI. It sounds as if you want MIRI (and LessWrong, in your other post) to act like a single person so that you can argue it down. In your philosophy, it is not OK for a collection of individuals to each pursue the directions which they see as most promising, taking partial but not total inspiration from the sequences, and not needing to be held accountable to anyone for precisely which list of things they do and do not believe.

So, I state as my concrete objection that this is an OK situation. There is something like an optimal level of accountability, beyond which creative thinking gets squashed. I agree that building up a canon of knowledge is a good project, and I even agree that having a system in place to correct the canon (to a greater degree than exists for the sequences) would be good. Arbital tried to build something like that, but hasn't succeeded. However, I disagree that a response to all critics should be required. Something like the foom debate, where a specific critic who is seen to be providing high-quality critiques is engaged at a deep level, seems appropriate. Other than that, blunter instruments such as a FAQ, a technical agenda, a mission statement, etc., which deal with questions as appropriate to a given situation, seem fine.

I would change my mind on this if I thought it was feasible to map out and address all arguments on a subject (as Arbital dreamed), **and** if I thought that such a system wasn't likely to turn on participants (as public discourse often does) by turning arguments into demands. You want to make sure you don't punish an organization for trying to be accountable.

PL at 3:00 AM on November 11, 2017 | #9239
There's no force involved here. It's just some comments on how reason works. Paths Forward is no more forceful than these suggestions (which I mostly agree with) that you should do these 12 things or you're not doing reason correctly: http://yudkowsky.net/rational/virtues/

PF explains how it's bad to stay wrong when better ideas are already known and people are willing to tell/help you. It talks about error correction and fallibilism. And it says how to implement this stuff in life to avoid the bad things.

People who don't want to do it ought to have some alternative which deals with issues like fallibility and correcting errors. How will they become Less Wrong?

The typical answer is: they have a mix of views they haven't really tried to systemize or write down. So their answer to how to correct error is itself not being exposed to critical scrutiny.

And what happens then? Bias, bias and more bias. What's to stop it?

Bias is a fucking hard problem and it takes a lot – like Paths Forward or something else effortful and serious – to do much to deal with bias.

MIRI doesn't do Paths Forward and *also has no serious alternative that they do instead*. So errors don't get corrected very well.

The practical consequence is: MIRI is betting the bulk of their efforts on Popper being wrong, but have not bothered to write any serious reply to Popper explaining why they are willing to bet so much on his ideas being mistaken and saying what's wrong with his ideas. MIRI should be begging for anyone to tell them something they're missing about Popper, if anyone knows it, to help mitigate the huge risk they are taking.

But MIRI doesn't want to think about that risk and acknowledge its meaning. That's a big deal even if you don't think they should address *all* criticisms (which they should via methods like criticisms of categories of bad ideas – and if you run into a "bad" idea that none of your existing knowledge can criticize, then you don't know it's bad!)

> Every party you need to justify yourself to adds constraints of understandability to your actions, meaning that eventually you need to be justifiable to the lowest common denominator. MIRI is a special place in that its staff are more free to go after what it thinks is the right direction as compared with an academic department.

This is incorrect. You can simply tell lay people that they need to learn some background knowledge to deal with various issues. You can then direct them to e.g. some reading recommendations and a discussion forum for learners. Then there's a path forward: they can learn the necessary expertise and then read your difficult material and then comment. Most people won't, and that's fine.

There are two important points here for rationality:

1) if someone *does* read your not-dumb-down-at-all material and point out a mistake, you don't just ignore them b/c of e.g. their lack of official credentials. you don't just say "i think you're a layman" whenever someone disagrees with you. you don't gate your willingness to deal with criticism on non-truth-seeking things like having a PhD or being popular.

2) it's possible that you're mistaken about the background knowledge required to understand a particular thing, and that can itself be discussed. so there's still no need to dumb anything down, but even if someone agrees they don't have a particular piece of background knowledge which you thought was relevant, it's still possible for them to make a correct point which you shouldn't reject out of hand.

> I would change my mind on this if I thought it was feasible to map out and address all arguments on a subject

I explain how to do this. You don't quote me and point out where I made a mistake. If you want more explanation we have reading recommendations, educational material, a discussion forum, etc.

> if I thought that such a system wasn't likely to turn on participants (as public discourse often does) by turning arguments into demands.

if "demands" are bad, write a criticism of them (or of a sub-category of them) and then reject all the criticized stuff by reference to the criticism. (i agree that *some* demands are bad. it depends on more precision.) if you gave a more clear example of how an organization would be "punished" for setting up mechanisms of error correction, rather than sticking to disorganized haphazard bias, perhaps we could discuss how to handle that situation and also what the alternatives are and whether they're better. (what are *you* advocating instead of Paths Forward? is it written down, substantive, good at dealing with bias, etc?)

curi at 11:55 AM on November 11, 2017 | #9242
> > Every party you need to justify yourself to adds constraints of understandability to your actions, meaning that eventually you need to be justifiable to the lowest common denominator. MIRI is a special place in that its staff are more free to go after what it thinks is the right direction as compared with an academic department.

> This is incorrect. You can simply tell lay people that they need to learn some background knowledge to deal with various issues. You can then direct them to e.g. some reading recommendations and a discussion forum for learners.

This doesn't seem right to me. For example, take the recent controversy in which someone lost his job for admitting what he believed about feminism or racism. (All the significant details have slipped my mind...) Of course this isn't exactly the concern with MIRI. However, it's relevant to the general argument for/against Paths Forward. You can make important decisions based on carefully considered positions which you wouldn't want to state publicly. For an extended argument that there are important ideas in this category, see Paul Graham's essay "What You Can't Say":

http://www.paulgraham.com/say.html

It does seem to me that if you commit to being able to explain yourself to a particular audience, you become biased toward ideas which are palatable to that audience.

> Then there's a path forward: they can learn the necessary expertise and then read your difficult material and then comment. Most people won't, and that's fine.

I don't see how that's fine in your system. If you are offering the Paths Forward material as your criticism to MIRI, then according to Paths Forward methodology, MIRI needs to understand Paths Forward. Similarly, the hypothetical layperson who criticizes you in a way which makes you point to some expert knowledge they'd have to learn isn't following Paths Forward if they walk away, right? They still have their broken view, which you claim to have a solid criticism of.

This is part of what makes me skeptical. It seems like the policy of responding to all criticism requires one to read all criticism. Reading criticism is useful, but must be prioritized like all other things, which means that you have to make an estimate of how useful it will be to engage a particular critic before deciding whether to engage.

> if "demands" are bad, write a criticism of them (or of a sub-category of them) and then reject all the criticized stuff by reference to the criticism.

I meant the sort of demand-created-by-social-consensus-and-controversy mentioned above, rather than the kind which you can respond rationally to.

> (what are *you* advocating instead of Paths Forward? is it written down, substantive, good at dealing with bias, etc?)

I don't have any substantive solution to the problem of group epistemology, which is why the approach you advocate intrigues me. However, there are some common norms in the LessWrong community which serve a similar purpose:

1) If you disagree, either reach an agreement through discussion or make a bet. This holds you accountable for your beliefs, makes you more likely to remember your mistakes, aims thinking toward empirically testable claims, and is a tax on BS. Betting one-on-one does not create the kind of public accountability you advocate.
I mentioned betting markets earlier. Although I think there are some problems with betting markets, they do seem to me like progress in this direction. I would certainly advocate for their more widespread use.

2) The double crux method:

http://lesswrong.com/lw/o6p/double_crux_a_strategy_for_resolving_disagreement/

PL at 3:13 AM on November 12, 2017 | #9243
> This doesn't seem right to me.

I think you mean: that's correct as far as it goes, regarding being able to write expert level material instead of making everything lowest-common-denominator accessible. But it doesn't speak to some other issues like taboos. If you wanna link that PG essay and end a conversation, go ahead – and then there is a Path Forward b/c someone can refute the PG essay (if they know how).

I'm not expecting perfection. You could be lying and dealing with something non-taboo and just say it's taboo. Whatever. People will do shitty stuff. And some won't. I'm proposing a methodology to help the people who want to be rational. It also does a good job of catching a lot of irrational people and pointing out what they're doing wrong - a few of whom may appreciate that and reconsider some things.

If you don't want to talk about a taboo issue, you can say that, and have that position itself be open to criticism (the criticism will be difficult because you don't give much to the critic to use – but unless he proposes a solution to that problem, so be it). Similarly, the military doesn't talk about lots of things, and I have no objection to that.

More broadly, you're welcome to raise problems with doing stuff. "I would like to facilitate error correction in that way if it were unproblematic, but..." The problems themselves should be open to error correction and solutions. You need something at some level which is open to criticism. Or else say you aren't a public intellectual and don't make public claims about ideas.

You can also say you aren't interested in something, or you don't think it's worth your time. PF doesn't eliminate filters, it exposes them to criticism. At the very least you can publicly say "I'm doing a thing I have reasons I don't think I can talk about." And *that* can be exposed to criticism. E.g. you can direct them to essays covering how to talk about broad categories of things and ask if the essays are relevant, or ask a few questions. You may find out that e.g. they don't want to talk about the boundaries of what they will and won't talk about, b/c they think that'd reveal too much. OK. Fair. I don't have a criticism of that at this time. But someone potentially might. It can be left at that, for now, and maybe someone else will figure out a way forward at some point. Or not. At least, if someone knows the way forward, they aren't being blocked from saying so. Their way forward does have to address the meta issues (like privacy, secrecy, taboo, etc) not just the object issue.

---

If people would just **state their filters** it'd do so much. People filter on "I think he's dumb" all the time. And on "he is unpopular" and on "he doesn't have a PhD". And they do this *inconsistently*. They filter Joe cuz no PhD, but then talk to Bob a ton who also doesn't have a PhD.

What's going on? Bias and lack of accountability. Often they don't even consciously know what filters they are using or why. This is really bad. This is what LW and MIRI and most people are like. They have unstated, unaccountable gating around error correction, which allows for tons of bias. And they don't care to do anything about that.

You want to filter on "convince my assistant you have a good point that my assistant can't answer, and then i will read it"? Say so. You want to let some people skip that filter while others have to go through it? Say the criteria for this.

People constantly use criteria related to social status, prestige, etc, and lie about it to themselves (let alone others), and they do it really inconsistently with tons of bias. This is sooooooo bad and really ruins Paths Forward. Half-assed PF would be a massive improvement.

I'm not trying to get rid of gating/filters/etc. Just state them and prefer objective ones and have the filters themselves open to criticism. If you're too busy to deal with something, just say so. If your critics have no solution to that, then so be it. But if they say "well i have this Paths Forward methodology which actually talks about how to deal with that problem well" then, well, why aren't you interested in a solution to your problem? and people are certainly not being flooded with solutions to this problem, and they don't already have one either.

the problems are bias, dishonesty, irrationality, etc, as usual, and people's desire to hide those things.

yeah some people have legit reasons to hide stuff. but most people – especially public intellectuals – could do a lot more PF without much trouble, if they wanted to.

btw Ayn Rand, Richard Feynman, David Deutsch, Thomas Szasz and others of the best and brightest answered lots of letters from random strangers. they did a ton to be accessible.

---

No one on LW or from MIRI offered me any bets or brought up using double cruxes to resolve any disagreements.

Those things are not equivalent to Paths Forward b/c they aren't methodologies. They're individual techniques. But there's no clear specifications about when and how to use what techniques to avoid getting stuck.

I don't know what bets they could/should have offered. Double crux seems more relevant and usable for philosophy issues, but no one showed interest in using it.

The closest thing to a bet was someone wanted money to read Paths Forward, supposedly b/c they didn't have resources to allocate to reading it in the first place. I agreed to pay them the amount they specified without negotiating price, b/c it was low. They admitted they predicted that asking for a tiny amount of money would get me to say "no" instead of "yes". But neither they nor anyone else learned from their mistake. They were surprised that I would put money where my mouth is, surprised I have a budget, surprised I'm not poor, etc, but learned nothing. Also they were dishonest with me and then backed out of reading Paths Forward even though I'd offered the payment (which they said matched what they are paid at work). So apparently they'd rather *go to work* (some $15/hr job so presumably nothing too wonderful) than read philosophy. I thought that was revealing.

Someone else tried to use Paths Forward to control and pressure me. They didn't get far at all. After they said some nonsense, and I replied briefly, they wrote more non sequiturs. They wanted to use PF to make me talk with them an unlimited amount – I don't know if they were trying to prove PF is impractical or were just being an idiot, I guess a mix. They started complaining basically that I had to answer everything they said or I'm not doing PF – and they seemed to think (contrary to PF) that that meant answering all the details and pointing out every mistake, rather than just one mistake. I said if they had a serious criticism they could write it somewhere public with a permalink. That was enough of a barrier to entry that they gave up (despite the fact that the person already has a blog with a lot of public posts with permalinks). Such things are typical. Very low standards – which are themselves valuable/productive and can easily be *objectively* judged – will filter out most critics.

---

> Similarly, the hypothetical layperson who criticizes you in a way which makes you point to some expert knowledge they'd have to learn isn't following Paths Forward if they walk away, right? They still have their broken view, which you claim to have a solid criticism of.

If the layperson says "oh, you have to learn all that? well i'd rather do this other thing instead. i think it fits me better." that is OK. i don't have a criticism of that. MIRI doesn't have a criticism of that. MIRI and I don't think everyone should become an expert on our stuff. Not everyone has to learn our fields and discuss with us. No problem. (well i do think a lot more people should take a lot more interest in philosophy and being better parents and stuff. so in some cases i might have a criticism, which wouldn't be a personal matter, but would be the kind of stuff i've already written generic/impersonal public essays about.)

curi at 8:54 AM on November 12, 2017 | #9244
MIRI puts a meaningful amount of their effort into scaring the public into thinking AGI is dangerous – b/c they think it’s dangerous. They frame this as: AGI is in fact dangerous and they’re working to make it safer. But the practical effect is they are spreading the (false!) opinion that it’s dangerous and therefore shooting themselves (and the whole field) in the foot.

Anonymous at 7:17 PM on November 12, 2017 | #9247
> Someone else tried to use Paths Forward to control and pressure me. They didn't get far at all. After they said some nonsense, and I replied briefly, they wrote more non sequiturs. They wanted to use PF to make me talk with them an unlimited amount – I don't know if they were trying to prove PF is impractical or just an idiot, I guess a mix.

To some extent I'm doing the same kind of testing the waters, seeing how much you stick to your guns and provide substantive responses to issues raised. Overall the effect has been to raise my plausibility estimate of PF, although I think you may have exceptionally much time to respond to things as compared to other people (or maybe you're assigning my remarks high importance?).

> I'm not expecting perfection. You could be lying and dealing with something non-taboo and just say it's taboo. Whatever. People will do shitty stuff. And some won't. I'm proposing a methodology to help the people who want to be rational. It also does a good job of catching a lot of irrational people and pointing out what they're doing wrong - a few of whom may appreciate that and reconsider some things.

> If you don't want to talk about a taboo issue, you can say that, and have that position itself be open to criticism (the criticism will be difficult because you don't give much to the critic to use – but unless he proposes a solution to that problem, so be it). Similarly, the military doesn't talk about lots of things, and I have no objection to that.

To state the state of the discussion so far as I see it:

- I suggested that there is something like an optimal level of accountability, beyond which it will stifle one's ability to come up with new ideas freely and act freely. I said that I'd change my mind on this if I thought it was possible to map all the arguments and if I thought such a system wouldn't end up creating gotchas for those who used it by exposing them to scrutiny.

- You responded that your literature provides ways to handle all the arguments without much burden, and that there are no gotchas because you can always tell people why their demands which they try to impose on you are wrong.

- I didn't read about the system for handling all the arguments easily yet. I didn't find the "there are no gotchas" argument very compelling. I had a concern about how being publicly accountable for your ideas can increase the risk of criticism of the more forceful kind, where violation of taboo or otherwise creating a less-than-palatable public face can have a lot of negative strategic implications; and how it is therefore necessary to craft a public image in a more traditional way rather than via PF, even if, internally to the organization, you try to be more rational.

- You responded by saying that of course there will be some things which you have reason not to say, but you can at least explain that it doesn't make sense to answer questions of a certain sort.

I think this addresses my concern to a significant degree. You create a sort of top level of the system at which PF can be followed, so that in principle a criticism could result in the hidden parts being opened up at some future time. I suspect we have remaining disagreements about the extent of things which it will make sense to open up to critique in this kind of setup. Maintaining any hidden information requires maintaining some plausible deniability about where the secret may be, since knowing exactly which questions are answerable and not answerable tells you most of a secret. And so it follows that if you want to maintain the ability for taboo thoughts to guide significant strategy decisions, you must maintain some plausible deniability about what is guiding strategy decisions at all times. This strategy itself may be something you need to obfuscate to some degree, because confessing that you have something to hide can be problematic in itself... well, hopefully the intellectual environment you exist in isn't **that** bad. If it is, then PF really does seem inadvisable.

> If people would just **state their filters** it'd do so much. People filter on "I think he's dumb" all the time. And on "he is unpopular" and on "he doesn't have a PhD". And they do this *inconsistently*. They filter Joe cuz no PhD, but then talk to Bob a ton who also doesn't have a PhD.

Hm. Stating what your filters are seems like the sort of thing you might not want to do, partly because the filters will often be taboo themselves, but mainly because stating what criteria you use to filter gives people a lot of leverage to hack your filter. Checking for an academic email address might be an effective private filter, but if made public, could be easily satisfied by anyone with a school email address that is still active.

As mentioned in part of Yudkowsky's recent book, the problem of signaling that you have a concern which is worthy of attention is a lemons market (https://www.lesserwrong.com/posts/x5ASTMPKPowLKpLpZ/moloch-s-toolbox-1-2, search page for "lemons market"). It's an asymmetric information problem. The person with the meaningful concern can't meaningfully differentiate themselves from others by loudly saying "This one really **is** important!" because everyone can say that. Private tests make sense in a lemons market, because you can often get some information with fallible heuristics which would stop being as good if you made them public.
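As a toy version of that dynamic (made-up numbers, purely to illustrate the unraveling):

```python
# Toy adverse-selection ("lemons market") illustration with invented numbers:
# a reader can't tell important messages from noise, so every message gets the
# same low expected attention; senders of important messages whose effort cost
# exceeds that attention stop writing, and the pool gets even worse.

value_important = 10.0    # value to the reader of a genuinely important message
value_noise = 0.1         # value of a typical unsolicited message
share_important = 0.02    # initial fraction of messages that are important
cost_to_send = 0.5        # effort cost borne by a sender of an important message

pooled = share_important * value_important + (1 - share_important) * value_noise
print(round(pooled, 2))   # ~0.3: expected value of reading any one message blindly

# If the attention a sender can expect is proportional to the pooled value,
# serious senders drop out, and the pooled value falls further:
if pooled < cost_to_send:
    share_important = 0.0
pooled = share_important * value_important + (1 - share_important) * value_noise
print(round(pooled, 2))   # ~0.1: the market for serious criticism unravels
```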

Granted, I see the upside of filter criteria which anyone sufficiently motivated can meet. I agree that in many respects, a widespread standard of this kind of PF in public figures and institutions would be a big improvement.

> > Similarly, the hypothetical layperson who criticizes you in a way which makes you point to some expert knowledge they'd have to learn isn't following Paths Forward if they walk away, right?

> If the layperson says "oh, you have to learn all that? well i'd rather do this other thing instead. i think it fits me better." that is OK. i don't have a criticism of that.

This seems (and this is the reason I brought up the scenario) analogous to MIRI and Popper. It seems sort of true in principle that MIRI should have a response to Popper beyond the brief remarks made by Yudkowsky, but it doesn't concretely feel very interesting / like it would go anywhere, and there are a lot of other paths to understanding the problems MIRI is interested in which seem more interesting / more likely to go somewhere. I've read Black Swan, which is very Popperian by its own account at least, and although I found the book itself interesting, it didn't seem to contain any significant critique of Bayesian epistemology by my lights -- indeed, I roughly agree with Yudkowsky's remark about the picture being a special case of Bayes (though of course Taleb denies this firmly). It doesn't seem particularly worthwhile, all things considered, for me to even write up my thoughts on Taleb's version of Popper.

Somehow it seems like the Popperian project and the Bayesian project are so different that I'm just not optimistic that the Popperian one even connects with the Bayesian one in terms of what kind of arguments for and against epistemic methodology seem compelling. For me, a mathematical picture like information theory, and the close connection between information and probability provided by the coding theorem, is very compelling and makes me feel more strongly that Bayesianism gets close to what is really going on. I'm not aware of anything approaching this on the Popperian side, to the point where I feel like Popperians aren't interested in playing the same ball game. At least PAC learning, the MDL principle, and other alternatives to Bayesianism have corresponding machine learning algorithms and learning-theory results to show their efficacy.
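To spell out the connection I have in mind (stated from memory, in simplified form): Shannon's source coding theorem identifies optimal description length with log-probability, so improbability and information content are two views of the same quantity.

```latex
% Simplified statement of the source coding theorem: for a source emitting
% symbols x with probability P(x), every uniquely decodable code satisfies
% the Kraft inequality and has expected length at least the entropy H(P),
% with the bound essentially attained by code lengths -log2 P(x).
\[
  \sum_x 2^{-\ell(x)} \le 1, \qquad
  \mathbb{E}[\ell] \;\ge\; H(P) = -\sum_x P(x)\log_2 P(x), \qquad
  \ell^*(x) \approx -\log_2 P(x).
\]
```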

I suppose precisely that statement is the kind of thing you're saying MIRI should at least be able to produce.

But, to reiterate, my point here is to say that there seems to be a parallel with the layperson who says there are better uses of time than to learn all the technical details required for the discussion.

Another thing I wanted to mention is that it seems like the ability to articulate one's thinking necessarily falls behind the thinking itself, sometimes far behind. Articulating the arguments behind one's position is often a major project, a book that takes many years to write. In these cases, it seems like a response to criticism may end up being only a promissory note to articulate arguments at some later time. Is that a sufficient PF?

PL at 1:32 AM on November 13, 2017 | #9248
> although I think you may have exceptionally much time to respond to things as compared to other people (or maybe you're assigning my remarks high importance?).

i write a lot faster than you're estimating, with less energy/effort used. this is a skill i developed over time b/c i was a heavy writer/discusser from the beginning when i got into philosophy, so i've developed relevant supporting skills. i'm also a fast reader when i want to be (i can control reading speed based on the level of detail i want to get from something). techniques include RSVP, skimming, sped up TTS/audio/video (audio stuff also allows multitasking).

i also set up my life to have a lot of time for thinking/writing/discussing, on purpose. i've been kinda horrified to find that most think tank and public intellectual types seem not to have done this. (but some of the very best, like David Deutsch, did do it.)

i'm also prioritizing this more than you may expect because people willing to talk about philosophy with me and be reasonable are a scarce commodity. this may not be your experience about what the world is like (you may find plenty of people to talk with in your experience), but for me they're very rare. why? because my method of discussion and ideas are plenty adequate to filter out most people!

oh also, **i enjoy discussions like this**. this isn't painful work for me. this isn't draining or hard. Put another way: I've been playing some Mario Odyssey recently, but to me this discussion is *more fun than video games*.

> partly because the filters will often be taboo themselves

I don't think the kind of filters I'm in favor of are taboo. but it's possible some good ones are that i haven't thought of. i don't really mind breaking most taboos anyway.

> stating what criteria you use to filter gives people a lot of leverage to hack your filter

that's the kind of filter i think is bad. the sort of filter i think is good will work just as well even if people know what it is.

i don't think you should filter on .edu email addresses b/c people with .com email addresses can be correct. if you do that, you're blocking lots of ways someone could correct you. furthermore, you're wasting people's time who see your public email and contact you from a .com and then you ignore them.

good filters involve knowledge or skills (so if the person "hacks" the filter by developing the knowledge or skills, then you're glad and that's fine), or involve clear, objective criteria people can meet (and you're *glad* if they do, b/c the criteria are *useful* instead of just a filter) such as formatting quotes correctly on FI, or actually have to do with the content instead of prestige/credentials/networking/social/authority.

my purpose in asking people to say their filters isn't just to prevent the biased application of filters, and double standards. it's also to get them to stop using irrational filters (that they don't want to admit they use) and also to get them to stop using filters on prestige/social-status/etc type stuff. stick to filtering on

1) content. like pointing out a mistake, and then if they have no answer, ok, done.

or

2) you can put up other barriers *if you tell them* and the barriers are reasonable asks (and they can either meet the barrier or criticize it). this needs to be shared to work well. if FI required a certain post format but didn't tell people what it was, it'd be really nasty to filter on it!

i don't want people to use approximate filters that semi-accurately find good people. i want them to use filters that actually don't block progress. i think this is very important b/c a large amount of progress comes from **outliers**. so if you block outlier stuff (over 90% of which is *especially bad*), then you're going to block a lot of great, innovative stuff.

> - You responded by saying that of course there will be some things which you have reason not to say, but you can at least explain that it doesn't make sense to answer questions of a certain sort.

yeah, you can always go to a higher meta level and still say something. the technique of going to a meta level comes up a lot in philosophy. e.g. it's also used here: http://fallibleideas.com/avoiding-coercion

> It's an asymmetric information problem. The person with the meaningful concern can't meaningfully differentiate themselves from others by loudly saying "This one really **is** important!" because everyone can say that.

I actually have credentials which stand out. But I don't usually bring them up, even though I know people are often filtering on credentials. Why?

Because of the terrible social dynamics which are anti-name-dropping and anti-bragging. If you just objectively start stating credentials, people respond with

1) social distaste, and assumption that you're low status

2) thinking you're trying to appeal to authority (and not admitting they are filtering on the very kinds of "authority" you're talking about)

3) they debate which credentials are impressive. lots of idiots have PhDs. lots of idiots have spent over 10,000 hours on philosophy (as i have). few people, idiots or not, have written over 50,000 philosophy related things, but that isn't the kind of credential people are familiar with evaluating. i have a very high IQ, but i don't have that *certified*, and even if i did many people wouldn't care (and high IQ is no guarantee of good philosophy, as they would point out, forgetting they are supposedly just trying to filter out the bottom 80% of riff raff). i have associations with some great people, but some of them have no reputation, and as to David Deutsch he's a Popperian. if they wouldn't listen to him in the first place (and they don't), they won't listen to me due to association. Thomas Szasz particularly liked me and was a public author of dozens of especially great books, but most people hate him.

i have great accomplishments, but most of the important ones are in philosophy and are the very things at issue. some other accomplishments are indirect evidence of intelligence and effective learning, and stand out some, but anyone who doesn't want to listen to me still isn't going to. (e.g. at one point i was arguably the best Hearthstone player in the world – i did have the best tournament results – and i wrote Hearthstone articles with considerably more views (5-6 figures per article) than most people have for any content ever. that was just a diversion for me. i had fun and quit. anyway, from what i can tell, this is not a way to get through people's filters. and really i think social skill is the key there, with just enough credentials they can plausibly accept you if they want to.)

So it's *hard*. And I'm *bad* at social networking kinda stuff like that (intentionally – i think learning to be good at it would be intellectually destructive for me).

> This seems (and this is the reason I brought up the scenario) analogous to MIRI and Popper. It seems sort of true in principle that MIRI should have a response to Popper beyond the brief remarks made by Yudkowsky

note that his remarks on Popper are blatantly and demonstrably false, as i told him many years ago. they are false in very objective, clear ways (just plain misstating Popper's position in ways that many *mediocre* Popperians would see is wrong), not just advanced in terms of subtle advanced nuances.

Yudkowsky's opinions of Popper are basically based on the standard hostile-to-Popper secondary sources which get some basic facts wrong and also focus on LScD while ignoring Popper's other books.

> it doesn't concretely feel very interesting

Popper's philosophy explains why lots of what MIRI does is dead ends and can't possibly work. I don't see how lack of interest can be the issue. The issue at stake is basically whether they're wasting their careers and budgets by being *utterly wrong* about some of the key issues in their field. Or more intellectually, the issue is whether their work builds on already-refuted misconceptions like induction.

That's a big deal. I find the lack of interest bizarre.

> I've read Black Swan, which is very Popperian by its own account at least

I'm not familiar with this particular book (I'd be happy to look at it if the author or a fan was offering a Paths Forward discussion). Most secondary sources on Popper are really bad b/c they don't understand Popper either.

> Somehow it seems like the Popperian project and the Bayesian project are so different that I'm just not optimistic that the Popperian one even connects with the Bayesian one in terms of what kind of arguments for and against epistemic methodology seem compelling. For me, a mathematical picture like information theory, and the close connection between information and probability provided by the coding theorem, is very compelling and makes me feel more strongly that Bayesianism gets close to what is really going on.

Are you aware that David Deutsch – who wrote the best two Popperian books – is a physicist who has written papers relating to information flow and probability in the multiverse? He even has an AI chapter in his second book (btw I helped with the book). http://beginningofinfinity.com

The reason CR connects to Bayesian Epistemology (BE) stuff in the big picture is simple:

CR talks about how knowledge can and can't be created (how learning works). This says things like what methods of problem solving (question answering, goal achieving) do and don't work, and also about how intelligence can and can't work (it has to do something that *can* learn, not *can't*). CR makes claims about a lot of key issues BE has beliefs about, which are directly relevant to e.g. AGI and induction.

To the extent there are big differences, that's no reason to give up and ignore it. CR criticizes BE but not vice versa! We're saying BE is *wrong*, and *its projects will fail*, and we know why and how to fix it. And the response "well you're saying we're wrong in a big way, not a small way, so i don't want to deal with it" is terrible.

CR explained why over 2000 years of tradition in epistemology was *disastrously wrong*. And BE, like almost everyone, isn't updating and is ignoring the breakthrough and continuing with the same old errors. BE thinks it's clever b/c it has some new math tools and some tweaks, but from the CR perspective we see how BE keeps lots of the same old fundamental errors.

Why do BE ppl want to bet their careers on CR being false, just b/c some secondary sources said so (and while having no particular secondary source challenging CR that they will actually reference and take responsibility for, and reconsider if that source is refuted)?

It makes no sense to me. I think it's b/c of bad philosophy – the very thing at issue. That's a common problem. Systems of bad ideas are often self-perpetuating. They have mechanisms to keep you stuck. It's so sad, and I'd like to fix it, but people don't want to change.

> I'm not aware of anything approaching this on the Popperian side, to the point where I feel like Popperians aren't interested in playing the same ball game. At least PAC learning, the MDL principle, and other alternatives to Bayesianism have corresponding machine learning algorithms and learning-theory results to show their efficacy.

My opinion is we aren't ready to start coding AGI, and no one has made any progress whatsoever on coding AGI, and the reason is b/c they don't even understand what an AGI is and therefore can't even judge what is and isn't progress on AGI. things like Watson and AlphaGo *are not AGI and are not halfway to AGI either, they are qualitatively different things* (i think they're useful btw, and good non-AGI work).

you need to have some understanding of what you're even trying to build before you build it. how does intelligence work? what is it? how do you judge if you have it? do animals have it? address issues like these before you start coding and saying you're succeeding.

no one is currently coding anything with a generic idea data structure that can handle explanations, emotions, poetry, moral values, criticism, etc. they aren't even working on the right problems – like how to evaluate disagreements/criticism in the general case. instead they are making inductivist and empiricist mistakes, and trying to build the kind of thing they incorrectly believe is how human thinking works. and they don't want to talk about this forest b/c they are focused on the trees. (or more accurately there's more than 2 levels. so they don't wanna talk about this level b/c they are focused on multiple lower levels).
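to illustrate the kind of structure i mean (a toy sketch i'm making up here purely for illustration – not a design anyone has built, and not a claim that this is sufficient): one generic node type for any kind of content, where criticisms are themselves ideas targeting other ideas, and evaluation is about whether an idea currently has an unanswered criticism rather than about assigning it a probability.

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Toy sketch of a *generic* idea data structure: one node type for any kind
# of content (explanation, poem, moral value, plan), where criticisms are
# themselves ideas that target other ideas. Hypothetical illustration only.

@dataclass
class Idea:
    content: str                        # any kind of content, not domain-specific
    targets: Optional["Idea"] = None    # set if this idea criticizes another idea
    criticisms: List["Idea"] = field(default_factory=list)
    answers: List["Idea"] = field(default_factory=list)  # replies to this criticism

def criticize(idea: Idea, text: str) -> Idea:
    c = Idea(content=text, targets=idea)
    idea.criticisms.append(c)
    return c

def unrefuted(idea: Idea) -> bool:
    """An idea stands, for now, if every criticism of it is either answered by
    some unrefuted answer or is itself refuted. Yes-or-no, no probabilities."""
    return all(
        any(unrefuted(ans) for ans in crit.answers) or not unrefuted(crit)
        for crit in idea.criticisms
    )

plan = Idea("Build AGI by scaling up induction over sensory data.")
criticize(plan, "Induction can't create new explanatory theories.")
print(unrefuted(plan))   # False: there's an outstanding, unanswered criticism
```

obviously this toy handles none of the hard parts (like how to evaluate criticisms in the general case), but it shows the shape of the thing: generic content, criticism-driven evaluation.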

> Is that a sufficient PF?

it's far more than sufficient, as long as there are followups in the future. iterations can be quite small/short. i commonly recommend that to people (doing a larger number of shorter communications – that way there's less opportunity to build on errors/misunderstandings/etc before feedback).

> Another thing I wanted to mention is that it seems like the ability to articulate one's thinking necessarily falls behind the thinking itself, sometimes far behind. Articulating the arguments behind one's position is often a major project, a book that takes many years to write. In these cases, it seems like a response to criticism may end up being only a promissory note to articulate arguments at some later time.

This is partly a real issue, and that's fine – you can say "I know criticism would be very valuable, and I'll get it just as soon as I'm able to formulate what I'm thinking adequately. And the reason I judge this to be productive to work on more is..."

But I think it's partly that people structure their learning and research the wrong way. They could have more discussion, from the start, and be less fragile about criticism. They could get better at saying initial versions of things to get some initial feedback about major issues they're missing. This can proceed in stages as they keep working and adding levels of detail, and then getting feedback at that level of detail again.

And if there is no feedback, ok, proceed. It's worth a try b/c sometimes someone knows something important and is willing to say so (and this would happen a lot more if Paths Forward were more common. i know people who are very smart, know tons of stuff ... and basically don't like people and don't try to share their knowledge much b/c no one does Paths Forward. i strongly suspect there are many more such people i do not know who had some bad experiences and gave up discussion.) And, also, formulating your thoughts *in writing* at each stage is *helpful to your own thinking*. you should be getting away from "i know what i mean" to actually writing down what you developed so far (so it'd be understandable to a stranger with the right background knowledge, e.g. the reader has to already know calculus, or some science stuff, or even already know Objectivism if you're doing work building on Objectivism. but what the reader doesn't have to do is read your mind or know your quirks. this is how books work in general).

i find i often articulate rough drafts and pieces of things early on, and more later. like with Paths Forward and Yes or No Philosophy, there was discussion of some elements long before i developed them more fully. and i don't think going into isolation to think about them alone would have been the best way to develop them more.

i think people who write books usually shouldn't, but i will accept sometimes they should. most people who write books do not have adequate experience writing shorter things – and exposing them to Paths Forward style criticism to check if they are any good. i think most books are bad (i bet you agree), and this could be avoided if people would try to actually write one little essay that isn't bad, first, and then put a lot of Paths Forward type work into getting the essay criticized and addressing criticisms from all comers and not making any excuses and not ignoring any issues. then they'd see how hard it is to actually do good work. but instead of doing a small project to a really high standard, and then repeating a few times, and then doing some medium projects to a really high standard, and THEN doing a book ... they go do a big project to a much lower standard. meh :/

this relates to my ideas about powering up and making progress. the focus should be on learning and self-improvement, not output like books. ~maximize your own progress. then what happens is you generate some outputs while learning, and also some types of outputs become easy for you b/c you've greatly surpassed them in skill level. so you can *very cheaply* output that stuff. if you keep this up enough, you learn enough your cheap/easy outputs end up being better than most people's super-effortful-book-writing. so this is a much much more efficient way to live. don't divert a bunch of effort into writing a book you can maybe just barely write, with a substantial failure chance. put that effort into progress until you can have the same book for a much lower effort cost with a much lower risk of it being bad/wrong.

> But, to reiterate, my point here is to say that there seems to be a parallel with the layperson who says there are better uses of time than to learn all the technical details required for the discussion.

the rational lay person is welcome to say that *and then have no opinion of the matter*. e.g. he says "i'm going to go do plumbing" and then is *neutral* about whether BE or CR is right. he's not involved, and he knows his ignorance. and that's fine b/c neither BE nor CR is saying he's doing plumbing all wrong – we agree he can be a decent plumber without learning our stuff. (he may have some problems in his life that philosophy could help with, such as destroying his children's minds with authoritarian parenting, and ultimately i'd like to do something about that too. but still, the general concept that you can take an interest in X and recognize your ignorance of Y, and not everyone has to be highly interested in Y, is fine.)

curi at 11:52 AM on November 13, 2017 | #9250
> although I think you may have exceptionally much time to respond to things as compared to other people (or maybe you're assigning my remarks high importance?).

two more comments on this to add to what i said above.

1) my comments to you have no editing pass. i'm not even doing much editing as i go, this is near max speed writing. over the years i've put a lot of effort into being able to write without it being a burden, and into making my initial thoughts/writing good instead of getting stuff wrong then fixing it in editing later. i think this is really important and also unusual. (you should fix mistakes in your writing policies themselves instead of just having a separate editing policy to fix things later – and if you can't do that something is wrong. it makes more sense this way. editing has a place but is super overrated as a crutch for ppl who just plain think lots of wrong and incoherent thoughts all the time and aren't addressing that major issue.)

2) this is public, permalinkable material which i can re-use. i will link people to this in the future. it's a good example of some things, and has some explanations i'll want to re-use. i'm not just writing to you. everyone on the FI forum who cares is reading this now, and others will read it in the future.

curi at 12:47 PM on November 13, 2017 | #9252
As Ayn Rand would say: check your premises.

Avoiding debates about your premises is so dumb.

Anonymous at 2:32 PM on November 13, 2017 | #9253
> that's the kind of filter i think is bad. the sort of filter i think is good will work just as well even if people know what it is.

> i don't think you should filter on .edu email addresses b/c people with .com email addresses can be correct. if you do that, you're blocking lots of ways someone could correct you. furthermore, you're wasting people's time who see your public email and contact you from a .com and then you ignore them.

I used a fake example of a taboo filter. I don't have some explicit filter policy which is taboo, but I can imagine that if I had one I wouldn't want to say it, and furthermore if I did say it, I wouldn't be able to defend it in conversation precisely because it would depend on assumptions I don't want to publicly defend. I can imagine that I might be contacted by someone who I would automatically know that I should filter, in a knee-jerk kind of way, and my justification for this would be taboo. Suppose I do work which has serious public policy implications, but I expect those implications to be misconstrued by default, with serious negative consequences. If people in the government, or campaigning politicians, etc contact me, my best response would be either no response or something to throw them off the trail. I might be fine with talking about things in an academic journal read only by other experts, but if a reporter cornered me I would describe things only in the most boring terms, etc.

(I'll likely make a more substantive reply later.)

PL at 6:17 PM on November 13, 2017 | #9254
I'm not very concerned about edge cases. If 5% of intellectuals claim some exceptions, and 95% do Paths Forward, that sounds just fine for now. I will tentatively accept some rare edge cases, and we can investigate them more carefully at some later date if it matters.

On the other hand if you think the taboo case is a good excuse for ~100% of people not to do paths forward -- the current situation -- then we can debate it now. but if you're only trying to offer excuses for less than 20% of people, and agree with me about the other 80+%, i'll take it for now.

curi at 7:39 PM on November 13, 2017 | #9255
(Still may not get to a more proper reply today, but I'll reply to the most recent point.)

> I'm not very concerned about edge cases. If 5% of intellectuals claim some exceptions, and 95% do Paths Forward, that sounds just fine for now. I will tentatively accept some rare edge cases, and we can investigate them more carefully at some later date if it matters.

Suppose that 5% of intellectuals have good reasons not to do PF, along the lines I described. Then, if 95% of intellectuals do PF, this creates a reason for those 5% of intellectuals to be looked down on and excluded in various ways (funding, important positions, etc). The 5% will of course be unable to explain their specific reasons for not participating in PF, at least publicly; which means that even if they can describe them privately, those descriptions can't be included in the official reasons for decisions (about funding, appointment of positions, etc). So it creates a feedback loop which punishes others for taking those private reasons into account. Even if this problem itself is widely understood (and I'm skeptical that it would be, even given the improved intellectual discourse of PF world), it may make sense as a policy to use the standards of PF in those decisions (funding, appointments, etc) because it seems important enough and good enough an indicator of other good qualities.

This trade-off may even be worth it. But it's not clear at all that "95% of intellectuals could use PF" is a good enough justification to meet my objection there.

PL at 4:26 PM on November 14, 2017 | #9257
> (Still may not get to a more proper reply today, but I'll reply to the most recent point.)

There is no hurry on my account. I think the only hurry is if you're losing interest or otherwise in danger of dropping the topic due to time passing.

---

Should I and the other 95% not pursue our own work in the most rational way for fear that some other people would be *incorrectly* attacked for not using all the same methods we're using? No.

I'm happy to grant there are legitimate concerns about working out the details if it catches on. While my answer is a clear "no" in the previous paragraph, I might still be willing to do *something else other than give up PF* to help with that.

I do see the issue that if a lot of people are more public about some matters, that makes it harder for the people with something (legitimate) to hide. I like privacy in general – but not much when it comes to impersonal ideas which ought to be discussed a bunch to help improve them.

Also I'm not directly interested in things like public reputations, appointments or funding. I care about how truth-seeking should/does work. I understand the nature of reason has implications to be worked out for e.g. funding policies and expectations. But I'm not very concerned – I'll try to speak the truth and I expect that'll make things better not worse.

curi at 5:10 PM on November 14, 2017 | #9258
