Criticism of 12 Rules For Life: Secondhandedness

12 Rules For Life: An Antidote To Chaos by Jordan B. Peterson:

On Quora, anyone can ask a question, of any sort—and anyone can answer. Readers upvote those answers they like, and downvote those they don’t. In this manner, the most useful answers rise to the top, while the others sink into oblivion.

Some useful answers rise – and so do some bad ones. Some great answers sink into oblivion. This is well known, yet also contradicts the claim that the most useful answers rise. JP is overstating the wisdom of the mob.

Quora tells you how many people have viewed your answer and how many upvotes you received. Thus, you can determine your reach, and see what people think of your ideas.

Their viewing and voting patterns do not tell you what they think. They omit why people like things – their reasoning, their thoughts. They also leave you with no way to tell if people are being honest (you can't spot dishonesty through votes and views).

As of July 2017, as I write this—and five years after I addressed “What makes life more meaningful?”—my answer to that question has received a relatively small audience (14,000 views, and 133 upvotes), while my response to the question about aging has been viewed by 7,200 people and received 36 upvotes. Not exactly home runs.

JP's goal is popularity. He judges a home run not by what he thinks of what he wrote, but by what other people think. His stated goal – his criteria of success (a home run) – is to get views and upvotes, not to please himself.

My goal, when I write, is truth. I don't judge ideas by popularity. I go by arguments. If someone has a criticism – even one single criticism from one person – I'll consider the reasoning and address it or change my mind. But if a thousand people downvote me without giving any arguments, I don't regard that as making any difference intellectually.

The Quora readers appeared pleased with this list. They commented on and shared it. They said such things as “I’m definitely printing this list out and keeping it as a reference. Simply phenomenal,” and “You win Quora. We can just close the site now.” Students at the University of Toronto, where I teach, came up to me and told me how much they liked it.

JP is a second-hander (see The Fountainhead by Ayn Rand to learn more about the term). He's judging his work by the opinions of other people instead of by rational evaluation of the content of the work. He's concerned with who thinks what (social metaphysics, as Ayn Rand called it) instead of what the rational arguments about the material are.

If I were sharing a success story like this, I wouldn't quote reason-less praise. I'd be concerned with the rational benefit of the popularity. Did it get me any questions or criticisms I learned from? Did the audience have enough intellectual merit to help me improve the ideas? It's nice if people like your work and are helped by it, but that must not be a creator's primary motivation or reward. Yet JP focuses on it.

I had written a 99.9 percentile answer.

JP writes this like it's 99.9th percentile quality, when he's only demonstrated 99.9th percentile popularity. These are completely different things which JP blurs together.

Quora provides market research at its finest. The respondents are anonymous. They’re disinterested, in the best sense. Their opinions are spontaneous and unbiased. So, I paid attention to the results, and thought about the reasons for that answer’s disproportionate success. Perhaps I struck the right balance between the familiar and the unfamiliar while formulating the rules. Perhaps people were drawn to the structure that such rules imply. Perhaps people just like lists.

Market research is the wrong approach to truth-seeking. Who cares if people like lists? JP should be considering if lists are the best way to present his work – according to his own judgement about the issues themselves.

JP seeks to figure out what people want to hear, in what format, instead of creating original work and structuring it as he thinks best fits the content.


JP is better than this. He is, in various ways, an original and independent thinker. He does good work. That's why this error stands out. It's an internal contradiction he has, which conflicts with some of his very substantial virtues and makes things harder for him.


Elliot Temple | Permalink | Comments (8)


Liar's Paradox Solution

The liar's paradox is an ancient philosophy problem about confusing sentences like, "This sentence is false." If you say it's true, it contradicts itself. If you say it's false, then it seems to be true. People have identified that part of what makes it weird is that the sentence refers to itself.

To understand it more clearly, I recognized that the sentence is shorthand and wrote out the implied words. It means: "The final, completed evaluation of this sentence is false." In other words, it's asking you to evaluate if the sentence is true or false, and then compare what you come up with to see if it matches "false".

This reveals that it's, in a way, referring to the future. This is a better explanation than the self-reference explanation. Consider the sentence, "Joe loves philosophy; he'd never be an altruist." In this sentence, the word "he" refers to Joe. That's self-reference because the sentence refers to a part of itself; but this self-reference is harmless.

You're supposed to evaluate if the sentence is true or false. And to do that, you're asked to compare two things:

  1. The final, completed evaluation of this sentence

  2. false

But (1) doesn't exist yet at the time you're evaluating the sentence.

At the time you're first evaluating the sentence, (1) is undefined. That's the problem and the source of the "paradox".

Sometimes when you read a sentence, you can't figure out what something means until you finish the rest of the sentence. That's OK. It can be due to forward references or the need for context. The problem is that you need to already know the evaluation of the liar's paradox sentence (from the future) at the time you're creating the evaluation.

In terms of lisp computer code, we could write it something like this:

(equals? (evaluate self) false)

But what is "self"? It's (equals? (evaluate self) false). And what is the "self" in there? It's (equals? (evaluate self) false). Each time you expand the self to its meaning, you get another self that needs expanding to some meaning, and you can never finish expanding everything. So the sentence is poorly defined.

Or we could look at it as ruby code with a blatant infinite loop:

def liars_paradox()
  # to compute the return value, we first need that same return value – so this never finishes
  return liars_paradox() == false
end

This is no more paradoxical than any other non-halting program like one that loops with while true.
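
For comparison, here's a minimal ruby sketch (my own example, not from the original post) of such an ordinary non-halting program:

def ordinary_non_halting_program
  # nothing mysterious here: the loop never produces a final result to return
  while true
  end
end

Neither program halts, and neither is paradoxical – each just never delivers the completed evaluation it would need in order to finish.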

(This problem has been solved before, e.g. this link makes the same point as my lisp code answer. I don't know how original my English language explanation is. I reinvented these solutions myself rather than reading them.)


Elliot Temple | Permalink | Comments (7)

Gobble is Better Than Blue Apron

I tried 3 weeks (9 meals) of Blue Apron to compare to Gobble (which i used for several months). These services deliver a weekly food box for you to cook. It contains exactly the ingredients you need to make specific meals, along with the recipes.

gobble meals are significantly easier and faster to prepare, cost 20% more, and were more gourmet. selection is similar (around 7-8 meals to choose from per week. maybe blue apron had slightly more. they seem to always have 3 vegetarian meals, so if you usually don't want those then you don't get a lot of choices.)

gobble puts more effort into side dishes, more complete meals, a bit fancier meals, and sauces. i also like their packaging better because they group everything for a meal in an outer bag (except meat separate). blue apron groups up the small things for each meal in a bag but then sends several loose things to deal with.

gobble sends more food that's already partially prepared. e.g. partly cooked rice or mashed potatoes that's done in 2min in microwave. or they've sent me complete raviolis with filling. or they'll send garlic shallot confit ready to add to your dish instead of making you chop garlic. they've also sent cooked meat that you just have to heat sometimes when they want it prepared a specific way that's harder or takes longer. and they'll send complicated sauces they already put significant effort into making to save you time. i also liked all of gobble's salads (and i'm not much of a salad person), but blue apron sent a lazy simple salad that didn't impress. blue apron commonly has you put stuff in the oven for 20-30min, whereas gobble tries to get meals done in 15min. i don't really mind time leaving things in the oven but blue apron is also significantly more time preparing the food.

blue apron was fun for a bit to compare and practice cooking (since you do more actual cooking from closer to scratch) but gobble is way better overall IMO. blue apron isn't bad though, i'd use them over going to the grocery store. both services work well and consistently provide good food.

i cancelled my blue apron account (for some stupid reason you can only skip deliveries one by one, but you can't skip everything by default). next time i feel like putting higher effort into cooking, i'll try a different service (there's a bunch like chefd, peach dish, and hello fresh).

oh and on the subject of food i've gotta recommend Fasta Pasta. it's a special plastic container for microwave cooking. i cook all my pasta with it; it's easier and always comes out perfect (microwaves are more consistent about how much they cook stuff). it does rice and some other stuff in the microwave too. i used it instead of a pot on the stove for cooking some pasta from my meal kits.


Elliot Temple | Permalink | Comments (0)

Atlas Shrugged Theme: Don't Overreach

One of the themes of Atlas Shrugged is one of the themes of my own philosophy: Don't overreach.

I say: If you exceed your abilities, if you try to do more than you can manage, then you will make more mistakes. More things will go wrong. If you do this too much it'll overwhelm your capability to deal with mistakes. That's overreaching: doing activities where your rate of making mistakes is too high for your ability to find and fix mistakes. Overreaching is bad, and pretty much all adult lives have tons of overreaching. The situation is so bad people just give up on correctness and try to muddle through life putting up with many unsolved mistakes.

Rand doesn't say that. But she says something related.

In Atlas Shrugged, the world has a bunch of nasty problems. Dagny tries to ignore them and run a railroad anyway, but the problems are pretty damn overwhelming and this doesn't work out in the long run despite how amazing Dagny is. What should she have done instead? Retreat from a world where she and her values aren't wanted. Give up the railroad. Give up on big accomplishments in a screwed-up world. Live her own life. Keep it simpler and smaller, like how they live in Galt's Gulch. But keep it pure with no corruption. Live in a way where everything works and there are no compromises, downsides, disasters, people working to make your life harder, looters stealing from you, taxes draining you, and so on.

In other words, Atlas Shrugged says to scale back your ambitions to projects which are reasonably possible in good ways – without tons of stuff going wrong. That's what John Galt and his allies do. They won't participate in corrupt, broken projects. They will only live life in ways that work. They'd rather have a single hand-tooled tractor in Galt's Gulch, or a little farm, or a few barrels a day of oil production, or a cabin instead of a skyscraper ... as long as it's fully theirs, it's fully pure and proper and correct ... there's nothing broken or wrong or bad about it.

In other words, it's better to have less without errors, corruptions, sacrifices, and moral compromises, rather than to have more at the cost of your soul or the cost of it not actually working right.

It's also like how you should learn things in general (e.g. typing, martial arts moves, or video game techniques): do it slowly and correctly and then speed up. Do not do it fast and wrong and try to fix the mistakes when there's a bunch of them. Speed up gradually so you only deal with a few mistakes at a time and keep the mistakes manageable.

In the introduction of Atlas Shrugged (35th anniversary edition), Peikoff quotes Rand's notes:

Her [Dagny's] error—and the cause of her refusal to join the strike—is over-optimism and over-confidence (particularly this last).

...

Over-confidence-in that she thinks she can do more than an individual actually can. She thinks she can run a railroad (or the world) single-handed, she can make people do what she wants or needs, what is right, by the sheer force of her own talent

Overreaching isn't just for beginners who try to act like experts. Even a great hero can overreach.


Elliot Temple | Permalink | Comments (23)

Learning From Losing Arguments

ppl argue badly. this is ok once. even a few times is not a big deal.

from this, they need to learn things like:

  • they might suck at arguing.

  • they might be biased.

  • they might be dishonest.

don’t accept these things. you don’t know. call them maybes.

from there, pivot to: how do i figure these things out? how does one get better at them? what sort of path is there to develop intellectually in order to get better at this stuff and/or even be able to evaluate it?

what people routinely do instead is:

  • get discouraged by an arguing failure.

  • refuse to pivot to the underlying issues that are raised by the failure.

  • or pivot briefly then forget it, rather than it being an ongoing project.

  • reset back to where they started and argue badly again with nothing having changed.

overall, people lose track of the situation – that they might be e.g. dishonest and they should be investigating. this is no accident, and it destroys their ability to make progress.

sure, say what you think, see what happens, make mistakes. try stuff. but don’t repeat this endlessly. don’t repeat it much at all. move on. find a problem or three and actually pursue them instead of starting over again next conversation with the now-unreasonable default assumption of your competence, rationality and honesty. those are things that are rare, and shouldn’t even be expected by default.

move on to trying to develop competence, rationality, honesty, intellectual skills. make that an actual goal and actually consider if your actions are in pursuit of that goal. don’t just carelessly argue some point that comes up as if you’ll learn much. if you aren’t taking discussions to conclusions with persistent energy, and you don’t organize your activities, you shouldn’t expect to learn much.


Elliot Temple | Permalink | Comments (0)

Having It All!? Reason and Normal Stuff?

Do you have to choose between reason and other stuff like marriage, or can you have it all?

You can’t have a conventional life and add reason on top. You can add little bits and pieces and fragments of reason on top, but a conventional life simply contradicts reason in major ways.

What if you choose reason first, then can you have it all, or do you have to give something up? Neither. You can genuinely not want some things, and decide they aren’t appealing, and have everything you rationally want. So then you’re happy: you get the things you value in the future, when you’re making rational decisions about your values. But you don’t get all the things that sound tempting now; you change your mind about some of them.


Elliot Temple | Permalink | Comments (0)

What Kind of Intellectual Are You?

Are you a serious intellectual? You can learn a lot about yourself from how you reacted after you found out about Fallible Ideas.

First, did you read an important FI-related book during the next six months? If not, (and you’re age 15+,) you’re not very interested in FI. (For younger people, it’s less clear, and you may need to look at other forms of engagement to judge interest.)

For the sake of discussion, I’ll suppose the book you read is Atlas Shrugged (AS). There are lots of great books to choose from.

Did you read AS thoughtfully? Did you write down thoughts as you read? Can you remember thoughts you had about it well enough to write them down now?

Did you have questions about the book? What did you do to get them answered? If you had fewer than 100 questions about AS, you aren’t the kind of person who is going to get very far with FI. FI is for people who both want to know things and go seeking answers. You should do that without being told to.

Did you have followup questions after your questions? If you never asked 5+ questions in a row to keep getting more depth about an issue, you aren’t very interested in learning about it.

I know you’ve got excuses. You’re used to a world, such as school, where there’s no one to answer your questions, so you learned not to think of them or not to ask them. Well, so what? Who cares what your excuse is? For whatever reason, you are not suitable for serious learning now.

Can you change? It’s conceivable. But don’t expect it before it happens. Don’t count on it. Most people don’t change in big ways about reason. If you find that discouraging, rational thinking is not for you.


Your relationship with reason can be used for considering practical decisions. E.g. should you get married, if you want to, but you heard a rational argument criticizing it? The key question here is whether you can do better than tradition, or should live with a flawed tradition. In order to attempt to outdo a major societal tradition, you need to be really serious about rational thinking. It will take a ton of serious thought (from you and anyone else involved).

If you didn’t read a book within 6 months, or weren’t bubbling with questions about it, then you’re not going to do better than tradition about marriage. You aren’t wiser than your society. You aren’t suited for paving your own way in life. You aren’t a pioneering first-mover.


Elliot Temple | Permalink | Comments (0)

Bad Parenting List

This is an incomplete list of unacceptable, uncivilized parenting behaviors. These are pretty normal in our culture, but should be viewed with horror. They're pretty blatantly intolerable to a reasonable, classically-liberal-minded person.

  • Making children do things they don't want to (e.g. making a baby go in a carseat when their crying indicates they don't want to, or enforcing an unwanted bedtime, or making a child brush his teeth or take a bath when he'd rather not, or making a child go to school). In general anything that causes crying or "tantrums" indicates the parent is doing something wrong.

  • Punishments.

  • Anything that relies on parent being bigger/stronger than child, such as spanking or carrying a child from one location to another when child doesn't want to be moved (which is literally assault and kidnapping – it should be a crime).

  • Rules that child doesn't like.

  • The parent putting his foot down or doing "nicer" pressures and manipulations to get his way. Frowning, having a stressed voice, or being selectively less energetically helpful/friendly/cheerful can be pressuring and controlling. (E.g. parent is "too tired" to do an activity child wants, but would suddenly be available if child wanted to do a different activity that parent cares about more.)

  • Screen time limits.

  • Not getting a baby an iPad and helping them get apps and use it (by around 6 months old, for people who can afford one).

  • Having multiple children. (Parenting one child well is hard enough. Having more kids is much harder. That guarantees more mistakes in the treatment of the first child. Knowingly, intentionally guaranteeing to treat one's first child worse is a betrayal).

  • Posting baby pictures online (privacy violation).

  • Skipping vaccines (scarily trendy lately and literally killing kids), or denying children anesthetics for shots.

  • Circumcision (genital mutilation).

  • Having a child evaluated by a psychiatrist or giving him psych drugs, or letting a school do this. ("Mental illness" is a myth, and psychiatry is an attempt to "scientifically" legitimize the use of violence against non-criminal non-conformists without following the rule of law. People today are imprisoned without getting a trial, with psychiatry as the excuse. Psychiatric drugs literally cause brain damage – as their primary effect, not a side effect.)

  • Giving children (oral) herpes (sometimes called "cold sores"). Herpes is widespread and incurable, and is often spread by people kissing babies without adequate medical knowledge or herpes testing.

  • Not prioritizing what child wants. The parent's proper role is as a helper to enable the child to get what he wants, not to control the child. That means e.g. helping child get sugar and other foods he likes, and "violent" games and movies he wants.

Read more details.

Ask questions or add to the list, in the comments below!


Elliot Temple | Permalink | Comments (14)

Standards of Understanding

The Fountainhead by Ayn Rand:

“The worst thing about dishonest people is what they think of as honesty,” [Gail Wynand] said. “I know a woman who’s never held to one conviction for three days running, but when I told her she had no integrity, she got very tight-lipped and said her idea of integrity wasn’t mine; it seems she’d never stolen any money. Well, she’s one that’s in no danger from me whatever. I don’t hate her. I hate the impossible conception you love so passionately, Dominique.”

I thought of a related point:

The worst thing about confused, ignorant people is what they think of as understanding. They don't understand stuff (not even close), and that somehow meets their standards of understanding, and they stop trying to understand more.


Elliot Temple | Permalink | Comments (0)

Critical Rationalism Criticisms?

I believe there are no correct, unaddressed criticisms of Karl Popper’s epistemology (Critical Rationalism – CR). If I'm mistaken, I'd like to be told. If others are mistaken, I'd like them to find out and take an interest in CR.

I've found CR criticism falls into some broad categories, with some overlap:

  1. The people who heard Popper is wrong secondhand but didn’t read much Popper and have no idea what CR is actually about. They often try to rely on secondary sources to tell them what CR says, but most secondary sources on CR are bad.

  2. The pro-induction people who don’t engage with Popper’s ideas, just try to defend induction. They don’t understand Popper’s criticism of induction and focus on their own positive case for induction. They also commonly admit that some criticisms of induction are correct, but still won’t change their minds or start learning the solution to induction’s flaws (CR).

  3. The falsificationism straw man, which misinterprets Popper as advocating a simplistic, false view. (There are some other standard myths too, e.g. that Popper was a positivist.)

  4. Critics of The Logic of Scientific Discovery who ignore Popper’s later works and don’t engage with CR's best ideas.

  5. Critics with points which Popper answered while he was still alive. Most criticisms of Popper are already answered in his books, and if not there then in this collection of Popper criticism and Popper’s replies. (I linked volume 2, which has Popper’s replies; you will want volume 1 also.)

If you believe Popper is wrong, then: Do you believe you personally understand CR? And have you looked at Popper’s books and replies to his critics to see if your point is already answered? If so, have you written down why Popper is mistaken? If not, do you believe someone else has done all this? (They understand CR, are familiar with Popper’s books including his replies to his critics, and wrote down why Popper is mistaken.)

Whether it’s by you or someone else, you can reply with a reference to where this is publicly written down in English. I will answer it (or refer you to an answer or get a colleague to answer). Here is what I expect in return: if your reference is mistaken, you will study CR. You were wrong about CR’s falsity, so it’s time to learn it. If you would be unwilling to learn CR even if you agree that your referenced criticism of CR is false, then you shouldn’t have an opinion on CR. If you still wouldn’t want to learn CR even if all your objections were wrong, then you either aren’t participating in the field (epistemology) or shouldn’t be. (I have nothing against lay people as long as they are interested in learning and thinking. I do have something against people, whether lay or philosophy professors, who state their opinion that Popper is wrong but would not be willing to learn about Popper even if they found out their negative beliefs about Popper are false.)

If you believe one of the many criticisms of Popper is correct, but you don’t know which one and don’t want to pick one, then you are not treating the matter rationally. It’s unacceptable if your plan is, on having one criticism answered, to simply pick another one, and repeat indefinitely. You’re welcome to have one good reference which makes multiple important points, but you don’t get to just keep referencing different critical authors repetitively (as each one fails, you pick another) while not reconsidering your own beliefs. You need to stick your own neck out – as I do. If I can’t answer a challenge to CR I will reconsider my views.

If you want to bring up a couple of criticisms at the start, written in different places, and you won't add any more later, then that could be reasonable – but provide a brief explanation of why it's needed. In this case, where you want to bring up multiple points by different authors, I'd expect you to reference specific sections or short works, not multiple whole books. E.g. you could reasonably say you have 3 criticisms of Popper: chapter 3 of book X, chapter 7 of book Y, and paper Z.

Alternatively, if Popper is mistaken but no one has actually written correct criticism (including you), then how do you know he's mistaken? Maybe he's not!

Note: I'm interested in criticisms like "Popper's idea X is false b/c Y.", not like "I wasn't convinced by Popper's writing on topic X." (The second one is compatible with Popper being correct, and is too vague to answer.)


Broadly, the reason criticisms of CR fail is the critics do not understand CR. Having read a lot of Popper criticism, I can report this theme is nearly universal in my experience. (There is one problem with CR, which sometimes comes up, which I fixed.) CR is hard to understand because it disagrees with over 2000 years of epistemological tradition. And people in general massively underestimate the effort it takes to understand ideas well. (People seem to think they can read a philosophy book once and understand it, but that isn’t how it works – study and discussion are needed to clear up misunderstandings.) Pointing out misunderstandings of CR, with quotes, is one of the typical ways I answer CR criticisms.

Secondarily, Popper criticism often fails because the critic is much less smart and knowledgeable than Popper (one of the world’s best ever thinkers). I think people can get smarter and more knowledgeable if they make the effort, but most people don’t make that effort in a serious, persistent way and put a ton of time into it. I will not use this as an argument against any particular criticism. It’s not an argument, but it is a part of the world’s intellectual/scholarship situation which I think matters, and it helps explain what’s going on. It’s hard to criticize your intellectual betters, but easy to misunderstand and consequently vilify them. More generally, people tend to be hostile to outliers and sympathize with more conventional and conformist stuff – even though most great new ideas, and great men, are outliers.


See also: CR reading recommendations.


Elliot Temple | Permalink | Comments (9)

My Paths Forward Policy

If you think I'm mistaken or ignorant about something important, I want to hear it. I am open to public comments and criticism. See Paths Forward for an explanation of my methodology for not blocking error correction (always having some Path Forward so that if I'm mistaken, and someone knows it, and they're willing to tell me, then I can be told and I won't ignore it).

I do not reply to everything addressed to me, at all venues. I do reply to a fair amount, but I don't have time to answer everything. However, I will guarantee you some attention if you follow a method of getting my attention which anyone can follow with predictable success. Here's what you do:

  1. Post your issue to the FI Yahoo Group. Format your post correctly, e.g. by making it plain text with attributed quotes. Read the guidelines for quote formatting. For most issues, you should quote from something you're arguing with and point out a mistake in the quote. You can also do general comments and respond to your own paraphrases of my views, but pointing out at least one mistake in a quote is important too.

  2. If you don't receive a reply from anyone within a few days, post a self-reply with some followup points and try again. If the first post didn't already include them, add a brief statement of why this is important and a brief summary (one paragraph max, each). Also make sure you're providing a clear question or call to action. What do you want to happen next? What sort of response do you want? Mention you want a Path Forward from me.

  3. If you still don't receive a reply within a few days, write a self-reply asking why you didn't receive a reply, and include a brief statement of why replying to you matters and what you're looking for.

  4. If you still don't receive a reply within a few days, email me personally ([email protected]) and ask for an answer and say that you've read Paths Forward. Link the Yahoo Group topic, or at least give the subject line and date.

Summary: Post to FI. Follow up on why it matters and what reply you want. Follow up asking why no one is answering. Follow up by emailing me. You will get an answer by the end of this process.

Notes

You're welcome to try contacting me in other ways, and that often works, but no promises.

Formatting posts correctly is an intentional barrier to entry. If you aren't willing to do that, I suggest you post to my blog comments (which don't have formatting requirements). I consider the FI formatting the best for a serious discussion, so if you're looking for a serious discussion you should learn it. I like this barrier to entry because I believe it improves discussion while avoiding unpredictable, subjective judgements (like about the quality of your writing and ideas – I will not ignore you because I believe your comments are low quality, as long as you follow the steps listed above.)

I don't answer everything the first time, but if you are persistent as stated above, then I can guarantee you an answer.

The reasons I want you to post on my public forum are that I want other forum readers to benefit from my answer, I want my answer to have a public permalink so I can refer other people to it in the future, and I want other people to be able to answer you (instead of me).

If you receive an answer from another person, and you think it's inadequate and really want an answer from me personally, you can continue with the steps outlined above and explain this (say why the answers from the other people are inadequate and why you want my personal attention).

I (or someone else) commonly will answer a point before reaching step 4. Often at step 1. (I'm most responsive on the FI forum, so just posting there with correct formatting is frequently enough to get a reply. I'm next most responsive to personal email, then blog comments, and then less responsive to everything else).

Like many busy people, I am less inclined to answer if I think something is low quality. I certainly don't want to reply to every low quality thing addressed to me. However, if you follow the steps then you'll get a reply from someone, including from me if necessary. (Often other people are fully capable of answering issues, especially the comments I consider lower quality, so I don't always want to do it personally if someone else will do it.)

If you don't want your content to be exclusive to my forum, that's fine. You're welcome to put it on your own website and post a link or copy/paste.

If you want me to address something which costs money, offer me a free copy somewhere within the first 3 steps. If you won't do that, say why.

If I still don't answer after step 4, your personal email went in my spam folder. I don't think this is a common problem, but if it happens feel free to post to FI again and bring it up and I'll see it or someone else will who can contact me. Or it'd be fine to post 10 blog comments in a row or tweet me or something until I notice. Say that you did the 4 Paths Forward steps and I didn't reply, so maybe the email went in spam, and identify the FI posts in question so I can find them. Or you can email Justin or Alan and they'll get my attention. I mention this because spam filtering is a conceivable problem that could get in the way of Paths Forward, and I don't want that to happen. Email is not 100% reliable for contacting me, but it's pretty good and there are solutions if it fails.

Alternatives

What if you don't want to be so demanding and challenging as to ask for a Path Forward from Elliot/CR/FI? Maybe you expect you're wrong (rather than offering a correction), but you're still interested in pursuing the issue and learning something and getting it resolved? Perhaps you want some Path Forward for yourself to make progress?

Follow up on your own posts with new questions, new explanations of the issues and their importance, new angles and perspectives. Rewrite what you're saying a different way. And report what you've done to make progress, what effort you've put in (and what the result was), what you're planning to do, and if you're running out of ideas and if you'd like help with something. Keep at it over time. Be persistent, honest and curious and FI people will want to help. And make it easy for them: take short advice/comments/suggestions and then do a bunch more on your own initiative (and share this so they see giving you help was worthwhile), rather than expecting them to guide you step by step. Put effort into your learning as independently as you can, e.g. by taking book and link suggestions and doing series of blog posts about them as you read. Be a pleasure to help and offer more value than you ask for. (If you don't know how to offer value, but want to, ask.)


Elliot Temple | Permalink | Comments (3)

Empiricism and Instrumentalism

Gyrodiot commented defending instrumentalism.

I'm going to clarify what I mean about "instrumentalism" and "empiricism". I don't know if we actually disagree or there's a misunderstanding.

FI has somewhat of a mixed view here (reason and observation are both great), and objects to an extreme focus on one or the other. CR and Objectivism both say you don't have to, and should not, choose between reason and observation. We object to the strong "rationalists" who want to sit in an armchair and reason out what reality is like without doing any science, and we object to the strong "empiricists" who want to look at reality and do science without thinking.

Instrumentalism means that theories are only or primarily instruments for prediction, with little or no explanation or philosophical thought. Our view is that observation and prediction are great and valuable, but aren't alone in being so great and valuable. Some important ideas – such as the theory of epistemology itself – are primarily non-empirical.

There's a way some people try to make philosophy empirical. It's: try different approaches and see what the results are (and try to predict the results of acting according to different philosophies of science). But how do you judge the results? What's a good result? More accurate scientific predictions, you say. But which ones? How do you decide which predictions to value more than others? Or do you say every prediction is equal and go for sheer quantity? If quantity, why, and how do you address that with only empiricism and no philosophical arguments? And you want more accurate predictions according to which measures? (E.g. do you value lower error size variance or lower error size mean, or one of the infinitely many possible metrics that counts both of them in some way?)
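
As a toy illustration of that point (my own made-up numbers, not from any real study), here are two predictors whose ranking flips depending on whether you score them by mean error or by error variance:

# Toy numbers: absolute errors of two predictors on the same five predictions.
errors_a = [2.0, 2.0, 2.0, 2.0, 2.0]  # steady, medium-sized errors
errors_b = [0.0, 0.0, 0.0, 0.0, 5.0]  # usually perfect, occasionally way off

def mean(xs)
  xs.sum / xs.size
end

def variance(xs)
  m = mean(xs)
  xs.map { |x| (x - m) ** 2 }.sum / xs.size
end

puts "mean error:     A=#{mean(errors_a)}, B=#{mean(errors_b)}"          # B looks better (1.0 vs 2.0)
puts "error variance: A=#{variance(errors_a)}, B=#{variance(errors_b)}"  # A looks better (0.0 vs 4.0)

The data alone doesn't say which of these scores (or the infinitely many other possible metrics) you should optimize; that's a philosophical choice.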

How do you know which observations to make, and which portion of the available facts to record about what you observe? How do you interpret those observations? Is the full answer just to predict which way of making observations will lead to the most correct predictions later on? But how do you predict that? How do you know which data will turn out useful to science? My answer is you need explanations of things like which problems science is currently working on, and why, and the nature of those problems – these things help guide you in deciding what observations are relevant.

Here are terminology quotes from BoI:

Instrumentalism   The misconception that science cannot describe reality, only predict outcomes of observations.

Note the "cannot" and "only".

Empiricism   The misconception that we ‘derive’ all our knowledge from sensory experience.

Note the "all" and the "derive". "Derive" refers to something like: take a set of observation data (and some models and formulas with no explanations, philosophy or conceptual thinking) and somehow derive all human knowledge, of all types (even poetry), from that. But all you can get that way are correlations and pattern-matching (to get causality instead of correlation you have to come up with explanations about causes and use types of criticism other than "that contradicts the data"). And there are infinitely many patterns fitting any data set, of which infinitely many both will and won't hold in the finite future, so how do you choose if not with philosophy? By assuming whichever patterns are computable by the shortest computer programs are the correct ones? If you do that, you're going to be unnecessarily wrong in many cases (because that way of prediction is often wrong, not just in cases where we had no clue, but also in cases when explanatory philosophical thinking could have done better). And anyway how do you use empiricism to decide to favor shorter computer programs? That's a philosophy claim, open to critical philosophy debate (rather than just being settled by science), of exactly the kind empiricism was claiming to do without.

Finally I'll comment on Yudkowsky on the virtue of empiricism:

The sixth virtue is empiricism. The roots of knowledge are in observation and its fruit is prediction.

I disagree about "roots" because, as Popper explained, theories are prior to observations. You need a concept of what you're looking for, by what methods, before you can fruitfully observe. Observation has to be selective (like it or not, there's too much data to record literally all of it) and goal-directed (instead of observing randomly). So goals and ideas about observation method precede observation as "roots" of knowledge.

Note: this sense of preceding does not grant debating priority. Observations may contradict preceding ideas and cause the preceding ideas to be rejected.

And note: observations aren't infallible either: observations can be questioned and criticized because, although reality itself never lies, our ideas that precede and govern observation (like about correct observational methods) can be mistaken.

Do not ask which beliefs to profess, but which experiences to anticipate.

Not all beliefs are about experience. E.g. if you could fully predict all the results of your actions, there would still be an unanswered moral question about which results you should prefer or value, which are morally better.

Always know which difference of experience you argue about.

I'd agree with often but not always. Which experience is the debate about instrumentalism and empiricism about?


See also my additional comments to Gyrodiot about this.


Elliot Temple | Permalink | Comments (0)

Replies to Gyrodiot About Fallible Ideas, Critical Rationalism and Paths Forward

Gyrodiot wrote at the Less Wrong Slack Philosophy chatroom:

I was waiting for an appropriate moment to discuss epistemology. I think I understood something about curi's reasoning about induction After reading a good chunk of the FI website. Basically, it starts from this:

He quotes from: http://fallibleideas.com/objective-truth

There is an objective truth. It's one truth that's the same for all people. This is the common sense view. It means there is one answer per question.

The definition of truth here is not the same as The Simple Truth as described in LW. Here, the important part is:

Relativism provides an argument that the context is important, but no argument that the truth can change if we keep the context constant.

If you fixate the context around a statement, then the statement ought to have an objective truth value

Yeah. (The Simple Truth essay link.)

In LW terms that's equivalent to "reality has states and you don't change the territory by thinking differently about the map"

Yeah.

From that, FI posits the existence of universal truths that aren't dependent on context, like the laws of physics.

More broadly, many ideas apply to many contexts (even without being universal). This is very important. DD calls this "reach" in BoI (how many contexts does an idea reach to?), I sometimes go with "generality" or "broader applicability".

The ability for the same knowledge to solve multiple problems is crucial to our ability to deal with the world, and for helping with objectivity, and for some other things. It's what enabled humans to even exist – biological evolution created knowledge to solve some problems related to survival and mating, and that knowledge had reach which lets us be intelligent, do philosophy, build skyscrapers, etc. Even animals like cats couldn't exist, like they do today, without reach – they have things like behavioral algorithms which work well in more than one situation, rather than having to specify different behavior for every single situation.

The problem with induction, with this view is that you're taking truths about some contexts to apply them to other contexts and derive truths about them, which is complete nonsense when you put it like that

Some truths do apply to multiple contexts. But some don't. You shouldn't just assume they do – you need to critically consider the matter (which isn't induction).

From a Bayesian perspective you're just computing probabilities, updating your map, you're not trying to attain perfect truth

Infinitely many patterns both do and don't apply to other contexts (such as patterns that worked in some past time range applying tomorrow). So you can't just generalize patterns to the future (or to other contexts more generally) and expect that to work, a la induction. You have to think about which patterns to pay attention to and care about, and which of those patterns will hold in what ranges of contexts, and why, and use critical arguments to improve your understanding of all this.

We do [live in our own map], which is why this mode of thought with absolute truth isn't practical at all

Can you give an example of some practical situation you don't understand how to address with FI thinking, and I'll tell you how or concede? And after we go through a few examples, perhaps you'll better understand how it works and agree with me.

So, if induction is out of the way, the other means to know truth may be by deduction, building on truth we know to create more. Except that leads to infinite regress, because you need a foundation

CR's view is induction is not replaced with more deduction. It's replaced with evolution – guesses and criticism.

So the best we can do is generate new ideas, and put them through empirical test, removing what is false as it gets contradicted

And we can use non-empirical criticism.

But contradicted by what? Universal truths! The thing is, universal truths are used as a tool to test what is true or false in any context since they don't depend on context

Not just contradicted by universal truths, but contradicted by any of our knowledge (lots of which has some significant but non-universal reach). If an idea contradicts some of our knowledge, it should say why that knowledge is mistaken – there's a challenge there. See also my "library of criticism" concept in Yes or No Philosophy (discussed below) which, in short, says that we build up a set of known criticisms that have some multi-context applicability, and then whenever we try to invent a new idea we should check it against this existing library of known criticisms. It needs to either not be contradicted by any of the criticisms or include a counter-argument.
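
Here's a toy ruby sketch of how I picture that check working (the criticisms and the idea's fields are made-up placeholders for illustration, not the actual library):

# Toy model: a library of known criticisms. An idea survives if every criticism
# either doesn't apply to it or is answered by a counter-argument.
CRITICISMS = {
  "contradicts observed data"    => ->(idea) { idea[:fits_data] },
  "ambiguous about what to do"   => ->(idea) { idea[:unambiguous] },
  "solves no problem anyone has" => ->(idea) { idea[:addresses_a_problem] },
}

def surviving?(idea)
  CRITICISMS.all? do |name, passes|
    passes.call(idea) || idea[:counter_arguments].include?(name)
  end
end

idea = {
  fits_data: true,
  unambiguous: false,
  addresses_a_problem: true,
  counter_arguments: ["ambiguous about what to do"],
}
puts surviving?(idea)  # => true: each criticism is either passed or answered

New criticisms get added to the library over time, and an idea that fails a check can be revised into a variant that passes.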

But they are so general that you can't generate new idea from them easily

The LW view would completely disagree with that: laws of physics are statements like every other, they are solid because they map to observation and have predictive power

CR says to judge ideas by criticism. Failure to map to observation and lack of predictive power are types of criticism (absolutely not the only ones), which apply in some important range of contexts (not all contexts – some ideas are non-empirical).

Prediction is great and valuable but, despite being great, it's also overrated. See chapter 1 of The Fabric of Reality by David Deutsch and the discussion of the predictive oracle and instrumentalism.

http://www.daviddeutsch.org.uk/books/the-fabric-of-reality/excerpt/

Also you can use them to explain stuff (reductionism) and generate new ideas (bottom-up scientific research)

From FI:

When we consider a new idea, the main question should be: "Do you (or anyone else) see anything wrong with it? And do you (or anyone else) have a better idea?" If the answers are 'no' and 'no' then we can accept it as our best idea for now.

The problem is that by having a "pool of statements from which falsehoods are gradually removed" you also build a best candidate for truth. Which is not, at all, how the Bayesian view works.

FI suggests evolution is a reliable way to suggest new ideas. It ties well into the framework of "generate by increments and select by truth-value"

It also highlights how humans are universal knowledge machines, that anything (in particular, an AGI) created by a human would have knowledge than humans can attain too

Humans as universal knowledge creators is an idea of my colleague David Deutsch which is discussed in his book, The Beginning of Infinity (BoI).

http://beginningofinfinity.com

But that's not an operational definition : if an AGI creates knowledge much faster than any human, they won't ever catch up and the point is moot

Yes, AGI could be faster. But, given the universality argument, AGI's won't be more rational and won't be capable of modes of reasoning that humans can't do.

The value of faster is questionable. I think no humans currently maximally use their computational power. So adding more wouldn't necessarily help if people don't want to use it. And an AGI would be capable of all the same human flaws like irrationalities, anti-rational memes (see BoI), dumb emotions, being bored, being lazy, etc.

I think the primary cause of these flaws, in short, is authoritarian educational methods which try to teach the kid existing knowledge rather than facilitate error correction. I don't think an AGI would automatically be anything like a rational adult. It'd have to think about things and engage with existing knowledge traditions, and perhaps even educators. Thinking faster (but not better) won't save it from picking up lots of bad ideas just like new humans do.

That sums up the basics, I think The Paths Forwards thing is another matter... and it is very, very demanding

Yes, but I think it's basically what effective truth-seeking requires. I think most truth-seeking people do is not very effective, and the flaws can actually be pointed out as not meeting Paths Forward (PF) standards.

There's an objective truth about what it takes to make progress. And separate truths depending on how effectively you want to make progress. FI and PF talk about what it takes to make a lot of progress and be highly effective. You can fudge a lot of things and still, maybe, make some progress instead of going backwards.

If you just wanna make a few tiny contributions which are 80% likely to be false, maybe you don't need Paths Forward. And some progress gets made that way – a bunch of mediocre people do a bunch of small things, and the bulk of it is wrong, but they have some ability to detect errors so they end up figuring out which are the good ideas with enough accuracy to slowly inch forwards. But, meanwhile, I think a ton of progress comes from a few great (wo)men who have higher standards and better methods. (For more arguments about the importance of a few great men, I particularly recommend Objectivism. E.g. Roark discusses this in his courtroom speech at the end of The Fountainhead.)

Also, FYI, Paths Forward allows you to say you're not interested in something. It's just, if you don't put the work into knowing something, don't claim that you did. Also you should keep your interests themselves open to criticism and error correction. Don't be an AGI researcher who is "not interested in philosophy" and won't listen to arguments about why philosophy is relevant to your work. More generally, it's OK to cut off a discussion with a meta comment (e.g. "not interested" or "that is off topic" or "I think it'd be a better use of my time to do this other thing...") as long as the meta level is itself open to error correction and has Paths Forward.

Oh also, btw, the demandingness of Paths Forward lowers the resource requirements for doing it, in a way. If you're interested in what someone is saying, you can be lenient and put in a lot of effort. But if you think it's bad, then you can be more demanding – so things only continue if they meet the high standards of PF. This is win/win for you. Either you get rid of the idiots with minimal effort, or else they actually start meeting high standards of discussion (so they aren't idiots, and they're worth discussing with). And note that, crucially, things still turn out OK even if you misjudge who is an idiot or who is badly mistaken – b/c if you misjudge them all you do is invest less resources initially but you don't block finding out what they know. You still offer a Path Forward (specifically that they meet some high discussion standards) and if they're actually good and have a good point, then they can go ahead and say it with a permalink, in public, with all quotes being sourced and accurate, etc. (I particularly like asking for simple things which are easy to judge objectively like those, but there are other harder things you can reasonably ask for, which I think you picked up on in some ways your judgement of PF as demanding. Like you can ask people to address a reference that you take responsibility for.)

BTW I find that merely asking people to format email quoting correctly is enough barrier to entry to keep most idiots out of the FI forum. (Forum culture is important too.) I like this type of gating because, contrary to moderators making arbitrary/subjective/debatable judgements about things like discussion quality, it's a very objective issue. Anyone who cares to post can post correctly and say any ideas they want. And it lacks the unpredictability of moderation (it can be hard to guess what moderators won't like). This doesn't filter on ideas, just on being willing to put in a bit of effort for something that is productive and useful anyway – proper use of nested quoting improves discussions and is worth doing and is something all the regulars actively want to do. (And btw if someone really wants to discuss without dealing with formatting they can use e.g. my blog comments which are unmoderated and don't expect email quoting, so there are still other options.)

It is written very clearly, and also wants to make me scream inside

Why does it make you want to scream?

Is it related to moral judgement? I'm an Objectivist in addition to a Critical Rationalist. Ayn Rand wrote in The Virtue of Selfishness, ch8, How Does One Lead a Rational Life in an Irrational Society?, the first paragraph:

I will confine my answer to a single, fundamental aspect of this question. I will name only one principle, the opposite of the idea which is so prevalent today and which is responsible for the spread of evil in the world. That principle is: One must never fail to pronounce moral judgment.

There's a lot of reasoning for this which goes beyond the one essay. At present, I'm just raising it as a possible area of disagreement.

There are also reasons about objective truth (which are part of both CR and Objectivism, rather than only Objectivism).

The issue isn't just moral judgement but also what Objectivism calls "sanction": I'm unwilling to say things like "It's ok if you don't do Paths Forward, you're only human, I forgive you." My refusal to actively do anti-judgement stuff, and approve of PF alternatives, is maybe more important than any negative judgements I've made, implied or stated.

It hits all the right notes motivation-wise, and a very high number of Rationality Virtues. Curiosity, check. Relinquishment, check. Lightness, check. Argument, triple-check.

Yudkowsky writes about rational virtues:

The fifth virtue is argument. Those who wish to fail must first prevent their friends from helping them.

Haha, yeah, no wonder a triple check on that one :)

Simplicity, check. Perfectionism, check. Precision, check. Scholarship, check. Evenness, humility, precision, Void... nope nope nope PF is much harsher than needed when presented with negative evidence, treating them as irreparable flaws (that's for evenness)

They are not treated as irreparable – you can try to create a variant idea which has the flaw fixed. Sometimes you will succeed at this pretty easily, sometimes it’s hard but you manage it, and sometimes you decide to give up on fixing an idea and try another approach. You don’t know in advance how fixable ideas are (you can’t predict the future growth of knowledge) – you have to actually try to create a correct variant idea to see how doable that is.

Some mistakes are quite easy and fast to fix – and it’s good to actually fix those, not just assume they don’t matter much. You can’t reliably predict mistake fixability in advance of fixing it. Also the fixed idea is better and this sometimes helps lead to new progress, and you can’t predict in advance how helpful that will be. If you fix a bunch of “small” mistakes, you have a different idea now and a new problem situation. That’s better (to some unknown degree) for building on, and there’s basically no reason not to do this. The benefit of fixing mistakes in general, while unpredictable, seems to be roughly proportional to the effort (if it’s hard to fix, then it’s more important, so fixing it has more value). Typically, the small mistakes are a small effort to fix, so they’re still cost-effective to fix.

That fixing mistakes creates a better situation fits with Yudkowsky’s virtue of perfectionism.

(If you think you know how to fix a mistake but it’d be too resource expensive and unimportant, what you can do instead is change the problem. Say “You know what, we don’t need to solve that with infinite precision. Let’s just define the problem we’re solving as being to get this right within +/- 10%. Then the idea we already have is a correct solution with no additional effort. And solving this easier problem is good enough for our goal. If no one has any criticism of that, then we’ll proceed with it...")

Sometimes I talk about variant ideas as new ideas (so the original is refuted, but the new one is separate) rather than as modifying and rescuing a previous idea. This is a terminology and perspective issue – “modifying” and “creating” are actually basically the same thing with different emphasis. Regardless of terminology, substantively, some criticized flaws in ideas are repairable via either modifying or creating to get a variant idea with the same main points but without the flaw.

PF expects to have errors all over the place and act to correct them, but places a burden on everyone else that doesn't (that's for humility)

Is saying people should be rational burdensome and unhumble?

According to Yudkowsky's essay on rational virtues, the point of humility is to take concrete steps to deal with your own fallibility. That is the main point of PF!

PF shifts from True to False by sorting everything through contexts in a discrete way.

The binary (true or false) viewpoint is my main modification to Popper and Deutsch. They both have elements of it mixed in, but I make it comprehensive and emphasized. I consider this modification to improve Critical Rationalism (CR) according to CR's own framework. It's a reform within the tradition rather than a rival view. I think it fits the goals and intentions of CR, while fixing some problems.

I made educational material (6 hours of video, 75 pages of writing) explaining this stuff which I sell for $400. Info here:

https://yesornophilosophy.com

I also have many relevant, free blog posts gathered at:

http://curi.us/1595-rationally-resolving-conflicts-of-ideas

Gyrodiot, since I appreciated the thought you put into FI and PF, I'll make you an offer to facilitate further discussion:

If you'd like to come discuss Yes or No Philosophy at the FI forum, and you want to understand more about my thinking, I will give you a 90% discount code for Yes or No Philosophy. Email [email protected] if interested.

Incertitude is lack of knowledge, which is problematic (that's for precision)

The clarity/precision/certitude you need is dependent on the problem (or the context if you don’t bundle all of the context into the problem). What is your goal and what are the appropriate standards for achieving that goal? Good enough may be good enough, depending on what you’re doing.

Extra precision (or extra anything else) is generally bad b/c it takes extra work for no benefit.

Frequently, things like lack of clarity are bad and ruin problem solving (cuz e.g. it’s ambiguous whether the solution means to take action X or action Y). But some limited lack of clarity, lower precision, hesitation, whatever, can be fine if it’s restricted to some bounded areas that don’t need to be better for solving this particular problem.

Also, about the precision virtue, Yudkowsky writes,

The tenth virtue is precision. One comes and says: The quantity is between 1 and 100. Another says: the quantity is between 40 and 50. If the quantity is 42 they are both correct, but the second prediction was more useful and exposed itself to a stricter test.

FI/PF has no issue with this. You can specify required precision (e.g. within plus or minus ten) in the problem. Or you can find you have multiple correct solutions, and then consider some more ambitious problems to help you differentiate between them. (See the decision chart stuff in Yes or No Philosophy.)

PF posits time and again that "if you're not achieving your goals, well first that's because you're not faillibilist". Which is... quite too meta-level a claim (that's for the Void)

Please don't put non-quotes in quote marks. The word "goal" isn't even in the main PF essay.

I'll offer you a kinda similar but different claim: there's no need to be stuck and not make progress in life. That's unnecessary, tragic, and avoidable. Knowing about fallibilism, PF, and some other already-known things is enough that you don't have to be stuck. That doesn't mean you will achieve any particular goal in any particular timeframe. But what you can do is have a good life: keep learning things, making progress, achieving some goals, acting on non-refuted ideas. And there's no need to suffer.

For more on these topics, see the FI discussion of coercion and the BoI view on unbounded progress:

http://beginningofinfinity.com

(David Deutsch, author of BoI, is a Popperian and a founder of Taking Children Seriously (TCS), a parenting/education philosophy created by applying Critical Rationalism; it's where the ideas about coercion come from. I developed the specific method of creating a succession of meta problems to help formalize and clarify some TCS ideas.)

I don't see how PF violates the Void virtue. (Aspects of the Void, btw, relate to Popper's comments on "Who Should Rule?", cuz part of what Yudkowsky is saying in that section is: don't enshrine some criterion of rationality to rule. My perspective is that, instead of enshrining a ruler or ruling idea, the primary thing is error correction itself. Yudkowsky says something that sorta sounds like you need to care about the truth instead of your current conception of the truth – which happily does help keep it possible to correct errors in your current conception.)

(this last line is awkward. The rationalist view may consider that rationalists should win, but not winning isn't necessarily a failure of rationality)

That depends on what you mean by winning. I'm guessing I agree with it the way you mean it. I agree that all kinds of bad things can happen to you, and stuff can go wrong in your life, without it necessarily being your fault.

(this needs unpacking the definition of winning and I'm digging myself deeper I should stop)

Why should you stop?


Justin Mallone replied to Gyrodiot:

hey gyrodiot feel free to join Fallible Ideas list and post your thoughts on PF. also, could i have your permission to share your thoughts with Elliot? (I can delete what other ppl said). note that I imagine elliot would want to reply publicly so keep that in mind.

Gyrodiot replied:

@JUSTINCEO You can share my words (only mine) if you want, with this addition: I'm positive I didn't do justice to FI (particularly in the last part, which isn't clear at all). I'll be happy to read Elliot's comments on this and update in consequence, but I'm not sure I will take time to answer further.

I find we are motivated by the same "burning desire to know" (sounds very corny) and disagree strongly about method. I find, personally, the LW "school" more practically useful, strikes a good balance for me between rigor, ease of use, and ability to coordinate around.

Gyrodiot, I hope you'll reconsider and reply in blog comments, on FI, or on Less Wrong's forum. Also note: if Paths Forward is correct, then the LW way does not work well. Isn't that risk of error worth some serious attention? Plus isn't it fun to take some time to seriously understand a rival philosophy which you see some rational merit in, and see what you can learn from it (even if you end up disagreeing, you could still take away some parts)?


For those interested, here are more sources on the rationality virtues. I think they're interesting and mostly good:

https://wiki.lesswrong.com/wiki/Virtues_of_rationality

https://alexvermeer.com/the-twelve-virtues-of-rationality/

http://madmikesamerica.com/2011/05/the-twelve-virtues-of-rationality/

That last one says, of Evenness:

With the previous three in mind, we must all be cautious about our demands.

Maybe. Depends on how "cautious" would be clarified with more precision. This could be interpreted to mean something I agree with, but there are also a lot of ways to interpret it that I disagree with.

I also think Occam's Razor (mentioned in that last link, not explicitly in the Yudkowsky essay), while having some significant correctness to it, is overrated and is open to specifications of details that I disagree with.

And I disagree with the "burden of proof" idea (I cover this in Yes or No Philosophy) which Yudkowsky mentions in Evenness.

The biggest disagreement is empiricism. (See the criticism of that in BoI, and FoR ch1. You may have picked up on this disagreement already from the CR stuff.)


Elliot Temple | Permalink | Comments (2)

Open Letter to Machine Intelligence Research Institute

I emailed this to some MIRI people and others related to Less Wrong.


I believe I know some important things you don't, such as that induction is impossible, and that your approach to AGI is incorrect due to epistemological issues which were explained decades ago by Karl Popper. How do you propose to resolve that, if at all?

I think methodology for how to handle disagreements comes prior to the content of the disagreements. I have writing about my proposed methodology, Paths Forward, and about how Less Wrong doesn't work because of the lack of Paths Forward:

http://curi.us/1898-paths-forward-short-summary

http://curi.us/2064-less-wrong-lacks-representatives-and-paths-forward

Can anyone tell me that I'm mistaken about any of this? Do you have a criticism of Paths Forward? Will any of you take responsibility for doing Paths Forward?

Have any of you written a serious answer to Karl Popper (the philosopher who refuted induction – http://fallibleideas.com/books#popper )? That's important to address, not ignore, since if he's correct then lots of your research approaches are mistakes.

In general, if someone knows a mistake you're making, what are the mechanisms for telling you and having someone take responsibility for addressing the matter well and addressing followup points? Or if someone has comments/questions/criticism, what are the mechanisms available for getting those addressed? Preferably this should be done in public with permalinks at a venue which supports nested quoting. And whatever your answer to this, is it written down in public somewhere?

Do you have public writing detailing your ideas which anyone is taking responsibility for the correctness of? People at Less Wrong often say "read the sequences" but none of them take responsibility for addressing issues with the sequences, including answering questions or publishing fixes if there are problems. Nor do they want to address existing writing (e.g. by David Deutsch – http://fallibleideas.com/books#deutsch ) which contains arguments refuting major aspects of the sequences.

Your forum ( https://agentfoundations.org ) says it's topic-limited to AGI math, so it's not appropriate for discussing criticism of the philosophical assumptions behind your approach (which, if correct, imply the AGI math you're doing is a mistake). And it states ( https://agentfoundations.org/how-to-contribute ):

It’s important for us to keep the forum focused, though; there are other good places to talk about subjects that are more indirectly related to MIRI’s research, and the moderators here may close down discussions on subjects that aren’t a good fit for this forum.

But you do not link those other good places. Can you tell me of any other, Paths-Forward-compatible places to use, particularly ones where discussion could reasonably result in MIRI changing?

If you disagree with Paths Forward, will you say why? And do you have some alternative approach written in public?

Also, more broadly, whether you will address these issues or not, do you know of anyone that will?

If the answers to these matters are basically "no", then if you're mistaken, won't you stay that way, despite some better ideas being known and people being willing to tell you?

The (Popperian) Fallible Ideas philosophy community ( http://fallibleideas.com ) is set up to facilitate Paths Forward (here is our forum which does this: http://fallibleideas.com/discussion-info ), and has knowledge of epistemology which implies you're making big mistakes. We address all known criticisms of our positions (which is achievable without using too many resources, like time and attention, as Paths Forward explains); do you?


Elliot Temple | Permalink | Comments (161)