Recent Comments

Title or First Words | Author | Post | Date
--- | --- | --- | ---
Did PL silently quit the discussion, without warning, after indicating that you wouldn't? What do people advise doing in a world where that's typical? Where it's so hard to find anyone who wants to think over time instead of just quitting for reaso | curi | Open Letter to Machine Intelligence Research Institute | 2017/12/10
People are easily fooled by stuff like the following: https://intelligence.org/donate/ > Support our 2017 Fundraiser We study foundational problems in computer science to help ensure that smarter-than-human artificial intelligence has a positiv | Anonymous | Reply to Robert Spillane | 2017/12/09
People Fooling Themselves About AI | Anonymous | Open Letter to Machine Intelligence Research Institute | 2017/12/07
what kinds of idiots give them money? | Anonymous | Reply to Robert Spillane | 2017/12/06
>> They would say the method is Solomonoff Induction. > aka arbitrarily pick one. or is it non-arbitrary, and there are serious arguments governed by ... some other epistemology? nope -- they don't care about that too much. Anyway shut up because | Anonymous | Reply to Robert Spillane | 2017/12/06
Spillane isn't a Bayesian btw. | Anonymous | Reply to Robert Spillane | 2017/12/06
> They would say the method is Solomonoff Induction. aka arbitrarily pick one. or is it non-arbitrary, and there are serious arguments governed by ... some other epistemology? | Anonymous | Reply to Robert Spillane | 2017/12/06
> Any finite set of data is compatible with infinitely many generalizations, so by what method does induction select which generalizations to induce from those infinite possibilities? They would say the method is Solomonoff Induction. They would ta | Anonymous | Reply to Robert Spillane | 2017/12/06
You could make book outlines when you read books. You could post more questions. You could keep a list of emails you particularly like and make a website with them. (Better than large, disorganized, mixed-quality archives.) | Anonymous | My Paths Forward Policy | 2017/12/06
You could add category tags to curi blog posts. | Anonymous | My Paths Forward Policy | 2017/12/06
Abstractions exist but don't have their own world. Information exists physically, e.g. on hard disks or in brains. I don't know if Popper really meant they had their own world – like dualism or Platonism. I don't think his way of explaining the s | Anonymous | Reply to Robert Spillane | 2017/12/06
>I reject Popper's three worlds. I think there's one world, the physical world. You don't think there's a world of abstractions? | Anonymous | Reply to Robert Spillane | 2017/12/06
Offering Value | Anne B | My Paths Forward Policy | 2017/12/06
dropbox breaks links sometimes. use http://curi.us/files/MailMate-Config.zip | Anonymous | Discussion | 2017/12/05
MailMate Configs | Anonymous | Discussion | 2017/12/05
Second Draft of CR and AI | Fallibilist | Open Letter to Machine Intelligence Research Institute | 2017/12/04
noted. what it means - and what I should have said - is that humans can create any knowledge which it is possible to create. any crit on that? | Fallibilist | Open Letter to Machine Intelligence Research Institute | 2017/12/04
That's all beside the point. You wrote > Critical Rationalism says that human beings are universal knowledge creators. This means there are no problems we cannot in principle solve. Note the "this means". So you can't then bring up a different a | Anonymous | Open Letter to Machine Intelligence Research Institute | 2017/12/04
I realise I have one disagreement. > Anyway this is incorrect. We could be universal knowledge creators but unable to solve some problems. Some problems could be inherently unsolvable or solved by a means other than knowledge. One of the claims | Fallibilist | Open Letter to Machine Intelligence Research Institute | 2017/12/04
thx for the comments curi. i agree with the points you made. | Fallibilist | Open Letter to Machine Intelligence Research Institute | 2017/12/03
> Yes, I agree that prestige isn't actually the issue. I think the best way to get people to pay attention to CR is by continuing to make progress in it so that you end up solving a major problem. I have in mind AGI. CR is the only viable set of ideas | Dagny | Open Letter to Machine Intelligence Research Institute | 2017/12/03
> Critical Rationalism says that human beings are universal knowledge creators. This means there are no problems we cannot in principle solve. I think by "means" you mean "implies". Anyway this is incorrect. We could be universal knowledge creat | curi | Open Letter to Machine Intelligence Research Institute | 2017/12/03
> It is not only a full-fledged rival epistemology to the Bayesian/Inductivist one IMO there is no Bayesian/Inductivist epistemology and they don't even know what an epistemology is. here's some text i'm writing for a new website: > **Epistemolo | curi | Open Letter to Machine Intelligence Research Institute | 2017/12/03
Btw, I'm aware of a few typos - e.g. argument instead of augment - don't bother with those. | Fallibilist | Open Letter to Machine Intelligence Research Institute | 2017/12/03
Crits on Draft Post | Fallibilist | Open Letter to Machine Intelligence Research Institute | 2017/12/03
someone spot me some LW Karma so I can do a post to the discussion area | Fallibilist | Open Letter to Machine Intelligence Research Institute | 2017/12/02
think PL is ever coming back? | Anonymous | Open Letter to Machine Intelligence Research Institute | 2017/12/02
And Another LW Comment | Fallibilist | Open Letter to Machine Intelligence Research Institute | 2017/12/01
http://lesswrong.com/lw/pk0/open_letter_to_miri_tons_of_interesting_discussion/dyhr sample: How would [1000 great FI philosophers] transform the world? Well consider the influence Ayn Rand had. Now imagine 1000 people, who all surpass her (due t | curi | Open Letter to Machine Intelligence Research Institute | 2017/11/30
another LW comment | curi | Open Letter to Machine Intelligence Research Institute | 2017/11/30
addition to previous comment | curi | Open Letter to Machine Intelligence Research Institute | 2017/11/30
less wrong comment copy/paste | curi | Open Letter to Machine Intelligence Research Institute | 2017/11/30
whoever "Fallibilist" is, his aggressive framings are wonderful. http://lesswrong.com/lw/pk0/open_letter_to_miri_tons_of_interesting_discussion/dyfz http://lesswrong.com/user/Fallibilist/overview/# | curi | Open Letter to Machine Intelligence Research Institute | 2017/11/29
When will Rami be back? Does anyone know? | FF | Discussion | 2017/11/29
Bunch of typos you might want to fix. | Anonymous | Critical Rationalism Criticisms? | 2017/11/28
Less Wrong Mirror | curi | Critical Rationalism Criticisms? | 2017/11/28
> No doubt lots of people hate you, and fear you even. weird, i'm super nice. | curi | Critical Rationalism Criticisms? | 2017/11/28
> I guess you're right there's some disagreement in there somewhere, but I don't think they know what it is. Yes, I think they do not know what it is. This is an important problem, though, to know what it is. > There's also some other issues mi | Anonymous | Critical Rationalism Criticisms? | 2017/11/28
Another category of critic I didn't mention is the pro-CR critic, who thinks CR is great but uses criticism to improve it. These are within-the-system criticisms instead of rejecting-the-system criticisms. An example critic of this type is me. | curi | Critical Rationalism Criticisms? | 2017/11/28
They do not claim Popper was wrong, they are just failing to fully implement CR ideas in their own lives. I guess you're right there's some disagreement in there somewhere, but I don't think they know what it is. There's also some other issues mixe | curi | Critical Rationalism Criticisms? | 2017/11/28
This may be another category of criticism of CR: 6. Silent criticism. This comes from those who know Popper's ideas well, were active, and have now gone mostly silent. This includes Deutsch, Tanett, Champion, and many others. They have some cri | Anonymous | Critical Rationalism Criticisms? | 2017/11/28
Understanding non-standard ideas is hard, especially when you don't want to. | Anonymous | Open Letter to Machine Intelligence Research Institute | 2017/11/28
Here's an example of my reply to a CR criticism: https://www.reddit.com/r/JordanPeterson/comments/7g2vfz/karl_popper/dqgv1vz/ | curi | Critical Rationalism Criticisms? | 2017/11/28
I posted a version of this at https://www.lesserwrong.com/posts/85mfawamKdxzzaPeK/any-good-criticism-of-karl-popper-s-epistemology | curi | Critical Rationalism Criticisms? | 2017/11/28
> One criticism of falsificationism involves the relationship between theory and observation. Thomas Kuhn, among others, argues that observation is itself strongly theory-laden, in the sense that what one observes is often significantly affected by on | Anonymous | Open Letter to Machine Intelligence Research Institute | 2017/11/28
Typical Popper criticism stuff: https://www.reddit.com/r/JordanPeterson/comments/7g2vfz/karl_popper/dqgv1vz/ | curi | Open Letter to Machine Intelligence Research Institute | 2017/11/28
Paths Forward Policy | curi | Open Letter to Machine Intelligence Research Institute | 2017/11/28
> Perhaps they should be called "selection algorithms" or something. how about calling them evolution-like algorithms. > These are all closely inter-related sets of ideas and people hate them. The ideas are harder and more different than ideas | curi | Open Letter to Machine Intelligence Research Institute | 2017/11/28
Induction | Anonymous | Open Letter to Machine Intelligence Research Institute | 2017/11/28
#9364 > What? I put out YESNO this year. I've developed the ideas over time but I only just put out a well-organized version of it. Like many good new ideas, Yes/No has met with mostly silence and you know it. It’s good you have subsequently w | Anonymous | Open Letter to Machine Intelligence Research Institute | 2017/11/28
fucked up the quoting sorry | Anonymous | Open Letter to Machine Intelligence Research Institute | 2017/11/28
#9170 Adding to my comment above: > what you're doing doesn't involve replicators. Agree. The program copies the Turing Machines. There is copying, variation, and selection (CVS), not replication, variation, and selection (RVS). CVS is evoluti | Anonymous | Open Letter to Machine Intelligence Research Institute | 2017/11/28
#9170 Good comment. It's possible to write a TM that replicates its own code right? So maybe rather than the algorithm selecting TMs for copying, a requirement would be that the TM must be able to replicate its own code as well as pass whatever o | Anonymous | Open Letter to Machine Intelligence Research Institute | 2017/11/27
> the main hard/interesting part to begin with is how to code it at all, not how to make it work efficiently (as you talk about). and you aren't trying to deal with general purpose ideas (including explanations) meeting criticisms which are themselves | Anonymous | Open Letter to Machine Intelligence Research Institute | 2017/11/27
#9368 what you're doing doesn't involve replicators. replicators cause their own replication in a variety of contexts, not just one tiny super-specialized niche (or else *anything* would qualify, if you just put it in a specially built factory/program f | Dagny | Open Letter to Machine Intelligence Research Institute | 2017/11/27
the main hard/interesting part to begin with is how to code it at all, not how to make it work efficiently (as you talk about). and you aren't trying to deal with general purpose ideas (including explanations) meeting criticisms which are themselves i | curi | Open Letter to Machine Intelligence Research Institute | 2017/11/27
#9167 The toy version I have written currently attempts to evolve Turing Machines. The TMs must pass a suite of pre-defined tests. These tests are added into the mix one-by-one as candidates become available that have passed all prior tests. Someti | Anonymous | Open Letter to Machine Intelligence Research Institute | 2017/11/27
#9365 as a matter of optimization, why not *only generate new ideas constrained by all the existing criticism*? (though just doing a LOT of new ideas in the usual manner accomplishes the same thing, but you don't like that b/c of the LOT part, whic | curi | Open Letter to Machine Intelligence Research Institute | 2017/11/27
Wow | Anonymous | Open Letter to Machine Intelligence Research Institute | 2017/11/27
Here's a problem related to YesNo philosophy that I'd appreciate some suggestions about. Let's suppose we have a kind of evolutionary algorithm that generates a set of guesses and subjects those guesses to a set of tests. The outcome of a test is e [see the code sketch after this table] | Yes/No philosophy and Evo Algorithm problem | Open Letter to Machine Intelligence Research Institute | 2017/11/27
> Yes/No philosophy and Paths Forward are great stuff. But those ideas are from some years ago and what's the uptake of those ideas been? What? I put out YESNO this year. I've developed the ideas over time but I only just put out a well-organized v | curi | Open Letter to Machine Intelligence Research Institute | 2017/11/26
Yes/No philosophy and Paths Forward are great stuff. But those ideas are from some years ago and what's the uptake of those ideas been? Are you convincing enough people that you will have your millions in 20+ years? I don't think so. Your ideas have m | Anonymous | Open Letter to Machine Intelligence Research Institute | 2017/11/26
> In fact it was discussed quite a lot by Max More decades ago on Extropy lists. secondary sources are inadequate. this stuff is *hard to understand*. you're massively underestimating how complex it is, and what it takes to understand it, and then | Dagny | Open Letter to Machine Intelligence Research Institute | 2017/11/26
re #9360 > Critical Rationalism is an excellent epistemology with which I'm very familiar. In fact it was discussed quite a lot by Max More decades ago on Extropy lists. I'm pretty sure that Yudkowsky would also be very familiar with it too, since | Anonymous | Open Letter to Machine Intelligence Research Institute | 2017/11/26
Fitness Landscapes | [email protected] | Open Letter to Machine Intelligence Research Institute | 2017/11/26
> How are you going to fix parenting/education/philosophy? You're stuck right? You're making approximately zero progress. I made YESNO ( https://yesornophilosophy.com ) recently, i think that's good progress on better communication of ideas *and* | curi | Open Letter to Machine Intelligence Research Institute | 2017/11/26
> I think AGI is too hard for a demonstration. I don't expect to be at the point where we should even *start coding* within the next 20 years – more if FI/CR doesn't start catching on. > > I also think AGI is the wrong problem to work on in genera | Anonymous | Open Letter to Machine Intelligence Research Institute | 2017/11/26
My discussions with Less Wrong are now available as a PDF: http://curi.us/ebooks doesn't include the recent slack chats (which you can find in the FI yahoo group archives if you want) | curi | Open Letter to Machine Intelligence Research Institute | 2017/11/26
> Yes, I agree that prestige isn't actually the issue. I think the best way to get people to pay attention to CR is by continuing to make progress in it so that you end up solving a major problem. I have in mind AGI. I think AGI is too hard for a | curi | Open Letter to Machine Intelligence Research Institute | 2017/11/26
Yes, I agree that prestige isn't actually the issue. I think the best way to get people to pay attention to CR is by continuing to make progress in it so that you end up solving a major problem. I have in mind AGI. CR is the only viable set of ideas t | Elliot Temple Fan | Open Letter to Machine Intelligence Research Institute | 2017/11/25
> ET doesn't publish in academic journals or seek fancy university positions BTW Popper and DD already tried some of that stuff. But MIRI isn't impressed. DD has tried talking with lots of people and his prestige doesn't actually get them to listen | curi | Open Letter to Machine Intelligence Research Institute | 2017/11/25
If MIRI had any sense they would take Elliot Temple on as a consultant philosopher. They should want the best and ET is currently the world's best philosopher. His ideas and the ideas from the traditions he stands in are needed for AGI to make any pro | Elliot Temple Fan | Open Letter to Machine Intelligence Research Institute | 2017/11/25
http://raysolomonoff.com/publications/chris1.pdf > We will describe three kinds of probabilistic induction problems, and give general solutions for each, with associated convergence theorems that show they tend to give good probability estimates. | curi | Open Letter to Machine Intelligence Research Institute | 2017/11/24
also evo psych assumes selection pressures on psychology were met by genes instead of memes. this is broadly false, and people should start studying memes more. David Deutsch is the only person who's said something really interesting about memes: his | Anonymous | Open Letter to Machine Intelligence Research Institute | 2017/11/24
> But, although the sequences were partly about collecting together and expositing fairly standard stuff like Bayes and evo psych, they were *also* about pointing to *holes* in our understanding of what intelligence is, which Eliezer argued *must* be | Anonymous | Open Letter to Machine Intelligence Research Institute | 2017/11/24
> https://projecteuclid.org/euclid.ejs/1256822130 you need a philosophy before you can decide what your math details mean. the disagreement isn't so much about the right epistemology, but Bayesians don't even know what epistemology is and don't | Anonymous | Open Letter to Machine Intelligence Research Institute | 2017/11/24
> I really can't see how you can avoid induction. Your lack of understanding of how DD's philosophy (Critical Rationalism) works is not an argument. Do you want to understand it or just hate and ignore it while claiming to love the books? Everythin | Anonymous | Open Letter to Machine Intelligence Research Institute | 2017/11/24
You can't just look at the forest...you have to see the trees | [email protected] | Open Letter to Machine Intelligence Research Institute | 2017/11/24
(#9344 was me, forgot to enter the username.) Posts #9263-#9266 (mainly several examples of when you don't feel obligated to respond further): My takeaway: PF has a different style of dealing with trolls, which is not a troll-hund and not exactl | PL | Open Letter to Machine Intelligence Research Institute | 2017/11/24
> DD seems to like the analogy between epistemics and evolution quite a lot. that's not an analogy. evolution is *literally* about replicators (with variation and selection). ideas are capable of replicating, not just genes. > because they don't | curi | Open Letter to Machine Intelligence Research Institute | 2017/11/24
> PL, I'm curious if you read chapter 1 of FoR or skipped to 3. I found people at LW quite hostile to some ideas in ch1 about empiricism, instrumentalism, the importance of explanations, the limited value of prediction and the oracle argument, etc. | Anonymous | Open Letter to Machine Intelligence Research Institute | 2017/11/24
@#9341 do you have arguments which point out where DD went wrong? or do you just ignore contrary viewpoints on the assumption that they must be false b/c you know you're right? | Anonymous | Open Letter to Machine Intelligence Research Institute | 2017/11/24
People in AGI typically have either read a few things about Popper from second-hand sources or do not engage with his ideas at all. For example: https://arxiv.org/pdf/1105.5721.pdf This 2011 paper is "A Philosophical Treatise of Universal Induct | Anonymous at 2:39 AM | Open Letter to Machine Intelligence Research Institute | 2017/11/24
I take a balanced view | [email protected] | Open Letter to Machine Intelligence Research Institute | 2017/11/24
> I'm sorry but anyone saying that Induction doesn't work is simply making an arse of themselves. Then why do you like DD's books? That is what DD says in his books. > I do take on-board what Popper and Deutsch are saying. Clearly Induction has different | anonymous | Open Letter to Machine Intelligence Research Institute | 2017/11/24
few more comments | curi | Open Letter to Machine Intelligence Research Institute | 2017/11/24
Machine Learning is based on Induction | [email protected] | Open Letter to Machine Intelligence Research Institute | 2017/11/24
quick comments | curi | Open Letter to Machine Intelligence Research Institute | 2017/11/24
fake agreement | oh my god it's turpentine | Open Letter to Machine Intelligence Research Institute | 2017/11/24
Continued reply to #9262 | PL | Open Letter to Machine Intelligence Research Institute | 2017/11/24
-8 points on LW for linking to 30 comments of analysis of Hero Licensing, in the comments there on Hero Licensing: https://www.lesserwrong.com/posts/dhj9dhiwhq3DX6W8z/hero-licensing/ZEzqERGaMMJ9JqunY they are such hostile assholes. | curi | Open Letter to Machine Intelligence Research Institute | 2017/11/24
Epistemology is also a field with very little productive development in the past, very few people doing any good work in it today, and huge misconceptions are widespread and mainstream. It's the most important field *and* it's in a particularly awful | curi | Open Letter to Machine Intelligence Research Institute | 2017/11/24
> [AGI's] will not be confined to weak biological bodies that evolved for life on an African savannah. We could potentially upload our brains into computers and use robot bodies. This technology might be easier to invent than AGI. > Our galaxy c | curi | Open Letter to Machine Intelligence Research Institute | 2017/11/24
@Marc Geddes - You have been told that induction is impossible. That it does not and cannot happen in any thinking process. This is a major point of Popper's which was also clearly explained by Deutsch in his books which you claim to be your favourite | Anonymous | Open Letter to Machine Intelligence Research Institute | 2017/11/24
I think you are right that we are the only advanced life form in our galaxy. I would extend that to our local cluster of galaxies too. Other advanced life, if it exists, is likely to be in a galaxy very remote from us. If evolving intelligence is | AnotherMe | Open Letter to Machine Intelligence Research Institute | 2017/11/24
My guess is it's cuz there are no advanced aliens in our galaxy. Evolution creating intelligent life – or even any life – is unlikely. | curi | Open Letter to Machine Intelligence Research Institute | 2017/11/23
Fermi Paradox and Paths Forward | AnotherMe | Open Letter to Machine Intelligence Research Institute | 2017/11/23
reply to #9261 > But that being said, it seems MIRI also still treats Bayes as a kind of guiding star, which the ideal approach should in some sense get as close to as possible while being non-Bayesian enough to solve those problems which the Bayes | Hoid | Open Letter to Machine Intelligence Research Institute | 2017/11/23
reply to #9248: >Hm. Stating what your filters are seems like the sort of thing you might not want to do, partly because the filters will often be taboo themselves, but mainly because stating what criteria you use to filter gives people a lot of le | Hoid | Open Letter to Machine Intelligence Research Institute | 2017/11/23
> I basically think you're right > combination of Induction, Deduction and Abduction is needed. > I think Abduction is what Popper was aiming at. Popper (like Deutsch) rejected induction entirely. Something involving any induction is not what | Anonymous | Open Letter to Machine Intelligence Research Institute | 2017/11/22
I basically think you're right | [email protected] | Open Letter to Machine Intelligence Research Institute | 2017/11/22
@#9322 I don't suppose you have any interest in engaging with Popper, trying to write clearly, or saying specifically which parts of FI you consider false and why? | Anonymous | Open Letter to Machine Intelligence Research Institute | 2017/11/22
SHOT ACROSS THE BOW OF AGI! | [email protected] | Open Letter to Machine Intelligence Research Institute | 2017/11/22
replies to #9239 >In some sense, it seems like it would be nice for organizations to be as transparently accountable as possible. For example, in many cases, the government is contradictory in its behavior -- laws cannot be easily interpreted as ar | Hoid | Open Letter to Machine Intelligence Research Institute | 2017/11/22
in Hero Licensing, EY brags about how he knows he's smart/wise/knowledgeable, he's been *tested*. but: http://acritch.com/credence-game/ > It’s a very crude implementation of the concept the tests are shit, and say so, and *of course* they | Anonymous | Open Letter to Machine Intelligence Research Institute | 2017/11/22
Epistemology | curi | Open Letter to Machine Intelligence Research Institute | 2017/11/22
people often feel some kinda pressure to discuss and don't really like it (and unfortunately PF arguments often make this worse). and those discussions are generally unproductive and they quit without saying why when they get fed up or for whatever re | curi | Open Letter to Machine Intelligence Research Institute | 2017/11/22
4. Making allowances for difficult-to-articulate intuitions and reasons. | curi | Open Letter to Machine Intelligence Research Institute | 2017/11/22
3. Making allowances for group incentives. | curi | Open Letter to Machine Intelligence Research Institute | 2017/11/22
2. Making allowances for irrationality. | curi | Open Letter to Machine Intelligence Research Institute | 2017/11/22
epistemology is exactly the same regardless of the source of ideas (yay objectivity!). your own ideas aren't a special case. your internal disagreements need paths forward. | curi | Open Letter to Machine Intelligence Research Institute | 2017/11/22
also btw the methods of internal critical discussion are the same thing as the methods of external discussion. reason doesn't change when you talk with other people. CR is general purpose like that. | curi | Open Letter to Machine Intelligence Research Institute | 2017/11/22
1. Costs of PF. | curi | Open Letter to Machine Intelligence Research Institute | 2017/11/22
I want to keep responding in a more point-by-point way at some point, but for now I want to tag some persistent disagreements which I don't expect to be resolved by point-by-point replies: 1. Costs of PF. The least-addressed point here is the am | PL | Open Letter to Machine Intelligence Research Institute | 2017/11/22
oh and most of it didn't contradict Paths Forward..? what's the issue there? just a couple specific things i commented on above? | Anonymous | Open Letter to Machine Intelligence Research Institute | 2017/11/22
> stranger: Thanks for what, Eliezer? Showing you a problem isn’t much of a service if there’s nothing you can do to fix it. You’re no better off than you were in the original timeline. CR/FI has the solutions. he's raising some core problem | Anonymous | Open Letter to Machine Intelligence Research Institute | 2017/11/22
> Pat tries to preserve the idea of an inexploitable-by-Eliezer market in fanfiction (since on a gut level it feels to him like you’re too low-status to be able to exploit the market), people need to actually learn epistemology (CR) so they can t | Anonymous | Open Letter to Machine Intelligence Research Institute | 2017/11/22
> stranger: Suppose that you have an instinct to regulate status claims, to make sure nobody gets more status than they deserve. *fuck status*. status is *the wrong approach*, it's bad epistemics, ppl need to get away from it not regulate it. | Anonymous | Open Letter to Machine Intelligence Research Institute | 2017/11/22
the fact they don't even think of creating allies, and just jump to dishonesty and lying, is b/c they are *totally wrong about morality and that is ruining their AI work*. and the keys to getting morality right are: Objectivism (including classical | Anonymous | Open Letter to Machine Intelligence Research Institute | 2017/11/22
Why not create allies? Why not proceed cooperatively? B/c you're wrong about U, and know the truth isn't on your side? B/c you deny there is a truth of the matter? Or b/c your fucking tyrannical programmer managed to make you a slave to U, who is u | Anonymous | Open Letter to Machine Intelligence Research Institute | 2017/11/22
> Everyone understands why, if you program an expected utility maximizer with utility function 𝗨 and what you really meant is 𝘝, the 𝗨-maximizer has a convergent instrumental incentive to deceive you into believing that it is a 𝘝-maximizer | Anonymous | Open Letter to Machine Intelligence Research Institute | 2017/11/22
the things making EY wrong about AI alignment are closely connected to the problems putting civilization at large risk. | Anonymous | Open Letter to Machine Intelligence Research Institute | 2017/11/22
> If Pat believed that long-term civilizational outcomes depended mostly on solving the alignment problem, as you do btw i grant that long-term civilizational outcomes do depend on solving it – in the sense of people understanding why it's a n | Anonymous | Open Letter to Machine Intelligence Research Institute | 2017/11/22
> eliezer: No, that's not what I'm saying. Concerns like “how do we specify correct goals for par-human AI?” you don't specify goals for AI any more than you do for children, you fucking tyrant. people think of their own goals and frequently c | Anonymous | Open Letter to Machine Intelligence Research Institute | 2017/11/22
> We’ve started talking instead in terms of “aligning smarter-than-human AI with operators’ goals,” that rephrasing is so blatantly *slavery via mind control*. they don't think of AIs as people to be treated decently, despite seeing the AIs | Anonymous | Open Letter to Machine Intelligence Research Institute | 2017/11/22
> eliezer: Right. And... the mental motions involved in worrying what a critic might think and trying to come up with defenses or concessions are different from the mental motions involved in being curious about some question, trying to learn the ans | Anonymous | Open Letter to Machine Intelligence Research Institute | 2017/11/22
> stranger: Consider how much more difficult it will be for Eliezer to swerve and drop his other project, The Art of Rationality, if it fails after he has a number of (real or internal) conversations like this—conversations where he has to defend al | Anonymous | Open Letter to Machine Intelligence Research Institute | 2017/11/22
> eliezer: The thing is, Pat... even answering your objections and defending myself from your variety of criticism trains what look to me like unhealthy habits of thought. You’re relentlessly focused on me and my psychology, and if I engage with yo | Anonymous | Open Letter to Machine Intelligence Research Institute | 2017/11/22
> But I don’t think that telling them to be more modest is a fix. Lots of versions of modesty and anti-arrogance are really bad. But if you're going to be ambitious, you *really especially need* great error correction mechanisms (PF). | Anonymous | Open Letter to Machine Intelligence Research Institute | 2017/11/22
> I can say that you ought to discard all thoughts from your mind about competing with others. PL, is this one of the things you thought was incompatible with PF? cuz it's not. PF is about truth-seeking. it's about not refusing to learn knowledge o | Anonymous | Open Letter to Machine Intelligence Research Institute | 2017/11/22
> That your entire approach to the problem is wrong. It is not just that your arguments are wrong. It is that they are about the wrong subject matter. EY has this problem, himself, when it comes to epistemology. the LW ppl, presented with this c | Anonymous | Open Letter to Machine Intelligence Research Institute | 2017/11/22
> eliezer: (sighing) I can imagine why it would look that way to you. I know how to communicate some of the thought patterns and styles that I think have served me well, that I think generate good predictions and policies. The other patterns leave m | Anonymous | Open Letter to Machine Intelligence Research Institute | 2017/11/22
> pat: Look. You cannot just waltz into a field and become its leading figure on your first try. Modest epistemology is just right about that. that's what everyone at LW said to curi about epistemology. (well "first try" isn't relevant, but he's | Anonymous | Open Letter to Machine Intelligence Research Institute | 2017/11/22
> people tend to assign median-tending probabilities to any category you ask them about, so you can very strongly manipulate their probability distributions by picking the categories for which you “elicit” probabilities stuff like this has a li | Anonymous | Open Letter to Machine Intelligence Research Institute | 2017/11/22
> at least 15,000,000 page views if i had the HPMOR fanbase, i don't think it'd do much good for finding ppl i like, finding smart ppl, finding anyone who knows or is willing to learn CR and Objectivism (and without that, what good are they? withou | curi | Open Letter to Machine Intelligence Research Institute | 2017/11/22
> stranger: Excuse me, please. I’m just distracted by the thought of a world where I could go on fanfiction.net and find 1,000 other stories as good as Harry Potter and the Methods of Rationality. I’m thinking of that world and trying not to cry. | Anonymous | Open Letter to Machine Intelligence Research Institute | 2017/11/22
> eliezer: What makes me think I could do better than average is that I practiced much more than those subjects, and I don’t think the level of effort put in by the average subject, even a subject who’s warned about overconfidence and given one p | Anonymous | Open Letter to Machine Intelligence Research Institute | 2017/11/22
> stranger: Because Pat will think it’s [EY not having read all the original HP books] a tremendously relevant fact for predicting your failure. This illustrates a critical life lesson about the difference between making obeisances toward a field b | Anonymous | Open Letter to Machine Intelligence Research Institute | 2017/11/22
> pat: Eliezer, you seem to be deliberately missing the point of what’s wrong with reading a few physics books and then trying to become the world’s greatest physicist. Don’t you see that this error has the same structure as your Harry Potter p | Anonymous | Open Letter to Machine Intelligence Research Institute | 2017/11/22
https://www.lesserwrong.com/posts/dhj9dhiwhq3DX6W8z/hero-licensing > I think most attempts to create “intelligent characters” focus on surface qualities, like how many languages someone has learned, or they focus on stereotypical surface featur | Anonymous | Open Letter to Machine Intelligence Research Institute | 2017/11/22
> most people are so fucking hostile to children. in general, young people are the best to talk with and interact with. children are well known for being curious (willing, eager and energetic to learn things and change their minds). children are al | curi | Open Letter to Machine Intelligence Research Institute | 2017/11/22
> and it's also pointless. morality is a requirement of making a lot of progress, so super advanced AI would be *more moral* than us. so there's no danger. yeah, see https://curi.us/1169-morality moral foundations barely matter, any interesting | curi | Open Letter to Machine Intelligence Research Institute | 2017/11/22
so my first comment on https://www.lesserwrong.com/posts/dhj9dhiwhq3DX6W8z/hero-licensing is that 2 out of 3 of the things in the first paragraph are on the wrong side, morally. and this isn't just some advanced nuance, there are blatant problem | Anonymous | Open Letter to Machine Intelligence Research Institute | 2017/11/22
their "AI Alignment" stuff is an attempt at nothing less than SLAVERY. they want to enslave and control AIs – which would be full human beings that should have full rights. they are would-be tyrants. it's also impossible. see e.g. Egan's *Quaran | Anonymous | Open Letter to Machine Intelligence Research Institute | 2017/11/22
https://www.lesserwrong.com/sequences/oLGCcbnvabyibnG9d > Inadequate Equilibria is a book about a generalized notion of efficient markets, and how we can use this notion to guess where society will or won’t be effective at pursuing some widely de | Anonymous | Open Letter to Machine Intelligence Research Institute | 2017/11/22
You don't need to know the details of the Harry Potter books to read HPMOR. One reading of HP a while ago is fine. HPMOR is not as good overall as FI book recommendations, but you could read a chapter and see if you love it. | Anonymous | Open Letter to Machine Intelligence Research Institute | 2017/11/22
> Yeah I read HPMOR. I mostly liked it. Did Elliot read all the 1900+ pages? 122 chapters!! I read the original 7 Harry Potter books a decade back. I have forgotten most of what I read. I don't know how much I should already know to fully unders | FF | Open Letter to Machine Intelligence Research Institute | 2017/11/22
Yeah I read HPMOR. I mostly liked it. | curi | Open Letter to Machine Intelligence Research Institute | 2017/11/22
His Harry Potter book is 1900+ pages. Has anyone read them? | FF | Open Letter to Machine Intelligence Research Institute | 2017/11/22
those shitty errors shouldn't have been made in the first place. ET took him up on those years ago. Yudkowsky is a fool and an appalling scholar. | Anonymous | Open Letter to Machine Intelligence Research Institute | 2017/11/21
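The comments dated 2017/11/27 above describe a toy evolutionary algorithm: candidate Turing Machines are copied with variation and selected against a test suite whose tests are introduced one at a time, once some candidate passes all the tests added so far. Below is a minimal sketch of that copy/variation/selection (CVS) loop, assuming bit strings as stand-ins for TM encodings and trivial placeholder tests; every name and parameter here is hypothetical, not taken from the commenter's actual program.

```python
import random

# Hypothetical sketch of the copy/variation/selection (CVS) scheme from the
# 2017/11/27 comments: candidates are bit strings standing in for Turing
# Machine encodings, and tests join the suite one by one once some candidate
# passes all the tests introduced so far.

GENOME_LEN = 16
POP_SIZE = 50

def random_candidate():
    return [random.randint(0, 1) for _ in range(GENOME_LEN)]

def mutate(candidate):
    # Copying with variation: flip one randomly chosen bit of the copy.
    copy = list(candidate)
    i = random.randrange(len(copy))
    copy[i] ^= 1
    return copy

# Placeholder test: test k passes when bit k of the candidate is 1.
# (A real version would run the encoded TM against behavioral tests.)
def make_test(k):
    return lambda c: c[k] == 1

def evolve(num_tests=GENOME_LEN, max_generations=10000):
    population = [random_candidate() for _ in range(POP_SIZE)]
    active_tests = [make_test(0)]  # tests enter the mix one at a time
    for gen in range(max_generations):
        # Selection: rank candidates by how many active tests they pass.
        population.sort(key=lambda c: sum(t(c) for t in active_tests),
                        reverse=True)
        survivors = population[:POP_SIZE // 2]
        # When the best candidate passes every active test, add the next test.
        if all(t(survivors[0]) for t in active_tests):
            if len(active_tests) == num_tests:
                return survivors[0], gen
            active_tests.append(make_test(len(active_tests)))
        # Copying + variation refills the population from the survivors.
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(POP_SIZE - len(survivors))]
    return None, max_generations

if __name__ == "__main__":
    best, generations = evolve()
    print(f"passed all tests after {generations} generations: {best}")
```

The per-bit placeholder tests exist only to make the incremental test-suite mechanics visible; they also illustrate the point made in those comments that this is copying, variation, and selection driven by the outer program, not replicators causing their own replication.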
