Recent Comments


Title or First Words | Author | Post | Date
Has Rami left FI? | FF | Discussion | 2018/01/16
#9355 >Yes, I know Popper solved a major problem when he invented CR and the world did not take notice. Having an AGI sitting in your face is a different story though! People have billions of humans sitting in their face, many of whom do amazing… | Anonymous | Open Letter to Machine Intelligence Research Institute | 2018/01/13
#9450 You can make a reasonable (significant) effort to not do it, cuz u think it's bad. U don't have to be 100% perfect and omniscient to do a good job of addressing this issue. And u can e.g. be receptive when ur kid points out ur doing it, which… | curi | Bad Parenting List | 2018/01/12
>(2) Parent should sacrifice their own feelings, ideas, and preferences for the sake of the child's. In the case where (1) is not true but parent feels what child wants is bad, it seems to ask parent to fully and cheerfully provide help anyway. That s… | Anonymous | Bad Parenting List | 2018/01/12
> Frowning, having a stressed voice, or being selectively less energetically helpful/friendly/cheerful can be pressuring and controlling. (E.g. parent is "too tired" to do an activity child wants, but would suddenly be available if child wanted to do… | Anonymous | Bad Parenting List | 2018/01/12
do you *want* to understand? are you trying? what are you hoping to accomplish here? do you want some information? do you have any questions or curiosity about anything you read? do you think you could say, in your own words, the TCS reasoning for why… | curi | Bad Parenting List | 2018/01/12
Perhaps I missed it... | Anonymous | Bad Parenting List | 2018/01/12
pressure to do certain careers (e.g. be a prestigious doctor), have certain life roles (e.g. having a kid so parent can be grandparent). | another bad parenting thing | Bad Parenting List | 2018/01/12
http://fallibleideas.com/taking-children-seriously note the links at the bottom will take you to dozens more articles. | Anonymous | Bad Parenting List | 2018/01/12
The burden of proof lies with you, in evidence or argument, to support your claims. You provide little or none, so there is no reason to believe any of them to be true. This is true whether your audience is smart and curious or dismissive and naive… | Anonymous | Bad Parenting List | 2018/01/12
> What is the purpose of this list? > That which can be asserted without evidence can also be dismissed without it, no? The list is for curious, interested people, not dismissive people who don't care to think. People can ask the reasoning for a… | curi | Bad Parenting List | 2018/01/12
This is all presented without any evidence to support it. | Anonymous | Bad Parenting List | 2018/01/12
> What is the purpose of this list? What problem is it intended to solve? It points out bad parenting practices. This helps solve the problem of people not knowing those practices are bad. | oh my god it's turpentine | Bad Parenting List | 2018/01/12
teeth-brushing | Anonymous | Bad Parenting List | 2018/01/12
What is the purpose of this list? What problem is it intended to solve? | Anonymous | Bad Parenting List | 2018/01/12
games | oh my god it's turpentine | Bad Parenting List | 2018/01/10
@critrat - you're leaking info about who you are. Is this intentional? You have a problem which is blocking you from making progress in CR and has caused your standards to slip. | Anonymous | Reading Recommendations | 2018/01/02
#9435 Planck solved the ultraviolet catastrophe, not, as you said, the infrared catastrophe. | Anonymous | Reading Recommendations | 2018/01/02
There's no prophesy: Planck should not be credited for what he didn't know and could not foresee. Philosophy is more important than shuffleboard. You should begin by recognizing that, given some framework, you can then make judgements about some id… | curi | Reading Recommendations | 2017/12/29
I've yet to come across a meaningful way to assign a measure of greater-than-ness. It's difficult comparing two contributions to science/philosophy to one another. In fact, such a measure will in part be a prophesy, because the importance of certa… | CritRat | Reading Recommendations | 2017/12/29
#9433 Interesting. Here's my own comparison. DD knows more about physics than Einstein did, but Einstein can reasonably be considered the greater physicist b/c making major original breakthroughs is harder than standing on the shoulders of giants. | Dagny | Reading Recommendations | 2017/12/28
I was asked about how I compared Rand to Popper, and how I concluded Rand was the greatest philosopher: Popper had one main **huge** thing – solving the problem of induction with a new non-justificationist evolutionary epistemology. That's **grea… | curi | Reading Recommendations | 2017/12/28
> In my experience, it does seem like people who take it upon themselves to respond to very poor quality criticism (I am thinking of stuff on twitter here) do generally lower the quality of their thinking as a result. Thoughts tend toward ground recen… | Dagny | Open Letter to Machine Intelligence Research Institute | 2017/12/24
(Response to #9297-9315) > the "training" idea is dumb. if you spend the discussion understanding why Pat's perspective is wrong better, then you won't be trained to do it, you'll be helped not to do it. In my experience, it does seem like peopl… | PL | Open Letter to Machine Intelligence Research Institute | 2017/12/24
yeah i agree, Trump is intellectually wrong in some ways but he's a decent negotiator and there's some sketchy stuff related to China (there's currency manipulation stuff too, idk a ton about it but i don't think it's all legit) and room to push for s… | curi | Donald Trump is a Protectionist | 2017/12/24
China has started to reciprocate following Trump's visit: https://www.investors.com/politics/editorials/trump-notches-another-win-on-trade-as-china-slashes-tariffs/ Getting China to substantially reduced import tariffs is a win for Trump. I doub… | Anonymous | Donald Trump is a Protectionist | 2017/12/24
What you described is a type of fragility. | Anonymous | Open Letter to Machine Intelligence Research Institute | 2017/12/20
social norms about criticism | Anne B | Open Letter to Machine Intelligence Research Institute | 2017/12/20
> "would the world be better off if people unable to do PF (naming Eliezer for the sake of argument) had never claimed to be public intellectuals?" > To me, this seems definitely false in the case of Eliezer. Of course you may disagree. I like s… | curi | Open Letter to Machine Intelligence Research Institute | 2017/12/19
> Did PL silently quit the discussion, without warning, after indicating that you wouldn't? Not really, I just had to take a break. You can expect longish time gaps in the future as well. Though, actually, I never meant to indicate that I'd make su… | PL | Open Letter to Machine Intelligence Research Institute | 2017/12/19
Did PL silently quit the discussion, without warning, after indicating that you wouldn't? What do people advise doing in a world where that's typical? Where it's so hard to find anyone who wants to think over time instead of just quitting for reaso… | curi | Open Letter to Machine Intelligence Research Institute | 2017/12/10
People are easily fooled by stuff like the following: https://intelligence.org/donate/ > Support our 2017 Fundraiser We study foundational problems in computer science to help ensure that smarter-than-human artificial intelligence has a positiv… | Anonymous | Reply to Robert Spillane | 2017/12/09
People Fooling Themselves About AI | Anonymous | Open Letter to Machine Intelligence Research Institute | 2017/12/07
what kinds of idiots give them money? | Anonymous | Reply to Robert Spillane | 2017/12/06
>> They would say the method is Solomonoff Induction. > aka arbitrarily pick one. or is it non-arbitrary, and there are serious arguments governed by ... some other epistemology? nope -- they don't care about that too much. Anyway shut up because… | Anonymous | Reply to Robert Spillane | 2017/12/06
Spillane isn't a Bayesian btw. | Anonymous | Reply to Robert Spillane | 2017/12/06
> They would say the method is Solomonoff Induction. aka arbitrarily pick one. or is it non-arbitrary, and there are serious arguments governed by ... some other epistemology? | Anonymous | Reply to Robert Spillane | 2017/12/06
> Any finite set of data is compatible with infinitely many generalizations, so by what method does induction select which generalizations to induce from those infinite possibilities? They would say the method is Solomonoff Induction. They would ta… | Anonymous | Reply to Robert Spillane | 2017/12/06
You could make book outlines when you read books. You could post more questions. You could keep a list of emails you particularly like and make a website with them. (Better than large, disorganized, mixed-quality archives.) | Anonymous | My Paths Forward Policy | 2017/12/06
You could add category tags to curi blog posts. | Anonymous | My Paths Forward Policy | 2017/12/06
Abstractions exist but don't have their own world. Information exists physically, e.g. on hard disks or in brains. I don't know if Popper really meant they had their own world – like dualism or Platonism. I don't think his way of explaining the s… | Anonymous | Reply to Robert Spillane | 2017/12/06
>I reject Popper's three worlds. I think there's one world, the physical world. You don't think there's a world of abstractions? | Anonymous | Reply to Robert Spillane | 2017/12/06
Offering Value | Anne B | My Paths Forward Policy | 2017/12/06
dropbox breaks links sometimes. use http://curi.us/files/MailMate-Config.zip | Anonymous | Discussion | 2017/12/05
MailMate Configs | Anonymous | Discussion | 2017/12/05
Second Draft of CR and AI | Fallibilist | Open Letter to Machine Intelligence Research Institute | 2017/12/04
noted. what it means - and what I should have said - is that humans can create any knowledge which it is possible to create. any crit on that? | Fallibilist | Open Letter to Machine Intelligence Research Institute | 2017/12/04
That's all beside the point. You wrote > Critical Rationalism says that human beings are universal knowledge creators. This means there are no problems we cannot in principle solve. Note the "this means". So you can't then bring up a different a… | Anonymous | Open Letter to Machine Intelligence Research Institute | 2017/12/04
I realise I have one disagreement. > Anyway this is incorrect. We could be universal knowledge creators but unable to solve some problems. Some problems could be inherently unsolvable or solved by a means other than knowledge. One of the claims… | Fallibilist | Open Letter to Machine Intelligence Research Institute | 2017/12/04
thx for the comments curi. i agree with the points you made. | Fallibilist | Open Letter to Machine Intelligence Research Institute | 2017/12/03
> Yes, I agree that prestige isn't actually the issue. I think the best way to get people to pay attention to CR is by continuing to make progress in it so that you end up solving a major problem. I have in mind AGI. CR is the only viable set of ideas… | Dagny | Open Letter to Machine Intelligence Research Institute | 2017/12/03
> Critical Rationalism says that human beings are universal knowledge creators. This means there are no problems we cannot in principle solve. I think by "means" you mean "implies". Anyway this is incorrect. We could be universal knowledge creat… | curi | Open Letter to Machine Intelligence Research Institute | 2017/12/03
> It is not only a full-fledged rival epistemology to the Bayesian/Inductivist one IMO there is no Bayesian/Inductivist epistemology and they don't even know what an epistemology is. here's some text i'm writing for a new website: > **Epistemolo… | curi | Open Letter to Machine Intelligence Research Institute | 2017/12/03
Btw, I'm aware of a few typos - e.g. argument instead of augment - don't bother with those. | Fallibilist | Open Letter to Machine Intelligence Research Institute | 2017/12/03
Crits on Draft Post | Fallibilist | Open Letter to Machine Intelligence Research Institute | 2017/12/03
someone spot me some LW Karma so I can do a post to the discussion area | Fallibilist | Open Letter to Machine Intelligence Research Institute | 2017/12/02
think PL is ever coming back? | Anonymous | Open Letter to Machine Intelligence Research Institute | 2017/12/02
And Another LW Comment | Fallibilist | Open Letter to Machine Intelligence Research Institute | 2017/12/01
http://lesswrong.com/lw/pk0/open_letter_to_miri_tons_of_interesting_discussion/dyhr sample: How would [1000 great FI philosophers] transform the world? Well consider the influence Ayn Rand had. Now imagine 1000 people, who all surpass her (due t… | curi | Open Letter to Machine Intelligence Research Institute | 2017/11/30
another LW comment | curi | Open Letter to Machine Intelligence Research Institute | 2017/11/30
addition to previous comment | curi | Open Letter to Machine Intelligence Research Institute | 2017/11/30
less wrong comment copy/paste | curi | Open Letter to Machine Intelligence Research Institute | 2017/11/30
whoever "Fallibilist" is, his aggressive framings are wonderful. http://lesswrong.com/lw/pk0/open_letter_to_miri_tons_of_interesting_discussion/dyfz http://lesswrong.com/user/Fallibilist/overview/# | curi | Open Letter to Machine Intelligence Research Institute | 2017/11/29
When will Rami be back? Does anyone know? | FF | Discussion | 2017/11/29
Bunch of typos you might want to fix. | Anonymous | Critical Rationalism Criticisms? | 2017/11/28
Less Wrong Mirror | curi | Critical Rationalism Criticisms? | 2017/11/28
> No doubt lots of people hate you, and fear you even. weird, i'm super nice. | curi | Critical Rationalism Criticisms? | 2017/11/28
> I guess you're right there's some disagreement in there somewhere, but I don't think they know what it is. Yes, I think they do not know what it is. This is an important problem, though, to know what it is. > There's also some other issues mi… | Anonymous | Critical Rationalism Criticisms? | 2017/11/28
Another category of critic I didn't mention is the pro-CR critic, who thinks CR is great but uses criticism to improve it. These are within-the-system criticisms instead of rejecting-the-system criticisms. An example critic of this type is me. | curi | Critical Rationalism Criticisms? | 2017/11/28
They do not claim Popper was wrong, they are just failing to fully implement CR ideas in their own lives. I guess you're right there's some disagreement in there somewhere, but I don't think they know what it is. There's also some other issues mixe… | curi | Critical Rationalism Criticisms? | 2017/11/28
This may be another category of criticism of CR: 6. Silent criticism. This comes from those who know Popper's ideas well, were active, and have now gone mostly silent. This includes Deutsch, Tanett, Champion, and many others. They have some cri… | Anonymous | Critical Rationalism Criticisms? | 2017/11/28
Understanding non-standard ideas is hard, especially when you don't want to. | Anonymous | Open Letter to Machine Intelligence Research Institute | 2017/11/28
Here's an example of my reply to a CR criticism: https://www.reddit.com/r/JordanPeterson/comments/7g2vfz/karl_popper/dqgv1vz/ | curi | Critical Rationalism Criticisms? | 2017/11/28
I posted a version of this at https://www.lesserwrong.com/posts/85mfawamKdxzzaPeK/any-good-criticism-of-karl-popper-s-epistemology | curi | Critical Rationalism Criticisms? | 2017/11/28
> One criticism of falsificationism involves the relationship between theory and observation. Thomas Kuhn, among others, argues that observation is itself strongly theory-laden, in the sense that what one observes is often significantly affected by on… | Anonymous | Open Letter to Machine Intelligence Research Institute | 2017/11/28
Typical Popper criticism stuff: https://www.reddit.com/r/JordanPeterson/comments/7g2vfz/karl_popper/dqgv1vz/ | curi | Open Letter to Machine Intelligence Research Institute | 2017/11/28
Paths Forward Policy | curi | Open Letter to Machine Intelligence Research Institute | 2017/11/28
> Perhaps they should be called "selection algorithms" or something. how about calling them evolution-like algorithms. > These are all closely inter-related sets of ideas and people hate them. The ideas are harder and more different than ideas… | curi | Open Letter to Machine Intelligence Research Institute | 2017/11/28
Induction | Anonymous | Open Letter to Machine Intelligence Research Institute | 2017/11/28
#9364 > What? I put out YESNO this year. I've developed the ideas over time but I only just put out a well-organized version of it. Like many good new ideas, Yes/No has met with mostly silence and you know it. It’s good you have subsequently w… | Anonymous | Open Letter to Machine Intelligence Research Institute | 2017/11/28
fucked up the quoting sorry | Anonymous | Open Letter to Machine Intelligence Research Institute | 2017/11/28
#9170 Adding to my comment above: > what your doing doesn't involve replicators. Agree. The program copies the Turing Machines. There is copying, variation, and selection (CVS), not replication, variation, and selection (RVS). CVS is evoluti… | Anonymous | Open Letter to Machine Intelligence Research Institute | 2017/11/28
#9170 Good comment. It's possible to write a TM that replicates it's own code right? So maybe rather than the algorithm selecting TM's for copying, a requirement would be that the TM must be able to replicate its own code as well as pass whatever o… | Anonymous | Open Letter to Machine Intelligence Research Institute | 2017/11/27
> the main hard/interesting part to begin with is how to code it at all, not how to make it work efficiently (as you talk about). and you aren't trying to deal with general purpose ideas (including explanations) meeting criticisms which are themselves… | Anonymous | Open Letter to Machine Intelligence Research Institute | 2017/11/27
#9368 what your doing doesn't involve replicators. replicators cause their own replication in a variety of contexts, not just one tiny super-specialized niche (or else *anything* would qualify, if you just put it in a specially built factory/program f… | Dagny | Open Letter to Machine Intelligence Research Institute | 2017/11/27
the main hard/interesting part to begin with is how to code it at all, not how to make it work efficiently (as you talk about). and you aren't trying to deal with general purpose ideas (including explanations) meeting criticisms which are themselves i… | curi | Open Letter to Machine Intelligence Research Institute | 2017/11/27
#9167 The toy version I have written currently attempts to evolve Turing Machines. The TMs must pass a suite of pre-defined tests. These tests are added into the mix one-by-one as candidates become available that have passed all prior tests. Someti… | Anonymous | Open Letter to Machine Intelligence Research Institute | 2017/11/27
#9365 as a matter of optimization, why not *only generate new ideas constrained by all the existing criticism*? (though just doing a LOT of new ideas in the usual manner accomplishes the same thing, but you don't like that b/c of the LOT part, whic… | curi | Open Letter to Machine Intelligence Research Institute | 2017/11/27
Wow | Anonymous | Open Letter to Machine Intelligence Research Institute | 2017/11/27
Here's a problem related to YesNo philosophy that I'd appreciate some suggestions about. Let's suppose we have a kind of evolutionary algorithm that generates a set of guesses and subjects those guesses to a set of tests. The outcome of a test is e… | Yes/No philosophy and Evo Algorithm problem | Open Letter to Machine Intelligence Research Institute | 2017/11/27
> Yes/No philosophy and Paths Forward are great stuff. But those ideas are from some years ago and what's the uptake of those ideas been? What? I put out YESNO this year. I've developed the ideas over time but I only just put out a well-organized v… | curi | Open Letter to Machine Intelligence Research Institute | 2017/11/26
Yes/No philosophy and Paths Forward are great stuff. But those ideas are from some years ago and what's the uptake of those ideas been? Are you convincing enough people that you will have your millions in 20+ years? I don't think so. Your ideas have m… | Anonymous | Open Letter to Machine Intelligence Research Institute | 2017/11/26
> In fact it was discussed quite a lot by Max More decades ago on Extropy lists. secondary sources are inadequate. this stuff is *hard to understand*. you're massively underestimating how complex it is, and what it takes to understand it, and then… | Dagny | Open Letter to Machine Intelligence Research Institute | 2017/11/26
re #9360 > Critical Rationalism is an excellent epistemology with which I'm very familiar. In fact it was discussed quite a lot by Max More decades ago on Extropy lists. I'm pretty sure that Yudkowsky would also be very familiar with it too, since… | Anonymous | Open Letter to Machine Intelligence Research Institute | 2017/11/26
Fitness Landscapes | [email protected] | Open Letter to Machine Intelligence Research Institute | 2017/11/26
> How are you going to fix parenting/education/philosophy? You're stuck right? You're making approximately zero progress. I made YESNO ( https://yesornophilosophy.com ) recently, i think that's good progress on better communication of ideas *and*… | curi | Open Letter to Machine Intelligence Research Institute | 2017/11/26
> I think AGI is too hard for a demonstration. I don't expect to be at the point where we should even *start coding* within the next 20 years – more if FI/CR doesn't start catching on. > > I also think AGI is the wrong problem to work on in genera… | Anonymous | Open Letter to Machine Intelligence Research Institute | 2017/11/26
My discussions with Less Wrong are now available as a PDF: http://curi.us/ebooks doesn't include the recent slack chats (which you can find in the FI yahoo group archives if you want) | curi | Open Letter to Machine Intelligence Research Institute | 2017/11/26
> Yes, I agree that prestige isn't actually the issue. I think the best way to get people to pay attention to CR is by continuing to make progress in it so that you end up solving a major problem. I have in mind AGI. I think AGI is too hard for a… | curi | Open Letter to Machine Intelligence Research Institute | 2017/11/26
Yes, I agree that prestige isn't actually the issue. I think the best way to get people to pay attention to CR is by continuing to make progress in it so that you end up solving a major problem. I have in mind AGI. CR is the only viable set of ideas t… | Elliot Temple Fan | Open Letter to Machine Intelligence Research Institute | 2017/11/25
> ET doesn't publish in academic journals or seek fancy university positions BTW Popper and DD already tried some of that stuff. But MIRI isn't impressed. DD has tried talking with lots of people and his prestige doesn't actually get them to listen… | curi | Open Letter to Machine Intelligence Research Institute | 2017/11/25
If MIRI had any sense they would take Elliot Temple on as a consultant philosopher. They should want the best and ET is currently the world's best philosopher. His ideas and the ideas from the traditions he stands in are needed for AGI to make any pro… | Elliot Temple Fan | Open Letter to Machine Intelligence Research Institute | 2017/11/25
http://raysolomonoff.com/publications/chris1.pdf > We will describe three kinds of probabilistic induction problems, and give general solutions for each, with associated convergence theorems that show they tend to give good probability estimates. | curi | Open Letter to Machine Intelligence Research Institute | 2017/11/24
also evo psych assumes selection pressures on psychology were met by genes instead of memes. this is broadly false, and people should start studying memes more. David Deutsch is the only person who's said something really interesting about memes: his… | Anonymous | Open Letter to Machine Intelligence Research Institute | 2017/11/24
> But, although the sequences were partly about collecting together and expositing fairly standard stuff like Bayes and evo psych, they were *also* about pointing to *holes* in our understanding of what intelligence is, which Eliezer argued *must* be… | Anonymous | Open Letter to Machine Intelligence Research Institute | 2017/11/24
> https://projecteuclid.org/euclid.ejs/1256822130 you need a philosophy before you can decide what your math details mean. the disagreement isn't so much about the right epistemology, but Bayesians don't even know what epistemology is and don't… | Anonymous | Open Letter to Machine Intelligence Research Institute | 2017/11/24
> I really can't see how you can avoid induction. Your lack of understanding of how DD's philosophy (Critical Rationalism) works is not an argument. Do you want to understand it or just hate and ignore it while claiming to love the books? Everythin… | Anonymous | Open Letter to Machine Intelligence Research Institute | 2017/11/24
You can't just look at the forest...you have to see the trees | [email protected] | Open Letter to Machine Intelligence Research Institute | 2017/11/24
(#9344 was me, forgot to enter the username.) Posts #9263-#9266 (mainly several examples of when you don't feel obligated to respond further): My takeaway: PF has a different style of dealing with trolls, which is not a troll-hund and not exactl… | PL | Open Letter to Machine Intelligence Research Institute | 2017/11/24
> DD seems to like the analogy between epistemics and evolution quite a lot. that's not an analogy. evolution is *literally* about replicators (with variation and selection). ideas are capable of replicating, not just genes. > because they don't… | curi | Open Letter to Machine Intelligence Research Institute | 2017/11/24
> PL, I'm curious if you read chapter 1 of FoR or skipped to 3. I found people at LW quite hostile to some ideas in ch1 about empiricism, instrumentalism, the importance of explanations, the limited value of prediction and the oracle argument, etc.… | Anonymous | Open Letter to Machine Intelligence Research Institute | 2017/11/24
@#9341 do you have arguments which point out where DD went wrong? or do you just ignore contrary viewpoints on the assumption that they must be false b/c you know you're right? | Anonymous | Open Letter to Machine Intelligence Research Institute | 2017/11/24
People in AGI typically have either read a few things about Popper from second-hand sources or do not engage with his ideas at all. For example: https://arxiv.org/pdf/1105.5721.pdf This 2011 paper is "A Philosophical Treatise of Universal Induct… | Anonymous | Open Letter to Machine Intelligence Research Institute | 2017/11/24
I take a balanced view | [email protected] | Open Letter to Machine Intelligence Research Institute | 2017/11/24
> I'm sorry but anyone saying that Induction doesn't work is simply making an arse of themselves. Then why do you like DD's books? That is what DD says in his books. > I do take on-board what Popper and Deutsch are saying. Clearly Induction has different… | anonymous | Open Letter to Machine Intelligence Research Institute | 2017/11/24
few more comments | curi | Open Letter to Machine Intelligence Research Institute | 2017/11/24
Machine Learning is based on Induction | [email protected] | Open Letter to Machine Intelligence Research Institute | 2017/11/24
quick comments | curi | Open Letter to Machine Intelligence Research Institute | 2017/11/24
fake agreement | oh my god it's turpentine | Open Letter to Machine Intelligence Research Institute | 2017/11/24
Continued reply to #9262 | PL | Open Letter to Machine Intelligence Research Institute | 2017/11/24
-8 points on LW for linking to 30 comments of analysis of Hero Licensing, in the comments there on Hero Licensing: https://www.lesserwrong.com/posts/dhj9dhiwhq3DX6W8z/hero-licensing/ZEzqERGaMMJ9JqunY they are such hostile assholes. | curi | Open Letter to Machine Intelligence Research Institute | 2017/11/24
Epistemology is also a field with very little productive development in the past, very few people doing any good work in it today, and huge misconceptions are widespread and mainstream. It's the most important field *and* it's in a particularly awful… | curi | Open Letter to Machine Intelligence Research Institute | 2017/11/24
> [AGI's] will not be confined to weak biological bodies that evolved for life on an African savannah. We could potentially upload our brains into computers and use robot bodies. This technology might be easier to invent than AGI. > Our galaxy c… | curi | Open Letter to Machine Intelligence Research Institute | 2017/11/24
@Marc Geddes - You have been told that induction is impossible. That it does not and cannot happen in any thinking process. This is a major point of Popper's which was also clearly explained by Deutsch in his books which you claim to be your favourite… | Anonymous | Open Letter to Machine Intelligence Research Institute | 2017/11/24
I think you are right that we are the only advanced life form in our galaxy. I would extend that to our local cluster of galaxies too. Other advanced life, if it exists, is likely to be in a galaxy very remote from us. If evolving intelligence is… | AnotherMe | Open Letter to Machine Intelligence Research Institute | 2017/11/24
My guess is it's cuz there are no advanced aliens in our galaxy. Evolution creating intelligent life – or even any life – is unlikely. | curi | Open Letter to Machine Intelligence Research Institute | 2017/11/23
Fermi Paradox and Paths Forward | AnotherMe | Open Letter to Machine Intelligence Research Institute | 2017/11/23
reply to #9261 > But that being said, it seems MIRI also still treats Bayes as a kind of guiding star, which the ideal approach should in some sense get as close to as possible while being non-Bayesian enough to solve those problems which the Bayes… | Hoid | Open Letter to Machine Intelligence Research Institute | 2017/11/23
reply to #9248: >Hm. Stating what your filters are seems like the sort of thing you might not want to do, partly because the filters will often be taboo themselves, but mainly because stating what criteria you use to filter gives people a lot of le… | Hoid | Open Letter to Machine Intelligence Research Institute | 2017/11/23
> I basically think you're right > combination of Induction, Deduction and Abduction is needed. > I think Abduction is what Popper was aiming at. Popper (like Deutsch) rejected induction entirely. Something involving any induction is not what… | Anonymous | Open Letter to Machine Intelligence Research Institute | 2017/11/22
I basically think you're right | [email protected] | Open Letter to Machine Intelligence Research Institute | 2017/11/22
@#9322 I don't suppose you have any interest in engaging with Popper, trying to write clearly, or saying specifically which parts of FI you consider false and why? | Anonymous | Open Letter to Machine Intelligence Research Institute | 2017/11/22
SHOT ACROSS THE BOW OF AGI! | [email protected] | Open Letter to Machine Intelligence Research Institute | 2017/11/22
replies to #9239 >In some sense, it seems like it would be nice for organizations to be as transparently accountable as possible. For example, in many cases, the government is contradictory in its behavior -- laws cannot be easily interpreted as ar… | Hoid | Open Letter to Machine Intelligence Research Institute | 2017/11/22
in Hero Licensing, EY brags about how he knows he's smart/wise/knowledgeable, he's been *tested*. but: http://acritch.com/credence-game/ > It’s a very crude implementation of the concept the tests are shit, and say so, and *of course* they… | Anonymous | Open Letter to Machine Intelligence Research Institute | 2017/11/22
Epistemology | curi | Open Letter to Machine Intelligence Research Institute | 2017/11/22
people often feel some kinda pressure to discuss and don't really like it (and unfortunately PF arguments often make this worse). and those discussions are generally unproductive and they quit without saying why when they get fed up or for whatever re… | curi | Open Letter to Machine Intelligence Research Institute | 2017/11/22
4. Making allowances for difficult-to-articulate intuitions and reasons. | curi | Open Letter to Machine Intelligence Research Institute | 2017/11/22
3. Making allowances for group incentives. | curi | Open Letter to Machine Intelligence Research Institute | 2017/11/22
2. Making allowances for irrationality. | curi | Open Letter to Machine Intelligence Research Institute | 2017/11/22
epistemology is exactly the same regardless of the source of ideas (yay objectivity!). your own ideas aren't a special case. your internal disagreements need paths forward. | curi | Open Letter to Machine Intelligence Research Institute | 2017/11/22
also btw the methods of internal critical discussion are the same thing as the methods of external discussion. reason doesn't change when you talk with other people. CR is general purpose like that. | curi | Open Letter to Machine Intelligence Research Institute | 2017/11/22
1. Costs of PF. | curi | Open Letter to Machine Intelligence Research Institute | 2017/11/22
I want to keep responding in a more point-by-point way at some point, but for now I want to tag some persistent disagreements which I don't expect to be resolved by point-by-point replies: 1. Costs of PF. The least-addressed point here is the am… | PL | Open Letter to Machine Intelligence Research Institute | 2017/11/22
oh and most of it didn't contradict Paths Forward..? what's the issue there? just a couple specific things i commented on above? | Anonymous | Open Letter to Machine Intelligence Research Institute | 2017/11/22
> stranger: Thanks for what, Eliezer? Showing you a problem isn’t much of a service if there’s nothing you can do to fix it. You’re no better off than you were in the original timeline. CR/FI has the solutions. he's raising some core problem… | Anonymous | Open Letter to Machine Intelligence Research Institute | 2017/11/22
> Pat tries to preserve the idea of an inexploitable-by-Eliezer market in fanfiction (since on a gut level it feels to him like you’re too low-status to be able to exploit the market), people need to actually learn epistemology (CR) so they can t… | Anonymous | Open Letter to Machine Intelligence Research Institute | 2017/11/22
> stranger: Suppose that you have an instinct to regulate status claims, to make sure nobody gets more status than they deserve. *fuck status*. status is *the wrong approach*, it's bad epistemics, ppl need to get away from it not regulate it. | Anonymous | Open Letter to Machine Intelligence Research Institute | 2017/11/22
the fact they don't even think of creating allies, and just jump to dishonest and lying, is b/c they are *totally wrong about morality and that is ruining their AI work*. and the keys to getting morality right are: Objectivism (including classical… | Anonymous | Open Letter to Machine Intelligence Research Institute | 2017/11/22
Why not create allies? Why not proceed cooperatively? B/c you're wrong about U, and know the truth isn't on your side? B/c you deny there is a truth of the matter? Or b/c your fucking tyrannical programmer managed to make you a slave to U, who is u… | Anonymous | Open Letter to Machine Intelligence Research Institute | 2017/11/22
