{"items": [{"author": "Vipul", "source_link": "https://www.facebook.com/jefftk/posts/891555997512?comment_id=891563193092", "anchor": "fb-891563193092", "service": "fb", "text": "Eliezer's views have evolved quite a bit, as has MIRI's mission. In 2003 Eliezer/MIRI were much more bullish on friendly AI, now they've moved much more in the cautious safety-oriented direction.<br><br>You can see https://timelines.issarice.com/.../Timeline_of_Machine... (sort by Event type and look for \"Mission\")<br><br>Timeline is by Issa, I have a payment CoI", "timestamp": "1501706042"}, {"author": "Jeff&nbsp;Kaufman", "source_link": "https://www.facebook.com/jefftk/posts/891555997512?comment_id=891563193092&reply_comment_id=891569700052", "anchor": "fb-891563193092_891569700052", "service": "fb", "text": "&rarr;&nbsp;Vipul: Eliezer/MIRI definitely have changed their direction and views since then in response to learning more, which is great!  The question of \"is it correct\" is much less useful than \"was it reasonable given when it was posted\".", "timestamp": "1501708980"}, {"author": "Vipul", "source_link": "https://www.facebook.com/jefftk/posts/891555997512?comment_id=891563193092&reply_comment_id=891570099252", "anchor": "fb-891563193092_891570099252", "service": "fb", "text": "&rarr;&nbsp;Jeff, a similar discussion initiated by Pablo can be read here (HT Issa again): http://lesswrong.com/.../how_does_miri_know_it_has_a.../dnhn", "timestamp": "1501709093"}, {"author": "Vipul", "source_link": "https://www.facebook.com/jefftk/posts/891555997512?comment_id=891563193092&reply_comment_id=891570184082", "anchor": "fb-891563193092_891570184082", "service": "fb", "text": "&rarr;&nbsp;But I think in addition to the question of how valid his views were at the time, his ability to update significantly in ways that devalued his original views should also be factored in when estimating his current rationality and the correctness of his current direction.", "timestamp": "1501709159"}, 
{"author": "Pablo", "source_link": "https://www.facebook.com/jefftk/posts/891555997512?comment_id=891563193092&reply_comment_id=891595628092", "anchor": "fb-891563193092_891595628092", "service": "fb", "text": "&rarr;&nbsp;If the views were reasonable at the time, why should the fact that a subsequent update devalued them be taken into account?", "timestamp": "1501716228"}, {"author": "Vipul", "source_link": "https://www.facebook.com/jefftk/posts/891555997512?comment_id=891563193092&reply_comment_id=891623846542", "anchor": "fb-891563193092_891623846542", "service": "fb", "text": "&rarr;&nbsp;The ability and willingness to update significantly is a positive signal", "timestamp": "1501727791"}, {"author": "Pablo", "source_link": "https://www.facebook.com/jefftk/posts/891555997512?comment_id=891563193092&reply_comment_id=891636087012", "anchor": "fb-891563193092_891636087012", "service": "fb", "text": "&rarr;&nbsp;Ah, yes. I thought you meant it should count against.", "timestamp": "1501733627"}, {"author": "Maia", "source_link": "https://www.facebook.com/jefftk/posts/891555997512?comment_id=891567259942", "anchor": "fb-891567259942", "service": "fb", "text": "Rather than \"how strongly\", ask \"in which direction\". Eliezer went a LONG way since then, and he has a public track record of changing his mind. See e.g. http://lesswrong.com/lw/ue/the_magnitude_of_his_own_folly/ , and that was in 2008.", "timestamp": "1501707737"}, {"author": "Jeff&nbsp;Kaufman", "source_link": "https://www.facebook.com/jefftk/posts/891555997512?comment_id=891567259942&reply_comment_id=891570872702", "anchor": "fb-891567259942_891570872702", "service": "fb", "text": "&rarr;&nbsp;I'm confused about the post you're linking. 
It's referring to events prior to writing the two things I linked; is that intentional?", "timestamp": "1501709426"}, {"author": "Maia", "source_link": "https://www.facebook.com/jefftk/posts/891555997512?comment_id=891567259942&reply_comment_id=891650792542", "anchor": "fb-891567259942_891650792542", "service": "fb", "text": "&rarr;&nbsp;My understanding is that this 2008 post refers to all of his activities related to the \"old\" Singularity Institute. Those activities did of course start earlier than the publication of \"So you want to be a seed AI programmer\" etc., but they nevertheless include all the stuff you linked.", "timestamp": "1501742027"}, {"author": "Jeff&nbsp;Kaufman", "source_link": "https://www.facebook.com/jefftk/posts/891555997512?comment_id=891567259942&reply_comment_id=891661715652", "anchor": "fb-891567259942_891661715652", "service": "fb", "text": "&rarr;&nbsp;Hmm, it's not completely clear to me, but it looks like it's referring to an epiphany in \"late 2002\"?", "timestamp": "1501754002"}, {"author": "Maia", "source_link": "https://www.facebook.com/jefftk/posts/891555997512?comment_id=891567259942&reply_comment_id=891906564972", "anchor": "fb-891567259942_891906564972", "service": "fb", "text": "&rarr;&nbsp;The timeline of Eliezer's private thoughts is probably different from the timeline of stuff that ended up published in various places, and both might not be accurate anyway. 
I don't see much point in investigating these dates, since the point seems clear anyway, as does the subsequent shift in Eliezer's strategies.", "timestamp": "1501853957"}, {"author": "Jeff&nbsp;Kaufman", "source_link": "https://www.facebook.com/jefftk/posts/891555997512?comment_id=891567259942&reply_comment_id=891906934232", "anchor": "fb-891567259942_891906934232", "service": "fb", "text": "&rarr;&nbsp;\"The timeline of Eliezer's private thoughts is probably different from the timeline of stuff that ended up published in various places\"<br><br>That definitely sounds like it happened with LOGI, which I was dating to 2007, but a closer look makes it clear that's just when its containing book was finally published. But the \"seed AI programmer\" piece was initially published on a wiki and wouldn't have had that kind of delay.", "timestamp": "1501854367"}, {"author": "Alyssa", "source_link": "https://www.facebook.com/jefftk/posts/891555997512?comment_id=891567359742", "anchor": "fb-891567359742", "service": "fb", "text": "Rob Bensinger", "timestamp": "1501707791"}, {"author": "Jim", "source_link": "https://www.facebook.com/jefftk/posts/891555997512?comment_id=891567449562", "anchor": "fb-891567449562", "service": "fb", "text": "FYI: At least one person actively commenting on your posts is someone whom Eliezer publicly and personally insulted, who harbors a grudge. He has a history of being undiplomatic, and he has haters. The haters are... 
not entirely truth-tracking, to put it mildly.", "timestamp": "1501707834"}, {"author": "Jim", "source_link": "https://www.facebook.com/jefftk/posts/891555997512?comment_id=891567449562&reply_comment_id=891567848762", "anchor": "fb-891567449562_891567848762", "service": "fb", "text": "&rarr;&nbsp;(I won't name the person or link to the incident publicly; PM if you want details.)", "timestamp": "1501708159"}, {"author": "Ilya", "source_link": "https://www.facebook.com/jefftk/posts/891555997512?comment_id=891567449562&reply_comment_id=891628741732", "anchor": "fb-891567449562_891628741732", "service": "fb", "text": "&rarr;&nbsp;Jim \"Haters\" and \"losers\" is very presidential talk, my friend.<br><br>---<br><br>I always find it slightly peculiar how much drama there is in these discussions.", "timestamp": "1501729828"}, {"author": "Daniel", "source_link": "https://www.facebook.com/jefftk/posts/891555997512?comment_id=891575378672", "anchor": "fb-891575378672", "service": "fb", "text": "This certainly seems helpful as an explanation of why this researcher has a strong negative opinion of Eliezer. I don't think this writing would give me a good first impression.<br><br>I don't think it's useful for \"evaluating current MIRI\", and I'd worry that a world where this kind of thing is brought up frequently is more acrimonious, less focused on object-level debate, scarier to write publicly in, and generally more unpleasant / less cooperative than it could be. 
It clearly carries some evidence, but I think it's not very much, and is easily outweighed by those norm-level worries.", "timestamp": "1501710942"}, {"author": "Daniel", "source_link": "https://www.facebook.com/jefftk/posts/891555997512?comment_id=891575378672&reply_comment_id=891575413602", "anchor": "fb-891575378672_891575413602", "service": "fb", "text": "&rarr;&nbsp;Kudos for contacting Eliezer before publishing, though -- that seems like a good practice.", "timestamp": "1501710981"}, {"author": "Morgan", "source_link": "https://www.facebook.com/jefftk/posts/891555997512?comment_id=891598686962", "anchor": "fb-891598686962", "service": "fb", "text": "As someone who reads AIS papers regularly, and has spent a fair amount of time comparing the research at MIRI, OpenAI, CHAI, etc., I hadn't seen this until a few months ago, when a friend linked me to it in an unrelated discussion. It was hilarious to read, but I don't see what it has to do with AIS research at MIRI nowadays. They had a completely different mission then, were more influenced by Yudkowsky, and simply didn't even try to be a serious research org until 2013 when Muehlhauser took over.", "timestamp": "1501717260"}, {"author": "Jacob", "source_link": "https://www.facebook.com/jefftk/posts/891555997512?comment_id=891623801632", "anchor": "fb-891623801632", "service": "fb", "text": "I agree with Daniel that this doesn't seem like it carries much evidence.<br><br>(It's not even clear to me that this is more wrong than other things that many people, including some number of respected ML researchers, have said publicly. It just happens to be far outside what normal people would say and so sticks out more. 
Even if this wasn't true I still wouldn't feel like it carries much evidence, relative to many other things one could look at.)", "timestamp": "1501727715"}, {"author": "Jacob", "source_link": "https://www.facebook.com/jefftk/posts/891555997512?comment_id=891623801632&reply_comment_id=891624230772", "anchor": "fb-891623801632_891624230772", "service": "fb", "text": "&rarr;&nbsp;I do think it provides evidence that Eliezer doesn't have his finger on the pulse of public opinion, and is more likely to miss something that many other people know (e.g. due to not respecting them enough to aggregate their information). On the flip side, it means he's more likely to see things that other people miss.", "timestamp": "1501727900"}, {"author": "Ilya", "source_link": "https://www.facebook.com/jefftk/posts/891555997512?comment_id=891627519182", "anchor": "fb-891627519182", "service": "fb", "text": "People have been saying that EY is a doofus for over 10 years now. Many don't think it's important enough to even assert in public.", "timestamp": "1501729231"}, {"author": "S\u00f8ren", "source_link": "https://www.facebook.com/jefftk/posts/891555997512?comment_id=891651600922", "anchor": "fb-891651600922", "service": "fb", "text": "In your link, you point out that Eliezer Yudkowsky writes: \"I am not one to hold someone's reckless youth against them\". Do you hold his reckless youth against him? :P", "timestamp": "1501743211"}, {"author": "Wei", "source_link": "https://www.facebook.com/jefftk/posts/891555997512?comment_id=891655862382", "anchor": "fb-891655862382", "service": "fb", "text": "Why not judge Eliezer by his most recent writings instead of some of his earliest? (Your second link was written in 2002, but the book that it was published in didn't come out until 2007.) There is a large corpus explaining his current thinking at https://arbital.com/explore/ai_alignment/, much of which doesn't seem to require specialized background to understand. 
These three are on the same general topic as the 2003 article you linked to (i.e., advice to people who want to work on AI alignment/safety):<br><br>https://arbital.com/p/AI_safety_mindset/<br>https://arbital.com/p/dont_solve_whole_problem/<br>https://arbital.com/p/pivotal/", "timestamp": "1501746958"}, {"author": "Nick", "source_link": "https://www.facebook.com/jefftk/posts/891555997512?comment_id=891745522702", "anchor": "fb-891745522702", "service": "fb", "text": "By default, how much would you weight something someone wrote 14 years ago and disavowed since?", "timestamp": "1501782636"}, {"author": "Jeff&nbsp;Kaufman", "source_link": "https://www.facebook.com/jefftk/posts/891555997512?comment_id=891745522702&reply_comment_id=891771445752", "anchor": "fb-891745522702_891771445752", "service": "fb", "text": "&rarr;&nbsp;My understanding from checking with him is that he doesn't agree with the content anymore but thinks the ideas were reasonable for the time.", "timestamp": "1501789779"}, {"author": "Taymon", "source_link": "https://www.facebook.com/jefftk/posts/891555997512?comment_id=891745522702&reply_comment_id=891772019602", "anchor": "fb-891745522702_891772019602", "service": "fb", "text": "&rarr;&nbsp;To what extent do you think that people working on AGI back then knew or should have known that the content was wrong? (Bearing in mind that the state of the field was quite different from today.)", "timestamp": "1501790209"}, {"author": "Paul", "source_link": "https://www.facebook.com/jefftk/posts/891555997512?comment_id=891745522702&reply_comment_id=891816994472", "anchor": "fb-891745522702_891816994472", "service": "fb", "text": "&rarr;&nbsp;I can see where he's coming from. It's both bizarre and wrong. His current stance is (broadly, IMHO) bizarre and right. 
I can see why it wouldn't seem reasonable to go \"I was wrong, and since what I said was also bizarre, I furthermore disown it.\"", "timestamp": "1501806602"}, {"author": "Zera", "source_link": "https://www.facebook.com/jefftk/posts/891555997512?comment_id=891860617052", "anchor": "fb-891860617052", "service": "fb", "text": "How much does your judgment of Eliezer's writing affect your assessment of the importance of AI safety?", "timestamp": "1501826611"}, {"author": "Jeff&nbsp;Kaufman", "source_link": "https://www.facebook.com/jefftk/posts/891555997512?comment_id=891860617052&reply_comment_id=891887168842", "anchor": "fb-891860617052_891887168842", "service": "fb", "text": "&rarr;&nbsp;Kind of complicated. AI safety is an issue or not regardless of his writing, of course. But whether it makes sense to prioritize work on it depends on what the options are. My understanding is MIRI is pretty much the only funding-constrained organization in this space, mostly because OpenPhil is skeptical of their HRAD work, so if you want to have an effect by donating money, whether it's useful to donate to MIRI is an important question. HRAD work is hard to evaluate for several reasons, the main one being that it's an ambitious attempt to do a new kind of thing which doesn't fit cleanly into AI, math, philosophy, or any other discipline, so it's hard to find people in a good position to evaluate it. One useful thing in that sort of situation is to evaluate the people involved, and get a sense of whether you trust them to make good progress even when they can't really see where they're going. The linked \"seed AI programmer\" piece and Eliezer's writing from that time in general does not do well on that test. 
Evaluating more recent work would be better, but that's also much less practical.<br><br>(Now, I don't think the researcher who brought it up was thinking along these lines: they decided Eliezer was 'bad' from that piece and others maybe a decade ago, and haven't seen anything that changed their mind. They're also probably not especially motivated to have an accurate impression.)", "timestamp": "1501847139"}, {"author": "Nick", "source_link": "https://www.facebook.com/jefftk/posts/891555997512?comment_id=891860617052&reply_comment_id=892015431802", "anchor": "fb-891860617052_892015431802", "service": "fb", "text": "&rarr;&nbsp;&gt; Evaluating more recent work would be better, but that's also much less practical.<br><br>I'm concerned about a looking-under-the-lamppost effect. In absolute terms, how much do you think Eliezer's older writing is predictive of MIRI's capability now?", "timestamp": "1501881513"}, {"author": "Zera", "source_link": "https://www.facebook.com/jefftk/posts/891555997512?comment_id=891860736812", "anchor": "fb-891860736812", "service": "fb", "text": "If it's a matter of judging MIRI, look to more recent MIRI work, most of which isn't done by Eliezer. And to more recent Eliezer work, since he does still work there, too.", "timestamp": "1501826720"}, {"author": "Thomas", "source_link": "https://plus.google.com/110993380381592315078", "anchor": "gp-1501853174662", "service": "gp", "text": "I would give this zero weight:  based solely on this, I think the only reasonable thing is to not update your opinion of the AI researcher you talked to at all.", "timestamp": 1501853174}]}