{"items": [{"author": "Alex", "source_link": "https://www.facebook.com/jefftk/posts/882235241382?comment_id=882236254352", "anchor": "fb-882236254352", "service": "fb", "text": "I agree with George that a key part of any evaluation of this is going to be defining exactly what you mean by \"AI risk\". If you mean self-driving cars getting into crashes, that's one thing; if you mean superhuman intelligences enslaving humanity, that's something very different.", "timestamp": "1498485203"}, {"author": "Jeff&nbsp;Kaufman", "source_link": "https://www.facebook.com/jefftk/posts/882235241382?comment_id=882236254352&reply_comment_id=882259013742", "anchor": "fb-882236254352_882259013742", "service": "fb", "text": "&rarr;&nbsp;I mean superintelligence risk, as outlined in: https://80000hours.org/career-guide/world-problems/...<br><br>(Things like self-driving cars getting into crashes are something we're already well positioned to handle: incentives are reasonably well set up, companies care a lot about it, the government cares a lot about it, and its chance of destroying our society is minuscule.)", "timestamp": "1498490724"}, {"author": "George", "source_link": "https://www.facebook.com/jefftk/posts/882235241382?comment_id=882236254352&reply_comment_id=882284522622", "anchor": "fb-882236254352_882284522622", "service": "fb", "text": "&rarr;&nbsp;Jeff&nbsp;Kaufman It seems like enough people are confused that it might be worth using the actual term \"superintelligence risk\" for future clarity and distinguishing between these two categories.", "timestamp": "1498501378"}, {"author": "Ben", "source_link": "https://www.facebook.com/jefftk/posts/882235241382?comment_id=882236523812", "anchor": "fb-882236523812", "service": "fb", "text": "I'd predict that a stronger factor in whether people think AI Risk is important is how often they use abstract considerations to make personal decisions (e.g. how often they understand something they encounter to be a Nash equilibrium). 
<br><br>Many people also just listen to whoever they think is high status (so people might copy e.g. EA leaders), but I think the abstract decision making factor will focus on people who think AI risk is important as a result of building models rather than repeating passwords of people they trust. <br><br>If you could focus on the ML researchers in that category AND who've read basic arguments e.g. Superintelligence, then I'd predict they largely think it's important.", "timestamp": "1498485338"}, {"author": "George", "source_link": "https://www.facebook.com/jefftk/posts/882235241382?comment_id=882236523812&reply_comment_id=882282796082", "anchor": "fb-882236523812_882282796082", "service": "fb", "text": "&rarr;&nbsp;I do not know if you realize how this reads to other people, so please don't take this the wrong way. But comments like this are a large part of why I avoid the EA community and people worrying about \"AI risk\" especially (although once again, I am willing to talk about \"AI risk\" outside the context of \"superintelligence risk\" which I think is a complete waste of time, and yes I have read Bostrom's book. But I don't know how you are using the term here.).<br><br>For a bit of context, I am a machine learning researcher, not that it matters, and I have devoted my life to studying machine learning which makes me sometimes overly sensitive when I think people are making a mockery of my life's work. I apologize in advance for that if it makes the discussion later become heated.<br><br>Phew. With that long preamble, here is my actual point. Your comment here reads to me like a carefully dressed up version of \"people who disagree with me are just not as smart as me\" and fits with the general theme of narrow minded, self-satisfied, meta-argumentation endemic in the so-called rationalist community. 
I find it immensely frustrating to engage with because it indicates a poisonous elitism and a trigger-happy willingness to think that disagreements on content are just other people failing to use your oh-so-sophisticated analytical methods, when in reality people can make correct inferences and hold reasonable opinions with or without the tendency to think game-theoretically and know the terminology of Nash equilibria and other equilibria or whatever other educational shibboleth one might come up with.<br><br>Feel free to ignore this comment if you don't find it useful to your life, but I got the sense that you do care how other people view what you write and might appreciate me sharing my reaction to it. If not, I am happy to type no more here.", "timestamp": "1498500837"}, {"author": "Ben", "source_link": "https://www.facebook.com/jefftk/posts/882235241382?comment_id=882236523812&reply_comment_id=882299417772", "anchor": "fb-882236523812_882299417772", "service": "fb", "text": "&rarr;&nbsp;George I appreciate your comment - thanks for the information! I'm busy a bunch the next 36 hours but I do intend to reply.", "timestamp": "1498505430"}, {"author": "Alice", "source_link": "https://www.facebook.com/jefftk/posts/882235241382?comment_id=882236523812&reply_comment_id=882386827602", "anchor": "fb-882236523812_882386827602", "service": "fb", "text": "&rarr;&nbsp;Ben Hi! I'm a math teacher and I'd love examples of when you've used a Nash equilibrium in your personal life to help me teach the concept! The only time I've been able to apply it is games of Diplomacy.", "timestamp": "1498526890"}, {"author": "Ella", "source_link": "https://www.facebook.com/jefftk/posts/882235241382?comment_id=882236523812&reply_comment_id=882397456302", "anchor": "fb-882236523812_882397456302", "service": "fb", "text": "&rarr;&nbsp;Alice If I get time, I would love to think through some practical examples with you. Many of them are of a political bent, but so it goes. 
If you haven't seen William Spaniel's game theory book and YouTube series, you should check it out. He focuses on intuition and examples.", "timestamp": "1498530897"}, {"author": "Randy", "source_link": "https://plus.google.com/102251509192760989541", "anchor": "gp-1498485362487", "service": "gp", "text": "I'm very curious what you find.  \n<br>\n<br>\nI was part of a seminar/working group on catastrophic risk Bruce Schneier did at Harvard a couple of years back, and we spent two sessions exploring AI risk and came to the conclusion that it wasn't currently worth worrying about. The basic conclusion was that the devil was in the details--if you looked at the field from a distance, the chances of Skynet showing up seemed large, but if you dug into what AI actually meant, we were a long ways away from that place along several different axes, most of which were ignored by the broad-brush folks.\n<br>\n<br>\n(The actual, more detailed conclusion: Apocalyptic terminator-like scenarios weren't worth worrying about, but we did think that there was very real risk of first mover advantage in machine learning leading to a substantial increase in the corporate power of the first movers, which could lead in the general direction of corporate dystopia.   I don't consider that a catastrophic risk, and it bothered me less than many other folks in the room because we (humanity) have been dealing to a better or worse extent with excess corporate power for a couple of centuries now, but it was a risk we identified).  \n<br>\n<br>\nIf you'd like me to dig up the articles Schneier put together for us to read and discuss, I'd be happy to.  ", "timestamp": 1498485362}, {"author": "Eli", "source_link": "https://www.facebook.com/jefftk/posts/882235241382?comment_id=882242516802", "anchor": "fb-882242516802", "service": "fb", "text": "Cathy O'Neil's 'Weapons of Math Destruction' also talks about some pitfalls in algorithm-based systems, in the areas of scale, secrecy, and capacity to do harm. 
Worth checking out.", "timestamp": "1498486227"}, {"author": "Jacob", "source_link": "https://www.facebook.com/jefftk/posts/882235241382?comment_id=882242516802&reply_comment_id=882256698382", "anchor": "fb-882242516802_882256698382", "service": "fb", "text": "&rarr;&nbsp;That's not AI risk.", "timestamp": "1498489797"}, {"author": "Eli", "source_link": "https://www.facebook.com/jefftk/posts/882235241382?comment_id=882242516802&reply_comment_id=883489398042", "anchor": "fb-882242516802_883489398042", "service": "fb", "text": "&rarr;&nbsp;https://www.nytimes.com/.../artificial-intelligence...", "timestamp": "1498786973"}, {"author": "Jacob", "source_link": "https://www.facebook.com/jefftk/posts/882235241382?comment_id=882242516802&reply_comment_id=883504058662", "anchor": "fb-882242516802_883504058662", "service": "fb", "text": "&rarr;&nbsp;Yeah, they don't understand scale. It's not a \"threat\", it's just an old problem with new buzzwords.<br><br>AI risk is about bigger things.", "timestamp": "1498792336"}, {"author": "Bronwyn", "source_link": "https://www.facebook.com/jefftk/posts/882235241382?comment_id=882249233342", "anchor": "fb-882249233342", "service": "fb", "text": "I agree with the need to define \"AI risk\" better. I'm not worried about superhuman intelligence taking over the world. But there are plenty of examples of bad outcomes from giving an ML system too much power to act (because it's useful) and having things go haywire (because the systems are all trained on data, and that data doesn't reflect the actual task as well as people assumed). For instance, trading algorithms losing large sums of money, or systems learning and then enforcing/propagating biases. 
If you focus on ML fairness or similar search terms (instead of AI risk), you might find more work within the field that touches on the sort of things you're thinking about.<br>[edited for typos]", "timestamp": "1498487433"}, {"author": "Jeff&nbsp;Kaufman", "source_link": "https://www.facebook.com/jefftk/posts/882235241382?comment_id=882249233342&reply_comment_id=882260126512", "anchor": "fb-882249233342_882260126512", "service": "fb", "text": "&rarr;&nbsp;I'm looking specifically at superintelligence risk [1] because group A thinks this is incredibly important and group B thinks it's a waste of time.<br><br>Things like trading algorithms losing money are already covered pretty well by existing incentives, though ML systems enforcing/propagating biases are not.  I think the latter is important, and have thought about it some [2] but I'm not sure if it's one of the world's most pressing problems.<br><br>[1] https://80000hours.org/career-guide/world-problems/...<br><br>[2] https://www.jefftk.com/p/hiring-discrimination", "timestamp": "1498491114"}, {"author": "Bronwyn", "source_link": "https://www.facebook.com/jefftk/posts/882235241382?comment_id=882249233342&reply_comment_id=882264802142", "anchor": "fb-882249233342_882264802142", "service": "fb", "text": "&rarr;&nbsp;Makes sense. I have some ideas about the disconnect, but I'm interested to see what you find.", "timestamp": "1498492543"}, {"author": "Raymond", "source_link": "https://www.facebook.com/jefftk/posts/882235241382?comment_id=882251563672", "anchor": "fb-882251563672", "service": "fb", "text": "Interested in the outcome of this. I'd also be interested in you making predictions in advance (although maybe not sharing them until afterwards) about how you think your own views will shift, and whether and why other ML people's views will shift.<br><br>(i.e. 
do you think current differences between you and pro-AI-safety people have more to do with differences in goals/values, differences in \"whether it's practical to make progress\" or similar concerns, or due to you not being familiar with all of the arguments?)<br><br>((I'd have assumed you were already familiar with the arguments, but I'd also assumed you'd read some or most of the pieces you listed))", "timestamp": "1498488271"}, {"author": "Jeff&nbsp;Kaufman", "source_link": "https://www.facebook.com/jefftk/posts/882235241382?comment_id=882251563672&reply_comment_id=882253110572", "anchor": "fb-882251563672_882253110572", "service": "fb", "text": "&rarr;&nbsp;((I have actually already read most of the pieces I linked, though generally the shorter ones. For those I'm going to reread them, taking a much more critical approach.))", "timestamp": "1498488824"}, {"author": "Jeff&nbsp;Kaufman", "source_link": "https://www.facebook.com/jefftk/posts/882235241382?comment_id=882251563672&reply_comment_id=882261284192", "anchor": "fb-882251563672_882261284192", "service": "fb", "text": "&rarr;&nbsp;Inside view: I don't expect to start thinking ai risk is what we should focus on.  I'm already somewhat familiar with the arguments and have been following them for a while.<br><br>Outside view: many people similar to me have started thinking ai risk is very important after reading more about it, to the point that Ceg\u0142owski considers it a \"memetic hazard\".  
So my views might well shift.", "timestamp": "1498491563"}, {"author": "Paul", "source_link": "https://www.facebook.com/jefftk/posts/882235241382?comment_id=882251563672&reply_comment_id=882266219302", "anchor": "fb-882251563672_882266219302", "service": "fb", "text": "&rarr;&nbsp;What do you think the crux of disagreement will be?", "timestamp": "1498493041"}, {"author": "Jeff&nbsp;Kaufman", "source_link": "https://www.facebook.com/jefftk/posts/882235241382?comment_id=882251563672&reply_comment_id=882266688362", "anchor": "fb-882251563672_882266688362", "service": "fb", "text": "&rarr;&nbsp;@Paul: My current (and very early) guess: whether we're able to make progress on this now.  I suspect it's that professionals feel like we're too many breakthroughs away from where we'll be able to see what AI might look like.", "timestamp": "1498493162"}, {"author": "Thomas", "source_link": "https://www.facebook.com/jefftk/posts/882235241382?comment_id=882251563672&reply_comment_id=882272067582", "anchor": "fb-882251563672_882272067582", "service": "fb", "text": "&rarr;&nbsp;Unrelated aside: using ((multiple parentheses)) is modern internet shorthand for indicating the person or thing inside the parentheses is Jewish, so this makes this conversation read really strangely", "timestamp": "1498495756"}, {"author": "Jeff&nbsp;Kaufman", "source_link": "https://www.facebook.com/jefftk/posts/882235241382?comment_id=882251563672&reply_comment_id=882272561592", "anchor": "fb-882251563672_882272561592", "service": "fb", "text": "&rarr;&nbsp;Thomas I thought that was only (((...)))?  https://en.m.wikipedia.org/wiki/Triple_parentheses", "timestamp": "1498495931"}, {"author": "Thomas", "source_link": "https://www.facebook.com/jefftk/posts/882235241382?comment_id=882251563672&reply_comment_id=882272696322", "anchor": "fb-882251563672_882272696322", "service": "fb", "text": "&rarr;&nbsp;Oh maybe! 
I had not noticed the specific number when I encountered it in the wild", "timestamp": "1498495973"}, {"author": "Raymond", "source_link": "https://www.facebook.com/jefftk/posts/882235241382?comment_id=882251563672&reply_comment_id=882274173362", "anchor": "fb-882251563672_882274173362", "service": "fb", "text": "&rarr;&nbsp;&gt; parenthesis<br><br>I... might force myself to care about this if a critical mass of people in my filter bubble care about it to avoid confusion, but this seems like an area where \"if I have a reason to use multiple parentheses I'mma just keep doing that.\"", "timestamp": "1498496633"}, {"author": "Howie", "source_link": "https://www.facebook.com/jefftk/posts/882235241382?comment_id=882251563672&reply_comment_id=882280984712", "anchor": "fb-882251563672_882280984712", "service": "fb", "text": "&rarr;&nbsp;Raymond When is there ever a reason to use multiple parentheses like that?", "timestamp": "1498500169"}, {"author": "Howie", "source_link": "https://www.facebook.com/jefftk/posts/882235241382?comment_id=882251563672&reply_comment_id=882281209262", "anchor": "fb-882251563672_882281209262", "service": "fb", "text": "&rarr;&nbsp;Jeff&nbsp;Kaufman If you think that'll be the crux, Luke's blog post is the best summary I've seen of the evidence on AI timelines.  http://www.openphilanthropy.org/.../potentia.../ai-timelines<br><br>Although you could simultaneously believe that: 1) advanced AI may be developed in the next 10-20 years and 2) we're too many breakthroughs away to make progress on risk.", "timestamp": "1498500305"}, {"author": "Raymond", "source_link": "https://www.facebook.com/jefftk/posts/882235241382?comment_id=882251563672&reply_comment_id=882293689252", "anchor": "fb-882251563672_882293689252", "service": "fb", "text": "&rarr;&nbsp;Howie - I use it to refer to parentheticals within parentheticals when they are spaced out over multiple comments or paragraphs.  (See the opening comments of this subthread). 
I think a case can be made that it's not a great notation, but that's a different argument than 'some bad people are using a notation for bad things so everyone else has to stop using it'", "timestamp": "1498503301"}, {"author": "Jeff&nbsp;Kaufman", "source_link": "https://www.facebook.com/jefftk/posts/882235241382?comment_id=882251563672&reply_comment_id=882298050512", "anchor": "fb-882251563672_882298050512", "service": "fb", "text": "&rarr;&nbsp;@Howie: thanks!  Added to the list of things to read.", "timestamp": "1498505025"}, {"author": "Howie", "source_link": "https://www.facebook.com/jefftk/posts/882235241382?comment_id=882251563672&reply_comment_id=882298968672", "anchor": "fb-882251563672_882298968672", "service": "fb", "text": "&rarr;&nbsp;Raymond Huh.  I'd never seen that before.  I'm actually confused about what you were doing in the opening comment.  Which parenthetical is inside another one?  Why did you close off the first parenthetical if the second is inside of it?", "timestamp": "1498505292"}, {"author": "Howie", "source_link": "https://www.facebook.com/jefftk/posts/882235241382?comment_id=882251563672&reply_comment_id=882299048512", "anchor": "fb-882251563672_882299048512", "service": "fb", "text": "&rarr;&nbsp;Jeff&nbsp;Kaufman No problem!  That was the crux of the issue for me and I ended up changing my mind.  I think finding concrete examples of work that seemed valuable to me ended up more important than thinking about timelines, though.", "timestamp": "1498505348"}, {"author": "Howie", "source_link": "https://www.facebook.com/jefftk/posts/882235241382?comment_id=882251563672&reply_comment_id=882333175122", "anchor": "fb-882251563672_882333175122", "service": "fb", "text": "&rarr;&nbsp;Jeff&nbsp;Kaufman To be a bit more specific, I think the thing that really changed my mind was going through lists of potential research projects (e.g. 
the FLI research priorities doc https://futureoflife.org/.../research_priorities.pdf...; the projects FLI ended up funding https://futureoflife.org/first-ai-grant-recipients/; the Concrete Problems paper) and asking myself whether there were several that seemed like they might contribute to growing a field that reduced risk.", "timestamp": "1498514486"}, {"author": "Howie", "source_link": "https://www.facebook.com/jefftk/posts/882235241382?comment_id=882251563672&reply_comment_id=882334218032", "anchor": "fb-882251563672_882334218032", "service": "fb", "text": "&rarr;&nbsp;Jeff&nbsp;Kaufman You might also want to add:<br><br>1) Katja Grace, John Salvatier, Allan Dafoe, Baobao Zhang, Owain Evans<br>\"When Will AI Exceed Human Performance? Evidence from AI Experts.\"  https://arxiv.org/abs/1705.08807<br><br>-This came out after Luke's review of AI timelines, which I link to above.<br><br>2) Luke's series of blog posts titled \"Reply to X on AI Risk\" on his personal blog.  He replies to many of the arguments that prominent AI/ML scientists have made against worrying about AI risk.  Reading these contributed to my feeling that a lot of the folks making these statements hadn't thought very hard about the arguments on the other side.<br><br>Obviously, that could be true and they could still be right.  
But it caused me to lower the weight I placed on the outside view that \"if all these AI scientists disagree with Bostrom, they're probably right for reasons I haven't thought of.\"", "timestamp": "1498514998"}, {"author": "Howie", "source_link": "https://www.facebook.com/jefftk/posts/882235241382?comment_id=882251563672&reply_comment_id=882334397672", "anchor": "fb-882251563672_882334397672", "service": "fb", "text": "&rarr;&nbsp;Luke: Are the posts I mentioned in the above comment collected anywhere?", "timestamp": "1498515029"}, {"author": "Luke", "source_link": "https://www.facebook.com/jefftk/posts/882235241382?comment_id=882251563672&reply_comment_id=882403703782", "anchor": "fb-882251563672_882403703782", "service": "fb", "text": "&rarr;&nbsp;Howie: Yup! http://lukemuehlhauser.com/replies-to-people-who-argue.../", "timestamp": "1498533929"}, {"author": "Alexander", "source_link": "https://www.facebook.com/jefftk/posts/882235241382?comment_id=882253699392", "anchor": "fb-882253699392", "service": "fb", "text": "More attempts to reconcile EA thinking with 'expert' thinking seem like they would be valuable in a number of domains. Interested to see how this one goes.", "timestamp": "1498489118"}, {"author": "Michael", "source_link": "https://www.facebook.com/jefftk/posts/882235241382?comment_id=882253819152", "anchor": "fb-882253819152", "service": "fb", "text": "On the question of defining AI Risk, what about something like 1987's Black Monday market crash? 
No self-awareness involved, but lack of human oversight of a highly automated system.", "timestamp": "1498489169"}, {"author": "Jeff&nbsp;Kaufman", "source_link": "https://www.facebook.com/jefftk/posts/882235241382?comment_id=882253819152&reply_comment_id=882260755252", "anchor": "fb-882253819152_882260755252", "service": "fb", "text": "&rarr;&nbsp;I mean https://80000hours.org/career-guide/world-problems/...", "timestamp": "1498491259"}, {"author": "Michael", "source_link": "https://www.facebook.com/jefftk/posts/882235241382?comment_id=882253819152&reply_comment_id=882270475772", "anchor": "fb-882253819152_882270475772", "service": "fb", "text": "&rarr;&nbsp;Hmm.  I think my point is already covered in that. Never mind!", "timestamp": "1498495220"}, {"author": "Wolf", "source_link": "https://www.facebook.com/jefftk/posts/882235241382?comment_id=882261234292", "anchor": "fb-882261234292", "service": "fb", "text": "Thank you for doing this. If you're the kind of person that does summary notes on sources read, it would be great if you could include these in whatever post comes out of this.", "timestamp": "1498491491"}, {"author": "Goedjn", "source_link": "https://www.facebook.com/jefftk/posts/882235241382?comment_id=882263824102", "anchor": "fb-882263824102", "service": "fb", "text": "I'm assuming that \"AI Risk\" is essentially \"chance of this system becoming skynet\", and EA is \"effective altruism\"?", "timestamp": "1498492254"}, {"author": "Jeff&nbsp;Kaufman", "source_link": "https://www.facebook.com/jefftk/posts/882235241382?comment_id=882263824102&reply_comment_id=882266199342", "anchor": "fb-882263824102_882266199342", "service": "fb", "text": "&rarr;&nbsp;Yes, though I haven't seen the movie and am less sure about the \"skynet\" bit.", "timestamp": "1498493018"}, {"author": "Goedjn", "source_link": "https://www.facebook.com/jefftk/posts/882235241382?comment_id=882263824102&reply_comment_id=882269298132", "anchor": "fb-882263824102_882269298132", "service": 
"fb", "text": "&rarr;&nbsp;https://xkcd.com/534/", "timestamp": "1498494616"}, {"author": "Avi", "source_link": "https://www.facebook.com/jefftk/posts/882235241382?comment_id=882263824102&reply_comment_id=882281279122", "anchor": "fb-882263824102_882281279122", "service": "fb", "text": "&rarr;&nbsp;You've never seen Terminator?! It's a good movie and you should definitely include watching it in your investigation of AI risk. ;)", "timestamp": "1498500320"}, {"author": "Bryce", "source_link": "https://www.facebook.com/jefftk/posts/882235241382?comment_id=882264867012", "anchor": "fb-882264867012", "service": "fb", "text": "I don't have much of a substantive response to what you've said so far, but if you're looking for an AI professor to discuss this with, let me know. Quick summary of my views: current AI techniques are very far from achieving true AI, and the techniques that make progress have changed dramatically many times before. As a result, I have very low confidence that we can predict what true AI would look like and what dangers it would realistically pose.", "timestamp": "1498492547"}, {"author": "Daniel", "source_link": "https://www.facebook.com/jefftk/posts/882235241382?comment_id=882266932872", "anchor": "fb-882266932872", "service": "fb", "text": "I won't give you advice on the specific problem, but in terms of the general question, you should avoid absorbing Big Ideas from social movements. For a variety of sociological reasons, we're far too inclined to believe in the ideas that social movements promote. One source of bias is that the promoters of the movements are organized while the critics are not; another is that the leaders of the movements have far more to gain if/when the movements succeed than the critics have to gain if the movements fail. 
You should spend more time talking to contrarians.", "timestamp": "1498493249"}, {"author": "Jeff&nbsp;Kaufman", "source_link": "https://www.facebook.com/jefftk/posts/882235241382?comment_id=882266932872&reply_comment_id=882267227282", "anchor": "fb-882266932872_882267227282", "service": "fb", "text": "&rarr;&nbsp;Who are the contrarians here?  The ML research community or the small number of people who think the ML community is wrong to be ignoring catastrophic risks?", "timestamp": "1498493405"}, {"author": "Daniel", "source_link": "https://www.facebook.com/jefftk/posts/882235241382?comment_id=882266932872&reply_comment_id=882421433252", "anchor": "fb-882266932872_882421433252", "service": "fb", "text": "&rarr;&nbsp;Gosh, neither. Once a debate becomes mainstream, neither side is contrarian. Democrats want to spend more money on health insurance and Republicans want to spend less. Contrarians want to ban health insurance altogether (for example).", "timestamp": "1498542333"}, {"author": "Alexander", "source_link": "https://www.facebook.com/jefftk/posts/882235241382?comment_id=882266932872&reply_comment_id=882436532992", "anchor": "fb-882266932872_882436532992", "service": "fb", "text": "&rarr;&nbsp;Jeff&nbsp;Kaufman Michael is a professor of computer science at Brown University who has read Bostrom's book.", "timestamp": "1498551592"}, {"author": "Michael", "source_link": "https://www.facebook.com/jefftk/posts/882235241382?comment_id=882266932872&reply_comment_id=882987788272", "anchor": "fb-882266932872_882987788272", "service": "fb", "text": "&rarr;&nbsp;I've been involved in a bunch of these debates and I think I know the arguments well. I am solidly on the \"waste of time\" side of things, but I don't feel like I've yet hit upon the killer argument that makes it plain. This essay I wrote is one attempt. (Note that the selection of accompanying imagery was not mine. :-( ) https://www.livescience.com/49625-robots-will-not-conquer... . 
Although I had in mind the \"AI Risk\" community when I wrote the article, it was also written to be accessible by a broad audience. So, if you want me to elaborate on anything, please let me know.", "timestamp": "1498654367"}, {"author": "Graham", "source_link": "https://www.facebook.com/jefftk/posts/882235241382?comment_id=882267237262", "anchor": "fb-882267237262", "service": "fb", "text": "I have built \"AI\" machine learning systems.  We are nowhere near Kurzweil's crackpot fantasies of a singularity, strong AI is probably false, and societies have adapted to rapidly changing socioeconomic conditions many times before.  <br><br>Deep breath, folks.", "timestamp": "1498493412"}, {"author": "Andrew", "source_link": "https://www.facebook.com/jefftk/posts/882235241382?comment_id=882267237262&reply_comment_id=882268374982", "anchor": "fb-882267237262_882268374982", "service": "fb", "text": "&rarr;&nbsp;Kurzweil isn't the only futurist in town. Most people interested in AI risk aren't primarily listening to Kurzweil.", "timestamp": "1498494184"}, {"author": "Graham", "source_link": "https://www.facebook.com/jefftk/posts/882235241382?comment_id=882267237262&reply_comment_id=882270256212", "anchor": "fb-882267237262_882270256212", "service": "fb", "text": "&rarr;&nbsp;good.  
he's off his nut.", "timestamp": "1498495111"}, {"author": "Evan", "source_link": "https://www.facebook.com/jefftk/posts/882235241382?comment_id=882267237262&reply_comment_id=882275450802", "anchor": "fb-882267237262_882275450802", "service": "fb", "text": "&rarr;&nbsp;What would it mean for strong AI to be \"false\"?", "timestamp": "1498497335"}, {"author": "Graham", "source_link": "https://www.facebook.com/jefftk/posts/882235241382?comment_id=882267237262&reply_comment_id=882278589512", "anchor": "fb-882267237262_882278589512", "service": "fb", "text": "&rarr;&nbsp;broadly, \"strong AI is false\" means  that consciousness is not computable in the church-turing sense, and no digital computer will ever be capable of becoming self-aware as a consequence. <br><br>Jury's still out for any number of reasons, but i've picked a camp :)", "timestamp": "1498499079"}, {"author": "Tarn", "source_link": "https://www.facebook.com/jefftk/posts/882235241382?comment_id=882267237262&reply_comment_id=882281503672", "anchor": "fb-882267237262_882281503672", "service": "fb", "text": "&rarr;&nbsp;Consciousness is not the object of concern for AI risk, intelligence in the more prosaic sense is. If something can solve problems, plan, manipulate, invent etc well enough it doesn't matter if it has true consciousness any more than it mattered that Deep Blue did", "timestamp": "1498500398"}, {"author": "Graham", "source_link": "https://www.facebook.com/jefftk/posts/882235241382?comment_id=882267237262&reply_comment_id=882284871922", "anchor": "fb-882267237262_882284871922", "service": "fb", "text": "&rarr;&nbsp;but in that case, AI is just a complete misnomer, and we should've been freaking out and panicking since the 50s because \"giant pile of linear algebra as a black box with python wrapped around it\" is _just software_.  
Just because it's magically now a total pain in the ass to debug doesn't mean much.", "timestamp": "1498501480"}, {"author": "Jacob", "source_link": "https://www.facebook.com/jefftk/posts/882235241382?comment_id=882267237262&reply_comment_id=882388164922", "anchor": "fb-882267237262_882388164922", "service": "fb", "text": "&rarr;&nbsp;It doesn't have to stop being \"just software\" to suddenly start being dangerous. Chimps didn't have to stop being \"just mammals\".", "timestamp": "1498527434"}, {"author": "Graham", "source_link": "https://www.facebook.com/jefftk/posts/882235241382?comment_id=882267237262&reply_comment_id=882389123002", "anchor": "fb-882267237262_882389123002", "service": "fb", "text": "&rarr;&nbsp;sure, but the way we think about \"gee, software is eating the world\" and \"lolz sky net!!11!\" are really fundamentally different.", "timestamp": "1498527982"}, {"author": "Jim", "source_link": "https://www.facebook.com/jefftk/posts/882235241382?comment_id=882267556622", "anchor": "fb-882267556622", "service": "fb", "text": "There's a distinction that AI-risk people are drawing, between \"short term risk\" and \"long term risk\". Long-term risk means the risk of incorrectly designed superintelligence; short-term risk means things like self-driving cars, tech unemployment, etc. The short-term risk side is a political concession to people who think superintelligence is too absurd to think about, and nearly everything happening under the short-term-risk umbrella is, in fact, bullshit. 
Again for political reasons, most of what machine learning researchers far from the rationalist community get exposed to is this short-term risk research.", "timestamp": "1498493773"}, {"author": "Jeff&nbsp;Kaufman", "source_link": "https://www.facebook.com/jefftk/posts/882235241382?comment_id=882267556622&reply_comment_id=882267606522", "anchor": "fb-882267556622_882267606522", "service": "fb", "text": "&rarr;&nbsp;I'm only trying to talk here about what you're calling \"long-term\" risk.", "timestamp": "1498493851"}, {"author": "Jim", "source_link": "https://www.facebook.com/jefftk/posts/882235241382?comment_id=882267556622&reply_comment_id=882267910912", "anchor": "fb-882267556622_882267910912", "service": "fb", "text": "&rarr;&nbsp;Got it. I wrote that in part to warn that, when you interact with other people, you're likely to run into people calling it bullshit who lack the distinction, and might be reacting to short-term risk research they've seen.", "timestamp": "1498493951"}, {"author": "Jim", "source_link": "https://www.facebook.com/jefftk/posts/882235241382?comment_id=882267556622&reply_comment_id=882268040652", "anchor": "fb-882267556622_882268040652", "service": "fb", "text": "&rarr;&nbsp;Jonathan Yan: The short-term/long-term distinction is a politicized jargon term within AI safety research, not a claim about superintelligence timelines (which, I agree, include a small but non-negligible probability of superintelligence soon).", "timestamp": "1498494012"}, {"author": "Jeff&nbsp;Kaufman", "source_link": "https://www.facebook.com/jefftk/posts/882235241382?comment_id=882267556622&reply_comment_id=882268554622", "anchor": "fb-882267556622_882268554622", "service": "fb", "text": "&rarr;&nbsp;Jim: that's the opposite of what I've seen.  
ML experts generally call your \"long term\" risk BS, while saying your \"short term\" risk is a real concern.", "timestamp": "1498494236"}, {"author": "Jeff&nbsp;Kaufman", "source_link": "https://www.facebook.com/jefftk/posts/882235241382?comment_id=882267556622&reply_comment_id=882268709312", "anchor": "fb-882267556622_882268709312", "service": "fb", "text": "&rarr;&nbsp;For example, see this thread: http://www.jefftk.com/p/whats-next...<br><br>(But please don't post in it to argue with George)", "timestamp": "1498494310"}, {"author": "Jim", "source_link": "https://www.facebook.com/jefftk/posts/882235241382?comment_id=882267556622&reply_comment_id=882269941842", "anchor": "fb-882267556622_882269941842", "service": "fb", "text": "&rarr;&nbsp;Yes, George would be one of the \"people who think superintelligence is too absurd to think about\" that I described short-term AI risk research as being a concession to.", "timestamp": "1498494898"}, {"author": "Jim", "source_link": "https://www.facebook.com/jefftk/posts/882235241382?comment_id=882267556622&reply_comment_id=882269981762", "anchor": "fb-882267556622_882269981762", "service": "fb", "text": "&rarr;&nbsp;While it's true that there's vastly more work in deep learning applications going on than work on AGI, there's a lot more work on AGI going on than there has been at any point in the past.", "timestamp": "1498494936"}, {"author": "Ajeya", "source_link": "https://www.facebook.com/jefftk/posts/882235241382?comment_id=882268285162", "anchor": "fb-882268285162", "service": "fb", "text": "I'd be interested in seeing you analyze the empirical arguments both from an Astronomical Waste/future-focused view and from some more conventional value system (e.g. caring about the next generation or two but not much further). It's possible the AI researchers' empirical probabilities, when plugged into Astronomical Waste values, would conclude that AI risk is highly important. 
<br><br>Also just generally trying to extract numbers from people: P(strong AI in X years), P(Atari game milestone in Y years), what sort of 1-2 year developments would change people's probabilities, etc.", "timestamp": "1498494099"}, {"author": "Mad", "source_link": "https://www.facebook.com/jefftk/posts/882235241382?comment_id=882268859012", "anchor": "fb-882268859012", "service": "fb", "text": "I have been quite skeptical of AI risk! But this is an instinctive thing, presumably based on the tone and pattern of thinking in its proponents. I am *not* a domain expert and have not spent serious time considering it, so I look forward to your analysis.", "timestamp": "1498494404"}, {"author": "Mad", "source_link": "https://www.facebook.com/jefftk/posts/882235241382?comment_id=882268859012&reply_comment_id=882977354182", "anchor": "fb-882268859012_882977354182", "service": "fb", "text": "&rarr;&nbsp;JFC there's a lot here. I can't absorb it all, sorry. :(<br><br>It sounds like there's a shortage of strong input from the ML experts. If that is true -- and at the risk of repeating a suggestion someone else made -- I would suggest paying one or more ML insiders for a solid amount of time and attention, and produce their own analysis. Maybe $10k commission, but it might need to be more, depends on whether they're already looking for an excuse to do this work. (And if so, that's a good sign.) (Maybe you can get some other philanthropy group worried about AIR to do matching on this.)<br><br>You'll want the expert to read this stuff thoroughly, and provide an extremely solid response, to be published publicly. (I see Roko's claims of \"obvious errors\". Avoid that shit.) Like maybe... Section 1. Summarize the issue. Section 2. Dissect the AI threat arguments and point out  flaws and gaps Section 3. Make their own conclusion on whether there's any valid concern, and what that is. Section 4. 
Make recommendations on what should be done and on what timeline (and also \"not done\" \u2013 comment on AI threat mitigation efforts they think are misguided).<br><br>Put this through an editorial round where some AI threat enthusiasts hit back, privately. (I don't imagine you need to pay this \"team\".) Sharpen the whole thing, and publish.<br><br>... Okay now that I've written this, it feels like _surely_ one of the maybe-worried-about-AI-threat philanthropy groups considered trying this? Commission some skeptical AF critical feedback from ML/AI insiders on AI threat? To argue against pushing attention and money into this field?<br><br>... (Looks at open philanthropy's grants, has a sinking feeling.) If an AI/ML expert smells grant money to support themselves doing more AI threat... COI. :-/", "timestamp": "1498652829"}, {"author": "Michael", "source_link": "https://www.facebook.com/jefftk/posts/882235241382?comment_id=882268859012&reply_comment_id=882990557722", "anchor": "fb-882268859012_882990557722", "service": "fb", "text": "&rarr;&nbsp;Nice analysis. My thoughts are that a thorough analysis of this issue runs up against deep philosophical issues quickly. I can't imagine anything that would actually bring the various sides together because the missing (but needed) common ground might be at the level of \"what is reality?\" But, I do believe that such an exercise would be an amazing learning experience! :-)", "timestamp": "1498655065"}, {"author": "Mad", "source_link": "https://www.facebook.com/jefftk/posts/882235241382?comment_id=882268859012&reply_comment_id=883174294512", "anchor": "fb-882268859012_883174294512", "service": "fb", "text": "&rarr;&nbsp;Michael Just boiling it down to that might be productive enough. :)<br><br>Maybe it'd be better to invest smaller amounts in three different perspectives, to get a better perspective... 
And it might be nice to get a super skeptical hot take statement made at the outset, on the record \u2013 so if perspective shifts, the expert is willingly eating their own words.<br><br>The larger point I may be trying to make is that I think this feedback probably deserves to be paid for.<br><br>Engaging the AI threat community looks like one of the least appealing opportunities to \"fight on the internet\" I've ever seen. \ud83d\ude2c It's unsurprising to me that few experts are interested in doing this \"for free\". A commission may provide both rationale and shield (\"I did it 'cause he paid me to, I'm not going to stick around to argue more over it\").", "timestamp": "1498679825"}, {"author": "Michael", "source_link": "https://www.facebook.com/jefftk/posts/882235241382?comment_id=882268859012&reply_comment_id=883175187722", "anchor": "fb-882268859012_883175187722", "service": "fb", "text": "&rarr;&nbsp;I got dragged in by Alexander. I've been continuing to engage because it's fascinating to me how diverse the set of opinions of my colleagues are. It's undermining my personal theory of how people develop personal theories. :-) If I can only understand why so many smart people believe this thing that seems so plainly to be scifi, I'll either understand why I am wrong or why smart people can do silly things.", "timestamp": "1498680152"}, {"author": "Michael", "source_link": "https://www.facebook.com/jefftk/posts/882235241382?comment_id=882268859012&reply_comment_id=883175287522", "anchor": "fb-882268859012_883175287522", "service": "fb", "text": "&rarr;&nbsp;Anyhow, if you find someone to fund your proposal, let me know. 
It's a cool idea!", "timestamp": "1498680214"}, {"author": "Jim", "source_link": "https://www.facebook.com/jefftk/posts/882235241382?comment_id=882269617492", "anchor": "fb-882269617492", "service": "fb", "text": "There are several different long-term AI safety research paths people are currently pursuing, with very different skill requirements and expected values.<br><br>Strategy one: Build the math of decision theory and logical uncertainty up to the point where we have a usable mathematical model of decision theory which is completely free of flaws, and could be the basis of an AI (albeit one that looks very different from the machine learning people are working on now). This is the MIRI approach. While it's likely to be a long time before this pays any usable dividends, part of the reasoning behind it is that it does not parallelize well; if we later wish we had done more of this kind of research early, no amount of resources will be able to correct the mistake. This style of research requires a large amount of mathematical ability.<br><br>Strategy two: Build a toolkit of hacks out of AI incentive structures, multi-agent systems, and other quirky ideas. Stuart Armstrong and Paul Christiano's work has this style. This type of work is less likely to involve long serial research paths, but is  much more useful in scenarios where AI comes sooner than expected. This style of research doesn't require all that much mathematical ability, but does require a great deal of creativity and of ability to think clearly.<br><br>Strategy three: Improve the reliability and transparency of present-day machine learning techniques, to reduce the probability that AI systems built upon them will fail because of problems with their components. 
MIRI also was doing some research of this type, but decided it wasn't worthwhile and stopped, in part because this type of research already has commercial incentives to drive it.<br><br>Of these, I think strategy two is the one that probably needs more people; but only a small fraction of people will find they have the right sort of mind for it, and doing that style of research is likely to involve a long frustrating period of zero contribution before you generate any insights.", "timestamp": "1498494707"}, {"author": "Luke", "source_link": "https://www.facebook.com/jefftk/posts/882235241382?comment_id=882270540642", "anchor": "fb-882270540642", "service": "fb", "text": "If you haven't read them already, I definitely recommend OpenPhil's blog posts making the case for why we prioritized our AI risks program area in 2016:<br><br>http://www.openphilanthropy.org/.../some-background-our...<br><br>http://www.openphilanthropy.org/.../potential-risks...", "timestamp": "1498495276"}, {"author": "Jeff&nbsp;Kaufman", "source_link": "https://www.facebook.com/jefftk/posts/882235241382?comment_id=882270540642&reply_comment_id=882271039642", "anchor": "fb-882270540642_882271039642", "service": "fb", "text": "&rarr;&nbsp;added, thanks!", "timestamp": "1498495440"}, {"author": "Andrew", "source_link": "https://www.facebook.com/jefftk/posts/882235241382?comment_id=882271094532", "anchor": "fb-882271094532", "service": "fb", "text": "Looking forward to your findings.", "timestamp": "1498495448"}, {"author": "Satvik", "source_link": "https://www.facebook.com/jefftk/posts/882235241382?comment_id=882271663392", "anchor": "fb-882271663392", "service": "fb", "text": "Alex Zhu has also been spending time on this", "timestamp": "1498495536"}, {"author": "Cj", "source_link": "https://www.facebook.com/jefftk/posts/882235241382?comment_id=882272895922", "anchor": "fb-882272895922", "service": "fb", "text": "My old work used ai for risk management (fraud detection).", "timestamp": 
"1498496053"}, {"author": "Harris", "source_link": "https://www.facebook.com/jefftk/posts/882235241382?comment_id=882272920872", "anchor": "fb-882272920872", "service": "fb", "text": "This Maciej Ceglowski piece is highly enjoyable. Thanks for sharing.", "timestamp": "1498496102"}, {"author": "Jeff&nbsp;Kaufman", "source_link": "https://www.facebook.com/jefftk/posts/882235241382?comment_id=882272920872&reply_comment_id=882298759092", "anchor": "fb-882272920872_882298759092", "service": "fb", "text": "&rarr;&nbsp;I'm worried that it might be too enjoyable, in the sense that mockery is entertaining but we shouldn't let argumentation-via-mockery convince us.<br><br>(I considered not including it for that reason, but I'm not aware of anything better.)", "timestamp": "1498505229"}, {"author": "Harris", "source_link": "https://www.facebook.com/jefftk/posts/882235241382?comment_id=882272920872&reply_comment_id=882299068472", "anchor": "fb-882272920872_882299068472", "service": "fb", "text": "&rarr;&nbsp;That's fair. I also find the arguments in it compelling, but, y'know, confirmation bias may have some role there.", "timestamp": "1498505377"}, {"author": "Michael", "source_link": "https://www.facebook.com/jefftk/posts/882235241382?comment_id=882272920872&reply_comment_id=882424866372", "anchor": "fb-882272920872_882424866372", "service": "fb", "text": "&rarr;&nbsp;...He literally says the Orthogonality Thesis is wrong because a robot on Rick and Morty contradicted it.", "timestamp": "1498544339"}, {"author": "Harris", "source_link": "https://www.facebook.com/jefftk/posts/882235241382?comment_id=882272920872&reply_comment_id=882466008922", "anchor": "fb-882272920872_882466008922", "service": "fb", "text": "&rarr;&nbsp;I don't think he's so much proving that it's wrong as he is pointing out that it is a tremendous and not necessarily correct assumption to be making about the nature of intelligence. 
He doesn't have to prove it wrong to show that the foundations are shaky.", "timestamp": "1498569704"}, {"author": "Paul", "source_link": "https://www.facebook.com/jefftk/posts/882235241382?comment_id=882272920872&reply_comment_id=882466617702", "anchor": "fb-882272920872_882466617702", "service": "fb", "text": "&rarr;&nbsp;The April 1 reply on Slate Star Codex is fantastic.", "timestamp": "1498569967"}, {"author": "Noah", "source_link": "https://www.facebook.com/jefftk/posts/882235241382?comment_id=882272920872&reply_comment_id=882810528502", "anchor": "fb-882272920872_882810528502", "service": "fb", "text": "&rarr;&nbsp;(direct link http://slatestarcodex.com/.../01/g-k-chesterton-on-ai-risk/)", "timestamp": "1498632099"}, {"author": "Greg", "source_link": "https://www.facebook.com/jefftk/posts/882235241382?comment_id=882273000712", "anchor": "fb-882273000712", "service": "fb", "text": "I have worked to a limited degree with AI. It is not an immediate danger, in the way people imagine it. We're a ways away from Terminators killing all humans. On the other hand, there is a significant risk from placing decisions in the hands of AI, which is shortsighted, single-minded, and doesn't have the moral inhibitions of humans. This is happening now. In the former scenario, AI's limitations protect us; in the latter they endanger us.<br><br>Also, there is some value to getting ahead of the problem of Terminators before the risk is imminent. When the risk is imminent, it will be hard to contain, because this is not like nuclear technology where rare materials and specialized equipment is needed. 
Much as 99% of humanity might agree on steps to limit AI, it's easy for it to proliferate in the hands of a few underground actors.", "timestamp": "1498496168"}, {"author": "David&nbsp;Chudzicki", "source_link": "https://www.facebook.com/jefftk/posts/882235241382?comment_id=882274612482", "anchor": "fb-882274612482", "service": "fb", "text": "Jeff&nbsp;Kaufman, your full reading list is way too much for me, but if there are particular pieces you'd like to discuss when I get back, I should have some good reading time on the airplane.", "timestamp": "1498496858"}, {"author": "David&nbsp;Chudzicki", "source_link": "https://www.facebook.com/jefftk/posts/882235241382?comment_id=882274612482&reply_comment_id=882274747212", "anchor": "fb-882274612482_882274747212", "service": "fb", "text": "&rarr;&nbsp;That said, I recognize that discussion with people whose views are more different from yours than mine are is probably more useful!", "timestamp": "1498496905"}, {"author": "David&nbsp;Chudzicki", "source_link": "https://www.facebook.com/jefftk/posts/882235241382?comment_id=882275630442", "anchor": "fb-882275630442", "service": "fb", "text": "A CS professor's criticism of Superintelligence, along with an EA forum response: http://effective-altruism.com/.../comments_on_ernest.../", "timestamp": "1498497428"}, {"author": "David&nbsp;Chudzicki", "source_link": "https://www.facebook.com/jefftk/posts/882235241382?comment_id=882275630442&reply_comment_id=882287172312", "anchor": "fb-882275630442_882287172312", "service": "fb", "text": "&rarr;&nbsp;Hey Roko, I'm not really spending time on this, so sorry if you were expecting a reply from me. I was just trying to point Jeff to a potentially-useful resource. 
Hopefully your comments will also be a useful resource.", "timestamp": "1498502139"}, {"author": "Jeff&nbsp;Kaufman", "source_link": "https://www.facebook.com/jefftk/posts/882235241382?comment_id=882275630442&reply_comment_id=882299757092", "anchor": "fb-882275630442_882299757092", "service": "fb", "text": "&rarr;&nbsp;Thanks David; added Davis' article to the list", "timestamp": "1498505463"}, {"author": "Roman", "source_link": "https://www.facebook.com/jefftk/posts/882235241382?comment_id=882275730242", "anchor": "fb-882275730242", "service": "fb", "text": "You may enjoy reading: http://iopscience.iop.org/.../10.1088/0031-8949/90/1/018001, http://www.aaai.org/.../AAAIW16/paper/download/12566/12356", "timestamp": "1498497513"}, {"author": "Jeff&nbsp;Kaufman", "source_link": "https://www.facebook.com/jefftk/posts/882235241382?comment_id=882275730242&reply_comment_id=882300011582", "anchor": "fb-882275730242_882300011582", "service": "fb", "text": "&rarr;&nbsp;Thanks!  Added.", "timestamp": "1498505571"}, {"author": "Roman", "source_link": "https://www.facebook.com/jefftk/posts/882235241382?comment_id=882275730242&reply_comment_id=882304507572", "anchor": "fb-882275730242_882304507572", "service": "fb", "text": "&rarr;&nbsp;If you have any questions, technical or otherwise, feel free to PM me.", "timestamp": "1498506824"}, {"author": "Haydn", "source_link": "https://www.facebook.com/jefftk/posts/882235241382?comment_id=882281538602", "anchor": "fb-882281538602", "service": "fb", "text": "You might be interested in some of the Asilomar talks as well. 
Best of luck!<br>https://www.youtube.com/watch?v=h0962biiZa4&amp;t=", "timestamp": "1498500426"}, {"author": "Owen", "source_link": "https://www.facebook.com/jefftk/posts/882235241382?comment_id=882281538602&reply_comment_id=882314662222", "anchor": "fb-882281538602_882314662222", "service": "fb", "text": "&rarr;&nbsp;I don't think these are very optimised for someone who's already reading this much (unless a significant preference for video).", "timestamp": "1498508492"}, {"author": "Haydn", "source_link": "https://www.facebook.com/jefftk/posts/882235241382?comment_id=882281538602&reply_comment_id=882322341832", "anchor": "fb-882281538602_882322341832", "service": "fb", "text": "&rarr;&nbsp;Yep, it's if he wants to watch stuff. Although watching all these important people does wonders for my system 1", "timestamp": "1498511035"}, {"author": "Ben", "source_link": "https://www.facebook.com/jefftk/posts/882235241382?comment_id=882281538602&reply_comment_id=882444237552", "anchor": "fb-882281538602_882444237552", "service": "fb", "text": "&rarr;&nbsp;+1 to Haydn's last point, *watching* Alan Dafoe's talk about the governance problem made my S1 suddenly take seriously having world governments take significant action in the next few years.", "timestamp": "1498558616"}, {"author": "Nathan", "source_link": "https://www.facebook.com/jefftk/posts/882235241382?comment_id=882282212252", "anchor": "fb-882282212252", "service": "fb", "text": "I work in machine learning and AI development, and my background is in neuroscience. I have done a lot of reading and thinking about the AGI control problem, and I will be interested to hear your independent assessment of the likelihood and danger.", "timestamp": "1498500606"}, {"author": "Dave", "source_link": "https://www.facebook.com/jefftk/posts/882235241382?comment_id=882283549572", "anchor": "fb-882283549572", "service": "fb", "text": "Just saw this last night. 
Could be worth adding to your reading list: https://www.nytimes.com/.../artificial-intelligence...", "timestamp": "1498501099"}, {"author": "Howie", "source_link": "https://www.facebook.com/jefftk/posts/882235241382?comment_id=882300046512", "anchor": "fb-882300046512", "service": "fb", "text": "How did you find the Ceglowski piece?  I'm wondering how I missed it.", "timestamp": "1498505588"}, {"author": "Jeff&nbsp;Kaufman", "source_link": "https://www.facebook.com/jefftk/posts/882235241382?comment_id=882300046512&reply_comment_id=882300066472", "anchor": "fb-882300046512_882300066472", "service": "fb", "text": "&rarr;&nbsp;I remembered it from when it came up on Hacker News several months ago", "timestamp": "1498505611"}, {"author": "Howie", "source_link": "https://www.facebook.com/jefftk/posts/882235241382?comment_id=882300046512&reply_comment_id=882300096412", "anchor": "fb-882300046512_882300096412", "service": "fb", "text": "&rarr;&nbsp;Jeff&nbsp;Kaufman Huh.  Maybe I should browse HN more.", "timestamp": "1498505630"}, {"author": "Jeff&nbsp;Kaufman", "source_link": "https://www.facebook.com/jefftk/posts/882235241382?comment_id=882300046512&reply_comment_id=882301633332", "anchor": "fb-882300046512_882301633332", "service": "fb", "text": "&rarr;&nbsp;Probably not worth it on balance, such a timesink.  
I like being up on things, but if I could remove my desire to check it and similar things I would", "timestamp": "1498506001"}, {"author": "Paul", "source_link": "https://www.facebook.com/jefftk/posts/882235241382?comment_id=882300046512&reply_comment_id=882385639982", "anchor": "fb-882300046512_882385639982", "service": "fb", "text": "&rarr;&nbsp;Scott responded http://slatestarcodex.com/.../01/g-k-chesterton-on-ai-risk/", "timestamp": "1498526490"}, {"author": "Allison", "source_link": "https://www.facebook.com/jefftk/posts/882235241382?comment_id=882312172212", "anchor": "fb-882312172212", "service": "fb", "text": "Another ML researcher chiming in: I'm really interested to see what you find and happy to chat.  (also not worried about superintelligence risk)", "timestamp": "1498507906"}, {"author": "Peter", "source_link": "https://www.facebook.com/jefftk/posts/882235241382?comment_id=882321388742", "anchor": "fb-882321388742", "service": "fb", "text": "See also this, where 47% of surveyed experts think we should prioritise more or much more AI safety research: <br>http://slatestarcodex.com/.../ssc-journal-club-ai-timelines/<br><br>Based on the paper: https://arxiv.org/abs/1705.08807", "timestamp": "1498510687"}, {"author": "Peter", "source_link": "https://www.facebook.com/jefftk/posts/882235241382?comment_id=882332167142", "anchor": "fb-882332167142", "service": "fb", "text": "I'm very excited about this.", "timestamp": "1498513995"}, {"author": "Ajeya", "source_link": "https://www.facebook.com/jefftk/posts/882235241382?comment_id=882350675052", "anchor": "fb-882350675052", "service": "fb", "text": "(Speaking for myself only, not my employer.)<br><br>I think most people who don't have expertise or a lot of time answer questions like \"How far away is AGI?\" by picking the most credible experts and then believing them. 
This is a pretty reasonable strategy in general, but I think this makes it important to think about who should count as credible experts.<br><br>Most people I talk to who are skeptical of AI think that the relevant expert class is something like \"ML researchers.\" I think this fails to make an \"expert at\" vs \"expert on\" distinction. If you want a sense of recent trends in the art industry and predictions about art styles in 5 years, you'd probably ask an art historian or museum curator rather than an artist; if you want a sense of the probability of a great powers war you would ask a geopolitics expert rather than a soldier; etc. Being an expert at something probably makes it easier to be an expert on that thing, but it's not a given.<br><br>ML researchers don't generally spend their time thinking about AGI timelines -- especially if they're doing specialized ML rather than explicitly aiming to build an AGI themselves. Of course, the people who DO spend a ton of time thinking about AGI timelines and the alignment problem are probably self-selecting into that field because they think timelines are short and/or alignment is hard. <br><br>I don't think we should necessarily have a strong presumption in favor of the futurist/safety crowd being right -- you could think pretty much no one is a \"credible expert\" on AGI timelines (forecasting is a pretty young field). But I do think that we shouldn't have a very strong presumption in favor of the ML community being right.", "timestamp": "1498516297"}, {"author": "Harrison", "source_link": "https://www.facebook.com/jefftk/posts/882235241382?comment_id=882350675052&reply_comment_id=882519546632", "anchor": "fb-882350675052_882519546632", "service": "fb", "text": "&rarr;&nbsp;It's important to remember that the relevant consideration is not how \"soon\" AGI is, but rather how much easier/sooner-by-default arbitrary AGI is than safe AGI. 
For example, it could be the case that unsafe AGI will require only 40 years of hardware advancement and only minor further conceptual breakthroughs but safety is a deep philosophical and/or theoretical problem on the order of the Riemann hypothesis or probability theory. Conversely, it could be the case that AGI requires deep conceptual breakthroughs, and these breakthroughs shed light on safety in a way that makes safety blindingly obvious. (Or it could be that there are multiple paths, and we have to end up on the right one to win, etc.) It's important to note this, because one often sees \"AGI isn't soon\" as a rebuttal, and it's basically off-topic.<br><br>Nonetheless, your main point that most ML researchers aren't thinking about this is correct.", "timestamp": "1498588635"}, {"author": "Michael", "source_link": "https://www.facebook.com/jefftk/posts/882235241382?comment_id=882350675052&reply_comment_id=883004095592", "anchor": "fb-882350675052_883004095592", "service": "fb", "text": "&rarr;&nbsp;I accept the criticism that active AI/ML researchers are not trained in long-term forecasting or societal implications of technology. But, I certainly don't feel like philosophers or physicists are in a better position to understand/predict these topics than CS people. 
Grains of salt all around.", "timestamp": "1498657052"}, {"author": "Harrison", "source_link": "https://www.facebook.com/jefftk/posts/882235241382?comment_id=882350675052&reply_comment_id=884088841752", "anchor": "fb-882350675052_884088841752", "service": "fb", "text": "&rarr;&nbsp;Michael Actually, my criticism of the current crop of AI/ML people is not that they are deficient in \"long-term forecasting or [thinking about] societal implications of technology.\" Rather it's that safety seems to require theoretical foundations, but most AI/ML work today is done by trial-and-error, and most ML researchers seem to be unable or unwilling to think about theory or recognize that theories are missing at all. What conceptual work is done seems to be done by a small number of the usual suspects on a speculative or \"pet ideas\" basis, with little competition, coordination, or critical review.", "timestamp": "1498935466"}, {"author": "Sudhanshu", "source_link": "https://www.facebook.com/jefftk/posts/882235241382?comment_id=882358853662", "anchor": "fb-882358853662", "service": "fb", "text": "I don't (yet) follow Maciej Ceglowski or his work, so perhaps I haven't fully understood his intention here:<br><br>At one point, he states: Doing this (explicitly defining the human value system and designing it in the machine) is the ethics version of the early 20th century attempt to formalize mathematics and put it on a strict logical foundation. That this program ended in disaster for mathematical logic is never mentioned.<br><br>(brackets mine)<br><br>He also hyperlinks \"ended in disaster\" to this: https://en.wikipedia.org/wiki/Foundations_of_mathematics...<br><br>I'm not sold that this is a fair comparison. This 'foundational crisis' wasn't an 'end' of an exploration into formalizing mathematical logic. It was a discovery of some of the boorishness of logic that we would just have to live with, just like we would just have to live with wave-particle duality. 
Not to mention, in the pursuit of formalism, an invaluable amount of mathematics was created and discovered in the journey up to and beyond this point.<br><br>Indeed, if we were to discover that no ethical system can ever be consistent, the AI safety problems become more urgent, not less.", "timestamp": "1498519658"}, {"author": "Miles", "source_link": "https://www.facebook.com/jefftk/posts/882235241382?comment_id=882358853662&reply_comment_id=882504272242", "anchor": "fb-882358853662_882504272242", "service": "fb", "text": "&rarr;&nbsp;I love the phrase \"the boorishness of logic\".", "timestamp": "1498583045"}, {"author": "Brian", "source_link": "https://www.facebook.com/jefftk/posts/882235241382?comment_id=882360355652", "anchor": "fb-882360355652", "service": "fb", "text": "While the suffering-focused parts may not be relevant to you, https://foundational-research.org/artificial-intelligence... has some discussion that may be of interest. I take something of a middle path between the \"intelligence explosion\"/\"hard takeoff\" crowd and the \"superintelligence is silly to think about\" crowd.", "timestamp": "1498520087"}, {"author": "Bil", "source_link": "https://www.facebook.com/jefftk/posts/882235241382?comment_id=882372675962", "anchor": "fb-882372675962", "service": "fb", "text": "I think it relevant to note that there is no \"I\" in \"AI.\" One heck of a lot of very clever and very useful programming, but no intelligence. That last AAAI conference I went to a few years back contained all the same things we had worked on in the '70s, but with new names and faster machines. I don't think you'll find many professionals claiming they're working on intelligence. 
(If they do, just ask them to define intelligence.)", "timestamp": "1498522224"}, {"author": "Bil", "source_link": "https://www.facebook.com/jefftk/posts/882235241382?comment_id=882372675962&reply_comment_id=882528503682", "anchor": "fb-882372675962_882528503682", "service": "fb", "text": "&rarr;&nbsp;I think you missed the point. We have worked on AI for 50 years, yet we've never been able to define our objective without wild hand-waving. A lawyer can tell you a great deal about right and wrong and the laws surrounding them. But I can't tell you if a program is intelligent. Ask any of my peers.", "timestamp": "1498590816"}, {"author": "Raymond", "source_link": "https://www.facebook.com/jefftk/posts/882235241382?comment_id=882378339612", "anchor": "fb-882378339612", "service": "fb", "text": "Most people have been *adding* things to Jeff's reading list, so I also wanted to note: I think the Wait But Why article is basically a shorter, somewhat less rigorous version of Superintelligence, and if you're planning on reading the latter, you don't really need to read the former.", "timestamp": "1498523968"}, {"author": "Jeff&nbsp;Kaufman", "source_link": "https://www.facebook.com/jefftk/posts/882235241382?comment_id=882378339612&reply_comment_id=882379622042", "anchor": "fb-882378339612_882379622042", "service": "fb", "text": "&rarr;&nbsp;Unfortunately the WBW post is the first one I read. But it's helpful for getting an overview.", "timestamp": "1498524426"}, {"author": "Paul", "source_link": "https://www.facebook.com/jefftk/posts/882235241382?comment_id=882384776712", "anchor": "fb-882384776712", "service": "fb", "text": "This is very interesting! I'm struck though that you have a long list of things you're going to read. Something I'd prioritise over that would be a long list of people you're going to talk to. I encourage you to line up lots of video chats with knowledgeable people. 
I know that you don't want to waste people's time with stuff you could learn by reading, but an awful lot of this is stuff that it's hard to get a good grip on by reading, and some conversations would make a huge difference to clarifying. I am also very confident that everyone will want to devote the time to talk to you :) Let me know if there's any way I can help with this project. Good luck!", "timestamp": "1498526217"}, {"author": "Jeff&nbsp;Kaufman", "source_link": "https://www.facebook.com/jefftk/posts/882235241382?comment_id=882384776712&reply_comment_id=882386298662", "anchor": "fb-882384776712_882386298662", "service": "fb", "text": "&rarr;&nbsp;Superintelligence is the only really long one, but I do think I should read it", "timestamp": "1498526740"}, {"author": "Paul", "source_link": "https://www.facebook.com/jefftk/posts/882235241382?comment_id=882384776712&reply_comment_id=882390789662", "anchor": "fb-882384776712_882390789662", "service": "fb", "text": "&rarr;&nbsp;Yes, it probably is worth reading that. 
After that, a lot of the other stuff is quite repetitive!", "timestamp": "1498528764"}, {"author": "Brian", "source_link": "https://www.facebook.com/jefftk/posts/882235241382?comment_id=882384776712&reply_comment_id=882420160802", "anchor": "fb-882384776712_882420160802", "service": "fb", "text": "&rarr;&nbsp;If you do conversations, I wonder if it'd be possible to put them on YouTube (unless people don't want to be recorded).", "timestamp": "1498541299"}, {"author": "David&nbsp;Chudzicki", "source_link": "https://www.facebook.com/jefftk/posts/882235241382?comment_id=882384776712&reply_comment_id=882447176662", "anchor": "fb-882384776712_882447176662", "service": "fb", "text": "&rarr;&nbsp;Maybe start lining up conversations now so you don't get blocked on that after reading.", "timestamp": "1498562019"}, {"author": "Ben", "source_link": "https://www.facebook.com/jefftk/posts/882235241382?comment_id=882406672832", "anchor": "fb-882406672832", "service": "fb", "text": "I'm deeply skeptical that research in reducing any risks associated with a super-intelligence translates into any realized risk reduction. 
It's tough to believe that research on this front gets anything about the development of a new technology right (and tougher to believe that there are meaningful safeguards that can be applied, and tougher still to believe that those developing the new technology will listen to and apply any claimed safeguards, and even tougher to believe those safeguards work as intended).", "timestamp": "1498535291"}, {"author": "Ben", "source_link": "https://www.facebook.com/jefftk/posts/882235241382?comment_id=882406672832&reply_comment_id=882542959712", "anchor": "fb-882406672832_882542959712", "service": "fb", "text": "&rarr;&nbsp;Jonathan it's a nice result, but I see capacity to learn based on human preferences as orthogonal to safety and completely unpersuasive", "timestamp": "1498594731"}, {"author": "David&nbsp;Chudzicki", "source_link": "https://www.facebook.com/jefftk/posts/882235241382?comment_id=882406672832&reply_comment_id=885442683642", "anchor": "fb-882406672832_885442683642", "service": "fb", "text": "&rarr;&nbsp;Ben -- Jeff wrote a bit here (mainly working Dario) about why it might not be orthogonal: http://www.jefftk.com/p/conversation-with-dario-amodei<br><br>I'm not convinced but it is seeming more plausible to me than it used to.", "timestamp": "1499449590"}, {"author": "Matthias", "source_link": "https://www.facebook.com/jefftk/posts/882235241382?comment_id=882443084862", "anchor": "fb-882443084862", "service": "fb", "text": "Some semi-organized thoughts (credentials: professor of biomedical informatics with multidisciplinary background and well-acquainted with the EA and rationality memespaces).<br><br> * Framing matters. 
If you discuss superintelligence risk in the frame of other global catastrophic or existential risks of the 21st century (e.g., global warming, cyber-surveillance and totalitarianism, bio-engineered pathogens), I think most critics will suddenly shift towards accepting it as a worthwhile thing to care about.<br><br> * Having expertise and experience in common machine learning techniques (e.g., doing classification with support vector machines) might not improve a person\u2019s expertise in thinking about superintelligence risk at all.<br><br> * A common error seems to be heuristics of the sort of \"at some time in the past, people predicted AGI to exist within 20 years and they were wrong, so people who today predict AGI to exist within 20 years are probably wrong as well\". If we accept that AGI can be reached someday, this heuristic is necessarily faulty. Furthermore, the predictions of today are INFORMED by the faulty predictions of the past and take recent advances into account, so we can expect current predictions to be better than the predictions made 20 years ago.<br><br> * The arguments brought up against worrying about superintelligence risk are often of very low quality, e.g.:<br>The unfounded assertion that human-level intelligence requires some unknown, quasi-mystical ingredient that cannot be replicated in computers. <br>The unfounded assertion that because the evolution of the human brain took millions of years we cannot expect to reach similar capabilities within just a few decades of computer science. <br>Various versions of thinking about AI development as a very complicated endeavor of programming submodules and functionalities as in classical imperative programming (and neglecting that current deep learning approaches are already capable of learning very complex functionalities through rather simple end-to-end learning and backprop). <br>Various forms of whataboutism that mix catastrophic risks with concerns about lower risk categories (e.g. 
traffic accidents caused by autonomous vehicles). <br><br> * I am unsure about the current neglectedness of superintelligence risk. The topic has already been widely disseminated in the past few years. Many of the topics arise naturally as research topics if you want to make AI systems work and make them interpretable. In general, I have the feeling that the Effective Altruism community is somewhat cursed in that it prioritizes focusing on neglected and important areas, but those areas quickly become widely known and cease to be neglected. <br><br> * I think that the risk of involuntary, catastrophic \u2018perverse instantiation\u2019 (the notorious paperclip scenario) might indeed be low even among the low-probability scenarios, and that it should be less emphasized in the discourse of superintelligence risk. What I am more concerned about is that there might be a large number of people and organizations that simply have values that differ strongly from what I (and probably others who are reading this) see as good and desirable, and that the availability of superintelligence might enable such people and organizations to enforce their goals. <br>There is no \u2018coherent extrapolated volition\u2019 of humankind that we could strive towards. It is, and has been for centuries, a struggle between different models of what kind of world is desirable. And if recent events showed us one thing, it is that we cannot expect history to be some kind of one-way elevator towards liberalism, secularism, rationality and a pleasant life for everyone. 
So if you want to promote these values, and you think that advances in artificial intelligence might have a strong impact on how our societies are shaped, prepare for a struggle, rather than expect everyone to agree on them via consensus in some kind of global committee.", "timestamp": "1498557566"}, {"author": "George", "source_link": "https://www.facebook.com/jefftk/posts/882235241382?comment_id=882443084862&reply_comment_id=883553379822", "anchor": "fb-882443084862_883553379822", "service": "fb", "text": "&rarr;&nbsp;I agree with Dr. Samwald. However I feel somewhat bitter about the fact that people who receive funding are often nothing more than empty-worded philosophers with no potential to prevent or even define the problem. Those people are toying with some anthropomorphised Asimov-influenced distorted view of machine learning and artificial intelligence that, in my opinion, leads nowhere. It's true that basic knowledge of how to create a perceptron does not reveal the scope of the problem, but I believe that the necessary prerequisite (but by no means sufficient) for talking about AI dangers is a background in precise sciences. I personally would like to see people unfamiliar with the fundamentals of machine learning banned from any decision making or resource allocation in the area. Analogously to being able to create computer anti-virus software, the skill set required to fight weaponised AI (or weaponised genetics, for that matter) is the very same skill set that is used to create it - being as technically advanced as possible and on top of the research in the area, and there'll be a long way until we really have to consider AI that goes beyond just being a tool and is capable of having autonomous sinister intentions.", "timestamp": "1498822609"}, {"author": "Thomas", "source_link": "https://plus.google.com/110993380381592315078", "anchor": "gp-1498564331323", "service": "gp", "text": "I work in ML, and I think AI risk is worth working on now.  
I would be happy to talk more, or answer questions, if that would help.", "timestamp": 1498564331}, {"author": "Miles", "source_link": "https://www.facebook.com/jefftk/posts/882235241382?comment_id=882504706372", "anchor": "fb-882504706372", "service": "fb", "text": "I work in ML in industry, and have my name on a couple of papers applying ML to other parts of computer science. I think AI risk is a real and difficult problem, but not an urgent one; I'm glad some people are working on it, but I don't feel any great need to join them. It's not the existential risk I'm most worried about, put it that way...", "timestamp": "1498583409"}, {"author": "Brendan", "source_link": "https://plus.google.com/100334584094940516862", "anchor": "gp-1498593807659", "service": "gp", "text": "I'm really interested to see what you find. I'm in the camp of \"AI safety is important but current AI safety research seems worthless\", and I suspect that's common among people who have actually done ML. Philosophy papers about decision theory won't be very useful if we end up training AI. If we end up actually programming AI (and not just training it), then we'll likely know a lot more about how minds work at that time than we do now, so speculation today is probably not worth the effort.", "timestamp": 1498593807}, {"author": "Taymon", "source_link": "https://www.facebook.com/jefftk/posts/882235241382?comment_id=882542101432", "anchor": "fb-882542101432", "service": "fb", "text": "Regarding the Wait But Why article: While it's good that there's an article about this for people with no background whatsoever, it gets a lot of details subtly wrong. 
If you read it, please also read http://lukemuehlhauser.com/a-reply-to-wait-but-why-on.../, which supplies many useful corrections.<br><br>(Jeff may already know this, but I'm adding it for the benefit of anyone else stumbling across this.)", "timestamp": "1498594529"}, {"author": "Benjamin", "source_link": "https://www.facebook.com/jefftk/posts/882235241382?comment_id=882643842542", "anchor": "fb-882643842542", "service": "fb", "text": "Hey Jeff, I'd also add this to your reading list. My understanding is that it's MIRI's most current attempt to explain AI risk, particularly to a more technical audience.<br>https://www.youtube.com/watch?v=dY3zDvoLoao<br>So I might rate it over EY's older writing.<br><br>Also, if you read the rest of this list (especially Superintelligence and Open Phil) then there's not much reason to read the 80k profile, since all our arguments are taken from other sources. We're just summarising.", "timestamp": "1498614326"}, {"author": "Taymon", "source_link": "https://www.facebook.com/jefftk/posts/882235241382?comment_id=882643842542&reply_comment_id=882645234752", "anchor": "fb-882643842542_882645234752", "service": "fb", "text": "&rarr;&nbsp;If, like me, you'd rather read a 7300-word blog post than watch a 73-minute video, then go here: https://intelligence.org/2017/04/12/ensuring/", "timestamp": "1498614494"}, {"author": "Jesse", "source_link": "https://www.facebook.com/jefftk/posts/882235241382?comment_id=882798572462", "anchor": "fb-882798572462", "service": "fb", "text": "The philosophy of AI is still very far from being understood. The epistemological problems outweigh the technical ones, we can relax until we figure out creativity, not just pattern recognition. 
David Deutsch explains here <br>https://aeon.co/.../how-close-are-we-to-creating...", "timestamp": "1498630650"}, {"author": "Alice", "source_link": "https://www.facebook.com/jefftk/posts/882235241382?comment_id=883322682142", "anchor": "fb-883322682142", "service": "fb", "text": "I think that we already live in a world dominated by non-human super-intelligences. Whether it's the global economy, scientific communities, or national bureaucracy these complex systems discover, synthesize, and act upon information. None of these systems possesses consciousness (I hope), but that is not necessary for intelligence. We already live on the other side of singularity. So here's my question for you, how can we train the global economy to be ethical?", "timestamp": "1498744590"}, {"author": "Taymon", "source_link": "https://www.facebook.com/jefftk/posts/882235241382?comment_id=883322682142&reply_comment_id=883323720062", "anchor": "fb-883322682142_883323720062", "service": "fb", "text": "&rarr;&nbsp;I think this argument is misleading at best. See http://slatestarcodex.com/.../things-that-are-not.../", "timestamp": "1498745037"}, {"author": "Ella", "source_link": "https://www.facebook.com/jefftk/posts/882235241382?comment_id=883322682142&reply_comment_id=883356763842", "anchor": "fb-883322682142_883356763842", "service": "fb", "text": "&rarr;&nbsp;I have a similar argument to Alice's. The concern about AI is that it will manipulate us in ways we cannot foresee before we can stop it. We already have tons of large systems that manipulate us in ways we do not fully understand. These include the global economy, our system of political governance, scientific communities, and many others. I think these other systems pose a much more pressing concern than AI. In particular, I think our global economy has created a too-large community of people who have lost their ability to be productive members of society. 
It seems like joblessness due to automation will only continue to expand.<br><br>Whether we call these large systems super-intelligences or something else, AI is not the immediate cause of the NRA using carefully phrased language that just stops shy of explicitly calling for violence. It's not the reason for these headlines from people associated with the administration. Bad jobs/automation/xenophobia are. Point being, other large systems need to be aligned with human interests and are more immediate problems than AI. How can we train them to be ethical, i.e. aligned with human interests?<br><br>https://www.youtube.com/watch?v=XtGOQFf9VCE", "timestamp": "1498757001"}, {"author": "Taymon", "source_link": "https://www.facebook.com/jefftk/posts/882235241382?comment_id=883322682142&reply_comment_id=883438929182", "anchor": "fb-883322682142_883438929182", "service": "fb", "text": "&rarr;&nbsp;Again, this isn't relevant to the topic of catastrophic risk from transformative AI.", "timestamp": "1498778139"}, {"author": "Ella", "source_link": "https://www.facebook.com/jefftk/posts/882235241382?comment_id=883322682142&reply_comment_id=883478714452", "anchor": "fb-883322682142_883478714452", "service": "fb", "text": "&rarr;&nbsp;Sure it is! I'm saying that the risk from the possibility of transformative AI pales in comparison to the many problems society is about to encounter from global economic and political changes. However, similar approaches can be taken to solving those economic and political challenges: namely, how do we make those systems more ethical? 
So if the concern is over what to devote resources to, devote your resources to one of the following: global warming, automation, the connection between xenophobia and economic collapse, political extremism, etc.", "timestamp": "1498782458"}, {"author": "Neil", "source_link": "https://www.facebook.com/jefftk/posts/882235241382?comment_id=883322682142&reply_comment_id=884288217202", "anchor": "fb-883322682142_884288217202", "service": "fb", "text": "&rarr;&nbsp;None of these things are superintelligences, and it's unlikely that any of them will literally wipe out every human on the planet (with the possible exception of global thermonuclear war). See the \"things that are not superintelligences\" link...", "timestamp": "1499017854"}, {"author": "Greg", "source_link": "https://www.facebook.com/jefftk/posts/882235241382?comment_id=883358615132", "anchor": "fb-883358615132", "service": "fb", "text": "This is exactly what I wanted someone to look into, but I am not experienced enough with machine learning or informed enough in the EA community to know what the big deal is with AI risk (I do understand it, I just haven't spent a LOT of time thinking about it). Very curious to hear your assessment, Jeff!", "timestamp": "1498757896"}, {"author": "Mark", "source_link": "https://www.facebook.com/jefftk/posts/882235241382?comment_id=883955004962", "anchor": "fb-883955004962", "service": "fb", "text": "I'm a little late to the party but I recommend you chat with someone who studies complexity.  
Proofs in that area seem very relevant and most people who talk about this stuff don't seem aware of them.", "timestamp": "1498877671"}, {"author": "Jeff&nbsp;Kaufman", "source_link": "https://www.facebook.com/jefftk/posts/882235241382?comment_id=883955004962&reply_comment_id=884012399942", "anchor": "fb-883955004962_884012399942", "service": "fb", "text": "&rarr;&nbsp;Sorry, could you elaborate?", "timestamp": "1498907555"}, {"author": "Louis", "source_link": "https://www.facebook.com/jefftk/posts/882235241382?comment_id=883955004962&reply_comment_id=884070373762", "anchor": "fb-883955004962_884070373762", "service": "fb", "text": "&rarr;&nbsp;I can sort of see the relevance, but I've never seen it as that relevant: you don't need to optimally solve high complexity problems, you just need to get better approximations than e.g. humans.  I don't see complexity bounds as serious obstacles to recursive self improvement etc.", "timestamp": "1498928135"}, {"author": "Mark", "source_link": "https://www.facebook.com/jefftk/posts/882235241382?comment_id=883955004962&reply_comment_id=884079086302", "anchor": "fb-883955004962_884079086302", "service": "fb", "text": "&rarr;&nbsp;I suspect someone who studies complexity would see it differently.  I'm not an expert, I have taken a few mind-bending grad courses though.<br><br>Things that complexity theory studies are formal games, bounds of computation and proof, very crazy models of computation and what happens when randomness comes into the picture.  They also have a bit more rigor than I tend to see in these conversations.  For instance, games against arbitrary computation-bound adversaries seem really relevant.  
<br><br>For a specific example, http://mathworld.wolfram.com/RabinsCompressionTheorem.html seems relevant to me (essentially there are lots of arbitrarily hard things to compute).", "timestamp": "1498931408"}, {"author": "Rob", "source_link": "https://www.facebook.com/jefftk/posts/882235241382?comment_id=883955004962&reply_comment_id=884088262912", "anchor": "fb-883955004962_884088262912", "service": "fb", "text": "&rarr;&nbsp;The usual reply is https://www.gwern.net/Complexity%20vs%20AI", "timestamp": "1498935156"}, {"author": "Jeff&nbsp;Kaufman", "source_link": "https://www.facebook.com/jefftk/posts/882235241382?comment_id=883955004962&reply_comment_id=884097309782", "anchor": "fb-883955004962_884097309782", "service": "fb", "text": "&rarr;&nbsp;Thanks Rob!<br><br>@Mark Gwern's post looks pretty reasonable to me (though I don't like the worm parable).  There are a lot of reasons I don't think complexity bears on the arguments about superintelligence, which he covers pretty thoroughly.  The main one from my perspective is that complexity is about optimal solutions, and human-style thought is all approximations.", "timestamp": "1498937506"}, {"author": "Mark", "source_link": "https://www.facebook.com/jefftk/posts/882235241382?comment_id=883955004962&reply_comment_id=884109560232", "anchor": "fb-883955004962_884109560232", "service": "fb", "text": "&rarr;&nbsp;Asymptotic complexity is only a part of complexity research.  An expert in the field could also walk you through the well-known caveats.  Everyone knows they are working with models, but I do think there is real insight behind them.  I think getting that very abstract perspective would be helpful, and it would counter the argument that ML researchers are too focused on specific systems, not familiar enough with theory or whatever.<br><br>Maybe if I get some time in the next few days I can read the article, but I'm super busy for the next week and can't really commit to it.", "timestamp": "1498939871"}]}