{"items": [{"author": "Bronwyn", "source_link": "https://www.facebook.com/jefftk/posts/886169437222?comment_id=886172146792", "anchor": "fb-886172146792", "service": "fb", "text": "\"Specifically, when trying to replicate results, he's often found that they depend on a bunch of parameters being in just the right range, and without that the systems don't perform nearly as well.\"<br><br>I think this is more true than many people realize for just about any real-life ML problem, and it extends to characteristics of the data and problem formulation just as much.", "timestamp": "1499701943"}, {"author": "James", "source_link": "https://www.facebook.com/jefftk/posts/886169437222?comment_id=886202565832", "anchor": "fb-886202565832", "service": "fb", "text": "At the very least AI safety today could look at what we should do politically if in the future we come to the conclusion that AI is extremely dangerous and AI research should be slowed down or stopped.  We could also be preparing politicians and the public for this possibility.  AI safety could also consist of accelerating efforts to understand the genetics of human intelligence so we can start to genetically engineer super human intelligence that might have a better chance of crafting friendly AI.", "timestamp": "1499711102"}, {"author": "Mad", "source_link": "https://www.facebook.com/jefftk/posts/886169437222?comment_id=886202565832&reply_comment_id=886230609632", "anchor": "fb-886202565832_886230609632", "service": "fb", "text": "&rarr;&nbsp;This was almost going somewhere until ... umm... anyway. There's better futurism in genetics than this, IMHO.<br><br>Y'know, this all reminds me a bit of Bruce Sterling's Long Now talk. Which is from 2004 and yet still relevant \u2013 his thoughts age well, I suppose. I love this talk. (I recommend actually listening to it, the tone of voice can be key.)\nhttp://longnow.org/.../the-singularity-your-future-as-a.../", "timestamp": "1499721156"}, {"author": "Bruce", "source_link": "https://www.facebook.com/jefftk/posts/886169437222?comment_id=886269272152", "anchor": "fb-886269272152", "service": "fb", "text": "I think it's worth noting that we have AI safety issues right now that need to be dealt with and studied.  We have people designing deep networks as part of autonomous car ecosystems, machines that can easily kill people if they malfunction.  We still don't have a good handle on whether a network is well-trained, or has good coverage of the possible scenarios other than hoping we have a decent validation set.<br><br>Working to develop our understanding of the safety of existing deep networks for limited tasks **should** lay some foundations for understanding the systems that might underlie a more general AI.  I don't think we have to assume that safety considerations for general AI have nothing to do with safety considerations for more constrained AI.   The only way that is true is if general AI is completely divorced from what comes before.", "timestamp": "1499732966"}, {"author": "Jeff&nbsp;Kaufman", "source_link": "https://www.facebook.com/jefftk/posts/886169437222?comment_id=886269272152&reply_comment_id=886362136052", "anchor": "fb-886269272152_886362136052", "service": "fb", "text": "&rarr;&nbsp;@Bruce: That makes sense.  What do you think of the Concrete Problems in AI Safety paper?  https://arxiv.org/pdf/1606.06565.pdf", "timestamp": "1499782865"}, {"author": "Jeff&nbsp;Kaufman", "source_link": "https://www.facebook.com/jefftk/posts/886169437222?comment_id=886269272152&reply_comment_id=886362285752", "anchor": "fb-886269272152_886362285752", "service": "fb", "text": "&rarr;&nbsp;@Roko: \"there are arguments against this\"<br><br>That isn't helpful.\nIf you have arguments against something, please give them, or at least link to them.", "timestamp": "1499782928"}, {"author": "Jeff&nbsp;Kaufman", "source_link": "https://www.facebook.com/jefftk/posts/886169437222?comment_id=886269272152&reply_comment_id=886368508282", "anchor": "fb-886269272152_886368508282", "service": "fb", "text": "&rarr;&nbsp;@Roko: Please figure out another system for personal reminders, it's pretty noisy to have that sort of thing here.  The way you phrased it also looks very dismissive.", "timestamp": "1499785550"}, {"author": "Dario", "source_link": "https://www.facebook.com/jefftk/posts/886169437222?comment_id=886269272152&reply_comment_id=886397210762", "anchor": "fb-886269272152_886397210762", "service": "fb", "text": "&rarr;&nbsp;Roko Mijic: I don't know to what extent you're responding to Bruce vs addressing people like me/Paul, but it's worth pointing out that in my case at least, I'm interested in studying the conceptual issues underlying safety through the lens of current systems, which is different from trying to make current systems safe.   The former approach is quite capable of handling e.g. treacherous turn issues, and in a general way rather than in a hacky way that obviously won't generalize.  That said, there is not zero overlap between the two.\nI think there's a good chance that studying goals and motivations in a principled way could also have practical benefits in making short term systems more reliable (in addition to just enabling us to do new ML tasks).", "timestamp": "1499796657"}, {"author": "Jacob", "source_link": "https://www.facebook.com/jefftk/posts/886169437222?comment_id=886269272152&reply_comment_id=887974140582", "anchor": "fb-886269272152_887974140582", "service": "fb", "text": "&rarr;&nbsp;+1 to what Dario said, this matches my goals (though I also see value in some amount of less concretely grounded research, for diversification purposes if nothing else).", "timestamp": "1500272817"}, {"author": "Dario", "source_link": "https://www.facebook.com/jefftk/posts/886169437222?comment_id=886269272152&reply_comment_id=887974380102", "anchor": "fb-886269272152_887974380102", "service": "fb", "text": "&rarr;&nbsp;Yep, my attitude on this is \"let a thousand flowers bloom\".  I'm pretty skeptical of most of MIRI's work but have always favored giving them enough funding to pursue the ideas they consider most promising.", "timestamp": "1500273007"}, {"author": "Andrew", "source_link": "https://www.facebook.com/jefftk/posts/886169437222?comment_id=886303518522", "anchor": "fb-886303518522", "service": "fb", "text": "I &lt;3 Michael Littman.", "timestamp": "1499747347"}, {"author": "Bob", "source_link": "https://www.facebook.com/jefftk/posts/886169437222?comment_id=886482839162", "anchor": "fb-886482839162", "service": "fb", "text": "on my phone, haven't digested full thread, but have you seen the Wait But Why article on Neuralink? (note i didn't ask do you know about Neuralink)", "timestamp": "1499812086"}]}