{"items": [{"author": "Jess", "source_link": "https://www.facebook.com/jefftk/posts/888808283952?comment_id=888809341832", "anchor": "fb-888809341832", "service": "fb", "text": "There will only be natural re-balancing insofar as the various approaches are interdependent. (Underinvestment in one approach means that  approach eventually becomes relatively cheap and easy due to parallel advances using other approaches.) Safety doesn't have to be like this. Likewise, if we had failed to invest in nuclear safeguard research, it wouldn't have naturally filled in because we were pursuing weapons research.", "timestamp": "1500571894"}, {"author": "Jeff&nbsp;Kaufman", "source_link": "https://www.facebook.com/jefftk/posts/888808283952?comment_id=888809341832&reply_comment_id=888816657172", "anchor": "fb-888809341832_888816657172", "service": "fb", "text": "&rarr;&nbsp;I think it would apply pretty strongly to the approach outlined in: http://www.jefftk.com/p/conversation-with-dario-amodei<br><br>But I'm doubtful that the rebalancing effect is as strong as presented.", "timestamp": "1500574026"}, {"author": "Daniel", "source_link": "https://www.facebook.com/jefftk/posts/888808283952?comment_id=888824606242", "anchor": "fb-888824606242", "service": "fb", "text": "Almost no one takes Oppenheimer's maxim seriously enough: \"It is a profound and necessary truth that the deep things in science are not found because they are useful; they were found because it was possible to find them.\"", "timestamp": "1500578064"}, {"author": "Dario", "source_link": "https://www.facebook.com/jefftk/posts/888808283952?comment_id=889014306082", "anchor": "fb-889014306082", "service": "fb", "text": "I entirely agree with this researcher that core progress in the field is largely constrained by hardware;  in fact it's a big reason why I suspect AGI may not be that far away (progress in hardware is measured in orders of magnitude, so looking at how small a fraction of human capabilities we can replicate can be very misleading).  I also agree that some balancing effect does exist.  But, even for a given amount of hardware, in my experience lots of stuff doesn't get done, or stays a constant amount behind rather than catching up.  The \"learning from human preferences\" work could easily have been done 4 years ago.  Another example is safe exploration, where there's a huge amount of pre-deep-learning work but no one has really bothered to take a modern approach until very recently.  I agree this areas won't fall arbitrarily behind, but my fear is that they'll remain perpetually 4 years behind, which seems like a bad situation to me.  Two things that could happen are (1) they catch up once they become essential for the open-world problems we'll be facing soon, or (2) more advanced versions of ML algorithms end up implicitly solving these problems.  But, going back to my earlier comment, this is a risk faced by all ML research, not just safety/alignment.  It's always possible someone could make a more general algorithm that subsumes what you're doing; the main remedy is to know the whole field really well so that maybe you're the one who develops that more general algorithm.", "timestamp": "1500657181"}, {"author": "Dario", "source_link": "https://www.facebook.com/jefftk/posts/888808283952?comment_id=889014306082&reply_comment_id=889053183172", "anchor": "fb-889014306082_889053183172", "service": "fb", "text": "&rarr;&nbsp;Ah, I forgot to add, the reference to \"data\" as a limiter is something I actually don't agree with.  
It's correct for supervised learning, but in RL \"data\" takes on a different meaning -- it's the environment.  And building environments isn't a matter of acquiring loads of supervised data; it's more important that the environment be the right one for testing diverse skills, which could be quite a conceptually simple environment.", "timestamp": "1500671653"}, {"author": "Rosa", "source_link": "https://www.facebook.com/jefftk/posts/888808283952?comment_id=889014306082&reply_comment_id=889286989622", "anchor": "fb-889014306082_889286989622", "service": "fb", "text": "&rarr;&nbsp;Isn't building environments that are \"realistic\" also pretty difficult if you don't have learning agents that are capable of interacting with the natural environments that we get for free and that non-artificial intelligent agents learn from?", "timestamp": "1500766949"}, {"author": "Dario", "source_link": "https://www.facebook.com/jefftk/posts/888808283952?comment_id=889014306082&reply_comment_id=889322348762", "anchor": "fb-889014306082_889322348762", "service": "fb", "text": "&rarr;&nbsp;Definitely building realistic environments is a challenge.  What I'm saying is it's a challenge that doesn't advantage the same set of players as data limitations.  What's needed is cleverness in environment design, not sheer scale or the access to users that comes with being a large consumer-facing company.  In this way it's a lot like the innovation in the algorithms themselves.", "timestamp": "1500778979"}, {"author": "Kaj", "source_link": "https://www.facebook.com/jefftk/posts/888808283952?comment_id=889409549012", "anchor": "fb-889409549012", "service": "fb", "text": "The discussion in http://www.cell.com/neuron/fulltext/S0896-6273(17)30509-3 seems like possible counterevidence to the idea that \"we could have had today's ML in 1965-1975 if the hardware had been right\", in that the paper argues that a number of ML insights have come from neuroscience that's more recent than that.", "timestamp": "1500822036"}, {"author": "Jess", "source_link": "https://www.facebook.com/jefftk/posts/888808283952?comment_id=889409549012&reply_comment_id=889413531032", "anchor": "fb-889409549012_889413531032", "service": "fb", "text": "&rarr;&nbsp;I'd really like to hear an ML expert comment on this. Without having read or being able to assess the paper, I'm pretty skeptical. I conjecture that most examples are after-the-fact similarities identified mostly to publicize the results rather than to actually inform research. (A handful of examples would not be counterevidence unless they actually represented more than niche insights.) This sort of self-promotion with dubious \"interdisciplinary\" connections is a problem within physics.", "timestamp": "1500823932"}, {"author": "Dario", "source_link": "https://www.facebook.com/jefftk/posts/888808283952?comment_id=889409549012&reply_comment_id=889414648792", "anchor": "fb-889409549012_889414648792", "service": "fb", "text": "&rarr;&nbsp;I would take a more holistic view.  Once the hardware is present, it allows many more experiments to be run at a scale that makes it easier to tell what works and what doesn't.
If we had today's hardware in 1965-1975, I suspect it wouldn't have taken long to run through today's major neural net architectures (they aren't very complicated, after all) and discover by trial and error which ones work.", "timestamp": "1500823985"}, {"author": "Dario", "source_link": "https://www.facebook.com/jefftk/posts/888808283952?comment_id=889409549012&reply_comment_id=889418101872", "anchor": "fb-889409549012_889418101872", "service": "fb", "text": "&rarr;&nbsp;Jess: I do think we have taken some inspiration from neuroscience, but it tends to be at a fairly abstract conceptual level, and rarely involves ideas that we couldn't have thought of some other way.  However, as a former neuroscientist, I (and others, apparently including Demis Hassabis) do find it a useful source of inspiration.  But I wouldn't cite it as relevant evidence in a debate over hardware vs. software; I don't think it bears much on that question.", "timestamp": "1500824114"}]}