{"items": [{"author": "Jess", "source_link": "https://www.facebook.com/jefftk/posts/900244884912?comment_id=900258218192", "anchor": "fb-900258218192", "service": "fb", "text": "Would you be willing to elaborate on \"I'm generally very skeptical about theoretical work being able to advance engineering fields in ways other than consolidating knowledge,\" a bit? This has sort of been on my mind recently.", "timestamp": "1505495406"}, {"author": "Jeff&nbsp;Kaufman", "source_link": "https://www.facebook.com/jefftk/posts/900244884912?comment_id=900258218192&reply_comment_id=900263173262", "anchor": "fb-900258218192_900263173262", "service": "fb", "text": "&rarr;&nbsp;I'm not an expert in this at all, but I kind of see theoretical work as being able to do two things for engineering:<br><br>* Sometimes a field can do a lot of stuff but doesn't really know how or why it all works.  Like, people have tried a lot of things, have some ability to predict whether new things will work, mostly proceed via trial and error.  A theoretical approach can figure out the patterns behind it, and put the field in a much better place for future progress by reducing what people need to know in order to be productive.  Example: asymptotic analysis of programs for time/space/etc and big-O notation.<br><br>* Sometimes people playing around with theory can figure out interesting new things that can be applied in various ways.  But it looks to me like this approach yields more or less random dividends, where you might get something useful out but you don't know in advance where it will apply.  Example: elliptic curve cryptography was a relatively straightforward [1] application of existing math theory but the people who had done that math were just trying to explore interesting ideas and not build cryptographic systems.<br><br>When I've seen theoretical work that is intended to solve engineering problems, it generally hasn't.  
There are enormous numbers of attempts to handle general computer security this way, and it doesn't seem to work well.  We get things like a theoretically secure system that has to have critical security guarantees dramatically relaxed in order to be at all useful, or where a system is only secure if some unreasonable assumptions hold (like where X is proved secure assuming Y that it depends on is secure, when Y is not only not proved secure but actually pretty buggy).  But smart people do still see promise in this direction, and there continues to be a lot of work there.<br><br>What MIRI is trying to do with HRAD doesn't look to me like it has any successful analogues.  It's not how we normally do engineering, and it's not how theoretical work normally makes progress either.  So this makes me pessimistic about it.<br><br>[1] In that it was independently suggested by two different people https://en.wikipedia.org/wiki/Elliptic_curve_cryptography...", "timestamp": "1505496735"}, {"author": "Jess", "source_link": "https://www.facebook.com/jefftk/posts/900244884912?comment_id=900258218192&reply_comment_id=900435482952", "anchor": "fb-900258218192_900435482952", "service": "fb", "text": "&rarr;&nbsp;Jeff&nbsp;Kaufman Your description of \"consolidation of knowledge\" seems to account for one objection I was going to make, so no problem there. I'm seeing some pretty exciting examples of exactly this happening in the theory group at UCSD right now, where folks are interested in giving an accounting for the performance of some boosting algorithm with parameters tweaked in a particular way. <br>I still want to object that lattice cryptography is a good example of theory directly contributing to engineering, since (I think) the problems underpinning lattice crypto mostly came out of a theoretical interest in understanding average-case vs. worst-case complexity of various hard problems, though I'm not overwhelmingly confident this is true. 
A few years later, adding some algebraic structure to these problems gave an immediate, significant speed-up to lattice algorithms that pushed some of these schemes towards practicality and/or competitiveness. Cryptography may be a unique exception to the rule though, since it's solidly at the intersection of theory and engineering, but it's also the only field I know anything about, so it's not unreasonable to me to think that you might discover other such examples in other domains. I suppose the initial breakthrough was also likely not intended to apply directly to cryptography, so maybe all but the \"algebraic structure\" piece of this example doesn't really hold up as a counter to your argument.<br>Also, in defense of your position, crypto theory folks can come up with security reductions all day long, but in the end, I think most people involved in crypto standardization are going to set parameters based on the best known algorithms attacking the underlying problems, not the parameters for which we have proofs (though I see debates about this happening semi-regularly). <br>Anyway, I found reading your posts to be really helpful since I've recently been trying to figure out how excited I am about various AI safety research programs as well, and I think you make good points contrasting theoretical work \"around\" an engineering problem with theoretical work \"on\" an engineering problem.", "timestamp": "1505589266"}, {"author": "Michael", "source_link": "https://www.facebook.com/jefftk/posts/900244884912?comment_id=900273402762", "anchor": "fb-900273402762", "service": "fb", "text": "Many prior advances in the AI field have been implementations of theory, with that theory being targeted at AI, though not necessarily at solving real world practical problems that are now handled by what used to be or is still considered AI.  
Multiple theories about speech analysis come to mind, as examples from one sub-field, and others from natural language interpretation.<br><br>So, I'm not necessarily disagreeing with you at the broadest level, but for at least some fields I think that theoreticians have deliberately and sometimes successfully laid the foundations of later engineering work.  <br><br>Am I misunderstanding you?", "timestamp": "1505500054"}, {"author": "opted out", "source_link": "#", "anchor": "unknown", "service": "unknown", "text": "this user has requested that their comments not be shown here", "timestamp": "1505515887"}, {"author": "opted out", "source_link": "#", "anchor": "unknown", "service": "unknown", "text": "&rarr;&nbsp;this user has requested that their comments not be shown here", "timestamp": "1505516011"}, {"author": "Benjamin", "source_link": "https://www.facebook.com/jefftk/posts/900244884912?comment_id=900332743842", "anchor": "fb-900332743842", "service": "fb", "text": "Hey Jeff, just a quick thought that your approach here seems to be \"ask researchers whether this research seems useful\" which I'd put more in the tractability category. To me, the case for AI safety is more about how it's the most neglected xrisk - i.e. it's about scale and neglectedness from the ITN framework.", "timestamp": "1505527845"}, {"author": "Brian", "source_link": "https://plus.google.com/114156500057804356924", "anchor": "gp-1505578740859", "service": "gp", "text": "Seems like there are a lot of short-term real-world security issues that involve machine learning in some way. Consider the problem of de-biasing machine learning datasets so they don't pick up human biases. Then consider what happens when hackers do the opposite...", "timestamp": "1505578740"}, {"author": "Josh", "source_link": "https://www.facebook.com/jefftk/posts/900244884912?comment_id=901351666912", "anchor": "fb-901351666912", "service": "fb", "text": "http://www.smbc-comics.com/comic/ai reminded me of this post. 
:^)", "timestamp": "1506037023"}]}