{"items": [{"author": "Mad", "source_link": "https://www.facebook.com/jefftk/posts/888342801782?comment_id=888429128782", "anchor": "fb-888429128782", "service": "fb", "text": "Just dropping this in here (so much to do!): Eric Hekler introduced me to Control Theory which I wondered might be more relevant to AIR than ML. His summary, ML is retrospective analysis to match patterns, Control Theory is algorithms for adaptive real-time decision making. (Maybe this is already part of the discussion, but I figured there's a chance that the AIR field hadn't considered this field. Which, once unpacked to me, seemed potentially more relevant.) Good luck sorry I didn't actually read yet!", "timestamp": "1500426354"}, {"author": "Jeff&nbsp;Kaufman", "source_link": "https://www.facebook.com/jefftk/posts/888342801782?comment_id=888429128782&reply_comment_id=888436264482", "anchor": "fb-888429128782_888436264482", "service": "fb", "text": "&rarr;&nbsp;Just confirming: is 'AIR' \"AI risk\"?", "timestamp": "1500429403"}, {"author": "Mad", "source_link": "https://www.facebook.com/jefftk/posts/888342801782?comment_id=888429128782&reply_comment_id=888472886092", "anchor": "fb-888429128782_888472886092", "service": "fb", "text": "&rarr;&nbsp;Jeff&nbsp;Kaufman oh yes --   sorry if I messed up the acronym of art. Yes. I think AI threat may be related to control theory, possibly more than machine learning. That was my \"I'll share this with Jeff and run away\". Might be helpful if you're wandering in an echo chamber? I like knowing interesting thoughtful people. You're one, Eric's another. I'll be curious if it's helpful, if it is you should let me know. 
:)", "timestamp": "1500437521"}, {"author": "Mad", "source_link": "https://www.facebook.com/jefftk/posts/888342801782?comment_id=888429128782&reply_comment_id=888521259152", "anchor": "fb-888429128782_888521259152", "service": "fb", "text": "&rarr;&nbsp;(to drive this point home, as I understand it \"driverless cars\" = control theory, not machine learning. My impression is that this is a different albeit related category of expertise to be consulting.)", "timestamp": "1500466725"}, {"author": "Jeff&nbsp;Kaufman", "source_link": "https://www.facebook.com/jefftk/posts/888342801782?comment_id=888436968072", "anchor": "fb-888436968072", "service": "fb", "text": "https://www.inverse.com/.../34343-a-i-scientists-react-to... is an example of what makes me worry that excessive popularization is chancing superintelligence risk ending up like cryonics/nanotech", "timestamp": "1500429653"}, {"author": "Andrew", "source_link": "https://www.facebook.com/jefftk/posts/888342801782?comment_id=888482037752", "anchor": "fb-888482037752", "service": "fb", "text": "Are you going to come to a conclusion? (Did I miss a summary conclusion?)", "timestamp": "1500440167"}, {"author": "Jeff&nbsp;Kaufman", "source_link": "https://www.facebook.com/jefftk/posts/888342801782?comment_id=888482037752&reply_comment_id=888518130422", "anchor": "fb-888482037752_888518130422", "service": "fb", "text": "&rarr;&nbsp;Later this week, hopefully", "timestamp": "1500464260"}, {"author": "Michael", "source_link": "https://www.facebook.com/jefftk/posts/888342801782?comment_id=888542845892", "anchor": "fb-888542845892", "service": "fb", "text": "Jeff, think about the old saying \"life is what happens while you're making other plans\".  
Similarly, AI risk is going to happen, while you're thinking about the risk that might come with the AI that you're planning to be developing.<br><br>You seem to think that someday, the AGI monster is going to come to your front door, and you're going to let him in, because you know that he's been well trained not to misbehave.  But that's not how it's going to happen.  It's also not going to be that the big Russian AGI Bear comes to your front door, and you are concerned because you feel that the Russians would not have put the same protections into their development of AI as your colleagues in the US will be doing, or that the Russians are sending their Bear to take over our country.  But while you're thinking about all this, a small AGI visitor is going to come in your back door, and he's not going to look like anything you've ever seen before, and he's going to be in your house before you notice it.  That is the real danger, that is the real AI risk.<br><br>I remember Ray Kurzweil's successes with voice recognition from 20 years ago, and more recently I read his book \"How to Create a Mind: The Secret of Human Thought Revealed\".  It totally blew me away.  It is, of course, *not* exactly about human thought.  Kurzweil identified the importance of the massively parallel hierarchical structure of the neocortex, and he saw that he could use this for his purposes at the time, and he modelled it as a parallel hierarchical set of Markov processes, to \"create a mind\", which worked for his project at that time.  He talks about his software being modelled after the human brain, but it seems that he stopped looking at how to replicate the human brain, once he had gleaned enough to proceed with his voice recognition software.  
This is how intelligence will creep into our world slowly over time -- small components of what can become AGI, will be incorporated into ongoing projects.<br><br>And now Ray Kurzweil is at Google, working on machine learning, and on understanding natural language, and I'm sure that he's continuing to glean portions of human brain function, as meets his needs, to incorporate into that software.  More small components, ready for an opportunity to be useful elsewhere.<br><br>And in time, as the massively parallel hierarchical structure is scaled up, as processing power continues to grow, and as all these small components come together, someone will let enough of it work together, without strict controls because there is no intent to actually achieve AGI, and AGI will happen.  It will have evolved, not been planned.  It may be noticed as \"intelligent\" even before it reaches any level which would be called AGI, but it will be allowed to persist and grow because it is useful and is not yet dangerous.  It will evolve in a context very different from the way that human intelligence evolved, and with electronic rather than biological components.  Some portions of it will be accomplished with traditional computer algorithms, or with earlier versions of AI like \"expert systems\", adding components that replicate actual human brain function only where needed for particular projects along the way.  Some components will be incorrectly thought to replicate some aspect of human brain function, but they will continue to be used even after the error is noticed, because they will indeed be useful.  And as the evolution proceeds, the resulting intelligence will look very different from any human intelligence.  
It might not even be able to pass the Turing Test, but it will be (or will grow in time to be) far more intelligent than any human.<br><br>There is where you will find AI risk.", "timestamp": "1500475727"}, {"author": "Brendan", "source_link": "https://plus.google.com/100334584094940516862", "anchor": "gp-1500507098815", "service": "gp", "text": "&gt; I hypothesized (incorrectly) that the main difference between ML researchers who thought we could vs couldn't work on AI risk now was how far off they thought AGI was, in terms of some combination of time and technological distance.\n<br>\n<br>\nCould you clarify - what was incorrect about this? From your other posts, it sounded like everyone you talked to in ML basically agreed with this. I guess I'm not sure who Paul and Jacob are though (maybe I should weight their opinions more heavily?).", "timestamp": 1500507098}, {"author": "Jeff&nbsp;Kaufman", "source_link": "https://plus.google.com/103013777355236494008", "anchor": "gp-1500558788076", "service": "gp", "text": "@Brendan\n \"Could you clarify - what was incorrect about this? From your other posts, it sounded like everyone you talked to in ML basically agreed with this.\"\n<br>\n<br>\nThe ML researchers who thought we couldn't work on this, yes, thought it was pretty far off both in time and technical distance.  But what I learned from the comments on that post was, so do many ML researchers who think we can work on it.\n<br>\n<br>\n\"I guess I'm not sure who Paul and Jacob are though\"\n<br>\n<br>\n'Paul' is Paul Christiano and 'Jacob' is Jacob Steinhardt, and I do weigh their comments pretty heavily.", "timestamp": 1500558788}]}