{"items": [{"author": "Rob", "source_link": "https://www.facebook.com/jefftk/posts/886232206432?comment_id=886247455872", "anchor": "fb-886247455872", "service": "fb", "text": "\"We need to build a theoretical foundation for provably aligned AGI.\" - I don't know what 'provably aligned' would involve, and I don't think e.g. formal verification is a central part of what MIRI sees as the difficulty with AI alignment. Closer to MIRI's perspective, I think, is that we think 'get a medium-level understanding how AGI systems allocate cognitive resources to solve problems' is likely to be important and tractable, and isn't likely to come for free with the development of the first AGI systems.<br><br>By 'medium-level understanding' I'm roughly thinking of better tools and formalisms for modeling, e.g., properties of the sequence of questions AlphaGo actually answers (/ cognitive tasks it completes) in the course of determining the value of a board state. I'd contrast this with the low-level system understanding associated with traditional transparency work (e.g., better tools for visualizing NN weights) and the top-level description of how the system as a whole operates (e.g., 'it gets there in some fashion using Monte Carlo tree search and a combination of value networks and policy networks'; cf. https://agentfoundations.org/item?id=1220).", "timestamp": "1499725691"}, {"author": "Jeff&nbsp;Kaufman", "source_link": "https://www.facebook.com/jefftk/posts/886232206432?comment_id=886247455872&reply_comment_id=886261472782", "anchor": "fb-886247455872_886261472782", "service": "fb", "text": "&rarr;&nbsp;What do you think of Deep Dream style transparency?", "timestamp": "1499730638"}, {"author": "Jeff&nbsp;Kaufman", "source_link": "https://www.facebook.com/jefftk/posts/886232206432?comment_id=886247455872&reply_comment_id=886262151422", "anchor": "fb-886247455872_886262151422", "service": "fb", "text": "&rarr;&nbsp;Rob This sounds pretty different from what I thought people at MIRI thought, which I got from reading things like https://intelligence.org/2017/04/12/ensuring/<br><br>I have yet tried to schedule a conversation with someone at MIRI; seems like I should get to that! Who might be good to talk to?", "timestamp": "1499730987"}, {"author": "Jeff&nbsp;Kaufman", "source_link": "https://www.facebook.com/jefftk/posts/886232206432?comment_id=886247455872&reply_comment_id=886262680362", "anchor": "fb-886247455872_886262680362", "service": "fb", "text": "&rarr;&nbsp;(My impression of what MIRI thinks is also, to a much greater extent than everything else I'm writing about, contributed to by random things I've happened to come across over the years, having lots of friends who have cared a lot about MIRI's mission and approach. 
This means I'm likely to have mishearings, misinterpretations, and past claims feeding into my mental model in a way I probably don't for other groups.)", "timestamp": "1499731247"}, {"author": "Jeff&nbsp;Kaufman", "source_link": "https://www.facebook.com/jefftk/posts/886232206432?comment_id=886247455872&reply_comment_id=886263214292", "anchor": "fb-886247455872_886263214292", "service": "fb", "text": "&rarr;&nbsp;I was also going a lot on https://intelligence.org/.../new-technical-research.../", "timestamp": "1499731528"}, {"author": "Jeff&nbsp;Kaufman", "source_link": "https://www.facebook.com/jefftk/posts/886232206432?comment_id=886247455872&reply_comment_id=887460704512", "anchor": "fb-886247455872_887460704512", "service": "fb", "text": "&rarr;&nbsp;Rob: https://intelligence.org/.../updates-to-the-research.../ and <br>https://agentfoundations.org/item?id=1470 seem to cut the other way: much less work expected on aligning current ML systems, partly because of downward updates on tractability.", "timestamp": "1500121772"}, {"author": "Will", "source_link": "https://www.facebook.com/jefftk/posts/886232206432?comment_id=886247455872&reply_comment_id=887725224412", "anchor": "fb-886247455872_887725224412", "service": "fb", "text": "&rarr;&nbsp;\"'get a medium-level understanding of how AGI systems allocate cognitive resources to solve problems' is likely to be important and tractable, and isn't likely to come for free with the development of the first AGI systems.\"<br><br>I'm very interested in alternate allocation of resources in computer systems (currently exploring a market-based allocation). Can I find out more about MIRI's work?", "timestamp": "1500198987"}, {"author": "James", "source_link": "https://www.facebook.com/jefftk/posts/886232206432?comment_id=886275923822", "anchor": "fb-886275923822", "service": "fb", "text": "ML researcher with a background in NLP/linguistics -- I like this categorization and I\u2019m solidly in camp II.<br>My main basis for this is that (subjectively) the behaviors, capabilities, and \u201cmental processes\u201d of AI systems are increasingly convergent with corresponding scientific beliefs about human/animal cognition, and it seems plausible that AGI could emerge from that context.", "timestamp": "1499735242"}, {"author": "Mad", "source_link": "https://www.facebook.com/jefftk/posts/886232206432?comment_id=886341242922", "anchor": "fb-886341242922", "service": "fb", "text": "These summaries are interesting, thanks! The aura of a memetic echo chamber makes it hard for me to digest these materials.<br><br>The post and arguments around OpenAI are interesting to me (and I would agree \"open\" is not always correct) \u2013 in the sense that I can see there's investment and energy \u2013 and the current argument/concern is actually about where to direct what seems to be already-significant resources. (Is this a fair takeaway?)<br><br>I think I'd be curious about how much self-skepticism is encountered in various strains of thought. I'm inclined to be more mistrustful of viewpoints that seem incapable of self-skepticism.", "timestamp": "1499774678"}, {"author": "Tobias", "source_link": "https://www.facebook.com/jefftk/posts/886232206432?comment_id=886341242922&reply_comment_id=886933007022", "anchor": "fb-886341242922_886933007022", "service": "fb", "text": "&rarr;&nbsp;I think there's significant disagreement over whether substantial resources are allocated to the problem. See e.g. 
here:<br>http://slatestarcodex.com/2017/07/08/two-kinds-of-caution/", "timestamp": "1499958800"}, {"author": "Jeff&nbsp;Kaufman", "source_link": "https://www.facebook.com/jefftk/posts/886232206432?comment_id=886341242922&reply_comment_id=886933486062", "anchor": "fb-886341242922_886933486062", "service": "fb", "text": "&rarr;&nbsp;Tobias: my impression is that OpenPhil has more money it would like to allocate to risks from advanced AI than places it sees where that money would go a long way.  While current spending on AI safety is low compared to many other things, this means it isn't especially funding-limited.  So \"where to direct what seems to be already-significant resources\" seems reasonable to me.", "timestamp": "1499959043"}]}