
Conversation with an AI Researcher

    July 20th, 2017
    airisk, giving
    Earlier this week I had a conversation with an AI researcher [1] at one of the main industry labs as part of my project of assessing superintelligence risk. Here's what I got from them:

    They see progress in ML as almost entirely constrained by hardware and data, to the point that if today's hardware and data had existed in the mid-1950s, researchers would have reached approximately our current state within ten to twenty years. They gave the example of backprop: we saw how to train multi-layer neural nets decades before we had the computing power to actually train those nets to do useful things.
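    (To make the backprop point concrete, here's a minimal sketch of my own, not something from the conversation: the core algorithm, a two-layer net learning XOR by gradient descent, fits in a couple dozen lines of numpy. The math was understood long before the hardware existed to run it at a scale where it does anything interesting.)

        import numpy as np

        rng = np.random.default_rng(0)

        # XOR: the classic task a single-layer perceptron can't learn
        X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
        y = np.array([[0], [1], [1], [0]], dtype=float)

        # two-layer net: 2 inputs -> 4 hidden units -> 1 output
        W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)
        W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)

        def sigmoid(z):
            return 1 / (1 + np.exp(-z))

        lr = 1.0
        for step in range(5000):
            # forward pass
            h = sigmoid(X @ W1 + b1)   # hidden activations
            p = sigmoid(h @ W2 + b2)   # predictions
            # backward pass: chain rule on squared-error loss
            dp = (p - y) * p * (1 - p)
            dW2 = h.T @ dp; db2 = dp.sum(0)
            dh = (dp @ W2.T) * h * (1 - h)
            dW1 = X.T @ dh; db1 = dh.sum(0)
            # gradient descent step
            W2 -= lr * dW2; b2 -= lr * db2
            W1 -= lr * dW1; b1 -= lr * db1

        print(p.round(2))  # should be close to [[0], [1], [1], [0]]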

    Similarly, people talk about AlphaGo as a big jump, where Go went from being "ten years away" to "done" within a couple of years, but they said it wasn't like that. If Go work had stayed in academia, with academia-level budgets and resources, it probably would have taken nearly that long. What changed was a company seeing promising results, realizing what could be done, and putting far more engineers and hardware on the project than anyone had before. AlphaGo couldn't have happened earlier because the hardware wasn't there yet, and it could only be brought forward by a massive application of resources.

    In their view, it doesn't make sense to try to influence where the field will go more than a few years out. If an area has been underinvested in because people were focusing elsewhere, then after a few years, with faster hardware, that area will have lots of (now) low-hanging fruit and will quickly catch up. This implies that a strategy of differential technology development, where you try to change the relative rates at which parts of AI advance by working on a part you think is likely to make us safer, wouldn't work very well.

    (It looks to me like a big difference between (my model of) their view and (my model of) Dario's is: what fraction of the best research directions get pursued? The lower you think that fraction is, the less the "underinvested stuff will catch up when hardware gets better" view fits. This is also another connection back to technological distance: if you think underinvested stuff naturally starts to look more promising and catches up as people go into it, then the farther we are from AGI in terms of remaining work, the less a differential technology approach helps.)


    [1] They're a friend of mine through non-EA connections, so this is more like drawing a sample from the pool of researchers as a whole. At their request, I'm not using their name, affiliation, or gender.



