Technical Distance to AGI

July 11th, 2017
giving, airisk

I think the largest piece of the disagreement among ML people over how to prioritize AI risk may turn out to be: how likely is it that we soon get AGI that looks a lot like the systems we have now?

Specifically, here are two things I'd guess most of the people I've talked to would agree with (though I haven't run this by any of them):

What I'm planning to do next is write back to all of the people I've been talking to in order to find out:

Before I reach out to people again, though, I wanted to post this here for feedback. Any thoughts?

(This is closely related to the more general question of AI timelines. For that, OpenPhil's What Do We Know about AI Timelines? gives a good overview and summary, with Katja Grace and others' recently published When Will AI Exceed Human Performance? Evidence from AI Experts (pdf) giving more recent information. But (a) that's about time, not about technical distance (would AGI look like what we have now?), and (b) my main takeaway from the surveys is that people are giving pretty off-the-cuff answers without thinking very hard.)

Comment via: google plus, facebook
