Technical Distance to AGI

July 11th, 2017
airisk, ea
I think the largest piece of the disagreement among ML people over how to prioritize AI risk may turn out to be: how likely is it that we soon get AGI that looks a lot like the systems we have now?

Specifically, here are two things I'd guess most of the people I've talked to would agree with (though I haven't run this by any of them):

  • If there were a 10% chance of current-style systems getting us to AGI within the next 10 years, then the kinds of questions posed in Concrete Problems in AI Safety (pdf) would be very important from a safety perspective.

  • If the chance were less than 1%, then while those are still interesting research questions that look useful for today, they're not altruistically important.

What I'm planning to do next is write back to all of the people I've been talking to in order to find out:

  • Does this actually capture their views?
  • What do they put the chance at, and why?

Before I reach out to people again, though, I wanted to post this here for feedback. Any thoughts?

(This is a lot like the more general question of AI timelines. For that, Open Phil's What Do We Know about AI Timelines? gives a good overview and summary, and Katja Grace and others' recently published When Will AI Exceed Human Performance? Evidence from AI Experts (pdf) gives newer data. But (a) those are about time and not technical distance (would AGI look like what we have now?) and (b) my main takeaway from the surveys is that people are giving pretty off-the-cuff answers without thinking very hard.)

Comment via: google plus, facebook
