Looking into AI Risk

June 26th, 2017
giving, airisk

In considering what to work on, I wrote that many people I respect have recently started to think that AI risk is the most valuable place to focus EA efforts. For example, 80000 Hours ranks it first on their "list of global issues", taking into account "scale, neglectedness, and solvability". On the other hand, I have a lot of friends working in machine learning, and none of them think AI risk is worth working on now. This level of disagreement is very strange, and kind of worrying.

Over the next few days I'm planning to get a better understanding of where this difference comes from. I think I'm in a good position to do this: I'm close to both groups, have some technical background as a programmer, and have some time. I see two ways this could go:

Of course it's also possible that I won't get to the root of the disagreement, or that I won't convince anyone except myself, but I do think it's worth trying.

Rough plan: read a bunch of stuff to get background, talk to a lot of people, write things up. Things I'm planning to read:

The list above is entirely people who think AI risk should be prioritized, aside from the Ceglowski post at the end, so I'm especially interested to read (if they exist) pieces where machine learning experts talk about why they don't think AI risk is a high priority. I'm also interested in other general AI risk background reading, and suggestions of people to talk to.

