  • Looking into AI Risk

    June 26th, 2017
    airisk, giving
    In considering what to work on, I wrote that recently many people I respect have started to think that AI risk is the most valuable place to focus EA efforts. For example, 80000 Hours ranks it first on their "list of global issues", taking into account "scale, neglectedness, and solvability". On the other hand, I have a lot of friends working in machine learning, and none of them think AI risk is worth working on now. This level of disagreement is very strange, and kind of worrying.

    What I'm planning to spend the next few days on is getting a better understanding of where this difference comes from. I think I'm in a good position to do this: I'm close to both groups, have some technical background as a programmer, and have some time. I see two ways this could go:

    • If after looking into it more I still think AI risk is not a valuable place to be working, I may be able to convince others of this. Since 80000 Hours and other EAs are currently suggesting a lot of people go into this field, if it turns out we're overvaluing it then those people could work on other things.

    • If I change my mind and start thinking AI risk is something we should be working on, I may convince some of my friends in machine learning. It's also likely that something in this direction would be close enough to my skills to be a good career fit, in which case I should consider working on it myself.

    Of course it's also possible that I won't get to the root of the disagreement, or that I won't convince anyone except myself, but I do think it's worth trying.

    Rough plan: read a bunch of stuff to get background, talk to a lot of people, write things up. Things I'm planning to read:

    The list above is entirely people who think AI risk should be prioritized, aside from the Ceglowski post at the end, so I'm especially interested to read (if they exist) pieces where machine learning experts talk about why they don't think AI risk is a high priority. I'm also interested in other general AI risk background reading, and suggestions of people to talk to.