
Superintelligence Risk Project

July 3rd, 2017
giving, airisk
I've decided to make a larger project out of looking into AI risk. I don't think we really know why there's such a disconnect between people who think we should strongly prioritize it and the mainstream ML perspective that it's not useful to work on (at least not currently). I've applied for an EA grant, and am thinking I'll spend about a month on this.

Here's where I currently am:

  • I've read nearly all of the material people suggested (by word count), or about two thirds of it (by number of pieces). This is mostly an effect of Superintelligence being very long.

  • I've had conversations with one person in each camp, have a few more scheduled, and am working on lining up more.

Here are some very preliminary thoughts on where I think the disagreement might be:

  • How likely is it that current approaches are all we need for AGI, with relatively straightforward extensions and a lot of scaling?

  • How valuable is it to work on solving problems that are probably not the right ones? For example, even if we think AGI will not look like current systems, might trying to solve the control problem for current systems teach us enough about the underlying problem and how to do this kind of work that we'll be in a better position once we see more what AGI will actually look like?

  • How useful is it to have a strong theoretical foundation, vs just understanding the technology enough from an engineering perspective that we can make it do things for us?

  • How similar is this to normal engineering? How much should we expect companies' interest in having their AI systems do what they want to actually work out?

  • As we get closer to AGI, how likely is the ML community to take superintelligence risk seriously? Is it just that they don't think it can be productively worked on now, or do they not think it will ever be a real problem?


