Conversation with Dario Amodei

July 4th, 2017
airisk, ea
Earlier this week I talked to Dario Amodei, a research scientist at OpenAI and one of the authors of Concrete Problems in AI Safety (pdf). I wanted to talk to him as part of my superintelligence risk project because he's both an ML researcher and concerned about AI risk, so he's in a position to have a good view of the issue. Here's a rough summary of what I learned from him:

In Dario's view, the research that's most valuable from an AI safety perspective also has substantial value from the perspective of solving problems today, and can be productively worked on in the same manner as any other area of machine learning (ML) research. He started by explaining the situation around goals, as an illustration of the kind of work he'd like to see and where it fits in.

To be intelligent, you kind of do three things: make predictions about your environment, take actions based on those predictions, and have and execute complex goals. Historically, most research effort has gone into the first two. Which makes sense: there are many difficult problems where we can specify the goals very simply—recognize handwriting, win at Go, classify the subjects of images—and we've made lots of progress this way.

On the other hand, there are also lots of cases where it's hard to specify goals. Maybe we know what would be a good solution but don't know how to code that up as a reward function. Right now this is mostly a limitation on our ability to apply learning systems to problems, but if the prediction and action aspects of ML get far enough ahead of the reward aspect it could be dangerous. Much of the risk of things going wrong, from Dario's perspective, is that if specifying complex goals is still pretty new to us when we get to AGI, we might not have enough experience to get it right. Instead he would like to see us prioritize work on goals now, helping the reward side keep up with the rest of ML.

Not surprisingly, given how pressing he considers this, it's one of the things his group is working on. For example, in their recent paper, Deep Reinforcement Learning from Human Preferences (blog post, pdf), they train systems for several tasks by asking humans to compare pairs of short video clips and pick which one is better. Instead of asking people for feedback constantly, they train a model to predict the human's judgements, which means they can have the system ask for feedback only in the situations where it's most uncertain. This is only a step, but it's an example of the kind of work he thinks we need more of.
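To make this concrete, here's a minimal sketch of learning a reward model from pairwise human comparisons, in the spirit of that paper but not taken from it: the model scores each clip, and the difference in total predicted reward between two clips is treated as the log-odds that a human prefers the first one. The network, shapes, and toy data below are all illustrative assumptions.

```python
# Minimal sketch (not the paper's code): a reward model trained from
# pairwise human preferences over short trajectory clips.
import torch
import torch.nn as nn

class RewardModel(nn.Module):
    """Maps an observation (here a flat feature vector) to a scalar reward."""
    def __init__(self, obs_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs).squeeze(-1)

def preference_loss(model, clip_a, clip_b, prefers_a):
    """Bradley-Terry style loss: the summed predicted reward over each clip
    determines the probability that a human prefers that clip."""
    # clip_a, clip_b: (batch, timesteps, obs_dim) trajectory segments
    r_a = model(clip_a).sum(dim=1)   # total predicted reward for clip A
    r_b = model(clip_b).sum(dim=1)   # total predicted reward for clip B
    logits = r_a - r_b               # sigmoid(logits) = P(human prefers A)
    return nn.functional.binary_cross_entropy_with_logits(
        logits, prefers_a.float())

# Toy usage with random data, just to show the shapes involved.
model = RewardModel(obs_dim=8)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
clip_a = torch.randn(16, 25, 8)        # 16 pairs of 25-step clips
clip_b = torch.randn(16, 25, 8)
labels = torch.randint(0, 2, (16,))    # 1 if the human preferred clip A
loss = preference_loss(model, clip_a, clip_b, labels)
loss.backward()
opt.step()
```

The reinforcement learner then optimizes against this learned reward instead of a hand-coded one, and the "ask for feedback only where it's most uncertain" part comes from estimating the reward model's uncertainty on candidate clip pairs (for instance, disagreement across an ensemble of such models) and sending only the most uncertain pairs to a human.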

Two other examples he gave of the kind of areas he'd like to see more work in were transparency (understanding how the system gives the answers it does; example) and adversarial examples (inputs chosen to make the system screw up; example).
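To illustrate the adversarial-examples side, here's a minimal sketch of the fast gradient sign method, one standard way of constructing such inputs (not taken from the linked examples; the classifier here is a placeholder): it nudges an input in the direction that most increases the classifier's loss, which is often enough to flip the prediction.

```python
# Minimal FGSM sketch: perturb an input so a classifier misclassifies it.
# The model, input, and epsilon are placeholders, not from the linked examples.
import torch
import torch.nn as nn

def fgsm_example(model: nn.Module, x: torch.Tensor, label: torch.Tensor,
                 epsilon: float = 0.01) -> torch.Tensor:
    """Return a copy of x moved epsilon in the sign of the loss gradient."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), label)
    loss.backward()
    return (x + epsilon * x.grad.sign()).detach()
```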

He also wanted to emphasize that he thinks AI safety work today should aim to be valuable on its own as ML research that allows us to perform new tasks, work that would be valuable even if we weren't considering long-term safety. The idea is that this helps ground the work, gives it an empirical feedback loop, and makes it more likely to be useful in the long run. [1]

At this point I was wondering: since industry also cares a lot about whether learning systems do what we want them to do, is this really something we need to do for altruistic reasons? Dario's response was that transparency and safety are difficult research areas and, while they do pay off in the short run, they pay off more in the long run, so they will tend to be underinvested in. There are also many more promising research directions than researchers right now, so what ends up getting explored depends heavily on what researchers are interested in.

Additionally, the sooner you think we might have AGI, the more important this work is. If it takes us a long time to get AGI, perhaps because we need a lot more computational power or many new learning breakthroughs, there's more likely to have been enough time to make more balanced progress. If it takes a short time, however, perhaps because it turns out that all we need for AGI is extensions of what we have today and a lot of scaling, then the risk is much higher.


[1] This lines up strongly with my 2015 thoughts on why I didn't think (mathematical-style) AI safety work was likely to be productive.

