
Examples of Superintelligence Risk

July 13th, 2017
giving, airisk

In talking to people who don't think superintelligence risk is something we should be prioritizing, it's common for them to ask for an example of the kind of thing I'm worried about. Unfortunately, I have never seen an example where I could say "yes, I see how that could happen". Instead, all the examples just seem kind of silly? Here are some of the examples I've seen:

Are there any better examples out there? If not, I think it would be very helpful for someone who thinks we should be taking superintelligence risk seriously to put one together. When all of the specific examples of how things could go wrong have obvious "but why would you ..." or "but why wouldn't you just ..." openings, critics are much less willing to engage.

(Compare this to other existential risks: with those it's very easy to come up with examples of what could happen and how bad it would be.)

[1] I brought this up in a conversation with Owen. Later he told me he'd talked to 80k and they might be replacing the example.

Comment via: google plus, facebook

