Yudkowsky and MIRI

July 27th, 2017
airisk, ea
In talking to ML researchers, I found that many were unaware there was any sort of effort to reduce risks from superintelligence. Others had heard of it before, and primarily associated it with Nick Bostrom, Eliezer Yudkowsky, and MIRI. One of them had very strong negative opinions of Eliezer, extending to everything they saw as associated with him, including effective altruism.

They brought up the example of So you want to be a seed AI programmer, saying that it was clearly written by a crank. And, honestly, I initially thought it was someone trying to parody him. Here are some bits that kind of give the flavor:

First, there are tasks that can be easily modularized away from deep AI issues; any decent True Hacker should be able to understand what is needed and do it. Depending on how many such tasks there are, there may be a limited number of slots for nongeniuses. Expect the competition for these slots to be very tight. ... [T]he primary prerequisite will be programming ability, experience, and sustained reliable output. We will probably, but not definitely, end up working in Java. [1] Advance knowledge of some of the basics of cognitive science, as described below, may also prove very helpful. Mostly, we'll just be looking for the best True Hackers we can find.

Or:

I am tempted to say that a doctorate in AI would be negatively useful, but I am not one to hold someone's reckless youth against them - just because you acquired a doctorate in AI doesn't mean you should be permanently disqualified.

Or:

Much of what I have written above is for the express purpose of scaring people away. Not that it's false; it's true to the best of my knowledge. But much of it is also obvious to anyone with a sharp sense of Singularity ethics. The people who will end up being hired didn't need to read this whole page; for them a hint was enough to fill in the rest of the pattern.

Now, this is from 2003, when he was 24, which was a while ago. [2] On the other hand, it's much easier to evaluate than his more recent work. For example, they had a similarly negative reaction to his 2007 Levels of Organization in General Intelligence, but I'm much less knowledgeable there.

Should I be considering this in evaluating current MIRI?


[1] This was after an earlier attempt to develop Flare, a new programming language for creating AI:

Flare is really good. There are concepts in Flare that have never been seen before. We expect to be able to solve problems in Flare that cannot realistically be solved in any other language. We expect that people who learn to read Flare will think about programming differently and solve problems in new ways, even if they never write a single line of Flare. We think annotative programming is the next step beyond object orientation, just as object orientation was the step beyond procedural programming, and procedural programming was the step beyond assembly language.
  — Goals of Flare

[2] I wrote to Eliezer asking whether he thought it was reasonable at the time, and whether it was more like "a scientist looking back on a 2003 paper and saying 'not what I'd say now, conclusions aren't great, science moves on' vs retracting it". Eliezer skimmed it and said it was more the first one.

Referenced in: Superintelligence Risk Project: Conclusion
