  Yudkowsky and MIRI

    July 27th, 2017
    airisk, giving
    In talking to ML researchers, I found that many were unaware that there was any sort of effort to reduce risks from superintelligence. Others had heard of it before, and primarily associated it with Nick Bostrom, Eliezer Yudkowsky, and MIRI. One of them had very strong negative opinions of Eliezer, extending to everything they saw as associated with him, including effective altruism.

    They brought up the example of So you want to be a seed AI programmer, saying that it was clearly written by a crank. And, honestly, I initially thought it was someone trying to parody him. Here are some bits that kind of give the flavor:

    First, there are tasks that can be easily modularized away from deep AI issues; any decent True Hacker should be able to understand what is needed and do it. Depending on how many such tasks there are, there may be a limited number of slots for nongeniuses. Expect the competition for these slots to be very tight. ... [T]he primary prerequisite will be programming ability, experience, and sustained reliable output. We will probably, but not definitely, end up working in Java. [1] Advance knowledge of some of the basics of cognitive science, as described below, may also prove very helpful. Mostly, we'll just be looking for the best True Hackers we can find.

    Or:

    I am tempted to say that a doctorate in AI would be negatively useful, but I am not one to hold someone's reckless youth against them - just because you acquired a doctorate in AI doesn't mean you should be permanently disqualified.

    Or:

    Much of what I have written above is for the express purpose of scaring people away. Not that it's false; it's true to the best of my knowledge. But much of it is also obvious to anyone with a sharp sense of Singularity ethics. The people who will end up being hired didn't need to read this whole page; for them a hint was enough to fill in the rest of the pattern.

    Now, this is from 2003, when he was 24, which was a while ago. [2] On the other hand, it's much easier to evaluate than his more recent work. For example, they had a similarly negative reaction to his 2007 Levels of Organization in General Intelligence, but I'm much less knowledgeable there.

    Should I be considering this in evaluating current MIRI?


    [1] This was after trying to develop Flare, a new programming language for creating AI:

    Flare is really good. There are concepts in Flare that have never been seen before. We expect to be able to solve problems in Flare that cannot realistically be solved in any other language. We expect that people who learn to read Flare will think about programming differently and solve problems in new ways, even if they never write a single line of Flare. We think annotative programming is the next step beyond object orientation, just as object orientation was the step beyond procedural programming, and procedural programming was the step beyond assembly language.
      — Goals of Flare

    [2] I wrote to Eliezer asking whether he thought it was reasonable at the time, and whether it was more like "a scientist looking back on a 2003 paper and saying 'not what I'd say now, conclusions aren't great, science moves on' vs retracting it". Eliezer skimmed it and said it was more the first one.
