• Posts
  • RSS
  • ◂◂RSS
  • Contact

  • Yudkowsky and MIRI

    July 27th, 2017
airisk, giving [html]
In talking to ML researchers, I found that many were unaware of any effort to reduce risks from superintelligence. Others had heard of it before, and primarily associated it with Nick Bostrom, Eliezer Yudkowsky, and MIRI. One of them had very strong negative opinions of Eliezer, extending to everything they saw as associated with him, including effective altruism.

    They brought up the example of So you want to be a seed AI programmer, saying that it was clearly written by a crank. And, honestly, I initially thought it was someone trying to parody him. Here are some bits that kind of give the flavor:

    First, there are tasks that can be easily modularized away from deep AI issues; any decent True Hacker should be able to understand what is needed and do it. Depending on how many such tasks there are, there may be a limited number of slots for nongeniuses. Expect the competition for these slots to be very tight. ... [T]he primary prerequisite will be programming ability, experience, and sustained reliable output. We will probably, but not definitely, end up working in Java. [1] Advance knowledge of some of the basics of cognitive science, as described below, may also prove very helpful. Mostly, we'll just be looking for the best True Hackers we can find.

    Or:

    I am tempted to say that a doctorate in AI would be negatively useful, but I am not one to hold someone's reckless youth against them - just because you acquired a doctorate in AI doesn't mean you should be permanently disqualified.

    Or:

    Much of what I have written above is for the express purpose of scaring people away. Not that it's false; it's true to the best of my knowledge. But much of it is also obvious to anyone with a sharp sense of Singularity ethics. The people who will end up being hired didn't need to read this whole page; for them a hint was enough to fill in the rest of the pattern.

Now, this is from 2003, when he was 24, which was a while ago. [2] On the other hand, it's much easier to evaluate than his more recent work. For example, they had a similarly negative reaction to his 2007 Levels of Organization in General Intelligence, but I'm much less knowledgeable there.

    Should I be considering this in evaluating current MIRI?


    [1] This was after trying to develop a new programming language to create AI in, Flare:

Flare is really good. There are concepts in Flare that have never been seen before. We expect to be able to solve problems in Flare that cannot realistically be solved in any other language. We expect that people who learn to read Flare will think about programming differently and solve problems in new ways, even if they never write a single line of Flare. We think annotative programming is the next step beyond object orientation, just as object orientation was the step beyond procedural programming, and procedural programming was the step beyond assembly language.
      — Goals of Flare

[2] I wrote to Eliezer asking whether he thought it was reasonable at the time, and whether it was more like "a scientist looking back on a 2003 paper and saying 'not what I'd say now, conclusions aren't great, science moves on'" or more like retracting it. Eliezer skimmed it and said it was more the first one.

    Comment via: google plus, facebook


