
More Giving vs Doing

I'm thinking about going back to earning to give, and earlier this week I wrote:

My main reservation about earning to give continues to be that I think the most important things are mostly constrained by people, not money.
David asked if I had a post explaining why I think this, and it looks like I don't, so I should probably try. I'm a bit reluctant to be writing this because everything involved is so uncertain, but it's at least better to lay out my reasons than to just make assertions.

For the most part, I agree with 80k's why you should focus more on talent gaps, not funding gaps. There are a lot of valuable things that are constrained by talent, for example:

  • Working in government (foreign service, funding allocation, ...)
  • Starting new charities GiveWell or OpenPhil would like to fund (like CS:H)
  • Starting for-profits in low-income countries
  • Research (global catastrophic risks, medical, ...)

Additionally, Good Ventures has been ramping up their funding dramatically [1], and it's now clear that they have a lot of overlap with EAs in what they want to fund, including AMF, ACE, DWtW, GiveDirectly, 80k, FHI, MIRI, SCI, THL, MFA, and CFAR. While OpenPhil is the biggest funder here, there are other funders now as well; for example, this 2015 donation.

This is far more money than was previously available, and should shift our sense of how to prioritize earning to give vs doing things directly. My earlier writing (2013-03, 2013-08) and other things from that era (like the comments on this post) were mostly in the context of there not being much funding available.

On the other hand, I think the capacity for more funds is still pretty substantial, well beyond what current funders have available. GiveWell's top charities still have major unmet room for more funding every year, and if nothing else the amount of money that could go to cash transfers is enormous.

There are also many ways that EA organizations haven't yet adjusted to the landscape changing from very tight funding to more available funding. Higher pay would probably increase the number of EAs they're able to hire, but even if you can't generally turn more money into more EA labor, my understanding is that these organizations currently run almost entirely on EA labor. There are lots of tasks (proofreading, accounting, HR) that could be done by non-EA professionals, hired or contracted, and as these organizations start to explore this I think they'll be better able to use funding.

On balance, I think additional funding is less valuable than it was, though still valuable enough that some people should be earning to give, and given my constraints I think it's currently the best fit for me.


[1] Good Ventures started working with GiveWell in 2012 [2], and they jointly formed the Open Philanthropy Project (OpenPhil) in 2014 [3]. They're still scaling up, and I'm estimating [4] that they'll grant over $200M this year.

Since OpenPhil plans to spend down Moskovitz and Tuna's money "before we die, and ideally well before we die," and that's currently something like $16B, I'd guess maybe one or two more doublings of annual giving as they continue to ramp up capacity.
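As a rough illustration of what that would mean (a sketch, assuming roughly $16B remaining, roughly $200M granted this year, and ignoring investment returns):

    # Rough spend-down timescales: ~$16B remaining, ~$200M/year current giving.
    total_to_spend = 16e9
    current_annual = 200e6

    for doublings in (1, 2):
        annual = current_annual * 2 ** doublings
        years = total_to_spend / annual
        print(f"{doublings} more doubling(s): ${annual / 1e6:.0f}M/year, ~{years:.0f} years to spend down")
    # 1 more doubling(s): $400M/year, ~40 years to spend down
    # 2 more doubling(s): $800M/year, ~20 years to spend down

One or two doublings would put the spend-down timescale at a few decades, which fits with "well before we die."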

[2] The earliest thing I can find on GiveWell's website here is this policy from June 2012, at which point they were already sharing office space, so it's possible the relationship goes back farther.

[3] The 2014 post I linked to is about the rebranding of "GiveWell Labs" as the "Open Philanthropy Project", and I think it was mostly about making clear a change that had already happened. The September 2011 Announcing GiveWell Labs doesn't mention Good Ventures (though it does mention a $1M pre-commitment), so I'm not sure whether GiveWell and Good Ventures were already working together at that point.

[4] Data comes from OpenPhil's grants page. For 2017 I've taken their giving so far ($140M through July) and adjusted for the fact that historically (2012-2016) they made 64% of each year's grants, by dollars, in January through July.
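To make the arithmetic explicit, here's a minimal sketch of that projection (the $140M and 64% figures are from the grants-page data above; the method is just proportional scaling):

    # Project full-year 2017 grants from Jan-Jul giving, assuming the
    # Jan-Jul share matches the 2012-2016 average of 64%.
    granted_through_july = 140e6   # dollars granted Jan-Jul 2017
    jan_jul_share = 0.64           # historical fraction of annual grants made Jan-Jul

    projected_2017_total = granted_through_july / jan_jul_share
    print(f"Projected 2017 grants: ${projected_2017_total / 1e6:.0f}M")  # ~$219M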

full post...

Conversation with an AI Researcher

Earlier this week I had a conversation with an AI researcher [1] at one of the main industry labs as part of my project of assessing superintelligence risk. Here's what I got from them:

They see progress in ML as almost entirely constrained by hardware and data, to the point that if today's hardware and data had existed in the mid-1950s, researchers would have gotten to approximately our current state within ten to twenty years. They gave the example of backprop: we saw how to train multi-layer neural nets decades before we had the computing power to actually train those nets to do useful things.

Similarly, people talk about AlphaGo as a big jump, where Go went from being "ten years away" to "done" within a couple of years, but they said it wasn't like that. If Go work had stayed in academia, with academia-level budgets and resources, it probably would have taken nearly that long. What changed was a company seeing promising results, realizing what could be done, and putting far more engineers and hardware on the project than anyone had previously. AlphaGo couldn't have happened earlier because the hardware wasn't there yet, and it could only be brought forward by a massive application of resources.

In their view, it doesn't make sense to try to influence where the field will go more than a few years out. If an area has been underinvested in because people were focusing elsewhere, then after a few years, with faster hardware, that area will have lots of (now) low-hanging fruit and will quickly catch up. This would imply that a strategy of differential technology development, where you try to change the relative rates at which parts of AI advance by working in a part you think is likely to make us safer, wouldn't work very well.

(It looks to me like a big difference between (my model of) their view and (my model of) Dario's is: what fraction of the best research directions get pursued? The lower you think that fraction is, the less the "underinvested stuff will catch up when hardware gets better" view fits. This is also another connection back to technological distance: if you think underinvested stuff naturally starts to look more promising and catches up as people go into it, then the farther we are from AGI in terms of remaining work, the less a differential technology approach helps.)


[1] They're a friend of mine through non-EA connections, so this is more like drawing a sample from the pool of researchers as a whole. At their request, I'm not using their name, affiliation, or gender.

full post...

Thinking about going back to earning to give

I'm wrapping up my AI risk project, and at this point I'm thinking I should probably go back to earning to give:

  • Most of my best altruistic options probably look a lot like what I did the past month: working remotely with varying amounts of self-management. I suspected I wouldn't like this very much; now I know.

  • While I haven't finished the AI risk project yet, I think I'm pretty likely to conclude something like "I think work here is valuable, but not the very most pressing thing" and I don't think I'm a good fit for this kind of research.

  • I really like programming. When I think about different things I might work on, I keep coming back to opportunities at big tech companies with openings here that look pretty exciting.

  • I still mostly agree with my 2016 EAG talk.

more...
Superintelligence Risk Project Update II

This is the beginning of my third week looking into potential risks from superintelligence (kickoff, update 1) and I think I'm hitting diminishing returns. I'm planning to wrap up in the next day or so, and go back to figuring out what I should work on next.

Last week:

  • Technical Distance to AGI: I hypothesized (incorrectly) that the main difference between ML researchers who thought we could vs couldn't work on AI risk now was how far off they thought AGI was, in terms of some combination of time and technological distance. Recommended comments: Jacob, Paul, Dario. I also made a 1:9 bet with Dave on whether we'll have driverless cars in the next 10 years.

  • Examples of Superintelligence Risk: I collected the examples I've seen of what a "loss of control of an AI" catastrophe might look like, and tried to figure out why those examples are much less realistic than the ones we see for other existential risks like nukes or bioterror. Recommended comments: Eliezer, Jim, Paul.

  • Conversation with Bryce Wiedenbeck: I talked to an AI professor, main takeaway being that he thinks the technical distance to AGI is very high. Recommended comment: Dario.

  • I found OpenPhil's notes on Early Field Growth interesting, especially their section on failure modes in cryonics and molecular nanotechnology. My takeaway was that heavy popularization of a new field prior to scientific success leads scientists on the border of the new field to take an oppositional stance. The field gets starved of people who could do substantial technical work and makes minimal progress, and I think people also avoid the areas around its edges, like a chilling effect. I see superintelligence risk as just on the edge of this, where it could go either way. This also makes me (weakly) think that Daniel Dewey's point on the relative field-building effects of MIRI-style vs prosaic AI-style approaches should maybe go farther. Specifically, you don't want safety to be thought of as a "we don't do that, those people are cranks" sort of thing, so it's a lot better if AI safety develops primarily as a field within ML.

  • I spoke to three other ML researchers, one of whom I'm hoping to write up conversation notes from.

  • I had applied for an EA Grant when I thought I might spend longer on this, but withdrew after getting to the phone interview stage.

  • I spent most of Monday working on the house and running errands instead of on this project.

(A big takeaway for me is that I don't like doing this kind of work very much. I think it's a combination of two things: it's isolated work (as I'm doing it), and it's a kind of thinking that I enjoy in moderation but not for full-time work. These two combine pretty strongly: this kind of thinking is much more enjoyable for me when working with someone else, where we can have a lot of conversations to clarify ideas and look for the best areas to make progress. David Chudzicki, one of my housemates, has been helpful here, and we've talked a lot, but it's still something I'm mostly working on alone.)

full post...

Window Vent

During the summer (here) night-time outdoor temperatures are typically pretty pleasant, but sleeping inside can still be uncomfortable because of heat stored during the day. The normal fan-based solution has two parts: fans in the windows with cross-ventilation to trade the inside air for outside air, and fans pointing at the people to enhance evaporative cooling. This works well, but we can do better!

What if we pipe the outside air directly onto you? Then you feel the outdoor temperature, instead of the current room temperature, which lags significantly behind:

more...

Conversation with Bryce Wiedenbeck

A few days ago I spoke with Bryce Wiedenbeck, a CS professor at Swarthmore teaching AI, as part of my project of assessing superintelligence risk. Bryce had relatively similar views to Michael: AGI is possible, it could be a serious problem, but we can't productively work on it now. more...
