"When I'm on my deathbed, I won't look back at my life and wish I had worked harder. I'll look back and wish I spent more time with the people I loved."
Setting aside that some people don't have the economic breathing room to make this kind of tradeoff, what jumps out at me is the implication that you're not working on something important that you'll endorse in retrospect. I don't think the author is envisioning directly valuable work (reducing risk from international conflict, pandemics, or AI-supported totalitarianism; improving humanity's treatment of animals; fighting global poverty) or the undervalued less direct approach of earning money and donating it to enable others to work on pressing problems.
When I look in man zstd I see that you can set -B<num> to specify the size of the chunks, and it's documented as "generally 4 * windowSize". Except the documentation doesn't say how windowSize is set.
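As best I can tell (this is my reading of the zstd docs, not something the man page spells out), there's no flag that sets windowSize directly: it's 2^windowLog bytes, capped at the source size, and windowLog is picked from an internal table based on the compression level. You can override it indirectly. A sketch, assuming zstd >= 1.4 is installed; big.bin is just a sample file made up for the example:

```shell
# Generate a sample input file for the example.
head -c 4000000 /dev/urandom > big.bin

# windowLog (and so windowSize) chosen by the level-19 parameter table.
zstd -f -q -T2 -19 big.bin

# Long-distance-matching mode sets windowLog explicitly: 2^27 = 128 MiB window.
zstd -f -q -T2 --long=27 big.bin

# Force ~8 MiB compression jobs with -B, independent of windowSize.
zstd -f -q -T2 -B8M big.bin
```

Note that a file compressed with a large --long window may also need --long (or a larger --memory limit) at decompression time.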
To encourage myself to live more frugally and to give an example of what I thought was a pretty fulfilling life at relatively low cost for the US, I used to calculate numbers for how much we spent on ourselves. This included housing, food, transportation, medical care, etc., but not donations, taxes, or savings. At one point there were some news stories comparing our spending to our income, and it was nice to have a simple number to point at.
I was thinking it might be nice to start calculating these numbers again, but when I looked back at why I stopped it's mostly that it's actually a pretty tricky accounting question and I'm not sure there are ways to draw the lines that make much sense. For example:
Cross-posted from my NAO Notebook.
This is an internal strategy note I wrote in November 2024 that I'm making public with some light editing.
In my work at the NAO I've been thinking about what I expect to see as LLMs continue to become more capable and get closer to where they can significantly accelerate their own development. I think we may see very large advances in the power of these systems over the next few years.
I'd previously thought that the main impact of AI on the NAO was through accelerating potential adversaries, and so shorter timelines primarily meant more urgency: we needed to get a comprehensive detection system in place quickly.
I now think, however, that this also means the best response involves some reprioritization. Specifically, AI will likely speed up some aspects of the creation of a detection system more than others, and so to the extent that we expect rapid advances in AI we should prioritize the work that we expect to bottleneck our future AI-accelerated work.
In American English (AE), "quite" is an intensifier, while in British English (BE) it's a mild deintensifier. So "quite good" is "very good" in AE but "somewhat good" in BE. I think "rather" works similarly, though it's less common in AE and I don't have a great sense for it.
"Scheme" has connotations of deviousness in AE, but is neutral in BE. Describing a plans or system as a "scheme" is common in BE and negative in AE.
"Graft" implies corruption in AE but hard work in BE.
These can cause silent misunderstandings where two people come away with very different ideas about the other's view.