Conflicted on AI Politics
June 10th, 2025
airisk, tech
There are a lot of ways AI could go, and many of them are seriously bad. I'm personally most worried about AI removing the technical barriers that keep regular people from creating pandemics, removing the human inefficiencies and moral objections that have historically made totalitarian surveillance and control difficult to maintain, and gradually being put in control of critical systems without effective safeguards to keep it aligned with our interests. I think these are some of the most important problems in the world today, and I quit my job to work on one of them.
Despite these concerns, I'm temperamentally and culturally on the side of better technology, building things, and being confident in humanity's ability to adapt and to put new capabilities to beneficial use. When I see people pushing back against rapid deployment of AI, it's often with objections I think are minor compared to the potential benefits. Common objections I find unconvincing include:
Energy and water: the impact is often massively overstated, and where it's real we can build more solar and desalination capacity.
Reliability: people compare typical-case AI judgement to best-case human judgement, ignoring that humans often operate well below best-case performance.
Art: technological progress has brought us to a world with more artists than ever before, and I'd predict an increase in human-hours devoted to art as barriers continue to fall.
Tasks: it's overall great when we're able to automate something, freeing humans up to work elsewhere. In my own field, a large fraction of what programmers spent their time on in 1970 has long since been automated, and at companies that lean heavily on AI the same is now true of most of what programmers were doing just 3-5 years ago. The role is quickly shifting to look a lot more like management.
I'm quite torn on how to respond when I see people making these objections. On one hand we agree on how we'd like to move a big "AI: faster or slower" lever, which puts us on the same side. Successful political movements generally require accepting compatriots with very different values. On the other hand, reflexively emphasizing negative aspects of changes in ways that keep people from building has been really harmful (housing, nuclear power, GMO deployment). This isn't an approach I feel good about supporting.
Other criticisms, however, are very reasonable. A few examples:
Employment: it's expensive to have employees, and companies are always looking to cut costs. Initially I expect AI to increase employment, the same way the railroad and the truck initially increased demand for horses: in some areas humans (or horses) excel, while in others AI (or mechanized transport) does. Over time, though, and possibly quite quickly, I expect what happened to horses to happen to humans: as the competition becomes cheaper and more capable, they become economically marginal.
Scams: these have historically been limited by labor, both in what it costs and in how many people are willing to do the work. AI loosens both constraints dramatically.
Education: cheating in school is another thing that has historically been limited by cost and ethics. But when the AI can do your homework better than you can, cheating becomes nearly inevitable: you'll be graded on a curve against classmates who are using AI, your self-control is still developing, and teachers are mostly not adapting to the new reality. Learning suffers massively.
I'd love it if people thought hard about potential futures and where we should go with AI, and took both existential (pandemic generation) and everyday (unemployment) risks seriously. I'm very conflicted, though, on how much to push back on arguments where I agree with the bottom line while disagreeing with the specifics. For now I'm continuing to object when I see arguments that seem wrong, but I'm going to try to put more thought into emphasizing the ways we do agree and not being too adversarial.