Adversarial vs Collaborative Contexts
|October 3rd, 2022|
In a collaborative context other people respond to your change by building on it, and it typically has a larger effect than you'd naively expect. For example, say you create an icon to communicate an idea in your program, other people start using it, and it becomes the standard representation for the concept. While an A/B test might have been only mildly positive, or even negative, after the world adapted to the new icon it would have become very clearly the right symbol to use.
In an adversarial context other people respond to your change by countering it, and it typically has a smaller effect. You figure out a way to detect bot traffic, but the bot operators are very motivated to shift their behavior to undo your work.
Some contexts are neutral, where both collaborative and adversarial effects are minimal or nonexistent. A baby toy design doesn't get better or worse as new generations of babies encounter it.
Diseases are adversarial: bacteria evolve antibiotic resistance, new variants evade immunity. Even if something looks good in initial testing, that advantage will erode over time. An interesting exception here is cancer, which (almost always) has to evolve anew in each patient, and so cancer treatments don't become less effective over time (in new patients).
Adopting standards is collaborative: each computer with USB increased demand for devices that spoke USB and vice-versa. Someone thinking "let's not bother with USB, it's not that widely supported yet" would have been ignoring collaborative effects.
Same with building on a platform: the more iPhones sold the more people wanted apps, and the more apps there were the more people wanted iPhones.
Interaction with nature is generally adversarial: cockatoos figure out how to get into trash, weeds evolve herbicide resistance.
A new kind of ad has both kinds of effects: adversarial in that people will notice novel ads more, but collaborative in that advertisers haven't yet learned how to make the most of the format.
This sort of effect is especially important to think through when considering rolling something out based on the results of an experiment. You've almost certainly not run the experiment long enough to see how the world will adjust, but you can use these patterns to predict whether the long-term results will be stronger or weaker than you've measured.
Comment via: facebook, lesswrong