
Experiments and Consent

November 10th, 2019
experiments, ethics
One of the responses to my Uber self-driving car post objected to Uber experimenting on public roads:
Self-driving research as practiced across the industry is in violation of basic research ethics. They should not be allowed to toss informed consent out the window, no matter how cool or revolutionary they think their research is.
I've seen this general sentiment before: if you want to run an experiment involving people, you need to get their consent and get approval from an IRB, right?

While academia and medicine do run on a model of informed consent, it's not required or even customary in most fields. Experimentation is widespread, as organizations want to learn what effect their actions have. Online companies run tons of A/B tests. UPS ran experiments on routing and found it was more efficient if they planned routes to avoid left turns. Companies introduce new products in test markets. This is all very standard and has been happening for decades, though automation has made it easier and cheaper, so there's more of it now.
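
For concreteness, here's a minimal sketch of the assignment logic behind a typical A/B test. The names and parameters are illustrative, not any particular company's system; the core idea is just deterministic bucketing, so each user stably lands in one group without anyone asking them first:

    import hashlib

    def assign_variant(user_id, experiment_name, variants=("A", "B")):
        # Hash the user and experiment together so each user gets a
        # stable, effectively random assignment for this experiment.
        digest = hashlib.sha256(f"{experiment_name}:{user_id}".encode()).hexdigest()
        return variants[int(digest, 16) % len(variants)]

    # Each user consistently sees one version of the page; comparing
    # outcomes between the groups estimates the effect of the change.
    print(assign_variant("user-12345", "checkout-button-color"))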

When you look at historical cases of experimentation gone wrong, the problem is generally that the intervention was unethical on its own. Leaving syphilis untreated, infecting people with diseases, telling people to shock others, and dropping mosquitoes from planes are all things you normally shouldn't do. The problem in these cases wasn't that they were experimenting on people, but that they were harming people.

Similarly, the problem with Uber's car was the intervention itself: an automated driving system that can't recognize pedestrians, can't anticipate the movements of jaywalkers, freezes in response to dangerous situations, and won't brake to mitigate collisions is absolutely nowhere near ready to guide a car on public roads.

We have a weird situation where the rules for experimentation in academia and medicine are much more restrictive than anywhere else. So restrictive that even a very simple study, where you do everything you normally do but also record whether two diagnostics agreed with each other, can be bureaucratically impractical to run. We should remove most of these restrictions: you should still have to get approval and informed consent if you want to hurt people or violate a duty you have to them, but "if it's ok to do A or B then it's fine to run an experiment on A vs B" should apply everywhere.

(I wrote something similar earlier, after Facebook's sentiment analysis experiment.)

Comment via: facebook, lesswrong
