The Facebook Research
The Facebook experiments are actually very clever.
The content stream is the presentation of everyone else's
material to an individual user. So, in my content stream, I get stuff from Rod,
stuff from my family, stuff from Moncton Free Press, stuff from the City, etc.,
including some sponsored or promoted content from magazines and advertisers.
Facebook has been tinkering with this content stream
since day one. You can't show everything, because there's generally too much.
So you show the most 'relevant' links (on some content streams you have the
option to 'see most recent' and 'see most relevant'). They are experimenting
with what counts as 'relevant'.
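To make this concrete, here is a minimal sketch of the difference between the two orderings. Everything in it is hypothetical: the Post fields, the scoring weights, and the most_relevant function are invented stand-ins, since Facebook's actual ranking is proprietary and far more elaborate.

```python
from dataclasses import dataclass
import time

@dataclass
class Post:
    author: str
    text: str
    timestamp: float        # seconds since epoch
    likes: int = 0
    affinity: float = 0.0   # hypothetical: how often this viewer interacts with this author

def most_recent(posts):
    """'See most recent': plain reverse-chronological order."""
    return sorted(posts, key=lambda p: p.timestamp, reverse=True)

def most_relevant(posts, now=None, limit=20):
    """'See most relevant': a made-up score combining affinity,
    engagement, and recency. The weights are arbitrary stand-ins
    for whatever the real system uses."""
    now = now if now is not None else time.time()
    def score(p):
        age_hours = (now - p.timestamp) / 3600.0
        return 2.0 * p.affinity + 0.1 * p.likes - 0.05 * age_hours
    return sorted(posts, key=score, reverse=True)[:limit]
```

Experimenting with what counts as 'relevant', in these terms, just means varying the scoring function and observing what happens.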
Facebook is an advertising company, and therefore the
product it sells is the induction of beliefs in its users. Coke, for example,
wants people to think that Coke is good and good for you, and that they want a
Coke now. The NRA wants you to believe that guns are harmless and that "they're
trying to take away our freedoms."
So the experiments basically measure whether the
presentation of posts created by friends and family, etc., rather than the
creation and presentation of actual advertising, can produce the desired result.
Intuitively, it should. Present nothing but crime stories in the news feed and
you'll end up thinking crime is everywhere. The experiments measure whether
this intuition is correct.
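As a toy illustration of that method, the sketch below scores posts with a crude word list, suppresses a fraction of posts of one emotional valence, and then measures the valence of what a user writes afterward. The word lists and function names are invented for illustration; the published study used the LIWC word-counting software, not anything this simple.

```python
import random

# Crude stand-in word lists; the actual study used the LIWC tool instead.
POSITIVE = {"love", "great", "happy", "wonderful", "good"}
NEGATIVE = {"sad", "awful", "hate", "terrible", "bad"}

def valence(text):
    """Positive score for predominantly positive wording, negative for negative."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def skewed_feed(posts, suppress="negative", drop_rate=0.3):
    """Hypothetical manipulation: randomly omit a fraction of posts
    of one emotional valence before showing the stream."""
    kept = []
    for text in posts:
        v = valence(text)
        targeted = (v < 0) if suppress == "negative" else (v > 0)
        if targeted and random.random() < drop_rate:
            continue
        kept.append(text)
    return kept

def mean_valence(posts):
    """Outcome measure: average valence of the user's own subsequent posts."""
    return sum(valence(t) for t in posts) / len(posts) if posts else 0.0
```

Compare mean_valence of users' posts under the skewed feed against a control group's, and you have the shape of the experiment.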
From the perspective of ethics, they are blending two
things which are, on the face of it, innocuous:
- they are altering stream results in an attempt to
produce a 'better' stream - something every content vendor everywhere does, and
has done since the days of FTP and UseNet
- they are accessing publicly available data to analyze
it for affective and cognitive properties, something we do as well, and
something that does not require permission from individual users
Does the combination of them create an ethical dilemma?
It's not clear to me that it does. Sure, it reinforces
Facebook's image as a somewhat greasy operation that will manipulate results in
order to satisfy the needs of its advertisers (hence making it no different
from Dr. Oz and your local news broadcast). But that's not unethical, at least,
not in the sense that you'd take them before the courts.
The question of ethics comes into the equation when there
exists some possibility of harm, and where that harm is a predictable outcome
of the experiment, and where that sort of harm would not normally be expected.
The classic case, of course, is the testing of drugs that have harmful
side-effects, where you have not disclosed the side-effects.
In the case of the manipulation of free digital content
to stimulate emotional responses, and then measuring those responses, the
presence of actual harm is a lot harder to show. The mere production of emotional responses is not harm; otherwise most of what we do every day would be
ethically wrong. The mere measurement of emotional responses is not harm
either.
If we don't actually harm someone, then how could it be
ethically unsound? Doing all this in secret could be ethically wrong. But
Facebook is not doing it in secret; it's all over the news.
There is a hard line in research ethics to the effect
that any interaction with a user needs to be declared beforehand, and conducted
with the explicit consent of the user. I don't subscribe to that line. In one
sense it is impractical. There are too many interactions and too many users to
require consent in advance. In a second sense it's unnecessary. Research is not
inherently evil, and studying people to find out how they work is not wrong.
And third, it can be harmful. Creating conditions of consent alters research
results; tell people their emotions are being monitored and they change their
emotions.
This is the point of disagreement:
"They formulated a research hypothesis and tested it on human subjects. For this, explicit consent is required."
Here is a counterexample:
Engineers have theories regarding the length of left-turn lanes on the highway. To test such a hypothesis, they construct a left-turn lane, and then measure how much it underfills or overfills. Based on this work they publish a paper. No research consent is obtained.
Should consent have been obtained? It fails the three tests. First, it is impractical. You can't have drivers fill out a consent form before they enter the intersection. Second, it is unnecessary. No harm will be caused by the research. And third, requiring consent changes the outcome.
So it seems clear to me that this statement is false. The requirement for explicit consent must depend on different conditions. I argue that consent is required only where actual harm may be caused, where it is practical to obtain consent, and where obtaining consent would not change the results.
Facebook's experiments on users and Emotional Contagion (via Peter Turney)
The full paper: http://www.pnas.org/content/111/24/8788.full
The engineering example would have a sign posted somewhere "lane design testing in progress" - they do this when measuring the effect of traffic over different kinds of marking paint as well. Disclosure. And they're not trying to intentionally skew the emotions of drivers, just to see how lane design impacts traffic flow. Similar words, entirely different things.
They needed to at least provide disclosure. The "…and research…" in the Facebook EULA is entirely inadequate for this purpose. It's completely unethical to do a human-subjects research project on participants who are a) unaware of the research and b) have not given informed consent. They did neither for this. That's not OK. Even if the "harm" is minimal, they crossed a pretty important line and have lost trust.
I have this horrible suspicion that in a world where 1 in 4 adults will experience mental illness at some point in their lives, deliberately making (potentially vulnerable) people miserable for no other reason than to prove that you can may not be an ethical experiment.
How many people lost jobs and relationships over the test period due to a depression that had a stream of miserable status updates as a contributing factor? How many people self-harmed as a result of the experiment? How many people committed suicide?
We don't know.
I agree with Stephen Downes. There were no ethical problems with Facebook studying its participants. The fact that participation in Facebook is completely voluntary, and the fact that we can choose what we want to see in our news feed, only further prove his point.