Explaining the Facebook data ethics issue: Nazis, topless models & Stanley Milgram

This sort of thing used to be considered normal… seriously?

A lot has been written of late about that Facebook data experiment. You know the one I mean. The one where, without taking any of the basic ethical measures a regulated scientist would take before undertaking research, Facebook conducted a massive-scale piece of psychological research. For a week in January 2012, 689,003 users unwittingly took part in an experiment by data scientist Adam Kramer to study whether emotions were “contagious”. When critics argued such studies require informed consent, Facebook referred them to its Data Use Policy, which covers “research”. But then it emerged that in 2012, when the experiment ran, the same document didn’t mention “research” at all.

So is Facebook trampling over our emotions in the most unethical experiment since the infamous work of Stanley Milgram in the 1960s? Or is it just perfectly normal digital user testing? It’s a thorny ethical problem of epic proportions…

 

Facebook, Nazis and the ghost of Milgram’s “shock box”

In many respects this story began in the 1960s.

Not long after the trial of Adolf Eichmann in 1961, Yale psychology professor Stanley Milgram performed a now-legendary experiment in human obedience, referred to ‘affectionately’ as Milgram’s shock box. It involved an authority figure instructing test subjects to deliver electric shocks to an actor in another booth (not for real, obviously, but the subjects didn’t know that). As the subjects increased the shocks from mild to apparently lethal, the actor in the other room called out, moaned and screamed. Most subjects went far enough to have killed the actor, had the shocks been real. When the test was repeated with a window, so subjects could see as well as hear the actor faking being electrocuted, most people stopped before reaching that point.

Milgram’s study was radical. It explored the notion of human obedience, and how easily we can act with extreme brutality if the conditions are right, namely when someone we perceive to be in authority gives us permission to abandon our own ethical standards of behaviour. The work has been used to explore how the horrific events of the Second World War, in particular the Holocaust, could have been carried out by people who were capable of inhuman acts of mass murder and yet were, at the same time, just ordinary people. It goes a long way towards explaining acts of wartime genocide and the brutal oppression of minority groups in the post-war years.

We’d all like to think we wouldn’t behave like that. However, the data Milgram gathered suggested humans are significantly more morally labile, when placed under pressure, than anyone would like to admit.

Milgram’s work was also groundbreaking for another reason. It highlighted the importance of ethics in scientific research, in particular psychology and medical studies. Milgram himself was criticised for breaching Yale’s ethical guidelines, and since the 1960s, most countries have imposed much stricter ethical controls on psychological researchers, to protect participants from the potentially damaging effects of unethical studies.

And then Facebook performed the biggest piece of psychological research in human history without even attempting to address the ethical requirements of that kind of research. Make no mistake: no psychologist would be allowed to perform that kind of study without making test subjects aware they were participating in a research project. Full stop.

But Facebook argued (after first claiming it was all in the small print of their user agreements) that what they did was no more unethical than the typical A/B testing all web companies do: monitoring user behaviour so adverts can be placed where they attract more clicks, or styling buttons and layout to make pages more usable.

So who’s right?
 

Debunking Facebook’s A/B testing argument…

The argument that Facebook was simply researching new methods to tune up their interface to sell adverts better, or make the site more user-friendly, doesn’t wash. This experiment was not about tracking user behaviour to optimise commercial performance. Facebook’s argument is a neatly wrapped idea, but unlike monitoring your users (using analytics, cookies and the usual tools) for commercial purposes – which is something all digital media companies do, btw – exploring people’s emotional states by biasing their Facebook feeds towards positive or negative content is a qualitatively different kind of experiment.
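For the technically minded, here’s a minimal sketch in Python of why the two things aren’t equivalent (every name and data item here is a hypothetical illustration of mine, not anything from Facebook’s actual systems): a routine A/B test randomises the wrapper around the content, whereas the emotion study filtered the content itself.

```python
# Hypothetical illustration only -- not Facebook's actual code.

def ab_test_variant(user_id: int) -> str:
    """Routine A/B testing: deterministically bucket a user into one of
    two button layouts, then measure which layout gets more clicks."""
    return "layout_a" if user_id % 2 == 0 else "layout_b"

def emotion_biased_feed(posts: list, suppress: str) -> list:
    """The study's approach: filter out posts of one emotional tone,
    skewing the user's entire picture of their friends' lives."""
    return [post for post in posts if post["sentiment"] != suppress]

posts = [
    {"text": "Best holiday ever!", "sentiment": "positive"},
    {"text": "Worst week of my life.", "sentiment": "negative"},
]

print(ab_test_variant(42))                     # varies the interface
print(emotion_biased_feed(posts, "positive"))  # varies the user's world
```

Both look like “experiments on users” from inside a codebase, but only the second removes the reference points the rest of this piece is about.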

The Facebook study claims to show something most people intuitively know: that if you see a majority of a certain kind of content (positive posts or negative posts, in this case), then you are more likely to adopt the corresponding mood and post similar stuff yourself.

Okay, so that’s the essence of trends in social media. In fact, everywhere. We see it in the news all the time: if everyone is talking about evil bankers, or TV celebrity paedophiles, or immigration, then buzz develops and more people post about that trending topic. It’s not a huge conceptual leap to think that, if you remove the topic and focus on the emotional dimension of content, i.e. the positive or negative emotions a post provokes, then yes, chances are we’ll engage in the same kind of buzz-driven trend mentality and post similar stuff.

So think of the Facebook experiment as one to define emotional trends, or emotional buzz. What’s the problem with that? Is it any different from seeding lots of content about Nike shoes, or Red Bull sponsored extreme sports events? Is it any different from paying influencer bloggers to write about brands, or using people for viral marketing and so on?

The answer is yes, but it’s hard to explain. It’s different because of the nature of choice within cognition: the way we form opinions and make decisions through both unconscious and conscious thought processes. To explain it, we need to visit the 1970 TVR stand at the London Motor Show… (see the picture above).
 

The ethics of psychology in marketing

In 1970, people didn’t think it was odd to place a couple of topless models on your stand at the London Motor Show in order to attract people and sell cars. It was simple marketing. TVR, the sports car manufacturer, knew its audience was mostly young men without families, and what do young men like? “Saucy birds with their boobs out”, as they used to call young women being exploited by patriarchal gender bias within society.

Now, if you took Facebook’s argument that their study was perfectly normal A/B testing at face value, it would be akin to saying it’s no different from using emotional triggers, like topless models, or celebrity endorsements, or constructed lifestyle images, to make users resonate with a product or a brand and feel more inclined to buy. It’s just using social imagery and lifestyle concepts to make people feel a certain way about buying your stuff, same as everyone else. Just like testing whether an advert placed above or below a piece of text gets more clicks. All just traditional, commercial experimentation, right?

Wrong.

In Facebook’s case, that’s not what they’ve done. You see, in any example of marketing, including TVR’s topless motor show stand, the user is aware of their environment and therefore aware of the artificial manipulation of it. Put simply, the family man visiting the motor show with his wife and kids didn’t see the TVR stand and think “I want one of those because of the saucy birds in their scanties”, because he’d also been to the Volvo stand, where a man called Sven in a cardigan had shown him how many bags of shopping / dogs / bikes etc. could fit in the back of a new Volvo Amazon estate. Therefore, he has an emotional choice. He can see the marketing activity that’s pushing his emotional buttons, even if only at an unconscious level, by virtue of the comparison between the different car brands and their different marketing approaches.

The same is true of A/B testing on websites. We may be unaware that another user is simultaneously seeing a different layout of the very page we’re looking at, sure, but we also intrinsically know that shops and websites experiment with the layout of their goods, presenting them in ways designed to influence our choices. Some people may not; they might have no inkling of why supermarkets put sweets and magazines by the checkout, or make you walk past lots of other, non-dairy produce to get to the milk. However, by visiting many different kinds of stores, from supermarkets to delicatessens to corner shops (and anywhere else sweets or milk is sold), they become aware that there are different approaches to selling us things.

Basically, we have cognitive awareness of marketing. We have an opportunity to choose whether or not we allow it to influence our decisions. Even if only unconsciously, we learn from experience that of course the salesperson in the shop will tell us their products are the best. Or their prices are the lowest. Or their brand of cereal will make us healthier. Or their cars will make us more attractive to women.

But when the whole environment is controlled, like what you see in your Facebook feed, you have no external reference points against which to make a judgement. The influencing variables (as psychologists call them) that are affecting your choices are hidden from you. You’re participating in an environment designed to manipulate your emotional state and decision-making without being able to do what your brain naturally does with any other kind of information: make a comparative judgement about the nature of the information you’re being exposed to.

To draw the analogy back to the TVR stand at the 1970 Motor Show, what Facebook did was put a pair of blinkers over your eyes so that you saw topless models on every stand at the show, from TVR to Volvo. To stretch the analogy further, you’d be seeing topless models in the bank, in the supermarket and in your house too. In that world, you couldn’t possibly form a view about topless models, other than thinking they were an intrinsic part of the world around you.

If you were a 1970s man, you’d go blind with that sort of thing. More to the point, you’d be unable to tell the difference between marketing and reality.

Naughty Facebook.
 

Converging business and society: ethical problems in the small print

The stinky cherry of shame on Facebook’s unethical cake is the fact that they have, two years later, added some words to the thousands of other words in their terms and conditions to cover themselves. Now, we all know it’s almost impossible to read those anyway. We also know very few people do. Why? Because all your buddies are on Facebook. It’s your digital social life. It plays a role in your life no other product or service has ever enjoyed. It’s a business built out of social lives, which is still very much new territory for commercial operations, so mistakes are bound to be made. However, Facebook know that. They’ve got loads of data, on everyone, so they must know people don’t actually read the terms and conditions, or the privacy policies, or the data use policies and so on.

Worse than that, they failed to even attempt to provide an ethical framework for the study. I asked a published consultant clinical psychologist friend about it (I have to withhold her name for professional legal reasons, but she has run a lot of studies and published some well-respected research), and she said this:

“They should have notified the test subjects that they were conducting behavioural research. They should have told them they couldn’t reveal precisely what it was, because that knowledge would skew the results and render the test invalid, however they were ethically bound to offer test subjects the ability to choose to participate in an emotional research study or not. That’s the basic requirement I’d have to meet to make this kind of study pass ethical standards in a hospital or university.”

She went on to say…

“The fact Facebook didn’t even attempt to address those basic ethical standards is what’s so challenging about this case; they must have known there was an ethical dimension to what they were doing.”

There is, perhaps, a reason why they didn’t seem to give a shit. The guy who devised the research is a data scientist. Data doesn’t have a right to ethical treatment; it’s just data. The fact that the data, in this case, was derived from people’s emotional states crossed a boundary between disciplines. Which, of course, is what Facebook does all the time: it’s converging commerce and social relationships in a way that has never happened before. People “like” branded content and see demographically targeted adverts popping up alongside their mates’ holiday snaps. It’s a new environment. So there’s a likelihood that people who were never bound by ethical standards of care towards other people are now finding themselves in positions they never thought they’d be in.

Which takes us back to Milgram. None of his test subjects (ordinary members of the public, recruited through newspaper adverts) actually expected to take part in an on-campus psych department experiment where they’d be called upon to electrocute a stranger to death, either.

But they still did.

So maybe in the new socialised digital economy, there’s a need to bring medical ethics to bear upon commercial operations where they cross the line between selling premium ad slots and making you feel depressed by filling your feed with stories about crime and illness.

Otherwise, one day, we might click the like button until some poor bastard gets fried with 40,000 volts, without even realising it.