I posted this yesterday on the subject of the #FacebookExperiment scandal, quoting from a Cornell press release:
Because the research was conducted independently by Facebook and Professor Hancock had access only to results – and not to any individual, identifiable data at any time – Cornell University’s Institutional Review Board concluded that he was not directly engaged in human research and that no review by the Cornell Human Research Protection Program was required.
I then went on to conclude:
So, then, it’s OK to use research that has been obtained without permission from any source whatsoever, as long as one cannot identify the
~~victims~~ ~~unwilling participants~~ social network users in question – creatures, incidentally, who occupy the lowest of all low strata in the 21st century litany of unobserved rights and excessive obligations.
The thesis of my post was that Facebook was not just doing what other tech corps out there are doing – which is true – but that their behaviours in testing out “emotional contagion” in their users was very similar to what our Coalition government here in the UK has been doing since 2010:
And if the ICO feels that data protection laws may have been broken when Facebook experimented on the way that people reacted to negative and positive stories, without asking their permission first and even though they’d signed up to a wide-ranging set of T&Cs, who is to say this Coalition government didn’t similarly break human rights laws when they decided to experiment on how a nation might react to a barrage of false stories about immigrants “nicking” jobs, the “scrounging” poor, the “feckless” disabled and a well-packaged myriad of other lies, distortions and half-truths?
Today, Jay Rosen, writing in the Washington Post, adds a further twist to the resistance a whole host of people should feel towards this entire adventure, arguing that the most culpable participants have been the universities themselves, for failing to observe the difference between “thin” and “thick” legitimacy:
Thin legitimacy is when the experiments conducted on human beings are: fully legal and completely normal, as in common practice across the industry, but there is no way to know if they are minimally ethical, because companies have no duty to think such matters through or share with us their methods.
Thick legitimacy: when experiments conducted on human beings are not only legal under U.S. law and common in practice but also attuned to the dark history of abuse in experimental situations and thus able to meet certain standards for transparency and ethical conduct – like, say, the American Psychological Association’s “informed consent” provision.
After speaking to people who work in pharmaceuticals, I’m more and more inclined to believe that tech corps have shrugged off both thick and thin legitimacy in a way that pharma, for example, would find very difficult to get away with. Perhaps the problem is the degree to which we’ve wanted to legislate data outwith the very specific field of medicine, as well as the wider issue of consent (whether spoken or unspoken) in general.
Ethics committees in a medical context are there to ensure two things: firstly, that people are protected, in an informed way and as far as possible, from the potentially toxic side-effects of otherwise useful experiments; and secondly, that the experiments carried out are robustly designed and take full advantage of the opportunities to learn and develop understanding. There’s no point in exposing people to the downsides of science if the options aren’t properly explored to secure the upsides; if maximum advantage isn’t part of the gameplan. And whilst we all understand why medical data should be collected, collated and handled with care (or at least we did before #caredata hit the screens), other kinds of data seem to have slipped through the net of our awareness and coherence.
So. Perhaps we should forget the nature of the data and focus our attention, instead, on its simple quantity. Given that, for example, a sufficiently clever and substantial collection of metadata can say far more about what someone intends than a close line-by-line reading of the content itself, I would suggest we stop defining when something requires thick legitimacy in terms of the intimacy, fragility or sensitivity of the material in question, and start defining it in terms of how much we hold. Big data means we can find out practically everything – assuming we have enough of it – from the virtual equivalent of rubbish bins strewn across the web. The data doesn’t need to be intimate or fragile or sensitive in itself to allow intimate or fragile or sensitive conclusions to be reached.
Thick legitimacy for everyone and everything above a certain size, then? I think so. A thick legitimacy which should imply the oversight of independent ethics committees – just as with pharmaceutical corps, just as with the medical sector – committees which, being bodies of the ethical and the proper, should know far more about the subject than a cack-handed PR awareness of the potential for reputational damage permits.