The Samaritans take their name from the biblical story of the Good Samaritan: Jesus answered, “A certain man was going down from Jerusalem to Jericho, and he fell among robbers, who both stripped him and beat him, and departed, leaving him half dead. By chance a certain priest was going down that way. When he [...]
Two days ago I posted on the awful mistake the Samaritans have committed in hanging on to the coattails of (bad) big business (not all big business is bad, of course), as they launched onto a sensitive and unsuspecting public a pretty standard scraper of Twitter’s supposedly public tweets. As I commented this morning on that very same post:
[...] But the question I’ve asked elsewhere (*not* casually tossing words out into the public domain) is whether the Samaritans are not only using “public” tweets (they’re actually only public in the sense that, for example, a supermarket car-park is a private space of public use – they can throw you off it without due legal process whenever they want) but have also been paying to use Twitter’s firehose. I suggest this because before you launch an app, you normally test it. So are we coming up against issues similar to the Facebook ones a few months ago when we discovered they were seeing how easy it was to make us happy or sad, without advising us of the frame of the experiment?
To then ask that:
Really, I would love the Samaritans to clarify the whole process leading up to launch day – and everything they did to finetune (if that’s the right word) the tech involved by using user-generated content.
As I argued:
This Radar app is actually the equivalent of rummaging through rubbish bags to collate, process and re-distribute the unhappiness thus unearthed. And that’s what I really object to the Samaritans doing: not the overhearing, which is fine by me – it’s out there, please do listen in if that’s what turns you on; no, it’s the collation and programmed RT-ing which is the really unpleasant thing here.
What’s more, and far more importantly, someone neatly summed up in one short tweet the main thrust of what this post of mine needed a thousand words to say:
“Previously ppl chose whether to seek help from @samaritans & controlled relationship doesnt #SamaritansRadar change who has agency?”
Meanwhile, this afternoon the Samaritans’ Twitter feed tweeted a link to an NSPCC story which indicates that suicidal feelings amongst children are at an all-time high in our society (the link in the tweet is broken for me, but I dug out the story referred to here) (the bold in this quote is mine):
4,517 counselling sessions were held by ChildLine (UK) in 2013/14 with children who talked about suicidal thoughts – a 117% increase since 2010/11. Nearly 6,000 of these children had told a counsellor that they had previously attempted suicide – a 43% increase on the year before. The vast majority of these children had not revealed their feelings to anyone else. ChildLine is urging these young people not to feel fearful or ashamed to tell others of their feelings.
The story then goes on to tell us:
More open and frank conversations are needed
The charities all strongly believe that more open and frank conversations should be encouraged with children to enable them to describe their feelings, and discuss issues such as self-worth, self-harm and suicidal feelings. Suicidal thoughts carry a stigma, which makes it hard for many young people to talk about, but it is important that this issue should be tackled with young people, parents and professionals.
I don’t know about you, but I find it very hard to square the circle here. On the one hand, an org like the Samaritans scrapes Twitter to tell third parties that the people they follow are potentially feeling suicidal, dramatically changing the agency of a once well-tested process. On the other, in practically the same virtual breath, it solemnly tweets broken links to stories which argue the indisputable: that open and frank conversations between young people, parents and professionals are needed. Conversations, that is – not Twitter algorithms lashing out disturbing retweets left, right and centre.
And if ChildLine is “urging young people not to feel fearful or ashamed to tell others of their feelings”, are we really sure that putting data-collation, processing and redistribution tools in the hands of everyone who’s too frightened to speak directly with their suicidal friends – but not afraid of trusting an algorithm! – is the best way of achieving such goals?
Tagging people’s language as suicidal and retweeting it is not going to get any productive conversations going. It’s going to make everyone self-censor; it’s going to make young people in particular – who use social media almost universally – find their digital tongues cut off at source through the fear of “discovery”; and, finally, these young people who are the object of the NSPCC article today will find it far easier to feel shame, stigma and desperation on a medium they’ve clearly made their own than to trust their communicative and sharing instincts – instincts we older lot should strive to admire and encourage.
It’s a generational thing, this: the designers and promoters of #SamaritansRadar are obviously young people too – accustomed, out of their wellbeing and fortunate mental health, to putting up effectively with the inevitable latterday stress of living in the goldfish bowl of the worldwide web. If they can do it, and even earn a wealthy living from it, why not use the same techniques – techniques they’re so familiar and comfortable with – to allow charitable organisations to enter this young people’s world so stridently?
Except that some young people, as we speak, healthy or otherwise, are slowly beginning to question the value of this data landgrab – this assumption that everything is fine and dandy in the interconnectedness we’ve had imposed on us by humongous technological interests.
And if healthy young people are beginning to doubt they want it, why must people with support needs be treated any differently?