Why is Twitter’s social-media software not legally responsible for the content it enables?

For a rather long time, I’ve been thinking about the subject of:

 […] the hijacking of the benefits of the knowledge society by those who have created the social web.

Let’s just rewind and see how it could’ve been: a society where brains, applied to ideas, developed and implemented technologies on a massive scale – technologies which became cheap enough for everyone to remove drudgery from their ordinary lives and so release the human mind for much better things.

What do we have instead?  Poorly paid – or even unpaid – worker bees (that’s you and me on Twitter and Facebook) inputting data for the software code of such a social web to generate outputs which fascinate companies and allow them to better identify their markets.

Yes.  We are now generating the data for corporations which not only make money out of us directly through advertising (Facebook and Twitter) but also sell our personal details to other organisations (food and consumer-durable manufacturers, for example) so that they may better sell their products to us.  We are now an outsourced part of this latter group of companies’ marketing departments.  Instead of costly opinion polls and focus groups, all they have to do is pay a modest sum to examine Twitter’s firehose (its full complement of content, to which the rest of us have no access beyond a maximum of about seven days of search) and thus use our freely inputted data to better sell us their products.

I go on to conclude:

[…] The problem is that these software companies have worked out a way of attracting us to sit down for free in front of our monitors and screens, and input devices various, and create content which substitutes the stuff they promised us fifty years ago was going to release us from the drudgery of manual labour.

Essentially, it would seem the long-promised knowledge economy has been hijacked and dumbed-down by the requirements of the social web.  And, right now, I really cannot see our way around it.

In reality, what we have here is social-media software which first simplifies the inputs it demands of us and then, once they are harvested, automatically puts them back together so as to make them sufficiently re-complexed to be of interest.

Arguably, without the software to give automated form to the content so produced, we wouldn’t have anything anyone would really want to witness.  Random 140-character text messages which relate in no way to one another?  Who’d care to enjoy an afternoon of that?

So why do I return to an issue I did to death a while ago?  Because, in the light of quite reasonable demands for defamatory reparations, it occurs to me that, in social media, we have a less than clear division between publisher and distributor.  Now I’m quite unaware whether – in previous court cases relating to, for example, obscenity trials of books or other historically significant offline content – distributors of such books ran the same risks as the publishers themselves.  But I wouldn’t be surprised if the history of our country had thrown up prior examples of both parties being asked to carry the can of responsibility.

In this case, however, in particular in relation to Twitter, which is what seems to occupy our minds most vigorously at the moment, I would argue that the division between the two roles of publisher and distributor is far more difficult to delineate.  Twitter’s current Terms of Service make it very clear that each user is entirely responsible for their own content:

You are responsible for your use of the Services, for any Content you post to the Services, and for any consequences thereof. The Content you submit, post, or display will be able to be viewed by other users of the Services and through third party services and websites (go to the account settings page to control who sees your Content). You should only provide Content that you are comfortable sharing with others under these Terms.

They then go on to say:

You may use the Services only if you can form a binding contract with Twitter and are not a person barred from receiving services under the laws of the United States or other applicable jurisdiction. If you are accepting these Terms and using the Services on behalf of a company, organization, government, or other legal entity, you represent and warrant that you are authorized to do so. You may use the Services only in compliance with these Terms and all applicable local, state, national, and international laws, rules and regulations.

So far, so good.  Content thus generated is under the jurisdiction of – presumably – where one is resident.  Or, alternatively, one’s nationality.  Or, perhaps, where one has tweeted from.

But in any case, wherever the rules and regulations are applicable.


Form, however, is quite a different matter.  As far as the software is concerned – and Twitter’s own corporate liability – Californian law is deemed to govern everything else:

These Terms and any action related thereto will be governed by the laws of the State of California without regard to or application of its conflict of law provisions or your state or country of residence. All claims, legal proceedings or litigation arising in connection with the Services will be brought solely in the federal or state courts located in San Francisco County, California, United States, and you consent to the jurisdiction of and venue in such courts and waive any objection as to inconvenient forum.

And the liability in question is limited thus (the bold is mine):



I would rule this significant and almost certainly deliberate.  I’m no expert in law, much less in Californian law, but I’m pretty sure it’ll make it easier to sell and distribute software which makes dumbed-down content more complex and interesting, without falling foul of legal complaints about issues of free speech, than would be the case for, say, its European counterparts.

What I’m really saying with all of this is that Twitter’s Terms of Service attempt to argue that its software simply distributes and does not publish.  It takes no responsibility for the bringing together of such content – and it consequently allows form to come under one jurisdiction while content, thus defined, belongs entirely to the user.  (Though we know that even this is not true: a user cannot normally access more than a limited number of tweets back in time, whilst companies pay Twitter good money to access on a massive scale such ancient thoughts and occurrences.)

My argument, however, would run as follows: deliberately dumbing down individual ideas into 140-character gobbets and then bringing them together automatically to create interesting streams of thought involves not just the process of distribution but also the process of transformation.  We are not just talking about giving someone else the tool to publish off their own bat: microblogging (i.e. Twitter) is essentially different from its much more discursive and single-authored precursor – which is to say, the blogging you see in front of you right now.  Microblogging, essentially, is collaborative writing which involves many, many others – and in order for it to work someone, or something, needs to sort and filter the information.

That is to say, give it shape.  Edit and give sense and sensibility to what would otherwise be a morass of idiocies.

So who are the authors who write in a microblogging site like Twitter?  Obviously the individuals who post.  But also, surely, if we’re being realistic, the software which joins as a seamless whole the activities of so many busy worker bees; which is programmed and designed from the ground up to prioritise speed of transmission over reflection; and which aims above all to privilege the latest over the lasting.
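To make concrete the kind of automated editing I’m describing, here is a minimal sketch – with every name, field and number a hypothetical stand-in, not anything Twitter has published – of a timeline that filters many authors’ short messages and privileges the latest over the lasting:

```python
from datetime import datetime, timedelta

# Each tweet is a small record; the software, not any single author,
# decides inclusion and order.
def build_timeline(tweets, followed, now, max_age_hours=24):
    """Keep only followed authors and recent posts, newest first."""
    cutoff = now - timedelta(hours=max_age_hours)
    visible = [t for t in tweets
               if t["author"] in followed and t["timestamp"] >= cutoff]
    # "The latest over the lasting": a pure recency sort.
    return sorted(visible, key=lambda t: t["timestamp"], reverse=True)

now = datetime(2012, 11, 21, 12, 0)
tweets = [
    {"author": "alice", "text": "first thought", "timestamp": now - timedelta(hours=2)},
    {"author": "bob",   "text": "a reply",       "timestamp": now - timedelta(hours=1)},
    {"author": "carol", "text": "old news",      "timestamp": now - timedelta(hours=30)},
]
timeline = build_timeline(tweets, followed={"alice", "bob", "carol"}, now=now)
print([t["text"] for t in timeline])  # the stale post is dropped entirely
```

The point of the sketch is simply that the shape of the stream – what survives, what leads – is a design decision baked into code, not a choice any of the posting authors made.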

Which is why we finally come to the question I pose at the top of this post: why is a company like Twitter’s social-media software not also legally responsible for what it – basically – creates? Or at the very least enables?

The software, that is – and, by extension, the company.

For if the form which it gives involves fundamental transformation of the content its “employees” end up generating, the line between content and form is far more blurred than any post-modern attempt to confuse our senses could ever achieve.

So think about it.

And then ask.

And then come back here, in order to tell me what they said.

4 comments for “Why is Twitter’s social-media software not legally responsible for the content it enables?”

  1. November 21, 2012 at 10:05 am

    So many interesting points, hard to know where to start…

    Essentially I see two largely unrelated issues: (1) the devaluation of the input (art, writing, science) because of the drive to encourage participation and sharing, and (2) the impunity of the service from liability over what’s published.

    The first is a worry I have described for years as Audience Monopoly. The critical mass needed to make such a service financially viable limits the number of services out there, meaning the service providers hold all the strings.

This has massive implications for competition and privacy, which I pull together in Identonomics.

As far as “art”, journalism etc are concerned, there will be a re-correction where new business models are found to pay creators – of that I’m sure. In fact it’s already happening: “digital” companies are paying for the skills previously paid for by old media.

However, there will no longer be a market for some previous skills, due to there being no demand because of “citizen” participation. I don’t necessarily see that as a bad thing, so long as society doesn’t lose out, e.g. due to a lack of proper investigative journalism. I’m sure, though, society will find a way for the things that are important to continue; important things have a funny knack of happening despite the challenges.

    But should Twitter et al be legally liable for misuses, is there a moral or legal argument for this? Well I guess there is, but I personally think we’re better off with the liability sitting with the participant.

In the same way that no company in its right mind would want to deliver letters if it were held liable for the content of the letters, we need to create a somewhat blurred barrier between service providers and content creators in order to make it viable for the likes of Twitter, Blogger, etc to exist as businesses.

Moreover, all these services are for the moment free; so whilst on the one hand they are capitalising on social input, they are providing a social good too – although the jury’s out on whether having free email, blogging, etc makes up for the potential damage to society, both through the privacy angle and through the capacity of a small number of companies to control what goes on their networks.

But the alternative to service-provider indemnity will inevitably be many more restrictions on use. Twitter, for example, may introduce registration checks: great for stamping out the idiots wanting to sling mud in the UK, but not so great for someone trying to talk about life in Iran.

Blogging platforms may insist on a fee as the only way to cover their liability for the times they get sued. With shared liability I doubt instantaneous reaction will survive, as many sites shift to pre-moderated or pre-filtered content.

There’s another threat in automated censorship. If Twitter, for example, had joint liability, it might pro-actively filter potentially libellous messages, meaning it would be hard to talk about unrelated affairs where similar names were used – try Wikipedia for the Scunthorpe Problem.

I’m as certain as I can be that service providers need such an indemnity to survive in a form that is most beneficial to society. I know it’s a hard balance as to where to draw the line, but we also have to look at the “distributed”, “soft” regulation on sites like Twitter which comes through user participation.

    I have a theory: if you have a broad enough section of society participating in an online forum and a few basic tools such as block/ignore and a feedback mechanism of some sort the discussion becomes self-moderating.

People choose who to add their “vote” to (feedback, e.g. follow/unfollow or vote up, block). A problem comes when the system is subverted for commercial gain (e.g. spamming).

If the participation is less broad there are insufficient natural moderators – people whose participation seems to drown out even the flamiest of loud-mouthed ranters.
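    As a minimal sketch of what I mean by self-moderation – with every threshold and penalty here purely illustrative, not any platform’s actual mechanism – imagine visibility computed from the crowd’s votes and blocks:

```python
# Hypothetical self-moderation: each reader's vote or block feeds back
# into what the rest of the crowd sees.
def visibility_score(post, votes, blocks):
    """Net up-votes, minus a penalty for each reader who blocked the author."""
    return votes.get(post["id"], 0) - 2 * blocks.get(post["author"], 0)

def moderated_feed(posts, votes, blocks, threshold=0):
    """Show only posts whose crowd-derived score clears the threshold."""
    return [p for p in posts if visibility_score(p, votes, blocks) >= threshold]

posts = [
    {"id": 1, "author": "ranter",   "text": "flame"},
    {"id": 2, "author": "sensible", "text": "useful point"},
]
votes = {1: 1, 2: 3}       # up-votes per post id
blocks = {"ranter": 5}     # readers who have blocked each author
print([p["text"] for p in moderated_feed(posts, votes, blocks)])
```

    With enough participants the blocks swamp the ranters without any central moderator; with too few, as noted above, there is nobody to do the drowning out.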

    A question that should be asked is whether and when legal intervention is even needed, since without the law there is no question of liability.

    Now I know this sounds quite crazy, and I’m not actually talking about personal liability. Take the issue of Chris Cairns being libelled on Twitter by ex-IPL commissioner Lalit Modi.

    The allegation only had credibility because of the status of the person making the allegation.

    We really have to face up to the reality that 10,000 people tweeting a rumour or mere innuendo is substantially different to one person of credibility making a highly pointed accusation.

    We don’t give readers enough credit. It’s not just highbrow intellectuals who ask themselves where the evidence is, I’d guess at a good 80% of the population know instinctively not to believe something just because someone wrote it down or put it on Twitter.

    As for the problem with trends, well a trend needs to reach a critical mass, a tipping point. McAlpine only really took off when the BBC added credibility to a long-standing online conspiracy theory. And that online theory only bubbled on under the surface because of a bizarre series of what can only be described as cover-ups (or at least serious failures to investigate transparently).

    The online medium is nowhere near as volatile as the Daily Mail would like us to believe, and I say that after spending much of the last four years delving deep into all sorts of conspiracy nonsense to try and understand how messages propagate, gain credibility, etc.

    • mil
      November 21, 2012 at 1:34 pm

      Thanks so much for this – for taking the time out and sharing your knowledge. A brief comment with what you start with – if anything more occurs to me, I’ll post later on today more fully:

      “Essentially I see two largely unrelated issues: (1) the devaluation of the input (art, writing, science) because of the drive to encourage participation and sharing, and (2) the impunity of the service from liability over what’s published.”

      I put them in the same post because I don’t see them as unrelated. The Twitter software constitution dumbs down by design; the result is crowdsourced – but not by the crowd. Rather, it’s the design of the software itself which produces cogent content. That this is automated through code doesn’t mean the intentionality of the system doesn’t belong to the human beings who made the design decisions and earn a living from its overall output. It’s not a moral responsibility/liability I’m arguing for here: what I’m actually saying is that Twitter is *not* like other blogging, where clear authorial lines of command exist: in essence, this thing we call microblogging only works because something else, apart from the human beings, edits, sorts, filters, makes cogent and, finally, makes complex again. And this something did come from other humans who are not the direct authors of the dumbed-down content: authors who continue to own, run, tweak and adapt the service. As well as make a living off it.

      Editing content adds value to such content, whether automated or not, and therefore – in my opinion – also must transform (therefore “write” or “publish”) that content.

      Blogger clearly – to my mind, anyhow – never crosses the distributor line. But microblogging environments tease out meaning for multiple entries and authors which by themselves would be meaningless. Here, then, a line *is* being crossed.

      Now I understand that if we go down this route, then we may lose Web 2.0. But I also think it unreasonable that a participant can be made responsible for their content when the true meaning of their content depends on a multi-author stream which only exists because a company like Twitter has deliberately designed certain behaviours into the system.

      Especially if legal implications then arise.

      You’re right. The solution would be for the crowd to moderate and for the law to leave be. But this is not going to happen. In a previous post, I suggested a voluntary flag for potentially libellous content: this wouldn’t censor anything, but – according to your jurisdiction – would warn you before you pressed the “Send” button that what you were about to send *could* be libellous. Much like the squiggly green line in Word when flagging up grammar issues. Such content-scraping software is already used by the legal eagles. Why not incorporate it into social-networking software?
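      A crude illustration of how such a pre-send flag might work – the patterns below are entirely hypothetical stand-ins for whatever the legal eagles’ content-scraping software actually uses, and a real system would need jurisdiction-specific rules:

```python
import re

# Hypothetical risk patterns; merely illustrative, not legal advice.
RISKY_PATTERNS = [
    r"\bis a (liar|fraud|thief)\b",
    r"\bstole\b",
]

def libel_warning(draft):
    """Return a warning string if the draft matches a risky pattern, else None."""
    for pattern in RISKY_PATTERNS:
        if re.search(pattern, draft, re.IGNORECASE):
            return "Warning: this tweet could be libellous. Send anyway?"
    return None

print(libel_warning("X is a fraud"))   # triggers the squiggly-line warning
print(libel_warning("lovely supper"))  # nothing flagged
```

      Like Word’s grammar checker, nothing is censored: the user sees the warning and still decides whether to press “Send”.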

      Furthermore, as I think I mentioned in that same post, if it’s so easy to engineer and has already been done by others, why haven’t the social-networking companies cared to do it yet themselves?

      Anything, perhaps, to do with business models which depend on scandal?


      • November 21, 2012 at 3:07 pm

        I think I see, but I don’t necessarily agree. At the moment, with Twitter in its current form, in any case.

        I don’t feel the multiplexing of streams in this way, or any aggregation technique they deploy, fundamentally alters the context. Rather I see each tweet and thread as a conversation in the traditional sense, albeit with a limit on the size of the contribution.

        I do see cases where the “software creates the libel” from its automatic association, but not in this case. E.g. it could interpret multiple sources and attempt to draw conclusions, and state those conclusions as fact. Which Twitter doesn’t do.

        As for the flag, that still raises questions of who controls the controls. Let’s say a company wants to quell rumours about its working practices and coerces Twitter into flagging up words, in certain contexts relating to the company, as potentially libellous.

        Would a small business have the same access? Where are the checks and balances to prevent abuse of process? Is a court order required?

        Whilst you could argue it’s only a flag, I feel it’s important not to overlook the effect such a flag might have on participation. It might put off a large law-abiding section from participating, leaving debate to the very brave and very foolish, as I discuss here: http://www.sroc.eu/2012/11/the-de-democratisation-of-democratised.html

        • mil
          November 21, 2012 at 10:06 pm

          It’s not context but content which I think is being transformed. But I do understand where you’re coming from, and do understand that Web 2.0 has added many good things to our lives.

          I’m not trying to destroy that – rather, understand how to deal with a state and establishment which doesn’t understand the values that a generation used to sharing could add to our future progress. I fear that we are in the anteroom of a wholesale shutdown of anything which doesn’t involve exchanging pictures of suppers, cats or cups of coffee. God forbid that our citizens should want to publicly debate democracy, for example.


          By the way, I think I linked to your de-democratisation piece the other day. Excellent as always. I wish I knew half of what you do, or had half the confidence you manifest.
