Samaritans Radar and use cases

When you develop a piece of software or an app, it is common to engage in some evaluative thinking about use cases and user needs. And this isn’t simple. Use cases are often complicated, and user needs often contradictory. It’s hard to make ‘the right’ decision, especially when you are creating a tool that directly influences and alters an existing social environment and ecosystem – be that online or offline.

Earlier this week, to much fanfare in the press, the Samaritans released a new app called the Samaritans Radar, with dramatic claims about how this app is going to save lives. The discussion on Twitter, the space directly influenced by the new app, has been much more nuanced, and there has been much furore from a number of people – including those who identify as using Twitter to discuss mental health issues, or who have previously been subjected to online abuse.

What is the Samaritans Radar?

The Samaritans Radar is an app that works on Twitter, reading the feeds of Twitter users who have not made their accounts private. The ‘user’ of the app is an individual who wishes to receive email alerts when people they follow post certain tweets – tweets that the app, based upon content analysis, decides may be correlated with being depressed.

Originally, any individual who used Twitter, had not chosen to make their feed private, and had a friend who decided to use the app would have their tweets read, scanned and subjected to analysis by the Samaritans app. There is now a process by which Twitter users may opt out of being the subject of this app – although it requires contacting the Samaritans directly, and being put on a ‘white list’.
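To make the mechanics concrete, here is a minimal sketch of what a flagging flow like this might look like. It is entirely illustrative: the Samaritans have not published their analysis method, and the phrase list, whitelist handling, handles and alerting below are my own assumptions, not their code.

    # Purely illustrative sketch -- not the Samaritans' published method.
    WARNING_PHRASES = ["tired of being alone", "hate myself", "no way out"]
    OPT_OUT_WHITELIST = {"opted_out_handle"}  # maintained by hand, per their opt-out process

    def flag_tweet(author_handle, text):
        """Return True if this tweet should trigger alerts to app users following the author."""
        if author_handle in OPT_OUT_WHITELIST:
            return False  # whitelisted accounts are skipped entirely
        lowered = text.lower()
        return any(phrase in lowered for phrase in WARNING_PHRASES)

    def alert_followers(app_users_following, author_handle, text):
        # Stand-in for the email step: every app user who follows the author is notified.
        for app_user in app_users_following:
            print(f"Email to {app_user}: possible concerning tweet from @{author_handle}: {text!r}")

    if flag_tweet("some_handle", "feels like there is no way out"):
        alert_followers(["worried_friend", "total_stranger"], "some_handle", "feels like there is no way out")

Even in this toy form, the asymmetry is visible: the person being analysed appears only as data, while every control belongs to the follower.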

My use case and user story

I’ve written a use case and user story about myself here. It was originally part of this blog post, but I removed it to make the post shorter and easier to read, and it felt clunky to include something so personal. However, it adds weight to the point I am trying to convey to the Samaritans, so I wanted to include it somewhere.

The summary of my user story is that a piece of software that potentially analyses my tweets and alerts my friends to what it considers to be negative content has absolutely no positive effects for me. Instead it will cause me harm, making me even more self-aware about how I present in a public space, and making it difficult for me to engage in relatively safe conversations online for fear that a warning might be triggered to an unknown follower.

And the use cases and user stories of others…

However, I am not the only person who may be affected (either positively or negatively) by a piece of software such as the Samaritans Radar being implemented on top of Twitter.

Operating in a public space requires a degree of negotiation between the needs and wants of many individuals. And Twitter is one such public space, with a very complex (existing, but always changing) social ecosystem. People use the infrastructure provided in a huge variety of ways, shaped by a combination of the online ecosystem and their own personal preferences.

Therefore, to think about the Samaritans Radar and to evaluate its value, we need to think about the large number of different Twitter users and potential Samaritans Radar app users. To mention just a few possible user groups whose use of Twitter relates to mental health:

  • Individual who has no mental health concerns and doesn’t engage in any online discussion
  • Individual who has no mental health concerns, but engages in online conversations
  • Individual with mental health concerns who doesn’t see Twitter as playing any role for them
  • Individual who has mental health concerns, and uses Twitter as part of ongoing activism
  • Individual who has mental health concerns, and engages in small conversations and gains support in small groups
  • Individual who has mental health concerns, and has few or no existing support mechanisms
  • Individual who is subject to online abuse for various reasons (be that in relation to activism in other areas, gender, or sexuality) – and who may or may not have mental health concerns

Each of these Twitter users is likely to be followed by a variety of people – some of whom they know, some of whom they don’t, and a very large number somewhere in between. Although tweets from accounts which are not locked can easily be read by any individual, everyone differs in how they choose to use Twitter (as a balance of broadcasting and engaging), and with individual tweets they make decisions about how public or accessible to be (be that in conversation, or using hashtags). (EDIT – Paul Bernal has written a great post about this here)

Furthermore, individuals interact differently with different followers. A Twitter user doesn’t want or expect the same degree or type of interaction from every individual who has made the decision to follow them.

Potential Radar users

There are also a number of different potential Radar users. These people will all be Twitter users, but not everyone who has Twitter will become a ‘user’ of the Samaritans Radar. For ease, I’ve pulled out three main groups:

  • Person who knows they follow people on Twitter who might have mental health issues, wants to keep an eye on those people, and is able to provide meaningful support to them.
  • Person who knows they follow people on Twitter who might have mental health issues, wants to keep an eye on those people, and is unable to provide meaningful support to them (whether through not being emotionally mature or able enough to support those with mental health issues, not knowing said people as well as they think, the intended recipient of ‘help’ not wishing for support from that individual, etc.)
  • Person who wants to hurt and abuse vulnerable people (and a lot of the conversations around online abuse show that this is a known category of individuals)

Who is the Samaritans Radar designed for?

The whole focus of the app is on the ‘user’: a follower who may wish to identify when people they follow on Twitter are vulnerable. Originally, the privacy section of their website only mentioned the privacy of the user of the app – the person doing the monitoring – with no mention of any privacy concerns for the individuals being followed. You can argue that tweets are in the public domain and that there are therefore no privacy concerns (something I disagree with, but that’s another post), but you cannot deny that the app very clearly is designed for the follower, not for the specific benefit of an individual with a possible mental health issue. An app designed for an individual with mental health problems would be very unlikely to remove the agency of the individual in such a fashion.

Beyond this, it feels like Radar has been set up with a very specific type of Twitter user and a very specific type of Radar user in mind. The type of Radar user appears to be an individual who will do no or little harm – either deliberate or accidental – through their engagement with a specific person with possible mental health issues that the app flags up. And the type of Twitter user is someone who is (for a variety of reasons) open about their mental health and depressive spells on Twitter, wants people to look out for them, and isn’t able to flag concerns to an existing support network.

And this in itself is a laudable aim, one that really does have the potential to do much good.

But the major problem comes in assuming that this very specific case is the only use to which an app such as the Samaritans Radar could be put; that the existence of the Samaritans Radar does no harm, limited harm, or a comparatively insignificant amount of harm to other individuals (compared to the benefits it may provide to some); and that, by default, Twitter users want to allow their followers the option to monitor their tweets in this fashion.

Radar users may not be benevolent. Online abuse can and does take place – with individuals being bullied and told they should go and kill themselves. If online monitoring of individuals (with mental health problems or not) were made even easier, that seems like a significant and probable source of harm – and something a mental health charity should not be enabling.

Radar users may be less than great at identifying when and how someone with mental health issues wants to talk. If individuals need to be poked to remember to check on someone, they may well not be the type of person someone with mental health problems wants to talk to.

The Samaritans only introduced an opt-out system for Radar after a number of people on Twitter raised concerns, and there has not yet been any recognition from them that the existence of such an app, opt in or opt out, affects the social ecosystem of Twitter. And this ecosystem includes a large number of individuals who are already finding ways to support themselves and to manage some of the effects of mental health problems online.

A number of these individuals are now stating that the way the Samaritans Radar has been implemented affects the ecosystem in which they operate. Yes, this may be self-reported, but it comes from vulnerable individuals – exactly the type of vulnerable individuals with whom the Samaritans should be concerned – and it should not be taken lightly.

At the same time, we’ve seen no evidence of the benefits of the app – despite much talk of work with academics, and of time spent on the app’s development.

And before potentially inflicting harm, ignoring critical comments from users with exactly the type of condition the app is supposed to help, and rolling the app out globally, there should be good reason to believe that the overall benefits outweigh the overall costs.

So I have a few questions to throw to the Samaritans:

  • What, if any, work or assessment have you done on evaluating the benefits that may be provided by the Samaritans Radar?
  • What, if any, work or assessment have you done on considering user stories other than those of a benevolent and capable Radar user, and a Twitter user who broadcasts a need for help?
  • What, if any, work or assessment have you done of the negative impacts such an app may have on the existing support networks of those with mental health issues who use Twitter?
  • What, if any, work or assessment have you done of the impact of making the Samaritans Radar opt-in? Was an opt-in app ever considered before the initial backlash on Twitter? If so, why was it rejected?

I look forward to their response.

10 thoughts on “Samaritans Radar and use cases”

  1. Firstly, has it been confirmed that protecting an account will prevent the Radar from processing it? It seems that the Radar sets up an account for a user based on Twitter OAuth, and the user grants it authorisation to read tweets on their behalf – in the same way that you might do with Tweetdeck or similar. So it seems that the Radar can, in theory, process any tweet from a protected account that the user is following, on that user’s behalf, using that user’s authorisation.
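    To illustrate the mechanics (a minimal sketch, assuming the tweepy library and Twitter’s standard OAuth flow – the Radar’s actual code is not public, and the credentials and analysis hook below are placeholders): any app holding a user’s token can read that user’s entire home timeline, protected accounts included.

        import tweepy

        # Sketch of user-context access, assuming tweepy and OAuth 1.0a.
        # Placeholder credentials; the Radar's real implementation is not public.
        auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
        auth.set_access_token("USER_ACCESS_TOKEN", "USER_ACCESS_SECRET")
        api = tweepy.API(auth)

        # home_timeline returns tweets from everyone the authorising user
        # follows -- protected accounts included, because the app inherits
        # the user's own access wholesale.
        for tweet in api.home_timeline(count=200):
            process(tweet.user.screen_name, tweet.text)  # hypothetical analysis hook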

    Moving on from that, there are two organisations here that are involved in collecting, making available and processing the tweets – the Samaritans, and Twitter. I am desperately concerned that all of the post-Radar coverage focuses on the Samaritans, and there is no examination of Twitter’s own role in this. Yes, the Samaritans have questions to answer, and yes, they are in a position to act most quickly on the concerns. But they are not acting alone.

    If we go down the route of asking whether there is a data protection issue with the content and rights to use it, then don’t we have to ask whether Twitter is also liable for having collected and passed on the information in the first instance?

    And then we have everything that has been said about not giving consent, about registering with a third party to opt in or out, and about how new users would even know this exists.

    All of which comes back to the Twitter infrastructure. As a user I can authorise an app to use the APIs with my credentials, to make use of any content I have access to. But I don’t “own” the vast majority of that content – it is made up of things tweeted by other people. And they don’t have any say over which apps are allowed to be authorised to operate on their content.

    If those controls existed – if we could control what goes out of the API in which context, and limit visibility in searches, independently of whether tweets can be viewed anonymously – then it would address a lot of these issues.

    And whilst I expect that some online abuse will come from people making use of Radar, there are other ways of finding targets to abuse which are easier than having to follow people in order to get an alert. Making changes to the authorisation of apps in Twitter doesn’t just address a lot of the concerns about Radar; it can fundamentally make it harder to use Twitter to abuse people.

    I agree that the Samaritans have questions to answer, and that there are ethical issues in any automated processing. I completely respect people’s concerns, and recognise the fear that this is causing. I’m not making the case for it to remain. But if we just focus on the Radar, and not on what made it possible, we may not be doing much to protect/support those that feel threatened by it.


  2. “you cannot deny that the app very clearly is designed for the follower, not for the specific benefit of an individual with a possible mental health issue.”

    Bingo. As one (Radar-positive) poster put it here: “My understanding is that it is designed to avoid those horrible occasions when people die and, when their friends look back at their communications, they realise they were trying to ask for help but never got it.” – it’s geared to the needs of friends/family wanting to help MH sufferers, not the needs and wishes of the MH persons themselves.

