Samaritans Radar – What happens next

At 6pm (GMT) on Friday 7 November, the Samaritans announced they were suspending the Samaritans Radar. I’m not going to go into how they suspended it or why they suspended it.

Instead, I want to look at what happens next. The statement made by the Samaritans on suspending the tool included the line:

We will use the time we have now to engage in further dialogue with a range of partners, including in the mental health sector and beyond in order to evaluate the feedback and get further input. We will also be testing a number of potential changes and adaptations to the app to make it as safe and effective as possible for both subscribers and their followers.

One of the mechanisms by which they hope to capture feedback is a survey, which was released in a footnote of the suspension notice. As far as I know, this survey has not been promoted by the Samaritans in any other way, although a number of individuals on the #SamaritansRadar hashtag have been talking about it.

There have been some concerns expressed about the methodological limitations of the survey – likely because it was written in a bit of a panic last week. However, I want to make these limitations clear, so that the Samaritans fully understand the restrictions on the input they gain from the survey, and so that those who have been expressing concern about the Samaritans Radar have the opportunity to provide some critique.

Therefore, I have set up a Google Document (found here) that anyone should be able to view, edit and comment upon anonymously. If, for any reason, this Google doc isn’t suitable for you, let me know via Twitter (@MLBrook) or send me an email (michelle@michellebrook.org), and I will send you a .doc copy with the existing comments. You can then make your comments and send a copy back, and I’ll add them to the Google Document – which acts as the canonical version.

If you are concerned about anonymity, feel free to set up a temporary email address somewhere which we can use to exchange email documents.

I’ll then write up the comments into a blog post, and from that write a letter to the CEO of the Samaritans. Anyone who wishes may be credited for their input – or may remain as anonymous as they like.

 

Samaritans Radar and the big questions…

Regardless of what you think of the Samaritans Radar (and, as a previous post states, I’m not a fan), it highlights a number of really interesting and fundamental questions around tech policy, ethics, the legal aspects of operating in an international space, aspects of data, and how people view NGOs and charities.

I’ve been meaning to write blog posts on many aspects for a very long time. But for now, I’ll just provide some short summaries about some of the issues it highlights – and expand on them as I have time.

Criticising charities/3rd sector organisations

How should individuals criticise or make constructive comment on the actions of charities or third sector organisations? For instance, I know many people who would like to publicly criticise the Samaritans’ current approach – and who would do so if it were Government or business doing something similar – but feel that they shouldn’t say anything because of the huge good the organisation has done in other spaces.

Others I’ve spoken to automatically assume that because an organisation which aims to do good has done this, it is stamped with some mark of authority (with presumed research or due diligence carried out), or is automatically a definite good.

As we see more innovation in the third sector, and see NGOs and charities coming online and engaging with digital and tech, there will be a large number of problems – just as there have been messes in other sectors. Mistakes may lead to a bad day at work, or someone’s last day in that job, but they rarely systematically affect the standing of the organisation once they are seen to be corrected.

How will individuals or society feel about engaging with these organisations, or offering even constructive criticism, when other messes occur? How do these charities/NGOs accept – as Government has had to do with the creation of GDS – that they might not always be at the forefront of digital/social, and how do they change to fit that? How do they feel about a changed role in digital, where they may not always be able to offer *the* voice of authority on specific issues, and instead have to accept a greater democratisation of expertise? And how does all of this happen in a way that doesn’t discourage innovation in this sector, but also doesn’t prevent questioning when well-meaning actions may cause harm?


Because it’s possible, does that mean it’s OK?

As ever with technological developments, practice runs ahead of legislation – with businesses often pushing boundaries to maximise their benefit (profit?) and to establish new norms before laws catch up. Indeed, with the Samaritans Radar, it’s true there are a number of existing ways of doing similar monitoring – for instance keeping an eye on search terms, or writing your own code, or just skim-reading the tweets of individuals (although that at least leaves people the ability to delete tweets within a short period of time).

Many people know that businesses, organisations, and individuals use the Twitter API to capture tweets, and subject them to all types of analysis for a whole range of reasons – from money making to academic research and data journalism. However, a large number of people aren’t aware of this, or don’t know the extent to which this happens – raising interesting ethical questions regarding informed consent about data which may be used to identify an individual even if it isn’t ‘personal data’ by any legal definition.

Are all these ‘OK’? What is an ‘OK use’ of such data and who should be involved in helping to define what is OK and what isn’t? Should any use of data produced be considered OK from a legal perspective? And how do we ensure people are informed about these expectations?

If there is nuance to be found here around acceptable uses, then who is likely to be impacted should also be part of the consideration.

Furthermore, charities should be aiming for a higher standard than just legality. Charitable organisations should also be able to win a moral case. People also have a degree of expectation that profit making companies may be interested in exploitation – but are less likely to expect this from a charity.

Informed consent on platforms

Can you presume that individuals consent to something automatically if they don’t know about it?

In most ethical frameworks, the answer to this would be no – but how can you ensure online that an individual has not only actually read something, but also understood it and is able to consent (for reasons of language, age etc)?

Obviously, that’s presuming an individual can even be certain of being directly told/asked about the possible uses of data they produce – which clearly isn’t the case. The Samaritans Radar shows some instances of this, with the Radar automatically opting people in. New Twitter users won’t necessarily hear about it, Twitter users outside specific circles won’t hear about it, and many people in countries other than the UK (where the media coverage was heaviest) won’t hear about it. Is it ‘OK’ to presume these people all opt in – even when they have effectively been given no ability to opt out?

Should it be the case that we as service users are directly informed/asked about use of data? Should we as users of the web assume that companies have the right to sell data about us in return for our using services? Are there any other models that may work?

Who owns data online/on Twitter

When you use any website there is a lot of data floating from one geographic ‘real world’ location to other real world locations. This data may be data that an individual produces (clicked on this link, visited this website, said this thing, was in this location), or the data that is provided to an individual (the downloaded website on the browser, the email alert, the picture of a map).

Similarly, when you use a service – an app, or social media site for instance – you create data and consume data. For example when you tweet, one example of data created by a user is the tweet itself – but there is also other data created, such as time of tweet, optional location, photos, and the context that comes from previous (or later) tweets, etc.  Much of this can be acquired through the Twitter API which anyone with sufficient technical knowledge can use.
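To make that concrete, here is a minimal sketch of how much metadata travels with a single tweet beyond its text. The field names are illustrative approximations, not the exact Twitter API schema, and the helper function is hypothetical:

```python
# A sketch of the kinds of metadata attached to a single tweet.
# Field names are illustrative, not the exact Twitter API schema.
tweet = {
    "text": "Off to the conference tomorrow!",
    "created_at": "2014-11-07T18:00:00Z",          # time of tweet
    "user": {"screen_name": "example_user", "followers_count": 412},
    "coordinates": None,                            # optional location, if enabled
    "in_reply_to_status_id": None,                  # conversational context
    "entities": {"hashtags": [], "urls": [], "media": []},
}

def metadata_fields(tweet):
    """Return every field attached to a tweet other than the text itself."""
    return sorted(k for k in tweet if k != "text")

print(metadata_fields(tweet))
# → ['coordinates', 'created_at', 'entities', 'in_reply_to_status_id', 'user']
```

Anyone with API access can capture and aggregate fields like these at scale, which is part of what makes informed consent about such data so hard to reason about.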

Data is subject to a number of laws, all open to interpretation (depending on perspective) – for instance data protection (e.g. how data is stored and processed, and who has access), copyright, data ownership, duty of care, and likely many other issues.

According to the terms and conditions of Twitter, all data, including the tweets made, is owned by Twitter.

EDIT 6th Nov: Thanks Graham Triggs.

Twitter Terms of Services state that:

You retain your rights to any Content you submit, post or display on or through the Services. By submitting, posting or displaying Content on or through the Services, you grant us a worldwide, non-exclusive, royalty-free license (with the right to sublicense) to use, copy, reproduce, process, adapt, modify, publish, transmit, display and distribute such Content in any and all media or distribution methods (now known or later developed).

So – you as a user own the content, but grant Twitter a license. What does this mean in effect?

EDIT ends

How these laws and terms and conditions play out – and what the ToS mean versus how users of those sites perceive the situation – is fascinating.

What is the impact on how users may use a service if they become more aware that tweets are not considered the user’s data? Does this mean that what an individual says on Twitter cannot be considered ‘personal data’, even if it is data that says something personal about the user?

What is the moral responsibility of a platform holder to be transparent about exactly how they treat the data being produced? And is it OK to hide this deep in terms and conditions, when it is likely that people *will* produce data that can be monitored, aggregated, and used both for and against them?


How public or private are social media sites?

Paul Bernal has written a great blog post about this. But what is clear (especially looking at #SamaritansRadar) is that there is no real agreement about whether tweets are fully public or not. It’s a fairly new, technologically enabled, slightly grey zone – yes, tweets can be found by any individual who goes looking (either directly or using the API) – but does that mean they are *totally* public, or should be treated as such? Social interaction tends to rely upon degrees of friction, and online social interaction is subject to less friction than we are used to in the offline world. What new norms do we have to get used to and think about in an online world, and how do we deal with the fact that people in different countries, of different ages, etc. will perceive this issue differently?

And if we do just accept that all that data is publicly available and owned by someone else, then perhaps users need to be made more aware of this (and by whom is a really interesting question, as no one has an incentive to do it). There may also be a place for safer social networks to be set up – although what these might look like, I have no idea.

Online design

It’s getting ever easier to create apps and software – and this has potential for much good in the world – hopefully meaning that in the distant future we will have more diverse groups creating tools to solve their own needs.

But there are also possible negative impacts. How can we tell the difference? As designers, platform owners or as users of a service? And what do we do as a result? What are the responsibilities and culpability of each of these individuals?

There are many ‘online spaces’ which now exist, and altering the environment of these can impact many people a designer or platform owner may never have thought of. When you create something, what responsibility do you have as a designer to engage a wide range of individuals and carry out significant research?

Research

Even if research is carried out, what is a ‘sufficient’ evidence base to establish a case for and subsequently introduce a new innovation into an existing online environment? Is there a difference in the level of evidence needed for an online environment (where interactions are more social) than an offline environment? The answers are likely to vary dependent upon how significant any proposed changes are, who is likely to be impacted, how many individuals, and the possible outcomes of any alterations.

Is there a responsibility to engage with existing users of that platform to share research outputs/insights? Or, as a service user, is the onus on individuals to leave if they become unhappy with changes that are made?

Whose voice is important – and how do you make sure you hear a balanced perspective?

Let’s say an app creator does carry out some research – how do they ensure they don’t get trapped in echo chambers? Digital and social media are great, but it’s very easy to get trapped in echo chambers and to forget about other possible users (as I covered here).

It’s also crucial to remember that people may not feel able to raise their voice to vocalise their opinions. For instance, there are individuals who support the Samaritans Radar, and individuals who really detest it, who don’t feel able to vocalise their opinions on the platform.

And what qualifies as a ‘balanced’ opinion? If you are affecting a minority of individuals, does it matter if you negatively impact them? Are sheer numbers the most important issue?

Tech innovation for the sake of innovation

There is a huge tendency at present for tech solutionism – the idea that more technology or an app can solve almost any societal problem. This is problematic, as it limits the potential solutions individuals and organisations may try to find.

Maybe, with the Samaritans Radar, no-one thought ‘perhaps we could place promoted advertisements in a depressed person’s timeline’, or no-one thought a concerted advertising campaign reminding young people to look out for each other might work – but these may have better long-term impacts on solving a people problem than an app.

How do we stop ourselves and others limiting our problem solving framing around the idea of building more and more technology to deal with our existing lives?

Digital skills in charities/NGOs

If you don’t know much about digital, tech or social media, it’s really easy to be wowed by agencies or individuals who come in, waving hands and saying they have the solution to all your problems. With the world becoming more digital and data heavy, as a charity or NGO you need staff who have the skills to understand issues relating to technology, data and digital. And many otherwise excellent individuals don’t have these skills (yet). While this is true in many organisations, it may be especially true within those which don’t have a strong incentive to keep skills up to date, or the willingness to spend money on staff training or new hires.

How do charities keep up to date with digital or social media skills? There are some great organisations and individuals working out there – but are there good ways of helping the charity sector improve on this more rapidly?

New media engagement

One of the digital skills that needs to be thought about is social media engagement and comms work.

Old comms work used to be heavily focused around press releases and lots of fanfare on radio, in newspapers and on TV – and that was it. However, that’s no longer enough. Any communications plan needs to include consideration of how to engage with people on social media sites – *especially* if your project is about social media, and especially if there is already an existing community of those you are trying to work with/for.

Ignoring people who have specific comments – people who, due to expectations set by the platform and other companies, now expect to be engaged with online – makes you look bad, causes significant ‘brand’ damage, and risks frustrating those individuals. Even worse, if it’s a group of vulnerable individuals (*cough* those with mental health issues *cough*), you risk causing significant agitation – and, if you are a charity, potentially a huge impact upon your future donations. The Samaritans have done all of this in their response to the Samaritans Radar.

How organisations choose to do this is something that marketers are still trying to solve, and it’s true that many organisations haven’t caught up in this space yet. But regardless, you need to feel confident engaging online and in digital spaces.

Academic involvement in projects

What is the ethical responsibility of academics involved in public projects? I’m all for greater academic participation and building a more evidence-informed world, but there needs to be some balance between the desire for impact (driven by current UK research policy?), the desire for peer academic scrutiny, and culpability (for any possible negative repercussions). This applies to all manner of academic involvement – not just projects like the Radar – and I imagine this is something various research/science engagement people will engage with in the medium term.

Interesting links

  • Jon Mendel has written some excellent posts here and here

Who should a charity/NGO be attempting to please?

Any charity or membership organisation has to have at least one eye on the people who are likely to give them money. They are also likely to have other people to engage with – service users, or a community. How an organisation chooses to balance these needs is challenging – and not something that we see being discussed publicly often.

International nature of the Internet

It’s easy to forget that the Internet is truly global. Data comes in from, and ends up in, different geographic locations; and it’s an issue not explored often enough even in technology policy.

Laws, social norms, ethical expectations, and language use are not standardised across it – and it’s often easy to forget that, especially if you have a specific framing and have not deliberately reached out to understand other contexts.

There are not many international standards on these issues, but we’re operating on a platform that supersedes national boundaries.

With the Samaritans Radar – presuming the app is not just processing tweets from UK Twitter accounts, which is likely to be the case – the Samaritans are using data that has originated in many geographic locations around the world. The points above around data protection and copyright are likely to differ between countries, as are issues around research ethics and responsibility for how data and information can be utilised.

Furthermore, there are also some interesting issues around what happens if you keep getting alerts about someone in a totally different time zone that you can never help. How as an individual do you feel? How responsible are the creators of the app for making an individual feel that way?

I would be very interested to know how individuals in Africa and Latin America, where there is often an existing distrust of white, Western ‘do-gooders’, feel about an organisation in the UK processing their tweets.

Final thoughts

I really don’t want to see organisations stop innovating and thinking about new ways to solve problems. But that doesn’t stop there being a lot of interesting questions, especially related to ethics, that haven’t yet been explored, let alone begun to come to any form of resolution. And the Samaritans Radar is a fascinating case study for many of these issues, as it concerns such a sensitive issue (mental health) on such a well-used social media platform.

I’d love to see more conversations take place beyond the very heavy focus on privacy/data protection vs. ‘charity doing good’ that we’ve so far been seeing – and will update this post as I see some of these conversations starting to take place.

My thinking around this is very much evolving – so I can imagine I’ll be making changes/edits to this page, and doing a bunch of blogging off the side, over the next week. But I see this as having the potential to be the starting point of some very interesting conversations about online technology.

Samaritans Radar and use cases

When you develop a piece of software or app, it is common to undergo some evaluative thinking about the use cases and user needs. And this isn’t simple. Use cases are often complicated, and user needs often contradictory. It’s hard to make ‘the right’ decision, especially when you are creating a tool that directly influences and alters an existing social environment and ecosystem – be that on or offline.

Earlier this week, to much fanfare in the press, the Samaritans released a new app called the Samaritans Radar, with dramatic claims about how this app is going to save lives. The discussion on Twitter, the space directly influenced by the new app, has been much more nuanced, and there has been much furore from a number of people – including those who identify as using twitter to discuss mental health issues, or who have been previously subjected to online abuse.

What is the Samaritans Radar?

The Samaritans Radar is an app that uses Twitter and the feeds of Twitter users who have not made their accounts private. The ‘user’ of the app is an individual who wishes to receive email alerts about certain tweets from the people they follow. These are tweets that the app, based upon content analysis, decides may be correlated with being depressed.

Originally, any individual who uses Twitter, has not chosen to make their feed private, and has a friend who decides to use the app, would have their tweets read, scanned and subjected to analysis by the Samaritans app. There is now a process by which Twitter users may opt out of being the subject of this app – although it requires contacting the Samaritans directly and being put on a ‘white list’.
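The Samaritans have not published how Radar actually classifies tweets, but the mechanism described above – content analysis plus a manually maintained opt-out white list – can be sketched naively. The phrase list, function name, and white-list handling below are all illustrative assumptions, not the real implementation:

```python
# Naive sketch of keyword-based flagging with an opt-out white list.
# The trigger phrases and matching rule are illustrative assumptions;
# the real app's classifier has not been published.
TRIGGER_PHRASES = {"i can't go on", "i'm so depressed", "hate myself"}
OPT_OUT_WHITELIST = {"private_person"}  # users who contacted the charity to opt out

def should_alert(screen_name, tweet_text):
    """Return True if a follower would receive an email alert for this tweet."""
    if screen_name in OPT_OUT_WHITELIST:
        return False
    text = tweet_text.lower()
    return any(phrase in text for phrase in TRIGGER_PHRASES)

print(should_alert("someone", "I can't go on like this"))  # → True (flagged)
print(should_alert("private_person", "I can't go on"))     # → False (opted out)
print(should_alert("someone", "Feeling low but coping"))   # → False (indirect phrasing missed)
```

Even this toy version surfaces the structural issues discussed in this post: anyone not on the white list is monitored by default, and indirect expressions of distress slip straight past simple phrase matching.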

My use case and user story

I’ve written a use case and user story about myself here. It was originally part of this blog post but I removed it to make the post easier/shorter to read, and it felt clunky to have the inclusion of something so personal. However, it adds weight to the point I am trying to convey to the Samaritans so I wanted to include it somewhere.

The summary of my user story is that a piece of software that potentially analyses my tweets and alerts my friends to what it considers to be negative content has absolutely no positive effects for me. Instead it will cause me harm, making me even more self aware about how I present in a public space, and make it difficult for me to engage in relatively safe conversations online for fear that a warning might be triggered to an unknown follower.

And the use cases and user stories of others…

However I am not the only person who may be affected (either positively or negatively) by a piece of software such as the Samaritans Radar being implemented on top of Twitter.

Operating in a public space requires a degree of negotiation between the needs and wants of many individuals. And twitter is one such public space, with a very complex (existing, but always changing) social ecosystem. People use the infrastructure provided in a huge variety of ways, operating as a combination of the online ecosystem and their own personal preferences.

Therefore, to think about the Samaritans Radar and to evaluate its value, we need to think about the large number of different twitter users and potential Samaritan Radar app users. Just to mention a few possible user groups who use twitter relating to mental health:

  • Individual who has no mental health concerns and doesn’t engage in any online discussion
  • Individual who has no mental health concerns, but engages in online conversations
  • Individual with mental health concerns, but who doesn’t see Twitter as playing any role for them
  • Individual who has mental health issues, and uses Twitter as part of ongoing activism
  • Individual who has mental health issues, and engages in small conversations and gains support in small groups.
  • Individual who has mental health concerns, and doesn’t have any/little existing support mechanisms.
  • Individual who is subject to online abuse for various reasons (be that in relation to activism in other areas, gender, sexuality) – who may or may not have mental health concerns.

Each of these Twitter users is likely to be followed by a variety of people – some of whom they know, some of whom they don’t, and a very large number somewhere on the scale in between. Although tweets from accounts which are not locked can easily be read by anyone, everyone differs in how they choose to use Twitter (as a balance of broadcasting and engaging), and with individual tweets they make decisions about how public or accessible to make them (be that in conversation, or using hashtags). (EDIT – Paul Bernal has written a great post about this here.)

Furthermore, individuals interact differently with different followers. A user of twitter doesn’t want or expect the same degree or type of interaction from all individuals who have made the decision to follow them.

Potential Radar users

There are also a number of different potential Radar users. These people will all be twitter users, but not everyone who has twitter will become a ‘user’ of the Samaritans Radar. For ease, I’ve pulled three main groups out:

  • Person who knows they follow people on twitter who might have mental health issues, wants to keep an eye on those people, and are able to provide meaningful support to them.
  • Person who knows they follow people on twitter who might have mental health issues, wants to keep an eye on those people, and are unable to provide meaningful support to them (this may be through not being emotionally mature/able enough to provide support to those with mental health issues, not knowing said people as well as they think, the recipient of ‘help’ not wishing for support from that individual, etc.)
  • Person who wants to hurt and abuse vulnerable people (and a lot of the conversations around online abuse show that this is a known category of individuals)

Who is the Samaritans Radar designed for?

The whole focus of the app is designed towards the ‘user’, a follower who may wish to identify when people they follow on twitter are vulnerable. Originally the privacy section of their website only mentioned the privacy of the user of the app – the person doing the monitoring – with no mention of any privacy concerns for the individuals who are being followed. You can argue that tweets are in the public domain therefore there are no privacy concerns (something I disagree with, but that’s another post), but you cannot deny that the app very clearly is designed for the follower, not for the specific benefit of an individual with a possible mental health issue. An app designed for an individual with mental health problems would be very unlikely to remove the agency of the individual in such a fashion.

Beyond this, it feels like Radar has been set up with a very specific type of Twitter user in mind and specific type of Radar user in mind. The type of Radar user appears to be the individual who will do no/little harm – either deliberate or accidental – through their engagement with a specific person with possible mental health issues that an app is flagging up. And the type of Twitter user is someone who is (for a variety of reasons) open about their mental health and depressive spells on Twitter, and wants people to look out for them and isn’t able to flag concerns to an existing support network.

And this in itself is a laudable aim – one that really does have the potential to do much good.

But the major problem comes in assuming that this very specific case is the only use an app such as the Samaritans Radar could be put to; that the existence of the Samaritans Radar does no harm, limited harm, or a comparatively insignificant amount of harm to other individuals (compared to the benefits it may provide); and in assuming by default that Twitter users want to allow their followers the option to monitor their tweets in this fashion.

Radar users may not be benevolent. Online abuse can and does take place – with individuals being bullied, told they should go kill themselves. If online monitoring of individuals (with mental health problems or not) was made even easier, that seems like a significant probable cause of harm – and something a mental health charity should not be enabling.

Radar users may be less than great at identifying when and how someone with mental health issues wants to talk. If individuals need to be poked to remember to check on someone, they very much might not be the type of person someone with mental health problems wants to talk to.

The Samaritans only introduced an opt-out system after a number of people on Twitter raised concerns, and there has not yet been any recognition from them that the existence of such an app, opt in or opt out, affects the social ecosystem of Twitter. And this ecosystem includes a large number of individuals who are already finding ways to support themselves, and to manage some effects of mental health problems, online.

A number of these individuals are now stating that the way Samaritans Radar is implemented affects the ecosystem they operate in. Yes, this may be self-reported, but coming from vulnerable individuals – exactly the type of vulnerable individuals with whom the Samaritans should be concerned – it should not be taken lightly.

At the same time, we’ve not seen any evidence of the benefits of the app – despite much talk of work with academics, and time spent in the app’s development.

And before potentially inflicting harm, ignoring critical comments from users with exactly the type of condition an app is supposed to help, and rolling an app out globally, there should be good reason to believe that the overall benefits outweigh the overall costs.

So I have a few questions to throw to the Samaritans:

  • What, if any, work or assessment have you done on evaluating the benefits that may be provided by the Samaritans Radar?
  • What, if any, work or assessment have you done on considering user stories other than that of a benevolent and capable Radar user, and a Twitter user who uses Twitter to broadcast a need for help?
  • What, if any, work or assessment have you done of the negative impacts such an app may have on the existing support networks of those with mental health issues who use Twitter?
  • What, if any, work or assessment have you done of the impact of having an opt-in Samaritans Radar app? Was an opt-in app ever considered before the initial backlash on Twitter? If so, why was it rejected?

I look forward to their response.

My use case – on twitter and mental health

This is my use case and user story around twitter and mental health. It was originally part of this blog post on the Samaritans Radar, but I removed it to make the post easier/shorter to read, and it felt clunky to have the inclusion of something so personal. However, it adds weight to the point I am trying to convey to the Samaritans so I wanted to include it somewhere.

Use Case

I am an 18-35 (the target age for the app) woman who has used Twitter as part of her professional and personal life for several years. Until recently I was working in high-profile positions, but I’ve spent the last few months recovering from suicidal depression and anxiety.

I use Twitter in a semi-professional capacity, and although I deliberately engage with conversations around mental health, because Twitter is public I am very conscious not to be too obvious about my engagement with these issues, due to the perceived potential impact on my future professional life. My mental health problems probably make me more anxious about this than I need to be, but I am happy that my involvement in these issues can be construed solely as involvement in conversations around inclusion and equality.

My user story

Let’s presume there is a bit of software being designed to flag concerning tweets to people who follow me on twitter.

    • I want to know that such a piece of software is discreet, as I do not want to feel limited or silenced in my exploration of mental health issues. I self-censor enough already (indeed, it’s part of what caused my current problem) and do not want to do so still further. I will self-censor further if I feel it becomes even easier for others to watch what I am saying publicly.
    • I do not want others to feel they need to censor themselves further than they already do, and would like those around me to feel able to continue discussing mental health issues. I’ve found great support simply in knowing that other people have experienced similar issues, and being able to observe and engage with those who are more open and frank has allowed me to find some key people to talk with and some significant support.

Having a number of emotionally insensitive but technologically adept friends, I know people who would be likely to use an app like this specifically to check up on me. I would be concerned that certain individuals would be more likely to rely on an app than spend a few seconds skimming back through past tweets, and would be likely to end up missing actual warning flags, which in my case are very unlikely to be picked up by sentiment analysis.

  • No matter how suicidal I’ve been, I would never, ever tweet something like ‘I can’t go on’ or ‘I’m so depressed’. It wouldn’t be professional, and the possibility of someone seeing something like that makes me incredibly anxious. Anything that approaches a cry for help or warning flag about my mental health is unlikely to be picked up by any form of basic analysis.
  • I also don’t want to cause a fuss, so being aware of specific trigger words that might cause one would make me consciously avoid them. Furthermore, I am private about my mental health, and do not want to talk about such issues with most people. In fact, most people raising the topic causes me anxiety and makes things worse – as I don’t want my health problems to be noticeable.


To be frank, a piece of software that potentially analyses my tweets and alerts my friends to what it considers negative content has absolutely no positive effects for me. Instead it will cause me harm, making me even more self-aware about how I present in a public space, and making it difficult for me to engage in relatively safe conversations online for fear that a warning might be triggered to an unknown follower. I would then need to negotiate some odd combination of lying about my mental health (which is bad for me and makes things more confusing and difficult) or telling the truth about my mental health (which I don’t want to do and shouldn’t need to do).

Lobbying and disciplinary infighting – learned societies of old

Learned societies and professional organisations have some very interesting histories written up on their websites – and sometimes these can be quite insightful (as well as providing me with something to chuckle at).

Tonight I have been reading about the history of the ‘Institution of Structural Engineers’, set up in 1897 as ‘The Concrete Institute’. And I’ve found two quotes in the official history they publish online that I want to share, as they touch on interesting issues I’ve been thinking about.

Disciplinary infighting

Sir Henry Tanner in his presidential address made the first proposal for the Institute to broaden its scope and become the Institution of Structural Engineers, but, through its editorial, Concrete and Constructional Engineering responded by stating it regarded the term “structural engineer” as one which described steel contractors and failed civil engineers

There is no obvious citation, so I’ve not (yet) checked the veracity of this apparent retort from the journal – but I find it interesting to see some suggestion that even back in the early 20th century there was disagreement over disciplinary boundaries and what was/wasn’t an ‘acceptable’ discipline.

Lobbying

There is a lot I want to consider about modern learned societies/professional organisations and their role in lobbying – much of the research literature discusses their role in securing public funding in the 19th/20th centuries. However, this quote from the Institution of Structural Engineers is the first I’ve come across which boasts of pushing for specific laws and regulations. While, again, I’ve not (yet) checked any original sources, at present the Institution publishes this on its website:

On 22nd February 1909, the Institute was incorporated under The Companies Act (1862-1907) and much of the time and energy of the new body was spent on ensuring reinforced concrete was accepted by the London County Council Regulations and the London Building Acts.


I mean, I’m sure the ‘Concrete Institute’ would have no vested interest in making sure concrete was accepted by Council Regulations and Building Acts, and that it was specifically done for ‘the good of all of London’.

Right…?

How will I know if it’s a learned society?

For a while I’ve been thinking about academic publishing and the problems faced by learned societies as a transition is being made away from libraries buying journal subscriptions. And this has started me off down lines of thinking about the activities carried out by learned societies (which will be the subject of later blog posts), accountability and the role or function of learned societies in a broader research landscape. But I’ve come up against a very fundamental question:

What actually is a learned society?

I’ve been doing some research into these organisations, and I’ve spent a few days staring at various websites and documents.

But one thing I still don’t know is what a learned society actually *is*.

Organisations which either claim, or which others claim, to be learned societies vary wildly in many attributes.

Many are charities, but others are not. I’ve found several which have fewer than 100 members in total, many which provide no information about membership size, and one which claims a membership of over 80,000 – which seems to be the result of adding together the total size of each of its member organisations, without in any way accounting for the fact that people may be members of multiple organisations. (In fairness, I should add that this number now seems to exist only in past consultation responses they’ve written, and they seem to have removed it from their website.)

And membership models also vary significantly. In some instances, you may only become a member after being nominated (which must be an *excellent* way to develop diversity in your organisation, and I’m sure organisations like this are the societies least filled with white, middle-class, university-educated, grey-haired men), and in one case I’ve seen, your nomination is then put to a membership vote in which you must achieve a 4:1 ratio of yes to no votes.

The research I’ve done seems to suggest that in the sciences, seconding by multiple existing members is fairly typical – which seems another great way to specifically exclude the ‘wrong sort’ from your group. Many societies specifically state they welcome anyone with an interest, but require you to state your institution (which is not something many people outside of students/academia will have), or have membership options defined by job roles and experience which are only found within academia.

There are surprisingly few societies which seem to have fully thought through the idea of opening membership even to all those with a professional interest, let alone those with an amateur interest – despite the fact that almost every learned society I’ve looked at specifically talks about ‘educating the public’ somewhere in its charter or aims. There are some who clearly have thought this through, for instance the British Society of Soil Science, and I was also really impressed with their commitment to reduced fees for those in countries considered low-income by the World Bank. It’s not perfect, but it’s much better than many societies I’ve looked at, where membership costs often increase for all countries outside the UK or EU.

Some learned societies are large, professional organisations with slick and shiny websites (like this one), with professional staff, amazing looking libraries and venues which may be rented out. Others are run by a group of volunteers, often academics working in their ‘spare time’, and are decidedly less professional in their appearance (I love this website – you can just imagine someone saying “we need a background image that will make people think of the cold”), and have no physical facilities. And there seems to be almost everything in between.

There are organisations which randomly have pictures of Harrison Ford on their front page (leading me to ask if there are perhaps no glycobiologists they are willing to show off), and others which are incredibly succinct in the history of their organisation.

I’ve seen societies which are focused on very specific sub disciplines (eg. the Society for the Study of Inborn Errors of Metabolism), and societies that are much more umbrella organisations (the Institute of Physics or Royal Society spring to mind here). There are even societies which focus on specific counties, such as the Devonshire Association dedicated to “the study and appreciation of all matters relating to Devon” – although I am slightly surprised to have not found an equivalent for Cornwall.

The sheer number of learned societies and scholarly societies is mind-numbing. My current list (which I’ll publish when I’ve done a bit more tidying up) consists of about 500 associations and organisations. And this is in no way complete, and mostly focuses *just* on organisations based in the UK. And the societies are not in any way obviously distinct organisations. For some reason, people feel the existence of both a British Mycological Society and a British Society for Medical Mycology to be necessary. And for there to be a British Society for Histocompatibility and Immunogenetics and a British Society for Immunology. And there are many, many other examples of overlapping societies with very similar remits and only slightly different names – reminding me of the People’s Front of Judea.

And for almost every UK-based learned society, there is an equivalent in most other European countries and in the US (I’ve not yet looked at countries outside of these, although I intend to). And then there are also organisations covering the same or similar disciplines which operate at both the European and international levels. And all of these organisations fund and help support each other in a really fascinating and intricate fashion.

One thing I can say for certain about learned and scholarly societies is quite how varied they are. But despite a few days exploring various UK-based learned societies, I’m still no closer to knowing what a ‘learned society’ or ‘scholarly society’ actually is. Wikipedia states that a learned society is “an organization that exists to promote an academic discipline or profession”, but it is incredibly easy to justify almost any activity as ‘promotion’ of a discipline – especially if organisations don’t step back to evaluate the effectiveness of these activities.

Many learned societies I’ve explored provide grants and bursaries, especially to early career researchers, in their specific sub-discipline, and many (although by no means all) seem to run some form of regular conference or have an ‘official journal’. But again how the journal is produced varies significantly: it may be published by the society or external/commercial publishers. It may be produced online only, or may be published in hard copy as well.

And the reason why I want to think about these activities and learned societies is that they have often been used to justify the retention of subscription-based journal models, or to justify a very slow progression away from them, with advocates from these societies and publishers making clearly stated, but often untested, claims about the value these organisations provide to the academic community. And I believe it’s important that we start to think about that.

So at present, I’ve been using a definition of a learned society/scholarly society as ‘any organisation that has been defined as being a learned society by either themselves or another organisation’, but that’s not very robust. Especially as anyone can define their own organisation or another organisation as one.

One possibility is to include only learned societies which make money from publishing – but as I want to explore business models that don’t necessarily rely upon this, that seems to negate part of the point. And how do I separate a ‘learned society’ from an amateur society, which may also have a published, peer-reviewed journal? (For instance, does the Amateur Entomologists’ Society count as a learned society or not?)

I don’t want to focus on whether or not an organisation is only open to academic members, as I’m the type of open minded individual who would like to see these societies open up to include non professional researchers and interested individuals.

And similarly, I don’t want to define a society by a specific section of activities, as that again seems to make some assumptions about what such a body ‘should’ do.

So many people say learned societies and scholarly societies are incredibly valued by academics, and a crucial part of the research landscape. But does anyone know what one actually is? Any thoughts below the line please!(*)

(*) This is not a rhetorical device – please do put any ideas you have in the comments section below. I would love to know what I should be using as a working definition for learned societies as I continue exploring this area.

On wombling…

Stourbridge Common – Image by Prisoner 5413 on Flickr and shared under a CC BY-NC 2.0 license

“What a great idea. I’m going to do something similar.”

“Do you have OCD or something?”

“You do it for free?!?”

“Thank you”

I get a wide range of comments from people for my “wombling” (as one friend affectionately refers to it). Every day I go for a 30 minute to 1 hour walk as part of ongoing therapy to regain strength in my legs and ankles, and while I started just walking (while listening to various audiobooks of Charles Dickens novels narrated by Martin Jarvis), I soon found myself wanting to do more with my time.

I live in a gorgeous area of Cambridge (not far from Stourbridge Common and near a lovely stretch of the River Cam), but the area is often covered in litter – including obvious items such as sweet wrappers, alcohol bottles and newspapers, but also more random items such as shoes. I like green spaces, and I like shared spaces, but this litter detracts from them and makes them less pleasurable for everyone.

So on almost every walk I take two plastic bags and a pair of rubber gloves, and spend 15 minutes or so picking up litter (sorting it into recyclable material and rubbish), before disposing of it in one of the many bins around the area. I then continue with my walk and lose myself in the dulcet tones of Martin Jarvis.

The reactions I receive for this litter picking are varied. Some people stare at me like I’m an alien, and others stop to chat – with conversations ranging from the idea of communal spaces to Charles Dickens and my preference for hearing his books read aloud rather than reading them myself.

I’ve not made much of my wombling before; after all I do this partially to make the space more pleasant for me, but it ties into a broader narrative that I’ve been seeing in the media and on social media.

There have been a number of reports about people doing something similar on beaches (see this news story, or this hashtag) – something I’m very fond of, coming originally from Cornwall.

However, it’s not just beaches that are shared spaces, rich in wildlife, with ecosystems that suffer from litter. Greens, parks and woods also fall into this category, and it would be great to see people who live near these spaces helping to tackle the problem of litter there too.