Why voter tools may be problematic…

I’m not hugely surprised by this post I just saw on BuzzFeed about how the Daily Telegraph’s Tactical Voting Tool was coded never to recommend the SNP. And I’m both a little happy and a little sad to be proven right about the use of voter apps and tools.

Before the election I became a little concerned about applications and tools that were being created by a variety of organisations that were supposed to give a floating voter an idea of how to vote in the 2015 General Election.

There is a fundamental (and incorrect) assumption underlying these tools: that a parliamentary candidate should be considered only as a member of a party, rather than both as a member of a party and as an individual with their own pet areas of interest and qualities. But let’s ignore that and pretend that we should only be thinking about what national party policy says.

How the tools work

Many of these tools operated on a simple set of premises. A user would choose some areas of policy they were interested in, and a variety of statements would then be shown without any obviously identifying information about which party had made them (although the language used and the proposals themselves often gave that away to the more politically active). The user would choose which statements fitted most closely with their beliefs. An algorithm would then run over the answers provided, and out would pop recommendations for how someone may wish to vote, or a bar chart/pie chart of the user’s similarities with political parties on various issues.

Alternatively, a user may be asked a series of questions about their priorities or beliefs, with similar outputs being provided – some form of chart or suggestions for which party their beliefs most align with.
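The core matching logic described above can be surprisingly small. As a rough sketch (the parties and statements below are invented, and real tools use far larger question banks and weighting schemes):

```python
from collections import Counter

# Each statement shown to the user is secretly tagged with the
# party whose policy it was drawn from. All names are invented.
STATEMENTS = {
    "Statement A on housing": "Party X",
    "Statement B on housing": "Party Y",
    "Statement C on health": "Party X",
    "Statement D on health": "Party Z",
}

def score_choices(chosen_statements):
    """Count how often the user picked each party's statements,
    returning each party's share as the kind of percentage a
    bar chart or pie chart would display."""
    tally = Counter(STATEMENTS[s] for s in chosen_statements)
    total = sum(tally.values())
    return {party: 100 * n / total for party, n in tally.items()}

# A user who picks two Party X statements gets a 100% match:
score_choices(["Statement A on housing", "Statement C on health"])
```

Every editorial decision feeding that `STATEMENTS` dictionary – which quotes are chosen, how they are summarised, which party each is attributed to – shapes the output, which is where the biases discussed below creep in.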

Sometimes the bits of policy would be direct quotes from various statements and/or manifestos – albeit shorn of any context, just the part that the tool creator had deemed ‘relevant’ – perhaps removing a sentence or two from either side that may alter how something reads, perhaps not using the paragraph most directly comparable to other parties’ policies, or perhaps not considering that policies in seemingly unrelated areas may have an impact.

And sometimes – especially with those tools that asked a series of questions – policy ideas and proposals from parties would be condensed by the tool creator into slightly different sentences and ideas than were ever originally considered.

How can bias creep in?

My concern before the election was that, even with the best will in the world, any such tool will be full of biases. Be that removing specific nuance in a way that accidentally changes what was originally meant in a policy statement, or ignoring how policies can and do interact (as an example, the Green Party’s proposal to reduce copyright to 14 years after creation may in itself be considered very difficult for many creators – but as the Green Party also propose a citizens’ basic income, this reduction in copyright would not cause destitution if both policies were brought in together).

Added to which, most organisations creating these tools, including charities and NGOs, are not politically neutral. They care about specific areas and will have either accidental or deliberate political biases that emerge through these tools. Not only is it easy to accidentally remove nuance from a policy statement in a way that may make a political party look bad, it would also be very easy to do so deliberately – if, say, you wanted more people to vote for Labour than the Greens and Lib Dems, you could choose statements for the latter two parties that are less likely to be palatable to a wider audience, or summarise their policy proposals in less favourable ways.

And any questions asked will invariably contain bias. Just as one example, when I looked at 38 Degrees’ Vote Match, none of the topics mentioned were my top voting issues. And the questions themselves were troublesome. For instance, one of the statements I was asked whether I agreed with was:

‘Government should raise new taxes to fund the NHS’

This statement is really leading. ‘Raising new taxes’ is not the same as ‘giving more money to the NHS’, as it presumes the means by which more money needs to be provided. Furthermore, it also presumes the answer to any existing problems within the NHS is ‘a lack of money’, rather than (possibly) bad internal management structures, or the wrong types of services being offered. Different technical procedures, less paperwork, or a decrease in homeopathy funding could all act to reduce costs for the NHS, providing an effective increase in funds – without ever ‘raising new taxes’. But the questions didn’t allow for such subtlety.

And therefore, any party with a more nuanced approach to policy, which may need a little more explaining, would be punished – as their ideas wouldn’t fit easily into a ‘yes’ or ‘no’ answer to this question.

I don’t expect most people to go away and read all the manifestos, and I do genuinely like the idea of tools and apps that people can play with to explore voting options. But I worry that these tools and apps can be used by organisations to push biases that less politically aware individuals may not notice.

Steps forwards

I would like to see those with strong interests in politics and some understanding of social science start more critical analysis of the biases that can be built into such tools – both in the language and framings used, and in the software and algorithms underpinning them. For this, it would help if the tools themselves were open source, so that the underlying code can be explored fully. Obviously, making something open source doesn’t magically make it evaluated and tested – but it is a start that enables this.

With each of these voter apps basically acting as a (potentially) poorly-designed survey, but one with the potential to influence voters and possibly alter outcomes within our election and democracy, much more critical analysis of these tools is needed.

Collecting a list of learned societies

Happy Open Data Day 2015!

To celebrate, I’m going to spend the day curating a list of UK-based learned societies and scientific organisations, to release later today/tomorrow as an openly licensed data set. I’ll then hopefully update the Wikipedia page – which is woefully incomplete.

It would be great to have some help if you’re interested. At present I’m looking just to gather names of organisations and websites.

I’ve merged a number of lists of learned societies and scientific organisations together, into a Google spreadsheet, and shall spend the next few hours tidying it up, making notes, and hunting down other organisations to include.

Scientific organisations and learned societies are not defined terms, and come in a variety of shapes and sizes. Some are professional with paid staff, others entirely voluntary, and they may carry out a wide range of activities, from publishing through to arranging meetings or networking events. They may provide grants and bursaries, get involved in public engagement and outreach, or get involved in political lobbying.

The criteria for inclusion on the list:

  1. The organisation or group should be a learned society or scientific organisation. Although these aren’t defined terms, often an organisation claims to be one or looks a lot like other organisations which claim to be one. The key things to look for are that it must be a membership organisation that is in some way linked to an area of research. I don’t want to include organisations who are solely professional bodies – although some learned societies also act in this way. There are lots of grey areas though and I’m very keen to explore them in the future – so if you aren’t sure, please do add it!
  2. Based in the UK. This means the main physical base or contact address should be in the UK and/or the organisation specifically states that it is an organisation for either the whole UK or part of the UK.

Thanks for the help – and if nothing else, please just make sure you add your learned society or organisation on to the sheet!

Journal subscriptions – Wiley, Oxford University Press, Springer

Following on from Tim Gowers’ work exploring the amount Russell Group universities pay Elsevier for access to journals, I started submitting freedom of information requests to the same universities, asking what they spent on journal subscriptions to Oxford University Press, Wiley and Springer over the previous 5 years[1]. During this time a number of funders have brought in mandatory open access policies, and I wondered if there had been any change in subscription costs over this period.

£54.5 million in total was spent by 17 universities on subscriptions to these 3 publishers over the 5 years; £31 million of which was spent on journals published by Wiley. Over this period, subscription costs increased in most instances, and increased faster than the rate of VAT.

Although the figures are interesting, there is only limited value comparing across universities.

There is a lot of secrecy around the cost of journals, which is why I’ve had to use FOI to get data that really ought to be in the public domain. However, what isn’t known is what access is being purchased in each instance. It is (very, very) likely that different universities are purchasing access to different journals – but I’ve not yet found a university that publishes a list of which journals it subscribes to.

I don’t know if a university would release a list in response to an FOI request, although I am tempted to try. There must be a computational way to work this out, however?
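One crude computational approach – purely a sketch, with a placeholder paywall check rather than any real publisher’s markup – would be to fetch a sample article from each journal while on the campus network, and test whether the full text or a paywall page came back:

```python
# A sketch of probing journal access computationally. In practice
# you would fetch a sample article page from the campus network,
# e.g. with urllib.request.urlopen(url).read().decode(), and then
# classify the returned HTML. The paywall markers below are
# placeholders; each publisher would need its own rules.
def appears_subscribed(html_text,
                       paywall_markers=("Purchase access",
                                        "Sign in to view")):
    """Guess whether a fetched page served full text (True) or a
    paywall (False), based on tell-tale phrases in the HTML."""
    return not any(marker in html_text for marker in paywall_markers)

# appears_subscribed(page_html) for each journal's sample article
# would give a rough subscription list for that institution.
```

This is very rough – aggregators, proxies and per-article purchases would all muddy the signal – but it suggests the question is at least answerable without an FOI.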

It’s also likely that different universities will be paying different amounts for the same or similar journal access, based upon the strengths of different institutions and organisations to negotiate deals.

Without knowing what universities are purchasing access to, it’s impossible to make statements about why there are price increases over the 5 years (and in some cases very significant price increases). It may be that the universities are purchasing more journals from the publishers, or it may be that publishers have put their prices up.

A catalog of journal subscriptions from institutions would be very useful for all manner of reasons (including enabling academics and students to easily see if there is access to something on a reading list), and it may well start a discussion between academics and librarians about what journals are accessible. I don’t see a reason to keep this information hidden away.

It would also be useful for useage statistics to be made available – even just the number of times an item is downloaded. I’m not sure why these figures are deemed ‘sensitive data’ and why publishers are not keen for this information to be made available. Perhaps if academics and students were aware how rarely some journals were accessed, they might be less willing to allow libraries to continue paying such large subscription fees.

Despite these caveats, the data from the Russell Group universities on journal subscriptions to Wiley, Oxford University Press and Springer make for interesting reading.

I’ve made the data available on Figshare here.

Why these publishers?

People often use Elsevier as a proxy for all the woes in the traditional scholarly publishing market, but I wanted to explore some of the other publishers. It’s very easy to forget them, given the focus many devote to just the one company:

  • Oxford University Press – the different roles of OUP interest me. It is a publisher, but is also considered a department of the University of Oxford, and the University receives income from the press (see pages 34 and 35 of this pdf).
  • Springer – recently purchased the open access publisher, Biomed Central, so I was very interested to see what/if anything had been changing here.
  • Wiley – one of the big learned society publishers, which has been significantly involved in many policy-level discussions around publishing, including in the Finch Group.
Of the Russell Group universities:
  • Oxford and Leeds never responded
  • Birmingham – refused point blank to give me the figures.[2]
  • Edinburgh wanted me to pay £10 per FOI – which I refused to do. I’m still not sure why they asked, as this wasn’t something I’ve ever previously been asked to do in response to an FOI. After speaking directly to a librarian there, I got some figures, but nothing specifically accurate, so I chose not to include them in the data.
  • Durham, Exeter and Queen Mary’s weren’t approached. There was no specific reason for this.
Universities – columns AE to AK
  • £54.4 million was spent by 17 Universities on subscriptions to 3 publishers over the 5 years.
  • Over the 5 years, University of Cambridge spent £5 million on subscriptions to journals from the three publishers, paying more in each of the years 2010 to 2014, and spending significantly more than any other university on Springer journals. UCL and the University of Manchester were also big spenders, each paying over £4 million in the same period of time.
  • London School of Economics paid the least of those who responded – paying £1.1 million over the 5 years (just over £200,000 a year).
  • As column AK shows, most universities have experienced a significant increase over the 5 years, with the University of Southampton having the greatest increase at 36%. All increases are in double figures, although 2 universities did have a very slight decrease (York – 3.03% and Cardiff – 1.15%).
  • Only Imperial gave me figures that excluded VAT. I didn’t want to add the VAT from their figures as VAT has varied across the period 2010 to 2014. Without knowing when amounts were paid, there was a strong risk I may remove the wrong amount.
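The percentage changes reported in column AK are simply the change from the first year’s spend to the last. A quick sketch of the calculation, with invented figures standing in for any real university’s:

```python
# How a column-AK style percentage change is computed: the change
# from the first year's spend to the last year's spend, as a
# percentage of the first. The figures below are invented.
def percent_change(first_year_spend, last_year_spend):
    """Percentage change between two annual spends."""
    return 100 * (last_year_spend - first_year_spend) / first_year_spend

# An invented spend rising from £500,000 to £680,000 over the
# period would be a 36% increase, the size of the largest rise
# seen in the data.
percent_change(500_000, 680_000)
```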
Oxford University Press
  • The most interesting thing about OUP is the very large percentage increases over the time period. Every library experienced at least a double-figure percentage increase, and some, such as Imperial, saw a triple-figure increase. Increases in VAT are nowhere near sufficient justification for these price increases.
  • The sums of money involved are relatively small numbers (compared to other journal subscription costs) – with only £4.36 million spent over 5 years by the 17 universities.
  • The 2 universities paying the most to OUP are Southampton (£393,276 over 5 years) and Manchester (£367,083 over 5 years).
  • The LSE spent the least over 5 years – £136,573.
  • I was interested to see if Oxford University received a subsidy on OUP journals, or if they pay for them at all (due to OUP being a department of the University of Oxford). I’m sadly unable to answer this – but will be resubmitting the FOI.
Springer (columns Q to V)
  • University of Cambridge spends far more on Springer subscriptions than any other Russell Group university that responded – spending over £500,000 a year on average, and £2.65 million over the 5 years. UCL is the next highest spender, at £1.94 million over the 5 years.
  • LSE paid the least over the 5 years – £327,779.75.
  • A few Universities managed a reduction in costs over the 5 years, although Glasgow, Nottingham and Southampton all had large percentage increases. Southampton was the highest at 45.4%.
  • Many of the responses came back with figures in euros. In each instance, I converted the figures to pounds using the exchange rate on 1 August of the relevant year. However, rates fluctuate across a year, and as we do not know when invoices were paid, it is very difficult to give exact figures.
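The conversion itself is a one-liner per invoice; the uncertainty comes entirely from which day’s rate applies. A sketch, using made-up rates standing in for the 1 August rate of each year:

```python
# Sketch of the euro-to-pounds conversion used for the Springer
# figures: each year's invoice total is converted at a single
# reference rate. The rates below are invented placeholders for
# the 1 August EUR->GBP rate of the relevant year.
EUR_TO_GBP = {
    2010: 0.83,  # illustrative only
    2011: 0.87,  # illustrative only
}

def to_pounds(amount_eur, year, rates=EUR_TO_GBP):
    """Convert a euro invoice total to pounds at the year's
    reference rate, rounded to the nearest penny."""
    return round(amount_eur * rates[year], 2)
```

If the real invoice was paid on a different date, the true sterling cost could differ by a few percent either way – which is why the converted totals should be read as estimates.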
Wiley (columns X to AD)
  • £31 million was spent by the 17 universities on Wiley subscriptions over the 5-year period. Only LSE and York paid less than £1 million.
  • Many universities spent around £2 million across the period – but Imperial paid the most, spending more excluding VAT than any other university did including VAT.
  • Only York University saw a decrease in subscription costs across the 5 years – dropping 8.1%.
What is next?

I’m really pleased that we’ve got more of the subscription data out into the public domain. I hope that in the future, releasing this data becomes the ‘standard’ thing to do for all universities. I would love to see UK academic libraries commit to publish their journal spend in 2015. I’m sure it will be cheaper than responding to another set of FOIs next year.

As for what’s next? Well, I will be merging my data with the work recently published by Stuart Lawson and Ben Meghreblian, and I’m going to continue talking about publishing models with academics. I’m also going to start thinking about how to develop a catalog of the journals each institution subscribes to. If you have any bright ideas on that, please do let me know!



[1] Yes, I’m writing this up *much* later than I’d hoped. I’ve been very ill over much of 2014 – although am better now!

[2] Some of the documents have been made unavailable after I left Open Knowledge Foundation at the end of the summer. I’m hoping to regain access to them soon, so I can provide the explanation that Birmingham University gave me.

Samaritans Radar – What happens next

At 6pm (GMT) on Friday 7 November, the Samaritans announced they were suspending the Samaritans Radar. I’m not going to go into how they suspended it or why they suspended it.

Instead, I want to look at what happens next. The statement made by the Samaritans on suspending the tool included the line:

We will use the time we have now to engage in further dialogue with a range of partners, including in the mental health sector and beyond in order to evaluate the feedback and get further input. We will also be testing a number of potential changes and adaptations to the app to make it as safe and effective as possible for both subscribers and their followers.

One of the mechanisms by which they hope to capture feedback is a survey, which was released in a footnote of the suspension notice. This survey has not been promoted by the Samaritans in any other way as far as I know, although a number of individuals on the #SamaritansRadar hashtag have been talking about it.

There have been some concerns expressed about the methodological limitations of the survey – which is likely due to it being written in a bit of a panic last week. However, I want to make these limitations clear, so that the Samaritans can fully understand the restrictions on the input they gain from the survey, and I want to ensure that those who have been expressing concern about the Samaritans Radar have the opportunity to provide some critique.

Therefore, I have set up a Google Document (found here) that anyone should be able to view, edit and comment upon anonymously. If for any reason, this Google doc isn’t suitable for you – let me know via twitter (@MLBrook) or send me an email (on michelle@michellebrook.org), and I will send you a .doc copy with existing comments. You can then make your comments, and send a copy back, and I’ll add them to the Google Document – which acts as the canonical version.

If you are concerned about anonymity, feel free to set up a temporary email address somewhere which we can use to exchange email documents.

I’ll then write up the comments into a blog post, and from that write a letter to the CEO of the Samaritans. Anyone who wishes may be credited for their input – or may remain as anonymous as they like.


Samaritans Radar and the big questions…

Regardless of what you think of the Samaritans Radar (and as a previous post states, I’m not a fan), it’s highlighting a number of really interesting and fundamental questions around tech policy, ethics, the legal aspects of operating in an international space, data, and how people view NGOs and charities.

I’ve been meaning to write blog posts on many aspects for a very long time. But for now, I’ll just provide some short summaries about some of the issues it highlights – and expand on them as I have time.

Criticising charities/3rd sector organisations

How should individuals criticise or make constructive comment on the actions of charities or third sector organisations? For instance, I know many people who would like to publicly criticise the Samaritans’ current approach, who would do so if it were Government or business doing something similar, but feel that they shouldn’t say anything because of the huge good the organisation has done in other spaces.

Others I’ve spoken to automatically assume that because an organisation which aims to do good has done this, it is stamped with some mark of authority (with presumed research or due diligence carried out), or is automatically a definite good.

As we see more innovation in the third sector, and see NGOs and charities coming online, and engaging with digital and tech, there will be a large number of problems – just as there have been messes in other sectors. Mistakes may lead to a bad day at work, or someone’s last day at that job, but they rarely systematically affect the standing of the organisation once seen to be corrected.

How will individuals or society feel engaging with these organisations or offering even constructive criticism in other messes? How do these charities/NGOs accept, as Government has had to do with the creation of GDS, that they might not always be at the forefront of digital/social, and how do they change to fit that? How do they feel about perhaps a changed role in digital where they may not always be able to offer *the* voice of authority on specific issues, and instead understand a greater democratisation of expertise? And how does all of this happen in such a way that doesn’t discourage innovation in this sector, but doesn’t prevent questioning if well meaning actions may cause harm?

Because it’s possible does that mean it’s OK?

As ever with technological developments, practice runs ahead of legislation – with businesses often pushing boundaries to maximise their benefit (profit?) and to establish new norms before laws catch up. Indeed, with the Samaritans Radar, it’s true there are a number of existing ways of doing similar monitoring – for instance keeping an eye on search terms or creating your own code. Or just skim reading tweets of individuals (although that allows people the ability to delete tweets within a short period of time).
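Creating your own monitoring code really does take only a few lines. A sketch of the kind of keyword matching involved (the watch words here are invented – the Samaritans have not published their list, and any real system would be more sophisticated):

```python
# A minimal sketch of tweet keyword monitoring. The watch words
# are invented placeholders; a real system would use a curated
# list and smarter text processing. Tweets would typically be
# obtained via the Twitter API rather than supplied by hand.
WATCH_WORDS = {"struggling", "hopeless"}  # illustrative only

def flags_tweet(tweet_text, watch_words=WATCH_WORDS):
    """Return True if any watched word appears in the tweet."""
    words = tweet_text.lower().split()
    return any(w in watch_words for w in words)
```

The point is that the barrier to this kind of monitoring is trivially low, which makes the questions of when it is acceptable all the more pressing.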

Many people know that businesses, organisations, and individuals use the Twitter API to capture tweets, and subject them to all types of analysis for a whole range of reasons – from money making to academic research and data journalism. However, a large number of people aren’t aware of this, or don’t know the extent to which this happens – raising interesting ethical questions regarding informed consent about data which may be used to identify an individual even if it isn’t ‘personal data’ by any legal definition.

Are all these ‘OK’? What is an ‘OK use’ of such data and who should be involved in helping to define what is OK and what isn’t? Should any use of data produced be considered OK from a legal perspective? And how do we ensure people are informed about these expectations?

If there is nuance to be found here, around uses, then ideas of who is to be impacted should possibly be considered.

Furthermore, charities should be aiming for a higher standard than just legality. Charitable organisations should also be able to win a moral case. People also have a degree of expectation that profit making companies may be interested in exploitation – but are less likely to expect this from a charity.

Informed consent on platforms

Can you presume that individuals consent to something automatically if they don’t know about it?

In most ethical frameworks, the answer to this would be no – but how can you ensure online that an individual has not only actually read something, but also understood it and is able to consent (for reasons of language, age etc)?

Obviously, that’s presuming an individual can even be certain of being directly told/asked about the possible uses of the data they produce – which clearly isn’t the case. The Samaritans Radar shows some of the issues here, with the Radar being automatically opt-in. New Twitter users won’t necessarily hear about it, Twitter users outside specific circles won’t hear about it, and many people outside the UK (where the media coverage was heaviest) won’t hear about it. Is it ‘OK’ to presume these people have all opted in – even when effectively they have not been given any ability to opt out?

Should it be the case that we as service users are directly informed/asked about use of data? Should we as users of the web assume that companies have the right to sell data about us in return for our using services? Are there any other models that may work?

Who owns data online/on Twitter

When you use any website there is a lot of data floating from one geographic ‘real world’ location to other real world locations. This data may be data that an individual produces (clicked on this link, visited this website, said this thing, was in this location), or the data that is provided to an individual (the downloaded website on the browser, the email alert, the picture of a map).

Similarly, when you use a service – an app, or a social media site, for instance – you create data and consume data. For example, when you tweet, one piece of data created is the tweet itself – but there is also other data, such as the time of the tweet, optional location, photos, and the context that comes from previous (or later) tweets. Much of this can be acquired through the Twitter API, which anyone with sufficient technical knowledge can use.

Data is subject to a number of laws, each open to interpretation (depending on perspective) – for instance data protection (e.g. how data is stored and processed, and who has access), copyright, data ownership, duty of care, and likely many other areas.

According to the terms and conditions of Twitter, all data including the tweets made are owned by Twitter.

EDIT 6th Nov: Thanks Graham Triggs.

Twitter Terms of Services state that:

You retain your rights to any Content you submit, post or display on or through the Services. By submitting, posting or displaying Content on or through the Services, you grant us a worldwide, non-exclusive, royalty-free license (with the right to sublicense) to use, copy, reproduce, process, adapt, modify, publish, transmit, display and distribute such Content in any and all media or distribution methods (now known or later developed).

So – you as a user own the content, but grant Twitter a license. What does this mean in effect?

EDIT ends

How these laws and terms and conditions play out, what these ToS mean versus how users of those sites perceive the situation is fascinating.

What is the impact on how users may use a service if they become more aware that tweets are not considered the users data? Does this mean that what an individual says on twitter cannot be considered ‘personal data’ even if it is data that says something personal about the user?

What is the moral responsibility of a platform holder to be transparent about exactly how they treat the data being produced, and is it OK to hide this deep in terms and conditions when it is likely that people *will* produce data that can be monitored, aggregated, and used both for and against them?

How public or private are social media sites?

Paul Bernal has written a great blog post about this. But what is clear (especially looking at #SamaritansRadar) is that there are no real agreements about whether tweets are fully public or not. It’s a fairly new, technologically enabled, slightly grey zone – yes, tweets can be found by any individual who goes looking (either directly or using the API) – but does that mean they are *totally* public or should be treated as such? Social interaction tends to rely upon degrees of friction, and online social interaction is subject to less friction than we are used to in the offline world. What new norms do we have to get used to and think about in an online world, and how do we deal with the fact that people in different countries, of different ages etc., will perceive this issue differently?

And if we do just accept that all that data is publicly available and owned by someone else, then perhaps users need to be made more aware about this (and by whom is a really interesting question, as no one has an incentive to do this). There may also be place for safer social networks to be set up – although what this may look like, I have no idea.

Online design

It’s getting ever easier to create apps and software – and this has potential for much good in the world – hopefully meaning that in the distant future we will have more diverse groups creating tools to solve their own needs.

But there are also possible negative impacts. How can we tell the difference? As designers, platform owners or as users of a service? And what do we do as a result? What are the responsibilities and culpability of each of these individuals?

There are many ‘online spaces’ which now exist, and altering the environment of these can impact many people a designer or platform owner may never have thought of. When you create something, what responsibility do you have as a designer to engage a wide range of individuals and carry out significant research?


Even if research is carried out, what is a ‘sufficient’ evidence base to establish a case for and subsequently introduce a new innovation into an existing online environment? Is there a difference in the level of evidence needed for an online environment (where interactions are more social) than an offline environment? The answers are likely to vary dependent upon how significant any proposed changes are, who is likely to be impacted, how many individuals, and the possible outcomes of any alterations.

Is there a responsibility to engage with existing users of that platform to share research outputs/insights? Or, as a service user, is the onus on individuals to leave if they become unhappy with changes that are made?

Whose voice is important – and how do you make sure you hear a balanced perspective?

Let’s say an app creator does carry out some research – how do you ensure that you don’t get trapped in echo chambers? Digital and social media are great, but it’s very easy to get trapped in echo chambers and to forget about other possible users (as I covered here).

It’s also crucial to remember that people may not feel able to raise their voice to vocalise their opinions. For instance there are individuals who support the Samaritans Radar and individuals who really detest it who don’t feel able to vocalise their opinions on the platform.

And what qualifies as a ‘balanced’ opinion? If you are affecting a minority of individuals, does it matter if you negatively impact them? Are sheer numbers the most important issue?

Tech innovation for the sake of innovation

There is a huge tendency at present for tech solutionism – the idea that more technology or an app can solve almost any societal problem. This is problematic, as it limits the potential solutions individuals and organisations may try to find.

Maybe, with the Samaritans Radar, no-one thought ‘maybe we could have promoted advertisements in a depressed person’s timeline’, or no-one thought that a concerted advertising campaign reminding young people to look out for each other might work – but these may have better long-term impacts on solving a people problem than an app.

How do we stop ourselves and others limiting our problem solving framing around the idea of building more and more technology to deal with our existing lives?

Digital skills in charities/NGOs

If you don’t know much about digital, tech or social media, it’s really easy to be wowed by agencies or individuals who come in, waving their hands and saying they have the solution to all your problems. With the world becoming more digital and data-heavy, as a charity or NGO you need staff who have skills in, and an understanding of, issues relating to technology, data and digital. And many otherwise excellent individuals don’t have these skills (yet). While this is true in many organisations, it may be especially true in those which lack a strong incentive to keep skills up to date, or the willingness to spend money on staff training or new hires.

How do charities keep up to date with digital or social media skills? There are some great organisations and individuals working out there – but are there good ways of helping the charity sector improve on this more rapidly?

New media engagement

One of the digital skills that needs to be thought about is social media engagement and comms work.

Old comms work was heavily focused on press releases and lots of fanfare on radio, newspapers and TV – and that was it. That’s no longer enough. Any communications plan needs to consider how to engage with people on social media sites, *especially* if your project is about social media, and especially if there is already an existing community of those you are trying to work with/for.

Ignoring people with specific comments – people who, thanks to expectations set by the platform and other companies, now expect to be engaged with online – makes you look bad, causes significant ‘brand’ damage, and risks frustrating those individuals. Even worse, if they are a group of vulnerable individuals (*cough* those with mental health issues *cough*) you risk causing significant agitation, and if you are a charity this potentially has a huge impact upon your future donations. The Samaritans have done all of this in their response to the Samaritans Radar.

How organisations choose to do this is something that marketers are still trying to solve, and it’s true that many organisations haven’t caught up in this space yet. But regardless, you need to feel confident engaging online and in digital spaces.

Academic involvement in projects

What is the ethical responsibility of academics involved in public projects? I’m all for greater academic participation in building a more evidence-informed world, but there needs to be some balance between the desire for impact (driven by current UK research policy?), the desire for peer academic scrutiny, and culpability (for any possible negative repercussions). This applies to all manner of academic involvement – not just projects like the Radar – and I imagine this is something various research/science engagement people will grapple with in the medium term.

Interesting links

  • Jon Mendel has written some excellent posts here and here

Who should a charity/NGO be attempting to please?

Any charity or membership organisation has to have at least one eye on the people who are likely to give them money. They are also likely to have other people to engage with – service users, or a community. How an organisation chooses to balance these needs is challenging – and not something that we see being discussed publicly often.

International nature of the Internet

It’s easy to forget that the Internet is truly global. Data comes in from, and ends up in, different geographic locations – an issue not explored often enough, even in technology policy.

Laws, social norms, ethical expectations, and language use are not standardised across the world – and it’s easy to forget that, especially if you have a specific framing and have not deliberately reached out to understand other contexts.

There are few international standards on these issues, yet we’re operating on a platform that supersedes national boundaries.

With the Samaritans Radar, presuming the app is not only processing tweets from UK Twitter accounts (which is likely to be the case), the app is using data that has originated in many geographic locations around the world. The points above around data protection and copyright are likely to differ between countries, as are issues around research ethics and responsibility for how data and information can be used.

Furthermore, there are some interesting issues around what happens if you keep getting alerts about someone in a totally different time zone whom you can never help. How do you, as an individual, feel? How responsible are the creators of the app for making you feel that way?

I would be very interested to know how individuals in Africa and Latin America, where there is often an existing distrust of white, Western ‘do-gooders’, feel about an organisation in the UK processing their tweets.

Final thoughts

I really don’t want to see organisations stop innovating and thinking about new ways to solve problems. But that doesn’t stop there being a lot of interesting questions, especially related to ethics, that haven’t yet been explored, let alone brought to any form of resolution. And the Samaritans Radar is a fascinating case study for many of these issues, as it concerns such a sensitive topic (mental health) on such a well-used social media platform.

I’d love to see more conversations take place beyond the very heavy focus on privacy/data protection vs. ‘charity doing good’ that we’ve seen so far – and I will update this post as I see some of these conversations starting to take place.

My thinking around this is very much evolving – so I imagine I’ll be making changes/edits to this page, and doing a bunch of blogging on the side over the next week. But I see this as having the potential to be the starting point of some very interesting conversations about online technology.