I’m not entirely surprised by this post I just saw on BuzzFeed about how the Daily Telegraph’s tactical voting tool was coded never to recommend the SNP. Not surprised, but both a little happy and a little sad to be proven right about the use of voter apps and tools.
Before the election I became concerned about the applications and tools being created by a variety of organisations, each supposed to give a floating voter an idea of how to vote in the 2015 General Election.
There is a fundamental (and incorrect) assumption underlying these tools: that a parliamentary candidate should be considered only as a member of a party, rather than as both a party member and an individual with their own pet areas of interest and qualities. But let’s set that aside and pretend we should only be thinking about what national party policy says.
How the tools work
Many of these tools operated on a simple set of premises. A user would choose some areas of policy they were interested in. A variety of statements would then be shown to the user, without any obvious identifying information about which party had made them (although the language used and the proposals themselves often gave that away to the more politically active), and the user would choose which statements fitted best with their beliefs. An algorithm would then run over the answers and produce either recommendations for how someone might wish to vote, or a bar chart/pie chart of the user’s similarity with each political party on various issues.
Alternatively, a user may be asked a series of questions about their priorities or beliefs, with similar outputs being provided – some form of chart or suggestions for which party their beliefs most align with.
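The matching logic described above can be sketched in a few lines. This is purely my own illustration – the party names, statements, and scoring scheme are invented placeholders, not any real tool’s code – but it shows the basic shape: statements carry a hidden party tag, and the user’s choices are tallied per party.

```python
# Hypothetical sketch of a statement-matching voter tool.
# Parties and statements are invented placeholders, not real manifesto text.
from collections import defaultdict

# Each statement is tagged (invisibly to the user) with the party it came from.
statements = [
    {"id": 1, "party": "Party A", "text": "Increase funding for local transport."},
    {"id": 2, "party": "Party B", "text": "Devolve transport budgets to councils."},
    {"id": 3, "party": "Party A", "text": "Freeze energy prices for two years."},
    {"id": 4, "party": "Party C", "text": "Subsidise home insulation instead."},
]

def score_agreement(chosen_ids, statements):
    """Count how many of the user's chosen statements came from each party."""
    by_id = {s["id"]: s for s in statements}
    scores = defaultdict(int)
    for sid in chosen_ids:
        scores[by_id[sid]["party"]] += 1
    return dict(scores)

# A user who picked statements 2 and 3 matches one statement from each party:
print(score_agreement([2, 3], statements))
# {'Party B': 1, 'Party A': 1}
```

From tallies like these it is a short step to the bar chart or “your closest party” output the tools produce – which is exactly why the choice of which statements go into the pool matters so much.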
Sometimes the bits of policy would be direct quotes from statements and/or manifestos – albeit shorn of any context, with just the part the tool creator had deemed ‘relevant’ – perhaps removing a sentence or two from either side that may alter how something reads, perhaps not using the paragraph most directly comparable to other parties’ policies, or perhaps not considering that policies in seemingly unrelated areas may have an impact.
And sometimes – especially with tools that asked a series of questions – policy ideas and proposals would be condensed by the tool creator into sentences and ideas subtly different from anything the parties had originally proposed.
How can bias creep in?
My concern before the election was that, even with the best will in the world, any such tool will be full of biases – be that removing nuance in a way that accidentally changes what a policy statement originally meant, or ignoring how policies can and do interact. As an example, the Green Party’s proposal to reduce copyright to 14 years after creation may by itself seem very difficult for many creators – but as the Green Party also proposed a citizens’ basic income, the reduction in copyright would not cause destitution if both policies were brought in together.
Added to which, most organisations creating these tools, including charities and NGOs, are not politically neutral. They care about specific areas and will have accidental or deliberate political biases that emerge through these tools. Not only is it easy to accidentally remove nuance from a policy statement in a way that makes a party look bad, it would also be very easy to do so deliberately – if, say, you wanted more people to vote Labour than Green or Lib Dem, you could choose statements for the latter two parties that are less likely to be palatable to a wider audience, or summarise their policy proposals in less favourable ways.
And any questions asked will invariably contain bias. As one example, when I looked at 38 Degrees’ Vote Match, none of the topics mentioned were my top voting issues, and the questions themselves were troublesome. For instance, one of the statements I was asked whether I agreed with was:
‘Government should raise new taxes to fund the NHS’
This statement is really leading. ‘Raising new taxes’ is not the same as ‘giving more money to the NHS’: it presumes the means by which more money should be provided. It also presumes that the answer to any existing problems within the NHS is ‘a lack of money’, rather than, possibly, bad internal management structures or the wrong types of services being offered. Different technical procedures, less paperwork, or a decrease in homeopathy funding could all reduce costs for the NHS, providing an effective increase in funds – without ever ‘raising new taxes’. But the questions didn’t allow for such subtlety.
And therefore any party with a more nuanced approach to policy – one that needs a little more explaining – would be punished, as its ideas wouldn’t fit easily into a ‘yes’ or ‘no’ answer to this question.
I don’t expect most people to go away and read all the manifestos, and I do genuinely like the idea of tools and apps that people can play with to explore voting options. But I worry that these tools and apps can be used by organisations to push biases that less politically aware individuals may not notice.
I would like to see those with strong interests in politics, and some understanding of social science, begin more critical analysis of the biases that can be built into such tools – both in the language and framings used, and in the software and algorithms underpinning them. For this, it would help if the tools themselves were open source, so that the underlying code could be explored fully. Obviously, making something open source doesn’t magically mean it gets evaluated and tested – but it is a start that enables this.
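As a hypothetical illustration of why code inspection matters: a recommendation function can quietly exclude a party in a single line, invisible to anyone who only sees the tool’s output. This sketch is my own invention (placeholder party names, not any real tool’s code), but it shows how small such a thumb on the scales can be.

```python
# Hypothetical sketch: a hard-coded exclusion hiding inside an
# otherwise reasonable-looking recommendation function.
EXCLUDED = {"Party D"}  # one line, never shown to the user

def recommend(scores):
    """Return the highest-scoring party -- silently skipping excluded ones."""
    eligible = {p: s for p, s in scores.items() if p not in EXCLUDED}
    return max(eligible, key=eligible.get) if eligible else None

print(recommend({"Party D": 5, "Party A": 2}))
# Party A -- despite Party D having the higher score
```

A user testing the tool from the outside would need to notice that one party never appears, however they answer; with the source open, the exclusion is a one-line find.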
With each of these voter apps acting, in effect, as a (potentially) poorly designed survey – but one with the potential to influence voters and alter outcomes within our election and democracy – much more critical analysis of these tools is needed.