This is my use case and user story around Twitter and mental health. It was originally part of this blog post on the Samaritans Radar, but I removed it to make that post shorter and easier to read, and it felt clunky to include something so personal. However, it adds weight to the point I am trying to convey to the Samaritans, so I wanted to include it somewhere.
Use Case
I am a woman in the 18-35 age range (the target age for the app) who has used Twitter as part of my professional and personal life for several years. Until recently I was working in high-profile positions, but I've spent the last few months recovering from suicidal depression and anxiety.
I use Twitter in a semi-professional capacity, and although I deliberately engage with conversations around mental health, because Twitter is public I am very conscious not to be too obvious about my engagement with these issues, given the potential impact on my future professional life. My mental health problems probably make me more anxious about this than I need to be, but I am happy that my involvement can be construed solely as participation in conversations around inclusion and equality.
My user story
Let’s presume there is a piece of software being designed to flag concerning tweets to the people who follow me on Twitter.
- I want to know that such a piece of software is discreet, as I do not want to feel limited or silenced in my exploration of mental health issues. I already self-censor enough (indeed, it is part of what caused my current problems) and do not want to do so any further. I will censor myself more if I feel it has become even easier for others to monitor what I am saying publicly.
- I do not want others to feel they need to censor themselves further than they already do, and I would like those around me to feel able to continue discussing mental health issues. I have found great support simply in knowing that other people have experienced similar problems, and being able to observe and engage with those who are more open and frank has led me to some key people to talk with and some significant support.
- Having a number of emotionally insensitive but technologically adept friends, I know people who would be likely to use an app like this specifically to check up on me. I am concerned that certain individuals would rely on the app rather than spend a few seconds skimming back through my past tweets, and would end up missing the actual warning signs, which in my case are very unlikely to be picked up by sentiment analysis.
- No matter how suicidal I’ve been, I would never, ever tweet something like ‘I can’t go on’ or ‘I’m so depressed’. It wouldn’t be professional, and the possibility of someone seeing something like that makes me incredibly anxious. Anything that approaches a cry for help or a warning sign about my mental health is unlikely to be picked up by any form of basic analysis, as the sketch after this list illustrates.
- I also don’t want to cause a fuss, so knowing which specific trigger words would raise an alert would make me consciously avoid them. Furthermore, I am private about my mental health and do not want to discuss these issues with most people. In fact, having most people raise the topic causes me anxiety and makes things worse, as I don’t want my health problems to be noticeable.
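To make that concern concrete, here is a minimal sketch of what naive keyword matching looks like. It is purely illustrative: the phrase list and function name are my own assumptions, not the actual detection method behind Radar or any other app.

```python
# Hypothetical sketch of naive trigger-phrase matching.
# TRIGGER_PHRASES and flags_tweet are illustrative assumptions,
# not the detection method of any real app.

TRIGGER_PHRASES = [
    "i can't go on",
    "i'm so depressed",
]

def flags_tweet(text: str) -> bool:
    """Return True if the tweet contains an obvious trigger phrase."""
    lowered = text.lower()
    return any(phrase in lowered for phrase in TRIGGER_PHRASES)

# An explicit cry for help is caught:
print(flags_tweet("I can't go on like this"))  # True

# Guarded, professional phrasing sails straight past the filter,
# and anyone who knows the trigger list can simply avoid it:
print(flags_tweet("Taking a short break to focus on other things."))  # False
```

The second tweet is exactly the kind of thing I would actually write when struggling, and no basic analysis of its wording would flag it.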
To be frank, for me a piece of software that analyses my tweets and alerts my friends to what it considers negative content has absolutely no positive effects. Instead it would cause me harm: it would make me even more self-conscious about how I present in a public space, and it would make it difficult for me to engage in relatively safe conversations online for fear that a warning might be triggered to an unknown follower. I would then have to negotiate some odd combination of lying about my mental health (which is bad for me and makes things more confusing and difficult) or telling the truth about it (which I don’t want to do and shouldn’t need to do).