My name’s Georgiana and I’ve just started as a user researcher at Essex County Council. This isn’t just a new job for me; it’s a whole new discipline.
I’ve spent most of my career gathering and analysing information about various segments of the UK population for market research firms, so I’ve got a good knowledge of research practice.
What’s new for me is observing users’ behaviour and building a picture of their needs, rather than exploring their expectations and what they want, as you might do in a more ‘traditional’ research environment.
Starting with the basics
The first step of my journey was a hefty one, all the way up to the GDS Academy in Newcastle. There, others like me gathered to learn how they could improve their services through research and testing.
We started with the basics, learning the theory behind this user-centred approach. There were some top tips for analysing your data and presenting your findings, which I’ll definitely be using in the future.
But, for me, the most helpful things were the practical exercises we did in our groups. These took in all the essential steps for good user research: putting together user needs, identifying your users, writing discussion guides, interviewing, taking notes and, finally, analysing the data and pulling out insights and actions.
Then we moved on to some usability testing. I showed Essex’s Find a childcare provider app, and tested Blackburn’s Report a broken street lamp app. Both seem like simple tasks, but they hide a plethora of variables that can quickly throw an unsuspecting user off.
As user researchers, it often falls to us to fight for the user. This involves gathering data, lots of data. From just a 10-minute usability session, we managed to cover an entire glass wall with insights, outputs and actions.
Beating the bias
Data on its own isn’t enough. Like any other resource, the way we collate it and use it is key. We need to make sure we build an accurate picture of the users’ experience, rather than picking the stats that fit our own frame of reference.
To illustrate how this bias can creep in, each of us was asked to draw a £5 note from memory. Some drawings were elaborate and decorative, others simple and to the point. The lesson we took from this is that the users who give us the most data shouldn’t outweigh those who give less. When you’re interviewing people face-to-face and building a rapport, it can be really tempting to delve into the richness of detail they give, but their input isn’t more valuable than that of someone who gave shorter, less detailed answers.
It’s our job to take a step back from these direct interactions with users and focus on the data: what the users actually did, not just what they said. Someone who tells you that they’re really good at doing things online might actually really struggle with parts of the journey you’re testing.
This is a big ask. You have to be empathetic, keeping in mind barriers facing the user, but also not lose sight of the issues that you're trying to spot.
It was a really great experience, and I’ve shared what I’ve learned with our user research community of practice back here in Essex.
We’re hoping to put together a show and tell soon on what we’ve learned and how it applies in Essex, so keep an eye out for that.