The more successful it is, the harder it is to see
When integrated seamlessly into the development process, UX research is a force that guides decision-making, gives teams conviction in their direction of travel, and mitigates potential pitfalls.
But as anyone working in the field will tell you, when it works it fades into invisibility. Nothing ever seems more natural than success and you never see the path that you didn’t go down.
It is easy to point to high-profile project failures that could have been prevented by robust research, but research is rarely given full credit for triumphs.
The more successful research is, the harder it seems to see. And to measure.
And in 2023 – when hundreds of thousands of words have been written about the value of research; dozens of books have made waves in UX and business circles about how much research to do and when; and every MBA course will bang on and on about the importance of informed decision-making – measuring the value of research is still a problem.
Earlier this week, a highly experienced UX researcher came to me and laid out a situation. She was putting together a proposal for the chief marketing officer of a software company.
This company has problems: their site, the sole channel through which they convert potential customers into sales, is vastly under-performing. They’re getting traffic, but people just aren’t buying. Something isn’t clicking (literally, in the users’ case).
This is a classic role for qualitative research: uncovering the ‘why’ behind customer behaviour.
So this CMO definitely has a need. He looked at her proposal, looked at the activities planned, looked at the proposed outputs. And didn’t get it. He pushed back with one question: “How do you measure the success of user research?” And he wanted numbers, numbers, numbers.
But there are no numbers!
It is tempting to measure research itself, quantifying the process in micro-detail to lend it credibility with stakeholders who seem sceptical, especially when it comes to budgeting. At that point research becomes an aesthetic: a list of the activities conducted, nicely rendered user journeys, the page count of a report. None of these measure actual outcomes.
It mistakes the process for the subject. Research informs what should happen later; measuring it in itself yields only vanity metrics.
And still, as designers we often need to convince clients of the value of research. But if research leads to the avoidance of costly mistakes – something that is difficult to demonstrate – and measuring the process itself is of limited value, how can the case be made?
It starts with reframing research as a factor in project success: not a cost, or even an investment, but almost insurance. If you give me 1 per cent of your project budget, and three weeks, I can double or triple the chance of your project succeeding.
Three good arguments
There are three key arguments I make these days for research:
Failure breeds respect – even for something that’s invisible.