How many happy users did your product have this month?
The other day, I mulled over the problem that a bad user experience can result in more usage of a product.
And I followed up by wondering how we could get to a “happy user” metric.
When measuring the success of software products, data is king. When I moved from DevRel to program management, my world got a lot more dashboards and charts. I found myself spending more time looking at user feedback. I learned to ask questions to find the real issue causing users’ dismay instead of just adding a feature to satisfy their request. I started feeling despair looking at the noise-to-signal ratio you get in feedback channels. And I started to question our razor-sharp focus on numbers and continuous growth.
We love growth and numbers. It is pretty depressing how much we treat growth as the only thing that matters. Users, download numbers, sessions, minutes of usage: all of them have to go up from version to version, or you failed.
This makes sense when you are growing and when the funding of your product depends on user numbers. It is the main thing empty products like social platforms have. You need users to get content and engagement. You always need more of them to be able to tell advertisers that you have so-and-so many users. I remember working on app stores at another company and wanting to bang my head on the table. People measure the success of a store by the number of apps, not by their quality. At one point I met with a company offering to create 500 apps a week for a certain price. That way our app store could look bigger than the others. You’ve seen apps like these: “paint a cat”, “paint a dog”…
The big issue is that our fairytale success stories in software are the fast-growing platforms millions of people use. Are you as successful as Facebook? Are you growing as fast as TikTok? Do you have as many active users as MySpace? Oh, wait…
Not all products are there to always grow. Some products are specialised, and you don’t even expect people to use them all the time.
Many of the things I work on in the developer tooling space are things you only use when there is a certain problem. The happy path for the user is simple: open the tool, use it to find the problem to fix, close the tool, and forget about it until the next time.
No growth there, but something more important: helpfulness to the user. If the tool not only allows you to track down a problem but also gives you valid advice on how to fix it, even better. I love when a tool educates me about a problem. I am happier, and I even learned how to avoid the issue in the future. Which means I won’t need to use the tool again any time soon.
For a tool creator who has to show continuous growth, this is a problem. The only way to increase usage numbers is to ensure the problem happens more often. And that is beyond the tool creator’s reach.
The question then is: how do you measure the success of such tools? And even more interesting: how do you defend a drop in numbers when you fixed an issue?
The other day we encountered the issue that a heavily used product also got a lot of feedback. Negative feedback, telling us that people didn’t know what it was or why it was there, and that they wanted to get rid of it. This is something you want to dive into. Unhappy user feedback is high priority, especially when your product confuses its users. People put in the effort to tell you about a problem; you should reward them for that.
As it turns out, there was a simple way to accidentally hit the keyboard shortcut that opens the product. For power users, keyboard shortcuts are amazing. That’s why developer-focused tools have dozens of them. To normal folk, they can appear as witchcraft.
The solution was to add an extra step the first time the keyboard shortcut is triggered. We show a message explaining what is about to happen, which product will open, and whether that is what you wanted. We offer never to open it this way again for people who triggered it by accident, and never to show the message again for everyone else.
We didn’t come up with this clever way of dealing with the problem ourselves. We did user research and tested various ways to work around it.
This will cause a dip in our usage numbers for sure, but it also means that all we lose are accidental, unhappy users. Which is great. But what if I worked elsewhere and the next bonus or round of funding for my product depended on more users each month?
The interesting thing here is that we have proof that those users were not happy getting there. So we can point to the feedback data and show the drop-off in negative feedback as a win for happy users. We can only do that because we have an easy-to-reach, easy-to-use feedback mechanism in our tool. And this is where far too many products fail, in my book. If you don’t ask your users how they like what you do and allow them to complain about problems, you can’t find issues like these.
We need a new metric of happy users. But I am still at a loss as to how to get to one. One thing I do know, though, is that chasing more users or interactions for the sake of growth doesn’t make better products.