What's a good or bad score for PES?
Hey team, I'm looking to understand what counts as a good or bad score for stickiness, growth, and adoption. I read the benchmarking article, but I'm finding it difficult to interpret.
Comments
A few thoughts: one thing that's critical for adoption is which features are tagged. Since features aren't like pages, there isn't an untagged-features list, so it's up to you how many or how few features you tag. Finding the sweet spot is critical: tag too few and you could get a widely variable stickiness, but tag indiscriminately and you'll get a lower percentage than is probably reality.

There's also how you define your rules for features. For instance, I have my Export Report button tagged as one feature overall, meaning any report exported counts. I could have tagged each report's export button as a separate feature instead. There's no right and wrong there; it's whatever you think is best.

For feature adoption, the benchmarks are a good guide, but I recommend looking at the data and interpreting it yourself. Make a hypothesis: which features do you think account for 80% of the clicks? Are the actual top features missing some of yours, or do they include additional ones? What does that mean? (There's a sketch of this exercise below.)
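Here's a minimal sketch of that "what share of my features drives 80% of clicks" exercise. The feature names and click counts are invented for illustration; in practice you'd export these from your analytics tool:

```python
# Hypothetical tagged features and their click volumes (illustration only).
click_counts = {
    "export_report": 4200,
    "create_dashboard": 3100,
    "share_link": 900,
    "bulk_edit": 450,
    "archive_item": 200,
    "custom_theme": 60,
}

total_clicks = sum(click_counts.values())
running = 0
core_features = []

# Walk features from most- to least-clicked until we cover 80% of all clicks.
for feature, clicks in sorted(click_counts.items(), key=lambda kv: kv[1], reverse=True):
    core_features.append(feature)
    running += clicks
    if running / total_clicks >= 0.80:
        break

adoption_pct = len(core_features) / len(click_counts) * 100
print(f"{adoption_pct:.0f}% of tagged features account for 80% of clicks")
print("Those features:", core_features)
```

Comparing the printed list against the features you hypothesized is the interpretive step: surprises in either direction tell you something about your tagging or your product.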
With all of these metrics, I think it's well worth examining the data and coming up with your own benchmarks. The article is a good baseline, but every product has different use cases and realities that influence these numbers. For instance, does a large portion of your user base only need to log in every once in a while (i.e., end users vs. admins)? If so, make sure you segment stickiness to the correct population, or at least keep those users in mind when viewing your score. If you have a busy season, you may see a huge spike and then a huge decrease in certain numbers. Segmenting for some of these factors is critical.
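To make the segmentation point concrete, here's a minimal sketch of stickiness (daily active / monthly active) split by role. The user sets are made-up; the segment names are just the admin-vs-end-user example from above:

```python
# Invented monthly and daily active user sets, split by role (illustration only).
monthly_active = {
    "admin":    {"a1", "a2", "a3", "a4"},
    "end_user": {f"u{i}" for i in range(1, 41)},  # 40 occasional end users
}
daily_active = {
    "admin":    {"a1", "a2", "a3"},
    "end_user": {"u1", "u2", "u3", "u4"},
}

# Per-segment stickiness: share of this month's actives who showed up today.
for segment, mau in monthly_active.items():
    dau = daily_active[segment] & mau
    print(f"{segment}: {len(dau) / len(mau) * 100:.0f}% stickiness")

# The blended number averages the two populations together.
blended_dau = sum(len(daily_active[s] & monthly_active[s]) for s in monthly_active)
blended_mau = sum(len(m) for m in monthly_active.values())
print(f"blended: {blended_dau / blended_mau * 100:.0f}% stickiness")
```

With these numbers the admins come out at 75% and the end users at 10%, while the blended score lands around 16%, which describes neither group well. That's the risk of judging an unsegmented number against a benchmark.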
With these metrics, I always think it helps to say what the metric means. So instead of saying "I have 20% stickiness," say "20% of my monthly active users use my product on any given day." For feature adoption, instead of saying 15%, say "15% of my features account for 80% of my clicks." And for app retention, instead of saying 75%, say "75% of users return to my app one month after first login."
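Since the other two metrics are sketched above, here's one more minimal sketch for that retention reading: the share of users seen again roughly one month after their first login. The dates are invented, and the 30-day target with a ±7-day window is my own assumption about what "one month after" means:

```python
from datetime import date, timedelta

# Invented first-login dates and later activity days (illustration only).
first_login = {
    "u1": date(2024, 1, 3),
    "u2": date(2024, 1, 10),
    "u3": date(2024, 1, 15),
    "u4": date(2024, 1, 20),
}
activity = {
    "u1": {date(2024, 2, 5)},
    "u2": set(),
    "u3": {date(2024, 2, 16)},
    "u4": {date(2024, 2, 28)},
}

window = timedelta(days=7)  # tolerance around the one-month mark (an assumption)

retained = 0
for user, first in first_login.items():
    target = first + timedelta(days=30)
    # Count the user as retained if any activity falls near the one-month mark.
    if any(abs(day - target) <= window for day in activity[user]):
        retained += 1

print(f"{retained / len(first_login) * 100:.0f}% one-month retention")
```

However you define the window, the point stands: stating the definition alongside the number is what makes it comparable to a benchmark.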
I guess what I'm saying in all of this is that it's a good exercise to give meaning and context to your numbers for your use case. The benchmarks can be a guideline, but it's critical to understand what the numbers actually are and which factors in your product should determine your benchmark.