We made these Inside Look posts a regular feature in the Canvas Teacher Focus Group, and it seemed to work pretty well, so I’m writing this one to see how it flies in CMUG. The basic premise is to illuminate little nuggets of our product development process for people who might be interested.
Today, we’re talking success metrics. Every major Canvas project starts in a ‘Discover’ phase, in which a product manager (PM) researches a problem until they feel comfortable with it from a bunch of different perspectives. Then the PM prepares a project summary: a high-level review of the problem, what Canvas could do about it, and how it fits with our product strategy. The PM also defines success metrics for the potential project, which take the form of, “If we do this thing, then we would expect this result by this timeframe.” Then the PM presents the project summary to leadership, who gives a thumbs-up, a thumbs-down, or a “keep digging.”
Supposing the project makes it through those gates and is developed and released, at the end of the project we measure success by the metrics we agreed upon at the outset. With the mobile apps, we usually measure success by usage and client satisfaction. In the case of the teacher app launch, we’re measuring usage by monthly active user count and client satisfaction by app store rating.
The new teacher app’s success metrics were:
- By the end of Q3, this app will have at least 15,000 monthly active users.
- By the end of Q3, this app will have at least a 4-star rating in stores.
Great news: We had over 30,000 monthly active users in the teacher app in the month of September! Whoa! By comparison, we had roughly 7,000 monthly active users in the old SpeedGrader app at this point a year ago.
Okay news: The iOS teacher app is currently at a 4.1 rating! The Android teacher app is currently at a 3.3, but we think we’ll see a bump in Android ratings now that version 1.1 is out the door.
These metrics aren’t used to get people in trouble; they’re used to compare what we expected to happen with what actually happened, and then to set better metrics the next time.
If you want to help us out, make your feelings known in app store ratings!