New Estimize Scoring System and Leaderboards

While you probably noticed the design overhaul on Estimize, you may have missed the updated scoring system and leaderboards that we rolled out yesterday. After listening to the community’s feedback and holding many internal debates, we realized that our existing scoring system wasn’t meeting the goals we originally set out to achieve and needed revisiting.

What didn’t we like about our old scoring system?

  • It was very difficult to know whether an estimate improved or degraded your overall score
  • Scores didn’t take into account how well you performed compared to the rest of the community. In other words, you might do very well on an estimate relative to every other user, yet your score could still decline if the release was particularly difficult to call (Research In Motion, for example).
  • It was not clear or intuitive how your score was calculated. We did some fancy mathematics to decay scores over time, but the approach was much too dense for users to follow.

Goals of our new scoring system

  • Users should easily and clearly be able to see how each estimate made their score increase or decrease
  • It should encourage and reward users for making more estimates
  • It should reward greater accuracy and aggressive estimates
  • New users should be able to jump in and start competing immediately with seasoned users

So what did we come up with?

Seasons

First and foremost, we now have seasons! While we’ll continue to keep all-time scores, we’re going to place a larger emphasis on your score for the current season.

  • Winter 2013: January 1 – March 31
  • Spring 2013: April 1 – June 30
  • Summer 2013: July 1 – September 30
  • Fall 2013: October 1 – December 31
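
Since seasons line up with calendar quarters, mapping any date to its season is straightforward. Here’s a small illustrative Python helper based on the boundaries listed above (our own sketch, not code from the Estimize site):

```python
from datetime import date

# Season names by calendar quarter, per the boundaries listed above.
SEASONS = {1: "Winter", 2: "Spring", 3: "Summer", 4: "Fall"}

def season_for(d: date) -> str:
    """Map a date to its Estimize season (illustrative helper only)."""
    quarter = (d.month - 1) // 3 + 1
    return f"{SEASONS[quarter]} {d.year}"

print(season_for(date(2013, 2, 15)))  # Winter 2013
print(season_for(date(2013, 10, 1)))  # Fall 2013
```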

If you have a bad quarter, it’s okay because you can start fresh next season; however, you can still look at users’ all-time scores to see who is consistently the most accurate.

Scores

Okay, so what about the scores? Each estimate will gain or lose you a certain number of points. If you are more accurate than Wall Street, you gain points; if you are less accurate than Wall Street, you lose points. We score the EPS and revenue components of each estimate separately, then combine the points to give you a total point value for each release. The number of points you gain or lose is determined by how accurate you are compared to the rest of the community, with exponentially more points scored as your accuracy approaches perfect. Your score (all-time as well as each season) is simply the sum of the points from all of your estimates: beat Wall Street and your score goes up; lose to Wall Street and it goes down.
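
We haven’t published the exact point curve here, but a minimal sketch of the mechanics described above might look like the following Python. The function names, the power-curve shape, and the `max_points` and `exponent` values are all illustrative assumptions, not the production formula:

```python
def component_points(your_error: float, wall_st_error: float,
                     community_errors: list[float],
                     max_points: float = 10.0, exponent: float = 2.0) -> float:
    """Sketch of per-component (EPS or revenue) scoring.

    Sign: positive if you beat Wall Street's error, negative otherwise.
    Magnitude: grows with your percentile rank against the community,
    rising steeply (a power curve here) as accuracy approaches perfect.
    The curve shape and point scale are assumptions for illustration.
    """
    # Percentile rank: fraction of the community you were more accurate than.
    beaten = sum(1 for e in community_errors if your_error < e)
    rank = beaten / len(community_errors) if community_errors else 0.0
    if your_error < wall_st_error:
        return max_points * rank ** exponent        # beat Wall Street: gain points
    return -max_points * (1.0 - rank) ** exponent   # lost to Wall Street: lose points

# EPS and revenue are scored separately, then combined per release;
# a season score is just the sum of release totals.
eps = component_points(0.02, 0.05, [0.01, 0.03, 0.04, 0.06])
rev = component_points(40.0, 30.0, [25.0, 35.0, 50.0, 60.0])
release_total = eps + rev
```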

Bonus points

The other positive aspect of this scoring system is that it allows us to reward certain activities with bonus points. We’ve given deep thought to the philosophy behind awarding bonus points that aren’t necessarily tied to an analyst’s accuracy, and we’ve concluded that as long as the action taken by the analyst adds value to the community as a whole, it deserves an incentive.

For example, we will soon be giving bonus points for estimating 2-8 quarters in advance and revising those estimates on a regular basis. We’ve also been testing ideas around awarding bonus points for estimating on specific stocks that are undercovered by Wall Street. The opportunities to mold the data set to produce maximum value are amazing, but we will strive to protect the integrity of the ranking system by always keeping it focused on accuracy.
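
To make the long-horizon idea concrete, here is one hypothetical bonus rule in that spirit. The 2-8 quarter eligibility window comes from the paragraph above; every point amount is invented for illustration:

```python
def long_horizon_bonus(quarters_ahead: int, revisions: int,
                       bonus_per_quarter: float = 1.0,
                       revision_bonus: float = 0.5) -> float:
    """Hypothetical bonus for estimating far in advance and revising often.

    Only the 2-8 quarter eligibility window is from the post; the point
    amounts here are made up for illustration.
    """
    if not 2 <= quarters_ahead <= 8:
        return 0.0  # bonus applies only to long-horizon estimates
    return bonus_per_quarter * quarters_ahead + revision_bonus * revisions

print(long_horizon_bonus(quarters_ahead=4, revisions=3))  # 5.5
print(long_horizon_bonus(quarters_ahead=1, revisions=3))  # 0.0 (too near-term)
```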

A philosophical divide

Most of our internal discussions focused on what the scoring system should accomplish philosophically. Was it there simply to incentivize certain actions, as a game mechanic that helps the company build its dataset, or should it be the most rigorous possible measure of how good an analyst each user is? These are obviously the two extremes of the argument, and they aren’t necessarily mutually exclusive, but at the end of the day we landed somewhere in the middle.

We landed in the middle because we have begun developing a host of different ways to measure analysts outside of the scoring system. These deeper statistical models will let the community view an analyst from many different perspectives, independent of the incentive structures we have laid out in the scoring system. For example, one major measure of an analyst’s value is how accurate their estimates were 6-12 months before the report. We call this the “early and right vs. late and wrong” measurement. While this activity is not reflected in our scoring system, it will be available as an independent statistic.
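
As a sketch of what such an independent statistic could look like, the snippet below averages the absolute error of estimates made 6-12 months before the report. The window bounds and the error metric are our assumptions; the exact computation hasn’t been specified here:

```python
from datetime import date, timedelta

def early_and_right(estimates: list[tuple[date, float]],
                    report_date: date, actual: float) -> float | None:
    """Average absolute error of estimates made 6-12 months pre-report.

    A sketch of an "early and right vs. late and wrong" statistic; the
    window bounds and error metric are illustrative assumptions.
    """
    early = [value for made_on, value in estimates
             if timedelta(days=182) <= report_date - made_on <= timedelta(days=365)]
    if not early:
        return None  # no estimates fell in the early window
    return sum(abs(v - actual) for v in early) / len(early)

history = [(date(2012, 4, 1), 1.10), (date(2012, 9, 1), 1.25), (date(2013, 1, 5), 1.30)]
print(early_and_right(history, report_date=date(2013, 2, 1), actual=1.28))
```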

If you have any questions regarding the new system, or would like to offer feedback and ideas for future improvement, please don’t hesitate to email leigh@estimize.com.

Full Disclosure: Nothing on this site should ever be considered to be advice, research or an invitation to buy or sell any securities, please see the Disclaimer page for a full disclaimer.

