Testing vs Endorsements

There is an interesting trend taking hold in the early stage startup world around testing and accreditation of skills. The growth of this industry is primarily a reaction to the uselessness of most undergraduate degrees, along with their onerous cost in time and money. I am a major cheerleader for this ecosystem of startups trying to give us all the ability to simply prove that we can do something technical and do it well. This country needs a larger base of labor with a high technical skill level; we have too many liberal arts majors working at the Gap.

Meanwhile, LinkedIn is quietly building what I believe will be the most important social graph on the web by having its community endorse each other for specific skills. Having a well respected person within a skill set or industry endorse you is an extremely powerful thing. In the startup world we call this social proof. I have been very loud about the fact that someone should build the Klout of social capital measurement on the back of all the amazing data LinkedIn has. I am sure they are thinking about this at LinkedIn as well, but it may be dangerous for them. Giving people a score is going to be controversial; it was for Klout, and it still is.

What I want to explore here is the difference between testing and endorsements, and how each is and will be used.

It comes down to this. You can throw as many statistics in someone's face as you want about how well they scored on their tests, but unless you connect those stats to a personal story or a personal recommendation, they matter very little. This is what we call predictably irrational behavior. If we trust the test to be a good measure of the skill set, then why don't we weight the score more heavily in our consideration of the individual?

We struggle with this every day at Estimize. No matter how accurate a specific analyst has been, people still want to know how and why they made the estimate they made, who they are, what qualifications they have, on and on. Without that other information the statistics themselves are greatly devalued in the mind of the viewer.

Endorsements and recommendations are how the world works, whether you like it or not. In the startup world, venture capital investors will often ignore a company until it has achieved social proof by getting some other investor to go in first, or by generating a ton of buzz. When I raised money for Surfview Capital, the asset management firm I ran successfully for two years, I learned that people give money to people; they do not give money to track records or strategies. It's just the way it is.

People generally don’t want to make decisions for themselves and have a herding mentality. We trust a person when someone we know trusts them. We don’t trust statistics. It all comes back to agency risk. If someone who is trusted tells you that they trust someone, and you hire them, and they turn out to suck, you can always blame the hire on that person you trusted, who everyone else trusted too. But if you hire someone no one else knows on the basis of some test score, you have no cover; it’s all on you.

By no means am I saying that people should act this way. This system is exactly what opens the door for risk takers and innovators who shun the need for social proof before stepping into something or hiring someone. You can also find great values by ignoring the crowd and investing in the startup or the employee that no one else has taken an interest in yet.

There is certainly a market for both testing and endorsement-based scores. But at the end of the day, endorsements will always be far more valuable because they tap into a very core behavior regarding how we choose to associate with and trust people, while statistics need to be augmented with a story and another layer of trust. People will believe statistics, but they won’t internalize them until they are connected with something else.

And this is why, as we begin to build the premium layer on top of the Estimize platform, which will look a lot like Starmine, we are not only taking into consideration the statistics around how accurate a given analyst has been. It has to go deeper than that. There has to be a measure of influence, a measure of reliability, a measure of longevity, a measure of social reputation, and maybe a measure of endorsement. I can definitely see us factoring some of that LinkedIn endorsement data into our models. I can also see us factoring in someone having achieved a CFA designation, or being part of the Society of Securities Analysts.

Both testing and endorsements matter, but if I were building a new startup today that was trying to measure social capital, I would build it on the back of the LinkedIn data first, and testing second.


Full Disclosure: Nothing on this site should ever be considered to be advice, research or an invitation to buy or sell any securities, please see the Disclaimer page for a full disclaimer.
