TalentSpring – can it work?

Since I’ve recently had hiring a new employee on the mind I was interested to read about TalentSpring (on TechCrunch). This is a site that hopes to pour a little web2.0 sauce on a resume/recruiting site.

Its schtick is that when you submit your resume you are presented with a series of pairs of accomplishments from other prospective job seekers and, hot-or-not style, must pick the one you find more compelling. The idea is that, for someone looking to hire, wading through a huge stack of resumes to find the interesting ones is difficult and time consuming, so the site uses the wisdom of crowds to float the best ones to the top.
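TalentSpring hasn't published how it turns these pairwise votes into a ranking, but the standard technique for this kind of data is an Elo-style rating update, where each "A beats B" vote nudges A's score up and B's down in proportion to how surprising the result was. A minimal sketch (all names and numbers here are made up for illustration):

```python
# Hypothetical sketch: turning pairwise "which accomplishment is more
# compelling?" votes into a ranking with Elo-style updates. This is NOT
# TalentSpring's actual algorithm, just the generic technique.

def expected(r_a, r_b):
    """Probability that A beats B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

def update(ratings, winner, loser, k=32):
    """Apply one pairwise vote: `winner` was picked over `loser`."""
    ea = expected(ratings[winner], ratings[loser])
    ratings[winner] += k * (1 - ea)   # surprising wins move scores more
    ratings[loser]  -= k * (1 - ea)

# Three fictional candidates, all starting at the same baseline.
ratings = {"alice": 1500.0, "bob": 1500.0, "carol": 1500.0}
for winner, loser in [("alice", "bob"), ("alice", "carol"), ("bob", "carol")]:
    update(ratings, winner, loser)

ranked = sorted(ratings, key=ratings.get, reverse=True)
# ranked is now ["alice", "bob", "carol"]
```

The appeal of a scheme like this is that no single voter sees or sets the global ranking; each only answers one local question at a time.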

In principle this sounds like a great idea. I know how tough it can be to receive a stack of undifferentiated resumes and narrow them down. Given that, and the apparent broad success of community voting systems, it would seem like a slam dunk, but I have a concern. Say I am a prospective job seeker presented with all these other job seekers looking for work. If they are in the same sector as me, why would I vote for the most qualified one? I would always vote for the weaker candidate in order to boost my own chances of looking good and getting hired. And if they are not competing with me, how am I able to judge whether their accomplishments are actually impressive? Things that seem hard to people outside a profession are often quite easy, and vice versa.

So the problem is this: in the cases where a job seeker is qualified to judge relative merit, the candidates will generally be in their own field, which means they are judging potential competitors and have a powerful incentive to lie and surface the weaker person. And in the cases where they are not competing with the people they rate, they will generally not be qualified to say who has more merit.

Is there something here that I am missing? Because that problem seems like a really big one. I'm sure they've thought of it and worked out a solution, but it is not clear to me what that could be. As a prospective employer, I would look at their data set with a bit of skepticism, and since the quality of their data is the key to their business, that is not a good skepticism to engender.

UPDATE: Found some more details. alarm:clock gives this quote about the very subject of gaming the system:

“Erroneous votes will be detected and discarded using our advanced mathematics,” says Talent Spring. If you develop an unusually large pattern of incorrect votes your account will show a voting score of “F”.

Clearly they're thinking about this and putting serious work into making these ratings good. Unfortunately, this system still requires the vast majority of voters to be honest in order to detect fakers. In a normal situation I'd believe that was true, but in a circumstance where someone's livelihood is at stake I'm not convinced. If even a small but non-trivial group of people lie, and they happen to be the first bunch to rate certain pairs, the system may decide that the wrong answer is actually the correct one, reducing its ability to detect serial gamers. And beyond that, rating outside your area of competence is a crap shoot, so those votes are right or wrong essentially at random.
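To make the early-liar failure mode concrete: suppose "erroneous votes" are detected by scoring each voter on how often their picks agree with the per-pair majority (one plausible reading of the "F" voting score; the real mechanism isn't public). If the liars get to a pair first and form the majority, it is the honest voter who gets flagged. A sketch with entirely made-up voters and pairs:

```python
# Hypothetical sketch of majority-agreement vote scoring, one plausible
# reading of TalentSpring's "F" voting score. All data is fabricated.
from collections import Counter, defaultdict

def voter_scores(votes):
    """votes: list of (voter, pair_id, pick). Returns, for each voter,
    the fraction of their picks matching the majority pick on that pair."""
    by_pair = defaultdict(Counter)
    for _, pair, pick in votes:
        by_pair[pair][pick] += 1
    majority = {pair: counts.most_common(1)[0][0]
                for pair, counts in by_pair.items()}
    agree, total = Counter(), Counter()
    for voter, pair, pick in votes:
        total[voter] += 1
        if pick == majority[pair]:
            agree[voter] += 1
    return {v: agree[v] / total[v] for v in total}

# Two liars vote first and pick the weaker candidate; one honest voter
# picks the stronger one. The liars ARE the majority on this pair.
scores = voter_scores([
    ("liar1",  "pair1", "weak"),
    ("liar2",  "pair1", "weak"),
    ("honest", "pair1", "strong"),
])
# scores: the honest voter gets 0.0 agreement, the liars get 1.0,
# so the detector flags exactly the wrong person.
```

This is of course the degenerate case; with many honest voters per pair the majority usually recovers. The worry is precisely the pairs where it doesn't.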

I also found myshoggoth, which sounds like someone who works at TalentSpring. He goes on to discuss a whole different vector of problems they have come across. I'm impressed at the depth of thought, but still not convinced that the site can work reliably.
