
Business - Written by Naumi Haque on Thursday, July 2, 2009 17:04 - 6 Comments

Measuring collaboration: Lessons from Shane Battier and the NBA

One of the critical challenges with enterprise collaboration (as Steve noted earlier) is determining how to measure and reward it. For inspiration on how to solve this problem, I look to a non-corporate collaborative context – professional sports, and more specifically, the NBA. In this environment, success is based largely on collaboration between players, individual and team outcomes and rewards are easily measured, and some efforts are already being made to measure the value of teamwork quantitatively.

What really propelled my thinking in this area was an article written back in February in the New York Times. “The No-Stats All-Star,” written by Michael Lewis (author of “Moneyball: The Art of Winning an Unfair Game”), highlights “a basketball mystery: a player [who] is widely regarded inside the N.B.A. as, at best, a replaceable cog in a machine driven by superstars. And yet every team he has ever played on has acquired some magical ability to win.” Specifically, the article dissects the play of Shane Battier, a collaborative team player whose value is difficult to measure using traditional basketball statistics.

[Image: Shane Battier diving]

So why look at basketball for insight on how to measure collaboration rather than some other sport? As Lewis notes, “The difference in basketball is that it happens to be the sport that is most like life.” What the author means is that basketball is not a series of one-on-one contests between individuals, as with baseball, or a series of plays determined by a coach, as with football. Rather, basketball is a truly collaborative effort, with a number of players making many subtle offensive and defensive moves simultaneously. What’s more, in basketball “the player, in his play, faces choices between maximizing his own perceived self-interest and winning. The choices are sufficiently complex that there is a fair chance he doesn’t fully grasp that he is making them.” Sound familiar?

Measurement can help illuminate the tradeoffs being made and the “cost of winning” – if you measure the right things. The most common approach to measuring a team is to try to identify the individual contributions of each team member and add them together – the box score in basketball. Fantasy sports, for example, are a great way to quantify the value of individual contributions: each statistically tracked player activity is weighted and given either a positive score (e.g. shots made, points, rebounds, assists, blocks, steals) or a negative score (e.g. shots taken, turnovers), and these are aggregated into a composite fantasy score for each game. Battier’s overall rank on Yahoo! Fantasy Basketball for 2008/2009 was 101. Not bad, but does it represent his true value as a team collaborator?
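To make the box-score idea concrete, here is a minimal sketch in Python of a composite fantasy score as a weighted sum of box-score stats. The categories, weights, and sample stat line are illustrative assumptions, not Yahoo!’s actual scoring rules.

# Illustrative composite fantasy score: a weighted sum of box-score stats.
# Categories, weights, and the sample stat line are assumptions for this
# sketch, not Yahoo! Fantasy Basketball's actual scoring system.

FANTASY_WEIGHTS = {
    "points": 1.0,
    "rebounds": 1.2,
    "assists": 1.5,
    "steals": 2.0,
    "blocks": 2.0,
    "shots_taken": -0.5,   # attempts count against the player, as described above
    "turnovers": -1.0,
}

def fantasy_score(game_stats):
    """Aggregate one game's tracked activities into a single composite number."""
    return sum(FANTASY_WEIGHTS.get(stat, 0.0) * value
               for stat, value in game_stats.items())

# Example: a modest, Battier-like stat line still yields a positive score.
stat_line = {"points": 9, "rebounds": 5, "assists": 2, "steals": 1,
             "blocks": 1, "shots_taken": 7, "turnovers": 1}
print(fantasy_score(stat_line))  # prints 17.5

The point of the sketch is simply that anything not in the weights dictionary – a dive for a loose ball, a well-timed defensive switch – contributes exactly zero to the score.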

Coaches and players know that the team impact of players is more than the sum of their parts. However, in basketball, as in traditional enterprises, managers have “measured not so much what is important as what is easy to measure — points, rebounds, assists, steals, blocked shots — and these measurements have warped perceptions of the game.” In the corporate context, this could just as easily be hours clocked, widgets produced, products sold, or deadlines met.

In basketball, one way to measure the collaborative value of a player to the team is the plus/minus or Roland rating. This is described as “the difference in how the team plays with the player on court versus performance with the player off court.” It helps quantify some of the more intangible factors such as good defence, threat of 3-point shooting, setting picks, taking charges, going after loose balls, intimidation around the rim, and the “I.Q. of where to be.”
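As a rough illustration of the mechanics, a raw plus/minus can be computed by summing the team’s scoring margin over the stretches (“stints”) a player is on the floor, and an on/off version compares the average margin with the player on court versus off. The Python sketch below assumes a simple stint-level data format and hypothetical lineups; it is not how the NBA or the Roland rating actually compiles the numbers.

# Raw plus/minus and a simple on/off comparison from stint-level data.
# The data format, lineups, and margins below are assumptions for this sketch.

def plus_minus(stints, player):
    """Sum the team's scoring margin over stints the player was on court."""
    return sum(s["margin"] for s in stints if player in s["lineup"])

def on_off(stints, player):
    """Average margin per stint with the player on court minus off court."""
    on = [s["margin"] for s in stints if player in s["lineup"]]
    off = [s["margin"] for s in stints if player not in s["lineup"]]
    avg = lambda xs: sum(xs) / len(xs) if xs else 0.0
    return avg(on) - avg(off)

# Hypothetical stints from a single game.
stints = [
    {"lineup": {"Battier", "Yao", "Brooks", "Scola", "Artest"}, "margin": +6},
    {"lineup": {"Wafer", "Yao", "Brooks", "Scola", "Artest"}, "margin": -3},
    {"lineup": {"Battier", "Landry", "Lowry", "Scola", "Artest"}, "margin": +2},
]
print(plus_minus(stints, "Battier"))  # prints 8
print(on_off(stints, "Battier"))      # prints 7.0 (average +4 on court vs. -3 off)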

In the 2008/2009 regular season, Battier was 38th in terms of pure plus/minus. With respect to how he is rewarded for his contributions, Battier is making $6.9 million in 2009, which isn’t bad, but it’s a far cry from the $22.2 million made by Tim Duncan and the $19 million made by Dirk Nowitzki (37th and 39th in pure plus/minus, respectively). As enterprise decision makers think about incentivizing and rewarding collaboration, these are the types of discrepancies they will want to look for.

But there are drawbacks to the plus/minus approach as well. Some players will have overstated plus/minus statistics because they share the court with All-Stars or because weak players substitute for them. A more dynamic and representative way to approach plus/minus would be to “adjust these plus/minus ratings to account for the quality of players that a given player plays with and against.” Dan Rosenbaum, an economics professor at the University of North Carolina, demonstrated this approach for the 2003/2004 NBA season. In an enterprise, how do you account for factors like project management, the economy, competitors, support staff, and team members when evaluating an individual? If it’s all based on team performance, what incentives can you use to drive the discretionary efforts of individuals?
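Rosenbaum’s adjustment is, at its core, a regression: the scoring margin of each stint is regressed on indicators for which players are on the floor, so a player’s coefficient estimates his contribution after controlling for teammates and opponents. Below is a minimal Python/numpy sketch of that idea; the players, stints, and plain least-squares fit are assumptions for illustration, and real adjusted plus/minus models use seasons of data and regularization.

# Adjusted plus/minus as a regression: regress stint margins on on-court
# indicators (+1 for the team of interest, -1 for opponents, 0 if off court).
# Players, stints, and margins below are illustrative assumptions.
import numpy as np

players = ["Battier", "Yao", "Duncan", "Parker"]

# Each stint: who was on the floor (and for which side), plus the margin.
stints = [
    ({"Battier": +1, "Yao": +1, "Duncan": -1, "Parker": -1}, +4.0),
    ({"Battier": +1, "Yao":  0, "Duncan": -1, "Parker":  0}, +1.0),
    ({"Battier":  0, "Yao": +1, "Duncan":  0, "Parker": -1}, -2.0),
]

X = np.array([[on_court.get(p, 0) for p in players] for on_court, _ in stints],
             dtype=float)
y = np.array([margin for _, margin in stints])

# Least-squares fit; real models use thousands of stints and ridge
# regularization because lineups are highly collinear.
coefficients, *_ = np.linalg.lstsq(X, y, rcond=None)
for name, value in zip(players, coefficients):
    print(f"{name}: {value:+.2f}")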

I’ll have to give it some more thought, but I think there are definitely some specific aspects of the statistical approach to team/individual measurement in sports that enterprises can emulate. Any thoughts on what a collaboration box score and plus/minus might look like for an enterprise?



6 Comments


Web Media Daily – Thurs. July 2, 2009 | Reinventing Yourself...
Jul 2, 2009 17:44

[...] Measuring collaboration: Lessons from Shane Battier and the NBA …Wikinomics [...]

links for 2009-07-04 « lugar do conhecimento
Jul 4, 2009 5:07

[...] Measuring collaboration: Lessons from Shane Battier and the NBA [...]

Jayanth
Jul 7, 2009 13:53

Nice post – you raise some great questions. Measurable is not always meaningful, whether in sports or in business. But in terms of effectively applying a statistical approach, pro sports management would appear to have a few advantages (mostly due to the league/union structure).

Visibility into a common, vetted set of statistics (plus raw data) for any player provides about as close to an apples-to-apples market comparison as you can get when it comes to identifying and rewarding top individual performers. No such consistency exists for the enterprise; Company A typically does not have relative measures of its PM’s performance compared to his/her counterpart at Company B. Companies are more or less blind when it comes to the outside world.

Then of course there is the flexibility in the sports world to directly act on these captured individual performance metrics (trades, free agent signings, contract extensions, minor league reassignments). I’m sure many business leaders would love to drop a ‘DNP-CD’ on a poor performer, but there are often clear and necessary barriers to this.

So I think this really starts with establishing good business and team performance metrics – and most enterprises have a lot of work to do there before they can effectively move this down to the individual level.

But then, let’s be honest… it’s all about the playoffs.

Wikinomics» Blog Archive » Sabermetrics as Mass Collaborators
Jul 10, 2009 14:49

[...] week, my colleague Naumi Haque posted about basketball stats and featured the amazing Michael Lewis article about Shane Battier. There was another good article [...]

Wikinomics» Blog Archive » The collaboration box score
Aug 4, 2009 22:12

[...] [...]

Denis
Aug 6, 2009 9:25

I think one of the big lessons here is that a bad measure of collaboration is actually worse / more misleading than traditional metrics that don’t account for it at all.

The Roland rating is a great example of that. Using Battier specifically, it would appear to put him in between two superstars in terms of contribution – and I don’t think anyone really believes that. But the failure of the model is better captured by simply looking at the front page.

The top 13 in +/- come from a total of 3 teams – Cleveland, L.A. and Orlando. Cleveland alone has 6 players in the top 20. Some of these players – such as Ben Wallace – are not good. They’re just buoyed by their strong teammates, as mentioned. And by definition, any player on a good team will likely end up rated higher than a great player on a bad team – as the latter will almost certainly be in negative territory.

While I haven’t wrapped my head around everything Rosenbaum did, a quick scan of the lists indicates that his adjustments still lead to many peculiar rankings.

So I would wager that a GM who relied on traditional metrics would likely end up making better decisions than one who focused on the Roland rating (and its various derivatives). And I think that’s the big challenge here – as you noted, there’s a long way to go in measuring collaboration in the NBA. But incremental steps in that direction might actually make a team that follows them worse.

But that’s not to say it isn’t worth exploring – if a team manages to make the full “leap” to effectively measuring and comparing collaboration, it will be a definitive competitive advantage…
