The Wisdom of (Gamer) Crowds
By Howard Rheingold, published 10 May 2007, 8:12 pm.
Henry Jenkins has summarized and commented on others' posts about the use of online games to aggregate the judgements of populations. In the process, Jenkins uses his commentary to draw distinctions between "the wisdom of crowds" and "collective intelligence":
The idea of using games to collect the shared wisdom of thousands of players seems a compelling one -- especially if one can develop, as Edery proposes, mechanisms for linking game play mechanics with real world data sets. Indeed, Raph Koster -- another games blogger who has been exploring these ideas -- does Edery one better, pointing to a project which actually tested this concept:
What [Byron Reeves] showed was a mockup of a Star Wars Galaxies medical screen, displaying real medical imagery. Players were challenged to advance as doctors by diagnosing the cancers displayed, in an effort to capture the wisdom of crowds. The result? A typical gamer was found to be able to diagnose accurately at 60% of the rate of a trained pathologist. Pile 30 gamers on top of one another, and the averaged result is equivalent to that of a pathologist -- with a total investment of around 60-100 hours per player.
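The claim that piling 30 gamers on top of one another matches a single pathologist is essentially the Condorcet jury effect: if each participant is independently right more often than chance, a majority vote is right far more often than any individual. The sketch below is a toy simulation of that effect, not the Star Wars Galaxies experiment itself; the 60% figure is taken from the quote, and everything else (binary diagnoses, independence, simple majority voting) is an illustrative assumption.

```python
import random

def majority_vote_accuracy(p_individual, n_voters, n_trials=10000, seed=42):
    """Estimate how often a strict majority of independent, equally
    skilled voters gets a binary judgement right.

    p_individual: probability each voter is correct (assumed independent).
    """
    rng = random.Random(seed)
    correct = 0
    for _ in range(n_trials):
        votes = sum(1 for _ in range(n_voters) if rng.random() < p_individual)
        if votes > n_voters / 2:  # ties count as a miss
            correct += 1
    return correct / n_trials

# A lone diagnoser who is right 60% of the time...
print(majority_vote_accuracy(0.6, 1))
# ...versus a crowd of 30 voting together, which is right far more often.
print(majority_vote_accuracy(0.6, 30))
```

The crowd's edge depends entirely on the independence assumption; if the gamers all made the same correlated mistakes, the majority would simply amplify them.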
At the risk of being annoyingly pedantic, however, this debate keeps getting muddied because participants are blurring important distinctions between Surowiecki's notion of the Wisdom of Crowds and Pierre Levy's notion of Collective Intelligence. Edery uses the two terms interchangeably in his discussion (and to some degree, so does Koster), yet Surowiecki and Levy start from very different premises which would lead to very different choices in the game design process. Surowiecki's model seeks to aggregate anonymously produced data, seeing the wisdom emerging when a large number of people each enter their own calculations without influencing each other's findings. Levy's model focuses on the kind of deliberative process that occurs in online communities as participants share information, correct and evaluate each other's findings, and arrive at a consensus understanding.