Free Software Development and Application · Open Source Community Building and Promotion
Jakob Nielsen, September 5, 1999

A reputation manager is an independent service that keeps track of the rated quality, credibility, or some other desirable metric for each element in a set. The things being rated will typically be websites, companies, products, or people, but in theory anything can have a reputation that users may want to look up before taking action or doing business. The reputation metrics are typically collected from other users who have had dealings with the thing being rated: each user indicates whether he or she was satisfied or dissatisfied. In the simplest case, the reputation of something is the average rating received from all users who have interacted with it in the past. Other systems are possible, as discussed below.

Current Reputation Managers

Reputation Manager Problems

Amazon.com pioneered the idea of customer reviews, but has been plagued by unreliable reviews (an author's enemies post a flood of negative reviews, followed by the author's friends posting glowing reviews). Also, users never know whether they can trust reviews posted on a site that profits from selling the product.

Google and eBay avoid these problems by aggregating ratings across a very large sample. Google also benefits from the fact that Web authors are reluctant to include a link unless they actually want to guide users to the destination site. Even if there are some spurious links, their effect vanishes in statistics computed across a billion pages with several billion links. eBay collects reputation rankings from the specific people who actually bought something from a seller, thus avoiding comments from random users.

Epinions is a double reputation manager: not only does it rate products and services, it also rates reviewers. After users have read a review, they are encouraged to vote on whether they found the review useful or not.
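This double-rating mechanism can be sketched in a few lines. The class and method names below are illustrative assumptions, not Epinions' actual implementation: each review accumulates usefulness votes, reviews are ranked by their share of "useful" votes, and a reviewer's status aggregates the votes received across all of his or her reviews.

```python
from collections import defaultdict

# Minimal sketch of a "double reputation manager": both reviews and
# reviewers are rated. All names here are hypothetical.
class DoubleReputationManager:
    def __init__(self):
        self.votes = defaultdict(lambda: [0, 0])  # review_id -> [useful, total]
        self.author = {}                          # review_id -> reviewer

    def add_review(self, review_id, reviewer):
        self.author[review_id] = reviewer
        self.votes[review_id]  # create an empty tally for the review

    def vote(self, review_id, useful):
        tally = self.votes[review_id]
        tally[0] += 1 if useful else 0
        tally[1] += 1

    def ranked_reviews(self):
        """Review ids, most highly rated first."""
        def share(rid):
            useful, total = self.votes[rid]
            return useful / total if total else 0.0
        return sorted(self.votes, key=share, reverse=True)

    def reviewer_status(self, reviewer):
        """Total useful votes across all of one reviewer's reviews."""
        return sum(self.votes[rid][0] for rid, who in self.author.items()
                   if who == reviewer)
```

Ranking by vote share rather than raw vote count keeps a heavily read mediocre review from outranking a little-read excellent one, which matches the goal of putting the best content on top.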
In showing lists of reviews to users, Epinions places the most highly rated reviews on top, thus assuring that readers will focus on the best content. Also, reviewers build up status depending on the user feedback on all their reviews, meaning that people will be reluctant to contribute low-quality reviews to the service. A final interesting twist is that users earn a micropayment every time somebody reads one of their reviews. Thus, people are motivated to write valuable reviews not just to gain a high reputation rating, but also to earn money.

Future of Reputation Managers

Reputation managers overcome the complaint against shop bots that they focus purely on price and ignore customer service. Once a shop bot can include an independent source of rating data, it can show users:

* what they can buy

Reputation managers will thus cause a renaissance for good customer service: the way a company treats any individual customer will be fed directly back into its reputation ranking and will influence its future sales. Investors will finally get a handle on intangible concepts like "brand equity" and "goodwill": just go to the reputation manager and look up how customers rate the company and various aspects of its service. If a company does something wrong, its reputation statistics will rapidly drop, immediately followed by a massacre of the stock valuation. If a few Belgians become sick from drinking a soft drink, the manufacturer may lose billions on Wall Street five minutes later. This is another reason reputation managers will contribute to highly improved product quality and customer service.

Jakob Nielsen, October 9, 2006

All large-scale, multi-user communities and online social networks that rely on users to contribute content or build services share one property: most users don't participate very much. Often, they simply lurk in the background.
In contrast, a tiny minority of users usually accounts for a disproportionately large amount of the content and other system activity. This phenomenon of participation inequality was first studied in depth by Will Hill in the early '90s, when he worked down the hall from me at Bell Communications Research (see references below).

When you plot the amount of activity for each user, the result is a Zipf curve, which shows as a straight line in a log-log diagram. User participation often more or less follows a 90-9-1 rule:

* 90% of users are lurkers (i.e., read or observe, but don't contribute).
* 9% of users contribute from time to time.
* 1% of users participate a lot and account for most contributions.

Early Inequality Research

In Whittaker et al.'s Usenet study, a randomly selected posting was equally likely to come from one of the 580,000 low-frequency contributors or one of the 19,000 high-frequency contributors. Obviously, if you want to assess the "feelings of the community," it's highly unfair if one subgroup's 19,000 members have the same representation as another subgroup's 580,000 members. More importantly, such inequities would give you a biased understanding of the community, because many differences almost certainly exist between people who post a lot and those who post a little. And you would never hear from the silent majority of lurkers.

Inequality on the Web

There are about 1.1 billion Internet users, yet only 55 million (5%) have weblogs, according to Technorati. Worse, there are only 1.6 million postings per day; because some people post multiple times per day, only 0.1% of users post daily. Blogs thus have even worse participation inequality than the 90-9-1 rule that characterizes most online communities; with blogs, the rule is more like 95-5-0.1. Inequalities are also found on Wikipedia, where more than 99% of users are lurkers. According to Wikipedia's "about" page, it has only 68,000 active contributors, which is 0.2% of the 32 million unique visitors it has in the U.S. alone.
Wikipedia's most active 1,000 people (0.003% of its users) contribute about two-thirds of the site's edits. Wikipedia is thus even more skewed than blogs, with a 99.8-0.2-0.003 rule.

Participation inequality exists in many places on the Web. A quick glance at Amazon.com, for example, showed that the site had sold thousands of copies of a book that had only 12 reviews, meaning that less than 1% of customers contribute reviews. Furthermore, at the time I wrote this, 167,113 of Amazon's book reviews were contributed by just a few "top-100" reviewers; the most prolific reviewer had written 12,423 reviews. How anybody can write that many reviews, let alone read that many books, is beyond me, but it's a classic example of participation inequality.

Downsides of Participation Inequality

Participation inequality is not necessarily unfair, because "some users are more equal than others," to misquote Animal Farm. If lurkers want to contribute, they are usually allowed to do so. The problem is that the overall system is not representative of Web users. On any given user-participation site, you almost always hear from the same 1% of users, who almost certainly differ from the 90% you never hear from. This can cause trouble for several reasons:

* Customer feedback. If your company looks to Web postings for customer feedback on its products and services, you're getting an unrepresentative sample.

How to Overcome Participation Inequality

The first step to dealing with participation inequality is to recognize that it will always be with us. It's existed in every online community and multi-user service that has ever been studied. Your only real choice here is in how you shape the inequality curve's angle. Are you going to have the "usual" 90-9-1 distribution, or the more radical 99-1-0.1 distribution common in some social websites? Can you achieve a more equitable distribution of, say, 80-16-4? (That is, only 80% lurkers, with 16% contributing some and 4% contributing the most.)
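As a back-of-the-envelope way to see how skewed such a distribution is, one can compute the share of all contributions produced by the most active fraction of users. The helper below is a hypothetical illustration, not taken from any of the studies cited here:

```python
def top_share(counts, fraction=0.01):
    """Fraction of all contributions made by the most active `fraction`
    of users. `counts` lists per-user contribution counts, including
    zeros for lurkers."""
    ordered = sorted(counts, reverse=True)   # most active users first
    k = max(1, int(len(ordered) * fraction)) # size of the top group
    total = sum(ordered)
    return sum(ordered[:k]) / total if total else 0.0

# A toy 90-9-1-style population: 1 heavy contributor (100 posts),
# 9 occasional contributors (10 posts each), 90 lurkers (0 posts).
population = [100] + [10] * 9 + [0] * 90
```

With this toy population, the single most active user (the top 1%) produces 100 of the 190 total contributions, i.e. slightly over half, even though 99% of users wrote the rest or nothing at all.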
Although participation will always be somewhat unequal, there are ways to better equalize it, including:

* Make it easier to contribute. The lower the overhead, the more people will jump through the hoop. For example, Netflix lets users rate movies by clicking a star rating, which is much easier than writing a natural-language review.

Your website's design undoubtedly influences participation inequality for better or worse. Being aware of the problem is the first step to alleviating it, and finding ways to broaden participation will become even more important as the Web's social networking services continue to grow.

References

Laurence Brothers, Jim Hollan, Jakob Nielsen, Scott Stornetta, Steve Abney, George Furnas, and Michael Littman (1992): "Supporting informal communication via ephemeral interest groups," Proceedings of CSCW 92, the ACM Conference on Computer-Supported Cooperative Work (Toronto, Ontario, November 1-4, 1992), pp. 84-90.

William C. Hill, James D. Hollan, Dave Wroblewski, and Tim McCandless (1992): "Edit wear and read wear," Proceedings of CHI'92, the SIGCHI Conference on Human Factors in Computing Systems (Monterey, CA, May 3-7, 1992), pp. 3-9.

Steve Whittaker, Loren Terveen, Will Hill, and Lynn Cherny (1998): "The dynamics of mass interaction," Proceedings of CSCW 98, the ACM Conference on Computer-Supported Cooperative Work (Seattle, WA, November 14-18, 1998), pp. 257-264.