Reputation Managers are Happening

Jakob Nielsen, September 5, 1999

A reputation manager is an independent service that keeps track of the rated quality, credibility, or some other desirable metric for each element in a set. The things being rated will typically be websites, companies, products, or people, but in theory anything can have a reputation that users may want to look up before taking action or doing business.

The reputation metrics are typically collected from other users who have had dealings with the thing that is being rated. Each user would indicate whether he or she was satisfied or dissatisfied. In the simplest case, the reputation of something is the average rating received from all users who have interacted with it in the past. Other systems are possible, as discussed below.
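
As a minimal sketch of this simplest case (the class and names below are illustrative, not any particular site's API), a reputation manager can store the ratings each item receives and report their average:

    from collections import defaultdict

    class ReputationManager:
        """Minimal reputation store: an item's reputation is the
        average of all ratings submitted for it so far."""

        def __init__(self):
            # item -> list of ratings (e.g., 1 = satisfied, 0 = dissatisfied)
            self._ratings = defaultdict(list)

        def rate(self, item, rating):
            """Record one user's rating of an item."""
            self._ratings[item].append(rating)

        def reputation(self, item):
            """Average rating, or None if the item was never rated."""
            ratings = self._ratings[item]
            return sum(ratings) / len(ratings) if ratings else None

    rm = ReputationManager()
    rm.rate("seller42", 1)
    rm.rate("seller42", 1)
    rm.rate("seller42", 0)
    print(rm.reputation("seller42"))  # 0.666... after two good deals and one bad one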

Current Reputation Managers

* eBay (auction site) keeps reputation ratings for all the people who offer things for sale on the site. After buying a collectible in an auction, you can go back to the site and rate the seller for prompt shipping and whether the physical item actually matched the description in the auction. This is the most literal of the current reputation managers: eBay literally keeps track of the reputation of each seller. Prospective buyers can feel safe bidding on items from people they have never heard of: if the reputation ratings show that many previous buyers were treated well and thought that the textual descriptions matched the actual collectible, then the seller is almost certainly honest and worth dealing with. Also, sellers are highly motivated to offer great service to every single buyer: a single customer with a bad experience will ruin a seller’s perfect reputation rating and multiple bad experiences (quickly followed by negative ratings) will put a seller out of business for good.
* Epinions (electronic opinions) is the most interesting new reputation manager: it collects user feedback, reviews, and ratings for a wide range of products and services, from laptop computers to museums in New York. When you want to buy something, you go to Epinions first to check the reputation of the different models you are considering. You can also check the reputation of the manufacturer’s other models: do they in fact work as advertised, or do people experience problems after owning something for a while? Despite all the hype about ecommerce, it is hard to buy anything on the Web today because you never know whom to trust. A seller’s own claim that a product is great or that it meets certain needs has close to zero value. Having an independent service to guide customers to good products and warn them against lemons will be one of the most important enablers of ecommerce.
* Google (search engine) maintains a reputation rating for every site on the Web and uses this data to sort search results so that the highest-quality hits appear at the top of the list. Google derives its estimate of a website’s quality from the number of other sites that link to it (as well as some fancy math that gives greater weight to links from more important sites and less weight to links from minor sites); a small sketch of this scheme appears after this list.
* Go (search engine formerly known as Infoseek) is adding a human touch to the service in the form of so-called Guides: individuals who are experts in a certain area and provide Go with their ratings and comments on sites within that area. These comments combine to form the reputation of the sites. But more interestingly, the Guides themselves are rated for the quality and value of their contributions and rise through the ranks based on these reputation metrics. More advanced Guides (with high ratings) are responsible for larger areas of the service and have some form of management responsibility for lower-rated Guides.
* Slashdot (discussion board) lets users rate the usefulness of the various comments in a discussion thread. When reading a thread, you can set an option to show only the N highest-rated postings, thus significantly increasing your experienced signal-to-noise ratio. Unfortunately, the ability to filter out poorly rated comments is not turned on by default, so only diligent users who study the slightly confusing user interface will discover this useful feature. Slashdot also awards regular users “karma” points, which constitute a true reputation manager: if you have done well in the past, you have high karma, which in turn means that your actions carry more weight.
* Third Voice is an annotation service that allows users to write comments on any Web page in a transparent overlay layer that is shown to other users of the service. These annotations are not under the control of the website owner, since they come directly from the Third Voice server. The annotations combine into a kind of reputation for each site: certainly, they can be used to warn unsuspecting visitors about shoddy products and false or misleading advertising. Since the annotations are natural-language text, they are less useful for finding the best sites or for doing any kind of computation.
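
The link-based scheme Google uses (PageRank) can be sketched as a few rounds of power iteration over a toy link graph. The graph, damping factor, and function name below are illustrative assumptions, not Google's production system:

    def pagerank(links, damping=0.85, iterations=50):
        """Rank pages so that links from highly ranked pages count
        for more than links from minor ones. `links` maps each page
        to the list of pages it links to."""
        pages = list(links)
        rank = {p: 1.0 / len(pages) for p in pages}
        for _ in range(iterations):
            new_rank = {p: (1 - damping) / len(pages) for p in pages}
            for page, outlinks in links.items():
                if not outlinks:  # dangling page: spread its rank evenly
                    for p in pages:
                        new_rank[p] += damping * rank[page] / len(pages)
                else:
                    for target in outlinks:
                        new_rank[target] += damping * rank[page] / len(outlinks)
            rank = new_rank
        return rank

    toy_web = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c"]}
    for page, score in sorted(pagerank(toy_web).items(), key=lambda kv: -kv[1]):
        print(page, round(score, 3))  # "c" ranks highest: most inbound links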

Reputation Manager Problems

When collecting feedback from random people, the results can be random as well. Third Voice suffers from the traditional flaming problem of Usenet as well as the low signal-to-noise ratio of chat rooms. You never know whether the person who posted a comment actually knows what they are talking about or whether you are wasting your time reading some bozo’s rantings.

Amazon.com pioneered the idea of customer reviews, but has been plagued by unreliable ones (an author’s enemies post a flood of negative reviews, followed by the author’s friends posting glowing ones). Also, users never know whether they can trust reviews posted on a site that profits from selling the product.

Google and eBay avoid these problems by aggregating ratings across a very large sample. Google also benefits from the fact that Web authors are reluctant to include a link unless they actually want to guide users to the destination site. Even if there are some spurious links, they vanish when doing statistics across a billion pages with several billion links. eBay collects reputation rankings from the specific people who actually bought something from a seller, thus avoiding comments from random users.

Epinions is a double reputation manager: not only does it rate products and services, it also rates reviewers. After users have read a review, they are encouraged to vote on whether they found the review useful or not. In showing lists of reviews to users, Epinions places the most highly rated reviews on top, thus assuring that readers will focus on the best content. Also, reviewers build up status depending on the user feedback on all their reviews, meaning that people will be reluctant to contribute low-quality reviews to the service. A final interesting twist is that users earn a micropayment every time somebody reads one of their reviews. Thus, people are motivated to write valuable reviews, not just to gain a high reputation rating, but also to earn money.
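
A minimal sketch of this double scheme, assuming a hypothetical data model (Epinions' actual system is not public): each review collects helpful/unhelpful votes, reviews are listed most-helpful first, and a reviewer's standing is the average helpfulness across all of his or her reviews:

    class Review:
        def __init__(self, reviewer, text):
            self.reviewer = reviewer
            self.text = text
            self.helpful = 0      # "this review was useful" votes
            self.unhelpful = 0    # "this review was not useful" votes

        def score(self):
            votes = self.helpful + self.unhelpful
            return self.helpful / votes if votes else 0.0

    def rank_reviews(reviews):
        """Place the most highly rated reviews on top."""
        return sorted(reviews, key=lambda r: r.score(), reverse=True)

    def reviewer_reputation(reviews, reviewer):
        """A reviewer's standing: mean helpfulness of their reviews."""
        own = [r.score() for r in reviews if r.reviewer == reviewer]
        return sum(own) / len(own) if own else None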

Future of Reputation Managers

I see reputation managers as core to the success of the Web. As we get more sites, more content, and more services online, users need a way to learn what is credible and useful. Quality assessments must become an explicit component of most Web user interfaces. It is not sufficient to list millions of items for sale and leave it to users to determine what they need: not everything, and not everybody, is equal.

Reputation managers answer the complaint that shopping bots focus purely on price and ignore customer service. Once a shopping bot can include an independent source of rating data, it can show users:

* what they can buy
* where they can buy it
* how much each option costs
* how good each option is
* what level of customer service to expect from each vendor (e.g., average fulfillment delay, whether shipments usually arrive in good shape, whether the vendor is decent in dealing with returns, etc.)

Reputation managers will thus cause a renaissance for good customer service: the way a company treats any individual customer will feed directly back into its reputation ranking and will influence its future sales.

Investors will finally get a handle on intangible concepts like “brand equity” and “goodwill”: just go to the reputation manager and look up how customers rate the company and the various aspects of its service. If a company does something wrong, its reputation statistics will rapidly drop, immediately followed by a massacre of the stock valuation. If a few Belgians become sick from drinking a soft drink, the manufacturer may lose billions on Wall Street five minutes later. This is another reason reputation managers will contribute to greatly improved product quality and customer service.

Participation Inequality: Encouraging More Users to Contribute

Jakob Nielsen, October 9, 2006

All large-scale, multi-user communities and online social networks that rely on users to contribute content or build services share one property: most users don’t participate very much. Often, they simply lurk in the background.

In contrast, a tiny minority of users usually accounts for a disproportionately large amount of the content and other system activity. This phenomenon of participation inequality was first studied in depth by Will Hill in the early ’90s, when he worked down the hall from me at Bell Communications Research (see references below).

When you plot the amount of activity for each user, the result is a Zipf curve, which shows as a straight line in a log-log diagram.
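
In a Zipf distribution, the activity of the user at rank k falls off as a power of the rank, which is why the curve straightens out on log-log axes:

    a(k) \approx \frac{C}{k^{s}}
    \quad\Longrightarrow\quad
    \log a(k) \approx \log C - s \log k

where a(k) is the number of contributions from the k-th most active user, C is a constant, and s is close to 1 for classic Zipf behavior.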

User participation often more or less follows a 90-9-1 rule:

* 90% of users are lurkers (i.e., read or observe, but don’t contribute).
* 9% of users contribute from time to time, but other priorities dominate their time.
* 1% of users participate a lot and account for most contributions: it can seem as if they don’t have lives because they often post just minutes after whatever event they’re commenting on occurs.

Early Inequality Research

Before the Web, researchers documented participation inequality in media such as Usenet newsgroups, CompuServe bulletin boards, Internet mailing lists, and internal discussion boards in big companies. A study of more than 2 million messages on Usenet found that 27% of the postings were from people who posted only a single message. Conversely, the most active 3% of posters contributed 25% of the messages.

In Whittaker et al.’s Usenet study, a randomly selected posting was equally likely to come from one of the 580,000 low-frequency contributors or one of the 19,000 high-frequency contributors; in other words, the two groups accounted for roughly equal message totals, so the average high-frequency poster wrote about 30 times as many messages as the average low-frequency poster. Obviously, if you want to assess the “feelings of the community,” it’s highly unfair if one subgroup’s 19,000 members have the same representation as another subgroup’s 580,000 members. More importantly, such inequities would give you a biased understanding of the community, because many differences almost certainly exist between people who post a lot and those who post a little. And you would never hear from the silent majority of lurkers.

Inequality on the Web

There are about 1.1 billion Internet users, yet only 55 million users (5%) have weblogs according to Technorati. Worse, there are only 1.6 million postings per day; because some people post multiple times per day, only 0.1% of users post daily.

Blogs have even worse participation inequality than is evident in the 90-9-1 rule that characterizes most online communities. With blogs, the rule is more like 95-5-0.1.

Inequalities are also found on Wikipedia, where more than 99% of users are lurkers. According to Wikipedia’s “about” page, it has only 68,000 active contributors, which is 0.2% of the 32 million unique visitors it has in the U.S. alone.

Wikipedia’s most active 1,000 people — 0.003% of its users — contribute about two-thirds of the site’s edits. Wikipedia is thus even more skewed than blogs, with a 99.8-0.2-0.003 rule.
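
The percentages above follow from simple arithmetic on the reported figures (numbers as of 2006):

    # Figures as quoted in the text above.
    internet_users = 1.1e9   # total Internet users
    bloggers = 55e6          # users with weblogs (Technorati)
    daily_posts = 1.6e6      # blog postings per day

    print(bloggers / internet_users)     # ~0.05: 5% of users have weblogs
    # 1.6M daily posts over 1.1B users is ~0.15%; since some bloggers post
    # several times per day, the share of users posting daily is ~0.1%.
    print(daily_posts / internet_users)  # ~0.0015

    wikipedia_visitors = 32e6   # unique U.S. visitors
    active_contributors = 68_000
    core_contributors = 1_000

    print(active_contributors / wikipedia_visitors)  # ~0.002: the 0.2%
    print(core_contributors / wikipedia_visitors)    # ~0.00003: the 0.003%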

Participation inequality exists in many places on the Web. A quick glance at Amazon.com, for example, showed that the site had sold thousands of copies of a book that had only 12 reviews, meaning that less than 1% of customers contribute reviews.

Furthermore, at the time I wrote this, 167,113 of Amazon’s book reviews were contributed by just a few “top-100” reviewers; the most prolific reviewer had written 12,423 reviews. How anybody can write that many reviews — let alone read that many books — is beyond me, but it’s a classic example of participation inequality.

Downsides of Participation Inequality

Participation inequality is not necessarily unfair in the sense that “some users are more equal than others,” to misquote Animal Farm: if lurkers want to contribute, they are usually allowed to do so.

The problem is that the overall system is not representative of Web users. On any given user-participation site, you almost always hear from the same 1% of users, who almost certainly differ from the 90% you never hear from. This can cause trouble for several reasons:

* Customer feedback. If your company looks to Web postings for customer feedback on its products and services, you’re getting an unrepresentative sample.
* Reviews. Similarly, if you’re a consumer trying to find out which restaurant to patronize or what books to buy, online reviews represent only a tiny minority of the people who have experiences with those products and services.
* Politics. If a party nominates a candidate supported by the “netroots,” it will almost certainly lose because such candidates’ positions will be too extreme to appeal to mainstream voters. Postings on political blogs come from less than 0.1% of voters, most of whom are hardcore leftists (for Democrats) or rightists (for Republicans).
* Search. Search engine results pages (SERP) are mainly sorted based on how many other sites link to each destination. When 0.1% of users do most of the linking, we risk having search relevance get ever more out of whack with what’s useful for the remaining 99.9% of users. Search engines need to rely more on behavioral data gathered across samples that better represent users, which is why they are building Internet access services.
* Signal-to-noise ratio. Discussion groups drown in flames and low-quality postings, making it hard to identify the gems. Many users stop reading comments because they don’t have time to wade through the swamp of postings from people with little to say.

How to Overcome Participation Inequality

You can’t.

The first step to dealing with participation inequality is to recognize that it will always be with us. It’s existed in every online community and multi-user service that has ever been studied.

Your only real choice here is in how you shape the inequality curve’s angle. Are you going to have the “usual” 90-9-1 distribution, or the more radical 99-1-0.1 distribution common in some social websites? Can you achieve a more equitable distribution of, say, 80-16-4? (That is, only 80% lurkers, with 16% contributing some and 4% contributing the most.)

Although participation will always be somewhat unequal, there are ways to better equalize it, including:

* Make it easier to contribute. The lower the overhead, the more people will jump through the hoop. For example, Netflix lets users rate movies by clicking a star rating, which is much easier than writing a natural-language review.
* Make participation a side effect. Even better, let users participate with zero effort by making their contributions a side effect of something else they’re doing (a minimal sketch of this idea appears after this list). For example, Amazon’s “people who bought this book, bought these other books” recommendations are a side effect of people buying books. You don’t have to do anything special to have your book preferences entered into the system. Will Hill coined the term read wear for this type of effect: the simple activity of reading (or using) something will “wear” it down and thus leave its marks — just like a cookbook will automatically fall open to the recipe you prepare the most.
* Edit, don’t create. Let users build their contributions by modifying existing templates rather than creating complete entities from scratch. Editing a template is more enticing and has a gentler learning curve than facing the horror of a blank page. In avatar-based systems like Second Life, for example, most users modify standard-issue avatars rather than create their own.
* Reward — but don’t over-reward — participants. Rewarding people for contributing will help motivate users who have lives outside the Internet, and thus will broaden your participant base. Although money is always good, you can also give contributors preferential treatment (such as discounts or advance notice of new stuff), or even just put gold stars on their profiles. But don’t give too much to the most active participants, or you’ll simply encourage them to dominate the system even more.
* Promote quality contributors. If you display all contributions equally, then people who post only when they have something important to say will be drowned out by the torrent of material from the hyperactive 1%. Instead, give extra prominence to good contributions and to contributions from people who’ve proven their value, as indicated by their reputation ranking.
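
The “side effect” idea in the second bullet above can be sketched as a co-occurrence count over purchase histories. The data and function name are hypothetical, and Amazon's real recommender is certainly more sophisticated:

    from collections import Counter
    from itertools import permutations

    def co_purchase_recommendations(orders):
        """For each item, count which other items appear in the same
        orders. Buying is the only 'contribution' users ever make."""
        co_counts = {}
        for order in orders:
            for a, b in permutations(set(order), 2):
                co_counts.setdefault(a, Counter())[b] += 1
        return co_counts

    orders = [["book_a", "book_b"],
              ["book_a", "book_b", "book_c"],
              ["book_a", "book_c"]]
    recs = co_purchase_recommendations(orders)
    # "People who bought book_a also bought..."
    print(recs["book_a"].most_common(2))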

Your website’s design undoubtedly influences participation inequality for better or worse. Being aware of the problem is the first step to alleviating it, and finding ways to broaden participation will become even more important as the Web’s social networking services continue to grow.

References

Laurence Brothers, Jim Hollan, Jakob Nielsen, Scott Stornetta, Steve Abney, George Furnas, and Michael Littman (1992): “Supporting informal communication via ephemeral interest groups,” Proceedings of CSCW ’92, the ACM Conference on Computer-Supported Cooperative Work (Toronto, Ontario, November 1-4, 1992), pp. 84-90.

William C. Hill, James D. Hollan, Dave Wroblewski, and Tim McCandless (1992): “Edit wear and read wear,” Proceedings of CHI ’92, the SIGCHI Conference on Human Factors in Computing Systems (Monterey, CA, May 3-7, 1992), pp. 3-9.

Steve Whittaker, Loren Terveen, Will Hill, and Lynn Cherny (1998): “The dynamics of mass interaction,” Proceedings of CSCW ’98, the ACM Conference on Computer-Supported Cooperative Work (Seattle, WA, November 14-18, 1998), pp. 257-264.

SWiK-Source

WIKI + TAG + OpenSource

SWiK-Source is the source code of the swik.net core engine. It is licensed for distribution under the GNU General Public License v2.

SWiK-Source is meant for people who are curious about the code that powers SWiK, for people who want to run their own SWiK for internal use or for a new website, and for use in SourceLabs’ internal and partner projects.

SourceLabs has released SWiK-Source as a source-code release rather than a packaged release: it is a complicated system built with many assumptions about its custom production servers rather than about arbitrary or generic systems.

SWiK-Source is built on and integrated with the platform combination of PHP, MySQL, and Apache HTTPd. These applications are not packaged as part of the distribution; however, SWiK-Source requires these and other third-party open source components to run properly.
Components you must either have or obtain and install before setting up SWiK-Source:

1. PHP 5.0 (see below for specific configure line)
2. Apache 2.0
3. MySQL 4.1
4. Ruby
5. Java
6. libxml
7. libjpg
8. libpng
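
Before attempting setup, a short script along the following lines can verify that the command-line tools these components typically provide are on the PATH. The command names are assumptions for illustration (consult the SWiK-Source documentation for the authoritative requirements); libxml, libjpg, and libpng are linked libraries rather than commands, so they are not checked here:

    import shutil

    # Hypothetical prerequisite check; not part of the SWiK-Source release.
    required = ["php", "httpd", "mysql", "ruby", "java"]

    missing = [cmd for cmd in required if shutil.which(cmd) is None]
    if missing:
        print("Missing prerequisites: " + ", ".join(missing))
    else:
        print("All required commands found.")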

Zhang Shuren: From Social Software and Web 2.0 to Research on Complex Adaptive Information Systems

The doctoral dissertation of Zhang Shuren, School of Information, Renmin University of China: “From Social Software and Web 2.0 to Research on Complex Adaptive Information Systems.”


Klaudius and His Blog

I have “known” klaudius for a long time now; I can’t remember how many years it has been, or how we first got to chatting.
I only remember that back then he was still a biology student studying in Guangzhou (?), with his own writing and his own ideas; he was quite distinctive, in his writing, his name, and his experiences alike.

Then, just a couple of days ago, we organized the XOOPS Beijing meetup, and I suddenly received a message from klaudius (he had already changed his name on MSN) saying he wanted to come along. Getting the message was a pleasant surprise: a chance to finally see this remarkable friend.
His message said two others were coming with him, one of them ynzheng. Wasn’t that the same ynzheng I had known long ago? When I came back last year we had agreed that I would look him up on a visit to the Yonghe Temple, but I never made it.
The day before the meetup, a friend recommended an article to me on MSN. I no longer remember exactly what it said, but it turned out the article came from ynzheng’s blog …

I count myself as a developer: I have written a fair amount of blog software and modified some more, yet I rarely follow any non-technical blogs, partly because time is tight, and partly because I dislike blogs that talk to themselves, amuse only themselves, and put on airs of profundity.
klaudius’s blog, and the other things he is working on, are genuinely interesting, though, so I am excerpting one post here for the record.

(We did meet on the day of the gathering; all three of them were thin, frail, scholarly-looking types, rather like me in build.)
