Student comments 11/26/2008

 

Brett Harrison

 

This paper explores the problem of whitewashing, in which a user can create multiple pseudonyms in order to wipe his reputation clean or simply ruin the value of reputation in the systems he participates in. For example, on eBay, if a seller receives lots of negative feedback, that seller can wipe his slate clean by creating a new account. The paper characterizes equilibria in the cases where users cannot change their identity and where they change their identity with some probability. The paper also discusses some methods for preventing multiple identities, including introducing a cost per identity and the option to use a once-in-a-lifetime identifier. The former would unfortunately impose a cost on the first identity as well and so would discourage entry into the system.

 

I wonder how much the following scheme has been explored: a system designates one person (or possibly a group of people) to be the "trustworthy" assessor of reputation. When a person creates a new identity, they engage in some handshake/trust-gaining protocol with the trustworthy person. Whenever transactions occur, both parties must receive information from the trustworthy person about the other party to the transaction in order to decide whether to cooperate or defect. While this involves trusting a single source, that wouldn't be unreasonable: for example, I would trust the creator of eBay to tell me whether an eBay seller is trustworthy. Also, while this might introduce some cost, the cost would be associated with establishing trust rather than with the actual creation of the account, so it might not discourage participation as much as, say, a monetary or high time cost attached to just creating a new user name.
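
 

A minimal sketch of how such a trusted-assessor service might look, assuming invented names and a naive trust rule (a real deployment would need authentication and a genuine handshake protocol):

# Hypothetical sketch of the "trustworthy assessor" idea: a single trusted
# party records transaction outcomes and is queried before each new
# transaction. Names and interfaces are invented for illustration.

class TrustedAssessor:
    def __init__(self):
        self.history = {}  # identity -> list of True (cooperated) / False (defected)

    def register(self, identity):
        # The "handshake": the assessor starts tracking a fresh identity.
        self.history.setdefault(identity, [])

    def report(self, identity, cooperated):
        # Both parties report the outcome of a finished transaction.
        self.history.setdefault(identity, []).append(cooperated)

    def assess(self, identity):
        # Parties must consult the assessor before transacting.
        records = self.history.get(identity)
        if not records:
            return "newcomer"          # no track record yet
        good = sum(records) / len(records)
        return "trustworthy" if good >= 0.9 else "untrustworthy"

assessor = TrustedAssessor()
assessor.register("seller42")
assessor.report("seller42", cooperated=True)
assessor.report("seller42", cooperated=False)
print(assessor.assess("seller42"))     # -> "untrustworthy"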

 

Zhenming Liu

 

This paper formalizes the study of the social cost of pseudonyms by introducing a repeated prisoner's dilemma game with trembles. While many trivial equilibria exist when trembles are not present, the authors show that these equilibria are not stable once trembles are introduced. Beyond this exciting result, the paper makes a few other interesting observations. The following are a few of my questions/confusions about this paper:

 

1. How representative is the prisoner's dilemma game? In particular, the fact that in each round players are paired up and each of them interacts with only one agent in the group sounds quite unrealistic. On the other hand, I am not sure whether their results can easily be generalized to multiple players (is this direction a good extension of their work?).

 

2. Why can malicious players be analyzed in a way similar to models with trembles (as suggested on page 181)? Many of the results in this paper hold under the condition that epsilon is sufficiently small, while for malicious players this epsilon can be arbitrarily close to 1. Furthermore, the sentence "(malicious players) like to see others suffer and thus will choose actions that cause a general increase in the level of defection" does not make much sense to me either. Specifically, if malicious players are happy to see others suffer, shouldn't we modify the payoff function in the first place instead of trying to introduce the notion of trembles? Closing the gap between the behavior of malicious players and the results for models with trembles would be interesting.
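
 

To see concretely why trembles matter, here is a small simulation (my own sketch, not the paper's model): two grim-trigger players who each tremble with probability eps lock into mutual defection after the first accidental defection, so the trivial cooperative equilibrium collapses. Payoff values are the usual textbook ones, not the paper's.

import random

# Two grim-trigger players in a repeated prisoner's dilemma, each
# "trembling" (playing the unintended action) with probability eps.
# Illustrative payoffs: both cooperate -> 3, both defect -> 1,
# lone defector -> 4, lone cooperator -> 0.

def payoff(me, other):
    if me == "C":
        return 3 if other == "C" else 0
    return 4 if other == "C" else 1

def simulate(eps, rounds=1000, seed=0):
    rng = random.Random(seed)
    triggered = [False, False]   # grim trigger: defect forever after any defection
    total = [0, 0]
    for _ in range(rounds):
        intended = ["D" if t else "C" for t in triggered]
        played = [a if rng.random() > eps else ("D" if a == "C" else "C")
                  for a in intended]
        total[0] += payoff(played[0], played[1])
        total[1] += payoff(played[1], played[0])
        if "D" in played:
            triggered = [True, True]
    return [t / rounds for t in total]

print(simulate(eps=0.0))    # ~[3.0, 3.0]: full cooperation is sustained
print(simulate(eps=0.01))   # ~1 per round: one tremble ends cooperation forever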

 

Travis May

 

In "The Social Cost of Cheap Pseudonyms," Friedman and Resnick discuss the cost of users being able to easily switch their pseudonyms on websites.  Cheap pseudonyms, while welcoming newcomers, also enable users to immediately escape from negative reputations.  Thus, newcomers are conflated with potential non-cooperators and are punished with general distrust.

 

The paper is most interesting in proposing several potential solutions to the problems it outlines.  As one remedy, the authors propose a centralized intermediary that could track true identities and ensure that each true identity is assigned only a single pseudonym.  The intermediary, it is argued, would need to be a monopoly (or at least have complete access to full information on identities assigned by other intermediaries).  The paper proposes auctioning off the rights to this monopoly, requiring government intervention and social action for the issue to be resolved.  Instead, I would suggest that the problem could easily be resolved by large, trusted companies (such as Google or Microsoft) offering this service immediately.  These companies, both of which are seeking to build common log-on systems where users can sign on to multiple sites by logging in once, could require a credit card to validate identities and ensure that users create only a single account (with the identification information encrypted to preserve anonymity).  The account could then offer a username-generation tool that allows users to create a username for a particular website (allowing distinct usernames on different websites if desired).
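
 

A sketch of how such a service might tie one validated credit card to one account while still issuing distinct per-site usernames; the hashing scheme, secret, and function names are my own assumptions, not a description of any actual Google or Microsoft service:

import hashlib, hmac

# Hypothetical sketch: derive one account key per validated credit card,
# then derive distinct per-site usernames from that key. A real system
# would need proper card validation, key management, and a salting policy.

SERVICE_SECRET = b"server-side secret key"   # known only to the intermediary

def account_key(card_number: str) -> str:
    # One card -> one account: the same card always maps to the same key,
    # so a second registration attempt is detected as a duplicate.
    return hmac.new(SERVICE_SECRET, card_number.encode(), hashlib.sha256).hexdigest()

def site_username(key: str, site: str) -> str:
    # Distinct-looking usernames per site, all tied to one underlying account.
    return "user_" + hashlib.sha256((key + ":" + site).encode()).hexdigest()[:10]

key = account_key("4111111111111111")        # standard test card number
print(site_username(key, "ebay.com"))
print(site_username(key, "forum.example"))   # different name, same single account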

 

Hao-Yu Su

 

The idea of once-in-a-lifetime identifiers is great, if it actually works. I think there are still some difficulties in implementation. The first problem is how to prevent one user from obtaining several different identifiers. The author mentions that a user could hold multiple identifiers in the names of her friends, but doing so carries a significant cost. However, what if a fake ID number can be easily generated? For example, suppose there is an ID-number generator on the Internet that can be used whenever such information is required by a website. With this tool, users could easily pass the ID inspection and acquire as many identifiers as they want. In that case, malicious players may profit even more from the trust people place in those once-in-a-lifetime identifiers. For these reasons, I think it is essential for this mechanism to have a credible way to secure its one-identifier-per-person policy.
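
 

This worry is easy to demonstrate: many ID numbers are validated only by a public checksum, so syntactically "valid" numbers can be mass-produced. A small sketch using the Luhn check digit (the real checksum behind credit card numbers; several national ID schemes use similar checks):

import random

# Numbers that pass a pure checksum test can be generated at will.
# The Luhn algorithm below is the actual check used for credit card
# numbers; the point is that syntactic validity proves nothing about identity.

def luhn_checksum(digits):
    total = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:          # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10           # 0 means the number is "valid"

def generate_valid(length=16, seed=None):
    rng = random.Random(seed)
    body = [rng.randrange(10) for _ in range(length - 1)]
    check = (10 - luhn_checksum(body + [0])) % 10
    return body + [check]

fake = generate_valid(seed=1)
print("".join(map(str, fake)), "passes:", luhn_checksum(fake) == 0)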

 

Haoqi Zhang

 

The main contribution of the paper is in showing that when agents have the ability to change their identities, the amount of cooperation in the system is limited by the dues-paying equilibrium, in which newcomers have to establish a reputation before being trusted. The authors then present extensions: adding a low cost for entry to deter low-payoff users, and the use of a standard pseudonym to which the agent commits.

 

I find the game-theoretic analysis interesting in that it provides some rigor and concreteness to the problem. However, I am not too clear on the motivation of the paper. The main question I have is this: what is wrong with a dues-paying equilibrium? In particular, given a plethora of information, unless the newcomer provides very good information, there is good reason to discount that user's participation. In fact, a dues-paying equilibrium can be very effective in enforcing quality on a site. For example, dailyKos, a liberal political blog, uses a policy where users can register but cannot post for a week, and the community moderates itself via troll labeling and recommendations for articles. This system gives new users who want to contribute an incentive to do better (by earning a higher reputation and by enjoying the high quality of content), and they can still be recognized via rankings and recommendations from other users. As another example, shopping communities such as slickdeals.net use a point system to rank deals, where the best deals are promoted to the front page and users receive reputation points from others when their deal is 'hot'. In these cases, bad deals are quickly detected and forgotten, but good deals (regardless of who posted them) gain recognition quickly because of their ratings.

 

Other mechanisms, such as asking users to post a real name (e.g., you cannot be smileyface23), can help deter trolls and promote cooperation and trust as well.

 

Malvika Rao

 

I found the paper "The social cost of cheap pseudonyms" to be very interesting. It is a brave attempt to find a solution to the problem of multiple and new identities on the internet without compromising the flexibility and accessibility of online systems.

 

Yet the solutions proposed appear to be "patch" solutions rather than solutions that are more fundamental to the nature of the problem. Admittedly this is a very difficult problem.

 

For example, in the case of once-in-a-lifetime identifiers, how is an intermediary selected, and how is its integrity guaranteed? What are the incentives for this intermediary to perform its job correctly? The scheme also discourages people from having multiple identities even when each identity plays cooperatively and always behaves toward the social good. Why should that be punished?

 

The "pay your dues" (PYD) model seems a better bet. While it does lead to

some inefficiencies it seems to be a more natural mechanism that meshes

well with the philosophy of the internet.

 

It is interesting to try to think of a mechanism where subsequent identifier registrations are costly but not punished unless a deviation occurs. Unfortunately, there appears to be no natural and "automatic" way of differentiating between the first registration and subsequent registrations.

 

The paper "The value of reputation on eBay" reveals that established

identities fared better than new seller identities. This is unsurprising

and presents a natural incentive for sellers to keep their identities in

the long-run.

 

Sagar Mehta

 

The main contribution of this paper is that it provides a game-theoretic analysis of the social cost of cheap pseudonyms. The authors model social interaction on internet sites as a repeated prisoner's dilemma game and try to find equilibrium strategies under different assumptions. Some of the equilibria that the authors present have undesirable aspects that introduce new inefficiencies (e.g., mistreating newcomers excludes individuals with low payoffs), but the most interesting mechanism to me was the use of free, unreplaceable pseudonyms. Though the authors (writing in 2001) expected the use of unreplaceable pseudonyms to "blossom", this has not been the case in the real world. This may be because implementing such a system on top of existing technologies would be difficult. The cost for eBay to overhaul its user account system to include unreplaceable pseudonyms seems rather high given the number of existing users. If eBay feels the current reputation mechanism works relatively well, it may not want to make the switch. What other reasons are there for unreplaceable pseudonyms not being used more often?

 

The repeated prisoner's dilemma game used by the authors, while a good first step, doesn't seem to capture the true interactions of some online communities. In the introduction, the authors mention an online discussion forum where mothers with premature babies came to talk. A fake player participating in this forum gains something from "tricking" the other people, but every other player incurs a cost due to his presence. Interactions in a discussion forum don't take place one on one (i.e., players are not matched pairwise in each round); instead, each player affects every other player's payoff in every round (if I post to the forum as a fake pregnant mother, I am hurting everyone). I don't think the model takes this dynamic properly into account. I also think the payoff matrix and the motivations of the fake player should be considered more deeply.
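
 

One way to make this concrete is a toy round-payoff function in which every defector imposes a cost on every other participant at once, instead of on a single matched partner. The payoff numbers below are arbitrary assumptions chosen only to illustrate the one-to-many externality:

# Toy alternative to pairwise matching: in each round, every member of the
# forum is affected by every defector. Payoff numbers are assumptions.

def round_payoffs(actions):
    # actions: list of "C" (honest participation) or "D" (e.g., a fake poster)
    n = len(actions)
    defectors = actions.count("D")
    payoffs = []
    for a in actions:
        if a == "C":
            # each honest member gains 1 per honest peer, loses 2 per defector
            payoffs.append((n - defectors - 1) * 1 - defectors * 2)
        else:
            # a defector gains from every honest member it can exploit
            payoffs.append((n - defectors) * 3)
    return payoffs

print(round_payoffs(["C"] * 6))           # all honest: [5, 5, 5, 5, 5, 5]
print(round_payoffs(["C"] * 5 + ["D"]))   # one fake poster hurts all five: [2, 2, 2, 2, 2, 15]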

 

Avner May

 

I thought this article did a great job analyzing the general issue of cooperation on the internet, and how the ability to change one's identity affects equilibria in this setting.  The authors analyzed the pros and cons of different strategy profiles in repeated games of the prisoner's dilemma, which I thought was a very reasonable approach.  The prisoner's dilemma is a game in which cooperation is the socially optimal outcome, but defecting is profitable on an individual level.  Thus, everyone is best off if everyone cooperates, but one person could take advantage of this system of trust and benefit personally by defecting.  Therefore, if a person of this sort can change their "name" and keep their true identity hidden, there is no way to know if someone is not trustworthy (good reputations take time to acquire, but bad reputations can be immediately erased).  Because of these properties, I think the prisoner's dilemma game is a good choice as a model for cooperation on the internet.  I think that the solution proposed by the authors, of creating a cryptographic service that offers "once-in-a-lifetime identifiers," is a very good idea and solves many of the inefficiencies in the system.  However, as the authors note, it still raises some very interesting questions regarding privacy and the size of each arena.  This article is a good counterpart to the eBay article we read, as it presents a theoretical model for a system in which good reputations are valued but take a while to acquire.
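
 

For concreteness, the properties described above can be checked against the conventional textbook payoff values (T=5, R=3, P=1, S=0; these are not the paper's numbers):

# Conventional prisoner's dilemma payoffs (textbook values):
# T = temptation to defect, R = reward for mutual cooperation,
# P = punishment for mutual defection, S = sucker's payoff.
T, R, P, S = 5, 3, 1, 0

assert T > R > P > S        # defecting against a cooperator pays best...
assert 2 * R > T + S        # ...yet mutual cooperation maximizes total welfare

# Defection strictly dominates in a single round:
assert T > R                # better to defect if the other cooperates
assert P > S                # better to defect if the other defects

print("mutual cooperation total:", 2 * R, "> unilateral defection total:", T + S)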

 

Michael Aubourg

 

First of all, people who want to reveal their own identity do exist.

Hence, online forums should offer this option. People who frequent a particular forum and have nothing to hide should be able to reveal themselves (as in real life).

 

The paper argued that relying on email addresses cannot work, which is true. But this is because there is no (or low) cost to creating a new email account.

The problem would then be solved if new email accounts were not free.

Furthermore, another solution would be to rely on people's IP (Internet Protocol) addresses, even though that is not 100% reliable.

 

Finally, a solution could be to assign a set of once-in-a-lifetime Internet identifiers to each human being, say 3 identifiers per person. Governments would keep the list secret, that is to say, the matching between a real person and his or her 3 life-pseudonyms. Hence, if you do not want to reveal yourself (if you want to buy something embarrassing, for instance), you use one particular pseudonym, but that one never changes. If you need to reveal yourself to friends, employers, etc., you use another one, the "main" one.

 

Alice Gao

 

The main contribution of this paper is to characterize strategies in which different costs are associated with presenting a new identity versus an old identity that carries a history of actions.  The paper presents a model using the prisoner's dilemma game to illustrate different strategies involving cooperating or defecting.

 

My first comment is regarding the basic model being used.  The prisoner's dilemma encompasses two extreme scenarios, so a person has to choose one extreme or the other.  I have doubts about how well this model represents the interactions of people in online communities.  Things are rarely that extreme in real life.

 

One attractive thing about this paper is that it uses the classic approach of trying to explain real-life interactions with direct use of game theory concepts.  I think one beauty of game theory is that it is theoretically well formed and elegant, despite the fact that it might not give you anything directly applicable to real-world scenarios.  In my opinion, the propositions are all very elegant, theoretically sound, and attractive.  However, there is no guarantee that user behaviour in real online communities will actually approach one of the equilibria defined.

 

I like the idea of the once-in-a-lifetime identifiers a lot because it seems to me to be a sound theory.  I guess one difficulty with this scheme concerns the choice of the intermediary.  It sounds like this role is only suitable for someone omniscient who does not care about what is actually going on between the users he/she is dealing with.  It might be difficult to find such a person in real life.

 

Nick Wells

 

This paper discusses the importance of a positive reputation as a determinant of social interaction on websites. When it is easy to recreate oneself with a cheap pseudonym, others have a hard time assessing the worth of that person's services or participation. The paper shows that the dues-paying equilibrium sustains the most cooperation. The aim of such a system is to create a cost for participation such that those who would provide negative value lose the incentive to join.

 

This is interesting especially in the context of websites. One example that comes to mind is that of dating websites where trust can be a very important factor especially for women. There are plenty of sites which are not successful when they simply provide free accounts. eHarmony overcomes this problem with a substantive charge to participate. Yahoo! Personals uses a similar strategy. Free sites on the other hand tend to attract mostly male users and can generate a lot of user activity but it is probably of a different nature.

 

Andrew Berry

 

This paper proposes a system of anonymous certificates in which, for each "social arena," a person is given a single identifier that is unrelated to the user's true identity. These "once-in-a-lifetime" identifiers cause a participant to effectively commit to carrying one reputation across the arena. This commitment provides a reputation-signaling device for other players in the arena. The paper was overall very effective in explaining the model. The repeated one-shot prisoner's dilemma game was a natural example that did a nice job of illustrating the reputation mechanisms. I don't quite understand the claim at the beginning of the paper that newcomer distrust can be entirely eliminated when a newcomer is only distrusted if a veteran player in the previous round did something wrong (perhaps this is a poor explanation of the grim trigger strategy?). I think the PYD strategy is very robust and a well-described reputation mechanism. The only question left unanswered by implementing this strategy is: how long does a new entrant pay dues? The other major question I had about this work was in regard to the payments for identifiers. The payment scheme where new entrants' dues are redistributed across the other players in the system seems like it would be very effective in practice. However, the paper claims that such a scheme would invalidate the exit process of the model; does this have a negative effect in application? Additionally, the authors claim that the solution does not work if players' expected lifetimes are heterogeneous. This is not readily apparent to me and should be explained in the paper.
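
 

The kind of anonymous certificate described here is typically built from Chaum-style blind signatures, which let the intermediary certify a pseudonym it never sees. A toy RSA blind-signature sketch, with insecure textbook-sized numbers for illustration only:

import math

# Toy blind signature, the primitive behind once-in-a-lifetime identifiers.
# Tiny insecure numbers for illustration; a real system needs full-size RSA,
# padding, and a check that each person gets only one signature per arena.

# Intermediary's RSA key (textbook-sized, NOT secure).
p, q = 61, 53
n = p * q                      # 3233
e = 17
d = pow(e, -1, (p - 1) * (q - 1))

# 1. User picks a pseudonym m and a blinding factor r coprime to n.
m = 1234                       # in practice, a hash of the chosen pseudonym
r = 2001
assert math.gcd(r, n) == 1
blinded = (m * pow(r, e, n)) % n

# 2. Intermediary verifies the user's true identity, checks it has not
#    already signed for this person in this arena, then signs the *blinded* value.
blind_sig = pow(blinded, d, n)

# 3. User unblinds; the result is a valid signature on m, which the
#    intermediary has never seen.
sig = (blind_sig * pow(r, -1, n)) % n
assert pow(sig, e, n) == m     # anyone can verify the certification
print("certified pseudonym:", m, "signature:", sig)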

 

On a slightly unrelated note, if anyone reading this has seen the movie "Fight Club"... when reading page 175 about the woman who would pose in different support groups, were you reminded at all of the beginning of the movie?

 

Xiaolu Yu

 

The paper presents a game-theoretic study of various strategies for dealing with cheap pseudonyms, which have become quite common in a wide variety of interactions on the internet, in order to maximize the overall efficiency of a given pseudonymous system. Seeking to predict the effects of a given identity management scheme by assessing the incentives it gives to agents in the system, the authors reach a set of conclusions about identity, decision-making, and reputation.

 

Although it would be nice to create environments where strangers are trusted until proven otherwise, these strategy vectors are shown to be unstable. The inherent social cost of free name changes seems to imply that punishing all newcomers is the best strategy; in other words, there aren't many good options besides charging entry fees and requiring pseudonym commitments.

 

One of the most interesting points the paper makes is the trade-off between anonymity and accountability in the choice of how broad a set of activities to define as a single arena. The broader the arena, the more opportunities there are for correlating behavior between activities, and the more easily and completely an individual's reputation can be tracked and understood. In my opinion, connecting related sub-arenas into a super-arena and requiring one identifier per person would to some extent discourage misbehavior by facilitating identity tracking. But again, we need to think about whether this could discourage participation as well: if some well-intentioned behaviors end up with bad results due to unexpected factors, people may hesitate to take any action in the first place, since their reputation would be hurt badly and this bad reputation would follow them to many places, forever. It is difficult for an intermediary to distinguish between malicious behaviors and accidents (given that they have the same effects on an individual's reputation).

 

Ziyad Aljarboua

 

This paper discusses online reputational consequences in light of the ability to cheaply obtain new online identities. The fact that online identifiers can be easily obtained changes the paradigm of online interaction, which otherwise would be partly based on reputation. Since people can wipe out their negative online reputation by simply obtaining a new identity, newcomers to reputation-based online communities are often not trusted, which leads to less cooperation with newcomers. In a perfect world, newcomers would be trusted until they prove themselves untrustworthy.

 

This paper shows that achieving an equilibrium in which there is sustainable high cooperation with newcomers is hard compared to the current situation where all newcomers are mistrusted until they build their reputation. I find this analogous to real-life situations where people are trusted only after they prove themselves. For example, a hedge fund manager would not invest in a startup company simply because the company asked him/her to invest. However, he/she would invest in it after the company provided evidence that it will not fail, or at least that it is likely to succeed.

 

The author discusses the effect of an entry fee that lets newcomers start with a reputation, which would help facilitate fast cooperation with existing users. This might sound like a possible solution for the online reputation system, since requiring a fee is equivalent to having a system in which obtaining a new identity is not free, just like in the real world. However, it is noted that an entry fee might prevent new users from joining. I think that the decision whether to charge a registration fee for new users is case dependent. While I think that the majority of online communities would be negatively impacted by such a measure, some might benefit. It is also important to note that this problem is not an issue for many online communities. As mentioned in the paper, the discussion about the social cost of reputation and pseudonyms is essentially a discussion about trade-offs between accountability and anonymity. For many online communities, such as an AIDS online forum, accountability is less important than anonymity, whereas accountability is crucial in financial forums where users predict future stock prices and collaborate to better understand the stock market.

 

Rory Kulz

 

I like a lot of this paper, although I am suspicious of two things. First, I am not convinced the prisoner's dilemma is a useful model here for player interactions, especially in the examples of support groups or certain non-massively-multiplayer online games like backgammon. Second, I find this idea of once-in-a-lifetime identifiers to be not so useful for implementation in real-world e-commerce / interaction protocols.

 

The heart of this paper is the set of results on the PYD equilibrium, and the part I liked most was the last bit, which shows essentially that the idea behind PYD is the most natural one and that "slow-start schemes," something I had wondered about precisely, are not as efficient.

 

There is one issue, however: in the games, the idea of reputation is tied to awareness of an entire common-knowledge history. But in the real world, to what extent can players be relied on to fully analyse a player's history? On eBay, we saw that many people may not even click through to the detailed feedback page. So it is still an open question how to design reputation mechanisms that aggregate the information in the common-knowledge history into a digestible form for the user. If we can't do that, then a lot of these questions about the behavior of real-world systems are moot.
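
 

One simple direction, sketched here as my own illustration rather than anything from either paper, is to collapse the history into a recency-weighted score:

# Sketch: collapse a full common-knowledge history into one digestible
# number by recency-weighting feedback (+1 / -1). The decay value is an
# arbitrary assumption.

def reputation_score(history, decay=0.9):
    # history: oldest-first list of +1 (positive) / -1 (negative) feedback
    score, weight_sum = 0.0, 0.0
    w = 1.0
    for outcome in reversed(history):   # most recent feedback weighs most
        score += w * outcome
        weight_sum += w
        w *= decay
    return score / weight_sum if weight_sum else 0.0

# ~0.46: three recent negatives drag an otherwise perfect score down sharply.
print(reputation_score([+1] * 50 + [-1] * 3))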

 

Peter Blair

 

In this paper the authors examine the social cost of cheap pseudonyms on the internet. The result of easily being able to change one's identity is that good reputations matter, but bad reputations are inconsequential. The goal then is to create an environment in which cooperation can be sustained but one in which there is also accountability for one's actions. It turns out that a "pay your dues" system, which imposes a certain cost on new players and benefits veteran players, is an equilibrium that sustains more cooperation than any other method. The article then discusses the possibility of players committing to a single identity that can be verified by a trusted intermediary. The goal here is to eliminate the inefficiency of imposing a social cost on new players while maintaining the accountability of the online community.

 

The authors are convincing in stating the case for this type of identity commitment. I have no doubt that this would work pragmatically, but two unresolved issues are whether this scenario would be analogous to imposing a cost on registering for an online community, and whether this concept is consistent with the notion of the internet as an open, free, and easily accessible marketplace. Certainly for some features, such as online banking, the possible risks outweigh the cost of having such a committed identity; for registering on an online social networking site it is harder to make this case, which then means that the internet, at least for these select activities, becomes a much duller place. A related comment: having a secure identity should increase the rate of cooperation, but would it also have adverse effects on the size of the community? Otherwise stated, should we consider cooperation in terms of both the quantity of cooperation and the rate at which agents are cooperating?

 

Facebook provides an interesting case study at the intersection of this debate: most Facebook users reveal their true identity online, but some users have a clandestine identity. It might be interesting to consider situations in which people have the option to reveal their true identity or not, of their own volition, and this in turn sends signals to other agents about whether to cooperate, without the efficiency loss of mandating that someone who wishes to remain anonymous make themselves known to a third party or otherwise incur some PYD social cost as a new user.