CS 286r Comments
10/6/2008

Malvika Rao




This paper discusses market scoring rules, their costs and their
modularity. Specifically it addresses logarithmic market scoring rules.
Many different kinds of events require probability estimates. Hence, in
practice, market scoring rules must help people understand and manage
large event spaces. They must enable people to change their estimates for
certain parameters while minimizing unintended changes to other estimates.
The logarithmic rule is shown to be unique in making only local inferences
- if there is a bet on one event given another event, the logarithmic rule
preserves the probability of the given event. Open problems for research
include how to minimize the computational costs of updating market scoring
rules. The computational complexity of these updates is NP-complete in the worst case.

Malvika Rao



The purpose of information markets is to aggregate information.
Combinatorial information markets are markets where information is
aggregated on the entire joint probability distribution over many variable
value combinations. This paper presents market scoring rules and considers
some design issues. In particular the paper seeks to address problems
caused by the thin market case and irrational participation. Open areas of
investigation include user interfaces, where it is not yet clear what the most
efficient way would be to handle the range of cases that are likely to arise.

Alice Gao



This paper introduces market scoring rules and describes the advantage of 
using the logarithmic version of the market scoring rules.  The two main 
questions addressed are the cost of implementing a market scoring rule and 
the modularity of the logarithmic market scoring rules for combinatorial 
information markets.  For a combinatorial market, the cost is no more than 
the cost for the reports of basic events.  However, this calculation neglects 
the costs for computing and updating prices as well as implementing transactions.  
Also, regarding modularity, the logarithmic market scoring rule is unique in 
having a local inference rule such that it preserves conditional independence 
relations.  
 
This paper presents most of its ideas using theoretical arguments.  It would be 
interesting to see any empirical studies done on the comparison of performances 
of market scoring rules versus traditional prediction markets.  Also, I think 
the computational cost for updating prices and assets for combinatorial reports 
would be a huge problem in practice.  So I am skeptical about the possible 
improvements offered by market scoring rules for combinatorial markets unless 
we can in some way bound these computational costs.  Also, it would be interesting 
to see discussions of implementation issues for market scoring rules because 
anything that can be broken down into "infinitesimal" parts has to be supported 
by limited resources in practice.

Alice Gao



This paper introduces market scoring rules as a new technology that combines the 
advantages of simple scoring rules and information markets.  The most important 
contribution of this paper is that it discusses several implementation issues 
for market scoring rules.  In particular, it considers issues 
such as how the values of variables can be represented and computational issues 
related to updating prices, managing user assets, and implementing transactions.  
 
Also, this paper addresses my question about the other paper on how to handle 
the computational costs for working with probability inferences on a large state 
space.  Basically, the approach limits users to choosing distributions within a 
particular family of probability distributions.  The author uses the popular 
Bayes net example to illustrate this implementation choice.  After I read David 
Pennock's blog, I am surprised to realize that, even though the idea of using 
market scoring rules is pretty obvious in this paper by Hanson, he never 
directly mentions this perspective of looking at market scoring rules.  Hanson's 
paper always talks about each individual user changing the price instead of trading 
shares.  In general, I enjoyed reading this paper and the blog by Pennock.  They 
are very well written and informative.

Angela Ying



This paper discussed two theorems concerning logarithmic market scoring rules, 
which are rules where the expected payoff is of the form a + b*log(r_i), where r_i 
is the report of agent i. The particular advantage of logarithmic market scoring 
rules is the ease of betting on the probability of a combination of events, 
instead of a single event. Although most of the paper provides background 
information on general market scoring rules, the main contribution of this paper 
comes at the end, where the author provides the two theorems that prove 
properties of logarithmic market scoring rules, both of which demonstrate that 
the independence of events is preserved even when one makes a conditional bet on 
one event given the other.
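
To make the payoff form concrete, here is a minimal sketch (my own illustration, not from the paper) of the logarithmic scoring rule and why truthful reporting maximizes an agent's expected score; the beliefs and candidate reports are made-up numbers.

```python
import math

def log_score(report, outcome, a=0.0, b=1.0):
    """Logarithmic scoring rule: payoff a + b*ln(r[outcome]) when `outcome` occurs."""
    return a + b * math.log(report[outcome])

def expected_score(belief, report):
    """Expected payoff for an agent who believes `belief` but reports `report`."""
    return sum(p * log_score(report, i) for i, p in enumerate(belief))

belief = [0.7, 0.2, 0.1]                      # agent's true belief over 3 outcomes
candidates = {
    "truthful": [0.7, 0.2, 0.1],
    "hedged":   [0.5, 0.3, 0.2],
    "extreme":  [0.9, 0.05, 0.05],
}
for name, r in candidates.items():
    print(name, round(expected_score(belief, r), 4))
# The truthful report yields the highest expected score, which is what makes the rule proper.
```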

Overall, I thought that this paper was slightly confusing - format-wise, it seemed 
that the paper focused much more on market scoring rules in general rather than 
specific logarithmic market scoring rules. I wonder if any of this will be tested 
in the future on real prediction markets, as logarithmic scoring would be rather 
confusing for the general public.

Angela Ying



This paper discussed the general concept of market scoring rules, a system designed 
to prevent the problems faced by simple scoring rules, including thin markets, because 
people are paying off each other rather than paying and taking money from a centralized 
market maker. This allows the market to retain information and encourages investors to 
stay in the market, even in a field with low liquidity. In addition, investors are given 
more freedom to choose the exact combination of probabilities to include in the rule, 
which aggregates many investors that otherwise may not have participated due to lack of 
interest on the part of other investors. This paper was a general survey of the different 
aspects of market scoring rules, and thus did not have one particular main contribution.

I thought that the section on Avoiding Bankruptcy was particularly interesting, because 
it seems that, to gather a large number of people together to participate in a market 
with market scoring rules, the only collateral that can feasibly be used is money. 
I think that using a system such as the eBay reputation system could be an effective way 
of avoiding this problem. Essentially, rather than avoiding bankruptcy altogether, the 
traders themselves have an indication of the likelihood of another trader filing for 
bankruptcy. Of course, as with the eBay system we run into problems where a person with a 
bad reputation simply creates multiple accounts, but we can add safeguards to recording 
reputation by requiring that a person's reputation can only be changed by another after a 
trade has successfully occurred. Over time, reputations of reliable traders would build, 
and those who would create multiple accounts have no incentive to amp up their reputations 
because they would have to pay transaction fees to the system in the process.


Avner May



This paper presented an overview of the theory behind different types of scoring rules.  
It highlighted the logarithmic scoring rule in particular, as it is the only proper scoring 
rule with some very special characteristics.  For example, if someone makes a bet on event 
A given event B, this scoring rule will leave the probability of event B unchanged.  This 
is desirable because when someone makes a bet on a specific probability in the large 
probability space, that should be the only probability affected by the new bet.  I think that 
this result is a very important one, with huge applications to the way these futures markets 
are organized; it would make the probability estimates of these systems more accurate, 
especially in cases where conditional probability futures are traded very often.  I thought 
that the analysis of the problem from the market maker's point of view was insightful; usually 
I have seen this type of problem simply discussed as an optimization problem by the trader, 
not paying much attention to the market maker.  Finally, the two theorems presented, 
pertaining to the logarithmic scoring rule, are very insightful. 



Avner May



In this paper, Hanson introduces the ideas of scoring rules and information markets, but points 
out the drawbacks to these systems – namely the thin market and irrational participation problems 
with information markets, and the thick market problem of scoring rules.  He presents market 
scoring rules as a system which solves both of these problems, as it essentially behaves like a 
scoring rule in the case of a single trader, and like a market maker in the case of a group of 
traders.  He discusses the advantages to logarithmic market scoring rules in particular, which he 
talked about in more depth in his previous paper.  

He then began to delve much more into issues of implementation of these markets, which I found to 
be extremely interesting, as well as the most valuable part of the paper.  He wrote about the 
problem of the computation of arbitrary events in the extremely large probability space, 
acknowledging that once the probability space gets large enough (exponential in the number of 
random variables/possible values per random variable), it is impossible with today’s computers to 
store all possibly relevant conditional probabilities.  However, since these probability 
distributions are usually quite sparse, the problem then becomes how to store as much useful 
information as possible in a limited amount of space.  He presents two main options: limiting the probability 
distribution by only allowing traders to trade among a particular subfamily of distributions, as 
well as using several market makers.  I was intrigued by the possibility of the same 
organization/person sponsoring several markets in order to process more information about the larger 
probability distribution in less space, with the downside of allowing arbitrage opportunities to 
arise within these markets.  

I found the main contribution of this paper to be the discussion of the implementation issues and 
computational difficulties of implementing these theoretically interesting markets.  I think that 
a potential research topic could be testing the performance of these suggested implementation 
designs, and maybe trying out different variations, and seeing in which cases each performed best.

Andrew Berry



Although market scoring rules exhibit desirable properties such as probability preservation and 
both aggregate and individual estimates of event probabilities, I can't help but wonder if the 
questions surrounding computational complexity limit the applications of such scoring rules. If 
updating prices and assets in combinatorial event space are NP-complete in the worst case, does 
this sort of defeat the cost improvements of rule application to combinatorial events once given 
base events? Perhaps not if the average case is reasonable. The math was a bit dense for me so I 
am unclear of how we are able to think of a market scoring rule as a "continuous inventory-based 
automated market maker," but given this thought it is clear how such a scoring rule can extract 
information implicit in other trades and produce consensus estimates.
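
For what it's worth, the "continuous inventory-based automated market maker" view can be made concrete with the standard cost-function formulation of an LMSR market maker; the sketch below is my own (the liquidity parameter b and the trade sizes are illustrative, not from the paper).

```python
import math

class LMSRMarketMaker:
    """Inventory-based automated market maker with cost C(q) = b*ln(sum_i exp(q_i / b))."""

    def __init__(self, n_outcomes, b=100.0):
        self.b = b
        self.q = [0.0] * n_outcomes           # shares sold so far, per outcome

    def cost(self, q):
        return self.b * math.log(sum(math.exp(qi / self.b) for qi in q))

    def price(self, i):
        """Instantaneous price of outcome i, i.e. the market's current probability estimate."""
        denom = sum(math.exp(qi / self.b) for qi in self.q)
        return math.exp(self.q[i] / self.b) / denom

    def buy(self, i, shares):
        """Charge the trader C(q_new) - C(q_old) for `shares` of outcome i."""
        old_cost = self.cost(self.q)
        self.q[i] += shares
        return self.cost(self.q) - old_cost

mm = LMSRMarketMaker(n_outcomes=2)
print([round(mm.price(i), 3) for i in range(2)])     # starts at [0.5, 0.5]
paid = mm.buy(0, 50)                                 # a trader bets outcome 0 is more likely
print(round(paid, 2), [round(mm.price(i), 3) for i in range(2)])
```

No counterparty is ever needed: the market maker quotes a price for any trade, and its worst-case loss is bounded, scaling with b.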

Andrew Berry




I should have read this paper first because I thought this paper did an excellent job of explaining 
the benefits of market scoring rules. When discussing the costs of logarithmic scoring rules one of 
the benefits is that it does not change the probability, P(B), on which an event, A, is conditioned. 
Is this in effect an automatic hedge strategy? I know it is common for traders in financial markets 
to hedge out market exposure. . . would a logarithmic market scoring rule accomplish this 
automatically? Suppose we want P(stock A goes up | Dow goes up). According to this section of the 
paper one takes no risk regarding "Dow goes up" (and P(Dow goes up) isn't changed). Can we 
infer anything about the complement bet regarding P(stock A goes down | Dow goes down)?
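
As a partial answer to my own question, here is a small numerical check (my own numbers, not from the paper) of what a conditional bet looks like under a log market scoring rule: raising P(stock A up | Dow up) while holding P(Dow up) fixed produces zero payoff change in every "Dow down" state, so the trader takes no position on the Dow itself.

```python
import math

b = 1.0
# States: (A up, Dow up), (A down, Dow up), (A up, Dow down), (A down, Dow down)
p_old = [0.20, 0.20, 0.30, 0.30]      # P(Dow up) = 0.4, P(A up | Dow up) = 0.5

# Conditional bet: raise P(A up | Dow up) to 0.7, keep P(Dow up) and the "Dow down" states fixed.
p_new = [0.28, 0.12, 0.30, 0.30]

# Under the log rule, the payoff change in realized state s is b * (ln p_new[s] - ln p_old[s]).
payoffs = [b * (math.log(n) - math.log(o)) for n, o in zip(p_new, p_old)]
print([round(x, 3) for x in payoffs])
# The last two entries are exactly 0: no risk is taken on the "Dow down" states,
# and P(Dow up) = 0.28 + 0.12 = 0.4 is unchanged.
```

Whether this counts as an automatic hedge of market exposure is a separate question, but at least the bet itself carries no exposure to the conditioning event.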

My other main question concerns limiting the probability distribution. The author mentions that 
one can deal with enormous state spaces by limiting users to choosing distributions within a certain 
family. From a practical standpoint, what is lost from this restriction bias? One of the nice aspects 
of the market scoring rules mentioned is the ability to provide consensus estimates. How severely is 
this altered with such restrictions? Also, with these restrictions one would probably have to consider 
how many traders would cease to be market participants because restrictions may prevent them from 
making the bets they desire.

Nikhil Srivastava



This paper presents a market scoring rule that elicits probability estimates with two strong advantages. 
First, by using logarithmic payoffs, it achieves a "local" characteristic whereby agent bets do not 
affect the value of logically independent outcomes. Second, by accommodating estimates of conditional 
probabilities in all combinations (and at no extra cost), it is well-suited to extract information from 
agents who often frame uncertain outcomes in terms of conditional statements.

One major limitation I saw was in the nature of the market procedures, specifically the stipulation that 
every trade had to be made "against" the current estimate. Ignoring the difficulty of scaling the system 
to large numbers of individuals, I imagine a large problem in probability elicitation is a bandwagon 
effect whereby agents' estimates skew toward those already established. For example, the final outcome 
f(T) may depend on the initial estimate r(0). With a "single-threaded" process like LMSR, this might be 
exacerbated.

(By the way, the author is an editor for the excellent and highly-recommended blog "Overcoming Bias" that 
should be especially interesting for economics students with opinions about rationality and preference.)



Nikhil Srivastava



This paper reviews market scoring rules as presented in Hanson 2002, and investigates variable representation, 
distribution limitation, and market segmentation as ways to limit computational complexity in the implementation 
of this theoretically tractable probability elicitation technique.

I found the representation of variables to be especially interesting, given the way it tried to preserve one of 
the strengths of market scoring rules - the ability to consistently and independently incorporate conditional 
probability estimates - while making the system as simple as possible, i.e. by using a small number of variables 
to limit computational complexity. Some of the variable profiles seemed cleverly designed to capture a certain 
aspect of cognition - "somewhat", for example - and reminded me of complex options profiles.

I found the discussion of methods to limit computational complexity to be a bit too idealistic, in that it 
mentioned a list of plausible ideas for most topics (distribution limitation, market segmentation, bankruptcy 
avoidance), but failed to present any theoretical or experimental work in support of them. The integration of 
all of them at the end into a summary proposal was nice - and sounded great - but it would be good to see some 
real results.

Brett Harrison



Modular Combinatorial Information Aggregation

This paper presents a survey of scoring rules, betting markets, and the interface between the two: market scoring 
rules. The goal of a proper scoring rule is to offer a reward that elicits the bettor's 
probability estimates truthfully. The author presents several scoring rules, including the well-known 
quadratic and logarithmic scoring rules, and proceeds to favor the logarithmic score because it has 
modularity with respect to a player's conditional probability estimates; that is, it respects 
independence relations among events according to the player's beliefs.

I found this paper hard to follow. As is the mistake with many survey papers, facts are dropped haphazardly throughout 
the paper without sufficient introduction or explanation. For example, what is the cost of a market scoring rule to the 
market maker? (It is frequently mentioned that the logarithmic scoring rule does not incur cost, but it is unclear where 
such a cost would come from.)

In fact, I found this whole paper to be difficult to follow, especially since the paper is not self-sufficient in terms 
of the background information it provides. I hope to see a better survey that reviews market scoring rules. 


Brett Harrison



Combinatorial Information Market Design
By Hanson

This paper is similar in nature to Hanson's other paper that we had to read in that it outlines proposals for "market 
scoring rules", a combination of information markets and scoring rules in order to elicit true probabilities from experts 
while avoiding both the thin and thick market problems. Unlike the other paper, this paper is much better organized, 
the information is much more clearly presented, and the language is much easier to follow. It is now very clear what 
problems are associated with information markets, scoring rules, and the new market scoring rules described later on. 
Moreover, the author offers a clear outline of a suggested system that utilizes the market scoring rule in a real world 
setting.

As the author mentions at the end of the paper, it is uncertain how practical this system would become in real markets 
since the system is not intuitive. That is, a trader would have to choose the values of many parameters, several of which 
are unintuitive as they relate to the pure actions of buying and selling assets. Traders would have to become mathematical 
experts in this particular scoring rule in order to leverage opportunities in the markets, which could be substantially 
more difficult than in the simple information market models.

I would like to see this market system implemented, which would require a lot of thought to be put into the user interface.


Brian Young



Logarithmic Market Scoring Rules for Modular Combinatorial Information Aggregation (Hanson)

Like Hanson's other paper, this deals with market scoring rules, which are an attempt to combine the advantages of scoring 
rules and prediction markets. Of the proper scoring rules, Hanson points to logarithmic scoring rules, which have a number 
of desirable qualities that other such rules lack; he demonstrates that logarithmic rules allow trades to incorporate 
conditional probabilities. Again, though, including more variables and conditionals will result in a substantial blowup in 
complexity.


I was able to follow Hanson's arguments, and I found them reasonably persuasive. I found this paper much more accessible 
after reading David Pennock's description of how to implement market scoring rules as a market maker -- trying to imagine 
trading "scoring rules", even in Hanson's "infinitesimal fair bet" formulation (2), made it seem too complicated to be at 
all practical.

Brian Young



A scoring rule can convince an agent to reveal her beliefs, but it cannot be used to combine multiple agents' beliefs into 
a single consensus. A prediction market can combine the knowledge of many agents, but it cannot always induce any individual 
agent to reveal her beliefs. Hanson suggests that a "sequentially shared scoring rule" (110), or a market scoring rule, can 
be used to solve both these problems.

Hanson describes a few limitations on his results: computational complexity prevents markets from becoming as thick as we 
might desire, since as we incorporate more variables, we increase our state space exponentially. His suggested method of 
dealing with this problem is to have several market scoring rules, relying on the system to avoid inconsistencies between 
them by finding and exploiting arbitrage opportunities. This seems rather lax, since having multiple scoring rules makes it 
almost certain that such inconsistencies will eventually arise. It's unclear to me that we can altogether prevent these from 
being exploited by users.

Towards the end of the paper (116), Hanson discusses how later bettors can influence previous bettors by changing the market 
scoring rule; he concludes that the best solution is to allow all new bets and trust that previous users will take action to 
"mitigate the externalities such changes produce on them" (117). It is not immediately clear to me how this translates to the 
market-maker implementation described by Pennock, but it seems to me that it yet again leaves room for the savvy investor to 
profit through exploiting the market structure, rather than merely by making accurate predictions. Further analysis might, as 
Hanson suggests, focus on how to minimize such inefficiencies.

Nick Wells



Simple scoring rules are where individuals make a probability estimate and then
are paid according to the outcome of the actual event. Market scoring rules
differ from this in that bettors have an incentive to achieve marginal
improvements to their predictions, and betting is not necessarily matched to
another person.

Part of the goal of these rules is to improve the probability estimation of the
bets. With the market scoring rules, we achieve a probability inference that
works well in combining bets to create an estimate. The computational cost of
performing the combinatorial analysis, however, can be high.

This paper proposes an innovative market scoring rule; however, further
discussion of these rules would be beneficial to me in understanding them more
fully.

Nick Wells



This paper surveys market scoring rules and then looks at design problems.
Market scoring rules systematically allow us to aggregate information from
agents' actions. Combinatorial information markets and simple scoring rules have
different problems, which market scoring rules avoid.

Hanson also proposes a design for a set of market scoring rules which includes
a set of logarithmic market scoring rules, agents choosing and refining the
different variables, the allowance of arbitrage opportunities between rules to
avoid inconsistency in probability distributions, etc.

This paper seems to present an innovative framework for designing a set of
market scoring rules and contrasts it with the other models used. I don't fully
understand the technical formulation of the different rules, so further
discussion would be helpful.


Hao-Yuh Su



This paper provides a mechanism that aggregates crowd wisdom in prediction markets. Firstly, the 
logarithmic market scoring rule (LMSR) offers incentives for players to improve the previous 
prediction. Players with better predictions than the previous one will receive a positive net profit, 
while those with worse predictions will receive a negative net profit. This rule acts like a 
continuous automatic market maker. In addition, it is not limited by the number of predictors: 
even if there is only one participant, that predictor is still willing to make a truthful 
prediction because of the incentives. Secondly, it is difficult to manipulate under LMSR, since 
one has to revert previous predictions and pay money repeatedly until the end of the game. In sum, 
LMSR not only improves the accuracy of predictions, but also prevents the market from being 
manipulated.

In LMSR, the logarithmic scoring rule is implemented, which was briefly introduced in a previous 
lecture. LMSR can be applied to any prediction market and, furthermore, to strategic decisions. One project 
idea I can think of is to develop a prediction market for the US presidential election within a 
small group of friends. There are several advantages to using LMSR: it is relatively easy to apply since it 
doesn't have any limitation on the number of participants, and it allows continuous infinitesimal trades 
between predictors, which can generate as many data points as possible.


Hao-Yuh Su



In this paper, Hanson adds details about the practical side of the logarithmic market scoring 
rules (LMSR). He develops several measures to fix problems that might occur in LMSR. Firstly, the 
author develops two methods to limit the state space: one is limiting the probability distributions 
users may choose, and the other is having several market scoring rules. The latter seems better 
than the first since it has an efficient way to limit arbitrage opportunities. In the second part, 
the author makes a small adjustment to the market scoring rules to avoid bankruptcy in the market. 
In sum, Hanson has introduced detailed procedures to implement LMSR.


However, there are some shortcomings in this paper. The first is that the way to prevent bankruptcy may 
also discourage participants from making small changes to previous predictions. It is still an open 
question how to decide the appropriate amount of collateral. The second is that the paper doesn't include 
the users' point of view, such as questions like "how to investigate the probability distribution over the 
set of all variable value combinations," or "how to decide whether they have enough collateral to make 
corresponding changes." 

I think there might be several ways to utilize this paper. The most obvious is to apply this mechanism 
to a real prediction market. Furthermore, we can investigate the mechanism from the users' perspective: we may 
try to develop a proper strategy for participating in a prediction market under LMSR.




Haoqi Zhang



The main contribution of this paper is the introduction of market scoring rules as a way to combine the 
effects of scoring rules for eliciting probability estimates from individuals and that of markets to get 
consensus from a group. The intuition is that by having each agent make a fair bet in reporting his 
information, agents share their information one at a time in fair local trades, whereas the cost 
of eliciting this information is the same as if we had just elicited a single report with the same 
final value. In considering market scoring rules, the author focuses on the logarithmic scoring rule 
which has the feature that conditional probabilities and conditional independence relations are preserved 
in the elicitation process, which allows one to elicit base probabilities that can then be combined to get 
probabilities over combinations of the base events.  One thing that wasn't clear to me from the paper is 
just what are the computational costs? Also, are the conditional independence relationships in essense 
being used to build a bayes net?


Haoqi Zhang



This paper considers the problem of combinatorial information markets in which the desired estimate is 
the entire joint probability distribution over all variables. However, given there are so many combinations 
of events, certain markets will not be traded heavily, and other ones will suffer from overtrading 
(irrational trading). To deal with this, the author suggests using a log market scoring rule, the intuition 
behind which is that people change the current report by paying off the last person who used the 
rule. Here the sequential changing of the report can be seen as a market maker facilitating trades 
between individuals, where at any point the cost of buying or selling shares is computed using a logarithmic 
function of the shares outstanding. Then, the author discusses the use of Bayes nets to limit the influences 
of variables on each other and better capture the structure of the variables' relations, so as to allow for 
tractable estimation when using a logarithmic market scoring rule. However, I found this discussion somewhat 
lacking - where does this Bayes net come from? Is the complexity problem really resolved?
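
On the Bayes net question, the parameter-counting argument for why a sparse net helps can be sketched quickly; the numbers below are illustrative (my own), not taken from the paper.

```python
def full_joint_params(n_binary_vars):
    """Free parameters in an unrestricted joint distribution over n binary variables."""
    return 2 ** n_binary_vars - 1

def bayes_net_params(parent_counts):
    """One conditional probability table per node: 2^(#parents) rows, one free value each."""
    return sum(2 ** k for k in parent_counts)

n = 30                                            # e.g., 30 binary base variables
sparse_parents = [min(i, 3) for i in range(n)]    # each node depends on at most 3 others
print(full_joint_params(n))                       # 1073741823 -- infeasible to store or update
print(bayes_net_params(sparse_parents))           # 223 -- easily manageable
```

Of course this only shows why a factored representation is cheap to store; where the structure comes from, and whether updates stay tractable, is exactly the open question.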





Rory Kulz



The main contribution of this paper is to introduce a new "big idea:"
market scoring rules. For forecasting and information aggregation,
they combine the best of scoring rules (which are good at eliciting
individual probability estimates but are difficult or impossible to
aggregate) and market mechanisms (which are good at aggregating but
fail in thin market cases and in practice suffer from irrational
participants). The basic idea is to mix the theoretical guarantees on
incentives of scoring rules with a means for information exchange over
time; the paper then goes on to show how this can be implemented
essentially as an automated market maker.

Unlike the other paper we read, Hanson here is primarily concerned
with demonstrating a number of nice properties of his market scoring
rules. The main result demonstrates that market scoring rules which
are derived from logarithmic proper scoring rules are in some ways the
most natural; in particular, any market scoring rule that satisfies
some weak criteria for preserving conditional independence relations
must be logarithmic.

The applications are obvious, but the implementation much less so: how
to overcome the computational complexity involved when dealing with a
doubly-exponential state space size? This leads us to Hanson's follow
up paper, which I review in my next email.

Rory Kulz



Continuing from the last paper, Hanson again goes over why proper
scoring rules and information markets are useful, citing for example
the paper we read on the Iowa Electronic Markets. Again, he goes over
the fundamental ideas behind market scoring rules and touches on why
they work well, citing his 2002 paper's theoretical contributions.

But what Hanson primarily tries to do here is explain some
considerations for actually implementing such a system: how can we
break up variable values? How can we manage the number of possible
states and the probability distributions? (Here Hanson reduces the
event space with some reasonable constraints and applies a technique
using sparse directed graphs based on Bayes' rule to lower the space
complexity of the problem.) Finally, Hanson investigates ways to
prevent exploitation of a market scoring rule by, for example, users
who cannot pay their losses (or users looking to exploit probability
approximations, a technique Hanson rules out for reducing complexity).

There is definitely a lot in this paper, but also a lot that isn't:
it's still not clear to me exactly how such a market would function.
How are users expected to be able to rationally navigate such a
large collection of possible actions? Would this really work in
practice? Has this been put into practice somewhere in the intervening
years? I haven't delved into this yet, but I plan to; hopefully the
presentation tomorrow will discuss this? And if not, this might be an
interesting project to conduct on, say, a class scale.




Zhenming Liu



Both papers address the logarithmic market scoring rules and discuss some of this rule’s nice 
properties. It is interesting to see that the patron's cost of implementing a market with scoring rules 
is closely related to the notion of entropy. It is perhaps not surprising that the logarithmic 
scoring rule is the unique form that satisfies the properties mentioned in the papers, given 
that information theorists and physicists have already proved the uniqueness of the entropy function. A 
natural extension is to ask whether there are corresponding prediction markets for other forms 
of entropy like Renyi entropy (e.g., where we only care whether all events are equally likely to 
happen, but don't really care which one happens). 

An inherent difficulty of this problem is that the probability space is huge and many 
functions become infeasible to compute. The curse of the probability space (and the curse of 
dimensionality) is not uncommon in the study of statistics or computer science. To my knowledge, 
there is so far no general scheme for approaching this problem. And I don't think Hanson does 
a good job of dealing with this problem either --- from the theory side, a Bayesian net is 
probably the first thought for those who want to approach this problem; from the practice side, 
I am not convinced we can ask traders to understand Bayesian nets before they trade. 

I view the computational challenge as an inherent problem of trying to represent the 
probability space concisely. If the entropy of the probability space is too large, we cannot 
really do much to have a compressed representation of the probability space. Another possible 
way to deal with the complexity issues of scoring rules may be to parameterize how much 
information the patron wants to obtain (i.e., maybe it is acceptable for the running time/space 
to be polynomial in the entropy of the random variable to be elicited).  

On the other hand, the notion of indistinguishability in computational complexity may be relevant 
in this context. There are two types of indistinguishability between two probability ensembles. The 
first one says if two probability ensembles are statistically close, they are “indistinguishable”; 
the second one, a relaxed version, says if no efficient computer programs can tell the difference 
between these two ensembles, these two ensembles shall be treated as identical. In a market with 
scoring rules, our goal is essentially to elicit a probability ensemble that is statistically close 
to the true one. Maybe there could also be a prediction market that only elicits a probability 
ensemble that is computationally close to the true one, in which case we might overcome the “curse 
of dimensionality”.

The relationship between the amount of money the patron invests and the efficiency of the market 
is also worth investigating. When the patron doubles the investment, he/she probably wants either 
to see the market converge faster or the result come closer to the real distribution. So far I 
cannot see that the market described in the paper has this property. 

Finally, some parts of Hanson's discussion sound less relevant. For example, in [Hanson 2003] 
he discusses the issue of "bankruptcy", which I think is not an uncommon problem in stock markets. I 
suspect a naïve reputation system would work well (e.g., if you don't pay this time, you cannot play 
next time). 
 
[Hanson 2002] Robin Hanson, "Logarithmic Market Scoring Rules for Modular Combinatorial Information Aggregation."
[Hanson 2003] Robin Hanson, "Combinatorial Information Market Design."



Subhash Arja



The main purpose of this paper is to describe scoring rules and characteristics of information markets 
in order to combine the advantages of the two by using market scoring rules. The author states that one 
advantage of information markets is giving the participants an incentive to be honest. This is mainly 
because the traders must invest their own money and stand to lose it by trying to inject false 
information into the markets. Also, information markets tend to be self-selecting, since those that 
dabble in a market that they know nothing about tend to lose large sums of money.

The author also analyzes the enormity of the state spaces that result from market scoring rules. This 
can be solved by allowing only a particular family of distribution functions and having several market 
scoring rules. Overall, I found the paper informative from a technical and tangible application 
standpoint. However, I did not fully understand some of the analysis on the scoring rules equation. This 
may mainly be because I don't have a strong game theory or economics background.

-Subhash Arja


Victor Chan



Victor Chan
Comment: Logarithmic Market Scoring Rules for Modular Combinatorial Information
Aggregation

The main contribution of this paper was to explain the use of market scoring
rules for combinatorial information aggregation. The paper further elaborates
on how market scoring rules act as a continuous inventory based automated
market maker. Market scoring rules present their consensus estimates when no
one else is willing to take on the risk of changing the current estimate
further. The paper also talks about the cost of
implementing logarithmic market scoring rules, and it is found that no
additional financial cost is required; however, the computational complexity is a
limiting factor when dealing with such combinatorial information sets. Finally
it is shown that logarithmic market scoring rules preserve the conditional
independence relations of events.

The main limitation of the paper was that there was no experimental data. It
would have been nice to have experiments that provided results which follow the
theorems or formulas presented. The main insight of the paper was the value of
market scoring rules. It was unclear at first how the market scoring rules
could actually be implemented and used in a real world situation; however this
is explained in Hanson’s 2003 paper, on combinatorial information market
design.


Victor Chan





Victor Chan
Comments: Combinatorial Information Market Design

The main contribution of the paper is that it deals with market design to create
a combinatorial information market. This is important since traditional
information markets suffer from thin market and irrational participation
problems, so it will not give a good estimate of the overall probability of all
combinations of values. The article further discusses the use of simple scoring
rules and market scoring rules, where it is explained that market scoring rules
are better suited to combinatorial information market design. Furthermore, the
paper introduces several designs for the market, including how to choose the
market scoring rules and how patrons will interact with the market (i.e., how to
place bets).

The limitation of the paper was that it did not present any data to back up the
claims. Most of the ideas presented seemed to be from a review paper
perspective. The main insight of this paper is the design of a market that uses
multiple market scoring rules to gather information about a probability
distribution over a set of events (covering all states). The obvious application
of this paper would be to build such a market and allow users to trade on
it, overcoming the issues that were explained about prediction markets. One
project idea would be to see the effects of a sudden influx of irrational traders
on this type of market. The IEM seemed to have failed when this occurred
during the 1996 election, when a sudden influx of new users drove the
predictions off. It would be interesting to see if such a problem exists in
this system.

Xiaolu Yu



Logarithmic Market Scoring Rules for Modular Combinatorial Information Aggregation
Motivated by the empirical successes of scoring rules and betting markets, the author 
invented a wonderful market maker well suited for use in prediction market applications 
-- the logarithmic market scoring rule market maker. Market makers always have public 
offers to buy or to sell, and update these prices in response to trades. The paper 
clearly describes how market scoring rules produce a consensus estimate: while each 
person is always free to change the current estimate, doing so requires taking on more 
risk, and eventually everyone reaches a limit where they do not want to make further 
changes, at least not until they receive further information. At this point the market 
can be said to be in equilibrium. A market scoring rule can also be viewed as a group of 
forecasters sequentially sharing a common forecast, with a scoring rule used to reward 
forecasters for incremental improvements made to the forecast.

The interesting point here is that the total cost to pay for T reports depends only on 
the initial and final reports, and is thus the same as the cost for one final report with 
the same final values. Logarithmic rules only change the probabilities of events where 
people betting took a risk. Regarding a bet on one event given another event, only a 
logarithmic rule preserves the probability of the given event. It also preserves the 
conditional probabilities of further events, and so preserves conditional independence 
relations. One advantage of logarithmic rules is that there is no additional cost to elicit 
estimates on all combinations of the base events for which probability estimates are 
invited. How best to minimize and allocate the computational cost of updating scoring 
rules in combinatorial spaces remains an open question.

One application of market scoring rules I noticed is Inkling Markets.
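
The telescoping-cost point is easy to verify numerically; the short sketch below uses made-up reports and is my own illustration rather than anything from the paper.

```python
import math

def log_score(report, outcome, b=1.0):
    """Logarithmic score b * ln(r[outcome]) for the realized outcome."""
    return b * math.log(report[outcome])

# A chain of reports from r_0 (the patron's initial estimate) to r_T (the final consensus).
reports = [[0.5, 0.5], [0.6, 0.4], [0.8, 0.2], [0.7, 0.3]]
outcome = 0                                   # suppose outcome 0 is eventually realized

# Each trader t is paid s(r_t) - s(r_{t-1}); the sum telescopes.
total_paid = sum(log_score(reports[t], outcome) - log_score(reports[t - 1], outcome)
                 for t in range(1, len(reports)))
one_report = log_score(reports[-1], outcome) - log_score(reports[0], outcome)
print(round(total_paid, 6) == round(one_report, 6))   # True: cost depends only on r_0 and r_T
```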


Xiaolu Yu



Combinatorial Information Market Design
Market scoring rules are presented in this paper in detail. Their importance 
is that they overcome major problems and limitations of scoring rules 
and information markets by acting as automated market makers facilitating trades 
between the people who are using the scoring rule in the thick market case, and as 
simple scoring rules in the thin market case. The "thin market" problem and 
"irrational participation" problem of standard information markets, as well 
as the "thick market" problem of scoring rules, are well addressed by 
market scoring rules. Market scoring rules are essentially sequentially shared 
scoring rules, in which each user only pays off the previous user. Under a market 
scoring rule, people always want to honestly report their beliefs to maximize their 
expected payoff, the same as they do for a simple scoring rule.

Given the computational complexity when a large number of variables is present, 
there are two basic approaches to dealing with enormous state spaces. One is to 
choose a particular family of probability distributions to which users are limited. 
It is necessary to have a policy of only allowing bets on probabilities 
that one can exactly compute in order to keep the patron from becoming a money pump – 
letting traders make money via arbitrage. Another approach is to have several market scoring rules 
dealing with different parts of the same total state space. All the market makers 
could be made consistent with each other via waves of arbitrage passing through a 
network of market makers. This arbitrage wave could even propagate to their neighbors 
if it produced a large enough change. Some implementation issues include allowing 
past bets to be used as collateral for future bets, and having users with bets under old 
structures mitigate the negative externalities caused by structure changes.

What confused me is that this paper does not spend much time explaining how a 
market scoring rule, such as the logarithmic market scoring rule (very useful and widely 
applied in practice), functions as a market maker in the typical sense. However, the 
idea is well suited for use in prediction market applications as a market 
maker. The logarithmic market scoring rule market maker, for example, can be used in 
a standard prediction market setting. It is now being used in several places, 
including an implementation at InklingMarkets, the Washington Stock Exchange, 
BizPredict, and (reportedly) at YooNew. 

Ziyad Aljarboua



This paper considers a logarithmic version of market scoring rules and discusses
modularity of market scoring rules. In market scoring rules, anyone can change
the official report, and their pay will correspond to the new report. This
fact eliminates the need for matching bets. Just like the previous paper, this
paper shows that the market scoring rules combine the advantages of both simple
scoring rules and betting markets.

It is shown in this paper that market scoring rules do not cost more to
implement when compared to simple scoring rules. Once one pays to create a
logarithmic rule, there is no additional cost to apply that rule to all
possible combinations of the base events. For other rules, the cost depends on
the number of base events for which probability estimates are invited.

This paper briefly addresses a limitation of market scoring rules that the
previous paper also addressed: the large computational costs of updating market
scoring rules in combinatorial event spaces.


Ziyad Aljarboua



This paper discusses a new model of information markets, markets that aggregate
information and allow traders to hedge risk and speculators to profit from the
market by predicting future prices. The introduction of this paper
discusses some shortcomings of information markets and scoring rules
when used separately, such as the irrational participation and thin market problems.
Mainly, this paper explains a new technology, market scoring rules, which
combines the advantages of both information markets and scoring rules. Information
markets provide a tool to combine diverse opinions into a single probability
distribution, something that scoring rules lack. This is done by
repeated interaction between agents; with repeated interaction, they tend to
converge to identical estimates since they are rational agents.

According to the author, this technology solves the thin market and irrational
participation problems of information markets and the thick market problem
of scoring rules. As shown in figure 1, the market scoring rule combines the
advantages of both methods and solves the opinion pool problem
and the thin market problem. The ultimate goal of this model is to reveal what
people know. This model is based on rewarding agents for correct answers
according to a scoring rule that is constrained by incentive compatibility and
rational participation; if agents do not participate, they receive zero reward.
The market scoring rules are described in terms of probability distributions
over states that are defined by combinations of variable values.

For a market scoring rule, the current probability distribution can be inspected at
any time and can also be modified by making a new report. An agent is
incentivized to give his/her honest opinion because he/she cannot change the
previous report.

The paper discusses some limitations of market scoring rules. One is a
computational issue that arises when there are too many variables, each with several
values: the approach becomes infeasible as the state space grows large and the
computation cannot be performed on current computers. The paper offers a solution
that limits the state space by carefully selecting probability distributions and
limiting users' selection of those distributions. Another way to avoid large state
spaces is to have several overlapping market scoring rules.


Michael Aubourg



Fact: repeated exchanges of human opinion in conversation do not produce the 
degree of convergence predicted by theory.
In contrast, betting markets create good probability estimates.
The author here follows an original path: he banks more on empiricism than 
on theory.

In short, market scoring rules are scoring rules where anyone can change the 
current report and be paid according to their new report, as long as they agree 
to pay the last person according to that person's report. I think this last 
condition is very important, because it pushes people to continuously improve 
the information quality. The more you change the report, the more sure you have to 
be, since many people have already changed it.

The great difference with a standard betting market is that the cost of this 
market depends only on the informativeness of the last report and does not depend 
on the frequency of use.

The other positive point with logarithmic rules is that bets on some events do 
not change conditional independence relations between other events.

Among all the proposed rules, the logarithmic rule is the only one that can 
simultaneously reward agents and evaluate them via standard likelihood methods, 
which is great.

Market scoring rules produce consensus estimates in the same way that betting 
markets produce consensus estimates.

How can we define an equilibrium in the market? When no one wants to make further 
changes, at least not until they receive further information, the market can be said to be 
in equilibrium.

Since the maximum entropy over the joint state space is just the sum of the maximum 
entropies of the base variables, the maximum expected cost for the full combinatorial 
report r = {ri}, which reports on the probability of all base variable value 
combinations, is no more than the cost for the base-only reports.
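
A small numerical illustration of the entropy connection (my own sketch, assuming the standard result that a log-rule patron who starts from a uniform estimate over N states risks at most b*ln(N), the entropy of that estimate):

```python
import math
from itertools import product

def entropy(dist):
    return -sum(p * math.log(p) for p in dist if p > 0)

# Worst-case patron payout for a log rule started at a uniform estimate over N states.
b, N = 1.0, 8
print(round(b * entropy([1.0 / N] * N), 4), round(b * math.log(N), 4))   # both 2.0794

# Entropy of a joint distribution never exceeds the sum of the marginal entropies,
# which is why the full combinatorial report costs the patron no more in expectation
# than the base-only reports.  (Independent marginals give equality.)
p_x = [0.7, 0.3]
p_y = [0.6, 0.4]
joint = [px * py for px, py in product(p_x, p_y)]
print(round(entropy(joint), 4), round(entropy(p_x) + entropy(p_y), 4))
```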

Conclusion: there is no need to find another person willing to make a matching bet, 
as in betting markets. This market lets anyone make any infinitesimal fair bet at the 
odds in the last report, with no need to find a counterparty. The computational costs 
for updating the market can be very large, depending on the event space.


Questions raised: how to devise market scoring rules that minimize such computational 
costs? How to allocate those costs?


Michael Aubourg




Topic: Combinatorial Information Market. What are the goals?
A combinatorial information market has to aggregate information on the entire joint 
probability distribution over many variables by allowing bets on all variable value 
combinations.
Auxiliary goals:
- To overcome the thin market problem
- To overcome irrational participation problems.

The forecasts from real financial markets are more accurate than the ones from 
professionals.
For this reason, information markets try to copy their pattern in order to gather 
accurate information.

Market scoring rules combine the advantages of standard information markets and 
scoring rules.
The task of the paper is to induce people in the market to acquire and reveal 
information relevant to estimating certain random variables.

Advantages of scoring rules:
1) People will try to report r = p, their true probability estimate.
2) People will be incentivized to acquire information they would not otherwise possess.

We learn that the best scoring rule is the logarithmic one because it allows one both 
to reward an agent and to evaluate his performance.

Like scoring rules, information markets push people to be honest.

So how does a market scoring rule behave? Like an automated inventory-based market 
maker that stands ready to make any tiny fair bet at its current probabilities.

A market scoring rule is actually an automated market maker which deals in all of the 
assets linked to a state space.

One good approach is to have several market scoring rules. This is especially useful 
when the number of potential states is enormous.


What are the limits? Simple scoring rules suffer from opinion pool problems in the 
thick market case.

Questions raised:
By the way, what would the user interface look like?


Travis May




Following up on the Combinatorial Information Market Design paper,
this paper provides a detailed mechanism through which a scoring market
could be created.  As mentioned in my other post, this has immense practical
value, if properly implemented, as many corporations and policy-setters
could benefit from knowing joint probability distributions.

Unfortunately, this market has a major limitation that was not adequately
addressed.  Notably, even though thin markets are not as large an issue for
scoring markets as prediction markets, there is still a substantial value to
being able to reach a large number of market participants (it may be
especially important to be able to reach a large percentage of market
participants in markets where traders have different information sets).
However, the nature of the scoring market will substantially reduce the
number of participants due to its complexity.  There is a simple, intuitive
understanding that average intelligent people have about prediction markets,
and the concept of betting on an outcome can be quickly and neatly
explained, and new participants can easily be induced.  Scoring markets,
however, do not have any such intuitive simplicity, and the concept of
trading probability distributions would baffle most of the public that does
not know what a probability distribution is.

Under some conditions, this could be an acceptable outcome.  However, if the
set of experts with the most information about the likely outcome does not
overlap with the set of finance experts, such markets could be doomed.
Thus, despite their theoretical appeal, the added complexity of these
markets may limit their ability to synthesize useful predictions.


Travis May



This paper provides an intriguing methodology for eliciting a joint
probability distribution with several different possible events taking place
- an idea that has much practical value.  Individuals are often interested
in soliciting conditional probabilities, which may be used for
decision-making purposes in practice.  For example, in order to assess a new
incentives scheme, a company might be interested in the difference of
expected results GIVEN the incentive scheme and the results given no
incentive scheme.

Currently, eliciting such information is difficult.  Scoring rules provide
incentives for probability distributions to be revealed, but they do not
provide a cost-effective mechanism of simultaneously receiving input from
multiple users. In contrast, prediction markets allow mass input into a
consensus probability, but do not perform effectively in thin markets -
meaning that only a small set of assets can be traded and making it
difficult to gather joint probability distributions.  The novelty of this
paper is to propose a mechanism that merges the benefits of both systems: in
scoring markets, the input of both a crowd and an individual can be
solicited at a modest price.

If properly implemented, this could have useful benefits to corporations,
policy-setters, academics, and others with an interest in determining joint
probabilities.  Furthermore, this could play a substantial role in real
financial markets: due to the large number of joint distributions and
implied correlations that are assumed by market participants (especially
quants, who use mathematical models to trade), a scoring market could
provide a way to hedge major assumptions made by traders.  For instance, a
market could be created (using binary outcomes, such as price thresholds)
that looks at the joint distribution of oil prices, equity prices, and bond
prices, testing traders' assumptions about the correlations between these
markets.