Comments from CS286r students – October 22, 2008

 

Andrew Berry

 

This paper demonstrates that only securities whose payoffs can be expressed as weighted threshold functions are guaranteed to converge to the price that reflects all information regarding the security's value. Additionally, it shows that this convergence occurs within at most n rounds (where n is the number of bits of information) and that, in the worst case, n/2 rounds may be required.
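
For concreteness, a weighted threshold function is a Boolean function f for which there exist weights w_1, ..., w_n and a threshold t such that f(x) = 1 exactly when w_1 x_1 + ... + w_n x_n >= t. A minimal sketch in Python (the helper name is my own, not from the paper): OR and AND fit this form, while XOR does not (it is not linearly separable), which is why an XOR security need not converge to the full-information price.

    import itertools

    def threshold_fn(weights, theta):
        """Build f(x) = 1 iff sum_i weights[i]*x[i] >= theta."""
        return lambda x: 1 if sum(w * b for w, b in zip(weights, x)) >= theta else 0

    f_or = threshold_fn([1, 1], 1)    # OR of two bits: a threshold function
    f_and = threshold_fn([1, 1], 2)   # AND: also a threshold function
    xor = lambda x: x[0] ^ x[1]       # XOR: no weights/threshold reproduce it

    for x in itertools.product([0, 1], repeat=2):
        print(x, f_or(x), f_and(x), xor(x))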

 

I have two main questions about the paper. The first concerns the assumption that qi = 1 for each agent in each round. The authors state that a forced trade is logical because a previous work showed that "perfectly rational and risk-neutral agents will never trade with each other for purely speculative purposes." While this may be true under theoretical assumptions, empirical evidence seems to deviate from this idea. What, then, are the implications of this assumption for the final results? If it were relaxed, would a major difference occur? My other question is about the assumptions regarding the bits of information. All the work here was done in terms of Boolean variables to make the problem tractable. How much complexity would be added to the model if the bits of information were continuous? Would this make the analysis intractable, and would it have a major effect on the results?

 

Despite my questions and critiques, I thought this was one of the best papers we have read so far. The assumptions were well laid out and clearly explained. Additionally, shortcomings are acknowledged, and possible extensions of the work are articulated in the final section. I'd like to see results within this paper's framework in scenarios where the agents do not trade myopically (especially considering the results from last week's paper).

 

Subhash Arja

 

This paper by J. Feigenbaum et al. explores whether the complete information held by traders can be recovered from the market price. Trading proceeds step by step over a number of rounds, and, eventually, the price is expected to reach some equilibrium where it is not desirable to continue trading until new information is known. The authors do state the limitation that an equilibrium revealing the full information won't be reached in the case of payoffs that cannot be expressed as weighted threshold functions, and they go through extensive proofs to support this conclusion. However, one good result from the paper is the case where payoffs are threshold functions: in this scenario, the price of the securities will converge, allowing some extrapolation of the traders' information. The paper makes assumptions regarding the mindsets and strategies of the traders. The authors do not take into account non-myopic strategies, where a trader would bluff in the early rounds to make more profit in the future. I think this is a reasonable assumption, since taking this behavior into account makes any result too computationally difficult. It must be pointed out, though, that when this model is applied to a real financial market, this assumption might lead to strong discrepancies between the expected equilibrium results and the actual security price. There is no way to guarantee that all traders have the same amount of information and act truthfully solely on that information.

 

One result I found valuable was that in the first round, no information can be extrapolated from the bids of the agents; the paper states that at least 2 rounds of trading are necessary to reveal information. One would therefore expect that as the number of rounds increases, an observer makes greater and greater progress towards learning the information known by each trader. Of course, this paper proves everything theoretically; it is possible that in a real application no equilibrium price is reached and no information can be extrapolated. I appreciated the list at the end of the paper which shows all the areas where the analysis falls short. As a reader, I would prefer that such a section be part of every paper, since it lets others in the same area target future work directly towards a single issue or a set of these issues. This helps by incrementally adding to the research area and eventually introducing new and interesting questions and answers.

 

Nick Wells

 

This paper deals specifically with markets created for the purpose of information aggregation. The authors develop a model with n agents who share a prior distribution over the state and trade securities whose payoffs are based on the outcomes of events. The authors show that the market will converge to a rational expectations equilibrium given a weighted threshold function (what is that?).

 

This paper makes the assumption that the agents are risk-neutral and myopic. The proof does not necessarily hold if this is not the case. The authors admit there is room for exploration here, but state that this is nevertheless a good starting point.

 

Given this assumption, the authors present an interesting proof. I don't understand the weighted threshold function assumption, and would be interested in learning what is meant by it.

 

Ziyad Aljarboua

 

This paper investigates the computation involved in achieving a rational expectations equilibrium, in which all traders in a market have revealed their information and converged to the same information state after observing market prices.

 

Consider a market with n agents, each with one bit of information. Starting from the initial state where each agent is aware only of its own bit, all agents know the payoff function over the agents' bits. In each round, agents submit their bids and quantities, the market clears at the end of the round, and the clearing price is publicized. Assuming that agents are risk-neutral, myopic, and truthful, the price-evaluation process carried out by the agents can be predicted. Over the rounds, the agents' knowledge improves until it reaches a point where they do not learn anything more from the market price, and thus they converge to equilibrium in at most n rounds. Markets always converge to the rational expectations equilibrium if the security's payoff function is a weighted threshold function.
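
A minimal simulation sketch of this round-by-round process (my own illustration of the mechanism described above, under the assumptions stated in these comments: common prior, truthful myopic bids equal to conditional expectations, quantity 1 per agent, so the clearing price is the average bid):

    import itertools

    def simulate(f, prior, n, rounds):
        """Sketch of the round-by-round clearing process, simulated in
        parallel for every candidate true state x in {0,1}^n.
        prior: dict mapping each state to its common-prior probability.
        Returns the price history each state would generate."""
        states = list(itertools.product([0, 1], repeat=n))
        history = {x: () for x in states}
        for _ in range(rounds):
            new_history = {}
            for x in states:
                bids = []
                for i in range(n):
                    # Agent i considers possible exactly the states agreeing
                    # with its own bit and with the public price history.
                    omega = [y for y in states
                             if y[i] == x[i] and history[y] == history[x]]
                    num = sum(prior[y] * f(y) for y in omega)
                    den = sum(prior[y] for y in omega)
                    bids.append(num / den)  # truthful myopic bid: E[f | info]
                # With q_i = 1 for every agent, the clearing price is the mean bid.
                new_history[x] = history[x] + (sum(bids) / n,)
            history = new_history
        return history

    uniform = {x: 0.25 for x in itertools.product([0, 1], repeat=2)}
    print(simulate(lambda x: x[0] | x[1], uniform, 2, 2))  # OR: prices reveal f(x)
    print(simulate(lambda x: x[0] ^ x[1], uniform, 2, 2))  # XOR: stuck at 0.5

Run on two bits with a uniform prior, OR reaches the full-information price by round 2 in every state, while XOR produces the identical price history (0.5, 0.5) in all four states, so no agent ever learns anything, matching the paper's XOR example.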

 

Questions I have:
- What is a threshold function?
- What is a distributed computation?
- Is the rational expectations equilibrium ever reached in reality? Do traders ever reveal all their information?
- How would the assumptions used in the analysis affect the conclusion?
- How does the variation in the amount of information between agents factor into this conclusion?

 

Alice Gao

 

The main contribution of this paper is proposing a systematic way of investigating the information aggregation process in an information market.  In my opinion, creating a complete characterization of information aggregation processes in information markets would be one of the ultimate goals of studying prediction markets.  This contribution is very important as a preliminary study of the information aggregation process using the given simplified model.  However, as the authors mentioned, a major limitation of this paper is that it puts too many restrictions on the information structure of the information market as well as trader behaviours. 

 

Relating to the two papers on strategic behaviour in a prediction market, there were two possible assumptions.  This paper only considers the case in which the traders receive conditionally dependent signals.  A possible extension of this paper would be to consider the same questions for the case in which traders receive conditionally independent signals.

 

Many assumptions are made in this paper, especially to make the model simple enough for theoretical analysis.  First of all, it seems to me that the model uses a call market mechanism.  Moreover, by assuming forced trades in every round, the authors avoid dealing with the lack of motivation for trading in certain scenarios.  Additionally, the authors assume that the agents always behave myopically and bid their true valuations.  One interesting justification the authors give is that their assumptions seem reasonable because, with a large number of agents, complex strategic behaviour is unlikely to improve an agent's payoff compared to bidding one's true expected value.  It sounds to me as if the authors are using the limited computational power of traders to justify the fact that their arguments cannot handle a more complicated and perhaps more realistic model of an information market (which is somewhat ironic).

 

This paper gives me an idea for a possible project that would not be overly ambitious for a course like this.  We can start with very simple examples of prediction markets, like the one about the XOR function mentioned in the paper.  By gradually coming up with and analyzing more and more complicated examples, we can attempt to characterize different convergence behaviours and trader behaviours in different scenarios.  The main goal of this project would be to come up with characteristics that can be used to distinguish different types of markets.  I imagine that the work would eventually get to a point where we could try to make the characterizations more abstract, and this would naturally lead to theorems and mathematical proofs similar to those in this paper.

 

Michael Aubourg

 

First of all, I would like to temper the opening assessment about economic theory: the equilibrium price of a financial asset does not always reflect all the information regarding the asset's value. This is true only when markets are EFFICIENT, which is not always the case. This concept is known as informational efficiency, and it should not be confused with Pareto efficiency.

To think about market efficiency, we can:
- test returns versus prices
- adjust returns for risk
- study the information we are looking at.

 

Unpredictable risk-adjusted returns imply market efficiency.

 

What information are we looking at? There are three forms:
1) Past returns and trading volume (weak-form efficiency)
2) Publicly available fundamental information (semi-strong-form efficiency)
3) Information used by a group of traders (strong-form efficiency)

 

The aggregation of information in an incomplete market is only partial; however, dynamic trading provides extra information to fill this gap. In reality, markets are never complete, but we often refer to this model in order to apply the "law of one price" and then be able to price assets easily.

 

The paper states that markets reflect the true information, that is, the true value of securities. However, what happens for new financial products or new stocks? When a new company does an IPO (initial public offering), the value of the stock often undergoes a lot of variation in the beginning.

 

Furthermore, if an investor has invested a lot, let's say 25%, in the stock of a company, he might need to sell it if he needs immediate cash (perfect liquidity). If he is in a rush, he may sell the stock at a price below its true value. Since 25% is a big proportion, the stock price will change, even though there was no information change in the market.

 

Finally, some assets are sold through auctions (T-bonds, etc.), and dealers have to buy them there for their customers. We know, of course, that in an auction the bid does not always reflect the true value of the object being sold. Recall the Salomon Brothers scandal.

 

Rory Kulz

 

In this paper, Feigenbaum et al. look at information aggregation in a particular information market design, the Shapley-Shubik mechanism; it operates in rounds, like a call market, with (truthful, myopic) agents placing bids that are averaged into a new market price. They show that under certain (somewhat strong) conditions on the Boolean payoff function, all information is guaranteed to be aggregated at equilibrium, regardless of the nature of the agents' common prior.

 

This is an interesting paper, but it has a number of limitations, most notably the assumptions of truthfulness and myopic behavior. The authors are also, I think, wrong when they ask, "What is the minimum number of threshold securities required to implement a given function," and claim the problem is NP-hard. It is true that this bears an equivalence to the neural network problem, but the NP-hardness, I believe, applies only to real-valued functions. For Boolean functions, I think it is known that two hidden layers in a perceptron network suffice to represent any Boolean function (a single-layer perceptron network, with no hidden units, learns precisely those Boolean functions which are linearly separable, which is only a small fraction of the total space of Boolean functions).

 

Angela Ying

 

This paper presented an interesting trading model with information aggregation and derived results concerning securities markets. It models trading as a game with a number of discrete rounds, in which n traders trade at every time period. Each trader is given some private information and bids every round using a myopic strategy. The main contribution of this paper is several theorems showing that when payoffs can be expressed as weighted threshold functions, the market is guaranteed to converge on the true value of the security, whereas when the payoffs cannot be expressed as a threshold function, the market will not necessarily reach an equilibrium at the true value. I thought this was interesting because it doesn't mean people want to hide their information - they simply cannot obtain any information from the markets because of the probability distribution of the information bits. In fact, the players are playing myopically, so they are not trying to deceive other players. However, the paper is limited in that the market model may be oversimplified - in particular, I thought the fact that each player only trades one unit was very limiting. In the future, it may be worthwhile to explore a market model where players can trade an arbitrary number of units, as long as q > 0 (because otherwise they just wouldn't trade). In fact, it may be interesting to have a distribution of q_i's, since we have no idea why any one player would bet more shares than another. It would also be interesting to explore non-myopic strategies and perhaps to simulate this - maybe players could bluff with some probability p? In any case, the simplicity of the model leads to many interesting extensions.

 

Xiaolu Yu

 

In this article, the authors develop a simplified model of an information market, along with trading strategies (compared to Hanson's market scoring rules), showing the dynamic process toward equilibrium, in which information bits distributed among traders are revealed round by round and incorporated into the market price, while also addressing the computational properties of the process. Limitations of this method include agent simplification, the prior-distribution assumption, price inaccuracy, etc. Agents are assumed to be risk-neutral, myopic, and truthful bidders in the first place (even with proper market scoring rules, non-myopic, misinformed, or irrational traders will contribute to noisy prices). If agents fail to accurately update their beliefs after each round due to computational complexity, we can hardly expect the resulting price to converge to the correct full-information price within a number of rounds depending on the number of input bits. On the other hand, if agents within a market don't have exactly the same prior distributions, or even have very different priors, what should we expect to happen to the information aggregation? It would be more difficult (maybe rarely possible) for agents to update their beliefs, and it is also possible that the market price would not converge in finitely many rounds. It would be interesting, as the authors suggest, to see whether a generalized result can be derived for generic prior distributions or expected information states for agents. Thirdly, how an inaccurate or imprecise clearing price in each round would affect belief updating, information revelation, and convergence of the market price is still an open question; for example, can a round-off error cause a blow-up of the entire computation in a later step? Last but not least, it would be interesting and important to define and configure securities so as to speed up convergence to equilibrium.

 

Brian Young

 

Computation in a Distributed Information Market (Feigenbaum, Fortnow, Pennock, Sami)

 

I was curious about whether the results from this paper can be linked in some way to previous papers we have discussed. Does the paper offer any insight into the question of how to represent conditional securities?

 

I really appreciated the conclusion of this paper, which outlined a great number of the questions that I had thought of myself, as well as many I hadn't considered. The effect of non-rational traders, for instance, seems to relate strongly to the empirical results found in the Berg et al. paper on the Iowa Electronic Markets, which describes distinct groups of "typical traders" (who make trades that are not optimal) and "marginal traders" (who are responsible for driving market prices). Do Berg et al.'s results suggest that the effects of irrational traders can be discounted or disregarded, and what justifications can we find for this?

 

Victor Chan

 

The main contribution of the paper is showing that a security whose payoff is defined by a weighted threshold function will have its equilibrium price converge to the actual value of the security. The paper sets up a market where traders' information is represented as bits and trades happen in iterations, with the new price revealed at the end of each round. The authors find that as the game cycles, the traders gain information about others based on the new current prices: they are able to eliminate states that cannot exist given the current price of the security. By doing so, they change their own expected payoff of the security, and eventually this converges on the actual value of the security. The authors prove, though, that this property holds only when the security's payoff function is a weighted threshold function. The paper also evaluates upper and lower bounds on the time to reach convergence.

The limitation of this paper is the number of assumptions made. First of all, the traders behave myopically, meaning here that they always bid truthfully. Secondly, the trading quantity is fixed at one unit for each trader in every round. These two assumptions greatly simplify the model; however, the model then does not reflect real-world situations as accurately. It is interesting that the authors noted it might be advantageous to use market scoring rules, under which, as we've seen previously in Hanson's paper, myopic traders are incentivized to report truthfully.

 

Something that was unclear to me was how the probability distribution in the carry-bit example works; I wasn't sure how the conditional probabilities were calculated in that part of the paper. Most parts of the paper, however, had simple, small examples to demonstrate the concepts, and these were helpful in understanding the ideas.
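
For what it's worth, the carry-bit security itself is a weighted threshold function; the sketch below is my own illustration (the bit layout is an assumption, not taken verbatim from the paper): the carry out of adding two m-bit numbers is 1 exactly when the weighted sum of the bits, with weight 2^i on the i-th bit of each addend, reaches 2^m.

    # Sketch: the carry bit of adding two m-bit numbers, written as a
    # weighted threshold function (bit ordering here is an assumption).
    def carry_bit(bits, m):
        """bits = (x_0..x_{m-1}, y_0..y_{m-1}), least-significant bit first.
        Returns 1 iff adding x and y overflows into bit m."""
        weights = [2**i for i in range(m)] * 2   # weight 2^i on x_i and on y_i
        return 1 if sum(w * b for w, b in zip(weights, bits)) >= 2**m else 0

    print(carry_bit((1, 1, 1, 1), 2))  # x = 3, y = 3: 3 + 3 = 6 >= 4 -> carry 1
    print(carry_bit((1, 0, 0, 1), 2))  # x = 1, y = 2: 1 + 2 = 3 <  4 -> carry 0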

 

A direction for future work was already mentioned in the discussion section of the paper: evaluating convergence when traders behave strategically and do not bid truthfully every turn. This type of model is more complex than the one presented in the paper, but it would be more insightful.

 

Haoqi Zhang

 

This paper makes two main contributions: (1) it characterizes the set of Boolean functions over agent inputs for which the equilibrium price incorporating the agents' information converges to reveal the value of the function, and (2) it provides linear bounds on the number of rounds until prices converge, for the set of functions whose equilibrium prices reveal the value of the function. The results are significant because they show that a market may not be able to converge to prices that incorporate all information, but also that when it can, it can do so quite quickly. I found the explanations clear and the examples illustrative.

 

The authors make a number of simplifying assumptions to reach their conclusion, some of which are fairly strong. The one most mentioned throughout the paper is the myopic rationality assumption, but I found the forced trade of exactly one unit of the security in each round to be perhaps a stronger assumption. While the authors give an explanation based on ease of analysis, I feel that this assumption strays quite far from the market setting (perhaps towards the point of voting). Or perhaps I missed something here.

 

One question I had is about the weighted threshold condition: how hard is this condition to satisfy? While the XOR explanation is clear, I don't quite know how XOR relates to a realistic prediction situation, nor can I think of other prediction situations that correspond to functions which fail to be weighted threshold functions.
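
One rough way to get a feel for how restrictive the condition is: count which of the 16 two-input Boolean functions are weighted threshold functions. The sketch below is my own (the small integer grid is an assumption that happens to suffice for two inputs but does not scale); it reports 14 of 16, with XOR and XNOR the only failures.

    import itertools

    def is_threshold(table, n=2, grid=range(-2, 3)):
        """Brute-force search over a small integer grid of weights and
        thresholds; for n = 2 this grid realizes every threshold function."""
        inputs = list(itertools.product([0, 1], repeat=n))
        return any(
            all((sum(w * b for w, b in zip(ws, x)) >= t) == bool(fx)
                for x, fx in zip(inputs, table))
            for ws in itertools.product(grid, repeat=n) for t in grid)

    tables = list(itertools.product([0, 1], repeat=4))  # all 16 two-input functions
    print(sum(map(is_threshold, tables)), "of 16 are threshold functions")  # 14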

 

Nikhil Srivastava

 

The paper presents some very interesting results on equilibrium price convergence in simplified distributed-information markets. By formalizing the intuition of trading as computation, and then examining the conditions under which that computational process can combine information, Feigenbaum et al. prove that Boolean securities whose payoffs are threshold functions of the information bits are guaranteed to converge to the proper equilibrium price in bounded time. Securities that lack this property are not guaranteed to converge.

 

The most interesting aspect of this paper was its formalization in terms of weighted threshold functions and separating hyperplanes, as it reminded me strongly of a result in computational neuroscience dealing with perceptrons. A perceptron is a simple model of a feedforward neural network consisting of neurons whose outputs are Boolean and whose decision conditions are simple linear functions. In trying to train this system to "learn" some given rule relating input to output, there exists a learning algorithm that causes the system to converge to the rule in bounded time if and only if the rule can be expressed as a threshold function.
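
A minimal sketch of that learning rule (my own illustration; the epoch cap and function names are assumptions): the classic perceptron update converges on a threshold function such as majority-of-three, and never settles on XOR.

    import itertools

    def train_perceptron(f, n, epochs=100):
        """Classic perceptron rule on a Boolean function f over {0,1}^n.
        Returns (weights, theta) on convergence, None otherwise."""
        xs = list(itertools.product([0, 1], repeat=n))
        w, theta = [0.0] * n, 0.0
        for _ in range(epochs):
            mistakes = 0
            for x in xs:
                pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) >= theta else 0
                err = f(x) - pred            # +1 or -1 on a mistake, else 0
                if err:
                    mistakes += 1
                    w = [wi + err * xi for wi, xi in zip(w, x)]
                    theta -= err             # the threshold acts as a negated bias
            if mistakes == 0:
                return w, theta
        return None                          # never separated the data

    majority = lambda x: 1 if sum(x) >= 2 else 0
    print(train_perceptron(majority, 3))     # converges: majority is a threshold fn
    print(train_perceptron(lambda x: x[0] ^ x[1], 2))  # None: XOR is not separable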

 

Of course, the learning algorithm in this case uses the desired result as an input (the justification being that in many circumstances the system should be able to "train" on a subset of known data), but I think the analogy is strong. In each case we have a collection of individual agents with unique priors (the initial decision rules of the neurons), the agents modify their priors based on the iterative outputs of all agents (the learning algorithm), and the convergence criterion is exactly the same. Thus, perhaps more results from computational neuroscience could be used to investigate related problems in distributed-information computing. Specifically, "noisy" neural networks could relate to misinformed or irrational traders, and more distributed network computation could shed light on decentralized markets.

 

Avner May

 

This paper discusses a specific scenario in which a group of n traders, each with a bit of private information, bets (over the course of a possibly infinite number of rounds) on the value of some Boolean function over the traders' bits.  It arrives at some very interesting results regarding which types of functions result in the market price converging to the actual value of the function, and how long this convergence takes.  However, I found many of these results to be incredibly "fragile," to use the words of the authors.  I thought that the assumptions made at the beginning were incredibly simplifying (the authors acknowledge that it was necessary for them to make many assumptions).  The authors assume that each trader places exactly one bid in every round and behaves myopically.  It is immediately clear that the results in this paper do not hold in a game-theoretic sense: as a trader in a given round, if I assume that the other traders are behaving as discussed, I can bet a value which does not equal my conditional probability that f(x) = 1 given my information, and destroy the validity of everyone else's information sets while preserving the validity of my own.  This sabotage mechanism could be quite enticing in a competitive setting, or in any zero-sum market.  Furthermore, the information structure itself is very simplified; the fact that each person receives only a single bit of information, and that this information never changes throughout the game, might not model real information, which is more amorphous, well.  Moreover, the fact that everyone is betting on a publicly known Boolean function, taking only the traders' bits as input, is also very simplifying; how does everyone know this function?  What if the outcome being bet on in a prediction market depends on information which is unknowable to any human trader, even in aggregate (i.e., weather forecasts, or a coin-flip result)?  In conclusion, I found this paper to be an interesting starting point; however, I do not find most of its results very valuable in general, due to the large number of simplifying assumptions which had to be made.

 

Malvika Rao

 

This paper presents a simplified model of an information market and illustrates the information aggregation process as a computation over distributed information. The authors show that securities whose payoffs can be expressed as weighted threshold functions of the distributed input bits are guaranteed to converge to the equilibrium price forecast by economic theory.

 

Some practical questions arise which the paper does not really address, even as motivation for examining this particular problem. Where do threshold securities occur? Are they realistic? The assumption that clearing prices are known is a strong one, and it would be interesting to examine how this process would work if the price were not known (this is also pointed out in the conclusion as possible future work). How does the behaviour of this market mechanism change as we vary the number of participating agents? Or if agents entered and left the market in a dynamic setting?

 

I was also curious as to why Bob Aumann's "Agreeing to Disagree" paper was not cited (from what I understand, this is the work that first introduced the concept of common knowledge).

 

Sagar Mehta

 

The paper considers circumstances under which a simplified model of an information market converges to the expected equilibrium. Securities whose payoffs can be expressed as a weighted threshold function (e.g., the OR function) are guaranteed to converge, while those that cannot be expressed in this manner (e.g., the XOR function) are not guaranteed to do so. The paper also bounds the number of rounds before convergence in these markets. While the paper gives interesting results, I found the model rather simplified compared to how actual information markets work. It is an interesting starting point for modeling information aggregation, but real scenarios are much more difficult to work with.

 

The paper does a good job of mentioning some areas of future work in its conclusion. I'd be interested in knowing what progress has been made on these open research questions thus far. I'd especially be interested in what the influence of uninformed traders, or traders with a purpose other than maximizing profit, would be on whether the market equilibrium can be reached. In real life, a trader may buy a security as a tool to hedge against other positions in his/her portfolio. Most of the models we have studied assume each player is trying to maximize profit given his information, but here the agent is not necessarily acting on some belief of his, but only to mitigate risk in some other trade. How does this behavior affect information aggregation? Since players do not know whom a signal is coming from, how can they differentiate an agent acting on the basis of new knowledge from an agent acting simply to reduce his overall risk?

 

Hao-Yuh Su

 

This paper investigates the computational process of information aggregation. Based on a simplified model of an information market, it shows that the equilibrium price converges to the full-information price only when the payoff of the security can be expressed as a weighted threshold function; the converse also holds. Furthermore, the paper shows that the convergence time is bounded between n/2 and n rounds, where n is the number of bits of distributed information.

 

There are several points I'd like to make clear. First of all, I'd like to inspect the information aggregation model used in this paper. Is it reasonable to assume that the information an agent receives is Boolean? Is there any example of Boolean information in a real market? What about the common-prior assumption; how well can it describe the system? Moreover, is it justified to assume that the quantity each agent trades is equal to 1? It surely alleviates the complexity of the computation, but I think it is overly simplified. Second, I'd like to know the intuitive meaning of "stochastically monotone" and "stochastically regular" functions.

 

I think there may be several extensions and applications of this paper. One direction is to make some alterations to the aggregation functions and see the corresponding results. As the paper has suggested, we can investigate information aggregation using similar techniques but with real-valued inputs instead. Although some properties still hold, it is unclear whether the market will reach an equilibrium in a finite number of steps. Another direction to think about is the presence of noisy prices: in this model, it is assumed that all agents are rational, but what if that is not true? Or we could drop the premise that all agents bid truthfully. It would be interesting to examine the impact of such "disordered" phenomena. Moreover, we may also examine the common-prior assumption: how will the aggregation behave if agents don't share the same prior? There is still much work to be done on related subjects.

 

Brett Harisson

 

This paper presents an intriguing model of a simple market where traders bet on the outcome of a security, which is in turn dependent on all of the traders' private information. In particular, the security is a Boolean function of the traders' private information, where each individual trader's information consists of a single bit. The authors show that such a market must eventually converge to an equilibrium price, but that price may not necessarily be the true value of the security. The authors proceed to give a necessary and sufficient condition on the security under which this equilibrium price is exactly the value of the security. Furthermore, the authors give lower and upper bounds on the number of time steps required for such convergence.

 

I do not like the writing style of this paper. I think it is disorganized and too wordy in many places. The paper could be cut down substantially and still convey the same points, perhaps in a clearer, more concise manner. Also, the poor typesetting makes the computations difficult to follow. I also think that some of the proof exposition could be left to an appendix so that the presentation focuses more cleanly on the results.

 

Nevertheless, the paper is highly interesting. Unfortunately, there are far too many simplifying assumptions to relate this model too closely to real financial markets, but it is a fine starting point. I wonder if there are convergence results for general functions, not just Boolean ones. Also, I am very curious about the connection of the Boolean-function securities to neural networks; after all, the functions for which the equilibrium price does not equal the security's value are exactly the functions which cannot be learned by a single perceptron.

 

Travis May

 

In this paper, the authors demonstrate that a finite, linear number of rounds is required for markets to converge to a probability that incorporates the information of all of their participants.  This is a useful insight because it shows the efficacy of markets in revealing information that is distributed across agents.  However, there are three strong assumptions in this work that future research may wish to address in order to more accurately reflect real market conditions.

 

First, real information is not necessarily binary.  While the model extends logically to pieces of information that are real numbers, the paper does not address STOCHASTIC pieces of information, in which information is revealed to market participants only probabilistically (i.e., there is random noise in observing the information that is revealed to a participant).

 

Second, the assumption of identical priors does not necessarily translate into reality, as different market participants may assume different outcomes.  This could be due to irrational optimism or pessimism, or it could occur in cases where there are different theories as to the joint distributions.

 

Third, and perhaps most importantly, the model assumes simultaneity of revealed information.  In a real market, different bits of information are revealed at different times, and thus the market adjusts over time as new factors are priced in.  In such a case, is there any theory to indicate that convergence would occur?

 

Once these three assumptions are relaxed, the model could much more accurately reflect true markets.  True markets, however, tend not to converge to a single price and instead shift continuously in price over time, an effect that could be caused by these three factors alone.