On Directed Information and Gambling

ISIT 2008, Toronto, Canada, July 6 - 11, 2008

Haim H. Permuter, Stanford University, Stanford, CA, USA ([email protected])
Young-Han Kim, University of California, San Diego, La Jolla, CA, USA ([email protected])
Tsachy Weissman, Stanford University/Technion, Stanford, CA, USA / Haifa, Israel ([email protected])

Abstract—We study the problem of gambling in horse races with causal side information and show that Massey's directed information characterizes the increment in the maximum achievable capital growth rate due to the availability of side information. This result gives a natural interpretation of directed information I(Y^n → X^n) as the amount of information that Y^n causally provides about X^n. Extensions to stock market portfolio strategies and data compression with causal side information are also discussed.

I. INTRODUCTION

Mutual information arises as the canonical answer to a variety of problems. Most notably, Shannon [1] showed that the capacity C, the maximum data rate for reliable communication over a discrete memoryless channel p(y|x) with input X and output Y, is given by

    C = max_{p(x)} I(X; Y),    (1)

which leads naturally to the operational interpretation of mutual information I(X; Y) = H(X) − H(X|Y) as the amount of uncertainty about X that can be reduced by the observation Y, or equivalently, the amount of information Y can provide about X. Indeed, mutual information I(X; Y) plays the central role in Shannon's random coding argument because the probability that independently drawn X^n and Y^n sequences "look" as if they were drawn jointly decays exponentially with exponent I(X; Y). Shannon also proved a dual result [2] showing that the minimum compression rate R to satisfy a certain fidelity criterion D between the source X and its reconstruction X̂ is given by R(D) = min_{p(x̂|x)} I(X; X̂). In another duality result (Lagrange duality this time) to (1), Gallager [3] proved the minimax redundancy theorem, connecting the redundancy of the universal lossless source code to the capacity of the channel with conditional distribution described by the set of possible source distributions.

Later on, it was shown that mutual information also plays an important role in problems that are not necessarily related to describing sources or transferring information through channels. Perhaps the most lucrative example is the use of mutual information in gambling.

Kelly showed in [4] that if each horse race outcome can be represented as an independent and identically distributed (i.i.d.) copy of a random variable X, and the gambler has some side information Y relevant to the outcome of the race, then under some conditions on the odds, the mutual information I(X; Y) captures the difference between the growth rates of the optimal gambler's wealth with and without the side information Y. Thus, Kelly's result gives an interpretation of mutual information I(X; Y) as the value of the side information Y for the horse race X.

In order to tackle problems arising in information systems with causally dependent components, Massey [5] introduced the notion of directed information as

    I(X^n → Y^n) ≜ Σ_{i=1}^n I(X^i; Y_i | Y^{i−1}),

and showed that the maximum directed information upper bounds the capacity of channels with feedback. Subsequently, it was shown that Massey's directed information and its variants indeed characterize the capacity of feedback and two-way channels [6]–[13] and the rate distortion function with feedforward [14].

The main contribution of this paper is showing that directed information I(Y^n → X^n) has a natural interpretation in gambling as the gain in growth rate due to causal side information. As a special case, if the horse race outcome and the corresponding side information sequences are i.i.d., then the (normalized) directed information becomes the single-letter mutual information I(X; Y), which coincides with Kelly's result.

The rest of the paper is organized as follows. We describe the notation of directed information and causal conditioning in Section II. In Section III, we formulate the horse-race gambling problem, in which side information is revealed causally to the gambler. We present the main result in Section IV and an analytically solved example in Section V. Finally, Section VI concludes the paper and states two possible extensions of this work to the stock market and to data compression with causal side information.

II. DIRECTED INFORMATION AND CAUSAL CONDITIONING

Throughout this paper, we use the causal conditioning notation (·||·) developed by Kramer [6]. We denote by p(x^n || y^{n−d}) the probability mass function (pmf) of X^n = (X_1, ..., X_n) causally conditioned on Y^{n−d}, for some integer d ≥ 0, which is defined as

    p(x^n || y^{n−d}) ≜ Π_{i=1}^n p(x_i | x^{i−1}, y^{i−d}).

(By convention, if i − d ≤ 0, then y^{i−d} is set to null.) In particular, we use extensively the cases d = 0, 1:

    p(x^n || y^n) ≜ Π_{i=1}^n p(x_i | x^{i−1}, y^i),
    p(x^n || y^{n−1}) ≜ Π_{i=1}^n p(x_i | x^{i−1}, y^{i−1}).

Using the chain rule, we can easily verify that

    p(x^n, y^n) = p(x^n || y^n) p(y^n || x^{n−1}).

The causally conditional entropy H(X^n || Y^n) is defined as

    H(X^n || Y^n) ≜ E[−log p(X^n || Y^n)] = Σ_{i=1}^n H(X_i | X^{i−1}, Y^i).

Under this notation, directed information can be written as

    I(Y^n → X^n) = Σ_{i=1}^n I(Y^i; X_i | X^{i−1}) = H(X^n) − H(X^n || Y^n),

which hints, in a rough analogy to mutual information, at a possible interpretation of directed information I(Y^n → X^n) as the amount of information the causally available side information Y^n can provide about X^n.

Note that the channel capacity results involve the term I(X^n → Y^n), which measures the information in the forward link X → Y. In contrast, in gambling the gain in growth rate is due to the side information (backward link), and therefore the expression I(Y^n → X^n) appears.

III. GAMBLING IN HORSE RACES WITH CAUSAL SIDE INFORMATION

Suppose that there are m racing horses in an infinite sequence of horse races, and let X_i ∈ X ≜ {1, 2, ..., m}, i = 1, 2, ..., denote the horse that wins at time i. Before betting in the i-th horse race, the gambler knows some side information Y_i ∈ Y. We assume that the gambler invests all his capital in the horse race as a function of the information that he knows at time i, i.e., the previous horse race outcomes X^{i−1} and the side information Y^i up to time i. Let b(x_i | x^{i−1}, y^i) be the proportion of wealth that the gambler bets on horse x_i given X^{i−1} = x^{i−1} and Y^i = y^i. Finally, the growth is defined as W(X^n || Y^n) ≜ E[log S(X^n || Y^n)], and the growth rate (1/n) W(X^n || Y^n) is the normalized growth.

Here is a summary of the notation:
• X_i is the outcome of the horse race at time i.
• Y_i is the side information at time i.
• o(X_i | X^{i−1}) is the payoff at time i for horse X_i given that in the previous races the horses X^{i−1} won.
• b(X_i | X^{i−1}, Y^i) is the fraction of the gambler's wealth invested in horse X_i at time i given that the outcomes of the previous races are X^{i−1} and the side information available at time i is Y^i.
• S(X^n || Y^n) is the gambler's wealth after n races when the outcomes of the races are X^n and the side information Y^n is causally available.
• (1/n) W(X^n || Y^n) is the growth rate.

Without loss of generality, we assume that the gambler's capital is 1 initially; therefore S_0 = 1.

IV. MAIN RESULTS

In Subsection IV-A, we assume that the gambler invests all his money in the horse race, while in Subsection IV-B, we allow the gambler to invest only part of the money. Using Kelly's result, it is shown in Subsection IV-B that if the odds are fair with respect to some distribution, then the gambler should invest all his money in the race.

A. Investing all the money in the horse race

We assume that at any time n the gambler invests all his capital and, therefore,

    S(X^n || Y^n) = b(X_n | X^{n−1}, Y^n) o(X_n | X^{n−1}) S(X^{n−1} || Y^{n−1}).

This also implies that

    S(X^n || Y^n) = Π_{i=1}^n b(X_i | X^{i−1}, Y^i) o(X_i | X^{i−1}).

The following theorem characterizes the optimal betting strategy and the corresponding growth of wealth.

Theorem 1: For any finite horizon n, the maximum growth rate is achieved when the gambler invests the money proportional to the causal conditioning distribution, i.e.,

    b*(x_i | x^{i−1}, y^i) = p(x_i | x^{i−1}, y^i), ∀ x^i, y^i, i ≤ n,    (3)

and the growth is

    W*(X^n || Y^n) = E[log o(X^n)] − H(X^n || Y^n).
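The chain-rule identity p(x^n, y^n) = p(x^n || y^n) p(y^n || x^{n−1}) can be checked numerically. The following sketch (an illustrative test on a randomly generated joint pmf, not taken from the paper) computes both causal conditioning factors by brute-force marginalization over prefixes and verifies the identity on every sequence pair:

```python
import itertools
import random

random.seed(1)
n = 3

# A randomly generated joint pmf p(x^n, y^n) on binary sequences (test case).
seqs = list(itertools.product([0, 1], repeat=n))
raw = {(xs, ys): random.random() for xs in seqs for ys in seqs}
total = sum(raw.values())
joint = {k: v / total for k, v in raw.items()}

def p_x_causal_y(xs, ys):
    """p(x^n || y^n) = prod_i p(x_i | x^{i-1}, y^i), via prefix marginals."""
    prob = 1.0
    for i in range(1, n + 1):
        num = sum(p for (a, b), p in joint.items()
                  if a[:i] == xs[:i] and b[:i] == ys[:i])
        den = sum(p for (a, b), p in joint.items()
                  if a[:i - 1] == xs[:i - 1] and b[:i] == ys[:i])
        prob *= num / den
    return prob

def p_y_causal_x_delayed(xs, ys):
    """p(y^n || x^{n-1}) = prod_i p(y_i | y^{i-1}, x^{i-1})."""
    prob = 1.0
    for i in range(1, n + 1):
        num = sum(p for (a, b), p in joint.items()
                  if a[:i - 1] == xs[:i - 1] and b[:i] == ys[:i])
        den = sum(p for (a, b), p in joint.items()
                  if a[:i - 1] == xs[:i - 1] and b[:i - 1] == ys[:i - 1])
        prob *= num / den
    return prob

# Verify p(x^n, y^n) = p(x^n || y^n) * p(y^n || x^{n-1}) for all pairs.
max_err = max(abs(joint[(xs, ys)]
                  - p_x_causal_y(xs, ys) * p_y_causal_x_delayed(xs, ys))
              for xs in seqs for ys in seqs)
```

The two factors telescope: each step contributes p(x^i, y^i)/p(x^{i−1}, y^i) times p(x^{i−1}, y^i)/p(x^{i−1}, y^{i−1}), so the product collapses to p(x^n, y^n).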
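The identity I(Y^n → X^n) = H(X^n) − H(X^n || Y^n) also suggests a direct brute-force computation for any finite joint pmf. The sketch below (a hypothetical helper, not from the paper) evaluates the directed information this way and checks the i.i.d. special case, where the (normalized) directed information reduces to the single-letter mutual information I(X; Y):

```python
import itertools
import math
from collections import defaultdict

def directed_information(joint, n):
    """I(Y^n -> X^n) = H(X^n) - H(X^n || Y^n), by brute-force marginalization.

    joint: dict mapping (x_seq, y_seq) tuples of length n to probabilities.
    """
    px = defaultdict(float)    # p(x^n)
    num = defaultdict(float)   # p(x^i, y^i)
    den = defaultdict(float)   # p(x^{i-1}, y^i)
    for (xs, ys), p in joint.items():
        px[xs] += p
        for i in range(1, n + 1):
            num[(xs[:i], ys[:i])] += p
            den[(xs[:i - 1], ys[:i])] += p

    h_x = -sum(p * math.log2(p) for p in px.values() if p > 0)

    # H(X^n || Y^n) = -E[log p(X^n || Y^n)], with
    # p(x^n || y^n) = prod_i p(x_i | x^{i-1}, y^i).
    h_x_causal = 0.0
    for (xs, ys), p in joint.items():
        if p == 0:
            continue
        log_causal = sum(
            math.log2(num[(xs[:i], ys[:i])] / den[(xs[:i - 1], ys[:i])])
            for i in range(1, n + 1))
        h_x_causal -= p * log_causal

    return h_x - h_x_causal

# Example: (X_i, Y_i) i.i.d. pairs, so I(Y^n -> X^n) = n * I(X; Y).
p_xy = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}
n = 2
joint = {}
for xs in itertools.product([0, 1], repeat=n):
    for ys in itertools.product([0, 1], repeat=n):
        joint[(xs, ys)] = math.prod(p_xy[(x, y)] for x, y in zip(xs, ys))

di = directed_information(joint, n)
```

Note that here the side information Y_i is causally conditioned with delay d = 0, matching the gambling setup where Y_i is revealed before the bet on X_i.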
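In the memoryless special case, Theorem 1 can be illustrated by simulation: betting b*(x|y) = p(x|y) on i.i.d. races should achieve growth E[log o(X)] − H(X|Y) per race. The numbers below (two horses, binary side information, uniform odds) are made up for illustration:

```python
import math
import random

random.seed(0)

# Hypothetical race model: p(y) and p(x | y), with uniform fair odds o(x) = 2.
p_y = [0.5, 0.5]
p_x_given_y = [[0.8, 0.2],   # p(x | y = 0)
               [0.3, 0.7]]   # p(x | y = 1)
odds = [2.0, 2.0]

n_races = 100_000
log_wealth = 0.0
for _ in range(n_races):
    y = random.choices([0, 1], weights=p_y)[0]
    x = random.choices([0, 1], weights=p_x_given_y[y])[0]
    b = p_x_given_y[y][x]    # Theorem 1: optimal bet b*(x | y) = p(x | y)
    log_wealth += math.log2(b * odds[x])

empirical_rate = log_wealth / n_races   # (1/n) log S_n, in bits per race

# Theoretical growth rate per race: E[log o(X)] - H(X | Y).
h_x_given_y = -sum(p_y[y] * p_x_given_y[y][x] * math.log2(p_x_given_y[y][x])
                   for y in range(2) for x in range(2))
e_log_o = sum(p_y[y] * p_x_given_y[y][x] * math.log2(odds[x])
              for y in range(2) for x in range(2))
w_star = e_log_o - h_x_given_y
```

With these made-up parameters the theoretical rate is about 0.198 bits per race, and the empirical rate concentrates around it as the number of races grows.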
