Entropy Region and Network Information Theory
Thesis by Sormeh Shadbakht

In Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy

California Institute of Technology
Pasadena, California

2011
(Defended May 16, 2011)

© 2011 Sormeh Shadbakht
All Rights Reserved

To my family

Acknowledgements

I am deeply grateful to my advisor, Professor Babak Hassibi, without whom this dissertation would not have been possible. He has never hesitated to offer his support, and his guidance and mindful comments during my Ph.D. years at Caltech have indeed been crucial. I have always been inspired by his profound knowledge, unparalleled insight, and passion for science, and have enjoyed countless hours of intriguing discussions with him. I feel privileged to have had the opportunity to work with him.

My sincere gratitude goes to Professors Jehoshua Bruck, Michelle Effros, Tracey Ho, Robert J. McEliece, P. P. Vaidyanathan, and Adam Wierman for serving on my candidacy and Ph.D. examination committees and providing invaluable feedback. I would like to thank Professor Alex Grant, with whom I had the opportunity for discussion during his visit to Caltech in winter 2009, and Professor John Walsh for interesting discussions and kind advice. I am also grateful to Professor Salman Avestimehr for our worthwhile collaborations (some instances of which are reflected in this thesis) and to Professor Alex Dimakis for several occasions of brainstorming. I am further thankful to David Fong, Amin Jafarian, and Matthew Thill, with whom I have had pleasant collaborations; parts of this dissertation are joint work with them.

My warmest thanks go to Shirley Slattery for her amiable assistance in administrative matters and beyond. I am also indebted to all my friends and colleagues who made my life more pleasant during my graduate studies.
In particular, I would like to express my gratitude to my former and current group-mates in the Systems Lab, Moore 155, who created a nice environment that made long hours of work more joyful. I would like to thank my dear friend, Ali Vakili, for his unlimited friendship and support. To him, my gratitude is beyond words.

Last but not least, my deepest appreciation goes to my family for their endless love, care, and encouragement. I am sincerely thankful to my parents, Farideh Ghavidel and Bahram Shadbakht, who have unconditionally put forth everything they could for my success and progress. Their love, care, and support have been boundlessly heartwarming, and their appreciation for education and science inspiring. My mother's presence here has been a blessing for me, and my father's encouragement has always helped me pursue my goals decisively. I wholeheartedly thank my dear sister, Bahareh Shadbakht, for her unreserved kindness and for being my wonderful friend and support. She and her husband, Sohrab Zamani, were a great help to me, especially during my first year in the U.S. I am greatly thankful to them, and I have always enjoyed spending time with them and my lovely nieces, Yasmin and Hannah. To my family, for all that they have done for me, I shall always be grateful.

Sormeh Shadbakht
Caltech
May 2011

Abstract

The ever-growing need for data transmission over networks calls for optimal design of systems and efficient use of coding schemes over networks. However, despite the attention given to the analysis of networks and their optimum performance, many issues remain unsolved. An important subject of study in any network is its capacity region, which is defined through the limits of the set of data rates at which the sources can reliably communicate with their corresponding destinations.
Although the capacity of a single-user communication channel is completely known, the capacity regions of many multiuser information theory problems are still open. A main hurdle in obtaining the capacity of multiuser networks is that the problem is usually nonconvex and involves limiting expressions in terms of the number of channel uses (data transmissions). This thesis takes a step toward a general framework for solving network information theory problems by studying the capacity region of networks through the entropy region.

An entropy vector of n random variables with a fixed probability distribution is the vector of all their joint entropies. The entropy region of n random variables, accordingly, is the space of all entropy vectors that can arise from different probability distributions of those random variables. In the first part, it is shown that the capacity of a large class of acyclic memoryless multiuser information theory problems can be formulated as a convex optimization over the region of entropy vectors of the network random variables. The advantage of this characterization over previous approaches, besides its universality, is that the capacity optimization need not be performed over the limit of an infinite number of channel uses. This formulation, on the other hand, reveals the fundamental role of the entropy region in determining the capacity of network information theory problems.

With this viewpoint, the rest of the thesis is dedicated to the study of the entropy region and its consequences for networks. A full characterization of the entropy region has proven to be a very challenging problem, and so we have mostly examined the space of entropy vectors through inner bound constructions.
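To make the definition concrete: for a fixed joint distribution of n discrete random variables, the entropy vector collects the joint entropy of every nonempty subset of them (2^n - 1 entries). The following short sketch (not code from the thesis; the function name and layout are our own) computes such a vector directly from a joint probability mass function:

```python
import itertools

import numpy as np


def entropy_vector(p):
    """Compute the entropy vector of n discrete random variables.

    p is an n-dimensional array whose entry p[x1, ..., xn] is the joint
    probability P(X1 = x1, ..., Xn = xn).  Returns a dict mapping each
    nonempty subset S of {0, ..., n-1} to the joint entropy H(X_S) in bits.
    """
    n = p.ndim
    h = {}
    for r in range(1, n + 1):
        for subset in itertools.combinations(range(n), r):
            # Marginalize out the variables not in the subset.
            other = tuple(i for i in range(n) if i not in subset)
            marginal = p.sum(axis=other) if other else p
            q = marginal[marginal > 0]  # ignore zero-probability cells
            h[subset] = float(-(q * np.log2(q)).sum())
    return h


# Three i.i.d. fair bits: every joint entropy H(X_S) equals |S| bits.
p = np.full((2, 2, 2), 1 / 8)
hv = entropy_vector(p)
print(hv[(0,)], hv[(0, 1)], hv[(0, 1, 2)])  # → 1.0 2.0 3.0
```

The entropy region is then the set of all such vectors as the underlying distribution ranges over every joint pmf on every (finite) alphabet, which is what makes its full characterization so difficult.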
For discrete random variables, our approaches include the characterization of entropy vectors with a lattice-derived probability distribution, the entropy region of binary random variables, and the linear representable region, roughly defined as the entropy region of linearly related random variables over a finite field. Our lattice-based construction can in principle be generalized to any number of random variables, and we have explicitly computed its resulting entropy region for up to 5 random variables. The entropy region of binary random variables and the linear representable vectors are mostly considered in the context of the linear coding capacity of networks. In particular, we formulate the binary scalar linear coding capacity of networks and give the necessary and sufficient conditions for its set of solutions. Moreover, we obtain similar necessary and sufficient conditions in the case of linearly coded arbitrary-alphabet-size scalar random variables of a network with 2 sources and determine the optimality of linear codes among all scalar codes for such a network.

For continuous random variables, we have studied the entropy region of jointly Gaussian random variables and have determined that the convex cone of the corresponding region of 3 Gaussian random variables yields the entropy region of 3 continuous random variables in general. For more than 3 random variables, we point out the set of minimal necessary and sufficient conditions for a vector to be an entropy vector of jointly Gaussian random variables.

Finally, in the absence of a full analytical characterization of the entropy region, it is desirable to be able to perform numerical optimization over this space. In this regard, a certain Monte Carlo method is proposed that enables one to numerically optimize the achievable rates in an arbitrary wired network under linear or nonlinear network coding schemes.
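The Monte Carlo method itself is developed in the body of the thesis. As a toy illustration of the general idea (our own sketch, not the proposed method), one can randomly sample joint distributions and keep the one maximizing some function of entropies; here the function is the mutual information I(X1; X2) = H(X1) + H(X2) - H(X1, X2) of two binary random variables, whose true maximum of 1 bit is attained by perfectly correlated fair bits:

```python
import numpy as np

rng = np.random.default_rng(0)


def H(p):
    """Shannon entropy in bits of a probability array (zeros ignored)."""
    q = p[p > 0]
    return float(-(q * np.log2(q)).sum())


def objective(p):
    # I(X1; X2) = H(X1) + H(X2) - H(X1, X2): a function of joint entropies.
    return H(p.sum(axis=1)) + H(p.sum(axis=0)) - H(p.ravel())


# Naive random search over joint pmfs of two binary variables: sample
# candidate distributions uniformly and keep the best one seen so far.
best_p, best_val = None, -np.inf
for _ in range(20000):
    p = rng.dirichlet(np.ones(4)).reshape(2, 2)  # random 2x2 joint pmf
    val = objective(p)
    if val > best_val:
        best_p, best_val = p, val

print(best_val)  # approaches the true optimum of 1 bit
```

A serious method would of course sample far more cleverly than this; the point is only that any function of entropies can be evaluated and optimized numerically without an analytical description of the entropy region.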
This method can be adjusted for decentralized operation of networks and can also be used for the optimization of any function of entropies of discrete random variables. The promise of this technique is shown through various simulations of several interesting network problems.

Contents

Acknowledgements
Abstract

1 Introduction
  1.1 Capacity Region in Multiuser Information Theory
  1.2 Entropy Region Characterization
  1.3 Linear Coding and Linear Representability
  1.4 Numerical Optimization Over the Entropy Region
  1.5 Scope and Contributions of the Thesis

2 Network Information Theory and Convex Optimization
  2.1 Introduction
  2.2 Entropy Vectors and Network Information Theory
  2.3 Network Capacity as a Convex Optimization
    2.3.1 Objective and Constraints
      2.3.1.1 Topological Constraints
      2.3.1.2 Channel Constraints
    2.3.2 Capacity Formulation as Convex Optimization
    2.3.3 Some Applications
      2.3.3.1 Duality and Cutset Bounds
      2.3.3.2 Wired Networks
  2.4 Conclusions

3 Lattice-Based Entropy Construction
  3.1 Introduction
  3.2 Quasi-Uniform Distributions
  3.3 Distributions from Lattices
    3.3.1 Principles and Preliminaries of Construction
    3.3.2 Actual Construction
    3.3.3 Characterizing the Lattice-Based Entropy Region
    3.3.4 Vectorized Lattices
    3.3.5 Connection to Groups and Linear Representable Region
  3.4 Explicit Constructions
    3.4.1 Two Random Variables
    3.4.2 Three Random Variables
    3.4.3 Four Random Variables
      3.4.3.1 Calculating R4
      3.4.3.2 Achieving Ingleton Inner Bound
    3.4.4 Five Random Variables
      3.4.4.1 Calculating R5
  3.5 Quasi-Uniforms of Alphabet Size 2
  3.6 Conclusions
  3.7 Appendix

4 Linear Representable Entropy Vectors
  4.1 Introduction
  4.2 Preliminaries
  4.3 Scalar Linear Representable Region
    4.3.1 General Technique
    4.3.2 Scalar Linear Representable Region of 4 Random Variables
      4.3.2.1 Deriving All Rank Vectors of a 4 × 4 Matrix
      4.3.2.2 Characterizing Γ4^sr
  4.4 Scalar Linear Codes for Networks with Two Sources
    4.4.1 Rank Vectors of an n × 2 Matrix
    4.4.2 Some Explicit Computations
  4.5 Binary Scalar Linear Network Coding
    4.5.1 Entropy, Polymatroid, and Matroid
    4.5.2 Matroid Representability and Excluded Minors