A Low-Complexity Improved Successive Cancellation Decoder


A Low-Complexity Improved Successive Cancellation Decoder for Polar Codes

Orion Afisiadis, Alexios Balatsoukas-Stimming, and Andreas Burg
Telecommunications Circuits Laboratory, École Polytechnique Fédérale de Lausanne, Switzerland.

Abstract—Under successive cancellation (SC) decoding, polar codes are inferior to other codes of similar blocklength in terms of frame error rate. While more sophisticated decoding algorithms such as list- or stack-decoding partially mitigate this performance loss, they suffer from an increase in complexity. In this paper, we describe a new flavor of the SC decoder, called the SC flip decoder. Our algorithm preserves the low memory requirements of the basic SC decoder and adjusts the required decoding effort to the signal quality. In the waterfall region, its average computational complexity is almost as low as that of the SC decoder.

I. INTRODUCTION

Polar codes [1] are particularly attractive from a theoretical point of view because they are the first codes that are both highly structured and provably optimal for a wide range of applications (in the sense of optimality that pertains to each application). Moreover, they can be decoded using an elegant, albeit suboptimal, successive cancellation (SC) algorithm, which has computational complexity $O(N \log N)$ [1], where $N = 2^n$, $n \in \mathbb{Z}$, is the blocklength of the code, and memory complexity $O(N)$ [2]. Even though the SC decoder is suboptimal, it is sufficient to prove that polar codes are capacity achieving in the limit of infinite blocklength.

Unfortunately, the error correcting performance of SC decoding at finite blocklengths is not as good as that of other modern codes, such as LDPC codes. To improve the finite blocklength performance, more sophisticated algorithms, such as SC list decoding [3] and SC stack decoding [4], were introduced recently. These algorithms use SC as the underlying decoder, but improve its performance by exploring multiple paths on a decision tree simultaneously, with each path resulting in one candidate codeword. The computational and memory complexities of SC list decoding are $O(LN \log N)$ and $O(LN)$, respectively, where $L$ is the list size parameter, whereas the computational and memory complexities of SC stack decoding are $O(DN \log N)$ and $O(DN)$, respectively, where $D$ is the stack depth parameter.

Since an exhaustive search through all paths is prohibitively complex, choosing a suitable strategy for pruning unlikely paths is an important ingredient for low-complexity tree search algorithms. To this end, in [4], some path pruning-based methods were proposed in order to reduce the computational complexity of both SC stack and SC list decoding. An alternative approach to reduce the computational complexity of SC list decoding was taken in [5], [6], where decoding starts with list size 1, and the list size is increased only when decoding fails (failures are detected using a CRC), up to the maximum list size $L$. Moreover, in [7] SC list decoding is employed only for the least reliable bits of the polar code, thus also reducing the computational complexity. However, in [7] $L$ distinct paths are still followed in parallel.

Unfortunately, when implementing any decoder in hardware, one always has to provision for the worst case in terms of hardware resources. For the reduced-complexity SC list decoders in [4]–[7] and the reduced-complexity SC stack decoder in [4] this means that $O(LN)$ and $O(DN)$ memory needs to be instantiated, respectively. Moreover, the reduced-complexity list SC and stack SC algorithms also have a significantly higher computational complexity than that of the original SC algorithm.

Contribution: In this paper, we describe a new SC-based decoding algorithm, called SC flip, which retains the $O(N)$ memory complexity of the original SC algorithm and has an average computational complexity that is practically $O(N \log N)$ at high SNR, while still providing a significant gain in terms of error correcting performance.

II. POLAR CODES AND SUCCESSIVE CANCELLATION DECODING

A. Construction of Polar Codes

Let $W$ denote a binary input memoryless channel with input $u \in \{0, 1\}$, output $y \in \mathcal{Y}$, and transition probabilities $W(y|u)$. A polar code is constructed by recursively applying a $2 \times 2$ channel combining transformation on $2^n$ independent copies of $W$, followed by a channel splitting step [1]. This results in a set of $N = 2^n$ synthetic channels, denoted by $W_n^{(i)}(y_1^N, u_1^{i-1} \mid u_i)$, $i = 1, \ldots, N$. Let $Z_i \triangleq Z\bigl(W_n^{(i)}(Y_1^N, U_1^{i-1} \mid U_i)\bigr)$, $i = 1, \ldots, N$, where $Z(W)$ is the Bhattacharyya parameter of $W$, which can be calculated using various methods (cf. [1], [8], [9]). The construction of a polar code of rate $R \triangleq k/N$, $0 < k < N$, is completed by choosing the $k$ best synthetic channels (i.e., the synthetic channels with the lowest $Z_i$) as non-frozen channels which carry the information bits, while freezing the inputs of the remaining channels to values $u_i$ that are known both to the transmitter and to the receiver. The set of frozen channel indices is denoted by $\mathcal{A}^c$ and the set of non-frozen channel indices is denoted by $\mathcal{A}$. The encoder generates a vector $u_1^N$ by setting $u_{\mathcal{A}^c}$ equal to the known frozen values, while choosing $u_{\mathcal{A}}$ freely. A codeword is obtained as $x_1^N = u_1^N G_N$, where $G_N$ is the generator matrix [1].
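To make the construction step concrete, below is a minimal sketch for the special case of a binary erasure channel BEC($\varepsilon$), for which the Bhattacharyya parameters of the synthetic channels follow the exact recursion $Z \mapsto 2Z - Z^2$ (worse child) and $Z \mapsto Z^2$ (better child) [1]. This is only one of the estimation methods alluded to above; the function names and the BEC assumption are ours, not the paper's.

```python
def bec_bhattacharyya(n, eps):
    """Bhattacharyya parameters Z_i of the N = 2^n synthetic channels of a
    BEC(eps), for which the polarization recursion is exact [1]:
    the 'minus' child has Z' = 2Z - Z^2, the 'plus' child has Z' = Z^2."""
    z = [eps]
    for _ in range(n):
        z = [child for w in z for child in (2 * w - w * w, w * w)]
    return z  # z[i-1] is Z_i, in natural channel order i = 1, ..., N

def construct_polar_code(n, k, eps):
    """Return (A, A_c): the k channels with the smallest Z_i form the
    information set A; the remaining N - k channels are frozen."""
    z = bec_bhattacharyya(n, eps)
    order = sorted(range(1, 2 ** n + 1), key=lambda i: z[i - 1])
    return sorted(order[:k]), sorted(order[k:])

# Example: N = 8, rate-1/2 code over BEC(0.5).
A, A_c = construct_polar_code(3, 4, 0.5)
print(A, A_c)  # A = [4, 6, 7, 8], A_c = [1, 2, 3, 5]
```

For channels other than the BEC, this recursion only bounds the Bhattacharyya parameters, which is why the paper points to the estimation methods of [1], [8], [9].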
B. Successive Cancellation Decoding

The SC decoding algorithm [1] starts by computing an estimate of $u_1$, denoted by $\hat{u}_1$, based only on the received values $y_1^N$. Subsequently, $u_2$ is estimated using $(y_1^N, \hat{u}_1)$, etc. Since $u_i$, $i \in \mathcal{A}^c$, are known to the receiver, the real task of SC decoding is to estimate $u_i$, $i \in \mathcal{A}$. Let the log-likelihood ratio (LLR) for $W_n^{(i)}(y_1^N, \hat{u}_1^{i-1} \mid u_i)$ be defined as

$$L_n^{(i)}(y_1^N, \hat{u}_1^{i-1} \mid u_i) \triangleq \log \frac{W_n^{(i)}(y_1^N, \hat{u}_1^{i-1} \mid u_i = 0)}{W_n^{(i)}(y_1^N, \hat{u}_1^{i-1} \mid u_i = 1)}. \qquad (1)$$

Decisions are taken according to

$$\hat{u}_i = \begin{cases} 0, & L_n^{(i)}(y_1^N, \hat{u}_1^{i-1} \mid u_i) \geq 0 \text{ and } i \in \mathcal{A}, \\ 1, & L_n^{(i)}(y_1^N, \hat{u}_1^{i-1} \mid u_i) < 0 \text{ and } i \in \mathcal{A}, \\ u_i, & i \in \mathcal{A}^c. \end{cases} \qquad (2)$$

The decision LLRs $L_n^{(i)}(y_1^N, \hat{u}_1^{i-1} \mid u_i)$ can be calculated efficiently through a computation graph which contains two types of nodes, namely $f$ nodes and $g$ nodes. An example of this graph for $N = 8$ is given in Fig. 1.

[Fig. 1: The computation graph of the SC decoder for N = 8. The f nodes are green and the g nodes are blue; in parentheses are the partial sums used by each g node.]
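As a minimal transcription of the decision rule (2), here is an illustrative helper of our own (the frozen values are supplied as a dictionary keyed by channel index; nothing here is from the paper itself):

```python
def sc_decision(llr_i, i, frozen_values):
    """Eq. (2): frozen bits take their known value; information bits
    follow the sign of the decision LLR (a tie, LLR = 0, decodes to 0)."""
    if i in frozen_values:           # i in A^c
        return frozen_values[i]
    return 0 if llr_i >= 0 else 1    # i in A
```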
Both types of nodes have two input LLRs, denoted by $L_1$ and $L_2$, and one output LLR, denoted by $L$. The $g$ nodes have an additional input called the partial sum, denoted by $u$. The partial sums form the decision feedback part of the SC decoder. The min-sum update rules [2] for the two types of nodes are

$$f(L_1, L_2) = \operatorname{sign}(L_1)\operatorname{sign}(L_2)\min\left(|L_1|, |L_2|\right), \qquad (3)$$
$$g(L_1, L_2, u) = (-1)^u L_1 + L_2. \qquad (4)$$

The partial sums at stage $(s-1)$ can be calculated from the partial sums at stage $s$, $s \in \{1, \ldots, n\}$, as

$$u_{s-1}^{(2i-1-[(i-1) \bmod 2^{s-1}])} = u_s^{(2i-1)} \oplus u_s^{(2i)}, \qquad (5)$$
$$u_{s-1}^{(2^{s-1}+2i-1-[(i-1) \bmod 2^{s-1}])} = u_s^{(2i)}, \qquad (6)$$

where

$$u_n^{(i)} \triangleq \hat{u}_i, \quad \forall i \in \{1, \ldots, N\}. \qquad (7)$$

The computation graph contains $N(\log N + 1)$ nodes and each node only needs to be activated once. Thus, the computational complexity of SC decoding is $O(N \log N)$. A straightforward implementation of the computation graph in Fig. 1 requires $O(N \log N)$ memory positions. However, by cleverly re-using memory locations, it is possible to reduce the memory complexity to $O(N)$ [2].

[Fig. 2: Histogram showing the relative frequency of the number of errors caused by the channel for a polar code with N = 1024 and R = 0.5 for three different SNR values ($E_b/N_0$ = 1.5, 2.0, and 2.5 dB).]

For example, assume that, for the polar code in Fig. 1, the frozen set is $\mathcal{A}^c = \{1, 2, 5, 6\}$ and the information set is $\mathcal{A} = \{3, 4, 7, 8\}$. Moreover, assume that the all-zero codeword was transmitted and that $u_3$ was erroneously decoded as $\hat{u}_3 = 1$ due to channel noise. Now suppose that the two LLRs that are […]
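For reference, the decoding recursion described by Eqs. (2)–(7) can be summarized in code. The sketch below is ours, not the paper's: it assumes the natural-order generator $G_N = F^{\otimes n}$ (implementations of [1] often include a bit-reversal permutation, which this sketch omits), it uses the min-sum updates (3) and (4), and all names and the toy inputs are illustrative.

```python
import numpy as np

def f(l1, l2):
    # f-node update, Eq. (3): min-sum approximation.
    return np.sign(l1) * np.sign(l2) * np.minimum(np.abs(l1), np.abs(l2))

def g(l1, l2, u):
    # g-node update, Eq. (4): the partial sum u selects the sign of l1.
    return (-1.0) ** u * l1 + l2

def sc_decode(llr, frozen):
    """Recursive SC decoder. `llr` holds the N channel LLRs; `frozen[i]` is
    the known bit value at frozen positions and -1 at information positions.
    Returns (u_hat, x_hat), where x_hat are the partial sums fed back to the
    g nodes, per Eqs. (5)-(7)."""
    N = len(llr)
    if N == 1:
        u_hat = frozen[0] if frozen[0] >= 0 else (0 if llr[0] >= 0 else 1)  # Eq. (2)
        return np.array([u_hat]), np.array([u_hat])
    l1, l2 = llr[:N // 2], llr[N // 2:]
    # Decode the first half through the f-combined channel ...
    u_a, x_a = sc_decode(f(l1, l2), frozen[:N // 2])
    # ... then the second half, feeding back the partial sums x_a.
    u_b, x_b = sc_decode(g(l1, l2, x_a), frozen[N // 2:])
    return np.concatenate([u_a, u_b]), np.concatenate([x_a ^ x_b, x_b])

# Toy run: N = 8 with the frozen set A^c = {1, 2, 5, 6} of the example above;
# positive LLRs favor bit 0 (e.g., BPSK over AWGN).
frozen = np.array([0, 0, -1, -1, 0, 0, -1, -1])
llr = np.array([2.1, 1.7, -0.3, 2.5, 1.9, 0.8, 2.2, 1.4])
u_hat, _ = sc_decode(llr, frozen)
```

Each recursion level activates $N/2$ f nodes and $N/2$ g nodes across $\log N$ levels, matching the $N(\log N + 1)$ node count and the $O(N \log N)$ complexity discussed above.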