1-107 P-Wave Morphology Correlates with The
Total Pages: 16
File Type: PDF, Size: 1020 KB
Recommended publications
Error Correction Capacity of Unary Coding
Error Correction Capacity of Unary Coding
Pushpa Sree Potluri

Abstract: Unary coding has found applications in data compression, neural network training, and in explaining the production mechanism of birdsong. Unary coding is redundant; therefore it should have inherent error correction capacity. An expression for the error correction capability of unary coding for the correction of single errors has been derived in this paper.

1. Introduction. The unary number system is the base-1 system. It is the simplest number system for representing natural numbers. The unary code of a number n is represented by n ones followed by a zero, or by n zero bits followed by a one bit [1]. Unary codes have found applications in data compression [2],[3], neural network training [4]-[11], and in biology in the study of avian birdsong production [12]-[14]. One can also claim that the additivity of physics is somewhat like the tallying of unary coding [15],[16]. Unary coding has also been seen as the precursor to the development of number systems [17]. Some representations of the unary number system use n-1 ones followed by a zero, or the corresponding number of zeroes followed by a one. Here we use the mapping of the left column of Table 1.

Table 1. An example of the unary code
N    Unary code     Alternative code
0    0              0
1    10             01
2    110            001
3    1110           0001
4    11110          00001
5    111110         000001
6    1111110        0000001
7    11111110       00000001
8    111111110      000000001
9    1111111110     0000000001
10   11111111110    00000000001

The unary number system may also be seen as a space coding of numerical information, where the location determines the value of the number.
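To make the mapping in Table 1 concrete, here is a minimal sketch (not from the paper) of an encoder and decoder for the left-column convention of n ones followed by a terminating zero; the function names are illustrative.

```python
def unary_encode(n: int) -> str:
    """Left-column convention of Table 1: n ones followed by a terminating zero."""
    return "1" * n + "0"

def unary_decode(code: str) -> int:
    """Recover n by counting the ones before the terminating zero."""
    return code.index("0")

if __name__ == "__main__":
    for n in range(6):
        word = unary_encode(n)
        assert unary_decode(word) == n
        print(n, word)
```

The redundancy the paper exploits is visible here: every valid codeword has the rigid form of a run of ones closed by a single zero, so a bit flip breaks that pattern in a detectable way.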
Acute Coronary Syndrome
Technology Assessment
Systematic Review of ECG-based Signal Analysis Technologies for Evaluating Patients With Acute Coronary Syndrome
Technology Assessment Report, Project ID: CRDD0311
Prepared for: Agency for Healthcare Research and Quality, 540 Gaither Road, Rockville, Maryland 20850
October 2011
Duke Evidence-based Practice Center: Remy R. Coeytaux, M.D., Ph.D.; Philip J. Leisy, B.S.; Galen S. Wagner, M.D.; Amanda J. McBroom, Ph.D.; Cynthia L. Green, Ph.D.; Liz Wing, M.A.; R. Julian Irvine, M.C.M.; Gillian D. Sanders, Ph.D.

DRAFT – Not for citation or dissemination. This draft technology assessment is distributed solely for the purpose of peer review and/or discussion at the MedCAC meeting. It has not been otherwise disseminated by AHRQ. It does not represent and should not be construed to represent an AHRQ determination or policy. This report is based on research conducted by the Duke Evidence-based Practice Center under contract to the Agency for Healthcare Research and Quality (AHRQ), Rockville, MD (Contract No. HHSA 290-2007-10066 I). The findings and conclusions in this document are those of the authors, who are responsible for its contents. The findings and conclusions do not necessarily represent the views of AHRQ. Therefore, no statement in this report should be construed as an official position of the Agency for Healthcare Research and Quality or of the U.S. Department of Health and Human Services. None of the investigators has any affiliations or financial involvement related to the material presented in this report.
Image Data Compression: Introduction to Coding
Image Data Compression: Introduction to Coding
© 2018-19 Alexey Pak, Lehrstuhl für Interaktive Echtzeitsysteme, Fakultät für Informatik, KIT

Review: data reduction steps (discretization / digitization)
The slide diagram traces the signal chain from the continuous 2D signal g(x, y, t) (the analog signal: light intensity on the sensor), through spatial discretization to the spatially discrete signal g(x_a, y_b, t) (pixel-averaged intensity), through temporal discretization to the discrete-time signal g(x_a, y_b, t_i) (pixel voltage readings), and through digitization to the discrete-value, fully digital signal g_q(x_a, y_b, t_i) (e.g., the number of electrons at each pixel of the CCD matrix).

Review: data reduction steps (discretization / digitization)
Discretization of 1D continuous-time signals (sampling)
• Important signal transformations: up- and down-sampling
• Information-preserving down-sampling: rate determined based on signal bandwidth
• Fourier space allows simple interpretation of the effects due to decimation and interpolation (techniques of up-/down-sampling)
Scalar (one-dimensional) signal quantization of continuous-value signals
• Quantizer types: uniform, simple non-uniform (with a dead zone, with a limited amplitude)
• Advanced quantizers: PDF-optimized (Max-Lloyd algorithm), perception-optimized, SNR-optimized
• Implementation: pre-processing with a compander function + simple quantization
Vector (multi-dimensional) signal quantization
• Terminology: quantization, reconstruction, codebook, distance metric, Voronoi regions, space partitioning
• Relation to the general classification problem (from Machine Learning)
• Linde-Buzo-Gray algorithm of constructing (sub-optimal) codebooks (aka k-means)

LBG vector quantization – 2D example [Linde, Buzo, Gray '80]: …
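The LBG example announced on the last slide can be summarized in a few lines of code. The following is a rough sketch of the Linde-Buzo-Gray (k-means) codebook construction for 2-D training vectors under a Euclidean distance metric; it is not the lecture's implementation, and the function name, parameters, and test data are illustrative.

```python
import numpy as np

def lbg_codebook(points: np.ndarray, k: int, iters: int = 50, seed: int = 0) -> np.ndarray:
    """Illustrative Linde-Buzo-Gray / k-means codebook construction for 2-D vectors."""
    rng = np.random.default_rng(seed)
    codebook = points[rng.choice(len(points), size=k, replace=False)].copy()
    for _ in range(iters):
        # Assign every training vector to its nearest codeword (Voronoi partition).
        dists = np.linalg.norm(points[:, None, :] - codebook[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Move each codeword to the centroid of its cell; keep it if the cell is empty.
        for j in range(k):
            cell = points[labels == j]
            if len(cell):
                codebook[j] = cell.mean(axis=0)
    return codebook

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    training = rng.normal(size=(1000, 2))
    print(lbg_codebook(training, k=4))
```

The two alternating steps are exactly the nearest-codeword partition and centroid update named in the slide bullets, which is why the result is a locally (not globally) optimal codebook.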
Prof. Jonathan S Steinberg
Revised June 2006
CURRICULUM VITAE

PERSONAL DATA
Name: Jonathan S. Steinberg, M.D.
Business Address: St. Luke's/Roosevelt, 1111 Amsterdam Avenue, New York, NY 10025
Home Address: 268 Underhill Road, South Orange, NJ 07079
Social Security No.: 092-44-3548
Birthdate: March 28, 1956
Birthplace: New York
Marital Status: Married, Two Children
Citizenship: USA

ACADEMIC TRAINING
9/72 - 6/76  AB - Queens College of the City University of New York
9/76 - 6/80  MD - Mt. Sinai School of Medicine, New York, NY

TRAINEESHIP
7/80 - 6/81  Medical Intern, New York University - New York VA Medical Center, New York, NY
7/81 - 6/83  Medical Resident, New York University - New York VA Medical Center, New York, NY
7/83 - 6/84  Chief Medical Resident, New York University - New York VA Medical Center, New York, NY
7/84 - 6/86  Clinical Cardiology Fellow, George Washington University Medical Center, Washington, DC
7/86 - 6/88  Fellowship - Electrophysiology, Columbia-Presbyterian Medical Center, New York, NY

LICENSURE
New York - 147162
New Jersey - MA57755

BOARD CERTIFICATION
1980  Diplomate, National Board of Medical Examiners
1983  Diplomate, American Board of Internal Medicine
1987  Diplomate, Subspecialty of Cardiovascular Diseases
1992  Diplomate, Clinical Electrophysiology

PROFESSIONAL ORGANIZATIONS AND SOCIETIES
Fellow - American College of Cardiology
Fellow - American College of Physicians
Fellow - Council on Clinical Cardiology, American Heart Association
Member - North American Society of Pacing & Electrophysiology
Member - International Society of Holter …
The Pillars of Lossless Compression Algorithms: a Road Map and Genealogy Tree
International Journal of Applied Engineering Research, ISSN 0973-4562, Volume 13, Number 6 (2018), pp. 3296-3414. © Research India Publications. http://www.ripublication.com

The Pillars of Lossless Compression Algorithms: a Road Map and Genealogy Tree
Evon Abu-Taieh, PhD, Information System Technology Faculty, The University of Jordan, Aqaba, Jordan

Abstract: This paper presents the pillars of lossless compression algorithms, methods and techniques. The paper counted more than 40 compression algorithms. Although each algorithm is independent in its own right, these algorithms interrelate genealogically and chronologically. The paper then presents the genealogy tree suggested by the researcher. The tree shows the interrelationships between the 40 algorithms, as well as the chronological order in which the algorithms came to life. The time relation shows the cooperation within the scientific community and how researchers amended each other's work. The paper presents the 12 pillars researched in this paper, and a comparison table is to be developed. The genealogy tree is presented in the last section of the paper, after presenting the 12 main compression algorithms, each with a practical example.

The paper first introduces Shannon-Fano code, showing its relation to Shannon (1948), Huffman coding (1952), Fano (1949), Run Length Encoding (1967), Peter's Version (1963), Enumerative Coding (1973), LIFO (1976), FIFO Pasco (1976), Stream (1979), and P-Based FIFO (1981). Two examples are to be presented, one for Shannon-Fano code and the other for arithmetic coding. Next, Huffman code is to be presented with a simulation example and algorithm. The third is the Lempel-Ziv-Welch (LZW) algorithm, which hatched more than 24 …
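Since the excerpt promises a Huffman simulation example, a compact illustration may help. The sketch below builds a Huffman code table from symbol frequencies with a binary heap; it follows the standard 1952 construction rather than any listing from the paper, and the names used are illustrative.

```python
import heapq
from collections import Counter

def huffman_table(text: str) -> dict:
    """Build a prefix-free Huffman code table from symbol frequencies."""
    heap = [(freq, i, {sym: ""}) for i, (sym, freq) in enumerate(Counter(text).items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)    # two least frequent subtrees
        f2, _, right = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in left.items()}    # prefix 0 for the left subtree
        merged.update({s: "1" + c for s, c in right.items()})  # prefix 1 for the right subtree
        heapq.heappush(heap, (f1 + f2, tie, merged))
        tie += 1
    return heap[0][2]

if __name__ == "__main__":
    message = "abracadabra"
    table = huffman_table(message)
    bits = "".join(table[s] for s in message)
    print(table, len(bits), "bits")
```

Frequent symbols end up with short codewords and rare ones with long codewords, which is the property every later algorithm in the genealogy tree builds on.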
The Deep Learning Solutions on Lossless Compression Methods for Alleviating Data Load on IoT Nodes in Smart Cities
Sensors (Article)
The Deep Learning Solutions on Lossless Compression Methods for Alleviating Data Load on IoT Nodes in Smart Cities
Ammar Nasif *, Zulaiha Ali Othman and Nor Samsiah Sani
Center for Artificial Intelligence Technology (CAIT), Faculty of Information Science & Technology, University Kebangsaan Malaysia, Bangi 43600, Malaysia; [email protected] (Z.A.O.); [email protected] (N.S.S.)
* Correspondence: [email protected]

Abstract: Networking is crucial for smart city projects nowadays, as it offers an environment where people and things are connected. This paper presents a chronology of factors on the development of smart cities, including IoT technologies as network infrastructure. Increasing IoT nodes leads to increasing data flow, which is a potential source of failure for IoT networks. The biggest challenge of IoT networks is that the IoT may have insufficient memory to handle all transaction data within the IoT network. We aim in this paper to propose a potential compression method for reducing IoT network data traffic. Therefore, we investigate various lossless compression algorithms, such as entropy or dictionary-based algorithms, and general compression methods, to determine which algorithm or method adheres to the IoT specifications. Furthermore, this study conducts compression experiments using entropy (Huffman, Adaptive Huffman) and dictionary (LZ77, LZ78) algorithms, as well as five different types of datasets of the IoT data traffic. Though the above algorithms can alleviate the IoT data traffic, adaptive Huffman gave the best compression algorithm. Therefore, in this paper, we aim to propose a conceptual compression method for IoT data traffic by improving an adaptive …
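As a concrete illustration of the dictionary-based family (LZ77, LZ78) evaluated in the paper, here is a small LZ78 sketch that emits (prefix index, next symbol) tokens and reconstructs the input. It is illustrative only, not the authors' experimental code, and the sample message is made up.

```python
def lz78_compress(data: str):
    """Emit (index of longest known phrase, next symbol) pairs, growing the dictionary."""
    dictionary = {"": 0}
    phrase, tokens = "", []
    for ch in data:
        if phrase + ch in dictionary:
            phrase += ch                         # keep extending the current match
        else:
            tokens.append((dictionary[phrase], ch))
            dictionary[phrase + ch] = len(dictionary)
            phrase = ""
    if phrase:
        tokens.append((dictionary[phrase], ""))  # flush the final, already-known phrase
    return tokens

def lz78_decompress(tokens):
    entries, pieces = [""], []
    for index, ch in tokens:
        entry = entries[index] + ch              # rebuild the phrase from its prefix
        pieces.append(entry)
        entries.append(entry)
    return "".join(pieces)

if __name__ == "__main__":
    message = "ababcbababaaaaaaa"
    tokens = lz78_compress(message)
    assert lz78_decompress(tokens) == message
    print(tokens)
```

The appeal for constrained IoT nodes is that the dictionary is rebuilt on the fly from the data itself, so nothing beyond the token stream has to be stored or transmitted.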
A. E. Berezin, V. A. Vizir, O. V. Demidenko
MINISTRY OF HEALTH OF UKRAINE
ZAPORIZHZHIA STATE MEDICAL UNIVERSITY
Department of Internal Medicine No. 2
A. E. Berezin, V. A. Vizir, O. V. Demidenko
CARDIOVASCULAR DISEASE («INTERNAL MEDICINE», MODULE 2), PART 1
The executive task force for students of the medical faculty, 5th course
Zaporizhzhia, 2018

UDC 616.12(075.8) B45
Ratified at a meeting of the Central Methodical Committee of Zaporizhzhia State Medical University and recommended for use in the educational process for foreign students (Protocol No. 5 from 24 May 2018).

Reviewers:
V. V. Syvolap - MD, PhD, Professor, Head of the Department of Propedeutics of Internal Diseases with the Course of Patients' Care, Zaporizhzhia State Medical University;
O. V. Kraydashenko - MD, PhD, Professor, Head of the Department of Clinical Pharmacology, Pharmacy and Pharmacotherapy with the Course of Cosmetology, Zaporizhzhia State Medical University.

Authors:
A. E. Berezin - MD, PhD, Professor, Department of Internal Diseases 2;
V. A. Vizir - MD, PhD, Professor, Department of Internal Diseases 2;
O. V. Demidenko - MD, PhD, Head of the Department of Internal Diseases 2.

Berezin A. E. B45 Cardiovascular Diseases («Internal Medicine», Module 2). Part 1 = Серцево-судинні захворювання («Внутрішня медицина». Модуль 2). Ч. 1 : The executive task force for students of the 5th course of the medical faculty / A. E. Berezin, V. A. Vizir, O. V. Demidenko. - Zaporizhzhia : ZSMU, 2018. - 220 p.

The executive task force is provided for students of the 5th course of medical faculties to help them study selected topics in the field of cardiovascular diseases incorporated into the discipline «Internal Medicine». It contains information on the most important topics regarding the diagnosis of cardiac diseases.
High-Frequency Signature of the QRS Complex Across Ischemia Quantified by QRS Slopes
High-Frequency Signature of the QRS Complex across Ischemia Quantified by QRS Slopes
E Pueyo, A Arciniega, P Laguna
Comm Techn Group, Aragon Institute of Eng Research, University of Zaragoza, Spain

Abstract
In this study, two new indexes that measure the upward and downward slopes of the QRS complex are proposed to quantify variations in the depolarization period due to induced ischemia during Percutaneous Transluminal Coronary Angioplasty. Our results show that QRS slopes turn out to be substantially less steep during artery occlusion, with this effect more pronounced for the downward slope than for the upward one. Comparing the proposed indexes with a previously reported index that quantifies the energy of the high-frequency QRS signal (150-250 Hz), it is shown that the ability of the slope indexes for ischemia detection is clearly superior.

… specificity values. The reason for this could be in the hypothesis itself or in the way the high-frequency components are usually quantified by high-pass filtering of the narrow transient QRS signal. The filtering introduces leakage in frequency and smearing in time that can mask the very nature of the localized HF-QRS features. To further investigate this, we hypothesize that if the decrease in the HF-QRS components is due to a reduction of the conduction velocity in the region of ischemia [4], such a phenomenon could also be quantified by measuring the upward and downward slopes of the QRS complex. To test our hypothesis, we analyze recordings of patients before and during prolonged Percutaneous Translu…
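The slope indexes themselves are easy to prototype. The sketch below is one plausible reading of "upward and downward QRS slopes" (the steepest rise between Q and R and the steepest fall between R and S), assuming the fiducial points are already known; it is not the authors' exact estimator, and the sampling rate, test beat, and indices in the demo are made up.

```python
import numpy as np

def qrs_slopes(beat: np.ndarray, fs: float, q_idx: int, r_idx: int, s_idx: int):
    """Upward and downward QRS slopes (signal units per second), taken as the extreme
    first derivatives on the Q-R upstroke and the R-S downstroke of a single beat."""
    dv = np.diff(beat) * fs                  # first derivative of the beat
    upward = dv[q_idx:r_idx].max()           # steepest rise before the R peak
    downward = dv[r_idx:s_idx].min()         # steepest fall after the R peak
    return upward, downward

if __name__ == "__main__":
    fs = 1000.0                              # hypothetical sampling rate (Hz)
    t = np.arange(0, 0.12, 1 / fs)
    beat = 1.5 * np.exp(-((t - 0.06) / 0.01) ** 2)   # crude synthetic R wave
    print(qrs_slopes(beat, fs, q_idx=30, r_idx=60, s_idx=90))
```

Under the paper's hypothesis, occlusion-induced conduction slowing would shrink both numbers, with the downward slope flattening the most.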
Decrease in the High Frequency QRS Components Depending on the Local Conduction Delay
Jpn Circ J 1998; 62: 844–848
Decrease in the High Frequency QRS Components Depending on the Local Conduction Delay
Tetsu Watanabe, MD; Michiyasu Yamaki, MD; Hidetada Tachibana, MD; Isao Kubota, MD; Hitonobu Tomoike, MD

The high frequency components contained in the QRS complex (HF-QRS) are a powerful indicator for the risk of sudden cardiac death. However, it is controversial whether conduction delay increases or decreases the HF-QRS. In 21 anesthetized, open-chest dogs, the right atrium was constantly paced. A cannula was inserted into the left anterior descending artery and flecainide, lidocaine or disopyramide was infused to slow the local conduction. Sixty unipolar electrograms were recorded from the entire ventricular surface and were signal-averaged. Data were filtered (30–250 Hz) by using fast-Fourier transform. The HF-QRS was calculated by integrating the filtered QRS signal. Activation time (AT; dV/dt minimum) was delayed and the HF-QRS was reduced in the area perfused by flecainide, lidocaine or disopyramide. The percent increase in AT closely correlated with the percentage decrease in the HF-QRS; the correlation coefficients were 0.75, 0.83 and 0.76 for flecainide, lidocaine and disopyramide infusion, respectively (p<0.001). Decrease in the HF-QRS linearly correlated with the local conduction delay. This study proved that conduction delay decreases the HF-QRS, and that the HF-QRS is a potent indicator of disturbed local conduction. (Jpn Circ J 1998; 62: 844–848)

Key Words: Disopyramide; Fast-Fourier transform; Flecainide; Lidocaine; Signal averaging

…ate potentials, or high-frequency low-amplitude signals in the terminal portion of the QRS …
… anesthetized with sodium pentobarbital (30 mg/kg, iv) and received supplemental doses as needed.
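The HF-QRS quantity described above (band-limit the signal-averaged QRS to 30-250 Hz with an FFT, then integrate the filtered signal) can be sketched as follows. The brick-wall FFT filter, the integration of the absolute value, and the synthetic test beat are simplifying assumptions, not the study's actual processing chain.

```python
import numpy as np

def hf_qrs(qrs: np.ndarray, fs: float, band=(30.0, 250.0)) -> float:
    """Band-limit a signal-averaged QRS with an FFT brick-wall filter and
    integrate the magnitude of the result (one reading of the HF-QRS index)."""
    spectrum = np.fft.rfft(qrs)
    freqs = np.fft.rfftfreq(len(qrs), d=1.0 / fs)
    spectrum[(freqs < band[0]) | (freqs > band[1])] = 0.0   # zero out-of-band bins
    filtered = np.fft.irfft(spectrum, n=len(qrs))
    return float(np.trapz(np.abs(filtered), dx=1.0 / fs))

if __name__ == "__main__":
    fs = 1000.0                              # hypothetical sampling rate (Hz)
    t = np.arange(0, 0.1, 1 / fs)
    qrs = np.sin(2 * np.pi * 10 * t) + 0.1 * np.sin(2 * np.pi * 120 * t)
    print(hf_qrs(qrs, fs))
```

With this kind of index, slowing the local conduction (as the drugs in the study do) lowers the in-band content, which is the relationship the paper quantifies against activation-time delay.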
"Electrocardiogram (ECG) Signal Processing". In: Encyclopedia Of
ELECTROCARDIOGRAM (ECG) SIGNAL PROCESSING
LEIF SÖRNMO, Lund University, Sweden
PABLO LAGUNA, Zaragoza University, Spain

Figure 1. Algorithms for basic ECG signal processing (block diagram: ECG, noise filtering, QRS detection, wave delineation, data compression, storage or transmission). The timing information produced by the QRS detector may be fed to the blocks for noise filtering and data compression (indicated by gray arrows) to improve their respective performance. The output of the upper branch is the conditioned ECG signal and related temporal information, including the occurrence time of each heartbeat and the onset and end of each wave.

1. INTRODUCTION
Signal processing today is performed in the vast majority of systems for ECG analysis and interpretation. The objective of ECG signal processing is manifold and comprises the improvement of measurement accuracy and reproducibility (when compared with manual measurements) and the extraction of information not readily available from the signal through visual assessment. In many situations, the ECG is recorded during ambulatory or strenuous conditions such that the signal is corrupted by different types of noise, sometimes originating from …

… operate in sequential order, information on the occurrence time of a heartbeat, as produced by the QRS detector, is sometimes incorporated into the other algorithms to improve performance. The complexity of each algorithm varies from application to application so that, for example, noise filtering performed in ambulatory monitoring is much more sophisticated than that …
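To give a flavor of what the QRS detection block in Figure 1 does, here is a deliberately simplified, Pan-Tompkins-style sketch (differentiate, square, smooth, threshold with a refractory period). Real detectors of the kind the encyclopedia entry describes use adaptive thresholds and band-pass preprocessing and are considerably more robust; the test signal and parameters here are invented.

```python
import numpy as np

def detect_qrs(ecg: np.ndarray, fs: float):
    """Toy QRS detector: differentiate, square, smooth with a moving average,
    then pick local maxima above a fixed fraction of the global maximum while
    enforcing a 200 ms refractory period."""
    d = np.diff(ecg)
    window = max(1, int(0.08 * fs))                      # ~80 ms integration window
    energy = np.convolve(d ** 2, np.ones(window) / window, mode="same")
    threshold = 0.3 * energy.max()
    refractory = int(0.2 * fs)
    beats, last = [], -refractory
    for i in range(1, len(energy) - 1):
        is_peak = energy[i] > threshold and energy[i] >= energy[i - 1] and energy[i] > energy[i + 1]
        if is_peak and i - last > refractory:
            beats.append(i)                              # sample index of the detected beat
            last = i
    return beats

if __name__ == "__main__":
    fs = 360.0                                           # hypothetical sampling rate (Hz)
    t = np.arange(0, 5, 1 / fs)
    ecg = sum(np.exp(-((t - c) / 0.02) ** 2) for c in np.arange(0.5, 5, 0.8))
    print(detect_qrs(np.asarray(ecg), fs))
```

The beat times returned here are exactly the timing information that, per Figure 1, can be fed back to the noise filtering and data compression blocks.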
Habilitation `A Diriger Des Recherches from Image Coding And
Habilitation à Diriger des Recherches
From image coding and representation to robotic vision
Marie BABEL, Université de Rennes 1, June 29th 2012

Bruno Arnaldi, Professor, INSA Rennes - Committee chairman
Ferran Marques, Professor, Technical University of Catalonia - Reviewer
Benoît Macq, Professor, Université Catholique de Louvain - Reviewer
Frédéric Dufaux, CNRS Research Director, Telecom ParisTech - Reviewer
Charly Poulliat, Professor, INP-ENSEEIHT Toulouse - Examiner
Claude Labit, Inria Research Director, Inria Rennes - Examiner
François Chaumette, Inria Research Director, Inria Rennes - Examiner
Joseph Ronsin, Professor, INSA Rennes - Examiner

IRISA UMR CNRS 6074 / INRIA - Equipe Lagadic
IETR UMR CNRS 6164 - Equipe Images

Contents
1 Introduction
  1.1 An overview of my research project
  1.2 Coding and representation tools: QoS/QoE context
  1.3 Image and video representation: towards pseudo-semantic technologies
  1.4 Organization of the document
2 Still image coding and advanced services
  2.1 JPEG AIC calls for proposal: a constrained applicative context
    2.1.1 Evolution of codecs: JPEG committee
    2.1.2 Response to the call for JPEG-AIC
  2.2 Locally Adaptive Resolution compression framework: an overview
    2.2.1 Principles and properties
    2.2.2 Lossy to lossless scalable solution
    2.2.3 Hierarchical colour region representation and coding
    2.2.4 Interoperability
  2.3 Quadtree Partitioning: principles
    2.3.1 Basic homogeneity criterion: morphological gradient
    2.3.2 Enhanced color-oriented homogeneity criterion
      2.3.2.1 Motivations
      2.3.2.2 Results
  2.4 Interleaved S+P: the pyramidal profile …
Efficient Inverted Index Compression Algorithm Characterized by Faster Decompression Compared with the Golomb-Rice Algorithm
Entropy (Article)
Efficient Inverted Index Compression Algorithm Characterized by Faster Decompression Compared with the Golomb-Rice Algorithm
Andrzej Chmielowiec 1,* and Paweł Litwin 2
1 The Faculty of Mechanics and Technology, Rzeszow University of Technology, Kwiatkowskiego 4, 37-450 Stalowa Wola, Poland
2 The Faculty of Mechanical Engineering and Aeronautics, Rzeszow University of Technology, Powstańców Warszawy 8, 35-959 Rzeszow, Poland; [email protected]
* Correspondence: [email protected]

Abstract: This article deals with compression of binary sequences with a given number of ones, which can also be considered as a list of indexes of a given length. The first part of the article shows that the entropy H of random n-element binary sequences with exactly k elements equal to one satisfies the inequalities k·log2(0.48·n/k) < H < k·log2(2.72·n/k). Based on this result, we propose a simple coding using fixed-length words. Its main application is the compression of random binary sequences with a large disproportion between the number of zeros and the number of ones. Importantly, the proposed solution allows for much faster decompression compared with Golomb-Rice coding, with a relatively small decrease in compression efficiency. The proposed algorithm can be particularly useful for database applications, for which the speed of decompression is much more important than the degree of index list compression.

Keywords: inverted index compression; Golomb-Rice coding; runs coding; sparse binary sequence compression
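For reference, the Golomb-Rice baseline named in the title works roughly as sketched below for a gap-encoded posting list: each gap is split into a unary quotient and k literal remainder bits. This is a generic illustration of Rice coding (Golomb coding with a power-of-two divisor), not the authors' implementation, and the posting list and parameter k are made up.

```python
def rice_encode(gaps, k):
    """Rice code (Golomb with divisor 2**k): each gap becomes a unary quotient
    followed by k literal remainder bits."""
    bits = []
    for g in gaps:
        q, r = g >> k, g & ((1 << k) - 1)
        bits.append("1" * q + "0" + (format(r, f"0{k}b") if k else ""))
    return "".join(bits)

def rice_decode(bitstream, k, count):
    values, i = [], 0
    for _ in range(count):
        q = 0
        while bitstream[i] == "1":           # read the unary quotient
            q += 1
            i += 1
        i += 1                               # skip the terminating zero
        r = int(bitstream[i:i + k], 2) if k else 0
        i += k
        values.append((q << k) | r)
    return values

if __name__ == "__main__":
    postings = [3, 7, 8, 20, 21, 40]         # sorted index list
    gaps = [postings[0]] + [b - a for a, b in zip(postings, postings[1:])]
    code = rice_encode(gaps, k=2)
    assert rice_decode(code, k=2, count=len(gaps)) == gaps
    print(code, len(code), "bits")
```

The bit-by-bit unary scan in the decoder is what the article's fixed-length-word scheme avoids, which is where its faster decompression comes from at the cost of slightly longer codes.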