Global Warming – Impacts and Future Perspective
Recommended publications
- Thompson Sampling on Symmetric α-Stable Bandits
Thompson Sampling on Symmetric α-Stable Bandits. Abhimanyu Dubey and Alex Pentland, Massachusetts Institute of Technology.

Abstract: Thompson Sampling provides an efficient technique to introduce prior knowledge in the multi-armed bandit problem, along with providing remarkable empirical performance. In this paper, we revisit the Thompson Sampling algorithm under rewards drawn from α-stable distributions, which are a class of heavy-tailed probability distributions utilized in finance and economics, in problems such as modeling stock prices and human behavior. We present an efficient framework for α-stable posterior inference, which leads to two algorithms for Thompson Sampling in this setting. We prove finite-time regret bounds for both algorithms, and demonstrate through a series of experiments the stronger performance of Thompson Sampling in this setting. With our results, we provide an exposition of α-stable distributions in sequential decision-making, and enable sequential Bayesian inference in applications from diverse fields in finance and complex systems that operate on heavy-tailed features.

1 Introduction. The multi-armed bandit (MAB) problem is a fundamental model in understanding the exploration-exploitation dilemma in sequential decision-making. The problem and several of its variants have been studied extensively over the years, and a number of algorithms have been proposed that optimally solve the bandit problem when the reward distributions are well-behaved, i.e. have a finite support, or are sub-exponential. The most prominently studied class of algorithms are the Upper Confidence Bound (UCB) algorithms, which employ an "optimism in the face of uncertainty" heuristic [ACBF02] and have been shown to be optimal (in terms of regret) in several cases [CGM+13, BCBL13].
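For readers new to the method, here is a minimal sketch of the Thompson Sampling loop in the classical Beta-Bernoulli (conjugate) setting; the paper's contribution is replacing this conjugate posterior with α-stable posterior inference, which is not shown here, and all names in the sketch are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def thompson_sampling(true_means, horizon=10_000):
    """Beta-Bernoulli Thompson Sampling: draw one sample from each
    arm's posterior, pull the arm with the largest draw, update."""
    k = len(true_means)
    alpha = np.ones(k)  # Beta(1, 1) uniform priors
    beta = np.ones(k)
    regret = 0.0
    best = max(true_means)
    for _ in range(horizon):
        theta = rng.beta(alpha, beta)   # one posterior draw per arm
        arm = int(np.argmax(theta))     # act greedily on the draw
        reward = rng.random() < true_means[arm]
        alpha[arm] += reward            # conjugate posterior update
        beta[arm] += 1 - reward
        regret += best - true_means[arm]
    return regret

print(thompson_sampling([0.3, 0.5, 0.7]))
```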
- On Products of Gaussian Random Variables
On products of Gaussian random variables. Željka Stojanac (1), Daniel Suess (1), and Martin Kliesch (2). (1) Institute for Theoretical Physics, University of Cologne, Germany; (2) Institute of Theoretical Physics and Astrophysics, University of Gdańsk, Poland. May 29, 2018.

Sums of independent random variables form the basis of many fundamental theorems in probability theory and statistics, and therefore, are well understood. The related problem of characterizing products of independent random variables seems to be much more challenging. In this work, we investigate such products of normal random variables, products of their absolute values, and products of their squares. We compute power-log series expansions of their cumulative distribution function (CDF) based on the theory of Fox H-functions. Numerically we show that for small arguments the CDFs are well approximated by the lowest orders of this expansion. For the two non-negative random variables, we also compute the moment generating functions in terms of Meijer G-functions, and consequently, obtain a Chernoff bound for sums of such random variables.

Keywords: Gaussian random variable, product distribution, Meijer G-function, Chernoff bound, moment generating function. AMS subject classifications: 60E99, 33C60, 62E15, 62E17.

1. Introduction and motivation. Compared to sums of independent random variables, our understanding of products is much less comprehensive. Nevertheless, products of independent random variables arise naturally in many applications including channel modeling [1, 2], wireless relaying systems [3], quantum physics (product measurements of product states), as well as signal processing. Here, we are particularly motivated by a tensor sensing problem (see Ref. [4] for the basic idea). In this problem we consider tensors $T \in \mathbb{R}^{n_1 \times n_2 \times \cdots \times n_d}$ and wish to recover them from measurements of the form $y_i := \langle A_i, T \rangle$, with the sensing tensors also being of rank one, $A_i = a_i^1 \otimes a_i^2 \otimes \cdots \otimes a_i^d$ with $a_i^j \in \mathbb{R}^{n_j}$.
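The two-factor case admits a classical closed form that the paper's expansions generalize: the density of a product of two independent standard normals is $K_0(|t|)/\pi$, where $K_0$ is the modified Bessel function of the second kind. A quick Monte Carlo check of the CDF against this form (a sketch under that known special case, not the paper's code):

```python
import numpy as np
from scipy.special import k0
from scipy.integrate import quad

rng = np.random.default_rng(1)

# Monte Carlo samples of Z = X * Y for independent standard normals.
z = rng.standard_normal(1_000_000) * rng.standard_normal(1_000_000)

def cdf_exact(t):
    """CDF from the d = 2 closed form f(s) = K_0(|s|) / pi,
    a special case of the Fox H / Meijer G representations."""
    tail, _ = quad(lambda s: k0(s) / np.pi, abs(t), np.inf)
    return 1 - tail if t >= 0 else tail  # density is symmetric

for t in (-1.0, 0.5, 2.0):
    print(t, (z <= t).mean(), cdf_exact(t))
```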
- Space Expansion of Feature Selection for Designing More Accurate Error Predictors
Space Expansion of Feature Selection for Designing more Accurate Error Predictors. Shayan Tabatabaei Nikkhah (Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, The Netherlands), Mehdi Kamal and Ali Afzali-Kusha (School of Electrical and Computer Engineering, University of Tehran, Tehran, Iran), and Massoud Pedram (Department of Electrical Engineering, University of Southern California, Los Angeles, CA, USA).

ABSTRACT: Approximate computing is being considered as a promising design paradigm to overcome the energy and performance challenges in computationally demanding applications. In the case where the accuracy can be configured, the quality level versus energy efficiency or delay may also be traded off. For this technique to be used, one needs to ensure a satisfactory user experience. This requires employing error predictors to detect unacceptable approximation errors. In this work, we propose a scheduling-aware feature selection method which leverages the …

… outputs. These errors strongly depend on application inputs [6], putting an extra emphasis on quality control in approximate computing. One of the proposed solutions for managing the output quality is sampling the output elements and changing the approximate configuration accordingly to meet the quality requirements [3], [7]. Given that output errors are strongly dependent on the inputs, this approach is highly susceptible to missing the output elements with large errors [6]. There is another approach emphasized in recent works (e.g., [6], [8], [9]) which leverages light-weight quality checkers to dynamically manage the output quality.
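A hedged sketch of the error-predictor idea described above: a lightweight classifier learns, from input features alone, whether the approximate output would violate a quality threshold. The exact/approximate function pair, the threshold, and the logistic-regression checker are all assumptions for illustration, not the paper's kernels or its feature selection method:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)

# Toy stand-ins: an "exact" function and a cheap Taylor-like surrogate
# whose error depends on the input (sin(a) ~ a, exp(b) ~ 1 + b).
x = rng.uniform(-2, 2, size=(20_000, 2))
exact = np.sin(x[:, 0]) * np.exp(x[:, 1])
approx = x[:, 0] * (1 + x[:, 1])
unacceptable = (np.abs(exact - approx) > 0.5).astype(int)  # quality rule

# Error predictor: flag inputs for which approximation should be skipped.
x_tr, x_te, y_tr, y_te = train_test_split(x, unacceptable, random_state=0)
clf = LogisticRegression().fit(x_tr, y_tr)
print("checker accuracy:", clf.score(x_te, y_te))
```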
- Constructing Copulas from Shock Models with Imprecise Distributions
Constructing copulas from shock models with imprecise distributions. Matjaž Omladič (Institute of Mathematics, Physics and Mechanics, Ljubljana, Slovenia) and Damjan Škulj (Faculty of Social Sciences, University of Ljubljana, Slovenia). November 19, 2019. arXiv:1812.07850v5 [math.PR].

Abstract: The omnipotence of copulas when modeling dependence given marginal distributions in a multivariate stochastic situation is assured by Sklar's theorem. Montes et al. (2015) suggest the notion of what they call an imprecise copula that brings some of its power in the bivariate case to the imprecise setting. When there is imprecision about the marginals, one can model the available information by means of p-boxes, that are pairs of ordered distribution functions. By analogy they introduce pairs of bivariate functions satisfying certain conditions. In this paper we introduce the imprecise versions of some classes of copulas emerging from shock models that are important in applications. The so obtained pairs of functions are not only imprecise copulas but satisfy an even stronger condition. The fact that this condition really is stronger is shown in Omladič and Stopar (2019), thus raising the importance of our results. The main technical difficulty in developing our imprecise copulas lies in introducing an appropriate stochastic order on these bivariate objects.

Keywords: Marshall's copula, maxmin copula, p-box, imprecise probability, shock model.

1 Introduction. In this paper we propose copulas arising from shock models in the presence of probabilistic uncertainty, which means that probability distributions are not necessarily precisely known. Copulas have been introduced in the precise setting by A. …
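For concreteness, the precise (non-imprecise) Marshall-Olkin copula, a standard shock-model family related to the Marshall copulas named in the keywords, can be checked against its shock construction. This sketch uses assumed shock rates and shows the precise case only, not the paper's imprecise p-box version:

```python
import numpy as np

def marshall_olkin(u, v, alpha, beta):
    """Marshall-Olkin copula C(u, v) = min(u^(1-alpha) * v, u * v^(1-beta)),
    which arises from a three-shock exponential model."""
    return np.minimum(u ** (1 - alpha) * v, u * v ** (1 - beta))

# Shock construction: each component fails at the first of its own
# shock and a common shock.
rng = np.random.default_rng(3)
l1, l2, l12 = 1.0, 2.0, 0.5  # assumed shock rates
s1, s2, s12 = (rng.exponential(1 / l, 100_000) for l in (l1, l2, l12))
x, y = np.minimum(s1, s12), np.minimum(s2, s12)  # component lifetimes
# alpha, beta: common-shock share of each component's total rate
alpha, beta = l12 / (l1 + l12), l12 / (l2 + l12)
u, v = 0.3, 0.6
# Survival transforms of x, y are uniform; compare empirical joint
# probability with the closed form.
emp = np.mean((np.exp(-(l1 + l12) * x) <= u) & (np.exp(-(l2 + l12) * y) <= v))
print(emp, marshall_olkin(u, v, alpha, beta))
```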
- Design and Development MIPS Processor Based on a High Performance and Low Power Architecture on FPGA
I.J. Modern Education and Computer Science, 2013, 5, 49-59. Published Online June 2013 in MECS (http://www.mecs-press.org/). DOI: 10.5815/ijmecs.2013.05.06.

Design and Development MIPS Processor Based on a High Performance and Low Power Architecture on FPGA. Tina Daghooghi, Shahid Chamran University, Ahvaz, Iran.

Abstract— This paper presents the design and development of a high performance and low power MIPS microprocessor and its implementation on FPGA. In this method, to achieve high performance and low power operation of the proposed microprocessor, we use different techniques including the unfolding transformation (parallel processing), the C-slow retiming technique, and double-edge registers to further reduce power consumption. Other blocks are designed based on high-speed digital circuits. C-slow retiming can enhance designs that contain feedback loops, such as the proposed architecture. C-slow retiming is a well-known optimization and high-performance technique; it automatically rebalances the registers in the proposed design.

… the architecture must contain all of this state. Based on the current architectural state, the processor executes a particular instruction with a particular set of data to produce a new architectural state. Some microarchitectures contain additional nonarchitectural state to either simplify the logic or improve performance [2]. The IBM System z10™ microprocessor is currently the fastest running 64-bit CISC (complex instruction set computer) microprocessor. This microprocessor operates at 4.4 GHz and provides up to two times performance improvement compared with its predecessor, the System z9® microprocessor. In addition to its ultrahigh-frequency pipeline, the z10™ microprocessor offers such performance enhancements as a sophisticated branch-prediction structure, a large second-level private …
- Model-Order Reduction Using Variational Balanced Truncation with Spectral Shaping. Payam Heydari, Member, IEEE, and Massoud Pedram, Fellow, IEEE.
IEEE Transactions on Circuits and Systems—I: Regular Papers, Vol. 53, No. 4, April 2006, 879.

Abstract—This paper presents a spectrally weighted balanced truncation (SBT) technique for tightly coupled integrated circuit interconnects, when the interconnect circuit parameters change as a result of statistical variations in the manufacturing process. The salient features of this algorithm are the inclusion of the parameter variation in the RLCK interconnect, the guaranteed passivity of the reduced transfer function, and the availability of provable spectrally weighted error bounds for the reduced-order system. This paper shows that the variational balanced truncation technique produces reduced systems that accurately follow the time- and frequency-domain responses of the original system when variations in the circuit parameters are taken into consideration. Experimental results show that the new variational SBT attains, on average, 30% more accuracy than the variational Krylov-subspace-based model-order reduction techniques.

… the reduced system. Extensions of explicit moment-matching techniques have been proposed recently, to reduce the linear time-varying (LTV) as well as nonlinear dynamic systems [7], [8]. An alternative model reduction technique is the balanced realization [9]–[11]. While being widely investigated in control system theory, balanced realization techniques have not received the same attention as Krylov-based and Padé-based model reduction techniques in the context of interconnect analysis, partly because these methods involve computation-intensive algorithms [9], [11], [12]. Furthermore, it is well known that the balancing transformation may be poorly conditioned.
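The core balancing step that SBT builds on can be sketched in a few lines. This is plain square-root balanced truncation, without the paper's spectral weighting or parameter variation, applied to a made-up stable system for illustration:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

def balanced_truncation(A, B, C, r):
    """Square-root balanced truncation of a stable LTI system
    dx/dt = Ax + Bu, y = Cx, keeping the r largest Hankel
    singular values. Plain, unweighted version only."""
    P = solve_continuous_lyapunov(A, -B @ B.T)    # controllability Gramian
    Q = solve_continuous_lyapunov(A.T, -C.T @ C)  # observability Gramian
    Lc = cholesky(P, lower=True)
    Lo = cholesky(Q, lower=True)
    U, s, Vt = svd(Lo.T @ Lc)                     # s: Hankel singular values
    T = Lc @ Vt.T / np.sqrt(s)                    # balancing transform
    Tinv = (U / np.sqrt(s)).T @ Lo.T
    T, Tinv = T[:, :r], Tinv[:r, :]               # truncate to order r
    return Tinv @ A @ T, Tinv @ B, C @ T, s

# Toy stable RC-ladder-like system (an assumption for illustration).
A = np.array([[-2.0, 1.0, 0.0], [1.0, -3.0, 1.0], [0.0, 1.0, -4.0]])
B = np.array([[1.0], [0.0], [0.0]])
C = np.array([[0.0, 0.0, 1.0]])
Ar, Br, Cr, hsv = balanced_truncation(A, B, C, r=2)
print("Hankel singular values:", hsv)
```

The discarded Hankel singular values directly bound the H-infinity error of the reduced model, which is the property the paper extends to spectrally weighted bounds.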
- Field Guide to Continuous Probability Distributions
Field Guide to Continuous Probability Distributions. Gavin E. Crooks, v 1.0.0, 2019. Copyright © 2010-2019 Gavin E. Crooks. ISBN: 978-1-7339381-0-5. http://threeplusone.com/fieldguide. Berkeley Institute for Theoretical Sciences (BITS). Typeset on 2019-04-10 with XeTeX version 0.99999; fonts: Trump Mediaeval (text), Euler (math).

Preface: The search for GUD. A common problem is that of describing the probability distribution of a single, continuous variable. A few distributions, such as the normal and exponential, were discovered in the 1800's or earlier. But about a century ago the great statistician, Karl Pearson, realized that the known probability distributions were not sufficient to handle all of the phenomena then under investigation, and set out to create new distributions with useful properties. During the 20th century this process continued with abandon and a vast menagerie of distinct mathematical forms were discovered and invented, investigated, analyzed, rediscovered and renamed, all for the purpose of describing the probability of some interesting variable. There are hundreds of named distributions and synonyms in current usage. The apparent diversity is unending and disorienting. Fortunately, the situation is less confused than it might at first appear. Most common, continuous, univariate, unimodal distributions can be organized into a small number of distinct families, which are all special cases of a single Grand Unified Distribution. This compendium details these hundred or so simple distributions, their properties and their interrelations.
- Quantifying EEG Synchrony Using Copulas
Quantifying EEG Synchrony Using Copulas. Satish G. Iyengar (Dept. of EECS, Syracuse University, NY), Justin Dauwels (Massachusetts Institute of Technology, Cambridge, MA), Pramod K. Varshney (Dept. of EECS, Syracuse University, NY), and Andrzej Cichocki (RIKEN Brain Science Institute, Japan).

ABSTRACT: In this paper, we consider the problem of quantifying synchrony between multiple simultaneously recorded electroencephalographic signals. These signals exhibit nonlinear dependencies and non-Gaussian statistics. A copula based approach is presented to model the joint statistics. We then consider the application of copula derived synchrony measures for early diagnosis of Alzheimer's disease. Results on real data are presented.

Index Terms— EEG, Copula theory, Kullback-Leibler divergence, Statistical dependence.

1. INTRODUCTION. Quantifying synchrony between electroencephalographic (EEG) … However, this is not true with other distributions. The Gaussian model fails to characterize any nonlinear dependence and higher order correlations that may be present between the variables. Further, the multivariate Gaussian model constrains the marginals to also follow the Gaussian distribution. As we will show in the following sections, copula models help alleviate these limitations. Following the same approach as in [4], we then evaluate the classification (normal vs. subjects with MCI) performance using the leave-one-out cross validation procedure. We use the same data set as in [4], thus allowing comparison with other synchrony measures studied in [4]. The paper is structured as follows. We discuss copula theory and its use in modeling multivariate time series data in Section 2. Section 3 describes the EEG data set used in the present analysis and considers the application of copula theory for EEG synchrony quantification.
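One simple copula-derived synchrony measure in the spirit of the setup above is the mutual information of a fitted Gaussian copula, which for two channels equals -0.5 log(1 - rho^2) in terms of the normal-scores correlation rho, and which is exactly a Kullback-Leibler divergence from independence under that model. A sketch with synthetic signals; this is an illustration, not the paper's exact measure or data:

```python
import numpy as np
from scipy.stats import norm, rankdata

def gaussian_copula_synchrony(x, y):
    """Fit a bivariate Gaussian copula via the probability integral
    transform and normal scores, and return its mutual information,
    the KL divergence of the fitted joint model from independence."""
    n = len(x)
    u = rankdata(x) / (n + 1)          # empirical CDF transform
    v = rankdata(y) / (n + 1)
    zu, zv = norm.ppf(u), norm.ppf(v)  # normal scores
    rho = np.corrcoef(zu, zv)[0, 1]
    return -0.5 * np.log1p(-rho ** 2)

rng = np.random.default_rng(4)
common = rng.standard_normal(5000)
x = common + 0.8 * rng.standard_normal(5000)           # channel 1
y = np.tanh(common) + 0.8 * rng.standard_normal(5000)  # nonlinear coupling
print(gaussian_copula_synchrony(x, y))
```

Because the marginals are rank-transformed away, the measure is unchanged by monotone distortions of each channel, which is one reason copula-based measures suit non-Gaussian EEG data.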
- Design Technologies for Low Power VLSI
To appear in Encyclopedia of Computer Science and Technology, 1995. Design Technologies for Low Power VLSI. Massoud Pedram, Department of EE-Systems, University of Southern California, Los Angeles, CA 90089.

Abstract: Low power has emerged as a principal theme in today's electronics industry. The need for low power has caused a major paradigm shift where power dissipation has become as important a consideration as performance and area. This article reviews various strategies and methodologies for designing low power circuits and systems. It describes the many issues facing designers at architectural, logic, circuit and device levels and presents some of the techniques that have been proposed to overcome these difficulties. The article concludes with the future challenges that must be met to design low power, high performance systems.

1. Motivation. In the past, the major concerns of the VLSI designer were area, performance, cost and reliability; power consideration was mostly of only secondary importance. In recent years, however, this has begun to change and, increasingly, power is being given comparable weight to area and speed considerations. Several factors have contributed to this trend. Perhaps the primary driving factor has been the remarkable success and growth of the class of personal computing devices (portable desktops, audio- and video-based multimedia products) and wireless communications systems (personal digital assistants and personal communicators) which demand high-speed computation and complex functionality with low power consumption. In these applications, average power consumption is a critical design concern. The projected power budget for a battery-powered, A4 format, portable multimedia terminal, when implemented using off-the-shelf components not optimized for low-power operation, is about 40 W.
- Block Level Voltage/Frequency Scheduling for Low Power Consumption Under Timing Constraints
Block Level Voltage/Frequency Scheduling for Low Power Consumption under Timing Constraints. A thesis submitted in partial fulfillment of the requirements for the degree of Research Master in the School of Electronic Engineering, Dublin City University, Ireland, by Li-Chuan Weng, B.Sc. ECE, National Chiao-Tung University, Taiwan. September 2004. © Li-Chuan Weng 2004. All rights reserved. No part of this work shall be reproduced, in any form or by any means, without permission from the author.

Supervisor: Dr. XiaoJun Wang, Lecturer, Dublin City University. Internal Examiner: Dr. Noel O'Connor, Lecturer, Dublin City University. External Examiner: Dr. Steve Grainger, Professor, Staffordshire University.

Declaration: I hereby certify that this material, which I now submit for assessment on the program of study leading to the award of Research Master Degree of Electronic Engineering, is entirely my own work and has not been taken from the work of others save and to the extent that such work has been cited and acknowledged within the text of my work.

To my parents.

Acknowledgements: This thesis would not have been possible without my advisors, Dr. XiaoJun Wang and Dr. Alan P. Su. Many thanks to Dr. Wang for his inspiration and the time and patience spent in reading this thesis, and thanks to Dr. Su for the discussions regarding my research. Dr. Wang allowed me the freedom to work on topics that I enjoyed.
- Joint Distributions, Math 217 Probability and Statistics: A Discrete Example
Joint distributions. Math 217 Probability and Statistics, Prof. D. Joyce, Fall 2014.

Today we'll look at joint random variables and joint distributions in detail.

Product distributions. If Ω1 and Ω2 are sample spaces, then their distributions P: Ω1 → R and P: Ω2 → R determine a product distribution P: Ω1 × Ω2 → R as follows. First, if E1 and E2 are events in Ω1 and Ω2, then define P(E1 × E2) to be P(E1)P(E2). That defines P on "rectangles" in Ω1 × Ω2. Next, extend that to countable disjoint unions of rectangles. There's a bit of theory to show that what you get is well-defined. If it happens that P(E × F) = P(E)P(F) for every E ⊆ Ω1 and F ⊆ Ω2, then the distribution on Ω = Ω1 × Ω2 is the product of the distributions on Ω1 and Ω2. But that doesn't always happen, and that's what we're interested in.

A discrete example. Deal a standard deck of 52 cards to four players so that each player gets 13 cards at random. Let X be the number of spades that the first player gets and Y be the number of spades that the second player gets. Let's compute the probability mass function f(x, y) = P(X=x and Y=y), the probability that the first player gets x spades and the second player gets y spades. There are $\binom{52}{13}\binom{39}{13}$ hands that can be dealt to these two players. There are $\binom{13}{x}\binom{39}{13-x}$ hands for the first player with exactly x spades, and, with the first hand fixed, $\binom{13-x}{y}\binom{26+x}{13-y}$ hands for the second player with exactly y spades …
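The counting argument can be checked numerically. This sketch evaluates the pmf built from the binomial coefficients above and verifies that it sums to 1 and that the X-margin agrees with the hypergeometric count:

```python
from math import comb

def f(x, y):
    """Joint pmf: first hand has x spades, second hand has y spades.
    math.comb(n, k) returns 0 when k > n, handling impossible cases."""
    ways = comb(13, x) * comb(39, 13 - x) * comb(13 - x, y) * comb(26 + x, 13 - y)
    return ways / (comb(52, 13) * comb(39, 13))

total = sum(f(x, y) for x in range(14) for y in range(14))
print(total)                                       # 1.0 up to rounding
print(sum(f(3, y) for y in range(14)))             # P(X = 3) via the margin
print(comb(13, 3) * comb(39, 10) / comb(52, 13))   # same, hypergeometric
```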
- Decision-Making with Heterogeneous Sensors - A Copula Based Approach
Decision-Making with Heterogeneous Sensors - A Copula Based Approach. Satish Giridhar Iyengar, Syracuse University, 2011. Electrical Engineering and Computer Science - Dissertations, 310. https://surface.syr.edu/eecs_etd/310

Abstract: Statistical decision making has wide ranging applications, from communications and signal processing to econometrics and finance. In contrast to the classical one source - one receiver paradigm, several applications have been identified in the recent past that require acquiring data from multiple sources or sensors. Information from the multiple sensors is transmitted to a remotely located receiver known as the fusion center, which makes a global decision. Past work has largely focused on fusion of information from homogeneous sensors. This dissertation extends the formulation to the case when the local sensors may possess disparate sensing modalities. Both the theoretical and practical aspects of multimodal signal processing are considered. The first and foremost challenge is to 'adequately' model the joint statistics of such heterogeneous sensors. We propose the use of copula theory for this purpose. Copula models are general descriptors of dependence.