Generalized Least-Squares Regressions IV: Theory and Classification Using Generalized Means


Nataniel Greene
Department of Mathematics and Computer Science
Kingsborough Community College, CUNY
2001 Oriental Boulevard, Brooklyn, NY 11235, USA
Email: [email protected]

City University of New York (CUNY), CUNY Academic Works, Publications and Research, Kingsborough Community College, 2014. More information about this work at: https://academicworks.cuny.edu/kb_pubs/77. Discover additional works at: https://academicworks.cuny.edu. This work is made publicly available by the City University of New York (CUNY); contact: [email protected]. Published in: Mathematics and Computers in Science and Industry, ISBN: 978-1-61804-247-7.

Abstract— The theory of generalized least-squares is reformulated here using the notion of generalized means. The generalized least-squares problem seeks a line which minimizes the average generalized mean of the square deviations in x and y. The notion of a generalized mean is equivalent to the generating function concept of the previous papers but allows for a more robust understanding and has an already existing literature. Generalized means are applied to the task of constructing more examples, simplifying the theory, and further classifying generalized least-squares regressions.

Keywords— Linear regression, least-squares, orthogonal regression, geometric mean regression, generalized least-squares, generalized mean square regression.

I. OVERVIEW

Ordinary least-squares regression suffers from a fundamental lack of symmetry: the regression line of y given x and the regression line of x given y are not inverses of each other. Ordinary least-squares y|x regression minimizes the average square deviation between the data and the line in the y variable, and ordinary least-squares x|y regression minimizes the average square deviation between the data and the line in the x variable. A theory of generalized least-squares was described by this author for minimizing the average of a symmetric function of the square deviations in both the x and y variables [7,8,9]. The symmetric function was referred to as a generating function for the particular regression method.

This paper continues the development of the theory of generalized least-squares, reformulated using the notion of generalized means. The generalized least-squares problem described here seeks a line which minimizes the average generalized mean of the square deviations in x and y. The notion of a generalized mean is equivalent to the generating function concept of the previous papers but allows for a more robust understanding and has an already existing literature.

It is clear from the name that geometric mean regression (GMR) seeks a line which minimizes the average geometric mean of the square deviations in x and y. Orthogonal regression seeks a line which minimizes the average harmonic mean of the square deviations in x and y; therefore it is also called harmonic mean regression (HMR). Arithmetic mean regression (AMR) seeks a line which minimizes the average arithmetic mean of the square deviations in x and y and was called Pythagorean regression previously. Here, logarithmic, Heronian, centroidal, identric, Lorentz, and root mean square regressions are described for the first time. Ordinary least-squares regression is shown here to be equivalent to minimum or maximum mean regression. Regressions based on weighted arithmetic means and weighted geometric means of a given order are explored; their weights and orders parameterize all generalized mean square regression lines lying between the two ordinary least-squares lines.

Power mean regression of order p offers a particularly simple framework for parameterizing all the generalized mean square regressions previously described. The p-scale has fixed numerical values corresponding to many known special means. All the symmetric regressions discussed in the previous papers are power mean regressions for some value of p. Ordinary least-squares corresponds to p = ±∞. The power mean is one example of a generalized mean whose free parameter unites a variety of special means as subcases. Other generalized means which do the same include the Dietel-Gordon mean of order r, Stolarsky's mean of order s, and Gini's mean of order t. There are also two-parameter means due to Stolarsky and Gini. Regression formulas based on all these generalized means are worked out here for the first time.

II. REGRESSIONS BASED ON GENERALIZED MEANS

Generalized means are applied to the task of constructing more examples, simplifying the theory, and further classifying generalized least-squares regressions.

A. Axioms of a Generalized Mean

The axioms of a generalized mean presented here are drawn from Mays [13] and also from Chen [2].

Definition 1: A function M(x, y) defines a generalized mean for x, y > 0 if it satisfies Properties 1-5 below. If it also satisfies Property 6 it is called a homogeneous generalized mean. The properties are:

1. (Continuity) M(x, y) is continuous in each variable.
2. (Monotonicity) M(x, y) is non-decreasing in each variable.
3. (Symmetry) M(x, y) = M(y, x).
4. (Identity) M(x, x) = x.
5. (Intermediacy) min(x, y) ≤ M(x, y) ≤ max(x, y).
6. (Homogeneity) M(tx, ty) = tM(x, y) for all t > 0.

All known means are included in this definition. All the means discussed in this paper are homogeneous. The reader can verify that the weighted arithmetic mean, or convex combination, of any two generalized means is a generalized mean, and that the weighted geometric mean of any two generalized means is a generalized mean. More generally, the generalized mean of any two generalized means is itself a generalized mean.

The equivalence of generalized means and generating functions is now demonstrated.

Theorem 1: Let M(x, y) be any generalized mean; then φ(x, y) = M(x², y²) is the generating function for a corresponding generalized symmetric least-squares regression. Conversely, let φ(x, y) be any generating function; then M(x, y) = φ(√|x|, √|y|) defines a generalized mean. The weight function is given by g(b) = φ(1, 1/b) = M(1, 1/b²).

From here it is clear that the theory of generalized least-squares can be reformulated using generalized means. The general symmetric least-squares problem is restated as follows.

Definition 2: (The General Symmetric Least-Squares Problem) Values of a and b are sought which minimize an error function defined by

E = (1/N) Σ_{i=1}^{N} M( (a + bx_i − y_i)², (a/b + x_i − y_i/b)² )   (1)

where M(x, y) is any generalized mean. The solution to this problem is called generalized mean square regression.

Definition 3: (The General Weighted Ordinary Least-Squares Problem) Values of a and b are sought which minimize an error function defined by

E = g(b) (1/N) Σ_{i=1}^{N} (a + bx_i − y_i)²   (2)

or, using Ehrenberg's formula,

E = g(b) [ (1 − ρ²)σ_y² + (bσ_x − ρσ_y)² + (a + bx̄ − ȳ)² ]   (3)

where g(b) is a positive even function that is non-decreasing for b < 0 and non-increasing for b > 0. Here x̄, ȳ, σ_x, σ_y, and ρ denote the sample means, the sample standard deviations, and the correlation coefficient of the data.

The next theorem states that every generalized mean square regression problem is equivalent to a weighted ordinary least-squares problem with weight function g(b).

Theorem 2: The general symmetric least-squares error function can be written equivalently as

E = g(b) (1/N) Σ_{i=1}^{N} (a + bx_i − y_i)²   (4)

or, using Ehrenberg's formula,

E = g(b) [ (1 − ρ²)σ_y² + (bσ_x − ρσ_y)² + (a + bx̄ − ȳ)² ]   (5)

where

g(b) = M(1, 1/b²).   (6)

Proof: Substitute a/b + x_i − y_i/b = (1/b)(a + bx_i − y_i) and then use the homogeneity property:

E = (1/N) Σ_{i=1}^{N} M( (a + bx_i − y_i)², (1/b²)(a + bx_i − y_i)² )
  = (1/N) Σ_{i=1}^{N} (a + bx_i − y_i)² M(1, 1/b²).

Define g(b) = M(1, 1/b²), factor g(b) outside of the summation, and replace the remaining sum using Ehrenberg's formula. ∎

The theory now continues unchanged, with all the same theorems and formulas involving the weight function g(b). It is reviewed now. To find the regression coefficients a and b, take first-order partial derivatives of the error function, E_a and E_b, and set them equal to zero. The result is a = ȳ − bx̄ and the First Discrepancy Formula:

b − ρ(σ_y/σ_x) = −(1/2) (g′(b)/g(b)) (σ_y²/σ_x²) [ (b(σ_x/σ_y) − ρ)² + 1 − ρ² ].   (7)

To derive the slope equation for any generalized regression of interest, begin with the First Discrepancy Formula, substitute the specific expression for g(b) into the formula, simplify, and reset the equation equal to zero. What emerges is the specific slope equation. This is the procedure employed for all the slope equations presented in this paper.

Solving this equation for the discrepancy bσ_x − ρσ_y using the quadratic formula yields the Second Discrepancy Formula:

bσ_x − ρσ_y = (g(b)σ_x/g′(b)) [ −1 ± √( 1 − (1 − ρ²)(σ_y²/σ_x²)(g′(b)/g(b))² ) ].   (8)

In order for the y-intercept a and slope b to minimize the error function, the Hessian determinant E_aa E_bb − (E_ab)² must be positive. Calculation of this determinant in the general case yields a function

G(b) = 2g′(b)/g(b) − g″(b)/g′(b)   (9)

called the indicative function. The Hessian determinant is positive provided that G(b)(b − ρσ_y/σ_x) > −1. This differential equation for the weight function is solved to yield

g(b) = 1 / ( c + k ∫ exp( −∫ G(b) db ) db ).   (10)
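As a concrete illustration of Definition 1 and of the p-scale from the Overview, the sketch below (illustrative Python, not from the paper; the function name is invented) implements the two-variable power mean, whose parameter p recovers the harmonic (p = −1), geometric (p → 0), arithmetic (p = 1), and minimum/maximum (p = ∓∞) means, and spot-checks the axioms numerically at a sample point.

```python
import math

def power_mean(x, y, p):
    """Two-variable power mean M_p(x, y) = ((x^p + y^p)/2)^(1/p).
    Limiting cases: p -> 0 gives the geometric mean, p -> +/-inf the max/min."""
    if p == 0:
        return math.sqrt(x * y)                      # geometric mean
    if math.isinf(p):
        return max(x, y) if p > 0 else min(x, y)     # max / min mean
    return ((x**p + y**p) / 2.0) ** (1.0 / p)

# Spot-check Properties 3-6 of Definition 1 at a sample point.
x, y, t = 4.0, 9.0, 3.0
for p in (-math.inf, -1.0, 0.0, 1.0, 2.0, math.inf):
    m = power_mean(x, y, p)
    assert abs(power_mean(y, x, p) - m) < 1e-12          # symmetry
    assert min(x, y) - 1e-12 <= m <= max(x, y) + 1e-12   # intermediacy
    assert abs(power_mean(x, x, p) - x) < 1e-12          # identity
    assert abs(power_mean(t * x, t * y, p) - t * m) < 1e-9  # homogeneity

print(power_mean(4.0, 9.0, 0.0))   # geometric mean: 6.0
print(power_mean(4.0, 9.0, -1.0))  # harmonic mean: 72/13 ~ 5.538
```

Monotonicity and continuity would need a finer sweep to check; the point is only that a single parameter p walks through the familiar special means.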
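Theorem 2's reduction to a weighted ordinary least-squares problem can be spot-checked numerically. The sketch below (illustrative code, not from the paper; the data and helper names are invented) evaluates the direct error (1) and the weighted form (4) with g(b) = M(1, 1/b²) for the harmonic mean; by homogeneity the two agree up to floating-point rounding.

```python
def harmonic(x, y):
    """Harmonic mean, the homogeneous generalized mean behind orthogonal regression."""
    return 2.0 * x * y / (x + y)

def direct_error(a, b, xs, ys, M):
    """Error (1): average generalized mean of the square deviations in y and x."""
    total = 0.0
    for xi, yi in zip(xs, ys):
        dy2 = (a + b * xi - yi) ** 2         # square deviation in y
        dx2 = (a / b + xi - yi / b) ** 2     # square deviation in x
        total += M(dy2, dx2)
    return total / len(xs)

def weighted_error(a, b, xs, ys, M):
    """Error (4): ordinary least-squares error weighted by g(b) = M(1, 1/b^2)."""
    g = M(1.0, 1.0 / b**2)
    mse = sum((a + b * xi - yi) ** 2 for xi, yi in zip(xs, ys)) / len(xs)
    return g * mse

# Invented sample data; any slope b != 0 with no exactly-zero residual works here.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [1.2, 1.9, 3.4, 3.9]
a, b = 0.25, 0.9
print(abs(direct_error(a, b, xs, ys, harmonic)
          - weighted_error(a, b, xs, ys, harmonic)) < 1e-9)  # True
```

Swapping in any other homogeneous mean for `harmonic` checks the same identity for that regression.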
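The weighted formulation also gives a direct numerical route to a generalized mean square regression line: eliminate the intercept via a = ȳ − bx̄ and minimize E(b) = g(b)·MSE(b) in the single variable b. The sketch below is an assumption-laden illustration, not the paper's procedure: it uses a ternary search on invented sample data chosen so that E(b) is unimodal on the search interval, for harmonic mean regression, where g(b) = 2/(1 + b²), and compares the result with the classical closed-form orthogonal-regression slope.

```python
import math

def weight_g(b):
    """Weight function g(b) = M(1, 1/b^2) for the harmonic mean: 2/(1 + b^2)."""
    return 2.0 / (1.0 + b * b)

def error_in_b(b, xs, ys):
    """E(b) = g(b) * MSE(b) after eliminating the intercept, a = ybar - b*xbar."""
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    a = ybar - b * xbar
    mse = sum((a + b * x - y) ** 2 for x, y in zip(xs, ys)) / n
    return weight_g(b) * mse

def minimize_slope(xs, ys, lo=0.01, hi=10.0, iters=200):
    """Ternary search; assumes E(b) is unimodal on [lo, hi], true for this data."""
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if error_in_b(m1, xs, ys) < error_in_b(m2, xs, ys):
            hi = m2
        else:
            lo = m1
    return 0.5 * (lo + hi)

# Invented sample data with a strong positive trend.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [1.1, 2.3, 2.8, 4.2, 4.9]
b_hat = minimize_slope(xs, ys)

# Classical closed-form orthogonal-regression slope, for comparison.
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
sxx = sum((x - xbar) ** 2 for x in xs) / n
syy = sum((y - ybar) ** 2 for y in ys) / n
sxy = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / n
b_closed = (syy - sxx + math.sqrt((syy - sxx) ** 2 + 4.0 * sxy * sxy)) / (2.0 * sxy)

print(b_hat, b_closed)  # both ~ 0.9574
```

For a specific method one would instead use the slope equation obtained from the First Discrepancy Formula (7); the numeric minimization is just a cross-check that both routes land on the same line.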