A Directed Graph Fourier Transform with Spread Frequency Components
Total pages: 16
File type: PDF, size: 1020 KB
Recommended publications
-
Fast Spectral Approximation of Structured Graphs with Applications to Graph Filtering
Algorithms (journal article)
Fast Spectral Approximation of Structured Graphs with Applications to Graph Filtering
Mario Coutino 1,*, Sundeep Prabhakar Chepuri 2, Takanori Maehara 3 and Geert Leus 1
1 Department of Microelectronics, Faculty of Electrical Engineering, Mathematics and Computer Science, Delft University of Technology, 2628 CD Delft, The Netherlands; [email protected]
2 Department of Electrical and Communication Engineering, Indian Institute of Science, Bangalore 560 012, India; [email protected]
3 RIKEN Center for Advanced Intelligence Project, Tokyo 103-0027, Japan; [email protected]
* Correspondence: [email protected]
Received: 5 July 2020; Accepted: 27 August 2020; Published: 31 August 2020
Abstract: To analyze and synthesize signals on networks or graphs, Fourier theory has been extended to irregular domains, leading to a so-called graph Fourier transform. Unfortunately, unlike the traditional Fourier transform, each graph exhibits its own graph Fourier transform. Therefore, to analyze the graph-frequency-domain properties of a graph signal, the graph Fourier modes and graph frequencies must be computed for the graph under study. Although finding these graph frequencies and modes requires a computationally expensive, or even prohibitive, eigendecomposition of the graph, there exist families of graphs with properties that can be exploited for an approximate fast graph spectrum computation. In this work, we aim to identify these families and to provide a divide-and-conquer approach for computing an approximate spectral decomposition of the graph. Using the same decomposition, results on reducing the complexity of graph filtering are derived. These results provide an attempt to leverage the underlying topological properties of graphs in order to devise general computational models for graph signal processing.
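To make the object being approximated concrete, here is a minimal sketch of the exact graph Fourier transform this paper accelerates: the eigendecomposition of the graph Laplacian yields the graph frequencies and Fourier modes. The small graph and signal are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Adjacency matrix of a small undirected example graph (a 4-cycle).
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)

# Combinatorial Laplacian L = D - A.
L = np.diag(A.sum(axis=1)) - A

# Eigendecomposition: eigenvalues act as graph frequencies, eigenvectors
# as graph Fourier modes. This O(n^3) step is exactly the cost that a
# divide-and-conquer approximation tries to avoid on large graphs.
freqs, modes = np.linalg.eigh(L)

x = np.array([1.0, 2.0, 3.0, 4.0])  # a graph signal, one value per node
x_hat = modes.T @ x                 # forward graph Fourier transform
x_rec = modes @ x_hat               # inverse transform recovers the signal
assert np.allclose(x, x_rec)
```
-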
Hybrid Whale Optimization Algorithm with Modified Conjugate Gradient Method to Solve Global Optimization Problems
Open Access Library Journal 2020, Volume 7, e6459
ISSN Online: 2333-9721; ISSN Print: 2333-9705
Hybrid Whale Optimization Algorithm with Modified Conjugate Gradient Method to Solve Global Optimization Problems
Layth Riyadh Khaleel, Ban Ahmed Mitras
Department of Mathematics, College of Computer Sciences & Mathematics, Mosul University, Mosul, Iraq
How to cite this paper: Khaleel, L.R. and Mitras, B.A. (2020) Hybrid Whale Optimization Algorithm with Modified Conjugate Gradient Method to Solve Global Optimization Problems. Open Access Library Journal, 7: e6459. https://doi.org/10.4236/oalib.1106459
Received: May 25, 2020; Accepted: June 27, 2020; Published: June 30, 2020
Copyright © 2020 by author(s) and Open Access Library Inc. This work is licensed under the Creative Commons Attribution International License (CC BY 4.0). http://creativecommons.org/licenses/by/4.0/
Abstract: The Whale Optimization Algorithm (WOA) is a recent meta-heuristic algorithm that simulates the behavior of humpback whales in their search for food and migration. In this paper, a modified conjugate gradient algorithm is proposed by deriving a new conjugate coefficient, and the sufficient descent and global convergence properties of the proposed algorithm are proved. A novel hybrid of WOA with the modified conjugate gradient algorithm is then proposed: the randomly generated elementary population that serves as the primary population of the whale optimization algorithm is developed using the characteristics of the modified conjugate gradient algorithm. The efficiency of the hybrid algorithm was measured by applying it to 10 high-dimensional benchmark optimization functions with different dimensions, and its results compared very favorably with those of the original algorithm.
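A minimal sketch of a nonlinear conjugate gradient iteration of the kind being modified here, shown with the classical Fletcher-Reeves coefficient since the paper's new coefficient is not reproduced in this excerpt; the test function and line search are illustrative assumptions.

```python
import numpy as np

def nonlinear_cg(f, grad, x0, iters=5000, tol=1e-8):
    """Nonlinear CG with the classical Fletcher-Reeves coefficient and a
    simple backtracking (Armijo) line search; the paper derives its own
    conjugate coefficient, which this sketch does not reproduce."""
    x = x0.astype(float)
    g = grad(x)
    d = -g
    for _ in range(iters):
        if np.linalg.norm(g) < tol:
            break
        if g @ d >= 0:               # safeguard: restart along steepest descent
            d = -g
        t = 1.0                      # backtrack until sufficient decrease holds
        while f(x + t * d) > f(x) + 1e-4 * t * (g @ d):
            t *= 0.5
        x = x + t * d
        g_new = grad(x)
        beta = (g_new @ g_new) / (g @ g)   # Fletcher-Reeves coefficient
        d = -g_new + beta * d
        g = g_new
    return x

# Rosenbrock function, a standard optimization benchmark.
f = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
grad = lambda x: np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
                           200 * (x[1] - x[0]**2)])
print(nonlinear_cg(f, grad, np.array([-1.2, 1.0])))  # should approach (1, 1)
```
-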
Adaptive Graph Convolutional Neural Networks
The Thirty-Second AAAI Conference on Artificial Intelligence (AAAI-18)
Adaptive Graph Convolutional Neural Networks
Ruoyu Li, Sheng Wang, Feiyun Zhu, Junzhou Huang*
The University of Texas at Arlington, Arlington, TX 76019, USA; Tencent AI Lab, Shenzhen, 518057, China
Abstract: Graph Convolutional Neural Networks (Graph CNNs) are generalizations of classical CNNs to handle graph data such as molecular data, point clouds and social networks. Current filters in graph CNNs are built for a fixed and shared graph structure. However, for most real data, the graph structure varies in both size and connectivity. This paper proposes a generalized and flexible graph CNN taking data of arbitrary graph structure as input. In that way a task-driven adaptive graph is learned for each graph data while training. To efficiently learn the graph, a distance metric learning is proposed. Extensive experiments on nine graph-structured datasets have demonstrated the superior performance improvement on both convergence speed and predictive accuracy.
Figure 1: Example of graph-structured data: organic compound 3,4-Lutidine (C7H9N) and its graph structure.
Introduction: Although Convolutional Neural Networks (CNNs) have been proven supremely successful on a wide range of machine learning problems (Hinton et al. ... In some cases, e.g. classification of point clouds, the topological structure of the graph is more informative than the vertex features. Unfortunately, the existing graph convolution cannot thoroughly exploit the geometric property on the graph due to the difficulty of designing a parameterized spatial kernel that matches a varying number of neighbors (Shuman et al. 2013). Besides, considering the flexibility of graphs and the scale of
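A minimal sketch of the kind of distance-metric-based adaptive adjacency suggested by the abstract: a Gaussian kernel over a learned Mahalanobis-style distance. The functional form, names, and shapes are illustrative assumptions in the spirit of the paper, not its exact construction.

```python
import numpy as np

def adaptive_adjacency(X, W):
    """Dense adjacency A_ij = exp(-||(x_i - x_j) W||^2) built from a
    Mahalanobis-style metric. W is the learnable projection a task loss
    would train end-to-end; here it is just a random stand-in."""
    Z = X @ W                                             # project features
    d2 = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1)   # pairwise sq. dist.
    return np.exp(-d2)

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 8))   # 5 nodes with 8-dimensional features
W = rng.standard_normal((8, 4))   # stand-in for a trained metric projection
A = adaptive_adjacency(X, W)      # (5, 5) task-adaptive adjacency matrix
```
-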
Convolutional Neural Networks on Graphs with Fast Localized Spectral Filtering
Convolutional Neural Networks on Graphs with Fast Localized Spectral Filtering
Michaël Defferrard, Xavier Bresson, Pierre Vandergheynst
EPFL, Lausanne, Switzerland; {michael.defferrard,xavier.bresson,pierre.vandergheynst}@epfl.ch
Abstract: In this work, we are interested in generalizing convolutional neural networks (CNNs) from low-dimensional regular grids, where image, video and speech are represented, to high-dimensional irregular domains, such as social networks, brain connectomes or word embeddings, represented by graphs. We present a formulation of CNNs in the context of spectral graph theory, which provides the necessary mathematical background and efficient numerical schemes to design fast localized convolutional filters on graphs. Importantly, the proposed technique offers the same linear computational complexity and constant learning complexity as classical CNNs, while being universal to any graph structure. Experiments on MNIST and 20NEWS demonstrate the ability of this novel deep learning system to learn local, stationary, and compositional features on graphs.
1 Introduction: Convolutional neural networks [19] offer an efficient architecture to extract highly meaningful statistical patterns in large-scale and high-dimensional datasets. The ability of CNNs to learn local stationary structures and compose them to form multi-scale hierarchical patterns has led to breakthroughs in image, video, and sound recognition tasks [18]. Precisely, CNNs extract the local stationarity property of the input data or signals by revealing local features that are shared across the data domain. These similar features are identified with localized convolutional filters or kernels, which are learned from the data. Convolutional filters are shift- or translation-invariant filters, meaning they are able to recognize identical features independently of their spatial locations.
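The fast localized filters in this paper are Chebyshev polynomial approximations of spectral filters. A minimal sketch of the underlying recurrence, applying a K-term polynomial filter with one matrix-vector product per order and no eigendecomposition (the path graph, coefficients, and signal are illustrative assumptions):

```python
import numpy as np

def cheby_filter(L, x, theta, lmax):
    """Apply y = sum_k theta[k] * T_k(L_tilde) x, with the rescaled
    Laplacian L_tilde = 2L/lmax - I and the Chebyshev recurrence
    T_0 = I, T_1 = L_tilde, T_k = 2*L_tilde*T_{k-1} - T_{k-2}.
    Needs len(theta) >= 2; each order costs one mat-vec product."""
    n = L.shape[0]
    L_t = 2.0 * L / lmax - np.eye(n)   # spectrum rescaled into [-1, 1]
    t_prev, t_curr = x, L_t @ x
    y = theta[0] * t_prev + theta[1] * t_curr
    for k in range(2, len(theta)):
        t_prev, t_curr = t_curr, 2.0 * L_t @ t_curr - t_prev
        y = y + theta[k] * t_curr
    return y

# Path graph on 4 nodes (illustrative).
A = np.diag(np.ones(3), 1)
A = A + A.T
L = np.diag(A.sum(axis=1)) - A
lmax = np.linalg.eigvalsh(L).max()     # in practice estimated, not computed
y = cheby_filter(L, np.array([0.0, 1.0, 0.0, 0.0]), [0.5, 0.3, 0.2], lmax)
print(y)
```
-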
Graph Convolutional Networks Using Heat Kernel for Semi-Supervised
Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence (IJCAI-19)
Graph Convolutional Networks using Heat Kernel for Semi-supervised Learning
Bingbing Xu, Huawei Shen, Qi Cao, Keting Cen and Xueqi Cheng
CAS Key Laboratory of Network Data Science and Technology, Institute of Computing Technology, Chinese Academy of Sciences; School of Computer Science and Technology, University of Chinese Academy of Sciences, Beijing, China; {xubingbing, shenhuawei, caoqi, cenketing, cxq}@ict.ac.cn
Abstract: Graph convolutional networks gain remarkable success in semi-supervised learning on graph-structured data. The key to graph-based semi-supervised learning is capturing the smoothness of labels or features over nodes exerted by graph structure. Previous methods, spectral methods and spatial methods, devote to defining graph convolution as a weighted average over neighboring nodes, and then learn graph convolution kernels to leverage the smoothness to improve the performance of graph-based semi-supervised learning. One open challenge is how to determine appropriate neighborhood that reflects relevant information of smoothness manifested in graph structure.
... the target node by its neighboring nodes. GraphSAGE [Hamilton et al., 2017] defines the weighting function as various aggregators over neighboring nodes. Graph attention network (GAT) proposes learning the weighting function via a self-attention mechanism [Velickovic et al., 2017]. MoNet [Monti et al., 2017] offers us a general framework for designing spatial methods: it takes convolution as a weighted average of multiple weighting functions defined over neighboring nodes. One open challenge for spatial methods is how to determine an appropriate neighborhood for the target node when defining graph convolution. Spectral methods define graph convolution via the convolution theorem. As the pioneering work of spectral methods, Spectral CNN [Bruna et al., 2014] leverages the graph Fourier transform.
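A minimal sketch of graph convolution with a heat kernel filter g(λ) = exp(-sλ), the filter family in this paper's title; the dense eigendecomposition is used for clarity on a toy graph, and the graph and signal are illustrative assumptions.

```python
import numpy as np

def heat_kernel_conv(L, x, s=1.0):
    """Spectral convolution with g(lambda) = exp(-s*lambda): take the
    graph Fourier transform, attenuate high graph frequencies (enforcing
    smoothness over the graph), and transform back."""
    lam, U = np.linalg.eigh(L)          # graph frequencies and modes
    return U @ (np.exp(-s * lam) * (U.T @ x))

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A
x = np.array([1.0, 0.0, 0.0, 0.0])      # a one-hot label signal
print(heat_kernel_conv(L, x, s=0.5))    # label mass diffused to neighbors
```
-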
Lecture Notes on Numerical Optimization
Lecture Notes on Numerical Optimization
Miguel Á. Carreira-Perpiñán, EECS, University of California, Merced. December 30, 2020
These are notes for a one-semester graduate course on numerical optimisation given by Prof. Miguel Á. Carreira-Perpiñán at the University of California, Merced. The notes are largely based on the book "Numerical Optimization" by Jorge Nocedal and Stephen J. Wright (Springer, 2nd ed., 2006), with some additions. These notes may be used for educational, non-commercial purposes. © 2005-2020 Miguel Á. Carreira-Perpiñán
1 Introduction
• Goal: describe the basic concepts & main state-of-the-art algorithms for continuous optimization.
• The optimization problem:
  min_{x ∈ R^n} f(x)  s.t.  c_i(x) = 0, i ∈ E (equality constraints, scalar);  c_i(x) ≥ 0, i ∈ I (inequality constraints, scalar)
  x: variables (vector); f(x): objective function (scalar). Feasible region: set of points satisfying all constraints. max f ≡ −min(−f).
• Ex. (fig. 1.1): min_{x1,x2} (x1 − 2)² + (x2 − 1)²  s.t.  x1² − x2 ≤ 0,  x1 + x2 ≤ 2.
• Ex.: transportation problem (LP):
  min_{x_ij} Σ_{i,j} c_ij x_ij  s.t.  Σ_j x_ij ≤ a_i ∀i (capacity of factory i);  Σ_i x_ij ≥ b_j ∀j (demand of shop j);  x_ij ≥ 0 ∀i,j (nonnegative production)
  where c_ij is the shipping cost and x_ij the amount of product shipped from factory i to shop j.
• Ex.: LSQ problem: fit a parametric model (e.g. line, polynomial, neural net...) to a data set (Ex. 2.1).
• Optimization algorithms are iterative: build a sequence of points that converges to the solution.
• Needs a good initial point (often by prior knowledge).
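As a quick illustration of the Ex. (fig. 1.1) problem above, a minimal sketch solving it numerically with SciPy (not part of the notes; SciPy's `minimize` encodes inequality constraints as fun(x) ≥ 0):

```python
import numpy as np
from scipy.optimize import minimize

# min (x1-2)^2 + (x2-1)^2  s.t.  x1^2 - x2 <= 0,  x1 + x2 <= 2
f = lambda x: (x[0] - 2)**2 + (x[1] - 1)**2
cons = [{'type': 'ineq', 'fun': lambda x: x[1] - x[0]**2},   # x1^2 - x2 <= 0
        {'type': 'ineq', 'fun': lambda x: 2 - x[0] - x[1]}]  # x1 + x2 <= 2
res = minimize(f, x0=np.array([0.0, 0.0]), constraints=cons)
print(res.x)   # ~(1, 1): the feasible point closest to (2, 1)
```
-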
GSPBOX: a Toolbox for Signal Processing on Graphs
GSPBOX: A toolbox for signal processing on graphs
Nathanael Perraudin, Johan Paratte, David Shuman, Lionel Martin, Vassilis Kalofolias, Pierre Vandergheynst and David K. Hammond. March 16, 2016
Abstract: This document introduces the Graph Signal Processing Toolbox (GSPBox), a framework that can be used to tackle graph-related problems with a signal processing approach. It explains the structure and the organization of this software. It also contains a general description of the important modules.
1 Toolbox organization
In this document, we briefly describe the different modules available in the toolbox. For each of them, the main functions are briefly described. This chapter should help making the connection between the theoretical concepts introduced in [7, 9, 6] and the technical documentation provided with the toolbox. We highly recommend reading this document and the tutorial before using the toolbox. The documentation, the tutorials and other resources are available on-line. The toolbox was first implemented in MATLAB, but a port to Python, called the PyGSP, has been made recently. As of the time of writing of this document, not all the functionalities have been ported to Python, but the main modules are already available. In the following, functions prefixed by [M]: refer to the MATLAB implementation and the ones prefixed with [P]: refer to the Python implementation.
1.1 General structure of the toolbox (MATLAB)
The general design of the GSPBox focuses around the graph object [7], a MATLAB structure containing the necessary information to use most of the algorithms. By default, only a few attributes are available (see section 2), allowing only the use of a subset of functions.
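A minimal sketch of the PyGSP port mentioned above, following its documented interface (the graph model, filter, and parameters are illustrative choices; the calls `compute_fourier_basis`, `gft`, and `filters.Heat` come from the PyGSP documentation, though exact signatures may differ across versions):

```python
import numpy as np
from pygsp import graphs, filters

G = graphs.Sensor(N=64, seed=42)    # random sensor-network graph
G.compute_fourier_basis()           # Fourier modes in G.U, frequencies in G.e

rng = np.random.default_rng(0)
x = rng.standard_normal(G.N)        # a random signal on the graph

x_hat = G.gft(x)                    # graph Fourier transform of the signal
g = filters.Heat(G, tau=10)         # low-pass heat kernel filter
y = g.filter(x)                     # filtered (smoothed) signal
```
-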
Optimization and Approximation
Optimization and Approximation
Elisa Riccietti (thanks to Stefania Bellavia, University of Florence; reference book: Numerical Optimization, Nocedal and Wright, Springer)
Contents
I. Nonlinear optimization
  1 Prerequisites: 1.1 Necessary and sufficient conditions; 1.2 Convex functions; 1.3 Quadratic functions
  2 Iterative methods: 2.1 Directions for line-search methods (2.1.1 Direction of steepest descent; 2.1.2 Newton's direction; 2.1.3 Quasi-Newton directions); 2.2 Rates of convergence; 2.3 Steepest descent method for quadratic functions; 2.4 Convergence of Newton's method
  3 Line-search methods: 3.1 Armijo and Wolfe conditions; 3.2 Convergence of line-search methods; 3.3 Backtracking; 3.4 Newton's method
  4 Quasi-Newton method: 4.1 BFGS method; 4.2 Global convergence of the BFGS method
  5 Nonlinear least-squares problems: 5.1 Background: modelling, regression; 5.2 General concepts; 5.3 Linear least-squares problems; 5.4 Algorithms for nonlinear least-squares problems (5.4.1 Gauss-Newton method); 5.5 Levenberg-Marquardt method
  6 Constrained optimization: 6.1 One equality constraint; 6.2 One inequality constraint; 6.3 First order optimality conditions; 6.4 Second order optimality conditions
  7 Optimization methods for Machine Learning
II. Linear and integer programming
  8 Linear programming: 8.1 How to rewrite an LP in standard form ...
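Since these notes center on line searches and quasi-Newton updates, here is a minimal sketch combining both: a BFGS iteration (section 4.1) with an Armijo backtracking line search (sections 3.1-3.3). The test problem and constants are illustrative assumptions.

```python
import numpy as np

def bfgs_update(H, s, y):
    """BFGS update of the inverse Hessian approximation:
    H+ = (I - rho*s*y^T) H (I - rho*y*s^T) + rho*s*s^T,
    with s = x_{k+1} - x_k, y = g_{k+1} - g_k, rho = 1/(y^T s)."""
    rho = 1.0 / (y @ s)
    I = np.eye(len(s))
    V = I - rho * np.outer(s, y)
    return V @ H @ V.T + rho * np.outer(s, s)

def bfgs(f, grad, x0, iters=100, tol=1e-8):
    x, H = x0.astype(float), np.eye(len(x0))
    g = grad(x)
    for _ in range(iters):
        if np.linalg.norm(g) < tol:
            break
        d = -H @ g                                       # quasi-Newton direction
        t = 1.0
        while f(x + t * d) > f(x) + 1e-4 * t * (g @ d):  # Armijo condition
            t *= 0.5                                     # backtracking
        s = t * d
        g_new = grad(x + s)
        y = g_new - g
        if y @ s > 1e-12:                                # curvature safeguard
            H = bfgs_update(H, s, y)
        x, g = x + s, g_new
    return x

# Convex quadratic test problem: f(x) = 0.5 x^T A x - b^T x.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
f = lambda x: 0.5 * x @ A @ x - b @ x
grad = lambda x: A @ x - b
print(bfgs(f, grad, np.zeros(2)))    # matches np.linalg.solve(A, b)
```
-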
Exploiting the Intrinsic Structures of Simultaneous Localization and Mapping
Exploiting the Intrinsic Structures of Simultaneous Localization and Mapping
by Kasra Khosoussi
Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy at the Centre for Autonomous Systems, Faculty of Engineering and Information Technology, University of Technology Sydney, March 2017
Declaration: I certify that the work in this thesis has not previously been submitted for a degree nor has it been submitted as part of requirements for a degree except as fully acknowledged within the text. I also certify that the thesis has been written by me. Any help that I have received in my research work and the preparation of the thesis itself has been acknowledged. In addition, I certify that all information sources and literature used are indicated in the thesis.
Abstract: Imagine a robot which is assigned a complex task that requires it to navigate in an unknown environment. This situation frequently arises in a wide spectrum of applications; e.g., surgical robots used for diagnosis and treatment operate inside the body, domestic robots have to operate in people's houses, and Mars rovers explore Mars. To navigate safely in such environments, the robot needs to constantly estimate its location and, simultaneously, create a consistent representation of the environment by, e.g., building a map. This problem is known as simultaneous localization and mapping (SLAM). Over the past three decades, SLAM has always been a central topic in mobile robotics. Tremendous progress has been made during these years in efficiently solving SLAM using a variety of sensors in large-scale environments.
-
Linearly Convergent Nonlinear Conjugate Gradient Methods for a Parameter Identification Problems
LINEARLY CONVERGENT NONLINEAR CONJUGATE GRADIENT METHODS FOR A PARAMETER IDENTIFICATION PROBLEMS
MOHAMED KAMEL RIAHI AND ISSAM AL QATTAN
arXiv:1806.10197v1 [math.NA] 26 Jun 2018
Abstract: This paper presents a general description of a parameter estimation inverse problem for systems governed by nonlinear differential equations. The inverse problem is presented using optimal control tools with state constraints, where the minimization process is based on first-order optimization techniques such as an adaptive monotony-backtracking steepest descent technique and nonlinear conjugate gradient methods satisfying strong Wolfe conditions. Global convergence theory for both methods is rigorously established, and new linear convergence rates are reported. Indeed, for nonlinear non-convex optimization we show that, under the Lipschitz-continuity of the gradient of the objective function, we have a linear convergence rate toward a stationary point. Furthermore, the nonlinear conjugate gradient method is also shown to be linearly convergent toward stationary points where the second derivative of the objective function is bounded. The convergence analysis in this work is established in a general nonlinear non-convex constrained optimization framework, where the considered time-dependent model could be either a system of coupled ordinary differential equations or partial differential equations. Numerical evidence on a selection of popular nonlinear models is presented to support the theoretical results.
Keywords: nonlinear conjugate gradient methods; nonlinear optimal control; convergence analysis; dynamical systems; parameter estimation; inverse problem
1. Introduction: Linear and nonlinear dynamical systems are popular approaches to model the behavior of complex systems such as neural networks, biological systems and physical phenomena. These models often need to be tailored to experimental data through optimization of parameters involved in the mathematical formulations.
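For reference, the strong Wolfe conditions invoked above are standard: for constants 0 < c1 < c2 < 1 and a descent direction d_k, the step length α_k must satisfy the following (a textbook statement, not quoted from this paper):

```latex
% Strong Wolfe conditions for step length \alpha_k along descent direction d_k,
% with constants 0 < c_1 < c_2 < 1 (typical: c_1 = 10^{-4}, c_2 = 0.1 for CG):
f(x_k + \alpha_k d_k) \le f(x_k) + c_1\,\alpha_k\,\nabla f(x_k)^{\top} d_k
                                                      % sufficient decrease
\lvert \nabla f(x_k + \alpha_k d_k)^{\top} d_k \rvert
  \le c_2\,\lvert \nabla f(x_k)^{\top} d_k \rvert     % curvature
```
-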
Multi-dimensional Graph Fourier Transform
Takashi Kurokawa, Non-Member, IEEE, Taihei Oki, Non-Member, IEEE, and Hiromichi Nagao, Non-Member, IEEE
Abstract: Many signals on Cartesian product graphs appear in the real world, such as digital images, sensor observation time series, and movie ratings on Netflix. These signals are "multi-dimensional" and have directional characteristics along each factor graph. However, the existing graph Fourier transform does not distinguish these directions, and assigns 1-D spectra to signals on product graphs. Further, these spectra are often multi-valued at some frequencies. Our main result is a multi-dimensional graph Fourier transform that solves such problems associated with the conventional GFT. Using algebraic properties of Cartesian products, the proposed transform rearranges the 1-D spectra obtained by the conventional GFT into the multi-dimensional frequency domain, of which each dimension represents a directional frequency along each factor graph. Thus, the multi-dimensional graph Fourier transform enables directional frequency analysis, in addition to frequency analysis with the conventional GFT. Moreover, this rearrangement resolves the multi-valuedness of spectra in some cases. The multi-dimensional graph Fourier transform is a foundation of novel filterings and stationarities that utilize dimensional information of graph signals, which are also discussed in this study. The proposed methods are applicable to a wide variety of data that can be regarded as signals on Cartesian product graphs. This study also notes that multivariate graph signals can be regarded as 2-D univariate graph signals.
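A minimal sketch of the algebraic fact this construction exploits: the Laplacian of a Cartesian product graph decomposes as L = L1 ⊗ I + I ⊗ L2, so its Fourier modes are Kronecker products of the factors' modes, and each spectral entry can be indexed by a pair of directional frequencies. The factor graphs and signal below are illustrative assumptions.

```python
import numpy as np

def path_laplacian(n):
    """Combinatorial Laplacian of a path graph on n nodes."""
    A = np.diag(np.ones(n - 1), 1)
    A = A + A.T
    return np.diag(A.sum(axis=1)) - A

def gft_2d(L1, L2, X):
    """2-D GFT of a signal X (n1 x n2) on the Cartesian product of two
    factor graphs: transform along each factor direction separately.
    Entry (i, j) of the spectrum pairs directional frequency e1[i] of
    factor 1 with e2[j] of factor 2, instead of one flattened 1-D index."""
    e1, U1 = np.linalg.eigh(L1)
    e2, U2 = np.linalg.eigh(L2)
    return U1.T @ X @ U2, e1, e2

L1, L2 = path_laplacian(3), path_laplacian(4)
X = np.random.default_rng(0).standard_normal((3, 4))  # signal on a 3x4 grid
X_hat, e1, e2 = gft_2d(L1, L2, X)
# The product-graph frequency of spectral entry (i, j) is e1[i] + e2[j].
```
-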
Unconstrained Optimization
Chapter 4: Unconstrained optimization
An unconstrained optimization problem takes the form
  min_{x ∈ R^n} f(x)    (4.1)
for a target functional (also called objective function) f : R^n → R. In this chapter, and throughout most of our discussion on optimization, we will assume that f is sufficiently smooth, that is, at least continuously differentiable. In most applications of optimization problems, one is usually interested in a global minimizer x∗, which satisfies f(x∗) ≤ f(x) for all x in R^n (or at least for all x in the domain of interest). Unless f is particularly nice, optimization algorithms are often not guaranteed to yield global minima but only yield local minima. A point x∗ is called a local minimizer if there is a neighborhood N such that f(x∗) ≤ f(x) for all x ∈ N. Similarly, x∗ is called a strict local minimizer if f(x∗) < f(x) for all x ∈ N with x ≠ x∗.
4.1 Fundamentals
Sufficient and necessary conditions for local minimizers can be developed from the Taylor expansion of f. Let us recall Example 2.3: if f is twice continuously differentiable, then
  f(x + h) = f(x) + ∇f(x)^T h + (1/2) h^T H(x) h + O(‖h‖³),    (4.2)
where ∇f is the gradient and H = (∂²f/∂x_i ∂x_j)_{i,j=1}^n is the Hessian [matrice Hessienne] of f.
Theorem 4.1 (First-order necessary condition). If x∗ is a local minimizer and f is continuously differentiable in an open neighborhood of x∗, then ∇f(x∗) = 0.
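A small numerical illustration of Theorem 4.1, checking with central differences that the gradient vanishes at a known minimizer (the toy function is an illustrative assumption, not from the notes):

```python
import numpy as np

# f(x) = (x1 - 1)^2 + 2*(x2 + 0.5)^2 has the strict global minimizer
# x* = (1, -0.5); by Theorem 4.1 the gradient must vanish there.
f = lambda x: (x[0] - 1)**2 + 2 * (x[1] + 0.5)**2

def num_grad(f, x, h=1e-6):
    """Central-difference approximation of the gradient of f at x."""
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

x_star = np.array([1.0, -0.5])
print(num_grad(f, x_star))   # ~[0, 0], as Theorem 4.1 requires
```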