Geometrical Aspects of Statistical Learning Theory

Total pages: 16

File type: PDF, size: 1020 KB

Geometrical Aspects of Statistical Learning Theory

Dissertation approved by the Department of Computer Science of the Technische Universität Darmstadt in fulfillment of the requirements for the degree Doctor rerum naturalium (Dr. rer. nat.), submitted by Dipl.-Phys. Matthias Hein from Esslingen am Neckar. Examination committee: Chair: Prof. Dr. B. Schiele; First referee: Prof. Dr. T. Hofmann; Second referee: Prof. Dr. B. Schölkopf. Date of submission: 30 September 2005; date of defense: 9 November 2005. Darmstadt, 2005. Hochschulkennziffer: D17.

Abstract

Geometry plays an important role in modern statistical learning theory, and many different aspects of geometry can be found in this fast-developing field. This thesis addresses some of these aspects. A large part of this work is concerned with so-called manifold methods, which have recently attracted a lot of interest. The key point is that for many real-world data sets it is natural to assume that the data lies on a low-dimensional submanifold of a potentially high-dimensional Euclidean space. We develop a rigorous and quite general framework for the estimation and approximation of some geometric structures and other quantities of this submanifold, using corresponding structures on neighborhood graphs built from random samples of that submanifold. Another part of this thesis deals with the generalization of the maximal margin principle to arbitrary metric spaces. This generalization follows quite naturally from a change of viewpoint on the well-known support vector machine (SVM): the SVM can be seen as an algorithm which applies the maximum margin principle to a subclass of metric spaces. The motivation to consider the generalization to arbitrary metric spaces arose from the observation that in practice the condition for the applicability of the SVM is rather difficult to check for a given metric, yet one would like to apply the successful maximum margin principle even in cases where the SVM cannot be applied. The last part deals with the specific construction of so-called Hilbertian metrics and positive definite kernels on probability measures. We consider several ways of building such metrics and kernels, with emphasis on the incorporation of different desired properties into the metric and kernel. Such metrics and kernels have a wide applicability in so-called kernel methods, since probability measures occur as inputs in various situations.

Zusammenfassung (Summary)

Geometry plays an important role in modern statistical learning theory, and many aspects of geometry can be found in this fast-developing field. This dissertation addresses some of these aspects. A large part of this work deals with so-called manifold methods. The main motivation is that, for data sets arising in applications, it is in many cases a valid assumption that the data lie on a low-dimensional submanifold of a potentially high-dimensional Euclidean space. This work develops a mathematically rigorous and general framework for the estimation and approximation of geometric structures and other quantities of the submanifold. To this end, corresponding structures on a neighborhood graph built from a sample of points of the submanifold are used. A further part of this dissertation treats the generalization of the so-called maximum-margin principle to general metric spaces. Through a new view of the so-called support vector machine (SVM), this generalization follows in a natural way. It is shown that the SVM can be seen as an algorithm that applies the maximum-margin principle to a subclass of metric spaces. The motivation for this generalization arose from the problem, frequently encountered in practice, that the conditions for the use of a particular metric in the SVM are hard to verify. Nevertheless, one would like to use the successful maximum-margin principle even in cases where the SVM cannot be applied. The final part of this work is concerned with the specific construction of so-called Hilbertian metrics and positive definite kernels on probability measures. Several ways of constructing such metrics and kernels are investigated. The emphasis lies on the incorporation of various desired properties into the metric or kernel. Such metrics and kernels have a wide range of applications in so-called kernel methods, since probability measures occur as inputs in the most diverse situations.

Academic Career of the Author (Wissenschaftlicher Werdegang des Verfassers)

10/1996–02/2002: Studies of physics with a minor in mathematics at the Universität Tübingen.
02/2002: Diploma in physics. Diploma thesis: Numerical simulation of axisymmetric, isolated systems in general relativity. Advisor: PD Dr. J. Frauendiener.
06/2002–11/2005: Research scientist at the Max Planck Institute for Biological Cybernetics in Tübingen, in the department of Prof. Dr. Bernhard Schölkopf.

Declaration (Erklärung)

I hereby declare that I have written this thesis independently, except for the aids explicitly acknowledged in it.

Acknowledgements

First of all I would like to thank Bernhard Schölkopf for giving me the possibility to do my doctoral thesis in an excellent research environment. He gave me the freedom to pursue my own lines of research while always providing ideas on how to progress. I also very much appreciated his advice and support in times when it was needed. I am especially thankful to Olivier Bousquet for guiding me into the world of learning theory. In our long discussions we usually roamed through all sorts of topics, ranging from pure mathematics to machine learning to theoretical physics. This was very inspiring and raised my interest in several branches of mathematics. He always had time for questions and was a constant source of ideas for me. I want to thank Thomas Hofmann for giving me the opportunity to do my thesis at the TU Darmstadt; I am very thankful for his support in these last steps towards the thesis. A special thanks goes to Olaf Wittich for reading parts of the second chapter and for giving helpful comments which improved the clarity of this part. During these three years I had the pleasure to work or discuss with several other nice people. They all influenced the way I think about learning theory.
I thank all of them for their time and help: Jean-Yves Audibert, Goekhan Bakır, Stephane Boucheron, Olivier Chapelle, Jan Eichhorn, André Elisseeff, Matthias Franz, Arthur Gretton, Jeremy Hill, Kwang-In Kim, Malte Kuss, Matti Kääriäinen, Navin Lal, Cheng Soon Ong, Petra Philips, Carl Rasmussen, Gunnar Rätsch, Lorenzo Rosasco, Alexander Smola, Koji Tsuda, Ulrike von Luxburg, Felix Wichmann, Olaf Wittich, Dengyong Zhou, Alexander Zien, Laurent Zwald. I would like to thank the whole AGBS team, and in particular all the PhD students in our lab, for a very nice atmosphere and a lot of fun. In particular I would like to thank our pioneer Ulrike von Luxburg for pleasant and helpful discussions and for the mutual support of our small 'theory' group, Navin Lal for a nice time here in Tübingen, Malte Kuss for providing me his Matlab script to produce the nice manifold figures, my office mate Arthur Gretton for his subtle jokes and the nice atmosphere, and all AOE participants for relaxing after-hours in our lab. Finally I would like to thank my family for their unconditional help and support during my studies, and Kathrin for her understanding and for reminding me sometimes that there is more in life than a thesis.

Contents (Inhaltsverzeichnis)

1 Introduction 13
1.1 Introduction to statistical learning theory 13
1.1.1 Empirical risk minimization 15
1.1.2 Regularized empirical risk minimization 18
1.2 Geometry in statistical learning theory 19
1.3 Summary of contributions of this thesis 20
2 Consistent Continuum Limit for Graph Structure on Point Clouds 23
2.1 Abstract definition of the graph structure 27
2.1.1 Hilbert spaces of functions on the vertices V and the edges E 27
2.1.2 The difference operator d and its adjoint d* 28
2.1.3 The general graph Laplacian 29
2.1.4 The special case of an undirected graph 29
2.1.5 Smoothness functionals for regularization on undirected graphs 31
2.2 Submanifolds in R^d and associated operators 33
2.2.1 Basics of submanifolds 33
2.2.2 The weighted Laplacian and the continuous smoothness functional 41
2.3 Continuum limit of the graph structure 44
2.3.1 Notations and assumptions 45
2.3.2 Asymptotics of Euclidean convolutions on the submanifold M 47
2.3.3 Pointwise consistency of the degree function d, or kernel density estimation on a submanifold in R^d 52
2.3.4 Pointwise consistency of the normalized and unnormalized graph Laplacian 58
2.3.5 Weak consistency of H_V and the smoothness functional S(f) 64
2.3.6 Summary and fixation of H_V by mutual consistency requirement 69
2.4 Applications 71
2.4.1 Intrinsic dimensionality estimation of submanifolds in R^d 71
2.5 Appendix 84
2.5.1 U-statistics 84
3 Kernels, Associated Structures and Generalizations 85
3.1 Introduction 85
3.2 Positive Definite
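As a concrete illustration of the central object of Chapter 2, a graph Laplacian on a neighborhood graph built from random samples of a submanifold, here is a minimal numpy sketch (the Gaussian kernel, the bandwidth h, and the random-walk normalization are illustrative choices, not the thesis's exact estimator):

```python
import numpy as np

def graph_laplacian(X, h=0.3, normalized=True):
    """Gaussian-weighted neighborhood-graph Laplacian on a point cloud X (n x d)."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise squared distances
    W = np.exp(-sq / (2 * h ** 2))                       # kernel edge weights
    np.fill_diagonal(W, 0.0)                             # no self-loops
    d = W.sum(axis=1)                                    # degree function
    if normalized:
        return np.eye(len(X)) - W / d[:, None]           # random-walk normalized: I - D^{-1} W
    return np.diag(d) - W                                # unnormalized: D - W

# Example: points sampled from a 1D submanifold (a circle) embedded in R^2
theta = np.random.default_rng(0).uniform(0, 2 * np.pi, 200)
X = np.stack([np.cos(theta), np.sin(theta)], axis=1)
L = graph_laplacian(X)
```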
Recommended publications
  • CSE 152: Computer Vision Manmohan Chandraker
CSE 152: Computer Vision, Manmohan Chandraker. Lecture 15: Optimization in CNNs. Recap: engineered versus learned features. In the classical pipeline an image passes through hand-designed feature extraction and then a classifier; in a CNN, convolutional filters are trained in a supervised manner by back-propagating the classification error through a stack of convolution + pooling layers topped by dense layers. The lecture reviews two-layer perceptron networks, non-linearities (activation functions), and multi-layer neural networks, then moves from fully connected to convolutional networks: spatial filtering is convolution, with 2D spatial filters applied over the whole image. Weight sharing rests on the insight that images have similar features at various spatial locations. The key operations in a CNN are a learned convolution producing feature maps, a non-linearity such as the rectified linear unit (ReLU), and spatial pooling such as max pooling. Pooling operations aggregate multiple values into a single value; they provide invariance to small transformations, keep only the most important information for the next layer, reduce the size of the next layer (fewer parameters, faster computations), let the next layer observe a larger receptive field, and hierarchically extract more abstract features. (Slide credits: Jia-Bin Huang and Derek Hoiem, UIUC; Pieter Abbeel and Dan Klein; Svetlana Lazebnik; Efstratios Gavves; R. Fergus; Y. LeCun.)
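The three key operations named in the slides compose directly. A minimal numpy sketch (the filter values are hand-set stand-ins for learned weights, and the "convolution" is the cross-correlation conventionally used in CNNs):

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2D filtering: slide one filter over the whole image (weight sharing)."""
    kh, kw = kernel.shape
    out = np.zeros((image.shape[0] - kh + 1, image.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (image[i:i + kh, j:j + kw] * kernel).sum()
    return out

def relu(x):
    return np.maximum(x, 0)               # pointwise non-linearity

def max_pool(x, s=2):
    """s x s max pooling: aggregate values, shrink the map, enlarge the receptive field."""
    H, W = (x.shape[0] // s) * s, (x.shape[1] // s) * s
    return x[:H, :W].reshape(H // s, s, W // s, s).max(axis=(1, 3))

image = np.random.rand(8, 8)
edge = np.array([[1., 0., -1.]] * 3)      # a hand-set filter standing in for a learned one
feature_map = max_pool(relu(conv2d(image, edge)))   # conv -> ReLU -> pool
```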
  • 1 Convolution
CS1114 Section 6: Convolution. February 27th, 2013.

1 Convolution

Convolution is an important operation in signal and image processing. Convolution operates on two signals (in 1D) or two images (in 2D): you can think of one as the "input" signal (or image), and of the other (called the kernel) as a "filter" on the input image, producing an output image (so convolution takes two images as input and produces a third as output). Convolution is an incredibly important concept in many areas of math and engineering (including computer vision, as we'll see later).

Definition. Let's start with 1D convolution (a 1D "image," also known as a signal, can be represented by a regular 1D vector in Matlab). Let's call our input vector f and our kernel g, and say that f has length n and g has length m. The convolution f * g of f and g is defined as:

(f * g)(i) = \sum_{j=1}^{m} g(j) \cdot f(i - j + m/2)

One way to think of this operation is that we're sliding the kernel over the input image. For each position of the kernel, we multiply the overlapping values of the kernel and image together, and add up the results. This sum of products is the value of the output image at the point in the input image where the kernel is centered. Let's look at a simple example. Suppose our input 1D image is f = [10 50 60 10 20 40 30] and our kernel is g = [1/3 1/3 1/3]. Let's call the output image h.
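Carrying the handout's example through the definition gives the moving average below; a Python sketch standing in for the handout's Matlab (treating values outside f as zero at the borders is an assumption, since the excerpt cuts off before specifying border handling):

```python
import numpy as np

def conv1d(f, g):
    """1D convolution per the definition above, with zero padding outside f."""
    n, m = len(f), len(g)
    h = np.zeros(n)
    for i in range(n):
        for a in range(m):
            k = i - a + m // 2            # 0-indexed version of i - j + m/2
            if 0 <= k < n:                # zero outside the image
                h[i] += g[a] * f[k]
    return h

f = np.array([10., 50., 60., 10., 20., 40., 30.])
g = np.array([1 / 3, 1 / 3, 1 / 3])
h = conv1d(f, g)   # each output value is the mean of a centered 3-sample window
print(h)           # [20. 40. 40. 30. 23.33 30. 23.33]
```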
  • Deep Clustering with Convolutional Autoencoders
Deep Clustering with Convolutional Autoencoders. Xifeng Guo (1), Xinwang Liu (1), En Zhu (1), and Jianping Yin (2). 1 College of Computer, National University of Defense Technology, Changsha, 410073, China; 2 State Key Laboratory of High Performance Computing, National University of Defense Technology, Changsha, 410073, China.

Abstract. Deep clustering utilizes deep neural networks to learn feature representations that are suitable for clustering tasks. Though demonstrating promising performance in various applications, we observe that existing deep clustering algorithms either do not take good advantage of convolutional neural networks or do not considerably preserve the local structure of the data-generating distribution in the learned feature space. To address this issue, we propose a deep convolutional embedded clustering algorithm in this paper. Specifically, we develop a convolutional autoencoder structure to learn embedded features in an end-to-end way. Then, a clustering-oriented loss is built directly on the embedded features to jointly perform feature refinement and cluster assignment. To avoid the feature space being distorted by the clustering loss, we keep the decoder, which can preserve the local structure of the data in feature space. In sum, we simultaneously minimize the reconstruction loss of the convolutional autoencoder and the clustering loss. The resultant optimization problem can be effectively solved by mini-batch stochastic gradient descent and back-propagation. Experiments on benchmark datasets empirically validate
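The two-term objective the abstract describes, reconstruction loss plus a clustering loss on the embedded features, can be sketched as follows. A hedged PyTorch sketch in the DEC/DCEC style (layer sizes for 28x28 inputs, the weight gamma, and the KL form of the clustering loss are assumptions, not the paper's exact configuration):

```python
import torch
import torch.nn as nn

class CAE(nn.Module):
    """Convolutional autoencoder: encoder to embedded features, decoder kept to preserve structure."""
    def __init__(self, dim=10):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(1, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.ReLU(),
            nn.Flatten(), nn.Linear(64 * 7 * 7, dim))
        self.dec = nn.Sequential(
            nn.Linear(dim, 64 * 7 * 7), nn.ReLU(),
            nn.Unflatten(1, (64, 7, 7)),
            nn.ConvTranspose2d(64, 32, 5, stride=2, padding=2, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 5, stride=2, padding=2, output_padding=1))

    def forward(self, x):
        z = self.enc(x)
        return z, self.dec(z)

def total_loss(x, x_rec, q, p, gamma=0.1):
    """Reconstruction loss plus clustering loss on soft assignments q (target p)."""
    rec = ((x - x_rec) ** 2).mean()
    clu = (p * (p / q).log()).sum(dim=1).mean()   # KL(p || q), the clustering-oriented term
    return rec + gamma * clu                      # both minimized simultaneously
```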
  • Tensorizing Neural Networks
Tensorizing Neural Networks. Alexander Novikov (1,4), Dmitry Podoprikhin (1), Anton Osokin (2), Dmitry Vetrov (1,3). 1 Skolkovo Institute of Science and Technology, Moscow, Russia; 2 INRIA, SIERRA project-team, Paris, France; 3 National Research University Higher School of Economics, Moscow, Russia; 4 Institute of Numerical Mathematics of the Russian Academy of Sciences, Moscow, Russia.

Abstract. Deep neural networks currently demonstrate state-of-the-art performance in several domains. At the same time, models of this class are very demanding in terms of computational resources. In particular, a large amount of memory is required by the commonly used fully-connected layers, making it hard to use the models on low-end devices and stopping the further increase of the model size. In this paper we convert the dense weight matrices of the fully-connected layers to the Tensor Train [17] format, such that the number of parameters is reduced by a huge factor while the expressive power of the layer is preserved. In particular, for the Very Deep VGG networks [21] we report a compression factor of the dense weight matrix of a fully-connected layer of up to 200,000 times, leading to a compression factor of the whole network of up to 7 times.

1 Introduction. Deep neural networks currently demonstrate state-of-the-art performance in many domains of large-scale machine learning, such as computer vision, speech recognition, text processing, etc. These advances have become possible because of algorithmic advances, large amounts of available data, and modern hardware.
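The memory saving is easy to quantify: a TT-matrix stores one four-way core per mode, for sum_k r_{k-1} m_k n_k r_k parameters instead of the (prod_k m_k)(prod_k n_k) of a dense matrix. A small calculator with illustrative shapes and ranks (not the paper's VGG settings):

```python
import numpy as np

def tt_matrix_params(ms, ns, ranks):
    """Parameter count of a TT-matrix with mode sizes ms x ns and TT-ranks ranks.

    ranks has len(ms)+1 entries with ranks[0] == ranks[-1] == 1;
    core k has shape (ranks[k], ms[k], ns[k], ranks[k+1])."""
    return sum(ranks[k] * ms[k] * ns[k] * ranks[k + 1] for k in range(len(ms)))

# A 1024 x 1024 dense layer, factored as 4 modes of 4*8*4*8 on each side
ms, ns = [4, 8, 4, 8], [4, 8, 4, 8]
ranks = [1, 8, 8, 8, 1]
dense = int(np.prod(ms)) * int(np.prod(ns))   # 1,048,576 dense weights
tt = tt_matrix_params(ms, ns, ranks)          # 5,760 TT parameters
print(dense, tt, dense / tt)                  # compression factor of roughly 180x
```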
  • Fully Convolutional Mesh Autoencoder Using Efficient Spatially Varying Kernels
Fully Convolutional Mesh Autoencoder using Efficient Spatially Varying Kernels. Yi Zhou (Adobe Research), Chenglei Wu (Facebook Reality Labs), Zimo Li (University of Southern California), Chen Cao (Facebook Reality Labs), Yuting Ye (Facebook Reality Labs), Jason Saragih (Facebook Reality Labs), Hao Li (Pinscreen), Yaser Sheikh (Facebook Reality Labs). arXiv:2006.04325v2 [cs.CV], 21 Oct 2020.

Abstract. Learning latent representations of registered meshes is useful for many 3D tasks. Techniques have recently shifted to neural mesh autoencoders. Although they demonstrate higher precision than traditional methods, they remain unable to capture fine-grained deformations. Furthermore, these methods can only be applied to a template-specific surface mesh, and are not applicable to more general meshes, like tetrahedrons and non-manifold meshes. While more general graph convolution methods can be employed, they lack performance in reconstruction precision and require higher memory usage. In this paper, we propose a non-template-specific fully convolutional mesh autoencoder for arbitrary registered mesh data. It is enabled by our novel convolution and (un)pooling operators, learned with globally shared weights and locally varying coefficients, which can efficiently capture the spatially varying contents presented by irregular mesh connections. Our model outperforms state-of-the-art methods on reconstruction accuracy. In addition, the latent codes of our network are fully localized thanks to the fully convolutional structure, and thus have much higher interpolation capability than many traditional 3D mesh generation models.

1 Introduction. Learning latent representations for registered meshes, either from performance capture or physical simulation, is a core component for many 3D tasks, ranging from compression and reconstruction to animation and simulation.
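One way to realize "globally shared weights with locally varying coefficients" is to let each vertex mix a small set of shared weight bases with its own coefficients. The sketch below is a generic illustration of that stated idea, not the paper's exact operator; all names and shapes are assumptions:

```python
import numpy as np

def varying_conv(features, neighbors, bases, coeffs):
    """Per-vertex convolution: shared weight bases, locally varying mixing coefficients.

    features:  (V, Cin)          input feature per vertex
    neighbors: list of V lists   vertex ids of each vertex's neighborhood, padded to size K
    bases:     (B, K, Cin, Cout) globally shared weight bases
    coeffs:    (V, B)            locally varying coefficients per vertex
    """
    V, Cout = features.shape[0], bases.shape[-1]
    out = np.zeros((V, Cout))
    for v in range(V):
        W = np.tensordot(coeffs[v], bases, axes=(0, 0))   # (K, Cin, Cout): this vertex's kernel
        nbr = features[neighbors[v]]                      # (K, Cin): gathered neighborhood
        out[v] = np.einsum('kc,kco->o', nbr, W)
    return out

# Toy usage with random data and random neighborhoods
V, K, B, Cin, Cout = 6, 4, 3, 8, 16
feats = np.random.rand(V, Cin)
nbrs = [list(np.random.choice(V, K)) for _ in range(V)]
out = varying_conv(feats, nbrs, np.random.rand(B, K, Cin, Cout), np.random.rand(V, B))
```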
  • Pre-Training Cnns Using Convolutional Autoencoders
Pre-Training CNNs Using Convolutional Autoencoders. Maximilian Kohlbrenner, Russell Hofmann, Sabbir Ahmmed, Youssef Kashef (TU Berlin).

Abstract. Despite convolutional neural networks being the state of the art in almost all computer vision tasks, their training remains a difficult task. Unsupervised representation learning using a convolutional autoencoder can be used to initialize network weights and has been shown to improve test accuracy after training. We reproduce previous results using this approach and successfully apply it to the difficult Extended Cohn-Kanade dataset, for which labels are extremely sparse but additional unlabeled data is available for unsupervised use.

1 Introduction. A lot of progress has been made in the field of artificial neural networks in recent years, and as a result most computer vision tasks today are best solved using this approach. However, the training of deep neural networks still remains a difficult problem, and results are highly dependent on the model initialization (local minima). During a classification task, a Convolutional Neural Network (CNN) first learns a new data representation, using its convolution layers as feature extractors, and then uses several fully-connected layers for decision-making. While the representation after the convolutional layers is usually optimized for classification, some learned features might be more general and also useful outside of this specific task. Instead of directly optimizing for a good class prediction, one can therefore start by focusing on the intermediate goal of learning a good data representation before beginning to work on the classification problem.
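The workflow described, unsupervised autoencoder training followed by reusing the encoder as a classifier initialization, takes only a few lines in a modern framework. A hedged PyTorch sketch (layer sizes, 28x28 inputs, and 10 classes are placeholders; the training loops are elided):

```python
import torch.nn as nn

encoder = nn.Sequential(                      # shared feature extractor
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))

decoder = nn.Sequential(                      # used only during unsupervised pre-training
    nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
    nn.ConvTranspose2d(16, 1, 2, stride=2))

autoencoder = nn.Sequential(encoder, decoder)
# ... train autoencoder with a reconstruction loss on unlabeled data ...

classifier = nn.Sequential(                   # the pre-trained encoder carries over as init
    encoder, nn.Flatten(), nn.Linear(32 * 7 * 7, 10))
# ... fine-tune classifier with cross-entropy on the (sparse) labeled data ...
```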
  • Universal Invariant and Equivariant Graph Neural Networks
Universal Invariant and Equivariant Graph Neural Networks. Nicolas Keriven (École Normale Supérieure, Paris, France), Gabriel Peyré (CNRS and École Normale Supérieure, Paris, France).

Abstract. Graph Neural Networks (GNN) come in many flavors, but should always be either invariant (permutation of the nodes of the input graph does not affect the output) or equivariant (permutation of the input permutes the output). In this paper, we consider a specific class of invariant and equivariant networks, for which we prove new universality theorems. More precisely, we consider networks with a single hidden layer, obtained by summing channels formed by applying an equivariant linear operator, a pointwise non-linearity, and either an invariant or equivariant linear output layer. Recently, Maron et al. (2019b) showed that by allowing higher-order tensorization inside the network, universal invariant GNNs can be obtained. As a first contribution, we propose an alternative proof of this result, which relies on the Stone-Weierstrass theorem for algebras of real-valued functions. Our main contribution is then an extension of this result to the equivariant case, which appears in many practical applications but has been less studied from a theoretical point of view. The proof relies on a new generalized Stone-Weierstrass theorem for algebras of equivariant functions, which is of independent interest. Additionally, unlike many previous works that consider a fixed number of nodes, our results show that a GNN defined by a single set of parameters can approximate uniformly well a function defined on graphs of varying size.

1 Introduction. Designing Neural Networks (NN) to exhibit some invariance or equivariance to group operations is a central problem in machine learning (Shawe-Taylor, 1993).
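The network class under study, a single hidden layer summing channels built from an equivariant linear operator, a pointwise non-linearity, and an invariant output layer, is small enough to write out for first-order inputs. A sketch for node-feature vectors, where maps of the form a*x + b*mean(x) are permutation-equivariant and a sum over nodes gives the invariant output layer (channel widths and parameters are illustrative):

```python
import numpy as np

def equivariant_linear(x, alpha, beta):
    """Permutation-equivariant linear map on node features: alpha*x + beta*mean(x)."""
    return alpha * x + beta * x.mean(axis=0, keepdims=True)

def invariant_gnn(x, params):
    """One hidden layer: equivariant linear -> pointwise ReLU -> invariant sum readout."""
    return sum(w * np.maximum(equivariant_linear(x, a, b), 0).sum(axis=0)
               for a, b, w in params)     # summing over nodes is the invariant output layer

x = np.random.rand(5, 3)                  # 5 nodes, 3 feature channels
perm = np.random.permutation(5)
params = [(0.5, -0.2, 1.0), (1.0, 0.3, -0.7)]   # two channels
# Invariance check: permuting the nodes leaves the output unchanged
assert np.allclose(invariant_gnn(x, params), invariant_gnn(x[perm], params))
```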
  • PhD Thesis, Stanford University
DISSERTATION: Topological, Geometric, and Combinatorial Aspects of Metric Thickenings. Submitted by Johnathan E. Bush, Department of Mathematics, in partial fulfillment of the requirements for the Degree of Doctor of Philosophy, Colorado State University, Fort Collins, Colorado, Summer 2021. Doctoral Committee: Advisor: Henry Adams; Amit Patel; Chris Peterson; Gloria Luong. Copyright by Johnathan E. Bush 2021. All Rights Reserved.

ABSTRACT. The geometric realization of a simplicial complex equipped with the 1-Wasserstein metric of optimal transport is called a simplicial metric thickening. We describe relationships between these metric thickenings and topics in applied topology, convex geometry, and combinatorial topology. We give a geometric proof of the homotopy types of certain metric thickenings of the circle by constructing deformation retractions to the boundaries of orbitopes. We use combinatorial arguments to establish a sharp lower bound on the diameter of Carathéodory subsets of the centrally-symmetric version of the trigonometric moment curve. Topological information about metric thickenings allows us to give new generalizations of the Borsuk–Ulam theorem and a selection of its corollaries. Finally, we prove a centrally-symmetric analog of a result of Gilbert and Smyth about gaps between zeros of homogeneous trigonometric polynomials.

ACKNOWLEDGEMENTS. Foremost, I want to thank Henry Adams for his guidance and support as my advisor. Henry taught me how to be a mathematician in theory and in practice, and I was exceedingly fortunate to receive my mentorship in research and professionalism through his consistent, careful, and honest feedback. I could always count on him to make time for me and to guide me to interesting problems.
  • Understanding 1D Convolutional Neural Networks Using Multiclass Time-Varying Signals Ravisutha Sakrepatna Srinivasamurthy Clemson University, [email protected]
Clemson University, TigerPrints, All Theses, 8-2018. Understanding 1D Convolutional Neural Networks Using Multiclass Time-Varying Signals. Ravisutha Sakrepatna Srinivasamurthy, Clemson University. Recommended Citation: Srinivasamurthy, Ravisutha Sakrepatna, "Understanding 1D Convolutional Neural Networks Using Multiclass Time-Varying Signals" (2018). All Theses. 2911. https://tigerprints.clemson.edu/all_theses/2911. A Thesis Presented to the Graduate School of Clemson University in Partial Fulfillment of the Requirements for the Degree Master of Science, Computer Engineering, August 2018. Accepted by: Dr. Robert J. Schalkoff, Committee Chair; Dr. Harlan B. Russell; Dr. Ilya Safro.

Abstract. In recent times, we have seen a surge in the usage of Convolutional Neural Networks to solve all kinds of problems, from handwriting recognition to object recognition and from natural language processing to detecting exoplanets. Though the technology has been around for quite some time, there is still a lot of scope to do research on what's really happening 'under the hood' in a CNN model. CNNs are considered to be black boxes which learn something from complex data and provide desired results. In this thesis, an effort has been made to explain what exactly CNNs are learning, by training the network with carefully selected input data.
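A minimal example of the kind of model the thesis studies, a 1D CNN classifying multiclass time-varying signals (the layer sizes, input length, and class count are illustrative assumptions, not the thesis's architecture):

```python
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(4),
    nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(4),
    nn.Flatten(),
    nn.Linear(32 * 16, 4))                 # 4 signal classes, input length 256

x = torch.randn(8, 1, 256)                 # a batch of 8 single-channel signals
logits = model(x)                          # shape (8, 4); train with nn.CrossEntropyLoss
```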
  • Helly Groups
HELLY GROUPS. Jérémie Chalopin, Victor Chepoi, Anthony Genevois, Hiroshi Hirai, and Damian Osajda.

Abstract. Helly graphs are graphs in which every family of pairwise intersecting balls has a non-empty intersection. This is a classical and widely studied class of graphs. In this article we focus on groups acting geometrically on Helly graphs: Helly groups. We provide numerous examples of such groups: all (Gromov) hyperbolic, CAT(0) cubical, finitely presented graphical C(4)-T(4) small cancellation groups, and type-preserving uniform lattices in Euclidean buildings of type Cn are Helly; free products of Helly groups with amalgamation over finite subgroups, graph products of Helly groups, some diagram products of Helly groups, some right-angled graphs of Helly groups, and quotients of Helly groups by finite normal subgroups are Helly. We show many properties of Helly groups: biautomaticity, existence of finite-dimensional models for classifying spaces for proper actions, contractibility of asymptotic cones, existence of EZ-boundaries, satisfiability of the Farrell-Jones conjecture and of the coarse Baum-Connes conjecture. This leads to new results for some classical families of groups (e.g. for FC-type Artin groups) and to a unified approach to results obtained earlier.

Contents: 1. Introduction (1.1 Motivations and main results; 1.2 Discussion of consequences of main results; 1.3 Organization of the article and further results); 2. Preliminaries (2.1 Graphs; 2.2 Complexes; 2.3 CAT(0) spaces and Gromov hyperbolicity; 2.4 Group actions; 2.5 Hypergraphs (set families); 2.6
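For intuition, the defining Helly condition, that every pairwise-intersecting family of balls has a common vertex, can be brute-force checked on a toy graph. A naive sketch, exponential in the number of balls and meant only to illustrate the definition, not a practical algorithm:

```python
from itertools import combinations
from collections import deque

def ball(adj, center, r):
    """Vertices within graph distance r of center (BFS)."""
    dist = {center: 0}
    q = deque([center])
    while q:
        u = q.popleft()
        if dist[u] == r:
            continue
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return frozenset(dist)

def is_helly_for(balls):
    """Does every pairwise-intersecting subfamily have a common vertex?"""
    for k in range(2, len(balls) + 1):
        for fam in combinations(balls, k):
            pairwise = all(a & b for a, b in combinations(fam, 2))
            if pairwise and not frozenset.intersection(*fam):
                return False
    return True

# C4, the 4-cycle: a classical non-Helly example at radius 1
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
balls = [ball(adj, v, 1) for v in adj]
print(is_helly_for(balls))   # False: the four radius-1 balls pairwise meet but share no vertex
```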
  • Convolution Network with Custom Loss Function for the Denoising of Low SNR Raman Spectra †
sensors, Article. Convolution Network with Custom Loss Function for the Denoising of Low SNR Raman Spectra. Sinead Barton (1), Salaheddin Alakkari (2), Kevin O'Dwyer (1), Tomas Ward (2) and Bryan Hennelly (1,*). 1 Department of Electronic Engineering, Maynooth University, W23 F2H6 Maynooth, County Kildare, Ireland; 2 Insight Centre for Data Analytics, School of Computing, Dublin City University, Dublin D09, Ireland. † The algorithm presented in this paper may be accessed through GitHub at the following link: https://github.com/bryanhennelly/CNN-Denoiser-SENSORS (accessed on 5 July 2021).

Abstract: Raman spectroscopy is a powerful diagnostic tool in biomedical science, whereby different disease groups can be classified based on subtle differences in the cell or tissue spectra. A key component in the classification of Raman spectra is the application of multivariate statistical models. However, Raman scattering is a weak process, resulting in a trade-off between acquisition times and signal-to-noise ratios, which has limited its more widespread adoption as a clinical tool. Typically, denoising is applied to the Raman spectrum from a biological sample to improve the signal-to-noise ratio before application of statistical modeling. A popular method for performing this is Savitzky-Golay filtering. Such an algorithm is difficult to tailor so that it can strike a balance between denoising and excessive smoothing of spectral peaks, the characteristics of which are critically important for classification purposes.
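A hedged sketch of the general approach: a 1D convolutional network trained with a loss tailored to spectra. Here the "custom" term is an illustrative second-derivative roughness penalty added to MSE; the paper's actual loss and architecture live in the linked repository and may differ:

```python
import torch
import torch.nn as nn

denoiser = nn.Sequential(                     # a simple 1D convolutional denoiser
    nn.Conv1d(1, 32, 9, padding=4), nn.ReLU(),
    nn.Conv1d(32, 32, 9, padding=4), nn.ReLU(),
    nn.Conv1d(32, 1, 9, padding=4))

def custom_loss(pred, clean, smooth_weight=0.1):
    """MSE against the high-SNR target plus a roughness penalty on the output.

    The penalty discourages residual noise without flattening peaks the way an
    over-aggressive Savitzky-Golay filter can (illustrative choice, not the paper's)."""
    mse = ((pred - clean) ** 2).mean()
    d2 = pred[..., 2:] - 2 * pred[..., 1:-1] + pred[..., :-2]   # discrete 2nd derivative
    return mse + smooth_weight * (d2 ** 2).mean()

noisy = torch.randn(4, 1, 1024)               # stand-in for low-SNR Raman spectra
clean = torch.randn(4, 1, 1024)               # stand-in for high-SNR targets
loss = custom_loss(denoiser(noisy), clean)
loss.backward()
```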
  • Fighting Deepfake by Exposing the Convolutional Traces on Images
Received August 6, 2020; accepted September 1, 2020; published September 9, 2020. Digital Object Identifier: 10.1109/ACCESS.2020.3023037. Fighting Deepfake by Exposing the Convolutional Traces on Images. Luca Guarnera (1,2, Student Member, IEEE), Oliver Giudice (1), and Sebastiano Battiato (1,2, Senior Member, IEEE). 1 Department of Mathematics and Computer Science, University of Catania, 95124 Catania, Italy; 2 iCTLab s.r.l., Spinoff of University of Catania, 95124 Catania, Italy. This work was supported by iCTLab s.r.l., Spin-off of University of Catania.

ABSTRACT: Advances in Artificial Intelligence and Image Processing are changing the way people interact with digital images and video. Widespread mobile apps like FACEAPP make use of the most advanced Generative Adversarial Networks (GAN) to produce extreme transformations on human face photos, such as gender swap, aging, etc. The results are utterly realistic and extremely easy to exploit, even for non-experienced users. This kind of media object took the name of Deepfake and raised a new challenge in the multimedia forensics field: the Deepfake detection challenge. Indeed, discriminating a Deepfake from a real image can be a difficult task even for human eyes, and recent works try to apply the same technology used for generating images to discriminating them, with preliminary good results but many limitations: the employed Convolutional Neural Networks are not robust, prove to be specific to the context, and tend to extract semantics from images. In this paper, a new approach aimed at extracting a Deepfake fingerprint from images is proposed.
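The idea of a convolutional trace can be illustrated in simplified form: averaging high-pass residuals over many images cancels content and leaves a generator's periodic artifacts. The sketch below is a generic residual-fingerprint illustration, not the paper's Expectation-Maximization algorithm; all names are assumptions:

```python
import numpy as np

HIGH_PASS = np.array([[0, -1, 0],
                      [-1, 4, -1],
                      [0, -1, 0]], dtype=float)   # Laplacian high-pass filter

def residual(img):
    """High-pass residual of a grayscale image (valid convolution)."""
    H, W = img.shape
    out = np.zeros((H - 2, W - 2))
    for i in range(H - 2):
        for j in range(W - 2):
            out[i, j] = (img[i:i + 3, j:j + 3] * HIGH_PASS).sum()
    return out

def fingerprint(images):
    """Average residual over many images: content averages out, periodic traces remain."""
    return np.mean([residual(im) for im in images], axis=0)

# With real vs. GAN-generated image sets, one could compare fingerprint statistics,
# e.g. peaks of np.abs(np.fft.fft2(fingerprint(imgs))) at the generator's upsampling period.
imgs = [np.random.rand(64, 64) for _ in range(10)]   # stand-in data
fp = fingerprint(imgs)
```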