Particle Filter Recurrent Neural Networks

The Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI-20)

Xiao Ma,* Peter Karkus,* David Hsu, Wee Sun Lee
National University of Singapore
{xiao-ma, karkus, dyhsu, leews}@comp.nus.edu.sg

*Equal contribution. Copyright 2020, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Abstract

Recurrent neural networks (RNNs) have been extraordinarily successful for prediction with sequential data. To tackle highly variable and multi-modal real-world data, we introduce Particle Filter Recurrent Neural Networks (PF-RNNs), a new RNN family that explicitly models uncertainty in its internal structure: while an RNN relies on a long, deterministic latent state vector, a PF-RNN maintains a latent state distribution, approximated as a set of particles. For effective learning, we provide a fully differentiable particle filter algorithm that updates the PF-RNN latent state distribution according to the Bayes rule. Experiments demonstrate that the proposed PF-RNNs outperform the corresponding standard gated RNNs on a synthetic robot localization dataset and 10 real-world sequence prediction datasets for text classification, stock price prediction, etc.

Introduction

Prediction with sequential data is a long-standing challenge in machine learning. It has many applications, e.g., object tracking (Blake and Isard 1997), speech recognition (Xiong et al. 2018), and decision making under uncertainty (Somani et al. 2013). For effective prediction, predictors require "memory", which summarizes and tracks information in the input sequence. The memory state is generally not observable, hence the need for a belief, i.e., a posterior state distribution that captures the sufficient statistic of the input for making predictions. Modeling the belief manually is often difficult. Consider the task of classifying news text—treated as a sequence of words—into categories, such as politics, education, economy, etc. It is difficult to handcraft the belief representation and dynamics for accurate classification.

State-of-the-art sequence predictors often use recurrent neural networks (RNNs), which learn a vector h of deterministic time-dependent latent variables as an approximation to the belief. Real-world data, however, are highly variable and often multi-modal. To cope with the complexity of uncertain real-world data and achieve better belief approximation, one could increase the length of the latent vector h, thus increasing the number of network parameters and the amount of data required for training.

We introduce Particle Filter Recurrent Neural Networks (PF-RNNs), a new family of RNNs that seeks to improve belief approximation without lengthening the latent vector h, thus reducing the data required for learning. Particle filtering (Del Moral 1996) is a model-based belief tracking algorithm. It approximates the belief as a set of sampled states that typically have well-understood meaning. PF-RNNs borrow from particle filtering the idea of approximating the belief as a set of weighted particles, and combine it with the powerful approximation capacity of RNNs. A PF-RNN approximates the variable and multi-modal belief as a set of weighted latent vectors $\{h^1, h^2, \ldots\}$ sampled from the same distribution. Like standard RNNs, PF-RNNs follow a model-free approach: PF-RNNs' latent vectors are learned distributed representations, which are not necessarily interpretable. As an alternative to Gaussian-based filters, e.g., Kalman filters, particle filtering is a non-parametric approximator that offers a more flexible belief representation (Del Moral 1996); it is also proven to give a tighter evidence lower bound (ELBO) in the data generation domain (Burda, Grosse, and Salakhutdinov 2015). In our case, the approximate representation is trained from data to optimize the prediction performance. For effective training with gradient methods, we employ a fully differentiable particle filter algorithm that maintains the latent belief. See Fig. 1 for a comparison of RNN and PF-RNN.

Figure 1: A comparison of RNN and PF-RNN. An RNN approximates the belief as a long latent vector and updates it with a deterministic nonlinear function. A PF-RNN approximates the belief as a set of weighted particles and updates them with the stochastic particle filtering algorithm.

We apply the underlying idea of PF-RNN to gated RNNs, which are easy to implement and have shown strong performance in many sequence prediction tasks. Specifically, we propose PF-LSTM and PF-GRU, the particle filter extensions of Long Short Term Memory (LSTM) (Hochreiter and Schmidhuber 1997) and Gated Recurrent Unit (GRU) (Cho et al. 2014). PF-LSTM and PF-GRU serve as drop-in replacements for LSTM and GRU, respectively. They aim to learn a better belief representation from the same data, though at a greater computational cost.

We evaluated PF-LSTM and PF-GRU on 13 datasets: 3 synthetic datasets for systematic understanding and 10 real-world datasets with different sample sizes for performance comparison. The experiments show that our PF-RNNs outperform the corresponding standard RNNs with a comparable number of parameters. Further, the PF-RNNs achieve the best results on almost all datasets when there is no restriction on the number of model parameters used. More details are publicly available in arXiv:1905.12885; the code is available at https://github.com/Yusufma03/pfrnns.

Related Work

There are two general categories for prediction with sequential data: model-based and model-free. The model-based approach includes, e.g., the well-known hidden Markov models (HMMs) and dynamic Bayesian networks (DBNs) (Murphy 2002). They rely on handcrafted state representations with well-defined semantics, e.g., phonemes in speech recognition. Given a model, one may perform belief tracking according to the Bayes' rule. The main difficulty here is that the state space and the computational complexity of belief tracking grow exponentially with the number of state dimensions. To cope with this difficulty, particle filters represent the belief as a set of sampled states and perform approximate inference. Alternatively, the model-free approach, such as RNNs, approximates the belief as a latent state vector, learned directly from data, and updates it through a deterministic nonlinear function, also learned from data.

The proposed PF-RNNs build upon RNNs and combine their powerful data-driven approximation capabilities with the sample-based belief representation and approximate Bayesian inference used in particle filters. Related sample-based methods have been applied to generative models. Importance sampling is used to improve variational auto-encoders (Burda, Grosse, and Salakhutdinov 2015). This is extended to sequence generation (Le et al. 2018) and to reinforcement learning (Igl et al. 2018). Unlike the earlier works that focus on generation, we combine RNNs and particle filtering for sequence prediction. PF-RNNs are trained discriminatively, instead of generatively, with the target loss function on the model output. As a result, PF-RNN training prioritizes target prediction over data generation, which may be irrelevant to the prediction task.

PF-RNNs exploit the general idea of embedding algorithmic priors, in this case filtering algorithms, in neural networks and training them discriminatively (Jonschkowski and Brock 2016; Jonschkowski, Rastogi, and Brock 2018; Karkus, Hsu, and Lee 2018). Earlier work embeds a particle filter in an RNN for learning belief tracking, but follows a model-based approach and relies on a handcrafted belief representation (Jonschkowski, Rastogi, and Brock 2018; Karkus, Hsu, and Lee 2018). PF-RNNs retain the model-free nature of RNNs and exploit their powerful approximation capabilities to learn the belief representation directly from data. Other work explicitly addresses belief representation learning with RNNs (Gregor and Besse 2018; Guo et al. 2018); however, these approaches do not involve a Bayesian belief update or particle filtering.

Particle Filter Recurrent Neural Networks

Overview

The general sequence prediction problem is to predict an output sequence, given an input sequence. In this paper, we focus on predicting the output y_t at time t, given the input history x_1, x_2, ..., x_t.

Standard RNNs handle sequence prediction by maintaining a deterministic latent state h_t that captures the sufficient statistic of the input history, and updating h_t sequentially given new inputs. Specifically, RNNs update h_t with a deterministic nonlinear function learned from data. The predicted output $\hat{y}_t$ is another nonlinear function of the latent state h_t, also learned from data.

To handle highly variable, noisy real-world data, one key idea of PF-RNN is to capture the sufficient statistic of the input history in the latent belief b(h_t) by forming multiple hypotheses over h_t. Specifically, PF-RNNs approximate b(h_t) by a set of K weighted particles $\{(h_t^i, w_t^i)\}_{i=1}^{K}$, for latent state $h_t^i$ and weight $w_t^i$; the particle filtering algorithm is used to update the particles according to the Bayes rule. The Bayesian treatment of the latent belief naturally captures the stochastic nature of real-world data. Further, all particles share the same parameters in a PF-RNN. The number of particles thus does not affect the number of PF-RNN network parameters. Given a fixed training dataset, we expect that increasing the number of particles improves the belief approximation and leads to better learning performance, but at the cost of greater computational complexity.

Similar to RNNs, we use learned functions for updating the latent states and predict the output y_t based on the mean particle: $\hat{y}_t = f_{\mathrm{out}}(\bar{h}_t)$, where $\bar{h}_t = \sum_{i=1}^{K} w_t^i h_t^i$ and $f_{\mathrm{out}}$ is a task-dependent prediction function.
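To make the overview concrete, the following is a minimal PyTorch sketch of the particle-based belief and the mean-particle prediction described above. It is an illustration under assumptions, not the authors' implementation (which is available at the linked repository): the ParticleGRUSketch class, the obs_score function, and the specific reweighting rule are hypothetical stand-ins, and the sketch omits the stochastic latent update and the differentiable resampling that the paper's full algorithm provides.

```python
import math

import torch
import torch.nn as nn


class ParticleGRUSketch(nn.Module):
    """Toy PF-GRU-style model: K weighted latent particles sharing one GRU cell."""

    def __init__(self, input_dim, hidden_dim, output_dim, num_particles=16):
        super().__init__()
        self.K = num_particles
        self.cell = nn.GRUCell(input_dim, hidden_dim)   # transition, shared by all particles
        self.obs_score = nn.Linear(hidden_dim, 1)       # hypothetical learned observation score
        self.f_out = nn.Linear(hidden_dim, output_dim)  # task-dependent prediction head

    def forward(self, x_seq):
        # x_seq: (T, batch, input_dim)
        T, B, _ = x_seq.shape
        h = x_seq.new_zeros(B * self.K, self.cell.hidden_size)   # K particles per sequence
        log_w = x_seq.new_full((B, self.K), -math.log(self.K))   # uniform initial weights
        outputs = []
        for t in range(T):
            x_rep = x_seq[t].repeat_interleave(self.K, dim=0)    # same input for every particle
            h = self.cell(x_rep, h)                              # shared, learned transition update
            # Bayes-style reweighting with the learned score, normalized in log space.
            log_w = log_w + self.obs_score(h).view(B, self.K)
            log_w = log_w - torch.logsumexp(log_w, dim=1, keepdim=True)
            # Mean particle: h_bar_t = sum_i w_t^i h_t^i, then y_hat_t = f_out(h_bar_t).
            w = log_w.exp().unsqueeze(-1)                        # (B, K, 1)
            h_bar = (w * h.view(B, self.K, -1)).sum(dim=1)
            outputs.append(self.f_out(h_bar))
        return torch.stack(outputs)                              # (T, batch, output_dim)


model = ParticleGRUSketch(input_dim=8, hidden_dim=32, output_dim=4, num_particles=30)
y_hat = model(torch.randn(20, 5, 8))  # (20, 5, 4): one prediction per time step
```

Note that all K particles go through the same GRUCell and output head, which is why the number of particles changes the computation and the belief resolution but not the parameter count, consistent with the description above.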