Data-Driven Astronomical Inference
Josh Bloom, UC Berkeley (@profjsb)
Computational Astrophysics 2014–2020: Approaching Exascale
LBNL, 21 March 2014

Inference Space
• axes: Data Driven vs. Theory Driven; non-parametric vs. parametric; Bayesian vs. Frequentist
• Hardware: laptops → NERSC
• Software: Python/SciPy, R, ...
• Carbonware: astro grad students, postdocs

Bayesian Distance Ladder
Pulsational variables: the period–luminosity relation

  m_ij = μ_i + M_{0,j} + α_j log10(P_i / P_0)
         + E(B−V)_i [R_V a(1/λ_j) + b(1/λ_j)] + ε_ij

where i indexes over individual stars, j indexes over wavebands, and a and b are fixed constants at each waveband.

Data: 134 RR Lyrae (WISE, Hipparcos, UVIORJHK)
Fit: 307-dimensional model parameter inference
- deterministic MCMC model
- ~6 days for a single run (one core)
- parallelism for convergence tests
Klein+12; Klein, JSB+14

Bayesian Distance Ladder (C. R. Klein et al.)
• Approaching 1% distance uncertainty
• Precision 3D dust measurements
[Figure 6: multi-band period–luminosity relations. RRab stars in blue, RRc stars in red; Blazhko-affected stars denoted with diamonds, stars not known to exhibit the Blazhko effect with squares. Solid black lines are the best-fitting period–luminosity relations in each waveband; dashed lines indicate the 1σ prediction uncertainty when applying the best-fitting relation to a new star with known period.]

Bayesian Astrometry
Fitting for: parallax, proper motion, binary parameters, microlensing, ...
[Figure: astrometric motion on the sky, North vs. East offsets in milliarcseconds (Torres 2007)]
Hipparcos: 10^6 stars; Gaia: ~10^9 stars

Bayesian Astrometry: "Distortion map"
Step 1: Regress a 7-d parametric affine transformation (scale, rotation, shear, etc.)
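A minimal sketch of the affine-regression step on matched star positions. This is a plain 6-parameter least-squares illustration only; pyBAST's actual transformation is 7-dimensional and carries posterior uncertainty, and the data here are synthetic.

```python
import numpy as np

def fit_affine(obs, ref):
    """Least-squares affine map from observed (x, y) to reference (x', y').

    Solves ref ~= obs @ A.T + t as one linear system per output coordinate.
    Illustrative 6-parameter fit; pyBAST's parameterization is 7-d.
    """
    n = obs.shape[0]
    X = np.column_stack([obs, np.ones(n)])        # design matrix rows [x, y, 1]
    coef, *_ = np.linalg.lstsq(X, ref, rcond=None)  # shape (3, 2)
    A, t = coef[:2].T, coef[2]
    return A, t

# Synthetic matched positions under a small scale/rotation/shear + shift
rng = np.random.default_rng(0)
obs = rng.uniform(-0.5, 0.5, size=(50, 2))
A_true = np.array([[1.01, 0.02], [-0.015, 0.99]])
t_true = np.array([0.003, -0.001])
ref = obs @ A_true.T + t_true

A_fit, t_fit = fit_affine(obs, ref)
```

With noise-free synthetic data the fit recovers the injected transformation to machine precision; real astrometry would add per-point covariances, which is where the Bayesian treatment comes in.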
Step 2: Learn a non-parametric distortion map with Gaussian processes
[Figure: learned distortion map across the field, centered on the central declination]
http://berianjames.github.com/pyBAST/
James, JSB+14

Bayesian Astrometry: Some Clear Benefits
High proper-motion white dwarf in SDSS Stripe 82: 204 ± 5 mas/yr
• covariate uncertainties in celestial coordinates
• mapping observed points can incorporate variance throughout the image, extending even to highly non-trivial distortion effects
• astrometry can be treated as Bayesian updating, allowing incorporation of prior knowledge about proper motion & parallax
[Figure: observed positions, colored by time]
Non-parallel Cholesky + MCMC: ~1 hour for 71 observations
http://berianjames.github.com/pyBAST/

Machine-Learned Classification
25-class variable-star classifier
Data: 50k sources from ASAS, 810 with known labels (time series, colors)
74-dimensional feature set for learning
Featurization is the bottleneck (but embarrassingly parallel)
P_RRL = 0.94
Richards+12
Confusion matrix: Figure 5 of Richards et al. 2012 (The Astrophysical Journal Supplement Series, 203:32 (27pp), 2012 December)
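The classification setup above can be sketched with a random forest producing class probabilities such as P_RRL. This is a toy illustration on synthetic three-feature data (log period, amplitude, color stand-ins), not the real 74-dimensional ASAS feature pipeline; class centroids and names are invented.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)

def make_class(log_period, amplitude, color, n):
    """Draw n synthetic stars around hypothetical feature centroids."""
    return np.column_stack([
        rng.normal(log_period, 0.1, n),
        rng.normal(amplitude, 0.05, n),
        rng.normal(color, 0.05, n),
    ])

X = np.vstack([make_class(-0.25, 0.5, 0.3, 100),   # RR Lyrae-like
               make_class(2.3, 1.5, 1.6, 100),     # Mira-like
               make_class(-0.5, 0.3, 0.4, 100)])   # eclipsing-like
y = np.repeat([0, 1, 2], 100)

clf = RandomForestClassifier(n_estimators=200, oob_score=True, random_state=0)
clf.fit(X, y)

# Class-probability vector for a new star, analogous to P_RRL on the slide
proba = clf.predict_proba([[-0.25, 0.5, 0.3]])[0]
```

The out-of-bag score plays the role of the cross-validated correspondence rate quoted for the real 810-source training set; featurization of raw light curves, the stated bottleneck, is assumed already done here.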
Figure 5 caption (Richards et al. 2012): Cross-validated confusion matrix for all 810 ASAS training sources. Columns are normalized to sum to unity, with the total number of true objects of each class listed along the bottom axis. The overall correspondence rate for these sources is 80.25%, with at least 70% correspondence for half of the classes. Classes with low correspondence are those with fewer than 10 training sources or classes which are easily confused. Red giant classes tend to be confused with other red giant classes and eclipsing classes with other eclipsing classes. There is substantial power in the top-right quadrant, where rotational and eruptive classes are misclassified as red giants; these errors are likely due to small training-set size for those classes and the difficulty of classifying those non-periodic sources.

Excerpt from the accompanying text:

...with any monotonically increasing function (which is typically restricted to a set of non-parametric isotonic functions, such as step-wise constants). A drawback to both of these methods is that they assume a two-class problem; a straightforward way around this is to treat the multi-class problem as C one-versus-all classification problems, where C is the number of classes. However, we find that Platt Scaling is too restrictive of a transformation to reasonably calibrate our data, and determine that we do not have enough training data in each class to use Isotonic Regression with any degree of confidence.

Ultimately, we find that a calibration method similar to the one introduced by Bostrom (2008) is the most effective for our data. This method uses the probability transformation

  p̃_ij = p_ij + r(1 − p_ij)   if p_ij = max{p_i1, p_i2, ..., p_iC}
  p̃_ij = p_ij (1 − r)          otherwise,                             (4)

where {p_i1, p_i2, ..., p_iC} is the vector of class probabilities for object i and r ∈ [0, 1] is a scalar. Note that the adjusted probabilities {p̃_i1, p̃_i2, ..., p̃_iC} are proper probabilities in that they are each between 0 and 1 and sum to unity for each object. The optimal value of r is found by minimizing the Brier score (Brier 1950) between the calibrated (cross-validated) and true probabilities. [Footnote 14: The Brier score is defined as B(p̃) = (1/N) Σ_{i=1..N} Σ_{j=1..C} (I(y_i = j) − p̃_ij)², where N is the total number of objects, C is the number of classes, and I(y_i = j) is 1 if and only if the true class of source i is j.]

We find that using a fixed value for r is too restrictive and, for objects with small maximal RF probability, it enforces too wide of a margin between the first- and second-largest probabilities. Instead, we implement a procedure similar to that of Bostrom (2008) and parameterize r with a sigmoid function based on the classifier margin, ∆_i = p_i,max − p_i,2nd, for each source:

  r(∆_i) = 1 / (1 + e^{A∆_i + B}) − 1 / (1 + e^B),                    (5)

where the second term ensures that there is zero calibration performed at ∆_i = 0. This parameterization allows the amount of [...]
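The calibration above, Eqs. (4) and (5) plus the footnoted Brier score, can be sketched directly. The values of A and B below are illustrative only; in the paper they are fit by minimizing the Brier score on cross-validated probabilities.

```python
import numpy as np

def calibrate(P, A, B):
    """Margin-dependent recalibration of class probabilities,
    following Eqs. (4)-(5) (after Bostrom 2008).

    P : (N, C) array whose rows sum to unity.
    A, B : sigmoid parameters (fit via the Brier score in the paper;
           the values used below are illustrative).
    """
    P = np.asarray(P, dtype=float)
    top2 = np.sort(P, axis=1)[:, -2:]            # [second-largest, largest]
    delta = top2[:, 1] - top2[:, 0]              # classifier margin Delta_i
    # Eq. (5): the second term guarantees zero calibration at Delta_i = 0
    r = (1.0 / (1.0 + np.exp(A * delta + B)) - 1.0 / (1.0 + np.exp(B)))[:, None]
    is_max = P == P.max(axis=1, keepdims=True)
    # Eq. (4): boost the maximal probability by r, shrink the rest by (1 - r)
    return np.where(is_max, P + r * (1.0 - P), P * (1.0 - r))

def brier_score(P, y):
    """Footnote 14: B = (1/N) sum_i sum_j (I(y_i = j) - p_ij)^2."""
    Y = np.eye(P.shape[1])[y]                    # one-hot true classes
    return np.mean(np.sum((Y - P) ** 2, axis=1))

P = np.array([[0.60, 0.30, 0.10],
              [0.40, 0.35, 0.25]])
Pc = calibrate(P, A=-5.0, B=0.0)
```

A useful sanity check is that each calibrated row still sums to unity: the top entry gains r(1 − p_max) while the remaining entries lose r times their total, which is exactly r(1 − p_max).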
Machine-learned variable-star catalog: http://bigmacc.info

Doing Science with Probabilistic Catalogs
• Demographics (with little follow-up): trading high purity at the cost of lower efficiency; e.g., using RR Lyrae to find new Galactic structure
• Novelty discovery (with lots of follow-up): trading high efficiency for lower purity; e.g., discovering new instances of rare classes

Discovery of Bright Galactic R Coronae Borealis and DY Persei Variables: Rare Gems Mined from ASAS
A. A. Miller, J. W. Richards, J. S. Bloom, S. B. Cenko, J. M. Silverman, D. L. Starr, and K. G. Stassun (draft, April 20, 2012)

Abstract: We present the results of a machine-learning (ML) based search for new R Coronae Borealis (RCB) stars and DY Persei-like stars (DYPers) in the Galaxy using cataloged light curves obtained by the All-Sky Automated Survey (ASAS). RCB stars, a rare class of hydrogen-deficient carbon-rich supergiants, are of great interest owing to the insights they can provide on the late stages of stellar evolution. DYPers are possibly the low-temperature, low-luminosity analogs to the RCB phenomenon, though additional examples are needed to fully establish this connection. While RCB stars and DYPers are traditionally identified by epochs of extreme dimming that occur without regularity, the ML search framework more fully captures the richness and diversity of their photometric behavior.
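The purity/efficiency trade described under "Doing Science with Probabilistic Catalogs" amounts to choosing a probability threshold. A minimal sketch on synthetic classifier scores (the Beta-distributed scores and the helper name are invented, not the actual catalog pipeline): raising the cut buys purity for demographics, lowering it buys efficiency for novelty discovery.

```python
import numpy as np

def purity_efficiency(p, is_class, thresh):
    """Purity (precision) and efficiency (completeness) of a probability cut.

    p        : classifier probabilities for the target class (e.g. P_RRL)
    is_class : boolean ground-truth membership labels
    """
    selected = p >= thresh
    if not selected.any():
        return np.nan, 0.0
    purity = np.mean(is_class[selected])
    efficiency = np.sum(is_class & selected) / np.sum(is_class)
    return purity, efficiency

rng = np.random.default_rng(1)
# Toy scores: true members cluster near 1, contaminants near 0
p = np.concatenate([rng.beta(8, 2, 50), rng.beta(2, 8, 450)])
truth = np.concatenate([np.ones(50, bool), np.zeros(450, bool)])

for t in (0.5, 0.9):
    pur, eff = purity_efficiency(p, truth, t)
    print(f"thresh={t}: purity={pur:.2f} efficiency={eff:.2f}")
```

On these synthetic scores the stricter cut keeps a nearly pure sample at the cost of recovering far fewer true members, mirroring the demographics-versus-discovery choice on the slide.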