Carina Curto · Curriculum Vitae

Department of Mathematics                  tel: (814) 863 9119
The Pennsylvania State University          email: [email protected]
331 McAllister Building                    website: http://www.personal.psu.edu/cpc16/
University Park, State College, PA 16802   CV last updated: December 23, 2016

Research Interests
· Mathematical neuroscience
· Applied algebra, geometry, and topology
· Neural networks and neural codes: theory, modeling, and data analysis

Education & Employment

Academic Positions
The Pennsylvania State University (PSU), State College, PA
  Associate Professor of Mathematics, Aug 2014–present
  Member of the Center for Neural Engineering
University of Nebraska-Lincoln (UNL), Lincoln, NE
  Assistant Professor of Mathematics, Aug 2009–Aug 2014
Courant Institute, New York University (NYU), New York, NY
  Courant Instructor (Mathematics), Sep 2008–Aug 2009
Rutgers, The State University of New Jersey, Newark, NJ
  Center for Molecular and Behavioral Neuroscience
  Postdoctoral Associate (Neuroscience), May 2005–Aug 2008

Education
Duke University, Durham, NC
  Ph.D. in Mathematics (Algebraic Geometry & String Theory), Aug 2000–May 2005
  Advisor: Prof. David R. Morrison
Harvard University, Cambridge, MA
  A.B. in Physics (Cum Laude), Sep 1996–Jun 2000
Iowa City West High School, Iowa City, IA
  Valedictorian (4.0 GPA), Aug 1992–Jun 1996

Additional Education
MBL Neuroinformatics Course, Woods Hole, MA, Aug 2006
IAS Prospects in Theoretical Physics (String Theory), Princeton, NJ, Jun 2002
IAS Women's Program in Symplectic Geometry, Princeton, NJ, May 2002
IAS Park City Math Institute, Park City, Utah, Jul 2001
  – Supersymmetry, Quantum Field Theory & Enumerative Geometry
IAS Women's Program in Mathematical Physics, Princeton, NJ, May 2001
Budapest Semesters in Mathematics, Hungary, Spring 1999
University of Iowa Summer French Program in Lyon, France, Summer 1996
University of Iowa coursework (as a high school student, 15 courses), 1993–1996

Grants & Awards

Research Funding
NIH R01 EB022862 (lead PI; $1,134,009; BRAIN Initiative grant), 2016–19
  – Emergent dynamics from network connectivity: a minimal model
NSF DMS-1516881 (single PI; $150,000), 2015–18
  – Theory of threshold-linear networks and combinatorial neural codes
NSF DMS-1225666/DMS-1537228 (single PI; $164,999; transferred from UNL to PSU), 2012–16
  – Memory encoding in spatially structured networks: dynamics, discrete geometry & topology
Sub-award from NIH R01-EY010542-19 ($33,000), 2014–15
  – Regulation of photoreceptor neurotransmission (PI: Wallace Thoreson, UNMC)
Janelia Visitor Program (co-PI with E. Pastalkova & V. Itskov; $61,976), 2013–15
  – Development of a mathematical tool for rigorous analysis of neural activity sequences
Alfred P. Sloan Research Fellowship ($50,000), 2011–15
Career Enhancement Fellowship for Junior Faculty ($31,500), 2012–13
  – Woodrow Wilson National Fellowship Foundation & Andrew W. Mellon Foundation
Nebraska EPSCoR First Award ($20,000), 2012–13
NSF DMS-0920845 (single PI; $109,635), 2009–13
  – Stimulus representation and spontaneous activity in recurrent networks

Other Funding & Awards
Statistical and Applied Mathematical Sciences Institute (SAMSI), 2016
  – year-long program: Challenges in Computational Neuroscience
  – fully funded long-term visitor (3 months)
  – taught a graduate course in Mathematical Neuroscience for Duke, UNC, & NC State students
Institute for Mathematics and its Applications (IMA), long-term visitor, 2014
  – year-long program: Scientific and Engineering Applications of Algebraic Topology
  – fully funded long-term visitor (2.5 months)
AMS Travel Award (Mathematical Congress of the Americas in Guanajuato), 2013
Named a UNL "Academic Star", 2012
UNL Visiting Scholars Grant (to support visitors), 2010, 2012
NSF S-STEM-1060322 (co-PI; $599,996; education grant), 2011–16
UNL Research Council Interdisciplinary Grant (co-PI; $5,000), 2010
NSF VIGRE Graduate Fellowship, 2004–05
NSF Graduate Research Fellowship, 2000–04
Duke Endowment Fellowship, Duke University, 2001–04
James B. Duke Fellowship, Duke University, 2000–03
University Scholars Program Fellowship, Duke University, 2000–01
Mellon Mays Undergraduate Fellowship, Harvard University, 1997–2000
American Physical Society Scholarship, 1996–98
Detur Prize, Harvard University, 1997
Varsity Letter Winner, Harvard Women's Tennis, 1996–97

Publications

In progress
26. C. Curto, E. Gross, J. Jeffries, K. Morrison, M. Omar, Z. Rosen, A. Shiu, N. Youngs. Algebraic signatures of convex and non-convex codes. In preparation.
25. A. Veliz-Cuba, V. Itskov†, C. Curto† (†equal last authors). Sequences of neural activity in randomly-connected networks with Hebbian plasticity yield robust bump attractor dynamics. In preparation.

Preprints/Submitted
24. K. Morrison, A. Degeratu, V. Itskov, C. Curto. Diversity of emergent dynamics in competitive threshold-linear networks: a preliminary report. Preprint available at arXiv.org.
23. C. Curto, R. Vera. The Leray dimension of a convex code. Preprint available at arXiv.org.
22. C. Curto, N. Youngs. Neural ring homomorphisms and maps between neural codes. Preprint available at arXiv.org.

Published/Accepted
21. C. Curto, E. Gross, J. Jeffries, K. Morrison, M. Omar, Z. Rosen, A. Shiu, N. Youngs. What makes a neural code convex? SIAM Journal on Applied Algebra and Geometry, in press, 2017.
20. C. Curto. What can topology tell us about the neural code? Bulletin of the AMS, vol. 54, no. 1, pp. 63-78, 2017.
19. C. Curto, K. Morrison. Pattern completion in symmetric threshold-linear networks. Neural Computation, vol. 28, pp. 2825-2852, 2016.
18. C. Curto, V. Itskov. Combinatorial neural codes. Book chapter to appear in the Handbook of Discrete and Combinatorial Mathematics, 2016.
17. W.B. Thoreson, M.J. Van Hook, C. Parmelee, C. Curto. Modeling and measurement of vesicle pools at the cone ribbon synapse: changes in release probability are solely responsible for voltage-dependent changes in release. Synapse, vol. 70, pp. 1-14, 2016.
16. C. Giusti, E. Pastalkova, C. Curto†, V. Itskov† (†equal last authors). Clique topology reveals intrinsic geometric structure in neural correlations. PNAS, vol. 112, no. 44, pp. 13455-13460, 2015.
15. M.J. Van Hook, C. Parmelee, M. Chen, K.M. Cork, C. Curto, W.B. Thoreson. Calmodulin enhances ribbon replenishment and shapes filtering of synaptic transmission by cone photoreceptors. Journal of General Physiology, vol. 144, pp. 357-378, 2014.
14. C. Curto, A. Degeratu, V. Itskov. Encoding binary neural codes in networks of threshold-linear neurons. Neural Computation, vol. 25, pp. 2858-2903, 2013.
13. C. Curto, V. Itskov, A. Veliz-Cuba, N. Youngs. The neural ring: an algebraic tool for analyzing the intrinsic structure of neural codes. Bulletin of Mathematical Biology, vol. 75, no. 9, pp. 1571-1611, 2013.
12. C. Curto, V. Itskov, K. Morrison, Z. Roth, J.L. Walker. Combinatorial neural codes from a mathematical coding theory perspective. Neural Computation, vol. 25, no. 7, pp. 1891-1925, 2013.
11. C. Curto, D.R. Morrison. Threefold flops via matrix factorization. Journal of Algebraic Geometry, vol. 22, no. 4, pp. 599-627, 2013.
10. C. Curto, A. Degeratu, V. Itskov. Flexible memory networks. Bulletin of Mathematical Biology, vol. 74, no. 3, pp. 590-614, 2012.
9. V. Itskov*, C. Curto*, E. Pastalkova, G. Buzsaki (*equal contribution). Cell assembly sequences arising from spike threshold adaptation keep track of time in the hippocampus. Journal of Neuroscience, vol. 31, no. 8, pp. 2828-2834, 2011.
8. K.D. Harris, P. Bartho, P. Chadderton, C. Curto, J. de la Rocha, L. Hollender, V. Itskov, A. Luczak, S. Marguet, A. Renart, S. Sakata. How do neurons work together? Lessons from auditory cortex. Hearing Research, vol. 271, no. 1-2, pp. 37-53, 2011.
7. P. Bartho, C. Curto, A. Luczak, S. Marguet, K.D. Harris. Population coding of tone stimuli in auditory cortex: dynamic rate vector analysis. European Journal of Neuroscience, vol. 30, no. 9, pp. 1767-1778, 2009.
6. C. Curto, S. Sakata, S. Marguet, V. Itskov, K.D. Harris. A simple model of cortical dynamics explains variability and state dependence of sensory responses in urethane-anesthetized auditory cortex. Journal of Neuroscience, vol. 29, no. 34, pp. 10600-10612, 2009.
5. C. Curto*, V. Itskov* (*equal contribution). Cell groups reveal structure of stimulus space. PLoS Computational Biology, vol. 4, no. 10, e1000205, 2008.
4. V. Itskov, C. Curto, K.D. Harris. Valuations for spike train prediction. Neural Computation, vol. 20, no. 3, pp. 644-667, 2008.
3. C. Curto. Matrix model superpotentials and ADE singularities. Advances in Theoretical and Mathematical Physics, vol. 12, no. 2, pp. 357-409, 2008.
2. C.A. Kletzing, J.D. Scudder, E.E. Dors, and C. Curto. The auroral source region: plasma properties of the high altitude plasma sheet. Journal of Geophysical Research, vol. 108, no. A10, 1360, 2003.
1. C. Curto, S.J. Gates, V.G.J. Rodgers. Superspace geometrical realization of the N-extended super Virasoro algebra and its dual. Physics Letters B, vol. 480, pp. 337-347, 2000.

Ph.D. Thesis
0. C. Curto. Matrix Model Superpotentials and Calabi-Yau Spaces: an ADE Classification. Duke University Ph.D. Thesis, 2005. (arxiv.org/math.AG/0505111) This work resulted in two publications: #3 and #11 above.

Research Presentations & Invitations

Invited Talks
BRAIN Initiative Investigator's Meeting, Bethesda, MD, Dec 2016
Mathematical Biosciences Institute (MBI), Ohio State University, Oct 2016
  – Dynamical Systems and Data Analysis in Neuroscience: Bridging the Gap
EPFL Life Sciences Seminar Series, Lausanne, Switzerland, Sep 2016
MONA2 Conference on Modelling Neural Activity, Waikoloa, Hawaii, Jun 2016
SIAM Conference on Discrete Mathematics, Atlanta,