UC Berkeley UC Berkeley Electronic Theses and Dissertations

Title Geometry, Dynamics, and Emergence: Form, Image Geometry, and Coupled Subcritical Oscillations

Permalink https://escholarship.org/uc/item/8rd8n022

Author Levy, Michael Gabriel

Publication Date 2019

Peer reviewed|Thesis/dissertation

eScholarship.org Powered by the California Digital Library University of California Geometry, Dynamics, and Emergence: Cowrie Form, Image Geometry, and Coupled Subcritical Oscillations

by

Michael Gabriel Levy

A dissertation submitted in partial satisfaction of the

requirements for the degree of

Doctor of Philosophy

in

Biophysics

in the

Graduate Division

of the

University of California, Berkeley

Committee in charge:

Associate Professor Michael Robert DeWeese, Chair Professor Bruno Olshausen Associate Professor Oskar Halletschk

Fall 2019 Geometry, Dynamics, and Emergence: Cowrie Form, Image Geometry, and Coupled Subcritical Oscillations

Copyright 2019 by Michael Gabriel Levy 1

Abstract

Geometry, Dynamics, and Emergence: Cowrie Form, Image Geometry, and Coupled Subcritical Oscillations

by

Michael Gabriel Levy

Doctor of Philosophy in Biophysics

University of California, Berkeley

Associate Professor Michael Robert DeWeese, Chair

Living Systems are dynamically controlled by processes that they themselves create: here we study the emergence of the sophisticated control regimes in models of biological systems. A major route to understanding in Biology today is discovering which genes lead to what, almost always neglecting a story for how the genes achieve their ends. From an understand- ing perspective this is disappointing, and we strive to make more holistic models which try to get at the underlying nature of biological logic. Towards this end, we notice geometric regimes in cowrie growth and work towards a falsifiable and explanatory mechanistic model, aiming to make the case for the importance of mechanics and dynamics in development. We also present the first systematic study of globally-coupled subcritical limit cycle oscillators, exploring a rarely remarked upon dynamical regime. This regime is biologically interesting as it has bistability between oscillations and quiescence and is a simple excitable media which could be used as a reduced neural model. I am interested in clustering in the system as an example of symmetry breaking in identical systems; small differences in initial conditions lead to differentiation of the oscillators into particular varieties, analogous to cell fate de- cisions. Progress on these problems works towards illuminating the biological approach to self-assembly. i

To those that did not survive this dissertation

George F. Oster and Harold Lecar and Nico Linesh

May this work do honor to your lives and legacy. ii

Contents

Contents ii

1 Dynamics and Aims 1 1.1 Emergence and Model Levels ...... 1 1.2 Why Chimeras? ...... 2 1.3 Why Cowrie Seashells? ...... 2 1.4 Dynamics: Bifurcation, Continuation, and Chaos ...... 3

2 Dynamics of Linearly Coupled Subcritical Oscillatators 5 2.1 Introduction to and Motivation for Equations ...... 5 2.2 Linear Stability of Synchronized and Splay States ...... 10 2.3 Empirical Bifurcation Plots and Unbalanced Optimal Transport ...... 18 2.4 Conclusions and Synthesis ...... 22

3 Image Geometry and Dynamics: Information Extraction and Seashell Pattern 23 3.1 Geometric Approaches to Image Analysis ...... 23 3.2 Splines and Active Contours ...... 24 3.3 Seashell Pattern Generation ...... 25 3.4 Conclusions and Synthesis ...... 40

4 Cowrie Shape and Form 41 4.1 Introduction ...... 41 4.2 Cowrie Shell Observables ...... 43 4.3 Extracting Relevant Data ...... 49 4.4 Towards Models of Cowrie Growth ...... 52 4.5 The Physics of Wrinkling Sheets ...... 56 4.6 Model consolidation and a Coherent Story of Cowrie Form ...... 57

5 Conclusions 58 5.1 Future Directions ...... 58

Bibliography 59 iii

Acknowledgments

I would like to thank Edgar Knobloch for his patience and Michael DeWeese for his support. I would like to thank Dawn Song, George Oster, Padmini Rangamani, George Roderick and the ESPM Department, the NIH, the NSF, and the Cognitive Science, Physics, and Bioengineering Departments, Joseph Levy, and Leslie Feinberg for funding. I would like to thank Kate Chase and Susan Marqusee and the rest of the Biophysics Graduate Group for providing “enough rope to hang” myself and enough time to get myself untangled. I would like to thank HiP House and the Berkeley Student Cooperative: without this afford- able housing option (and community) I would have dropped out long ago. I want to thank Terry Regier of Cog Sci and Holger Meuller and Hilary Jacks of Physics for giving me the opportunity to teach classes of my own design. I want to thank the Compass Project for experience teaching — both students and teachers — and providing a network of similarly minded folk to think with. I would like to thank“math club” in all its iterations: Paul Glenn, Danny Broberg, Michelle Liu, and Ben Larson; Lunch Buddies Madeline Forester, Vanessa Carels, Sam Harding-Forester, Brian Isett, Ben McInroe, Ben Forester, and Alex Takeda. I would like to thank MCB/Neuroscience for the many faculty lunches they took me out to; and UC Berkeley, MBI at the Ohio State University, and The Champaulimaud Center for the Unknown for travel funds; and to thank Punit Ghandi, Kelly Clancey, Jacque Bothma, Jasmine Nirody, Aisha Wilson, Amy Shyer, Neha Wadia, Ryan Zarcone, Gautum Agar- wal, Jasha Sohl-Dickstein, Jean-Michel Mongeau, Alistair Boettiger, Kranthi Mandadapu, Padmini Rangamani, Richard Barnes, Anna Schnider, Julian Hassinger, John Haberstroh, Jesse Livzey, Michael Grabe, Ken Kim, John Neu, and the greater Biophysics/George Oster community for whatever I have gleaned purposely or in passing. I would like to thank Fred- erick Theunissen, Dan Rockshar, Udi Isacoff, and Philip Geissler for rotation projects, and Richard Kramer, Bruno Olshausen, Oskar Halletchk, David Lindberg, the Architecture Fab Lab, the geology department rock cutting facility, and David Stiegmann for their time and interest. I would like to greatly acknowledge George Oster and Harold Lecar for giving what they could. I would like to acknowledge Gloria Lee, Devyn Shafer, and — most importantly — Sarah Alice McCracken for the connection, tethering, and growth they fostered. I would like to thank the place and the tenor of Berkeley California for all I’ve learned from living here and I would like to acknowledge my family and friends for their unwavering support particularly my brother and sister Joshua and Rebecca Levy. Finally, I would like to ac- knowledge my aforementioned parents Joseph and Leslie: I wouldn’t be let alone be a PhD without your support. Thank you fam: we made it. 1

Chapter 1

Dynamics and Aims

1.1 Emergence and Model Levels

Cognitive Science has a conceptual frame that they refer to as Marr’s Levels[12], which con- tends there are three ways one can attack understanding a system: at the conceptual level, at the algorithm and representation level, and at the implementation level. This distinction is epistomologically helpful and is intellectually satisfying for someone not incredibly interested in molecular detail. Interestingly, following the tendency for things not to be named after their originator, this conceptual distinction was discussed in its entirety many years before by Shannon[21], and feels like good common sense in the“more is different” vein [3]. The distinction is as follows: when one is seeking to understand something there are basically three questions one can ask: Why, How, and What. The Why is the conceptual level which involves evolutionary and optimization arguments. The How is the level of emergence, con- densed matter, and statistical physics which tries to link microscopic details to macroscopic phenomena. In the cognitive frame, this would be how is information manipulated and stored in the wetware that is your brain to lead to percepts, conciousness, and whatnot. This is the level which I find most intellectually satisfying, as it avoids the “just so” stories of the level above it and the particular details of the level below it. The implementation level is the level of molecular Neuroscience, how do individual cells encode information, very specifically which molecules do what when. I find this level incredibly interesting yet overwhelming – the torrent of information and details at this level seems to get in the way of understanding, the ol’ seeing the forrest for the trees. This reminds of Poincare’s dictum “on fait la science avec des faits comme une maison avec des pierres ; mais une accumulation de faits n’est pas plus une science qu’un tas de pierres n’est une maison.” —Science is built up with facts, as a house is with stones. But a collection of facts is no more a science than a heap of stones is a house. It is the goal of this thesis to provide some tools engaging with the algorithmic layer, codifying models that provide hypotheses, synthesizing and predicting implentational details, and providing constraints on the computational possibilities. CHAPTER 1. DYNAMICS AND AIMS 2

1.2 Why Chimeras?

Chimeras are a strange situation where chaotic motion and synchronization coexist in iden- tical oscillators[1]. The usual reported biological motivation for studying this state is uni- hemispheric sleep and study of this state emerged from the study of synchronizing oscillators, a quintessential biophysical endeavor[4]. This state, which has since its discovery been shown to be realizable in many physical systems, is an example of a state whose components have only a dynamical identity. The only thing different between a chaotic and a synchronized oscillator is its dynamical history. This dynamical state allows oscillating agents to behave in an environment of their own creation effectively leading to differentiation of oscillators defined by niches provided by themselves. These oscillators can switch between dynamical phases and act with a quintessentially biological robustness: neurons maintain their synap- tic weights and dynamics with constant recycling of their proteins[36], stem cells respond to mechanical cues in their environment to undergo differentiation[42], there are many differ- ent balances of ion channels which can give the same dynamic[43] , and it has been shown that even individual neurons change their implementation but maintain their dynamical state over time. Chimeras are an example of a surprising dynamical emergence which is not amenable to current techniques of analysis. We are, in particular interested in subcritical chimeras, a dynamical state which has not yet been studied thoroughly to date [44]. All the chimera literature is with supercritical oscillators, meaning that the steady state loses stability when the oscillation emerges. We are interested in looking at chimera states where each individual oscillator maintains bistability with the steady state that that oscillation emerged from. This introduces another layer of complexity: can we find chimera states with some oscillators stable at the origin? This is even more interesting biologically, as you can now think of each oscillator as having two states: quiescent or active, much like a neuron. Looking at what dynamics the possibility of quiescence allows is the goal of this work and could provide insights into the basis of neural computation.

1.3 Why Cowrie Seashells?

I began studying seashells as they are an example of a closed neural loop: they construct their own patterns via feedback with those very patterns. This loop is like any other affector loop we just also have two dimensional readouts of both the input and the output of the network. These highly constrained beautiful images provide a proving ground for developing new views of computation. However, the seashell isn’t only a neural loop: mechanical forces — the elasticity of the which lays down the shell — gives rise to the seashell form. We are interested in looking at a shell with slightly more complicated than normal growth dynamics: a shell very popular with collectors — the . Cowries are a large gastropod family which produce shells of a peculiar shape as the mantle changes shape during their development. We argue that the development of hard shell is completely emergent from the growth and dynamics of this soft body. This connects to the newly vibrant field of thin CHAPTER 1. DYNAMICS AND AIMS 3 elastic sheets and provides an extremely biological example of pattern formation. A frontier of neuroscience is understanding how the body can be used to compute and sense things in ways dependent on its material properties and dynamical structures. Centeralized neural computation isn’t necessarily the way organisms determine behavior. If you are balancing on a beam you are best off letting the actuators and control loops of your legs figure out how to keep you upright rather than looking down. Human response time can be much faster than the time it takes a photon to transverse the eye to the brain. This idea seems to have general implications about how biological systems work: they must maintain a dynamic equilibrium. Organisms need to respond to their environments at the same time they are affecting it and it is also being changed by other external agents. Letting a body do the computation via a heuristic is cognition at the evolutionary scale. As discussed above, it is not abundantly clear where the information that gives an organ- ism its form and identity is stored. Generally in biology if there are many ways a problem could be solved evolution chooses all of them. Development must occur through some combi- nation of mechanical, chemical, electrical, and behavioral pathways, and we are interested in taking mechanics of development as seriously as possible, noting that the nonlinear mechan- ics of the tissues themselves are in general interesting open problems, and that intrinsically localizing the signal in space rather than in a chemical gradient on that space leads to a patterning system more robust to growth[28]. In the cowrie seashell we are interested in modeling the deviation in the internal spiraling dynamics and the formation of a perfectly periodic structure which forms along its aperture. The cartoon we develop requires the de- velopment of new theoretical tools in order to study its dynamics in a way that doesn’t get bogged down in formal nonlinear elasticity. Mechanical forces could underlie a lot of devel- opment and embodied computation seems to happen at both the cellular and the organismal scale. Understanding the dynamics of tissues in general is an exciting open problem, and the cowrie shell provides a history of the organisms developmental trajectory not normally accessible to an organ: a concrete static signal related to its growth. This makes the model one that is more easily falsifiable, as there are more constraints to meet. 
Biological pat- tern theory should step beyond just replicating patterns, we should try to provide falsifiable mechanistic insight. Having numerous seashell features to predict simultaneously provides enough constraint to lead to useful theory.

1.4 Dynamics: Bifurcation, Continuation, and Chaos

Dynamics is the study of complication emerging from letting a model of a system run its course over time, The integration of these differential equations often yield rich structures and patterns in time. Underneath this complication is a fundamental geometric simplicity, as there is no magic here, everything must continue from what was before it. How then does a system change? Through most of history, people demanded that their models were structurally stable [59] but with advent of the discovery of chaos, there has been a lot of work studying systems undergoing drastic radial change. We are particularly interested CHAPTER 1. DYNAMICS AND AIMS 4 in the hopf bifurcation [27], which is how oscillations emerge with growing amplitude but constant frequency as the system gets farther away from a steady non-oscillatory state. The problems described above requires the integration of biological, physical, mathe- matical, and computational ideas. It can be said that dynamics underlies reality and that geometry underlies dynamics. I enjoyed the opportunity these projects provided for me to study the connections between these two fields of math and its rich application to life and patterning. Bifurcation is the study of the dynamical attractors of a system, whether it be a limit cycle, a fixed point, or something chaotic. If you run your problem forward in time you can find attracting states, and if you run it backwards in time you can find repelling ones. Bifurcation analysis follows particular branches of fixed points as parameters are var- ied and you can watch stabilities flip, attractors and repellors born or die, and oscillations emerge from steady states. One often looks at these things locally via a linearization and one can use this jacobian and the eigenvalues it provides to tell lots of stability properties. Systematically taking small steps from a single attractor state, and continuing along it even when it loses stability is known as Numerical Continuation and is implemented by such soft- ware as AUTO [16]. These numerical continuation suites allows one to systematically probe bifurcation behavior and track where the continuation fails; a possible hallmark of chaos. Nonlinear mechanics is in general a field where you try your hand at solving unsolvable problems to some acceptable level of error. Often one studies systems just marginally more complicated than the simplest systems available. Systematic probing of dynamics in incre- mentally more sophisticated problems has a rich physical and mathematical history. In this domain one doesn’t necessarily try to generate physical or biological hypotheses, one more tries to uncover dynamically novel regimes, which would hopefully then be found in reality. The projects I describe subsequently attempt to both uncover new dynamical regimes and provide new ways of thinking about biological problems. 5

Chapter 2

Dynamics of Linearly Coupled Subcritical Oscillatators

2.1 Introduction to and Motivation for Equations

Synchronization is a fundamental feature of physical systems that describes a host of in- teresting phenomena: from the flashing dynamics of fireflies, Huygens’s clocks on the wall, or the millennium bridge. The modern study of these models was instigated by Winfree [4] and furthered by the work of Kuramoto [64] and others and started mostly interested in how oscillators of slightly different frequency can — when coupled together — find a compromise frequency and thus synchronize. Changing parameters in these systems lead to the discovery of all sorts of exotic and interesting dynamics. We were inspired by one such counter intuitive state: which Strogatz called the Chimera State.[1] This state is one in which a population of synchronized and a population of chaotic unsynchronized oscillators coexist in a system of identical coupled oscillators. This counterintuitive inversion of the synchronization phenomena — where different dynamics emerge from identical components — inspired a burgeoning field of dynamical research. We are interested in probing this and other exotic states in a system which ought to be qualitatively different from the dynamics of all the coupled oscillator systems studied above, and which may have interesting applications in turbulence research or in neuroscience. To understand how our system is different but connected to the extant research literature and how the dynamics of our system are both a reasonable and interesting extension to the coupled oscillators that came before it we need to first define some terms. A structurally stable system is one in which a slight variation in the parameters used to describe the system leads to only a slight change in the dynamical properties of that system. A bifurcation boundary is where structural stability no longer holds: on either side of the boundary you have qualitatively different dynamics. One could imagine that there are many different ways systems can change and that each transition over a boundary would need its own theory and explanation: however, if you zoom closely enough into any curve it CHAPTER 2. DYNAMICS OF LINEARLY COUPLED SUBCRITICAL OSCILLATATORS 6 appears to be a straight line — the very local dynamics of the bifurcation is described by the linearization of the system at that point and there are only a finite number of ways a dynamic can locally change from one variety to another. Much of bifurcation theory involves description of bifurcation boundaries at this level. There is the additional possibility of more global changes of dynamical properties of a system, but we will not be particularly interested in those sorts of dynamics here, mostly because the dynamics we are studying are in fact a local description of the natural phenomena we are interested in modeling; the location and prevalence of global bifurcations are predicated on the very higher order terms we have thrown away to have a system amenable to analysis to begin with. The theory of global bifurcations is interesting and can provide qualitative explanations to dynamics seen in reality, as the dynamics described by research in this field can often be found in the real system studied as the dynamical properties can be robust enough to survive the transition to reality, but we often have no way of quantitatively describing where and when these bifurcations occur in a real system, so thus the predictive power of global bifurcation boundaries is relatively limited. 
When one is studying dynamics one is not only interested in the possible trajectories of a system, but also if these trajectories are robust to tiny perturbations; if they aren’t you’ll be hard pressed to find them in reality. Harking back to ideas of linearization, if you zoom in closely enough to a point you can describe it’s dynamics linearly: a linear dynamical system can be solved by integration yielding an equation

x = Aeαt.

This system can either have a positive or a negative α and thus locally will either lead to growth or decay. A fixed point is one which has derivative zero, and thus remains constant over time. However, we do not know if small perturbations to this point lead to growth or decay. To study this we simply need to look at the linearization of the system and study the growth or decay of a small perturbation. We call a trajectory attracting if local dynamics decay to it and we call a trajectory repulsive if local dynamics grow away from it. We call an attracting state stable and a repulsive state unstable. We are interested in ways in which stable trajectories become unstable and vice versa, mostly because we are interested in using the machinery of continuation to probe the pos- sible dynamics of a system over wide parameter ranges and how the properties of single trajectories change as we vary system parameters. Qualitatively different behavior occurs when a trajectory gains or loses stability, and the study of these changes in coupled oscillator dynamics is what we are primarily interested in here. There are two main ways oscillations emerge from steady states: one is the local hopf bifurcation and the other is the global saddle node on an invariant cycle (SNIC) bifurcation. The amplitude of the emergent oscillation after a hopf bifurcation grows as you vary the parameter past the bifurcation while the frequency of that oscillation remains the same, while after a SNIC bifurcation the amplitude of the oscillation is an inherent property of the bifurcation while the frequency of the oscillation changes. We are interested in a situation CHAPTER 2. DYNAMICS OF LINEARLY COUPLED SUBCRITICAL OSCILLATATORS 7 that kind of lies between these two pathways which has not been engaged with too much (perhaps at all) in the research literature: what sorts of oscillations occur in systems of coupled oscillators that have a discontinuous transition underlying their dynamics – where we dramatically change the process whereby the same limit cycle emerges from a given hopf bifurcation point. Bifurcations can be either subcritical or supercritical, where subcritical bifurcations have unstable objects appear and supercritical bifurcations have stable objects appear at the bifurcation boundary. All the systems of reduced coupled oscillator systems have been of the supercritial variety. Here we investigate how the dynamics of coupled oscillators change as you change the criticality of the system, leaving the dynamics the same at the limit cycle. Most careful bifrucation studies expand around a point. Here we elect to expand around a cycle as we are interested in how changing the local structure around the origin changes the global dynamics, leaving the dynamics at the limit cycle as unchanged as possible. We start with a phase-amplitude equation called the stuart-landau equation, which has been broadly studied in the field. It is a good oscillator to study because it is directly related to the normal form of the hopf bifurcation. Normal forms are a way to reduce systems of many parameters to as few parameters as possible while still capturing all the possible dynamics of the system. This is accomplished through clever change of variable and systematically presuming the unimportance of higher order terms. The first modern study of coupled oscillators [63] simultaneously solved three integral equations with six dimensionless parameters. 
You’ll be hard pressed to systematiacally probe the possible dynamics in such a large parameter space and the theory of bifurcation is mostly worked out in differential equations, so it would be difficult to apply the approaches mentioned above. This complicated system was only of phase oscillators, meaning the amplitude of the oscillation is fixed. Kuramoto recognized the importance of the ideas inherent in the paper above and basically gave rise to an entire field of study: the dynamics of coupled oscillators. The Kuramoto equation is a differential equation with much fewer parameters [2] yet recovers all qualitatively possible dynamics of the more complicated Winfree system. The reduced dynamics can over represent how pervasive particular dynamical patterns are as the changing of the reduced parameters does not correspond to any real physical knob in the system. The Kuramoto oscillator can be recovered from the more general Stuart-Landau system by only integrating its phase dynamics.

Our Equation and its Motivation Stuart-Landau models each oscillator as a hopf bifurcation point – the dynamical route to oscillatory behavior which occurs when linearization around a steady state gives rise to a purely imaginary eigenvalue. The oscillation part of all variable amplitude oscillations arise through this hopf mechanism and Stuart-Landau Oscillators is just the normal form of this bifurcation to third order in the amplitude of this oscillation. The hopf bifurcations modeled in this system is supercritical, meaning that at the hopf bifurcation point a stable steady state loses its stability. We are interested in the subcritical case where instead an CHAPTER 2. DYNAMICS OF LINEARLY COUPLED SUBCRITICAL OSCILLATATORS 8 unstable limit cycke is born. The system we are studying – the hopf normal form to 5th order in the amplitude – has a superposition of hopf bifurcations at the origin where there is a larger amplitude stable oscillation in addition to a smaller amplitude unstable one, this is an example of a degenerate hopf bifurcation, also known as a Bautin point. We start with the mean-field coupled stuart-landau equation:

2 z˙ = z (1 + c2i)z z¯ + κ(1 + c1i)( z z), (2.1) − h i − where z is the population mean of all the oscillators. We take as our oscillator –where we h i p a additionally unfold in a — where the amplitude of our unstable limit cycle is a−1 :

2 3 2 z˙ = az (2a 1 + c2i)z z¯ (1 a)z z¯ + κ(1 + c1i)( z z). (2.2) − − − − h i − We are interested in the range α : (- ,1] as α = 1 and as α the unstable cycle approaches radius 1. The nullclines of the∞ amplitude of the limit→ cycle −∞ of this uncoupled system can be viewed in 2.1 We got this oscillator equation by adding a 5th order term with a real coefficient, matching the jacobian at R=1, the fixed points at R=1 and 0, and keeping the phase dynamics the same as in Stuart-Landau. We will discuss different regimes of dynamics at various values of a. There are reduced forms of this equation that are useful for numerical simulating these equations. The polar form is useful as it represents periodic solutions, mapping hopf bifurca- tions in the reduced system to torus bifurcation in the full equations above. This periodicity is nice as it allows us to start our continuation from easy to find fixed points rather than from a periodic state found via direct numerical simulation. Most of the simulations and calculations are done by projecting into the amplitude of oscillator 1 (R1) the amplitude of oscillator 2 (R2) and the phase difference (θ). This is equivalent to moving into the rotating frame of the first oscillator. In order to make this reduction, we need to declare what our mean field is. Ku Ott and Girven [32] use as their mean field a weighted average of the two states to allow for the analysis of clusters, instead of just single oscillators. We note that the cluster reduced dynamics don’t take into consideration the integrity of the clusters: a small within cluster perturbation may me unstable despite the stability of the reduced system. We refer to the stability of the reduced system as the orbital stability. Thankfully, if we are only looking at two oscillator states, there is no cluster stability and these equations describe the dynamics faithfully. For our dynamical studies we will mostly be interested in the fa = .5 case, where this equation represents two identical coupled oscillators. CHAPTER 2. DYNAMICS OF LINEARLY COUPLED SUBCRITICAL OSCILLATATORS 9

3 5 ρ˙1 = (κfa κ + a)ρ1 + (1 2a)ρ + (1 a)ρ − − 1 − 1 + κ(1 fa)ρ2(cos(φ) + C1sin(φ)), − 3 5 ρ˙2 = (κ(1 fa) κ + a)ρ2 + (1 2a)ρ + (1 a)ρ − − − 2 − 2 + κ(fa)ρ1(cos(φ) C1sin(φ)), − 2 2 (2.3) φ˙ = κC1(2fa 1) C2(ρ ρ ) − − 1 − 2 ρ2 ρ1 + κC1cos(φ)((1 fa) fa ) − ρ1 − ρ2 (1 fa)ρ2 faρ1 κsin(φ)( − + ). − ρ1 ρ2 The aforementioned authors also used the nuclines associated with the three equations above to eliminate the third equation and graphically find fixed points in ρ1 and ρ2 corresponding to stable orbits. They were particularly intersted in studying the break down of two cluster states into a chaotic attractor as the coupling is slowly varied in direct numerical simulation until the extant cluster state becomes unstable at its current fractional state. They showed that when this happens an oscillator is ejected from the newly unstable cluster and the whole system skirts along a chaotic attractor until it coalesces into a new clustering still stable in the new dynamic, unsurprisingly always one oscillator smaller as the dominant mode is always the one closest to being unstable. Despite the numerical convenience of the above equation, this reduction introduced an artificial singularity at the origin, meaning we cannot study transitions to and from this state in this paramterization. An alternative parameterization is to cast the second oscillator back to cartesian coordinates by declaring x = ρ2cos(φ) and y = ρ2sin(φ) yielding the following:

3 5 ρ˙1 = (κfa κ + a)ρ1 + (1 2a)ρ + (1 a)ρ − − 1 − 1 + κ(1 fa)(x + C1y), − 2 2 2 2 2 x˙ = ρ1faκ + x(x + y ) + (x(1 2a) C2y)(x + y ) − − κ(1 fa) 2 c1(fa 1)xy 2 + − y + − + (a faκ)x + (ρ1C2 (1 + 2fa)C1κ)y, (2.4) ρ1 ρ1 − − 2 2 2 2 2 y˙ = ρ1faκ + y(x + y ) + (y(1 2a) + C2x)(x + y ) − − κC1(1 fa) 2 (fa 1)xy 2 + − x + − + (a faκ)y + ( ρ1C2 + (1 + 2fa)C1κ)x). ρ1 ρ1 − − Which enables us to continue from states with the second oscillator at the origin. A way to study cluster integrity is to split each cluster into two pieces and look at the mean and difference between each sub part. Splitting each cluster into perfect halves is not physically unrealistic, as any small perturbative difference to a cluster – regardless as to how many oscillators participate – still lead to the same linearized equations. CHAPTER 2. DYNAMICS OF LINEARLY COUPLED SUBCRITICAL OSCILLATATORS 10

Z nullcline for differnet values of a 0.4 | | a = 1 a = 0 0.3 a = -0.5 a = -1 0.2 a = -1.5 a = -10

0.1

Z˙ | |0.0

0.1 −

0.2 −

0.3 −

0.4 − 0.0 0.2 0.4 0.6 0.8 1.0 1.2 Z

Figure 2.1: Radial Nullclines

2.2 Linear Stability of Synchronized and Splay States

We can think of synchronization as the situation in which z = z and thus solve 2.2’s dynamic as h i 2 3 2 z˙ = az (2a 1 + c2i)z z¯ (1 a)z z¯ . (2.5) − − − − bt 2 4 Applying the ansatz z=βe , yields: b = a (2a 1+c2i)β +(a 1)β . As we want b = 0 2 a − − − <{ } we find that β = a−1 or 1, recapitulating the radial fixed points discussed above, and thus 2 b = β c2, yielding possible limit cycle solutions: ={ } − r −ic t a −i a c t z = e 2 and z = e a−1 2 , (2.6) sync 1 sync mid a 1 − as expected from the nullcline consideration above. Next, we consider the dynamics in region two, where the oscillators are splayed out in such a way that z = 0. Our situation is much more complicated than the corresponding Stuart Landau regionh i and will be a prime object of study. The equation we are solving:

2 3 2 z˙ = (a κ(1 + c1i))z (2a 1 + c2i)z z¯ (1 a)z z¯ , (2.7) − − − − − can be solved with the ansatz z = √βei(bt+φ). We get b by forcing the real part zero and solving a quadratic equation for β, which yields solutions:

p −i[(κc1−c2β−)t+φ−] p −i[(κc1−c2β+)t+φ+] zsplay = β−e and β+e (2.8) p 2a 1 1 + 4(a 1)κ β = − ± − , ± 2(a 1) − CHAPTER 2. DYNAMICS OF LINEARLY COUPLED SUBCRITICAL OSCILLATATORS 11 where β is by necessity real. Thus this state no longer exists when a < 1 1 and has beta − 4κ values at a=1, via l’Hopital’s rule, of β+ = 0 and β− = 2. This means that for sufficiently small magnitude positive κ (weak attraction) this splay− state exists over a meaningful range 1 of subcritical oscillators, especially for small κ. Once κ becomes 4 , however there are no subcritical oscillators that have an existing splay state of this type. Finally, we look at solutions associated with the chaotic regime, which in the Stuart- landau case has a peaked power spectrum. Aiming to explain the ρ shaped chaotic dynamics found in the supercritical chaotic region Hakim Chabanol and Rappel [9] [22] were able to recreate the ρ shaped dynamics by studying a reduced representation of a single oscillator coupled to an external mean field. We represent the mean-field by a single dominant mode by taking z = Reiωt and considering the motion of a single oscillator coupled to it. We move intoh ai rotating frame and change variable from z to B and t to τ via the ansatz i(ωτ+φ) z = γBe . Substituting and choosing φ = arctan c1, γ = √κ, and  = κ yields

a κ 3 5 Bτ = [ − Ωi]B (2a 1 + c2i)B + (a 1)κB + F, (2.9) κ − − − − q 2 Ω = c1 + ω F = 1 + c1R, which reduces all possible dynamics to 5 parameters where to look exhaustively at dynamics in the Ω F plane we would need to choose an a, a κ, and a c2. This equation can be analyzed× self-consistently by changing our mean-field average to instead an average over the temporal dynamics of a single oscillator. Declaring Reiωt = z = √κ Beiωt eiφ: h i h i

iφ 1 ic1 F R = √κ B e or B = 2− , h i h i c1 + 1 √κ

which when plugged into with B τ = 0, yields two equations for two unknowns (the real and the imaginary parts of the equation)h i which can then be used to solve for Ω as a function of F . We note that B can be written as an average over each accessible attractor (in the a=1 case this correspondsh i to a weighted average over a limit cycle and a fixed point) the above equation instead lets us trace out the self consistent solutions as a function of this fraction. This gives us some insight into this strange attracting state. As we note that adding any z = 0 solution (i.e. one of the locked states) to this equation yields the same B equation handi we could thus look at the superposition of 2.2 ?? and look at the stabilityh ofi composed states with some fractional component of β+,β−,B, and zero states:

zcomp = zsplay + fρBf− + f+ + f0 + fρ = 1. (2.10) P Finally, we can also consider cluster states with z = i fizi for small i, the cluster states. In extensive surveys of the dynamics of many coupledh i supercritical stuart-landau oscillators mostly groups of two clusters, rarely three, were found in the clustering regime of the bifurca- tion diagram. It would be interesting to further analyze this clustering regime, similar to the work done in [31], and find a cluster singularity point, the parameter values where a periodic CHAPTER 2. DYNAMICS OF LINEARLY COUPLED SUBCRITICAL OSCILLATATORS 12 state loses stability, and is stabilized by splitting into two clusters. The two cluster state has also been shown to underlie the chaotic inverted ρ state mentioned above, and seeing how this connection is different in the subcritical state would be interesting as well. Additionally, connecting the loss of stability of the two cluster states to the amplitude mediated chimera states found in [51] in the supercritical (let alone the subcritical) would allow us a richer picture of the dynamics. Furthermore, it would be interesting to more closely probe the clustering region to see if we can find certain parameter regimes that see clustering states that rescue the unstable two (or three) cluster states (perhaps 5 clusters?) by splitting – via a cluster instability – into stable high cluster number states. Also, within this framework we can compose zsync states with zero states and thus study clusters with z = 0. It would be interesting if we could find coexistance of these two possible cluster solutionsh i and watch dynamical transitions between different cluster states.

Linear Stability To calculate the linear stability we add a small complex value to the amplitude of the so- lution, linearize in our small amplitude and then calculate the eigenvalues of this linearized dynamic. If the real part of all the eigenvalues are negative then the solution is stable. For stationary solutions with two pertinant eigenvalues, we note that the sum of those two eigenvalues must be negative and the product of the eigenvalues must be positive (station- ary solutions don’t have a complex component). In this situation just finding where the determinant and the trace of our matrices equal zero gives the boundary of the domain of stability. If we are linearizing around a complex state or expect complex eigenvalues we still require the sum of the eigenvalues to be ¡ 0 as all complex eigenvalues come in complex con- jugate pairs and thus the imaginary part of the sum cancels out. For the second condition demanding the two eigenvalues are the same sign we can explicitly calculate the real part of the eigenvalues and multiply and require the product to be positive. If we have more than two complex eigenvalues we can utilize some advanced matrix mathematics and theory of polynomials — the Generalized Routh-Hurwitz Theory [fr˙gantmacher˙applications]— to give a condition that determines when all eigenvalues have negative real part and thus our dynamic has stability.

Synchronized States The main trick we use to linearize these equations is to note that, as the coupling is linear, we can decompose it into a mean zero and a mean term. This gives us two eigenvalues per an oscillator for each part of the coupling perturbation, thus leading to a system of 4 identical eigenvalues for each oscillator. We refer to these two eigenvalue sets as the cluster and the −ic2t orbital eigenvalues. For the amplitude zsync 1 = (1 + αj)e we have a matrix composed of N-1 cluster and 1 orbital matricies: CHAPTER 2. DYNAMICS OF LINEARLY COUPLED SUBCRITICAL OSCILLATATORS 13

  κ 2 c1κ − − c1κ 2c2 κ − − −  2 0 − 2c2 0 − as we want our 2N-2 cluster eigenvalues to be negative we can make sure they are negative under the constraint that their sum is negative and their product is positive. This leads to the following constraints which if satisfied tell us that zsync 1 is stable.

λ1 + λ2 : 2κ 2 < 0 − − 2 2  (λ1) (λ2): κ c + 1 + κ (2c1c2 + 2) > 0 < < 1 Thus the synchronized state has the same region of stability for any value of a, as expected, and the synchronized state is at least locally attracting and stable as long as κ > 1 and 2 − κ(c1c2 +κ(c1 +1)+2) > 0, since we enforced the jacobain at the limit cycle to be independent of a in our equation definition. You can see the stability of this state traced out in the green on the right side of 2.2 and as the thick dividing line in the empirical bifurcation plots Following the same logic as above we also calculate the region of stability for zsync mid. We are able to calculate the region of stability for this state which is always unstable in the uncoupled case. I am quite intrigued to see where this branch stabilizes. Since this branch only exists for a < 0 we take a = a for convenience . The matrices and eigenvalues take up too much space to be included−| here| but we will provide the same conditions as above for this state. The zsync mid solutions are stable when they exist (a < 0) and

16a2 a + 11a2 + 6a | | < 0, − a2 + 2 a + 1 | | 8 6 2 6 6 4 2 4 2 112a + 8a c2 + 760a a + 2225a + 48a c2 a + 120a c2 4 4 2 2 2 2 | | 2 2 | | 2 2 +3662a a + 3695a + 160a c2 a + 120a c2 + 2332a a + 895a + 48c2 a + 8c2 | | | | | | 8 6 2 | | 6 +190 a 16a 8a c + 104a a | | − − 2 | | > 0. +289a6 48a4c2 a 120a4c2 + 446a4 a + 415a4 160a2c2 a 120a2c2 − 2 | | − 2 | | − 2 | | − 2 +236a2 a + 79a2 48c2 a 8c2 + 14 a + 1 + 17 | | − 2 | | − 2 | | 2 (a8 + 8a6 a + 28a6 + 56a4 a + 70a4 + 56a2 a + 28a2 + 8 a + 1) | | | | | | | | and they are internally stable when

2a2κ + 16a2 a + 22a2 + 4κ a + 2κ + 6 a | | | | | | < 0, − a2 + 2 a + 1 | | CHAPTER 2. DYNAMICS OF LINEARLY COUPLED SUBCRITICAL OSCILLATATORS 14

2 2 2 2 2 48a + c1κ 6c1c2κ + 8c2 + κ + 16κ a + 118κ 56 a 376 − 8 8 | 6| − | 6| − 6 2 128a κ 448a 6a c1c2κ a 42a c1c2κ + 16a c2 a − − | | − 6| 2| +104a c2 +1014a6κ a + 3514a6κ 3496a6 a 11944a6 |4 | − 4 | | − 4 2 126a c1c2κ a 210a c1c2κ + 288a c2 a − | | − 4| 2| +440a c2 +6958a4κ a + 8610a4κ 4 4 2 | | 2 23336a a 28520a 210a c1c2κ a 126a c1c2κ − | | − − |2 |2 − 2 2 +400a c2 a + 216a c2 2 2 2 2 | | +6818a κ a + 3374a κ 22328a a 10936a 42c1c2κ a 6c1c2κ | | − | | − − | | − +64c2 a + 8c2 + 954κ a + 118κ 3064 a 376 2 | | 2 | | − | | − > 0. − a8 + 8a6 a + 28a6 + 56a4 a + 70a4 + 56a2 a + 28a2 + 8 a + 1 | | | | | | | | This region of stability can be seen traced out in yellow in the lower right hand part of 2.2.

Splay States

We can also follow a similar logic for the zsplay states we derived above. Here, as above, we −i[(κc1+c2β)t+φi] linearize about zsplay = (√β + i)e yielding

−iφi iφ 2 3 2 2 ˙ = κ(1 + c1i) ie e + (a + aβ (3β 4) + (2 c2i 3β )β κ) h i 2 2 − 2 − 4 − − (2.11) + ¯(2aα (α 1) + c2iα 2α ), − − where we can decompose into mean (orbital) and interior (cluster) dynamics as above. The cluster integrity (with e−φi = 0 follows the same logic as the sync states, yielding N-2 sets of two eigenvalues, buth the orbitali dynamic is a bit more complicated: we note that we can project each oscillator term in 2.11 oscillator zk into a space dependent on its own phase i.e. −φk φk into ae + be then the i / term will be rewritten h i

1 X iφk−iφ 1 X i2φi −φ −φ i = aie = b1 + b2e = b1e + b2δe , (2.12) h i N N i k where δ = P ei2φi . This gives us a four dimensional space (the real and imaginary com- ponents of b1 and b2) which is orthogonal to the cluster dynamic. Thus we have a six dimensional space which define our splay splay stability and we need to be stable in all of them. The four eigenvalues in this subspace can be found by plugging ?? into ??, setting  =  , and projecting 2.11 onto (b1, b¯2, b2, b¯1), yielding the following linearization: h i   + A1 A2 ∆ 0 K  A¯2 A¯1 0 0    (2.13)  0 0 A1 A2  0 ∆¯ A¯2 ¯ + A¯1 K CHAPTER 2. DYNAMICS OF LINEARLY COUPLED SUBCRITICAL OSCILLATATORS 15 where N 1 X 2iφk = κ(1 + c1i), ∆ = δ, δ = e , K K N k 2 4 A1 = a (4a 2 + c2i) α 3(1 a) α κ, − − | | − − | | −

2 4 A2 = (2a 1 + c2i) α 2(1 a) α , − | | − − | | with α referring to both β+ and β− respectively. First we notice that if δ = 0 2.13 reduces to a two by two matrix and its complex conjugate. The eigenvalues of that matrix are roots of: 2 G0 = λ + g1λ + g2, with g1 = ( + A1 + A¯2), − K 2 2 g2 = A¯1 + A1 A2 , K | | − | | and thus we have G0 stability when 2 2 (g1) > 0 and (g1) (g2) + (g1) (g1) (g2) (g2) > 0. < < < < = = − = Multiplying 2.13 out leads to the following polynomial in terms of G0: 2 2 Gδ = G0G¯0 ∆ A1 . − | | | | Our splay states will be stable when all all the roots of the polynomial above have negative real parts. To learn where this is true we apply the Generalized Routh-Hurwitz criterion, which, though convoluted, gives a condition for all roots (and thus eigenvalues) having negative real component (and thus spectral stability). To apply this condition to a quartic 4 3 2 4 f(z) with no imaginary highest order term if(iz) = b0z + b1z + b2z + b3z + b4 + i(a0z + 3 2 a1z + a2z + a3z + a4) is arranged into the following four determinants     a0 a1 a2 a3 a4 0    a0 a1 a2 a3 b0 b1 b2 b3 b4 0          a0 a1 b0 b1 b2 b3   0 a0 a1 a2 a3 a4  1, det , det   , det   ,  b0 b1  0 a0 a1 a2  0 b0 b1 b2 b3 b4        0 b b b 0 0 a a a a   0 1 2  0 1 2 3   0 0 b b b b   0 1 2 3      a0 a1 a2 a3 a4 0 0 0     b0 b1 b2 b3 b4 0 0 0       0 a0 a1 a2 a3 a4 0 0       0 b0 b1 b2 b3 b4 0 0   det     0 0 a0 a1 a2 a3 a4 0       0 0 b0 b1 b2 b3 b4 0       0 0 0 a0 a1 a2 a3 a4 0 0 0 b0 b1 b2 b3 b4 CHAPTER 2. DYNAMICS OF LINEARLY COUPLED SUBCRITICAL OSCILLATATORS 16 which are required to be all positive. In our case:

4 3 2 iGδ(iλ) = iλ + (g1 +g ¯1)λ (g1g¯1 + g2 +g ¯2)iλ (g1g¯2 + g2g¯1)λ + i(g2g¯2 A1A¯1∆∆)¯ − − − so:

a0 = 1 b0 = 0 a1 = 0 b1 = 2 (g1) < a2 = (g1g¯1 + 2 (g2)) b2 = 0 − < a3 = 0 b3 = 2 (g1g¯2) − < a4 = g2g¯2 A1A¯1∆∆¯ b4 = 0, − 2 b3 a2b1b3−b3 so thus via the determinants above we require b1 > 0, > a2 and 2 > a4 for our b1 b1 system to have Gδ stable splay states. You can see where these states are stable in the results section with the key provided in the caption.

Results CHAPTER 2. DYNAMICS OF LINEARLY COUPLED SUBCRITICAL OSCILLATATORS 17

Figure 2.2: What we have here are κ vs c1 plots for varying values of a (the y label). The left columns are the β− states, the middle the β+ states and the right column is a summary of all the stabilities we’ve calculated. For the β− states the red marks the region of existence, the green the the area of cluster stability and the blue the region of G0 (the weaker stability criteria which corresponds to the solid lower left line in the subsequent plots. For β+ the green corresponds to the weaker Gδ criteria, red are states that have cluster stability, lighter red are states that exist and blue are states with G0 stability. but without considering cluster stability. The final panel summarises the stability of all the states we have considered. Green is the region of stability for the amplitude 1 sync state, yellow the region of stability for the mid state, while blue corresponds to the region of G0 cluster stability while the light blue corresponds to regions cluster stable but not G0 stable. The red corresponds to the region of G0 cluster stability while the light red corresponds to regions of cluster stability. Regions worth a remark are the tiny slicer at a=-.33 that has both sync states stable and that our regions of splay state cluster stability seems to correspond petty well with the dynamics of the splay state uncovered in the next section. CHAPTER 2. DYNAMICS OF LINEARLY COUPLED SUBCRITICAL OSCILLATATORS 18 2.3 Empirical Bifurcation Plots and Unbalanced Optimal Transport

We were interested in more systematically probing the clustering region discussed above and to do so we developed an empirical bifurcation technique building off recent advances in Optimal Transport. We use a new (and inexact) technique to approximate the Unbalanced Optimal Transport between two distributions which we derive from direct numerical simula- tion of our system of coupled oscillators. Optimal transport currently a hot topic as people contend that this branch of mathematics is the new perspective needed to make sense of and advance the modern revolution in deep learning. I’m less enthusiastic about this, but I am interested in how these optimal transport ideas connect to ideas from Information Geometry, statistical mechanics, and information theory in general. Regardless, we began this foray by trying to think of a good order parameter for our coupled oscillator system so we can mean- ingfully represent bifurcation boundaries in this high dimensional system. We decided that the right way to consider the system was to look at the mean-field over time, as it is both the dynamical forcing to the system and permutation invariant, as we are neither interested in the system differences that come from where the oscillators are stored in memory nor intend to privilege the dynamics of one oscillator over the others. We take the power spectrum of fourier transform of a down sampled version of the last 10 percent of the direct numerical simulation to throw out transients and phase information. As we found that the strange and peculiar chaotic inverted ρ dynamics persists to systems of very small number of oscillators, we study ensembles of twelve oscillators. We then use Unbounded Optimal transport to estimate the Wasserstein Earth Mover Distance between two adjacent distributions in either of the two bifurcation parameters. We then plot the magnitude of the earth mover distance between the fourier representation of adjacent dynamics to show where large changes in dy- namics occur. Using this technique we study both how the trajectories of a single initial condition changes as we vary parameters or study how the dynamics of an ensemble changes. We work in the paradigm of Optimal Transport because it is useful to a symmetric distance measure and it allows local differences in the distribution to be more preferred than global differences. For example, a L1 metric would not differentiate positively between a slight shift of the entire distribution to the right versus the emergence of a new small peak in the spectrum. We need to use a new extension of the general ideas of optimal transport because our case is not the typical situation of comparing two probability distributions (which have the same amount of stuff in any case, as all probability distributions integrate to one). We are not interested in throwing out the magnitude of our mean oscillation and thus do not want to normalize. To deal with these unbalanced distributions we need to use unbalanced optimal transport which allows the creation and destruction of probability particles. The earth mover distance is the minimal amount of sand particles you would need to move from one sand pile to make it identical to another sand pile. The generalization of this concept and fast approximations to the unbounded transport distance between two distributions was developed by [11][10]. We implemented a version of this algorithm in julia. CHAPTER 2. DYNAMICS OF LINEARLY COUPLED SUBCRITICAL OSCILLATATORS 19 Results

Figure 2.3: Empirical Bifurcation for a=1,.005,-1,.005 for a single initial condition. The underlying plot is from [51] and marks out the splay (lower left) , the dark black like the region of sync stabiity. C is a region of chaos. CL is a region of clustering and the dashed line is where cluster states are no longer stable in the stable sync region. The lower dashed region correspnds to regions with Gδ stability and the middle slightly darker region is the region of stability for their Amplitude Mediated Chimera State. The darker the color means that a tiny upwards paramater step doesn’t change the dynamical regime you are in. The dark circles in the clustering region correspond to clusters. It would be interesting to try to quantify basins of cluster stability. It is also remarkable that a weakly subcritial and a weakly supercritial dynamic appears quite similar and it is interesting that there appears to still be a tiny difference between the sync and the splay state and that as expected from the linear stability analysis most of the intersting dynamics moves closer (and probably over the κ = 0 axis). CHAPTER 2. DYNAMICS OF LINEARLY COUPLED SUBCRITICAL OSCILLATATORS 20

Figure 2.4: Empirical Bifurcation for a=-.005,.005 for a differnet initial condition on the left and the right. This plot looks at initial condition dependence which seems fairly localized to the stable sync side. It would be interesting to run a few more of these and maybe try to get a handle on how consistent the yellow boundary is and to perhaps study what it is a dynamical trace of. At a cursory level analysis it would seam that the only changes between weakly super critical and weakly subcritical are small changes in the sync unstable clustering region. CHAPTER 2. DYNAMICS OF LINEARLY COUPLED SUBCRITICAL OSCILLATATORS 21

Figure 2.5: all from same initial condition ,a=-.16,-.02,-.08,-.16,-.25,-.5 It appears that that flaming yellow curve is the herald of a new dynamical state emerging from the Gδ stable splay position. It would be very interesting to get a handle on what this emerging state is and how the sync coexisting clusters fuse with increasing a. CHAPTER 2. DYNAMICS OF LINEARLY COUPLED SUBCRITICAL OSCILLATATORS 22 2.4 Conclusions and Synthesis

As the study of simple coupled oscillators becomes more and more interesting and popular [48][5], there is space for a through exploration of even more reduced dynanics than we mention here. Our future work will be to make sense of these larger scale dynamics by thoroughly studying the two oscillator case, leveraging the tools of numerical continuation to thoroughly probe the local dynamics of how two oscillators that are either in sync (in phase) or in splay (out of phase) and starting in the bistable state with one oscillator at the origin and the other on the limit cycle. This work is in good company with other small systems being studied today and sets us up with some ideas about where to start looking for that analysis.

Connection to the Aging Transition? Daido has been doing a bunch of work looking at synchronization in groups of coupled oscillators as individual oscillators “wear” or age into being forced to the value 0. The subcritical oscillator can maintain it’s steady state at the origin so it could be interesting to look at some of the dynamical phenomena that Daido[14][13] found in the context of a dynamical oscillator that can find zero on its own, rather than being forced to zero as it “breaks.” It would be very interesting to also study these sorts of metadynamics in out alternative oscillator system. 23

Chapter 3

Image Geometry and Dynamics: Information Extraction and Seashell Pattern

3.1 Geometric Approaches to Image Analysis

In some sense image analysis is the inverse of pattern formation. Here we are interested in extracting geometric information from images in order to quantify properties of seashells, as discussed in the next chapter. We use this discussion of pattern and and geometry to also remark on and report what started our interest in seashells, modeling their patterns as a neural mediated pattern formation feedback loop.[20] [8] [17] This was interesting as it is a very reduced system in which to explore how neural systems can both interact with and effect the world, a low dimensional and visual example of the computational ability of neural systems. The system is two dimensional, the leading edge of the shell representing space and the transverse extent of the shell representing time, as the shell is deposited in a step-wise bout based manner. The goal of neural modeling is to explain how input is encoded into the system and how it comes out the other side as action. It seems neural modeling in general is much more interested in the representation part of the story than closing the action feedback loop to fully model a neural system. The closed neural system of a seashell can be fully modeled from beginning to end and any modeling of this loop can be easily tested against experiment (aka looking at shells). This system is an ideal proving ground for engaging with the kernel of what neural systems are for: engaging with an environment to create action. The snail creates its own environment (previously deposited pattern) and has clearly defined output (the next bit of pattern). The aim of grappling with this kernel is what began this whole line of inquiry. Towards this pattern formation end, we got excited about the geometry of a particular type of seashell which tend to have interesting yet to be modeled patterns — the shells of the family Cypraidae. Cowries as they are commony known are a suprisingly understudied [45] CHAPTER 3. IMAGE GEOMETRY AND DYNAMICS: INFORMATION EXTRACTION AND SEASHELL PATTERN 24 and highly collected family with both intricate geometry and pattern. We found our study converging on images from two directions: the generation of images via a neural patterning algorithm with the hope of extending the model we have to more sophisticated patterns and the summarization of images, either extracting out lines and curves from two dimensional images in order to report the geometry of these cowrie shells or by developing methods to think about how much positional information is encoded in a single seashell pattern which allows us to grapple with the question of why the patterns are there to begin with. Exploring these two themes — geometry and images — is the purpose of this chapter. We first describe some techniques of finding and representing geometry from images and then report our work on seashell neural pattern formation.

3.2 Splines and Active Contours

Given a two dimensional image we want to extract a contour that corresponds to seashell-relevant information. Towards this end we use the technique of active contours, which in turn is built on the technology of splines [15][30][50]. A kth order spline has basis functions which are nonzero only over k+1 intervals, the spaces between n chosen points (called knots). These knots are used to create a local basis via Newton divided differences, which is a kind of inverse of a Taylor approximation: instead of representing a function using derivative information at a single point you are expanding around, you use difference information between adjacent data points to represent your data with an interpolating polynomial of order k. The procedure leads to n + k basis functions that can be used to perfectly interpolate all your data, and the resulting spline is smooth to the (k−1)th order. If we do not want our interpolation to faithfully pass through every point, but instead to yield a smooth and sensible curve through our data, we need a smoothness criterion that lets us hand-tune between a faithful representation of the recorded points (a weighted L2 error) and the smoothness of the curve (the sum of the discontinuities in the kth derivative at the knot boundaries). To get a spline that does not necessarily pass through every point we instead minimize E = (derivative discontinuity) + p [L2 error − S], where S is a chosen smoothness parameter and p is the variable being minimized over. The interpolation works by adding knots algorithmically until the spline corresponding to E_min also has an L2 error less than S. This trade-off allows us to fit both the data curve and a baseline (which is required to extract information about seashell shape) in a rational way: both splines are parametrically defined with the same parametric variable, so subtracting one spline from the other along that parameterization yields the signal against a geometrically accurate baseline, which we will see is necessary for extracting the information we need. Effectively, this smoothness parameter interpolates between a perfect spline interpolation and an L2 fit of the data with a kth order polynomial.

Active snakes generalize the technique above: instead of the least-squares error and the derivative discontinuity, the energy minimized is a weighted sum of internal and image energies. The internal energy is a weighted sum of the magnitudes of the first and second derivatives of the spline, encoding how stretchy and how bendable the spline is, and the image energy is a function of image properties such as the intensity, the gradient, or the Gaussian-blurred squared Laplacian, which can be used to bias the snake towards image edges. Also, instead of a discrete minimization, the generalized splines used in the active snakes model move into a continuous space, turning the functional minimization into time-dynamic Euler-Lagrange equations (hence the "active" part of active snakes), and the discrete sum over spline basis functions becomes an integral over a spline basis which is invariant under translations, rotations, and similarity transformations. To see these methods in practice please look at the cowrie chapter.
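As a concrete illustration, the sketch below shows the two ingredients in Python: a smoothing spline whose parameter s trades L2 fidelity against smoothness (scipy's UnivariateSpline), and an active contour fit (skimage's active_contour). The synthetic data, the parameter values, and the coordinate layout of the initial snake are assumptions for illustration, not the settings used for the cowrie fits reported later.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline
from skimage.filters import gaussian
from skimage.segmentation import active_contour

# Smoothing spline: s trades L2 fidelity against smoothness, as described above.
x = np.linspace(0, 10, 200)
y = np.sin(x) + 0.1 * np.random.randn(200)        # stand-in for a digitized tooth profile
signal = UnivariateSpline(x, y, k=3, s=1.0)        # data curve
baseline = UnivariateSpline(x, y, k=3, s=50.0)     # heavily smoothed baseline
teeth = signal(x) - baseline(x)                    # baseline-subtracted signal

# Active contour: internal energies (alpha = elasticity, beta = bending) vs. image energies.
def fit_snake(img, init_rows, init_cols):
    init = np.stack([init_rows, init_cols], axis=1)   # assumed (row, col) convention
    return active_contour(gaussian(img, sigma=3), init, alpha=0.01, beta=1.0, w_edge=1.0)
```

In practice the two smoothing parameters above play the role of the S threshold in the text: a small s follows the teeth, a large s rides over them and recovers the baseline.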

3.3 Seashell Pattern Generation

Equations of Seashell Patterning

Reaction-diffusion equations are the standard equations in the study of pattern formation: if you asymptotically expand any spatially distributed operator to lowest order, you get diffusion kernels plus a local intrinsic dynamic [33][28]. Any pattern forming equation can, to lowest order, be represented as a reaction-diffusion equation expanded about the region of interest. Some of the greatest successes of this approach to pattern formation are the results of Hans Meinhardt, who spent the better part of his career searching parameter and model space in the reaction-diffusion framework to replicate a wide host of patterns seen on seashells, among other biological forms [24]. Contemporaneous with the first paper he published replicating seashell patterns with only low order equations, a paper was published using instead an integro-differential equation approach, which attempted to capture more of the essential biology rather than just the simplest mathematical situation that could replicate the seashell patterns [17]. The reaction-diffusion approach required many ad-hoc phenomenological steps, such as deciding how many substances are reacting or diffusing and the exact structure of the reaction-diffusion equations, which were written as ratios of arbitrary functions of the activator and the inhibitor. This approach to pattern formation stems from the work of Alan Turing, who first suggested that a quickly diffusing inhibiting substance could lead to stable, time independent patterns of a more slowly diffusing activating substance. It is the interaction between local temporal dynamics and this diffusional instability (referred to as a Turing instability) which gives rise to patterning in this framework, and tuning these dynamics allows people to recreate two dimensional patterns. However, this highly local approach does not necessarily connect to the actual dynamics underlying the formation of the pattern, just a simplification of whichever dynamics are present to allow mathematical tractability. If instead one starts with a model which better reflects the actual process of seashell formation, the integro-differential equation alluded to above, one has a system whose phenomenological parameters may be more closely related to the dynamics seen in practice and could give more insight into how similar different patterns are, whereas the reaction-diffusion equations operate in a phenomenological space with possibly no relation to the processes which actually underlie the formation of the pattern. Modeling within the integro-differential framework provides a better cartoon interpretation of the processes underlying the dynamic, replicates a wider range of seashell patterns, and does not need to have its fundamental equation changed for each possible pattern, whereas the linearization around a pattern forming state that a reaction-diffusion equation represents requires a different number and functional form of the underlying differential equations for each pattern. The integro-differential framework, which in its most modern published form has 17 tunable parameters, is a nonlocal nonlinear framework [8][20]; its linearization would lead to different reaction-diffusion equations in different parameter regimes.
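For orientation, the sketch below is a minimal one-dimensional activator-inhibitor Euler integrator of the kind used to produce Meinhardt-style stripes and peaks; the kinetics are Gierer-Meinhardt-like with saturation, and every parameter value is a placeholder rather than a fit to any shell.

```python
import numpy as np

# Minimal 1-D activator-inhibitor Euler integrator (Gierer-Meinhardt-style kinetics
# with saturation). The inhibitor diffuses much faster than the activator, which is
# the Turing-instability setup discussed above. All parameter values are hypothetical.
n, dx, dt, steps = 200, 1.0, 0.01, 40000
Da, Dh = 0.05, 2.0                         # slow activator, fast inhibitor
rho, mu_a, mu_h, kappa = 1.0, 1.0, 1.2, 0.25
a = 1.0 + 0.01 * np.random.randn(n)        # activator, perturbed homogeneous state
h = np.ones(n)                             # inhibitor

def lap(u):
    # periodic second difference (discrete Laplacian)
    return (np.roll(u, 1) + np.roll(u, -1) - 2.0 * u) / dx**2

for _ in range(steps):
    production = rho * a**2 / (h * (1.0 + kappa * a**2))
    a = a + dt * (production - mu_a * a + Da * lap(a))
    h = h + dt * (rho * a**2 - mu_h * h + Dh * lap(h))
# `a` now holds a quasi-stationary spatial pattern (peaks/stripes in one dimension).
```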
If we take the integro-differential framework as the "ground truth" we could recover reaction-diffusion equations in a more principled, rather than ad-hoc, way. Unfortunately, a thorough understanding and analysis of a 17 dimensional system is hard to undertake, and a more careful look at the results mentioned above shows that a large portion of the dynamics studied to date arose explicitly from the discretization of the dynamic rather than from the integro-differential equation (or its reaction-diffusion reduction), meaning one could consider the model a cellular automaton [61], a phenomenological approximation even more divorced from reality than reaction-diffusion equations. To address this issue, and the fact that the pattern seemed more dependent on the lattice grid spacing chosen for the integration than on the phenomenological dynamic itself, we aimed to move to a slightly different framework which captures the same phenomenological cartoon but transforms it into a more local differential equation that does not depend as heavily on the discretization as the model described above. The cartoon in question is a bout-based model of seashell deposition in which the pattern on the seashell can be thought of as a history of the thoughts of the mollusc, with local activation and lateral inhibition acting as a receptive field; in this sense our equations describe a neural pattern forming system. The model alluded to above presumes a center-surround receptive field at each point in space that integrates over all of the pattern which was previously deposited. The model requires numerous channels and convolutions. The equations were written thus:

u_{t+1} = S_dep( ∫_0^t ∫_{−∞}^{∞} [ K_E ∗ S_E(S_in(u(x′, t′))) − S_I(K_I ∗ S_in(u(x′, t′))) ] dx′ dt′ ),    (3.1)

where S is a sigmoid (familiar from neural networks) of the form S(x) = a / (1 + e^{b(x−c)}) and K is a Gaussian kernel K(x) = a e^{−x²/(2b²)}, yielding the aforementioned 17 parameters once the amplitude of the excitation kernel is set to 1. The cartoon is as follows: given some shell pattern, place a sensory cell upon it (S_in) which reads in the shell pattern. That sensory cell scales the pattern input and sends Gaussian-defined projections of different extents to local excitatory and inhibitory cells (K_E and K_I). These cells fire according to their internal dynamics (S_E and S_I) and then sum upon a depositing cell with its own internal dynamics (S_dep). This model supposes four populations of identical cells which interact in a very stereotyped way. The very large parameter space associated with this model is not particularly amenable to analysis, as there is no established theory of integro-differential dynamics, unlike differential equations, which have many analysis techniques and entire fields of mathematics to draw from. The equation, as it stands, makes intuitive sense and in principle makes testable predictions for the extent of the sensory receptive fields in the mantle of each snail in relation to its shell pattern. Presuming that closely related species have closely related neural structures, the phylogenetic tree built from the parameters used to fit the patterns approximately replicated the actual phylogenetic relationships among the shells. This is not particularly interesting, however, as the shells were fit by going through a shell book (arranged by phylogeny) and blindly tuning single parameters to change one pattern into another; given this ad-hoc method of shell pattern fitting and the structure of the input data, it would be more surprising if a reasonable phylogenetic tree couldn't have been built. In any case, the cartoon and motivation described above have little to no connection to the dynamics actually solved in the paper. Instead of the highly motivated and intuitive integro-differential framework, the equations were actually solved as follows. Keep one matrix and two one dimensional arrays: the pattern, and the last instance of the excitatory and inhibitory cellular neural activity. 1D-convolve a discretized Gaussian kernel along the previous time step to get the activation and inhibition of the current time step. Add that to a constant times the previous time step's activation and inhibition respectively (effectively an exponential kernel in time). Pass the difference between the sigmoid-scaled activation and inhibition into the deposition sigmoid to obtain the next time step of u. Most of the 1D kernels had extremely short spatial extent (usually only nearest or next-nearest neighbor, especially because there was a hard cutoff on the Gaussian, so it was zero below some threshold rather than having the weak interaction at a distance that Gaussian connectivity would actually entail). The simulation of this model had basically no connection to the model as it was conceived, and what set out to be an effort to more rationally and physically replicate a biological process instead created an extraordinarily complicated, nonintuitive cellular automaton to replicate the pattern.
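For concreteness, here is a minimal sketch of the discretized update just described; the kernel truncation, the sigmoid parameterization, and all numerical values are assumptions for illustration rather than the published settings.

```python
import numpy as np

def gaussian_kernel(sigma, cutoff, dx=1.0):
    # Discretized Gaussian with a hard cutoff, as in the simulations described above.
    x = np.arange(-cutoff, cutoff + dx, dx)
    return np.exp(-x**2 / (2.0 * sigma**2))

def sigmoid(x, a, b, c):
    # Thresholding nonlinearity S(x) = a / (1 + exp(b * (x - c))).
    return a / (1.0 + np.exp(b * (x - c)))

def deposition_step(u_prev, A_prev, I_prev, p):
    # One bout: convolve the previous row, add the exponentially decaying history,
    # threshold, and deposit the next row of pattern.
    A = p["decay_E"] * A_prev + np.convolve(u_prev, p["kE"], mode="same")
    I = p["decay_I"] * I_prev + np.convolve(u_prev, p["kI"], mode="same")
    drive = sigmoid(A, *p["SE"]) - sigmoid(I, *p["SI"])
    u_next = sigmoid(drive, *p["Sdep"])
    return u_next, A, I
```

Iterating deposition_step row by row builds up the two dimensional pattern, with space along each row and time down the rows.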
It took me a very long time to see just how divorced the simulation was from what it claimed to accomplish (most of the more exciting patterns were fit by including a hidden layer, effectively a cellular automaton on top of a cellular automaton, which also kept track of its history in extraordinarily non-physical ways). Yes, patterns were replicated; no, insight was not gained. Attempting to do this more rationally led me to actually implement the convolution as described above, where the patterns had no stability to changes in lattice spacing or the Gaussian cutoff. I was attempting much more thorough analyses than the strange non-physical procedure described above: working out the domain of stability of a pattern in order to see how the occupied range of parameter space changes on an evolutionary time scale (which required exploring parameter space thoroughly enough to be sure there wasn't more than one region leading to the same dynamics), and expecting patterns to change slowly as the dynamical space is changed, so that we could tell whether modern or ancient seashells occupy more or less parametric space. I took the motivation of making our model as physical as possible a little too seriously and spent a lot of time writing Python code (it is quite hard to replicate a model when you are simulating something different) and being confused when my results didn't correspond to what we said they would be. If we had said the model was what it actually was, a physically inspired cellular automaton, I would have found it much easier to make the small incremental changes required to do the science I was being asked to do. My cognitive dissonance between what I was told the model was, what I perceived as necessary to begin the assigned tasks, and what the model actually was sent me down a track of confusion. As I was more interested in the system as a reduced neural system than as a strange way of replicating seashell patterns, I found myself mostly running self-contradictory simulations that attempted to more faithfully represent the dynamics we purported to be simulating, trying to better characterize the pattern morphospace in models more faithful to the cartoon of the process, and dealing with the high dimensional pattern space. I ran some random sampling simulations trying to characterize different pattern regimes. I also analyzed Meinhardt's reaction-diffusion equations [24] with the hope of connecting them to our neural ones, but didn't take any more steps in that direction once I realized how discretization dependent our simulations were, meaning the pattern formation concepts I was attempting to use to understand the patterns (phase plane analysis and dispersion relations) were poorly defined. I never believed in my math enough to push forward on something, because the simulation didn't correspond to the model, let alone reality. I also put some effort into using the fast Gaussian transform [55] to expedite the simulations, but often had a hard time replicating any interesting patterns, as the important part wasn't the Gaussian per se but its discretization. Another path I took around this confusion was to derive an alternative framework which more faithfully implements what our cartoon posited; it is described in the next section.


Figure 3.1: Here we directly integrated the model, setting S_in = 1 for simplicity, using the fast Gaussian transform to simulate spherically symmetric Gaussian inhibitory and excitatory neural fields. In the panels we varied the slope/4 (b, plotted on the y-axis) and the cutoff (c, plotted on the x-axis) for each thresholding function to see how dependent the pattern is on the internal cellular properties. The color represents the sum of the absolute value of the two dimensional gradient, which we use as a proxy for the existence of patterns absent a better technique (see the Wasserstein metric in the chimera chapter), demonstrating the pattern's dependence on the thresholding functions (a low dimensional representation of neural activity). See the next figure.


Figure 3.2: Here we used an actually spatially extended Gaussian (the discretization can be seen from the green dots in each inset), meaning we have a field theory as opposed to a cellular automaton. (a) is the patterning regime obtained by varying the secretory thresholding function, (b) by varying the excitatory thresholding function, (c) by varying the inhibitory thresholding function, and (d) by varying all thresholds simultaneously. One can see that by solving this much more computationally difficult problem (using the finicky pyfigtree fast Gaussian transform library) we more faithfully represent our cartoon, but fail at replicating realistic seashell patterns.


Figure 3.3: Here we show phase plane dynamics and dispersion relations of reaction-diffusion equations from Meinhardt's book. This figure demonstrates bifurcations in the reaction-diffusion system; it would be interesting to compare the local dynamics of similar patterns across the different modalities of seashell pattern formation. The phase plane dynamics were studied using PyDSTool and the reaction-diffusion system was simulated using a handwritten Euler integrator in Python.


Figure 3.4: Another representative sample of the complicated dynamics of Meinhardt's many-moiety reaction-diffusion framework. Much like the cellular automaton system discussed above, the reaction-diffusion system can be seen as a limit of a neural field equation. These dynamics were simulated to try to connect pattern formation in the two frameworks.


Figure 3.5: Here we show how small changes in a discretization parameter change the patterns much more dramatically than we would like. (a) has a time and space step of 1, (b) 0.95, and (c) 0.9. (d) shows all the kernels and the discretization overlaid on the difference of Gaussians associated with the pattern in (b). The equations were solved via convolutions using numba and scipy.

Figure 3.6: The top panel is a tSNE [60] projection of the cellular automaton derived from the original shell equations. The data, in which we hoped to find some topology, were a randomly sampled subset of parameters deemed to be possibly pattern generating. We took those parameters, together with two pattern related variables (the absolute value of the total gradient and the absolute value of the gradient in the x dimension alone), and ran tSNE on the dataset hoping to find some interesting structure. From the green circle some representative seashell images are taken and plotted below. One can see that these patterns appear far more physically relevant than the patterns shown in the other figures.

Green's Function Reduction and Associated Dynamics

Given that the integro-differential equation is difficult to solve, that its pattern domains can only be sketched out empirically, and that there are no rigorous ways in which to do this, we start with the convolutional map

u_{t+1} = S(k ∗∗ u_t),

subtract u_t from both sides, and recall the finite-difference definition of the time derivative, ε ∂_t u ≈ u_{t+1} − u_t; absorbing the factor of ε into the time scale yields:

u_t = −u + S[k ∗∗ u],    (3.2)

where S is the essential nonlinearity and k is a spatio-temporal kernel, which we presume to be separable into k(x) and l(t); the convolution is thus defined:

k ∗∗ u = ∫_0^t ∫_{−∞}^{∞} k(x − x′) l(t − t′) u(x′, t′) dx′ dt′.    (3.3)

We investigate the Green's function

g″ − a²g = −δ(x),

which has the homogeneous solution

g(x) = C e^{−a|x|}.

Integrating the Green's function equation across the origin,

∫_{0−}^{0+} [g″ − a²g] dx = −1  ⟹  g′(0+) − g′(0−) = −Ca − Ca = −2Ca = −1  ⟹  C = 1/(2a).

Thus,

g(x) = (1/2a) e^{−a|x|},    (3.4)

which enables us to solve the equation (with y bounded, going to 0 at ±∞)

y″ − a²y = −u(x)    (3.5)

with

y(x, t′) = ∫_{−∞}^{∞} g(x − x′) u(x′, t′) dx′ = g ∗ u.

We now declare the Mexican-hat-like kernels

k(x) := c1 e^{−α1|x|} − e^{−|x|},    l(t) := e^{−t} − c2 e^{−α2 t},    (3.6)

where c1 > 1, α1 > 1, c2 < 1, α2 < 1. So we see:

k ∗ u = ∫_{−∞}^{∞} ( c1 e^{−α1|x−x′|} − e^{−|x−x′|} ) u(x′, t′) dx′ = 2c1α1 m(x, t′) − 2n(x, t′),    (3.7)

with m and n recognized as solutions of equations of the form (3.5):

m(x, t′) = (1/(2α1)) ∫_{−∞}^{∞} e^{−α1|x−x′|} u(x′, t′) dx′

n(x, t′) = (1/2) ∫_{−∞}^{∞} e^{−|x−x′|} u(x′, t′) dx′.

Now (3.3) can be rewritten

k ∗∗ u = ∫_0^t (k ∗ u)(x, t′) · l(t − t′) dt′,

and the change of variables s := t − t′, with z(x, t − s) := (k ∗ u)(x, t − s) = 2c1α1 m(x, t − s) − 2n(x, t − s), yields:

K(x, t) := k ∗∗ u = K1 − K2 = ∫_0^t z(x, t − s) · e^{−s} ds − ∫_0^t z(x, t − s) · c2 e^{−α2 s} ds.    (3.8)

Taking the time derivative of K(x, t) and integrating by parts yields the following two equations:

K̇1(x, t) + K1(x, t) = z(x, t) − z(x, 0) e^{−t}    (3.9)

K̇2(x, t) + α2 K2(x, t) = (1/c2) [z(x, t) − z(x, 0) e^{−t}].    (3.10)

Bringing (3.5), (3.7), (3.9), and (3.10) together, and also imposing z(x, 0) = 0, yields:

u_t = −u + S(K1 − K2)

K1_t + K1 = z

K2_t + K2/α2 = z/(α2 c2)

z = 2c1α1 m − 2n

m″ − α1² m = −u

n″ − n = −u.

Looking at the last two equations we define ψ = m − n, yielding the equation m = (1/(α1² − 1))[ψ″ − ψ]. Plugging the m equation in for −u in the first equation and plugging in our new form of m yields the following system:

u_t = (1/(α1² − 1))[ψ⁗ + ψ] − (α1²/(α1² − 1))[ψ″ − ψ] + S(K1 − K2)

K1_t = 2[ψ − ((c1α1 − 1)/(α1² − 1))[ψ″ − ψ]] − K1

K2_t = (2/(α2 c2))[ψ − ((c1α1 − 1)/(α1² − 1))[ψ″ − ψ]] − (1/α2) K2.

Presuming ψ is real and can be spectrally decomposed, ψ(x) = Σ_i A_i cos(q_i x), plugging this back into the above equations and decomposing u, K1, and K2 into the same spatial modes yields:

u(q_i, t)_t = (1/(α1² − 1))[q_i⁴ + 1]A_i − (α1²/(α1² − 1))[q_i² + 1]A_i + S(K1 − K2)/cos(q_i x)

K1(q_i, t)_t = 2[1 − ((c1α1 − 1)/(α1² − 1))[q_i² − 1]]A_i − K1

K2(q_i, t)_t = (2/(α2 c2))[1 − ((c1α1 − 1)/(α1² − 1))[q_i² − 1]]A_i − (1/α2) K2.

We can consider purely spatial patterns with u_t, K1_t, K2_t = 0, or traveling waves by plugging in x_wave = x − γt, making the u_t equation nonautonomous and giving our equations an inherent wavespeed γ, or purely temporal patterns taking only q_i = 0. We refer to these as Turing, traveling wave, and Hopf patterns. Studying the types of instabilities we see would be sweet! The stability and domains where these different patterns can be found would be a very interesting analysis. It may also be interesting to complicate our nonlinearity S to instead look like S(S1(K1) − S2(K2)), to see how discretizing the excitatory and inhibitory channels (something that was found to yield more interesting patterning) changes the stability and selection properties of different patterns. I'm also not certain about how to handle the cos(q_i x) term that needs to be folded into the nonlinearity. I think it makes sense to consider single mode patterns, considering only the dominant wavelength (found via dispersion relations). It would be interesting to look at cross mode effects too. Assuming traveling wave solutions for everything, ζ := ζ(x − b_ζ t), we can rewrite the above equations:

u′ = (1/b1)(u − S(K1 − K2))

K1′ = (1/b2)(z − K1)

K2′ = (1/b3)(z/c1 − K2)

z = 2c1α1 m − 2n

m″ = α1² m − u

n″ = n − u.

Or, rather, assume a time independent spatial pattern u(x, t) = Σ_j c_j e^{i k_j x}, yielding S(K1 − K2) = u, or K1 − K2 = S⁻¹(u), where S⁻¹(x) = (j − ln((1−x)/x))/k. Since u is a nonlinear combination of the K's, u's time independence translates into the K's time independence and we are left with the following equations:

u = S(γ1 · (n − γ2 m))

u = (γ2/c1)² m − m″

u = n − n″,

where γ1 = 2(c2 − 1)/c2 and γ2 = α1 · c1. These equations, and those with K1_t − α2 K2_t = 0 (which can have time-dependence of the K's), contain Turing bifurcations. Constant-x Hopf bifurcations are found, with γ1 = 2(1 − c1/α1), when

u_t = −u + S(K1 − K2)

K1 = −K1_t + γ1 u

K2 = −α2 · K2_t + (γ1/c2) u.

With more time it would be interesting to use these better posed equations to get a dynamical handle on what my overriding (if misplaced) goal for this analysis was: to see what patterns are possible in a simple but inherently nonlocal framework that are not possible, or as accessible, from a reaction-diffusion standpoint. The equations as they are written would allow a more standard bifurcation analysis.
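As a sanity check on this reduction, the local system above can also be integrated directly by the method of lines, solving the two Helmholtz-type equations for m and n at each step by convolution with the exponential Green's function (3.4). The sketch below does this in Python; the sigmoid form and all parameter values are placeholders chosen only to respect the constraints c1, α1 > 1 and c2, α2 < 1.

```python
import numpy as np

# Method-of-lines integration of the local system derived above:
#   u_t = -u + S(K1 - K2),  K1_t + K1 = z,  K2_t + K2/alpha2 = z/(alpha2*c2),
#   z = 2*c1*alpha1*m - 2*n,  m'' - alpha1^2 m = -u,  n'' - n = -u.
n_pts, dx, dt, steps = 513, 0.2, 0.01, 2000
x = (np.arange(n_pts) - n_pts // 2) * dx
c1, alpha1, c2, alpha2 = 1.5, 2.0, 0.5, 0.5

def helmholtz_solve(u, a):
    # Solve y'' - a^2 y = -u by convolving with g(x) = e^{-a|x|} / (2a), eq. (3.4).
    g = np.exp(-a * np.abs(x)) / (2.0 * a)
    return np.convolve(u, g, mode="same") * dx

S = lambda v: 1.0 / (1.0 + np.exp(-4.0 * (v - 0.5)))   # deposition sigmoid (assumed form)
u = 0.5 + 0.05 * np.random.randn(n_pts)
K1 = np.zeros(n_pts)
K2 = np.zeros(n_pts)

for _ in range(steps):
    m = helmholtz_solve(u, alpha1)
    n_field = helmholtz_solve(u, 1.0)
    z = 2.0 * c1 * alpha1 * m - 2.0 * n_field
    u = u + dt * (-u + S(K1 - K2))
    K1 = K1 + dt * (z - K1)
    K2 = K2 + dt * (z / (alpha2 * c2) - K2 / alpha2)
```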

Generative Directionality in Seashells: What's the Purpose of a Pattern?

Another problem that I found particularly interesting about seashells is why the pattern is there at all. Most snails live in mud, and many place a leathery substance (called the periostracum) over the top of their shell, which means the shell pattern isn't even visible during the animal's lifetime. While this problem isn't much remarked upon, I found it fascinating and was only able to find one hypothesis in the literature [6], which again connects to the neural nature of seashell pattern deposition: perhaps the shell provides positional information that allows the mantle to align properly, keeping the mantle in register so it can continue 3D printing the correct nanoscale structures at the correct places, which is what gives the shell biomaterial its impressive physical properties. The impressive properties of the shell are only possible because of the fine-scale structure which needs to be printed. This bout-based deposition requires as much help aligning as possible, and perhaps that is what the shell pattern is doing. I figured we could test this hypothesis while at the same time explaining why there is a large region of possible patterns in the model described above that doesn't seem to be instantiated on any shell. If we could quantify how much directional information there is in a pattern, how alignable a pattern is, we could explain why particular patterns are not seen and also bolster the case for seashell deposition being a neural pattern formation process, a belief that is near the fringe in malacology at present. Another intriguing piece of evidence for the pattern being neural is its malleability. Certain limpets are known to change their pattern based on the substrate they are living on [35]. The current consensus is that this is because of the accompanying diet change [62], but there are two points against this belief: 1) if you eat a carrot and become orange, fine, but if you eat a zebra and become striped, something else is going on; 2) not all animals change their shell pattern when they are put on a different substrate; in fact, even within a species, limpets living above a certain latitude don't change their color at all in a situation where a lower latitude limpet would. The major push of this dissertation is attempting to show that nonstandard methods of pattern formation (mechanics and neural activity) leave specific traces and make testable, interesting predictions which reaction-diffusion equations (a purely descriptive approach) are not able to make. The unfinished work described here was stymied primarily by the fact that adherence to what the neural model was said to be got in the way of actually representing seashell patterns. With more time we would apply the structure tensor [7] and other orientation-based direction metrics [52] to quantify directional information, and discuss how only some generative receptive fields yield patterns that provide useful positional information to the snail depositing its shell [56]. It would seem that considering position and direction could be important for all receptive fields, not just those involved in seashell deposition. We imagine that there would be analogous constraints on receptive field properties that would be necessary for a neuron to know where it is in time.
There may be interesting connections between arrow-of-time considerations, excitatory/inhibitory balance [19], and the established structure of receptive fields. A more thorough analysis of these possibilities in this simpler neural pattern formation system could yield insight into more general constraints on receptive field structure given the requirements of neural computation.
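As a starting point for that analysis, the sketch below computes a local orientation field and a coherence measure from a smoothed structure tensor of a pattern image; the smoothing scale and the coherence definition are conventional choices, not quantities calibrated to any shell.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def orientation_field(img, sigma=3.0):
    """Local orientation and coherence from the smoothed structure tensor."""
    gy, gx = np.gradient(img.astype(float))           # image gradients (rows, cols)
    Jxx = gaussian_filter(gx * gx, sigma)              # tensor components, locally averaged
    Jxy = gaussian_filter(gx * gy, sigma)
    Jyy = gaussian_filter(gy * gy, sigma)
    theta = 0.5 * np.arctan2(2.0 * Jxy, Jxx - Jyy)     # dominant local orientation
    root = np.sqrt((Jxx - Jyy) ** 2 + 4.0 * Jxy ** 2)
    lam1 = 0.5 * (Jxx + Jyy + root)                    # tensor eigenvalues
    lam2 = 0.5 * (Jxx + Jyy - root)
    coherence = (lam1 - lam2) / (lam1 + lam2 + 1e-12)  # 1 = strongly oriented, 0 = isotropic
    return theta, coherence
```

A histogram of theta weighted by coherence is one crude candidate for "how alignable" a pattern is: a strongly peaked histogram means the pattern singles out a direction, a flat one means it does not.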

Figure 3.7: Here we show an example of positional information calculated from a real shell pattern using a structure tensor approach. Our aim is to subsequently develop an information measure describing how directional a pattern is, and then to see whether extant shell patterns are more directional than those that are possible in the model but not found in nature.

3.4 Conclusions and Synthesis

This chapter reported some approaches to the mathematics of pattern formation from the perspective of seashell patterning and geometry, relying heavily on techniques from the early days of computer vision. That field has largely moved on from the low level techniques outlined above, relying instead on modern advances in machine learning. I feel that the more basic approaches outlined above can still be used to tell interesting stories about systems that are vision adjacent, and perhaps modifying (or even just applying) these techniques in novel situations could provide insight back into vision science itself as an alternative approach. The work outlined above hints at interesting stories in pattern development and in ways of extracting information from reality.

Chapter 4

Cowrie Shape and Form

4.1 Introduction

“Nothing in Biology Makes Sense Except in the Light of Evolution” — Theodosius Dobzhansky

Biology most often takes as its question: "What?" What type of snail is that? What is its shell made out of? What is the evolutionary benefit of its behavioral choices? What strategies does it use to avoid predation? Every single one of these questions is answered by the organism and its evolutionary history. Rarely is the more proximal question asked: "How?" How does the snail decide what shape its shell should be? How is its behavioral repertoire encoded by its body plan? How do those particular minerals and materials get to where they need to be to make that shell? How did it evolve this way; through which proximal evolutionary steps did the snail become what it is? Biology as a whole is looking at a snapshot in time and trying to explain the phenomenology in front of us. We aim to examine how a shell is built and develop a model that can explain both why the shell is shaped as it is and how it got that way. This places the work in a dynamical vein, as we are interested in how things became as they are.

“Ontogenesis is a brief and rapid recapitulation of phylogenesis, determined by the physiological functions of heredity (generation) and adaptation (maintenance).” — Ernst Haeckel

The idea that "ontogeny recapitulates phylogeny" is both an old and not very good idea. However, it asks the right question: from where does the form of an organism come? Often a biologist wants to boil everything down to genetics, believing that if they knew which gene leads to what, they would understand the process. Alas, a gene doesn't do anything on its own: it either leads to a protein or regulates the transcription of a whole host of genes, which in turn lead to proteins. Knowing which gene gives rise to which behavior, or looking at a genetic screen, will help you determine what the players are, not how they interact or how they do what they do. That is the question we are interested in: how do biological systems become what they are? For example, what is the algorithm that gives rise to a brain? We know it is not encoded connection by connection in the genome, as a back of the envelope calculation shows there is simply not enough information in the genome for this. That means that what is in the genome is an algorithm, and these genetically (and thus behaviorally, at the molecular, cellular, organismal, or societal scale) encoded algorithms are what this thesis undertakes to study. A good first place to start is with a primordial example of biological form and function [57]: I am interested in studying how seashells are made. Cowries are collected systematically by enthusiasts across the globe and have historical import as a primordial currency. Despite their fame and importance, very little besides their systematics and phylogeny is actually known about their biology [45], and while there has been a lot of work modeling seashells (both pattern and form) in general, no one has mechanistically modeled this distinctive shell form (or pattern) in particular to any level of sophistication; see [49][26] for the most sophisticated approaches to date. In this chapter we aim to discuss cowries in general, show how to think about their shell growth in terms of their life cycle, quantify some aspects of the shell form, and describe why this problem is worth the effort of solving.
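The back-of-envelope version of that information argument, with round numbers that are assumptions of scale rather than measurements, runs roughly as follows:

\[
\text{genome: } \sim 3\times10^{9}\ \text{bp} \times 2\ \text{bits/bp} \approx 6\times10^{9}\ \text{bits},
\qquad
\text{wiring list: } \sim 10^{14}\ \text{synapses} \times \log_2\!\big(10^{11}\ \text{neurons}\big) \approx 4\times10^{15}\ \text{bits},
\]

so even a crude connection-by-connection specification would require roughly a million times more information than the genome carries, which is why the genome must encode a generative algorithm rather than a blueprint.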

Figure 4.1: This figure orients you to the problem if you are unfamiliar with it: A is a series of shells showing the developmental transition we are discussing here; B is an example of the living creature performing the behavior we are interested in; C is a schematic showing the neural feedback loop discussed in Chapter 3; D is an example of a cross-section of the shell showing the complicated microstructure which gives rise to the sophisticated mechanical and physical properties of seashell; and E is a diagram showing properties of the mantle edge and the shell gland which deposits the shell. The figure to the right is a schematic representation of the program described in the text.

4.2 Cowrie Shell Observables

The most systematic study of the shape and size of cowries is an extensive examination of over 130,000 individuals reporting summary statistics for different species [37]. This gives us a place to start. Here, data extracted from this paper are reconciled with the genetic distances published by Christopher Meyer in his most recent systematic study of the family [38]. The Schilder paper only reports wave number, which need not correspond directly to wavelength. We go out of our way to extract the form of the teeth more precisely, and we believe we are the first to report tooth height in the literature. Are the teeth a bunch of individual ornaments, or are they part of a larger, generally periodic structure? Below, we examine the implications of these two possibilities. But first, we examine and quantify the properties of the shell we are interested in modeling. In Figure 4.2 you can see that the snail has an inflection point where it begins deviating from the shell growth trajectory of its juvenile form; this is where the fitted curve is replaced by the green curve tracing out the shell. We argue that this deviation occurs because the snail begins growing its mantle to reach up over the top of its shell, as mature cowries do. The growing leading edge therefore no longer drives expansion of the shell, and the shell now has a linear growth rate as opposed to an exponential one. We also argue that the second inflection point off the green curve is caused by the slackening of the mantle (the soft elastic body part which reaches up over the shell but is just a small lip allowing the snail to seal off its biomineralization from the environment), which occurs when it no longer needs to strain to reach the top and can thus dangle. The dangling leads both to a void, which is filled via shell repair mechanisms, and to a localized bending at the edge; these are the two things we attempted to model in the mathematics below. We are interested in modeling this because there is a useful cross-check on the model: any model we create to explain the shell curve dynamics also needs to wrinkle at a measurable wavelength, and this sheet wrinkling will give rise to an infilling similar to the accretion happening at the dangle. Shell created through the repair mechanism has a different chemical composition from the base shell, and the base thickening and the teeth are made of that substance. We think the difference is between a specialized shell gland at the mantle edge (which we tried, unsuccessfully, to localize in histology) and the ability of any part of the snail to deposit amorphous calcium carbonate to defend against wounds, just as our dermis repairs itself after injury. Fantastically, while uncovering the geometry we expected in the shell spiral form, we noticed the following: if you draw a line from the second deviation point tangent to the existing shell, and then draw a circle centered at that tangent point with radius reaching half way back to the deviation point, the circle intersects both the termination point of the shell and the point of first deviation. We look forward to testing this on more shells and developing a geometrical explanation for this fact once our model is up and running.
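A minimal version of the growth-curve analysis just described is sketched below: fit a logarithmic spiral to the digitized edge points in log-polar coordinates and look for where the residuals begin to drift, which flags the deviation points. The choice of center and the ordering of the digitized points are assumptions for illustration.

```python
import numpy as np

def fit_log_spiral(x, y, xc, yc):
    """Fit r = r0 * exp(k * theta) to digitized shell-edge points about a chosen center.

    (xc, yc) is the hand-selected coiling center; unwrapping assumes the points
    are ordered along the shell edge.
    """
    theta = np.unwrap(np.arctan2(y - yc, x - xc))
    r = np.hypot(x - xc, y - yc)
    k, log_r0 = np.polyfit(theta, np.log(r), 1)    # linear fit in log-polar coordinates
    return np.exp(log_r0), k

# Residuals of log(r) against the fitted line flag where growth stops being
# exponential, i.e. the inflection points discussed above.
```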

Figure 4.2: This image shows the shape of the shell, where we carefully recorded positions along the pigmented band at the outer edge of the shell. We then carefully selected the central spot and moved into polar coordinates around it. After fitting the curve with the points closer to the center weighted more heavily, we noticed a clear point of deviation between the red and blue curves. Taking the slope of the red curve where it deviates from the blue data curve and fitting a spiral that grows at the same rate yields the green curve discussed in the text.

Figure 4.3: An example of our spline fit to hand recorded data. The number to the left is the amplitude in mm and the one to the right is the wavelength in mm.

Figure 4.4: Representative examples of automatic active snake fits using image properties. It worked relatively well without too much fine tuning, but we were not able to figure out more effective automated techniques, as we had anticipated. The extra wiggly lines are the initial conditions from which we started our active snake calculation.

Figure 4.5: 3D scanned cowrie shell.

4.3 Extracting Relevant Data

The OCR package Tesseract-ocr [53] was used, along with custom written Python scripts, to extract the morphological data shown. The package TreeSnatcher Plus [34] was used to extract phylogenetic distances from the plotted trees in [38]; these were imported in Newick form for plotting using the package DendroPy [58], which we also used to calculate phylogenetic distances. The two datasets were brought into register at the species level. Shell images were taken with cell phone cameras. Image fitting was done using a variety of schemes and packages built on the skimage Python library, numpy, and matplotlib. We point back to the discussion in Section 3.2 and remark on the amount of fine tuning required to get the spline-on-spline curve fitting we developed to work robustly to changes in the density of data points, and to get the active snakes to work at all. We also needed to use Pisarenko harmonic decomposition [46], as we cannot Fourier transform on non-uniformly spaced domains. The 3D scans were done at the UC Berkeley Architecture Fab Lab and the shell cutting was done with mineral-cutting diamond saws in the geology department. Between the data in [38] and [37] we were able to find 167 taxa represented in both datasets. The Schilder paper reports the mean length, breadth, and tooth numbers along the upper and lower margins, along with the range of values that remain after the most deviant one third of the data is removed. The paper also provides a code, defined in a German paper [18] which I have yet to decipher, describing the "closeness of the teeth"; it has something to do with rescaling the shell as if it were 25 mm in length and reporting characteristics of the shell given this information. They found that there was no correlation between the length of the cowrie and the tooth numbers. The data they report allow us to calculate an ersatz wavelength for each species and to look at the difference between the top and bottom wavelengths along with differences in width ratio. The main result they reported was a linear relationship between the closeness of the top and the bottom teeth.
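A toy sketch of the reconciliation step is given below; the file names and column names are hypothetical placeholders standing in for the fields extracted by OCR and TreeSnatcher, not the actual layout of our spreadsheets.

```python
import pandas as pd

# Hypothetical reconciliation of the two data sources at the species level.
morph = pd.read_csv("schilder_morphometrics.csv")   # length, breadth, tooth counts per species
phylo = pd.read_csv("meyer_phylo_distances.csv")    # genetic distances per species pair

# Ersatz wavelength: shell length divided by tooth count along each margin.
morph["wavelength_top"] = morph["length_mm"] / morph["teeth_upper"]
morph["wavelength_bottom"] = morph["length_mm"] / morph["teeth_lower"]

merged = morph.merge(phylo, on="species", how="inner")   # only taxa present in both surveys
```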

Figure 4.6: To the left, we examine the wavelength of the teeth above (C) and below (L) the aperture. You can see that there is a linear relationship. You can see the same linear relationship to the right, where instead we compare tooth number between the two lips. The orange dots correspond to the upper bound and the blue to the lower bound of the 2/3 region of variability, while green is the mean.

Figure 4.7: Here we plot the difference of the mean wavelength versus the evolutionary distance between different cowrie species, with the goal of isolating related species with large differences in the wavelength of the teeth separation, to better test the hypothesis discussed later. The color is the percent change in the breadth to length ratio, which corresponds to how bulging the cowrie is; this quantity is not correlated with any of the other parameters. Below are our selection of good pairs to study, shown in the figure with blue circles and listed left to right in descending ∆λ order: friendii and rosselli, venusta and rosselli, friendii and marginata, aurantium and leucodon, and decipiens and rosselli. Data extracted from [37] and [38]. Photographs provided by Herbert R. Axelrod [29].

4.4 Towards Models of Cowrie Growth

Geometric Variational Approach to the Central Spiral

To model the growth curve in all its glory we aim towards a phenomenological model, not one based on rigorous elasticity theory. This is both because we do not want to model the full three dimensional dynamics of the shell (we would rather just model the cross section) and because we feel that any standard elasticity approximation brings in physics we may not necessarily want. We aim not to declare our mantle to be anything specific (a plate, a shell, or a rod), as the material is very nonlinear and there are no measurements, not even of thickness, to go on in the literature. We could attempt force measurements of the mantle or take measurements from some fixed tissue we have, but we did not. Given the forces we want to include (stretching and bending) and the many rigorous mathematical necessities we would like to ignore, it makes very little sense to start from a rigorous elasticity perspective, so instead we start very phenomenologically: can we replicate the posited behavior driving development, and see in what physical material regimes we would predict the creature operates, given that it develops along the purported trajectory described in this chapter? To recapitulate the findings discussed above, we start with only what we think are the crucial balancing and driving forces behind the developmental process: the position of the shell gland, the total growth rate, the length of the mantle, the bending stiffness of the mantle, and the boundary condition with one end of the mantle fixed and tangent to the growth margin and the other end free to move along the extant shell as long as it remains tangent to it. Handily, since the shell coils geometrically before the mantle extends, we can model the extant shell as just a circle. We would like to minimize a functional that balances all these constraints to give a mantle curve at every point in development. We can integrate over the curve, taking local bending at the fixed end as the boundary condition for the next step, under the presumption that the shell gland extended the shell up to the position it was bent to at the last time step. We refer to this technique as a mixed discrete-difference variational approach, which as far as we know is a novel approach to this sort of problem. To formulate our dynamical system we start by selecting all the bells and whistles described in Gelfand and Fomin's Calculus of Variations [25]. By their nomenclature, our model is a variational parametric higher-order-derivative problem with endpoints lying on two curves and subsidiary conditions. Our plan is to write this geometric variational problem as a differential equation with an integral condition so we can easily integrate our model in AUTO [16] and examine stability and bifurcation in the model as written, hopefully avoiding pathology and perhaps gaining insight into how fine-tuned the model parameters need to be in order to recapitulate shell form. I would argue that structural stability of this dynamical shell growth algorithm would strongly suggest that the cartoon we are applying reflects some aspect of reality, while qualitatively predicting bifurcations would turn our model from descriptive to predictive. With the goal of recapitulating this growth transition, we need to explain the developmental pathway that gives rise to the teeth, the deviation from the normal spiral, and the thickening of the shell at its base.
As our story is about the interaction of the soft mantle with the hard constraints of the extant shell, we need to model the mantle. The most straightforward way to do this is to model the shell in cross section and to model the mantle as a flexible and extensible planar curve. In order to model the shell we need, similarly to the recent work [39][40], to incorporate mantle elasticity into our calculations. For our purposes, we want to model only the midline of the mantle, so we can think about a 1D curve which lies in the plane of the cross section shown in Figure 4.1.

A Variational Approach to Cowrie Shell Shape

Combining the ideas above, we write down an integral expression for an energy which we can minimize given the constraint that one end touches the old shell and the other is at the position of the shell gland, which we want to move as the shell grows. The energy functional we write down to approximate these dynamics is:

∫_{x0}^{x1} F(y, y′, y″) dx,

with F either:

F = (ℓ0 − √(1 + y′²))² + β y″² / (1 + y′²)³,

where the first term is the length deformation of a harmonic solid and the second is a curvature-based bending deformation, or

F_simple = (ℓ0 − √(1 + y′²))² + β y″²,

where the curvature is approximated by the bare second derivative.

Parameterized by the step size for the resting mantle length (ℓ0) and a term representing the bending stiffness (β), this model energy lets us give mathematical legs to the cartoon described above. To solve this calculus of variations problem we start with the higher-order-derivative Euler equation

F_y − (d/dx) F_{y′} + (d²/dx²) F_{y″} = 0,

which we then parameterize in terms of the arc length, yielding the following two third order equations in x and y, where subscripts denote the number of arc length derivatives:

y3 = { 3βx1⁴x2y2 + 3βx1³y1²y2
  + ℓ0 [ 4.0x1⁵y1y2 − 6x1⁴x2y1² + 6x1³y1³y2 − 10x1²x2y1⁴ + 2x1y1⁵y2 − 4x2y1⁶ ]
  + √(1 + y1²/x1²) [ −4x1⁵y1y2 + 6x1⁴x2y1² − 8x1³y1³y2 + 12x1²x2y1⁴ − 4x1y1⁵y2 + 6x2y1⁶ ] }
  / [ β x1³ (x1² + y1²)² ]

x3 = { √(1 + y1²/x1²) [ −2x1⁷y2 + 4x1⁶x2y1 − 6x1⁵y1²y2 + 12x1⁴x2y1³ − 6x1³y1⁴y2 + 12x1²x2y1⁵ − 2x1y1⁶y2 + 4x2y1⁷ ]
  + 3βx1⁵x2 + 3βx1²x2y1⁴y2
  + ℓ0 [ 2x1⁷y2 − 4.0x1⁶x2y1 + 2x1⁵y1²y2 − 6x1⁴x2y1³ − 2x1²x2y1⁵ ] }
  / [ β x1⁴ √(1 + y1²/x1²) (x1² + y1²)² ]

or alternatively, the simpler:

y3 = ℓ0 √(1 + y1²/x1²) ( 4x1⁴y1y2 − 6x1³x2y1² + 2x1²y1³y2 − 4x1x2y1⁴ ) / ( βx1⁴ + 2βx1²y1² + βy1⁴ ) − 3x2y2/x1 + 4y1y2/β + 6x2y1²/(βx1)

x3 = ℓ0 √(1 + y1²/x1²) ( 2x1⁵y2 − 4x1⁴x2y1 − 2x1²x2y1³ ) / ( βx1⁴ + 2βx1²y1² + βy1⁴ ) − 3x2/x1 + 2x1y2/β + 4x2y1/β.

It would be interesting to see how the results of the two dynamical formalisms differ if one uses the mathematical curvature or just the second derivative. With these equations in hand, we note that separating them into many first order equations enables us to use standard numerical continuation packages to apply boundary and integral conditions and study parametric dependencies. For boundary conditions we want a fixed point and first derivative at the left boundary, and the right boundary must lie on and be tangent to the existing shell. Thus, our left boundary conditions are Y(0) = 0, X(0) = d, Ẋ(0) = 1, Ẏ(0) = α, while our right boundary conditions are integral conditions ensuring that the final point lies on M(X) = √(R² − Y²) and N(Y) = √(R² − X²). We enforce this by setting:

∫_0^1 ℓ0 Ẋ dt + X(0) = √(R² − µ²)

∫_0^1 ℓ0 Ẏ dt + Y(0) = µ

∫_0^1 ℓ0 Ẍ dt + Ẋ(0) = −√(R² − µ²) (R² + R² − µ²)^{−1/2}

∫_0^1 ℓ0 Ÿ dt + Ẏ(0) = µ (R² + µ²)^{−1/2}.
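For orientation, a much cruder route is to minimize the simpler energy F_simple directly on a discretized mantle curve, clamping the left end and leaving the right end free; the sketch below does this with hypothetical parameter values and without the tangency constraint on the shell circle, as a sanity check rather than a substitute for the boundary-value continuation described above.

```python
import numpy as np
from scipy.optimize import minimize

# Direct minimization of E = sum[ (l0 - sqrt(1 + y'^2))^2 + beta * y''^2 ] dx
# over a discretized curve y(x), clamping y(0) and y'(0) and leaving the far end free.
n, L = 60, 1.0
x = np.linspace(0.0, L, n)
dx = x[1] - x[0]
l0, beta, alpha = 1.2, 5e-3, 0.3       # rest length per step, bending stiffness, clamped slope

def energy(y_free):
    y = np.concatenate(([0.0, alpha * dx], y_free))   # enforce y(0)=0 and y'(0)=alpha
    yp = np.gradient(y, dx)
    ypp = np.gradient(yp, dx)
    stretch = (l0 - np.sqrt(1.0 + yp**2)) ** 2
    bend = beta * ypp**2
    return np.sum(stretch + bend) * dx

res = minimize(energy, x[2:] * alpha, method="BFGS")  # start near the clamped tangent line
y_opt = np.concatenate(([0.0, alpha * dx], res.x))
```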

Unfortunately, due to time constraints we were not able to get these boundary conditions to play well with the equations described above; the errors could be in those equations, in our boundary conditions, in numerical constants that need changing, in being in the wrong parametric regime to see the behavior we are looking for, or in an implementation error in our code. We look forward to rectifying this and publishing our results elsewhere. There are many parameters in this system. R is the size of the shell circle, which is important for setting up angle and arc-length based aspects of the problem; it may be interacting poorly with the fact that we set an effective arc-length parameter elsewhere in our equations (ℓ0). α is the slope at the first point, while d is a measure that lets you move the starting point of the method closer to the shell. Forcing the slope at a point on the x axis enables us to probe a wide range of dynamics, as we can effectively move off the line by setting the derivative appropriately. β is the relative cost of bending to extension, and ε may need a dynamic associated with it, as it is effectively the strain. Perhaps alternative approaches to these boundary conditions would make our analysis easier. Also, just for reference, an aborted (and doomed) earlier approach took F to be:

∫_0^T [ (1 − ε√(Ẋ² + Ẏ²))² + β ( (Ÿ Ẋ − Ẍ Ẏ) / (Ẋ² + Ẏ²)^{3/2} )² ] dt.

If we rescale arc length into a variable t = ετ, then our two terms balance an extensibility and a bending:

∫_{x0}^{x1} F(x, y, y″) dx,

as we want the energy we are minimizing to have contributions only from the length of the mantle, which corresponds to the position variables (x, y) (the tension or compression the mantle is under), and from the bending stiffness of the mantle (the curvature, or second derivative, term).

∫_{t0}^{t1} F[x(t), y(t), ẋ/ẏ] dt.

We can take a variational approach to the dynamics of the growing sheet, straightforwardly applying all the bells and whistles in Gelfand and Fomin [25]. The higher derivative Euler equation looks like

F_y − (d/dx) F_{y′} + (d²/dx²) F_{y″} = 0,

which for the parametric problem becomes

ẋ ( Φ_x − (d/dt) Φ_ẋ + (d²/dt²) Φ_ẍ ) + ẏ ( Φ_y − (d/dt) Φ_ẏ + (d²/dt²) Φ_ÿ ) = 0,

with

Φ(x, y, ẋ, ẏ, ẍ, ÿ) = (ℓ − √(ẋ² + ẏ²))² + β (ÿ/ẍ)².

The required partials are

Φ_ẋ = 2ẋ − 2ℓẋ/√(ẋ² + ẏ²),    Φ_ẏ = 2ẏ − 2ℓẏ/√(ẋ² + ẏ²),

Φ_ẍ = −2βÿ²/ẍ³,    Φ_ÿ = 2βÿ/ẍ².

Carrying out the d/dt and d²/dt² differentiations of these expressions (which brings in third and fourth derivatives of x and y) and substituting into the parametric Euler equation above, and noting Φ_x = Φ_y = 0, leaves a balance of the form

ẋ ( −(d/dt) Φ_ẋ + (d²/dt²) Φ_ẍ ) = −ẏ ( −(d/dt) Φ_ẏ + (d²/dt²) Φ_ÿ ).

This final equation has terms of the same order on each side and thus isn't separable. This caused us some consternation until we realized it is a peculiar mathematical beast which is only beginning to be examined, and at much lower order than our equation would require [47]. The energy functions above were inspired by stiff polymer chains [54]; their equations are included for completeness. The bending term:

∫_0^L ds κ [ ( ∂²r/∂s² )² − ( (∂²r/∂s²) · (∂r/∂s)/|∂r/∂s| )² ]

and the extension term:

∫_0^L ds ε ( |∂r/∂s| − 1 )².

4.5 The Physics of Wrinkling Sheets

The reason we are interested in this cowrie problem is that we believe the teeth on the bottom edge of the shell form via a wrinkling process of the soft thin elastic material with which the cowrie engulfs itself. Here we solve a Naghdi linear thin elastic sheet [41] to demonstrate the ability of a sheet to wrinkle and, when it bends over itself, to resemble cowrie teeth. More thorough and sophisticated analysis is required. We solved this system using the fenics-shell framework [23].

Figure 4.8: Solving elasticity equations to demonstrate the physics of thin elastic sheets. Here an infinitesimal amplitude wrinkle was set in the initial condition, as otherwise the sheet will not buckle into wrinkles on its own in this simple linear theory.

4.6 Model consolidation and a Coherent Story of Cowrie Form

It is still not abundantly clear how to think about the mechanics of development. In our attempts to investigate a system of reduced complexity, in order to develop tools useful in other biological systems, we were rewarded with curious geometries and interesting dynamical possibilities that brought us to a current hot topic in soft matter: wrinkle formation in thin elastic sheets. We still work towards illuminating the mechanistic cartoon we believe is at play in this system. The ontogenetic switch that places the mantle outside the shell leads to the rest of the shape and form of the shell. We look forward to furthering this model in the future.

Chapter 5

Conclusions

What biology needs theorists for is to figure out how to transform ideas from one domain into another. One can take information about one domain and extrapolate one community's advances into another's. Dynamics is a field which is the culmination of generations building on the shoulders of those who came before. A theorist doesn't need to understand better that which we mostly already know, or study a model for the sake of hoping to discover a new fillip or mode of behavior: a theorist should reach out into the world to help others make sense of their confusions, and theory should be in the service of discovering something new. A theory is only as good as its predictions, and dynamics, though fascinating and often geometrically insightful, very rarely makes a concrete prediction, because one's dynamical descriptions are usually only good very locally and the nonlinearity which is reality often overwhelms the crisp yet complicated story that dynamics tells. It is important to keep track of possible true geometric understanding while recognizing that reality usually gets in the way. There are "more is different" [3] ways of thinking about reality, and reductive dynamics may not be the best place to find truly new and widely useful ideas. The flip side of this is complexity science, which, much like dynamics, has divorced itself from domain knowledge to become a cottage industry of its own. We need theorists native to the disciplines they are representing who can also bring new analytic and mechanistic insight to problems resonant outside the standard fare of combinatorial theoretical inquiry. The problems worked on here bring together proving grounds for new ideas in both development and neuroscience. The big open problems in these domains, questions about how things become what they are and how that constrains what they can become, are rising on the horizon. We must reach out to each other to find this new knowledge together.

5.1 Future Directions

The work presented above represents only the beginning of progress towards a complete understanding of either mechanistic seashell growth or how bistability and unstable cycles can affect synchronization and chaos. Understanding these domains better will reflect back onto other human concerns, and I look forward to continuing inquiry in this direction.

Bibliography

[1] Daniel M. Abrams and Steven H. Strogatz. "Chimera States for Coupled Oscillators". In: Phys. Rev. Lett. 93.17 (Oct. 2004), p. 174102.
[2] Juan A. Acebrón et al. "The Kuramoto model: A simple paradigm for synchronization phenomena". In: Reviews of Modern Physics 77.1 (2005), p. 137.
[3] P. W. Anderson. "More Is Different". In: Science 177.4047 (1972), pp. 393–396.
[4] Arthur T. Winfree. The Geometry of Biological Time. Second Edition. Springer-Verlag, 2001.
[5] Naziru M. Awal, Domenico Bullara, and Irving R. Epstein. "The smallest chimera: Periodicity and chaos in a pair of coupled chemical oscillators". In: Chaos 29.1 (Jan. 2019), p. 013131.
[6] Vincent Bauchau. "Developmental stability as the primary function of the pigmentation patterns in bivalve shells?" In: Belg. J. Zool. 131.2 (2001), pp. 23–28.
[7] J. Bigun, T. Bigun, and K. Nilsson. "Recognition by symmetry derivatives and the generalized structure tensor". In: IEEE Transactions on Pattern Analysis and Machine Intelligence 26.12 (Dec. 2004), pp. 1590–1605.
[8] Alistair Boettiger, Bard Ermentrout, and George Oster. "The neural origins of shell structure and pattern in aquatic mollusks". In: Proceedings of the National Academy of Sciences 106.16 (2009), pp. 6837–6842.
[9] Marie-Line Chabanol, Vincent Hakim, and Wouter-Jan Rappel. "Collective chaos and noise in the globally coupled complex Ginzburg-Landau equation". In: Physica D: Nonlinear Phenomena. Lattice Dynamics 103.1 (Apr. 1997), pp. 273–293.
[10] Lénaïc Chizat et al. "Scaling algorithms for unbalanced optimal transport problems". In: Math. Comp. 87.314 (2018), pp. 2563–2609.
[11] Lénaïc Chizat et al. "Unbalanced optimal transport: Dynamic and Kantorovich formulations". In: Journal of Functional Analysis 274.11 (June 2018), pp. 3090–3123.
[12] David Marr. Vision: A Computational Investigation into the Human Representation and Processing of Visual Information. W. H. Freeman, 1982.
[13] Hiroaki Daido and Kenji Nakanishi. "Aging and clustering in globally coupled oscillators". In: Phys. Rev. E 75.5 (May 2007), p. 056206.
[14] Hiroaki Daido and Kenji Nakanishi. "Aging Transition and Universal Scaling in Oscillator Networks". In: Phys. Rev. Lett. 93.10 (Aug. 2004), p. 104101.

[15] P. Dierckx. "An algorithm for smoothing, differentiation and integration of experimental data using spline functions". In: Journal of Computational and Applied Mathematics 1.3 (Sept. 1975), pp. 165–184.
[16] Eusebius J. Doedel et al. AUTO-07P: Continuation and bifurcation software for ordinary differential equations. Tech. rep. 2007.
[17] Bard Ermentrout, John Campbell, and George Oster. "A model for shell patterns based on neural activity". In: Veliger 28.4 (1986), pp. 369–388.
[18] F. A. Schilder. "Die Bezeichnung der Zahndichte der Cypraeacea". In: Archiv für Molluskenkunde 87 (1958), pp. 77–80.
[19] Robert C. Froemke. "Plasticity of Cortical Excitatory-Inhibitory Balance". In: Annual Review of Neuroscience 38.1 (July 2015), pp. 195–219.
[20] Zhenqiang Gong et al. "Evolution of patterns on Conus shells". In: PNAS 109.5 (Jan. 2012), E234–E241.
[21] Martin Greenberger et al. "Computers and the world of the future". In: (1965).
[22] Vincent Hakim and Wouter-Jan Rappel. "Dynamics of the globally coupled complex Ginzburg-Landau equation". In: Phys. Rev. A 46.12 (Dec. 1992), R7347–R7350.
[23] Jack S. Hale et al. "Simple and extensible plate and shell finite element models through automatic code generation tools". In: Computers & Structures 209 (2018), pp. 163–181.
[24] Hans Meinhardt. The Algorithmic Beauty of Sea Shells. Springer-Verlag, 1995.
[25] I. M. Gelfand and S. V. Fomin. Calculus of Variations. Dover edition. Dover Publications, Inc., 2000.
[26] Takahiro Irie and Yoh Iwasa. "Optimal growth model for the latitudinal cline of shell morphology in cowries (genus Cypraea)". In: Evolutionary Ecology Research 5.8 (2003), pp. 1133–1149.
[27] J. E. Marsden and M. McCracken. The Hopf Bifurcation and Its Applications. Springer-Verlag New York, Inc., 1976.
[28] J. D. Murray. Mathematical Biology, II: Spatial Models and Biomedical Applications. Third Edition. Springer Science & Business Media, 2003.
[29] Jerry G. Walls. Cowries. Second Edition, Revised. T.F.H. Publications Inc. Ltd., 1979.
[30] Michael Kass, Andrew Witkin, and Demetri Terzopoulos. "Snakes: Active contour models". In: Int. J. Comput. Vision 1.4 (Jan. 1988), pp. 321–331.
[31] Felix P. Kemeth, Sindre W. Haugland, and Katharina Krischer. "Cluster singularity: The unfolding of clustering behavior in globally coupled Stuart-Landau oscillators". In: Chaos 29.2 (Feb. 2019), p. 023107.
[32] Wai Lim Ku, Michelle Girvan, and Edward Ott. "Dynamical transitions in large systems of mean field-coupled Landau-Stuart oscillators: Extensive chaos and cluster states". In: Chaos 25.12 (Dec. 2015), p. 123122.
[33] L. M. Pismen. Patterns and Interfaces in Dissipative Dynamics. Springer Berlin Heidelberg New York, 2006.

[34] Thomas Laubach, Arndt von Haeseler, and Martin J. Lercher. "TreeSnatcher plus: capturing phylogenetic trees from images". In: BMC Bioinformatics 13.1 (May 24, 2012), p. 110.
[35] David R. Lindberg and John S. Pearse. "Experimental manipulation of shell color and morphology of the limpets Lottia asmi (Middendorff) and Lottia digitalis (Rathke) (: Patellogastropoda)". In: Journal of Experimental Marine Biology and Ecology 140.3 (Aug. 14, 1990), pp. 173–185.
[36] Eve Marder and Jean-Marc Goaillard. "Variability, compensation and homeostasis in neuron and network function". In: Nature Reviews Neuroscience 7.7 (July 2006), pp. 563–574.
[37] Maria Schilder. "Length, Breadth, and Dentation in Living Cowries". In: The Veliger 9.4 (1967), pp. 369–376.
[38] Christopher Meyer. "Toward comprehensiveness: Increased molecular sampling within Cypraeidae and its phylogenetic implications". In: Malacologia 46 (Jan. 2004).
[39] D. E. Moulton, A. Goriely, and R. Chirat. "Mechanical growth and morphogenesis of seashells". In: Journal of Theoretical Biology 311 (Oct. 2012), pp. 69–79.
[40] Derek E. Moulton, Alain Goriely, and Régis Chirat. "Mechanics unlocks the morphogenetic puzzle of interlocking bivalved shells". In: Proceedings of the National Academy of Sciences (2019).
[41] P. M. Naghdi. "On the theory of thin elastic shells". In: Quarterly of Applied Mathematics 14.4 (1957), pp. 369–380.
[42] Celeste M. Nelson and Mina J. Bissell. "Of Extracellular Matrix, Scaffolds, and Signaling: Tissue Architecture Regulates Development, Homeostasis, and Cancer". In: Annual Review of Cell and Developmental Biology 22.1 (Nov. 2006), pp. 287–309.
[43] Timothy O'Leary. "Homeostasis, failure of homeostasis and degenerate ion channel regulation". In: Current Opinion in Physiology 2 (Apr. 2018), pp. 129–138.
[44] Oleh E. Omel'chenko and Edgar Knobloch. "Chimerapedia: coherence–incoherence patterns in one, two and three dimensions". In: New J. Phys. 21.9 (Sept. 2019), p. 093034.
[45] Marco Passamonti. "The family ( Cypraeoidea) an unexpected case of neglected animals". In: Biodiversity Journal 6.1 (2015), pp. 449–466.
[46] V. F. Pisarenko. "The Retrieval of Harmonics from a Covariance Function". In: Geophysical Journal International 33.3 (Sept. 1, 1973), pp. 347–366.
[47] Andrei D. Polyanin and Alexei I. Zhurov. "Parametrically defined nonlinear differential equations, differential–algebraic equations, and implicit ODEs: Transformations, general solutions, and integration methods". In: Applied Mathematics Letters 64 (Feb. 2017), pp. 59–66.
[48] André Röhm, Kathy Lüdge, and Isabelle Schneider. "Bistability in two simple symmetrically coupled oscillators with symmetry-broken amplitude- and phase-locking". In: Chaos 28.6 (June 2018), p. 063114.
[49] Enrico Savazzi. "The colour patterns of cypraeid gastropods". In: Lethaia 31.1 (1998), pp. 15–27.

[50] I. J. Schoenberg. "Spline functions, convex curves and mechanical quadrature". In: Bull. Amer. Math. Soc. 64.6 (Nov. 1958), pp. 352–357.
[51] Gautam C. Sethia and Abhijit Sen. "Chimera States: The Existence Criteria Revisited". In: Phys. Rev. Lett. 112.14 (Apr. 2014), p. 144101.
[52] E. Simoncelli. "A rotation invariant pattern signature". In: Proceedings of the 3rd IEEE International Conference on Image Processing. Vol. 3. Sept. 1996, pp. 185–188.
[53] R. Smith. "An Overview of the Tesseract OCR Engine". In: Proceedings of the Ninth International Conference on Document Analysis and Recognition - Volume 02. ICDAR '07. USA: IEEE Computer Society, 2007, pp. 629–633.
[54] Kunitsugu Soda. "Dynamics of Stiff Chains. I. Equation of Motion". In: J. Phys. Soc. Jpn. 35.3 (Sept. 1973), pp. 866–870.
[55] John Strain. "The Fast Gauss Transform with Variable Scales". In: SIAM Journal on Scientific and Statistical Computing 12.5 (Sept. 1991), pp. 1131–1139.
[56] S. P. Strong et al. "Entropy and Information in Neural Spike Trains". In: Physical Review Letters 80.1 (Jan. 5, 1998), pp. 197–200.
[57] D'Arcy Wentworth Thompson. On Growth and Form. 1942.
[58] Alan Mathison Turing. "The chemical basis of morphogenesis". In: Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences 237.641 (1952), pp. 37–72.
[59] V. I. Arnold. Geometrical Methods in the Theory of Ordinary Differential Equations. Second Edition. Springer Science & Business Media, 1988.
[60] L. J. P. van der Maaten and G. E. Hinton. "Visualizing High-Dimensional Data Using t-SNE". In: Journal of Machine Learning Research 9 (2008), pp. 2579–2605.
[61] John Von Neumann, Arthur W. Burks, et al. "Theory of self-reproducing automata". In: IEEE Transactions on Neural Networks 5.1 (1966), pp. 3–14.
[62] Suzanne T. Williams. "Molluscan shell colour". In: Biological Reviews 92.2 (2017), pp. 1039–1058.
[63] Arthur T. Winfree. "Biological rhythms and the behavior of populations of coupled oscillators". In: Journal of Theoretical Biology 16.1 (July 1, 1967), pp. 15–42.
[64] Y. Kuramoto. Chemical Oscillations, Waves, and Turbulence. Dover Publications, Inc., 2003.