
Ordinary Difference Equation of order r = r-order ode (note: in these notes "ode" abbreviates Ordinary Difference Equation, as distinct from ODE for differential equations):

K[yn, yn−1, yn−2, …, yn−r ; t] = 0 , where yn = y(t0+nτ)

Solution: y = y(t) evaluated at the points t0+nτ , where n takes integer values

To solve the r-order difference equation means to find solutions y = y(t) such that the sequence yn, yn−1, yn−2, …, yn−r satisfies the given equation

The solution of the ode of order r in general involves r arbitrary constants, which must be determined by limit conditions on the yn, yn−1, yn−2, …, yn−r (initial conditions, boundary conditions). ode, like ODE, describe processes in terms of local rates of change

Ordinary Difference Equation of order r, r = 1,2,…, as an equation on yt and its differences: Backward Difference Equations:

K[yt, yt−1, yt−2, …, yt−r ; t] = 0 ⟺ B[yt, ∇yt, ∇²yt, …, ∇ʳyt ; t] = 0

∇ʳ are the Backward Difference Operators of order r = 1,2,…

∇yt = yt − yt−1 = (I−V)yt , ∇ = I−V

V = S⁻¹ is the Backward Shift Operator on Sequences: V(ψ1, ψ2, ψ3, …) = (0, ψ1, ψ2, …), (Vψ)ν = ψν−1 , ν = 1,2,3,…, with ψ0 = 0

∇²yt = ∇(∇yt) = ∇(yt − yt−1) = ∇yt − ∇yt−1 = (yt − yt−1) − (yt−1 − yt−2)

∇²yt = yt − 2yt−1 + yt−2 = (I−V)²yt

∇³yt = ∇(yt − 2yt−1 + yt−2) = (yt − yt−1) − 2(yt−1 − yt−2) + (yt−2 − yt−3)

= yt − 3yt−1 + 3yt−2 − yt−3 = (I−V)³yt

The r-th Backward Difference Operator: ∇ʳ = (I−V)ʳ

Forward Difference Equation

K[yt+r, yt+r−1, yt+r−2, …, yt ; t] = 0 ⟺ F[∆ʳyt, ∆ʳ⁻¹yt, ∆ʳ⁻²yt, …, yt ; t] = 0

yt+n = ∑ₖ₌₀ⁿ C(n,k) ∆ᵏyt , where C(n,k) is the binomial coefficient

∆ʳ are the Forward Difference Operators of order r = 1,2,…

∆yt = yt+1 − yt = (S−I)yt , ∆ = S−I

S is the Forward Shift Operator on Sequences S(ψ1, ψ2, ψ3,...)=(ψ2, ψ3, ψ4,...)

(Sψ)ν = ψν+1 , ν=1,2,3,…

∆²yt = ∆(∆yt) = ∆(yt+1 − yt) = ∆yt+1 − ∆yt = (yt+2 − yt+1) − (yt+1 − yt)

∆²yt = yt+2 − 2yt+1 + yt = (S−I)²yt

∆³yt = ∆(yt+2 − 2yt+1 + yt) = (yt+3 − yt+2) − 2(yt+2 − yt+1) + (yt+1 − yt)

= yt+3 − 3yt+2 + 3yt+1 − yt = (S−I)³yt

∆ⁿyt = ∑ₖ₌₀ⁿ (−1)ⁿ⁻ᵏ C(n,k) yt+k = ∑ₖ₌₀ⁿ (−1)ⁿ⁻ᵏ C(n,k) Sᵏyt

∆ⁿ = (S−I)ⁿ = ∑ₖ₌₀ⁿ (−1)ⁿ⁻ᵏ C(n,k) Sᵏ
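A minimal Python check that n-fold differencing agrees with the binomial formula above (the function names are illustrative, not from the notes):

```python
from math import comb

def forward_diff(y, n):
    """Apply the forward difference operator Delta to the sequence y, n times."""
    for _ in range(n):
        y = [y[k + 1] - y[k] for k in range(len(y) - 1)]
    return y

def forward_diff_binomial(y, n):
    """Delta^n y_t = sum_{k=0}^{n} (-1)^(n-k) * C(n,k) * y_{t+k}."""
    return [sum((-1) ** (n - k) * comb(n, k) * y[t + k] for k in range(n + 1))
            for t in range(len(y) - n)]

y = [t ** 3 for t in range(10)]                           # any test sequence
assert forward_diff(y, 3) == forward_diff_binomial(y, 3)  # both give [6, 6, ...]
```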

EXAMPLE:

3∇²yt + 2∇yt + 7yt = 0 ⟺ yt = (2/3)yt−1 − (1/4)yt−2

Recurrence Equations and Dynamical Systems as ode

The time-independent ode K[yt, yt−1, yt−2, …, yt−r] = 0 is an Implicit Equation for yt

For solvable ode, the implicit function S defines the Recurrence Equation of order r: yt = S[yt−1, yt−2, …, yt−r]

The Recurrence Equations of order r define the r-order Recursive Sequences = Recurrence Sequences

The solution to the Recurrence Equation is obtained by iterating S with initial conditions y0, y1, …, yr−1. The relation yt = S[yt−1, yt−2, …, yt−r] is also called a Formula

(Y,S) define a Discrete DS
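A minimal Python sketch of iterating a Recurrence Equation of order r from its initial conditions (the function names are illustrative):

```python
def iterate(S, initial, steps):
    """Iterate y_t = S[y_{t-r}, ..., y_{t-1}] given initial conditions y_0..y_{r-1}."""
    y = list(initial)
    r = len(initial)
    for _ in range(steps):
        y.append(S(*y[-r:]))   # feed the last r terms, oldest first
    return y

# order-2 example: y_t = y_{t-1} + y_{t-2} (Fibonacci)
print(iterate(lambda a, b: a + b, [0, 1], 8))   # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```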

The Euler ode of the ODE dy/dt = F(y(t), t) with y0 = y(t0)

is the 1st-order Recurrence Equation (Recursive Sequence):

yν+1 = yν + τ F(yν , t0+ντ) = S(yν) , ν = 0,1,2,…

with increments:

η0 = y0 = y(t0)

η1 = τ F(y0 , t0)

η2 = τ F(y1 , t0+τ)

ην = τ F(yν−1 , t0+(ν−1)τ)

so that yν = η0 + η1 + ⋯ + ην
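A minimal Python sketch of the Euler recurrence (the test ODE dy/dt = y is an illustrative choice):

```python
import math

def euler(F, y0, t0, tau, steps):
    """Iterate the Euler recurrence y_{n+1} = y_n + tau * F(y_n, t0 + n*tau)."""
    y = y0
    ys = [y]
    for n in range(steps):
        y = y + tau * F(y, t0 + n * tau)
        ys.append(y)
    return ys

# dy/dt = y with y(0) = 1: Euler approximates e^t
approx = euler(lambda y, t: y, 1.0, 0.0, 0.001, 1000)[-1]
print(approx, math.exp(1.0))   # ~2.71692 vs 2.71828
```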

Maps as DS

1st-order Recurrence Equation: yt+1 = S[yt] , S is the (Forward) Shift Map

Linear ode

Solutions by Systematic Methods. The most developed Theory

Analogy with Linear ODE

EXAMPLES

Arithmetic Progression: yt+1 = yt + w , t = 0,1,2,3,…, yt real numbers

yt = y0 + tw

Geometric Progression: yt+1 = a yt , t = 0,1,2,3,…, yt real numbers

yt = aᵗ y0

First-order Linear Recursive Sequence: yt+1 = a yt + w , t = 0,1,2,3,…, yt real numbers, a ≠ 1

yt = aᵗ y0 + w (1 − aᵗ)/(1 − a)
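A minimal Python check of the closed form against direct iteration (the parameter values are illustrative):

```python
def linear_rec(a, w, y0, T):
    """Iterate y_{t+1} = a*y_t + w and compare with a^T*y0 + w*(1-a^T)/(1-a)."""
    y = y0
    for t in range(T):
        y = a * y + w
    closed = a ** T * y0 + w * (1 - a ** T) / (1 - a)   # requires a != 1
    return y, closed

print(linear_rec(0.5, 3.0, 1.0, 20))   # both components agree
```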

Euler ode

Reduction to Linear

yt+3² + α yt+2² + β yt+1² + γ yt² = 0 ⟺ xt+3 + α xt+2 + β xt+1 + γ xt = 0

with xt = yt²

Fibonacci Sequence

Ft = Ft−1 + Ft−2

F0 = 0 , F1 =1

Solution Binet 1843

Ft = (φᵗ − (−φ)⁻ᵗ)/√5 = ((1+√5)ᵗ − (1−√5)ᵗ)/(2ᵗ√5)   Exercise: verification {0.02}; finding the solution {0.3}
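A minimal Python check of Binet's formula against direct iteration (rounding compensates for floating-point error):

```python
from math import sqrt

phi = (1 + sqrt(5)) / 2          # the Golden Number

def binet(t):
    """Binet's formula F_t = (phi^t - (-phi)^(-t)) / sqrt(5)."""
    return round((phi ** t - (-phi) ** (-t)) / sqrt(5))

F = [0, 1]
for _ in range(15):
    F.append(F[-1] + F[-2])
assert [binet(t) for t in range(len(F))] == F
```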

φ = (1+√5)/2 , the Golden Number

Non-Linear ode

Explicit solutions are known only for some isolated special classes of non-linear recurrence relations and for specific parameters. Solutions for general parameter values are not known. In such ode:

we use computers to calculate large numbers of terms

we study qualitative questions:

- the behaviour of the solutions as n→∞
- Stability (Dynamical, Structural), which on the whole is analogous to that for ordinary differential equations
- Statistical Properties

Harmonic Progression

yt+1 = yt/(1 − w yt) , t = 0,1,2,3,…, yt real numbers

1/yt , t = 0,1,2,3,… is an Arithmetic Progression

Generalization to Riccati and Homographic (Möbius) recurrences

The Hofstadter–Conway $10,000 sequence is defined as follows

a(n) = a(a(n−1)) + a(n − a(n−1)) , n = 3,4,… ; a(1) = a(2) = 1

The first few terms of this sequence are 1, 1, 2, 2, 3, 4, 4, 4, 5, 6, 7, 7, 8, 8, 8, 8, 9, 10, 11, 12, ...

(A004001 in OEIS)
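A minimal Python sketch of the sequence (1-based list indexing mirrors the definition):

```python
def conway(n_max):
    """Hofstadter-Conway sequence a(n) = a(a(n-1)) + a(n - a(n-1)), a(1) = a(2) = 1."""
    a = [None, 1, 1]                      # index 0 unused: 1-based indexing
    for n in range(3, n_max + 1):
        a.append(a[a[n - 1]] + a[n - a[n - 1]])
    return a[1:]

print(conway(20))   # 1, 1, 2, 2, 3, 4, 4, 4, 5, 6, 7, 7, 8, 8, 8, 8, 9, 10, 11, 12
```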

Theorem (Conway): lim_{n→∞} a(n)/n = 1/2   {0.5}

Conway J. 1988, Some Crazy Sequences, Lecture at AT&T Bell Labs, July 15

Batrachion Functions

John Horton Conway offered a prize of $10,000 to find a value of n for which |a(n)/n − 1/2| < 1/20. The answer n = 1489 was found by Mallows.

Schroeder M. 1991, "John Horton Conway's 'Death Bet'", in Fractals, Chaos, Power Laws, New York: W. H. Freeman, pp. 57-59

Hofstadter later claimed he had found the sequence and its structure some 10–15 years before Conway posed his challenge [private communication with Klaus Pinn]

Renyi Map S: [0,1) → [0,1): yt+1 = 2yt (mod 1) = 2yt 1[0,½)(yt) + (2yt − 1) 1[½,1)(yt)

Geometric Progression with ratio 2 on [0,1)

Periodic Orbits: Error-Free Computation
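A minimal Python sketch illustrating error-free computation of a periodic orbit of the Renyi map with exact rational arithmetic; in binary floating point the same orbit collapses to 0 after about 53 doublings:

```python
from fractions import Fraction

def renyi(y):                      # y_{t+1} = 2*y_t (mod 1)
    return (2 * y) % 1

# Exact rational arithmetic reproduces the true period-3 orbit 1/7 -> 2/7 -> 4/7 -> 1/7.
# With floats, each doubling discards one significand bit, so the orbit hits 0.
y = Fraction(1, 7)
orbit = [y]
for _ in range(3):
    y = renyi(y)
    orbit.append(y)
print(orbit)       # [1/7, 2/7, 4/7, 1/7] : periodic, error-free
```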

Tent Map

S: [0,1) → [0,1): yt+1 = 2yt 1[0,½)(yt) + (2 − 2yt) 1[½,1)(yt)

Symmetry: S(y) = S(1−y)

Ulam-von Neumann solution: yt = (1/π) arccos(cos(2ᵗπy0)) , t = 1,2,3,…, ∀ y0 in [0,1)

Exercises: verification {0.2}; 3 computations {0.8}; Computability of the Ulam-von Neumann Analytic Formula {2}

Λογιστικη Απεικονιση ()

S: [0,1)→ [0,1): yt+1 = α yt (1- yt) has known exact solutions only for α=−2,2 and 4.

Ulam-von Neumann solution (for α = 4): yt = sin²(2ᵗ arcsin √y0) , t = 1,2,3,…   Exercise {0.2}
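A minimal Python sketch comparing iteration of the α = 4 logistic map with the exact solution (chaos amplifies rounding error, so close agreement holds only for the first steps):

```python
from math import sin, asin, sqrt

def logistic_orbit(y0, T):
    """Iterate y_{t+1} = 4*y*(1-y) and compare with sin^2(2^t * arcsin(sqrt(y0)))."""
    y, orbit = y0, [y0]
    for _ in range(T):
        y = 4 * y * (1 - y)
        orbit.append(y)
    exact = [sin(2 ** t * asin(sqrt(y0))) ** 2 for t in range(T + 1)]
    return orbit, exact

orbit, exact = logistic_orbit(0.2, 8)
print(max(abs(a - b) for a, b in zip(orbit, exact)))   # small for few steps
```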

Equivalence of the Logistic Map with the Tent Map: Exercise {0.5}

Chebyshev Map

Sm: [−2,2) → [−2,2): yt+1 = 2cos(m arccos(yt/2)) , t = 1,2,3,…, m = 2,3,…

Gauss Map S: [0,1) → [0,1):

yt+1 = 1/yt (mod 1) = {1/yt} = 1/yt − [1/yt] = the fractional part of 1/yt , t = 1,2,3,…

Continued Fractions representations of Real numbers
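A minimal Python sketch extracting continued-fraction digits with the Gauss map, using exact rational arithmetic (the sample fraction is illustrative):

```python
from fractions import Fraction

def gauss_digits(y, n):
    """Continued-fraction digits of y in (0,1): a_t = [1/y_t], y_{t+1} = {1/y_t}."""
    digits = []
    for _ in range(n):
        if y == 0:
            break                      # rationals terminate
        inv = 1 / y
        a = int(inv)                   # the integer part [1/y]
        digits.append(a)
        y = inv - a                    # the Gauss map: fractional part of 1/y
    return digits

print(gauss_digits(Fraction(13, 31), 10))   # [2, 2, 1, 1, 2]
```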

2-dim Gauss Map = Mixmaster Map

S: [0,1)×[0,1) → [0,1)×[0,1): (x, y) ↦ ( 1/x − [1/x] , 1/([1/x] + y) )

S⁻¹: (x, y) ↦ ( 1/(x + [1/y]) , 1/y − [1/y] )

Cusp Map

S :[ 1,1]  [  1,1] y S y 12 y

Baker Map B: [0,1)×[0,1) → [0,1)×[0,1): (x, y) ↦ (2x, y/2) mod 1

Statistical Irreversibility. Instability in the horizontal direction, Stability in the vertical direction: Hyperbolic Structure, the Local Geometric Characteristic of Chaos

Baker Map as Source of Information

Information of the Baker Map with respect to the generating partition {Ξ0 = lower, Ξ1 = upper}:

𝒮0(ξ) = 2[−(1/2) log₂(1/2)] = log₂2 = 1 bit
𝒮1(ξ) = 4[−(1/4) log₂(1/4)] = log₂4 = 2 bits
𝒮2(ξ) = 8[−(1/8) log₂(1/8)] = log₂8 = 3 bits
𝒮t(ξ) = (t+1) ℐ0 = (t+1) bits

The Linear Hyperbolic Map S: ℝ² → ℝ²: (x, y) ↦ (2x, y/2)

B = S mod 1

The Baker Map = 2-dimensional Tent Map

The Cat Map on the Torus

S: [0,1)×[0,1) → [0,1)×[0,1): (x, y) ↦ (1 1; 1 2)(x, y) = (x + y, x + 2y) mod 1

Anosov 1963, Sov. Math. Dokl. 4, 1153-6

J = det(1 1; 1 2) = 1

(1 1; 1 2)⁻¹ = (2 −1; −1 1)

Eigenvalues of (1 1; 1 2): λ+ = (3+√5)/2 > 1 , λ− = (3−√5)/2 < 1

λ+λ- =1

Eigenvectors of (1 1; 1 2):

η+ = (1, φ) , the direction of expansion

η− = (1, −1/φ) , the direction of contraction

φ = (1+√5)/2 , the Golden Number

(Figure: two iterations of the Cat Map)

After stretching, the deformed square is cut into 4 pieces. The 4 pieces are reassembled and stacked into a square.

Mixing of 2 or more metals by mechanical alloying takes place due to repeated stretching and folding, described by simple chaotic maps (Baker, Cat) [Shingrou, ea 1995]

Periodic Points

(x, y) is a Periodic point of S with period T ⟺ (x, y) is a fixed point of Sᵀ

(x, y) = (0, 0) is the only fixed point of S

Only 4 period-2 points exist: (1/5, 3/5) , (2/5, 1/5) , (3/5, 4/5) , (4/5, 2/5)

Theorem: 1) (x, y) is a Periodic Point of S ⟺ both x and y are rationals with the same denominator. 2) The Periodic Points of S are countably infinite. 3) The set of Periodic Points of S is dense in the Unit Square [Guckenheimer 21]
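A minimal Python sketch checking the four period-2 points with exact rational arithmetic (Fraction avoids floating-point error):

```python
from fractions import Fraction

def cat(p):
    """The Cat Map (x, y) -> (x + y, x + 2y) mod 1."""
    x, y = p
    return ((x + y) % 1, (x + 2 * y) % 1)

# rational points are periodic; the four period-2 points have denominator 5
for q in [(Fraction(1, 5), Fraction(3, 5)), (Fraction(2, 5), Fraction(1, 5)),
          (Fraction(3, 5), Fraction(4, 5)), (Fraction(4, 5), Fraction(2, 5))]:
    assert cat(cat(q)) == q and cat(q) != q
```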

Embedding in a Flow: Exercise {2}. Computability: Exercise {2}

Torus Maps

S: [0,1)×[0,1) → [0,1)×[0,1): (x, y) ↦ (a b; c d)(x, y) = (ax + by, cx + dy) mod 1 , where a, b, c, d are integers

Henon Map

S: ℝ² → ℝ²: (x, y) ↦ (1 − ax² + y, bx)

Jacobian: J = −b , J⁻¹ = −1/b

S⁻¹(x, y) = ( y/b , x − 1 + (a/b²)y² )

b ≠ 0: Reversible; b = 0: reduces to the Logistic Map; b = 1: Conservative

The Henon Map is the Positive Dilation of the Logistic Map

Simplification of the Poincare Map of the Lorenz ODE: Exercise {2}

Hénon M. 1976, A two-dimensional mapping with a strange attractor, Communications in Mathematical Physics 50, 69–77.
Curry J. 1979, On the Henon Transformation, Commun. Math. Phys. 68, 129-140. Chaos computationally + investigation: Exercise {2}

Marotto F. 1979, Chaotic Behavior in the Henon Mapping, Commun. Math. Phys. 68, 187-194. Chaos theoretically: Exercise {2}

Hénon attractor for a = 1.4 and b = 0.3
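A minimal Python sketch generating points on the Hénon attractor for a = 1.4, b = 0.3 (the transient cutoff is an illustrative choice):

```python
def henon(x, y, a=1.4, b=0.3):
    return 1 - a * x * x + y, b * x

x, y = 0.0, 0.0
points = []
for t in range(10_000):
    x, y = henon(x, y)
    if t > 100:                 # keep only post-transient points
        points.append((x, y))
print(len(points), points[-1])
```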

Lozi Map

S: ℝ² → ℝ²: (x, y) ↦ (1 − a|x| + y, bx)

There is a Strange Attractor made up of Straight Segments. Exercise {1 + 1 (simulation 2D) + 1 (simulation 3D)}

The Nicholson–Bailey Host-Parasite Dynamics

Parasites lay an egg at every encounter with the host:

Nt+1 = r Nt e^(−αΠt)
Πt+1 = g Nt (1 − e^(−αΠt))

t = 0,1,2,…,T are the successive stages (generation times); t = 0 is the initial generation, t = T is the last generation

N0 is the initial Host population density, Π0 is the initial Parasite population density
Nt is the Host population at time (generation) t
Πt is the Parasite population at time (generation) t
e^(−αΠt) is the fraction of hosts not parasitized
α is the searching efficiency (the probability that a parasite will encounter some host over the course of its lifetime)
r is the number of eggs laid by a host
g is the number of eggs laid by a parasite on a single host
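A minimal Python sketch iterating the Nicholson–Bailey recurrence (all parameter values are illustrative); it exhibits the model's characteristic diverging host-parasite oscillations:

```python
from math import exp

def nicholson_bailey(N0, P0, r, g, alpha, T):
    """Iterate N_{t+1} = r*N_t*exp(-alpha*P_t), P_{t+1} = g*N_t*(1 - exp(-alpha*P_t))."""
    N, P = N0, P0
    traj = [(N, P)]
    for _ in range(T):
        frac = exp(-alpha * P)            # fraction of hosts not parasitized
        N, P = r * N * frac, g * N * (1 - frac)
        traj.append((N, P))
    return traj

for N, P in nicholson_bailey(20.0, 10.0, r=2.0, g=1.0, alpha=0.05, T=10):
    print(f"{N:10.2f} {P:10.2f}")
```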

Nicholson, A. J., and V. A. Bailey. 1935, The balance of animal populations, Proceedings of the Zool. Soc. of London 1: 551-598. Exercise {1 (Th) + 1 (sim)}

Rational Complex Maps = RCM

S: ℂ → ℂ: z ↦ S(z) = P(z)/Q(z) , P, Q polynomials

S: Rational Complex Function

Julia Set = [Fatou Set]ᶜ

Fractals

Demetriou A., Christou C., Spanoudis G., Platsidou M. 2002, Blackwell, Boston

Quadratic Map: S z = z² + c , c a Complex Parameter

The Mandelbrot Set for the family of Quadratic Maps z ⟼ z² + c:

M = {c ∈ ℂ | the orbit of 0: (0, c, c² + c, (c² + c)² + c, …) is bounded}

c = 1: (0, 1, 2, 5, 26, …) unbounded ⟹ 1 is not in the Mandelbrot set.
c = i: (0, i, −1 + i, −i, −1 + i, −i, …) bounded ⟹ i is in the Mandelbrot set.

The Mandelbrot set is Self-Similar
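A minimal Python escape-time sketch of the membership test (the iteration cap and escape radius are the usual conventions, chosen here for illustration):

```python
def in_mandelbrot(c, max_iter=100, bound=2.0):
    """Escape-time test: c is excluded once the orbit of 0 leaves |z| <= bound."""
    z = 0
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > bound:
            return False
    return True            # no escape detected within max_iter steps

print(in_mandelbrot(1))    # False: 0, 1, 2, 5, 26, ... is unbounded
print(in_mandelbrot(1j))   # True:  0, i, -1+i, -i, -1+i, ... is bounded
```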

Fractal Aesthetics

Fractals as Invariant Sets of Dynamical Systems

VIDEO

Correspondence between the Mandelbrot Set M and the Logistic Map

The intersection of M with the real axis is the interval [-2, 0.25]. The parameters along this interval can be put in one-to-one correspondence with the parameter α of the Logistic family x ⟼ αx(1−x) , α ∈ [1,4]

The correspondence between Re(c) and α is: Re(c) = (α/2)(1 − α/2)   Exercise {1}

Constructions from the Mandelbrot Set

Buddhabrot http://www.superliminal.com/fractals/bbrot/bbrot.htm Exercise {1}

ITERATED FUNCTION SYSTEMS = IFS

A family of Contractions: Sν: Y → Y , ν = 1,2,…,Ν on the Complete Metric Space Y

Theorem (Hutchinson 1981): Every IFS on Y = ℝⁿ has a unique Attracting Invariant Set Ξ (the fixed point of the set map Ξ ↦ S1(Ξ) ∪ … ∪ SΝ(Ξ)), which is Compact (closed and bounded)

Proof: Exercise {0.5}

Hutchinson J. 1981, Fractals and Self Similarity, Indiana Univ. Math. J. 30, 713–747

Two problems arise: 1) Direct Problem: for a given IFS, find the Invariant Set Ξ. 2) Inverse Problem: for a given Compact set Ξ, find an IFS {Fi} that has Ξ as its Invariant Set.

The Direct Problem is solved by deterministic algorithms based on the proof of the Fixed Point Theorem

Construction of the Fixed Set Ξ: Exercise {0.5}, Exercise {0.5}. Explicit Examples (2): Exercise {0.5}

The Inverse Problem is solved approximately by the Collage Theorem. The construction involves random iteration algorithms

Random IFS: (pν, Sν) , ν = 1,2,…,Ν , ∑ν₌₁ᴺ pν = 1

The Collage Theorem tells us that "to find an IFS whose attractor is "close to" or "looks like" a given set, one must endeavor to find a set of transformations - contraction mappings on a suitable set within which the given set lies - such that the union, or collage, of the images of the given set under the transformations is near to the given set".

Barnsley M. 1988, "Fractals Everywhere", Academic Press, New York

IFS Examples produced by pairs of 2-dimensional affine maps.
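A minimal Python sketch of the random iteration (chaos game) algorithm, using the three-contraction IFS of the Sierpinski triangle as an illustrative example (equal probabilities pν = 1/3):

```python
import random

# three contractions with ratio 1/2 toward the triangle's vertices
vertices = [(0.0, 0.0), (1.0, 0.0), (0.5, 0.866)]
maps = [lambda x, y, v=v: ((x + v[0]) / 2, (y + v[1]) / 2) for v in vertices]

x, y = random.random(), random.random()
points = []
for t in range(20_000):
    x, y = random.choice(maps)(x, y)    # random iteration algorithm
    if t > 20:                          # discard the approach to the attractor
        points.append((x, y))
print(len(points))
```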

Sprott J. 1994, Automatic Generation of Iterated Function Systems, Comput. & Graphics 18, 417-425

Barnsley Fern

IFS Applications

Image Representation-Reconstruction as IFS

Fractal Compression

Fractal Interpolation. Increase resolution

Iterated Systems Inc. (1987): over 20 patents related to fractal compression

Recursive Functions

Wolfram, S. 2002, "Recursive Sequences", in A New Kind of Science, Champaign, IL: Wolfram Media, pp. 128-131 and 890-891.

PARTIAL DIFFERENCE EQUATIONS and CELLULAR AUTOMATA (pde and Cellular Automata)

Partial Difference Equations: Yk+1,m+1 − q Yk+1,m − p Yk,m = 0 , k, m = 1,2,3,…

Wolfram S. 2002, A New Kind of Science, Wolfram Media, Champaign, Illinois.

Cellular Automata: a net of cells with finite states (2-adic: ON, OFF) and neighborhood rules on the lattice.

An initial state (time t=0) is selected by assigning a state for each cell.

A new generation is created (t + 1), according to some fixed rule that determines the new state of each cell in terms of the current state of the cell and the states of the linked cells

Typically, the rule for updating the state of cells is the same for each cell, does not change over time, and is applied to the whole grid simultaneously. Example: the cell is "ON" in the next generation if exactly two of the cells in the neighborhood are "ON" in the current generation; otherwise the cell is "OFF" in the next generation. A minimal sketch of this rule appears below.
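A minimal Python sketch of this example rule on a ring of cells (ring size, wiring, and initial state are illustrative):

```python
def step(grid, neighbors):
    """One synchronous update: a cell is ON iff exactly two linked cells are ON."""
    return [1 if sum(grid[j] for j in neighbors[i]) == 2 else 0
            for i in range(len(grid))]

# a ring of 10 cells, each linked to its two nearest neighbours
N = 10
neighbors = {i: [(i - 1) % N, (i + 1) % N] for i in range(N)}
grid = [1, 1, 0, 0, 1, 0, 0, 1, 1, 0]       # initial state at t = 0
for t in range(5):
    print(grid)
    grid = step(grid, neighbors)
```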

Game of Life 1970

VIDEO

Probabilistic Cellular Automata

Network Dynamical Systems, t = 0,1,2,… ∈ 𝕋 = {0,1,2,…}

State Dynamics: Sᵗ: Y ⟶ Y : (x, w) ⟼ Sᵗ(x, w)

Sᵗκ: Y ⟶ Σ : (x, w) ⟼ xκ(t) = Sᵗκ(x, w) = Sᵗκ(x1, …, xN, wαβ) , the (Activation) Dynamics of the node κ

Sᵗκλ: Y ⟶ ℝ : (x, w) ⟼ wκλ(t) = Sᵗκλ(x, w) = Sᵗκλ(x1, …, xN, wαβ) , the (Learning) Dynamics-Algorithm

(xκ(t), wκλ(t)) = Sᵗ(xκ, wκλ) = ( Sᵗκ(x1, …, xN, wαβ) , Sᵗκλ(x1, …, xN, wαβ) ) , the solution of the Dynamical Equation.

Graphs are defined by the Adjacency Matrix Aκλ. Weighted Graphs are Graphs with weights at the Links. Networks are Weighted Graphs with Activation Variables at the Nodes. y is the state of the Graph, Weighted Graph, or Network.

Graphs: y = ( ⟦∑λ(α1λ + αλ1) > 0⟧ , ⟦∑λ(α2λ + αλ2) > 0⟧ , ⋮ ; Aκλ )

Weighted Graphs: y = ( ⟦∑λ(α1λ + αλ1) > 0⟧ , ⟦∑λ(α2λ + αλ2) > 0⟧ , ⋮ ; wκλ )

Networks: y = ( ψ1 , ψ2 , ⋮ ; wκλ )

The Iverson Bracket, named after Kenneth E. Iverson, is a notation that denotes a number that is 1 if the condition in square brackets is satisfied, and 0 otherwise:

⟦Q⟧ = { 1, if Q is True ; 0, if Q is False }

where Q is a statement that can be true or false. This notation was introduced by Kenneth E. Iverson in his programming language APL: Kenneth E. Iverson, A Programming Language, New York: Wiley, p. 11, 1962. See also Ronald Graham, Donald Knuth, and Oren Patashnik, Concrete Mathematics, Section 2.2: Sums and Recurrences. The specific restriction to square brackets was advocated by Donald Knuth to avoid ambiguity in parenthesized logical expressions: Knuth D. 1992, "Two Notes on Notation", American Mathematical Monthly, Volume 99, Number 5, May 1992, pp. 403–422. (TEX, arXiv:math/9205211)

The Iverson Bracket converts a Boolean value to a number 0 or 1. Counting is represented as summation.

Examples:

δκλ = ⟦κ = λ⟧ = { 1, if κ = λ ; 0, if κ ≠ λ }

1Ξ(x) = ⟦x ∈ Ξ⟧ = { 1, if x ∈ Ξ ; 0, if x ∉ Ξ }

The Hamming Distance between two strings of equal length N:

d((x1, …, xN), (y1, …, yN)) = ∑ν₌₁ᴺ ⟦xν ≠ yν⟧

Number of Active Nodes = ∑κ ⟦∑λ(ακλ + αλκ) > 0⟧
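A minimal Python sketch of the Iverson bracket and of the Hamming distance as counting-by-summation:

```python
def iverson(Q):
    """The Iverson bracket [[Q]]: 1 if Q is True, 0 if Q is False."""
    return 1 if Q else 0

def hamming(x, y):
    """d(x, y) = sum over nu of [[x_nu != y_nu]] for strings of equal length."""
    return sum(iverson(a != b) for a, b in zip(x, y))

print(hamming("10110", "11100"))    # 2
```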

Differential Equation for Continuous Time t ≥ 0, or Real Net Flow:

dy(t)/dt = Φ(y(t)) , i.e. d/dt (x(t), w(t)) = Φ(x(t), w(t))

componentwise:

d/dt ( x1(t), …, xN(t), wκλ(t) ) = ( Φ1(x1(t), …, xN(t), wαβ(t)), …, ΦN(x1(t), …, xN(t), wαβ(t)), Φκλ(x1(t), …, xN(t), wαβ(t)) )

Difference Equation for Discrete Time t = 0,1,2,…, or Integer Net: Update at discrete steps y(t+1) = S(y(t))

(x(t+1), w(t+1)) = S(x(t), w(t))

( x1(t+1), …, xN(t+1), wκλ(t+1) ) = ( S1(x1(t), …, xN(t), wαβ(t)), …, SN(x1(t), …, xN(t), wαβ(t)), Sκλ(x1(t), …, xN(t), wαβ(t)) )
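A minimal Python sketch of one synchronous update of the coupled node/link dynamics (the two-node example and its threshold rule are illustrative):

```python
def network_step(x, w, S_nodes, S_links):
    """One synchronous step: x_k(t+1) = S_k(x(t), w(t)), w_kl(t+1) = S_kl(x(t), w(t))."""
    x_new = [S_k(x, w) for S_k in S_nodes]
    w_new = {kl: S_kl(x, w) for kl, S_kl in S_links.items()}
    return x_new, w_new

# two nodes with a threshold rule and constant weights (no learning)
S_nodes = [lambda x, w: 1 if w[(1, 0)] * x[1] > 0.5 else 0,
           lambda x, w: 1 if w[(0, 1)] * x[0] > 0.5 else 0]
S_links = {(0, 1): lambda x, w: w[(0, 1)], (1, 0): lambda x, w: w[(1, 0)]}
x, w = [1, 0], {(0, 1): 1.0, (1, 0): 1.0}
for t in range(4):
    print(t, x)                 # the two states oscillate
    x, w = network_step(x, w, S_nodes, S_links)
```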

Cognition Graphs are Directed. Layered Representation (Drawing, Layout) of Graphs as Cognition Graphs (Neural Nets, Mind).

Bastert O. and Matuszewski C. 2001, Layered Drawings of Digraphs, in: Kaufmann M. and Wagner D. eds., 2001, Drawing Graphs, LNCS 2025, pp. 87-120, Springer, Berlin

Node Update Dynamics

Sκ: Y ⟶ Σ : (x, w) ⟼ Sκ(x, w) = Sκ(x1, …, xN, wαβ) , κ ∈ 𝒱

xκ(t+1) = Sκ(x1(t), …, xN(t), wαβ(t)) , the Update Dynamics of the node κ

1st order Discrete GDS

For each Node v, the node Update map Sv depends on the states associated to the 1-neighborhood of v

EXAMPLE : Cellular Automata

Neural Networks

Sκ(x1, …, xN, wαβ) = φκ(ζκ(x1, …, xN, wαβ))

ζκ: Y ⟶ ℝ : (x, w) ⟼ zκ = ζκ(x1, …, xN, wαβ) , the Input Aggregation Map

φκ: ℝ ⟶ Σ : zκ ⟼ φκ(zκ) , the Output Activation Map = Transition Map

(Figure: presynaptic Input Activity, postsynaptic Output Activity)

Neuron Model

[Haykin S. 1999, Neural Networks: A Comprehensive Foundation, Pearson Prentice Hall, New Jersey]

AGGREGATION MAPS — EXAMPLES

Bilinear AM: zκ = ζκ(x1, …, xN, wαβ) = ∑λ xλ wλκ

Since the early days of the study of NN, the bilinear function has been used to model the aggregated input on each node at time t

Bilinear AM with bias bκ: zκ = ζκ(x1, …, xN, wαβ) = ∑λ xλ wλκ + bκ

Perceptron

NN Οutput Activation Maps = Transition Maps

EXAMPLES

Unit Step Heaviside Map

φκ(x) = θ(x − ακ) = { 1, if x ≥ ακ ; 0, if x < ακ }

ακ = the Activation Threshold of the Node κ,

We may consider AM with values in [−1,1]

φκ(x) = { +1, if x > 0 ; 0, if x = 0 ; −1, if x < 0 }

All or None Property

McCulloch-Pitts Neuron Model 1943

Piecewise Linear Map

φκ(x) = { 1, if x ≥ 1/2 ; x, if x ∈ (−1/2, +1/2) ; 0, if x ≤ −1/2 }

Sigmoid Map (Innovation)

(Figure: Sigmoid Maps for slope parameters β = 2, 1, 0.5)

φκ(x) = 1/(1 + e^(−β(x−ακ)))

ακ = Activation Threshold of the Node κ

β = the slope parameter of the Sigmoid Map

Hyperbolic Tangent Map

Ferrazzi et al. 2007, BMC Bioinformatics 8(Suppl 5):S2, doi:10.1186/1471-2105-8-S5-S2

φκ (x)= tanh(αx)
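The activation maps above in a minimal Python sketch (the threshold and slope values are illustrative):

```python
import math

def heaviside(x, a=0.0):
    """Unit step with threshold a: 1 if x >= a else 0."""
    return 1.0 if x >= a else 0.0

def piecewise_linear(x):
    """1 for x >= 1/2, x for -1/2 < x < 1/2, 0 for x <= -1/2 (as in the notes)."""
    if x >= 0.5:
        return 1.0
    if x <= -0.5:
        return 0.0
    return x

def sigmoid(x, a=0.0, beta=1.0):
    """1 / (1 + exp(-beta*(x - a))); beta is the slope parameter."""
    return 1.0 / (1.0 + math.exp(-beta * (x - a)))

def tanh_map(x, a=1.0):
    return math.tanh(a * x)

for f in (heaviside, piecewise_linear, sigmoid, tanh_map):
    print(f.__name__, [round(f(x), 3) for x in (-1.0, 0.0, 1.0)])
```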

Link Update Dynamics = Learning Algorithm

Sλκ: Y ⟶ ℝ : (x, w) ⟼ Sλκ(x, w) = Sλκ(x1, …, xN, w)

wλκ(t+1) = Sλκ(x1(t), …, xN(t), w(t))

EXAMPLES

Constant Weights - No Learning: w(t) = w , wκλ(t) = wκλ

NN: wλκ(t+1) = Sλκ(x1(t), …, xα(t), w1κ(t), …, wακ(t)) , the Update Dynamics of the presynaptic link (λ,κ) , λ = 1,2,…,α to Neuron κ

NN Hebb Learning:

wακ(t+1) = wακ(t) + η(t) xα(t) ζκ(x(t), w1κ(t), …, wακ(t))
wακ(t+1) = wακ(t) + η(t) xα(t) ∑v xv(t) wvκ(t) , η(t) = the Learning rate

Simplest Hebb Learning: wακ (t+1) − wακ (t) = η xα(t) xκ(t)

η = the constant Learning rate

NN Oja Learning (as Principal Components Analysis):

wακ(t+1) = wακ(t) + η(t) [xα(t) − wακ(t) ζκ(x(t), w(t))] ζκ(x(t), w(t))
wακ(t+1) = wακ(t) + η(t) [xα(t) − wακ(t) ∑v xv(t) wvκ(t)] ∑v xv(t) wvκ(t)

η(t) = the Learning rate
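A minimal Python sketch of the simplest Hebb rule and of Oja's rule for a single neuron (the learning rate and input are illustrative); under a fixed input, Oja's weight vector converges to the normalized input direction, consistent with its interpretation as Principal Components Analysis:

```python
def hebb_step(w, x_pre, x_post, eta=0.01):
    """Simplest Hebb rule: w(t+1) - w(t) = eta * x_pre * x_post."""
    return w + eta * x_pre * x_post

def oja_step(w, x, eta=0.01):
    """Oja's rule: w_v += eta * y * (x_v - y*w_v), with y = sum_v x_v * w_v."""
    y = sum(xv * wv for xv, wv in zip(x, w))
    return [wv + eta * y * (xv - y * wv) for xv, wv in zip(x, w)]

w = [0.5, 0.5]
for _ in range(1000):
    w = oja_step(w, [1.0, 0.2], eta=0.05)
print(w)   # approaches the unit vector along the input direction
```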

NN Learning by Supervision Widrow-Hoff rule or the delta rule

The adjustment of the weights wακ depends not on the actual activation xκ(t) of the node κ alone, but on the difference between the actual activation xκ(t) and the desired activation dκ provided by a teacher:

wακ(t+1) − wακ(t) = η xα(t) [dκ(t) − xκ(t)]
wακ(t+1) − wακ(t) = η xα(t) [dκ − xκ(t)]

Stochastic NN: Stochastic Activation. Stochastic McCulloch-Pitts Neuron Model

φκ(x) = { +1, with probability p(x) ; −1, with probability 1 − p(x) }

p(x) = 1/(1 + e^(−βx)) [Haykin 1999, p. 37]

In Boltzmann Machines, the activation is interpreted as the probability of generating an action potential spike, and is determined via a sigmoid function on the sum of the inputs to a unit.

Does the Goddess Mnemosyne Operate through the Nervous System?

Is Memory Encoded in the Nervous System?

If YES, What are the traces of Memory in the Nervous System?

Semon 1921 assumed psycho-physiological parallelism (every psychological state corresponds to alterations in the Nervous System). Therefore, Memory traces are stored as biophysical or biochemical changes in the neural tissue, in response to external and internal stimuli. Semon introduced the term Engrams for the representation of Memories in the Neural System and found evidence that different parts of the body relate to each other involuntarily, such as "reflex spasms, co-movements, sensory radiations," and noticed the non-uniform distribution and revival of Engrams.

Semon R. 1921, The Mneme, George Allen & Unwin, London, p. 24, "Chapter II. Engraphic Action of Stimuli on the Individual".

After K. Lashley failed (1930-1950) to find a single biological locus of memory (engram), he suggested that memories were not localized to one part of the brain, but were widely distributed throughout the cerebral cortex. Later it was found that engrams are non-uniformly distributed across all cortical areas. The Hebb Learning Rule "cells that fire together, wire together" was based on Lashley's work.

Hebb, D.O. 1949, The Organization of Behavior, New York: John Wiley
Jack Orbach 1998, The Neuropsychological Theories of Lashley and Hebb, University Press of America. ISBN 0-7618-1165-6
Josselyn S. A. 2010, Continuing the search for the engram: examining the mechanism of fear memories, J. Psychiatry Neurosci. 35(4): 221–228

BCM Rule Modification of Hebb Learning

Bienenstock E., Cooper L., and Munro P. 1982, Theory for the development of neuron selectivity: orientation specificity and binocular interaction in visual cortex. Journal of Neuroscience, 2, 32–48.

Cooper, L. 2000, Memories and memory: A physicist's approach to the brain, International Journal of Modern Physics A 15 (26): 4069–4082. doi:10.1142/s0217751x0000272x

Memory and Long-Term Potentiation

Long-term potentiation (LTP) is a persistent increase in synaptic strength following high-frequency stimulation of a chemical synapse, resulting from recent patterns of activity.

Paradiso, Michael A.; Bear, Mark F.; Connors, Barry W. 2007, Neuroscience: Exploring the Brain, Hagerstown, MD: Lippincott Williams & Wilkins, p. 718.

Verification of Memory and Long-Term Potentiation

The cellular mechanism of memory formation has been made clear by Optogenetics.

Nabavi, S., et al. 2014, Engineering a memory with LTD and LTP, Nature, doi: 10.1038/nature13294

Working Memory (WM)

WM or Short-Term Memory is the RAM of the Mind. Short-Term Memory was called thus to distinguish it from "long-term memory", the permanent memory store. WM contains the information of which we are immediately aware.

Miller G. demonstrated that: 1) the capacity of WM cannot be defined in terms of the amount of information it contains. Instead, WM stores information in the form of "Modules-Cells-Categories-Classes-Chunks-Semantic Units"

2) the Human Brain can store only about 7±2 Modules in the WM. 3) the Human Brain uses Recoding. The Brain's ability to recode is one of the keys to artificial intelligence: until a computer can reproduce this ability, it cannot match the performance of the human brain.

Miller G. 1956, The magical number seven, plus or minus two: Some limits on our capacity for processing information, Psychological Review 63, 81-97

Simon H. 1969 showed that the ideal number of Modules is 3. This is evidenced by the tendency to remember phone numbers as several chunks of 3 numbers, with the final 4-number groups generally broken down into two groups of 2.

Simon H. 1969, The Sciences of the Artificial, M.I.T. Press, Cambridge, Massachusetts

Each Module may contain a very small amount of information, or complex big data equivalent to large amounts of information. Module Construction: Classification, Categorization

WM Capacity is a predictor of Intelligence, determining the ability to: take good notes, read efficiently, understand complex issues, reason.

Turing Machines may Realize Neural Networks and WM

Graves A., Wayne G., Danihelka I. 2014, Neural Turing Machines, arxiv.org/abs/1410.5401. Exercise: Analysis of Working Memory {4}

NN Integrated Circuit

Merolla P., Arthur J. , ea 2014, A million spiking-neuron integrated circuit with a scalable communication network and interface, Science 345, 668-673

Exercise: Analysis of the Related Network {4}

Coupled Map System (CMS) or Coupled Map Lattice (CML): DS modeling the non-linear Dynamics of spatially extended systems:

- spatiotemporal chaos (the number of effective degrees of freedom diverges as the size of the system increases)
- populations, chemical reactions, convection, fluid flow
- biological networks
- computational networks (identifying detrimental attack methods and cascading failures)
- Web (not yet)

Kaneko CML: 1D Lattice, κ in ℤ, coupling with coupling parameter ε in [0,1]:

xκ(t+1) = Sκ(x1(t), …, xN(t), wαβ(t)) = (1−ε) F(xκ(t)) + (ε/2) [F(xκ−1(t)) + F(xκ+1(t))]

F: ℝ → ℝ the Local Dynamics Map

Example: the Logistic map F(x(t)) = r x(t) (1 − x(t)):

xκ(t+1) = Sκ(x1(t), …, xN(t), wαβ(t)) = (1−ε) r xκ(t) (1 − xκ(t)) + (ε/2) r [xκ−1(t) (1 − xκ−1(t)) + xκ+1(t) (1 − xκ+1(t))]

Even though each local map is chaotic, more coherent structures develop in the evolution; elongated convective cells persist throughout the lattice. A minimal simulation sketch follows the history notes below.

History: CMLs were introduced in the mid 1980's:
K. Kaneko 1984, Prog. Theor. Phys. 72, 480
I. Waller and R. Kapral 1984, Phys. Rev. A 30, 2047
J. Crutchfield 1984, Physica D 10, 229
S. P. Kuznetsov and A. S. Pikovsky 1985, Izvestija Radiofizika 28, 308
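A minimal Python sketch of the Kaneko CML with logistic local dynamics on a ring (the parameter values are illustrative):

```python
import random

def cml_step(x, r=4.0, eps=0.3):
    """Diffusive coupling on a ring:
       x_k' = (1-eps)*F(x_k) + (eps/2)*(F(x_{k-1}) + F(x_{k+1})), F(x) = r*x*(1-x)."""
    F = [r * xk * (1 - xk) for xk in x]
    N = len(x)
    return [(1 - eps) * F[k] + (eps / 2) * (F[(k - 1) % N] + F[(k + 1) % N])
            for k in range(N)]

x = [random.random() for _ in range(50)]
for t in range(100):
    x = cml_step(x)
print([round(v, 3) for v in x[:10]])
```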

Boolean Networks = Shannon Graphs = Binary Decision Diagrams (BDD) = Propositional Directed Acyclic Graphs (PDAG): data structures representing Boolean functions

Logical operations on SGs

Many logical operations on SGs can be implemented by polynomial-time graph algorithms:

• conjunction

• disjunction

• negation

• existential abstraction

• universal abstraction

Repeating these operations several times may, in the worst case, result in exponentially large graphs, hence exponential time.

Random Boolean Networks = Kauffman Networks, proposed by S. Kauffman as models of genetic regulatory networks

Kauffman, S. A. 1969, Metabolic Stability and Epigenesis in randomly constructed genetic nets. Journal of Theoretical Biology 22, 437-467.

Kauffman, S. A. 1993, Origins of Order: Self-Organization and Selection in Evolution, Oxford University Press. ISBN 0-19-507951-5A

An RBN is a system of N binary-state nodes (representing genes) with K inputs to each node, representing regulatory mechanisms. The two states (on/off) represent respectively the status of a gene being active or inactive. The state of a network at any point in time is given by the current states of all N genes, so the state space of any such network has size 2^N. Simulation of RBNs is done in discrete time steps; a minimal simulation sketch is given below.
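A minimal Python sketch of an RBN simulation (node count, wiring, and seed are illustrative); since the state space is finite (2^N), every trajectory ends in a cycle:

```python
import random

def random_boolean_network(N=8, K=2, seed=1):
    """Each node gets K random inputs and a random Boolean function (truth table)."""
    rng = random.Random(seed)
    inputs = [rng.sample(range(N), K) for _ in range(N)]
    tables = [[rng.randint(0, 1) for _ in range(2 ** K)] for _ in range(N)]
    def step(state):
        return tuple(tables[v][sum(state[i] << b for b, i in enumerate(inputs[v]))]
                     for v in range(N))
    return step

step = random_boolean_network()
state = (1, 0, 1, 1, 0, 0, 1, 0)
seen, t = {}, 0
while state not in seen:        # finite state space forces a revisit
    seen[state] = t
    state, t = step(state), t + 1
print("cycle length:", t - seen[state])
```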

Bayesian Networks , Graphical Models

P[A=α|B=β] = the degree of belief that the RV A has the value α based on the fact that B=β

P[A|B] = P[A∩B]/P[B] = P[A] P[B|A]/P[B]

P[x1, x2, …, xN] = ∏v P[xv | x_pa(v)] , where pa(v) denotes the parents of node v

(Figure: a Directed Acyclic Graph on nodes 1–5)

P[x1, x2, x3, x4, x5] = P[x1] P[x2] P[x3 |x1, x2] P[x4|x3] P[x5 | x3, x4]

The graph is Acyclic
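To make the factorization concrete, a minimal Python sketch with illustrative (made-up) conditional probability tables for binary variables on the 5-node DAG above; it checks that the factorized joint sums to 1:

```python
from itertools import product

# illustrative CPTs: each entry is P[variable = 1 | parent values]
P1 = 0.6
P2 = 0.3
P3 = {(a, b): 0.9 if a and b else 0.2 for a in (0, 1) for b in (0, 1)}   # P[x3=1|x1,x2]
P4 = {a: 0.7 if a else 0.1 for a in (0, 1)}                              # P[x4=1|x3]
P5 = {(a, b): 0.5 for a in (0, 1) for b in (0, 1)}                       # P[x5=1|x3,x4]

def bern(p, x):
    """P[X = x] for a Bernoulli(p) variable."""
    return p if x == 1 else 1 - p

def joint(x1, x2, x3, x4, x5):
    """P[x1..x5] = P[x1] P[x2] P[x3|x1,x2] P[x4|x3] P[x5|x3,x4]."""
    return (bern(P1, x1) * bern(P2, x2) * bern(P3[(x1, x2)], x3)
            * bern(P4[x3], x4) * bern(P5[(x3, x4)], x5))

total = sum(joint(*cfg) for cfg in product((0, 1), repeat=5))
print(round(total, 10))   # 1.0
```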

(Figure: a cyclic graph on nodes 1, 2, 3)

P[x3|x2] P[x1|x3] P[x2|x1] is not a consistent probability

Links indicate Direction, Causality, Temporal succession

GM include:

Markov Nets

Hidden Markov Nets

Kalman Filters

Neural Nets

Genetic Algorithms, Evolutionary Strategies, Net Games

Swarms, Agents, Sensor Nets, Collective Intelligence

Immune System Models