Product Form Solutions for Discrete Event Systems:

Discovery or Invention?

Erol Gelenbe

Professor in the Chair, Dept of Electrical and Electronic Engineering, Imperial College London SW7 2BT
www.ee.imperial.ac.uk/gelenbe

Discovery or Invention? An Ambiguous Relationship [the Turing Machine vs the Von Neumann Machine]

Discovery – revealing something that already exists: the basic activity of science, e.g. Columbus’ “discovery” of America, or Brouwer’s fixed-point theorem.

Invention – finding an ingenious way, using existing (or assumed) knowledge, to solve a significant problem (Newcomen and Watt before Carnot): the basic activity of engineering, e.g. the invention of the steam engine before the discovery of the Laws of Thermodynamics, or of the microprocessor.

Discrete Event System: Discrete, often Unbounded State Space + Continuous Time

Product Form Solution
• A stochastic system defined over a space of random variables X(t), taking values in some state space Ξ, and evolving over time t > 0
• Assume that this evolution is described by a joint probability distribution function $F_X[x,t] = \Pr[X(t) \in x]$, for x a subset of Ξ
• Let there exist a non-trivial partition of Ξ, π = {Ξ1, Ξ2, .., ΞN} (where Ξi ∩ Ξj = ∅), such that X(t) = (X1(t), X2(t), … , XN(t)) and Xi(t) ∈ Ξi
• We say that the system has strict product form if $F_X[x,t] = \prod_{i=1}^{N} \Pr[X_i(t) \in x_i]$, with xi a subset of Ξi
• If $F_X[x,t] \to F_X[x]$ as $t \to \infty$, so that X(t) → X, the system has product form if $F_X[x] = \prod_{i=1}^{N} \Pr[X_i \in x_i]$, with xi a subset of Ξi

Examples of Systems with Strict Product Form
EG, “Stochastic automata with structural restrictions”, PhD Thesis, Polytechnic Institute of Brooklyn, Oct. 1969.

• Consider a Markov Chain {X(t): t = 0, 1, 2, ..} with state space Ξ, and let $F_X[x,t] = \Pr[X(t) \in x]$, for x a subset of Ξ
• Suppose that it is lumpable on the non-trivial partition of its state space π = {Ξ1, Ξ2, .., ΞN}, so that X(t) = (X1(t), X2(t), … , XN(t)) and Xi(t) ∈ Ξi
• Then each {Xi(t)} is also a Markov chain, and $F_X[x,t] = \prod_{i=1}^{N} \Pr[X_i(t) \in x_i]$, with xi a subset of Ξi
• A Jackson Queuing Network is generally not lumpable; it does not have strict product form, but it does have product form.
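To make the (non-strict) product form concrete, here is a minimal sketch for an open Jackson network: solve the linear traffic equations, then the joint stationary distribution factorises into independent geometric marginals. The three-queue topology, rates and routing matrix below are illustrative assumptions, not data from the talk.

```python
import numpy as np

# Illustrative open Jackson network: 3 exponential servers (rates mu),
# external Poisson arrivals (Lambda), Markovian routing matrix P
# (row sums < 1; the remainder is the probability of leaving the network).
Lambda = np.array([1.0, 0.5, 0.0])          # external arrival rates
mu     = np.array([4.0, 3.0, 5.0])          # service rates
P = np.array([[0.0, 0.5, 0.3],
              [0.2, 0.0, 0.5],
              [0.0, 0.1, 0.0]])             # routing probabilities

# Traffic equations: lambda_i = Lambda_i + sum_j lambda_j P[j, i]
lam = np.linalg.solve(np.eye(3) - P.T, Lambda)
rho = lam / mu                               # utilisations, must satisfy rho_i < 1

def p(k):
    """Stationary probability of joint state k = (k1, k2, k3):
    product of independent geometric marginals (Jackson's theorem)."""
    return np.prod((1 - rho) * rho ** np.array(k))

print("utilisations:", rho)
print("p(0,0,0) =", p((0, 0, 0)))
print("p(2,1,0) =", p((2, 1, 0)))
```

The stationary distribution factorises even though the queue-length processes are not independent over time, which is exactly the sense in which the Jackson network has product form but not strict product form.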

Physically Meaningful Networks with Product Form

• Whittle Networks (1965)

• Jackson Queuing Network (1967) & Gordon & Newell Closed Queuing Network (1968)

• Buzen: Closed QN State Computation is Polynomial Time (see the convolution sketch after this list)

• Baskett-Chandy-Muntz-Palacios Queueing Networks (1973), Kelly Networks (1980)

• Whittle Polymer, Potts & Population Models, .. (1994, 2004)

• Conditions: Reversibility, Quasi-Reversibility, M → M Property

• G-Networks do not have “Quasi-Reversibility”: Random Neural Networks, G-Networks, Gene Regulatory Networks 1989 → 2009
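The Buzen entry above refers to computing the normalising constant G(N) of a closed product-form (Gordon-Newell) network by convolution, in O(M·N) operations rather than by enumerating the exponentially many states. A minimal sketch, with illustrative station parameters:

```python
# Buzen's convolution algorithm for a closed product-form queueing network
# with M single-server stations and N circulating customers.
# G[n] accumulates the normalising constant for population n.

def buzen_G(x, N):
    """x[m] = visit ratio / service rate of station m; returns G(0..N)."""
    G = [1.0] + [0.0] * N
    for xm in x:                       # convolve one station at a time
        for n in range(1, N + 1):
            G[n] += xm * G[n - 1]
    return G

# Illustrative closed network: 3 stations, 5 customers.
x = [0.8, 0.5, 0.4]
N = 5
G = buzen_G(x, N)

# Standard consequence of the product form:
# P[station m busy] (its utilisation) equals x[m] * G(N-1) / G(N).
for m, xm in enumerate(x):
    print(f"station {m}: utilisation = {xm * G[N - 1] / G[N]:.4f}")
```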

Neural Networks
• E. Gelenbe, “Random neural networks with negative and positive signals and product form solution,” Neural Computation, 1 (4): 502-511, 1989.
• E. Gelenbe, “Stability of the random neural network model,” Neural Computation, 2 (2): 239-247, 1990.
• E. Gelenbe, A. Stafylopatis, A. Likas, “Associative memory operation of the random network model,” Proc. Int. Conf. Artificial Neural Networks, Helsinki: 307-312, 1991.
• E. Gelenbe, “Learning in the recurrent random neural network,” Neural Computation, 5 (1): 154-164, 1993.
• C. Cramer, E. Gelenbe, H. Bakircioglu, “Low bit rate video compression with neural networks and temporal sub-sampling,” Proc. IEEE, 84 (10): 1529-1543, 1996.
• E. Gelenbe, T. Feng, K.R.R. Krishnan, “Neural network methods for volumetric magnetic resonance imaging of the human brain,” Proc. IEEE, 84 (10): 1488-1496, 1996.
• E. Gelenbe, A. Ghanwani, V. Srinivasan, “Improved neural heuristics for multicast routing,” IEEE J. Selected Areas in Communications, 15 (2): 147-155, 1997.
• E. Gelenbe, Z.H. Mao, Y.D. Li, “Function approximation with the random neural network,” IEEE Trans. Neural Networks, 10 (1), 1999.
• E. Gelenbe, J.M. Fourneau, “Random neural networks with multiple classes of signals,” Neural Computation, 11: 721-731, 1999.
• E. Gelenbe, Z.-H. Mao, Y.-D. Li, “Function approximation by random neural networks with a bounded number of layers,” Differential Eqns. & Dyn. Syst., 12: 143-170, 2004.
• E. Gelenbe, S. Timotheou, “Random neural networks with synchronized interactions,” Neural Computation, 20: 2308-2324, 2008.

G-Networks
• E. Gelenbe, “Réseaux stochastiques ouverts avec clients négatifs et positifs, et réseaux neuronaux,” Comptes-Rendus Acad. Sciences de Paris, t. 309, Série II, 979-982, 1989.
• E. Gelenbe, “Queueing networks with negative and positive customers,” Journal of Applied Probability, 28: 656-663, 1991.
• E. Gelenbe, P. Glynn, K. Sigman, “Queues with negative arrivals,” Journal of Applied Probability, 28: 245-250, 1991.
• E. Gelenbe, M. Schassberger, “Stability of product form G-Networks,” Probability in the Engineering and Informational Sciences, 6: 271-276, 1992.
• E. Gelenbe, “G-networks with instantaneous customer movement,” Journal of Applied Probability, 30 (3): 742-748, 1993.
• E. Gelenbe, “G-Networks with signals and batch removal,” Probability in the Engineering and Informational Sciences, 7: 335-342, 1993.
• J.M. Fourneau, E. Gelenbe, R. Suros, “G-networks with multiple classes of positive and negative customers,” Theoretical Computer Science, 155: 141-156, 1996.
• E. Gelenbe, A. Labed, “G-networks with multiple classes of signals and positive customers,” European J. Opns. Res., 108 (2): 293-305, 1998.
• E. Gelenbe, J.M. Fourneau, “G-Networks with resets,” Performance Evaluation, 49: 179-191, 2002; also Performance '02 Conf., Rome, Italy, October 2002.
• J.M. Fourneau, E. Gelenbe, “Flow equivalence and stochastic equivalence in G-Networks,” Computational Management Science, 1 (2): 179-192, 2004.

Generalisation of a Problem Suggested by Laszlo Pap, which does not have Product Form

• N Robots Search in an Unknown Universe
• N Packets Travel in an Unknown Network, esp. Wireless
• Search by Software Robots for Data in a Very Large Distributed Database
• Biological Agents Diffusing through a Random Medium until they Encounter a Docking Point
• Particles Moving in a Random Medium until they Encounter an Oppositely Charged Receptor
• Randomised Gradient Minimisation (e.g. Simulated Annealing) on Parallel Processors

Packets Travel, Get Lost, Some Time Later .. Packet Retransmission

[Diagram: a Source, a Destination and intermediate nodes A, B, C. At an intermediate node: “The packet has visited 6 hops .. I’ll drop it because it’s too old!” At the Source: “6+M time units have elapsed: the packet must be lost. I’ll send it again.”]

N Coupled Brownian Motions with a Dynamic Attractor (N searchers):

$$\frac{\partial f_i}{\partial t} = -b\,\frac{\partial f_i}{\partial x_i} + \frac{c}{2}\,\frac{\partial^2 f_i}{\partial x_i^2} - a_i f_i + \big[\mu W_i(t) + P_i(t)\big]\,\delta(x_i - D)$$

$$\frac{dP_i(t)}{dt} = -P_i(t) + \lim_{x_i \to 0^+}\Big[-b f_i + \frac{c}{2}\,\frac{\partial f_i}{\partial x_i}\Big]$$

$$\frac{dL_i(t)}{dt} = \lambda \int_{0^+}^{\infty} f_i\,dx_i - (r + a_i)\,L_i(t)$$

$$\frac{dW_i(t)}{dt} = r \int_{0^+}^{\infty} f_i\,dx_i + r L_i(t) - (\mu + a_i)\,W_i(t)$$

with

$$a_j = -\sum_{i=1,\,i\neq j}^{N} \lim_{x_i \to 0^+}\Big[-b f_i + \frac{c}{2}\,\frac{\partial f_i}{\partial x_i}\Big], \qquad
P_i(t) + L_i(t) + W_i(t) + \int_{0^+}^{\infty} f_i\,dx_i = 1, \qquad \lim_{x\to 0^+} f_i = 0.$$

[State diagram: a searcher is either diffusing at position x > 0 (started at x = D), Lost L(t), Waiting W(t), or has reached the destination at x = 0 with probability P(t).]

E[T*] is obtained from the stationary solution: E[T*] = P_i^{-1} − 1.

Stationary solution:

$$f_i(z) = A\big[e^{u_1 z} - e^{u_2 z}\big], \quad 0 \le z \le D; \qquad
f_i(z) = A\big[e^{(u_1 - u_2)D} - 1\big]\,e^{u_2 z}, \quad z \ge D;$$

$$u_{1,2} = \frac{b \pm \sqrt{b^2 + 2c(\lambda + r + a)}}{c}, \qquad
a_i = \sum_{j=1,\,j\neq i}^{N} \lim_{z_j \to 0}\Big[b f_j(z_j) + \frac{c}{2}\,\frac{\partial^2 f_j(z_j)}{\partial z_j^2}\Big].$$

Time of First Arrival among the N Searchers at the Destination, Conditioned on the Initial Distance D

T* = inf {T1, ... , TN}
• Drift b < 0 or b > 0, Second Moment Parameter c > 0
• Average Time-Out R = 1/r, Average Wait M = 1/µ, and we obtain:

$$E[T^*\,|\,D] = \frac{1}{N}\Big[e^{-2D\,\frac{\lambda + r + a}{\,b - \sqrt{b^2 + 2c(\lambda + r + a)}\,}} - 1\Big]\cdot\frac{\mu + r + a}{(r + a)(\mu + a)}$$

Effective Travel Time and Energy

T* = inf {T1, ... , TN}

• E[τeff|D] = [1+E[T*|D]].P[Searcher is travelling]

• J(N|D) = N.E[τeff |D]

$$J(N\,|\,D) = \Big[e^{-2D\,\frac{\lambda + r + a}{\,b - \sqrt{b^2 + 2c(\lambda + r + a)}\,}} - 1\Big]\cdot\frac{1}{\lambda + r + a}$$
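A small numerical sketch that evaluates the two closed forms above exactly as transcribed from the slides. The parameter values are illustrative assumptions, and the coupling rate a (the rate at which one of the other searchers reaches the destination) is treated as a given constant rather than being solved for self-consistently.

```python
import math

def ET_given_D(D, N, b, c, lam, r, mu, a):
    """Average time for the first of N searchers to reach the destination
    at distance D, using the slide's closed form (as transcribed)."""
    s = b - math.sqrt(b * b + 2 * c * (lam + r + a))
    factor = (mu + r + a) / ((r + a) * (mu + a))
    return (math.exp(-2 * D * (lam + r + a) / s) - 1) * factor / N

def J_given_D(D, b, c, lam, r, a):
    """Energy-like cost J(N|D) from the slides (independent of N here)."""
    s = b - math.sqrt(b * b + 2 * c * (lam + r + a))
    return (math.exp(-2 * D * (lam + r + a) / s) - 1) / (lam + r + a)

# Illustrative parameters: drift b, variance c, loss rate lam,
# time-out rate r, restart rate mu, coupling rate a.
D, b, c, lam, r, mu, a = 10.0, 0.5, 1.0, 0.05, 0.2, 1.0, 0.0
for N in (1, 2, 5, 10):
    print(N, ET_given_D(D, N, b, c, lam, r, mu, a))
print("J =", J_given_D(D, b, c, lam, r, a))
```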

Average Travel Time & Energy Consumed vs Time-Out and Number of Searchers

Average Travel Time & Energy Consumed vs Loss Rate, Time-Out and Number of Searchers

Outline of what follows
• Biological Inspiration
• Chapman-Kolmogorov Equations, Steady-State Solution
• O(n³) Learning Algorithm
• Function Approximation
• Some Applications of the RNN
  – modeling biological neuronal systems
  – texture recognition and segmentation
  – image and video compression
  – multicast routing
  – network routing (Cognitive Packet Network)
• G-Networks and their Generalisations
• Gene Regulatory Networks
• Chemical Reaction Networks & Population Networks
• Networks of Auctions

Random Spiking Behaviour of Brain Neurons
Random Spiking Behaviour of Neurons (T. Sejnowski)

Invention of the Random Neural Network: A Mathematical Model of Random Spiking Neurons
Inventor’s agenda in 1990: the model should include
- Action potential “Signals” in the form of spikes
- Electrical store-and-discharge (fire) behaviour of the soma
- Excitation and inhibition spikes
- Recurrent networks: feedforward networks are the norm .. even today
- Random delays between spikes
- Conveying information along axons via variable spike rates
- Reduction of neuronal potential after firing
- Possibility of representing axonal delays between neurons
- Arbitrary network topology
- Approximation capability
- Ability to incorporate learning algorithms: Hebbian, Gradient Descent, Reinforcement Learning, ..

Later: Synchronised firing patterns & Logic in neural networks?

Technical Tool: Exploiting the Analogy with Queuing Networks

Discrete state space, typically continuous time, stochastic models arising in the study of populations, dams, production systems, communication networks ..

o Theoretical foundation for computer and network systems performance analysis
o Open (external Arrivals and Departures), as in Telephony, or Closed (Finite Population), as in Compartment Models
o Systems comprised of Customers and Servers
o Theory is over 100 years old and still very active ..
o Big activity at Telecom labs in Europe and the USA, Bell Labs, AT&T Labs, IBM Research
o More than 100,000 papers on the subject ..

Queuing Network <-> Random Neural Network
o Both Open and Closed Systems
o Systems comprised of Customers and Servers
o Servers = Neurons
o Customer = Spike: a customer arriving at a server increases the queue length by +1; an excitatory spike arriving at a neuron increases its soma’s potential by +1
o A service completion (neuron firing) at a server (neuron) sends out a customer (spike) and reduces the queue length (potential) by 1
o An inhibitory spike arriving at a neuron decreases its soma’s potential by 1
o Spikes (customers) leaving neuron i (server i) move to neuron j (server j) in a probabilistic manner

RNN & G-Networks: Mathematical properties that we have established
o Product form solution with non-linear traffic equations
o Existence and uniqueness of the product form solution, and closed form analytical solutions for arbitrarily large systems in terms of rational functions of polynomials
o Strong inhibition – inhibitory spikes reduce the potential to zero
o The feed-forward RNN is a universal computing element: for any bounded continuous function f: R^n -> R^m and error ε, there is a FF-RNN g such that ||g(x)-f(x)|| < ε for all x in R^n
o O(n³) cost for the recurrent network’s gradient descent algorithm, and O(n²) for the feedforward network

Mathematical Model: A “neural” network with n neurons

• Internal State of Neuron i at time t is an Integer Ki(t) ≥ 0

• Network State at time t is a Vector

K(t) = (K1(t), … , Ki(t), … , Kk(t), … , Kn(t))

• If Ki(t) > 0, we say that Neuron i is excited and it may fire (in which case it will send out a spike)

• Also, if Ki(t)> 0, the Neuron i will fire with probability riΔt +o(Δt) in the interval [t,t+Δt]

• If Ki(t) = 0, Neuron i cannot fire at time t

When Neuron i fires at time t:

- It sends a spike to some Neuron j, with probability pij
- Its internal state changes: Ki(t+) = Ki(t) - 1

Mathematical Model: A “neural” network with n neurons
The arriving spike at Neuron j is an:
- Excitatory Spike w.p. p+ij
- Inhibitory Spike w.p. p-ij
- pij = p+ij + p-ij, with Σ_{j=1..n} pij < 1 for all i = 1, .., n

From Neuron i to Neuron j:
- Excitatory Weight or Rate w+ij = ri p+ij
- Inhibitory Weight or Rate w-ij = ri p-ij
- Total Firing Rate ri = Σ_{j=1..n} (w+ij + w-ij)

To Neuron i, from Outside the Network:

- External Excitatory Spikes arrive at rate Λi

- External Inhibitory Spikes arrive at rate λi

State Equations & The Breakthrough – Discovery: The Analytical Solution

p(k,t) = Pr[x(t) = k], where {x(t) : t ≥ 0} is a discrete state-space Markov process, and

k+-ij = k + ei - ej,  k++ij = k + ei + ej,  k+i = k + ei,  k-i = k - ei.

The Chapman-Kolmogorov Equations [Neural Master Equations]:

$$\frac{d}{dt}p(k,t) = \sum_{i,j}\Big[p(k_{ij}^{+-},t)\,r_i p_{ij}^{+}\,1[k_j(t) > 0] + p(k_{ij}^{++},t)\,r_i p_{ij}^{-}\Big]
+ \sum_{i}\Big[p(k_i^{+},t)\,(\lambda_i + r_i d_i) + \Lambda_i\,p(k_i^{-},t)\,1[k_i(t) > 0]\Big]
- p(k,t)\sum_{i}\Big[(\lambda_i + r_i)\,1[k_i(t) > 0] + \Lambda_i\Big]$$

(di is the probability that a spike fired by Neuron i departs from the network). Let:

$$p(k) = \lim_{t\to\infty} \Pr[x(t) = k], \qquad q_i = \lim_{t\to\infty} \Pr[x_i(t) > 0].$$

Theorem [Gelenbe, Neural Computation '90]: If the C-K equations have a stationary solution, then the solution has the “product form”

$$p(k) = \prod_{i=1}^{n} q_i^{\,k_i}(1 - q_i),$$

where qi, the probability that Neuron i is excited, satisfies

$$0 \le q_i = \frac{\Lambda_i + \sum_j q_j r_j p_{ji}^{+}}{r_i + \lambda_i + \sum_j q_j r_j p_{ji}^{-}} < 1$$

(numerator: the external arrival rate Λi of excitatory spikes plus the internal excitatory rates ω+ji = qj rj p+ji; denominator: the firing rate ri of Neuron i, plus the external arrival rate λi of inhibitory spikes and the internal inhibitory rates ω-ji = qj rj p-ji).

Further Discoveries

Theorem (Gelenbe, Neural Computation '93): The system of non-linear equations

$$q_i = \frac{\Lambda_i + \sum_j q_j r_j p_{ji}^{+}}{r_i + \lambda_i + \sum_j q_j r_j p_{ji}^{-}}, \qquad 1 \le i \le n,$$

has a unique solution if all the qi < 1.
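A minimal sketch of the non-linear signal-flow (traffic) equations above, solved by simple fixed-point iteration for a small recurrent network; all rates and routing probabilities are randomly generated illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
r    = np.full(n, 10.0)                   # firing rates r_i
Lam  = rng.uniform(0.5, 2.0, n)           # external excitatory rates Lambda_i
lam  = rng.uniform(0.0, 1.0, n)           # external inhibitory rates lambda_i
# Random routing: p_plus[i,j] + p_minus[i,j] summed over j stays below 1
# (the remaining probability corresponds to the spike leaving the network).
P    = rng.uniform(0.0, 1.0, (n, n)); np.fill_diagonal(P, 0.0)
P   /= (1.5 * P.sum(axis=1, keepdims=True))
p_plus, p_minus = 0.6 * P, 0.4 * P

q = np.zeros(n)
for _ in range(1000):                     # fixed-point iteration
    num = Lam + (q * r) @ p_plus          # Lambda_i + sum_j q_j r_j p+_{ji}
    den = r + lam + (q * r) @ p_minus     # r_i + lambda_i + sum_j q_j r_j p-_{ji}
    q_new = num / den
    if np.max(np.abs(q_new - q)) < 1e-12:
        break
    q = q_new

print("q =", np.round(q, 4), " all q_i < 1:", bool(np.all(q < 1)))
```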

Theorem (Gelenbe, Mao & Li, IEEE Trans. Neural Networks '99):
Let g: [0,1]^v → R be a continuous and bounded function. Then, for any ε > 0, there exists a function

$$y(x) = \frac{q_{o+}(x)}{1 - q_{o+}(x)} - \frac{q_{o-}(x)}{1 - q_{o-}(x)},$$

where q_{o+}(x) and q_{o-}(x) are the outputs of two neurons of an RNN with input vector x, such that

$$\sup_{x \in [0,1]^v} |g(x) - y(x)| < \varepsilon.$$

Inventing a Practical “Learning” Algorithm, and Discovering that Gradient Computation for the Recurrent RNN is O(n³)

Discovering G-Networks: Generalising the RNN
• Batch Removal + Triggers (Lists) + Resets (F)

Resets in an M/M/1 Queue
Birth-Death Equations: ordinary M/M/1 queue with resets on feedback (linear case)

n = 0:  p(0)λ = p(1)µ(1 - F)  ⟹  q = λ / (µ(1 - F)), or equivalently q = (λ + µFq)/µ; and let p(n) = q^n (1 - q)

n > 0:  p(n)[λ + µ] = λ p(n-1) + µ p(n+1) + µF σ(n) p(1)        (1)

Substituting p(n) = q^n(1 - q) and dividing by q^n:

$$\lambda + \mu = \frac{\lambda}{q} + q\mu + \mu F\,\frac{\sigma(n)\,q}{q^{n}} = \mu(1 - F) + \frac{\lambda}{1 - F} + \mu F\,\frac{\sigma(n)\,q}{q^{n}} \;\Longrightarrow\; \sigma(n) \sim q^{n}.$$

But we must have $\sum_{n\ge 1} \sigma(n) = 1$, i.e. $\sum_{n\ge 1} a\,q^{\,n-1} = \frac{a}{1-q} = 1$, so that $\sigma(n) = q^{\,n-1}(1 - q)$,

and then (1) gives

$$\lambda + \mu = \mu(1 - F) + \frac{\lambda}{1 - F} + \mu F (1 - q) = \mu - \mu F + \frac{\lambda}{1 - F} + \mu F - \frac{\lambda F}{1 - F} = \lambda + \mu.$$

Discovering G-Networks: Generalising the RNN
• Negative Customers and Resets in an M/M/1 Queue – A Non-Linear System

Birth-Death Equations: ordinary M/M/1 queue with feedback resets (F) or negative customers (f)

n = 0:  p(0)λ = p(1)µ(1 - F) + p(2)µf  ⟹

Assume p(n) = q^n (1 - q), with q = λ / (µ(1 - F) + qµf)

n > 0:  p(n)[λ + µ] = λ p(n-1) + µ(1 - f) p(n+1) + µf p(n+2) + µF σ(n) p(1)

Take σ(n) = q^{n-1}(1 - q), so that

$$\lambda + \mu = \frac{\lambda}{q} + q\mu(1 - f) + q^{2}\mu f + \mu F\,\frac{\sigma(n)\,q}{q^{n}},$$

or

$$\lambda + \mu = \mu(1 - F) + q\mu f + q\mu(1 - f) + q^{2}\mu f + \mu F(1 - q)
\;\Longrightarrow\; \lambda = q\mu(1 - F) + q^{2}\mu f. \quad QED$$

(A numerical check of this fixed point is sketched below.)

Discovering G-Networks ⟶ Multiple Classes
• Negative Customers with Batch Removal: Flow Control in Networks
• Triggers (Also List Triggers): Routing Control
• Resets: Repairing a System that had Broken Down
• Multiple Classes: Different Traffic Flows with Distinct Characteristics

• A Non-Linear System with a Unique Solution whose Existence is Proved
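For the single M/M/1 queue with feedback resets (F) and negative customers (f) above, the non-linear traffic equation reduces to the scalar quadratic λ = qµ(1−F) + q²µf. A minimal sketch (illustrative parameter values) that solves it and verifies the n > 0 balance equation stated on the slide:

```python
import math

lam, mu, F, f = 1.0, 3.0, 0.3, 0.2        # illustrative rates and probabilities

# Solve mu*f*q^2 + mu*(1-F)*q - lam = 0 for the root in (0, 1).
A, B, C = mu * f, mu * (1 - F), -lam
q = (-B + math.sqrt(B * B - 4 * A * C)) / (2 * A)
assert 0 < q < 1

def p(n):      # conjectured stationary distribution
    return q ** n * (1 - q)

def sigma(n):  # reset distribution sigma(n) = q^(n-1) (1-q)
    return q ** (n - 1) * (1 - q)

# Check, for n > 0:
# p(n)(lam+mu) = lam p(n-1) + mu(1-f) p(n+1) + mu f p(n+2) + mu F sigma(n) p(1)
for n in range(1, 6):
    lhs = p(n) * (lam + mu)
    rhs = (lam * p(n - 1) + mu * (1 - f) * p(n + 1)
           + mu * f * p(n + 2) + mu * F * sigma(n) * p(1))
    print(n, round(lhs, 6), round(rhs, 6))
```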

Discovering G-Networks
• Negative Customers with Batch Removal, Triggers and Resets – A Non-Linear System

N queues, State K(t) = (K1(t), ... , KN(t)),  p(k) = P[K(t) = k],  k = (k1, ... , kN):

$$p(k) = \prod_{i=1}^{N} q_i^{\,k_i}(1 - q_i), \qquad q_i = \frac{\lambda_i^{+}}{\mu_i + \lambda_i^{-}}$$

where fi(qi) is a function of the batch removal distribution, and the traffic equations are

$$\lambda_i^{+} = \Lambda_i + \sum_{k=1}^{N}\sum_{j=1}^{N} q_j \mu_j q_k Q_{jki} + \sum_{j=1}^{N} q_j \mu_j \big[P_{ji}^{+} + R_{ji}\big]$$

$$\lambda_i^{-} = \Big[\lambda_i + \sum_{j=1}^{N} q_j \mu_j P_{ji}^{-}\Big] f_i(q_i) + \sum_{k=1}^{N}\sum_{j=1}^{N} q_j \mu_j Q_{jik}$$

Some Applications of the RNN

• Modeling Cortico-Thalamic Response …
• Texture based Image Segmentation
• Image and Video Compression
• Multicast Routing
• CPN Routing

Cortico-Thalamic Oscillatory Response to Somato-Sensory Input (what does the rat think when you tweak her/his whisker?)

[Figure: input from the brain stem (PrV) and response at the thalamus (VPM) and cortex (SI), reprinted from M.A.L. Nicolelis et al., “Reconstructing the engram: simultaneous, multiple site, many single neuron recordings”, Neuron 18, 529-537, 1997.]

Scientific Objective: Elucidate Aspects of Observed Brain Oscillations

Building the Network Architecture from Physiological Data

First Step, Comparing Measurements and Theory: Calibrated RNN Model and Cortico-Thalamic Oscillations

[Two panels: Predictions of the Calibrated RNN Mathematical Model (Gelenbe & Cramer ’98, ’99) vs. Simultaneous Multiple Cell Recordings (Nicolelis et al., 1997).]

Gedanken Experiments that cannot be Conducted in Vivo: Oscillations Disappear when Signaling Delay in Cortex is Decreased

[Plots: response vs. brain stem input pulse rate.]

Gedanken Experiments: Removing Positive Feedback in Cortex Eliminates Oscillations in the Thalamus

[Plots: response vs. brain stem input pulse rate.]

When Feedback in Cortex is Dominantly Negative, Cortico-Thalamic Oscillations Disappear Altogether

[Plots: response vs. brain stem input pulse rate.]

Summary of Findings Resulting from the Model

Texture Based Object Identification Using the RNN, US Patent ’99 (E. Gelenbe, Y. Feng)

1) MRI Image Segmentation
Brain Image Segmentation with RNN: Extracting Abnormal Objects from MRI Images of the Brain

Separating Healthy Tissue from Tumor

Simulating and Planning Gamma Therapy & Surgery

Extracting Tumors from MRI T1 and T2 Images

2) RNN based Adaptive Video Compression: Combining Motion Detection and RNN Still Image Compression

RNN Neural Still Image Compression: Find the RNN R that Minimizes || R(Ι) - Ι || Over a Training Set of Images { Ι }

RNN based Adaptive Video Compression
[Images: original vs. after decompression]

3) Multicast Routing: Analytical Annealing with the RNN (similar improvements were obtained for (a) the Vertex Covering Problem and (b) the Traveling Salesman Problem)

• Finding an optimal “many-to-many communications path” in a network is equivalent to finding a Minimal Steiner Tree. This is an NP-Complete problem
• The best purely combinatorial heuristics are the Average Distance Heuristic (ADH) and the Minimal Spanning Tree Heuristic (MSTH) for the network graph
• RNN Analytical Annealing improves the number of optimal solutions found by ADH and MSTH by more than 20% (a baseline Steiner-tree sketch follows below)
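For context, here is a minimal sketch of the purely combinatorial baseline referred to above: an MST/metric-closure Steiner-tree approximation on a random weighted graph, using networkx's approximation routine. The graph, edge weights and terminal (multicast) set are illustrative assumptions; the RNN analytical annealing step that improves on such heuristics is not reproduced here.

```python
import networkx as nx
from networkx.algorithms.approximation import steiner_tree

# Illustrative connected random weighted graph and multicast group (terminals).
G = nx.connected_watts_strogatz_graph(20, 4, 0.3, seed=1)
for u, v in G.edges:
    G[u][v]["weight"] = 1 + (u * v) % 7          # arbitrary positive weights
terminals = [0, 3, 7, 12, 19]                     # multicast source + destinations

# Classical MST/metric-closure approximation of the minimal Steiner tree.
T = steiner_tree(G, terminals, weight="weight")
cost = sum(d["weight"] for _, _, d in T.edges(data=True))
print("Steiner tree edges:", sorted(T.edges))
print("total cost:", cost)
```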

4) Learning and Reproduction of Colour Textures
• The Multiclass RNN is used to Learn Existing Textures
• The same RNN is then used as a Relaxation Machine to Generate the Textures
• The “use” of this approach is to store textures in a highly compressed manner
• Gelenbe & Khaled, IEEE Trans. on Neural Networks (2002)

Cognitive Adaptive Routing

• Conventional QoS Goals are extrapolated from Paths, Traffic, Delay & Loss Information – this is the “Sufficient Level of Information” for Self-Aware Networking
• Smart packets (CPs) collect path information and dates
• ACK packets return Path, Delay & Loss Information and deposit W(K,c,n,D), L(K,c,n,D) at Node c on the return path, entering from Node n in Class K
• Smart packets use W(K,c,n,D) and L(K,c,n,D) for decision making using Reinforcement Learning

[Flowchart: processing of a packet P, with Source S and Destination D, arriving at Node N via Link L]
- If P is a CP and N is not its Destination D: 1) N uses the data in its mailbox to update the RNN weights, computes the q(i) from the RNN, and picks the largest q(X) with X different from Link L; 2) if d is the current date at N, node N stores the pair (N, d) in the CP and sends the CP out from N along Link X.
- If P is a CP and N is the Destination D of the CP: N creates an ACK packet for the CP: 1) from the CP’s route r, N gets the shortest inverse route R; 2) N stores R in the ACK together with all the dates at which the CP visited each node in R; N then sends the ACK along route R back to the Source Node S of the CP.
- If P is a DP or an ACK: since P contains its own route R, Node N sends packet P out from the output link to the neighbouring node that comes after N in R.
- If P is an ACK, let T be the current date at N: 1) N copies from P the date d that corresponds to node N; 2) N computes Delay = T - d and updates its mailbox with Delay. Node S copies route R into all DPs going to D, until a new ACK brings a new route R’.

Goal Based Reinforcement Learning in CPN

• The Goal Function to be minimized is selected by the user, e.g. G = [1-L]W + L[T+W]

• On-line measurements and probing are used to measure L and W, and this information is brought back to the decision points
• The value of G is estimated at each decision node and used to compute the estimated reward R = 1/G

• The RNN weights are updated using R; this stores G(u,v) indirectly in the RNN, which then makes a myopic (one-step) decision

Routing with Reinforcement Learning using the RNN

• Each “neuron” corresponds to the choice of an output link in the node
• Fully Recurrent Random Neural Network with Excitatory and Inhibitory Weights
• Weights are updated with RL
• Existence and Uniqueness of the solution is guaranteed
• The Decision is made by selecting the outgoing link which corresponds to the neuron whose excitation probability is largest

• The decision threshold is a smoothed history of the reward, with most recent reward Rl:

  Tl = a Tl-1 + (1 - a) Rl,   R = G^{-1}

• If Tl-1 ≤ Rl then
    w+(i, j) ← w+(i, j) + Rl
    w-(i, k) ← w-(i, k) + Rl / (n - 2),  k ≠ j
  else
    w+(i, k) ← w+(i, k) + Rl / (n - 2),  k ≠ j
    w-(i, j) ← w-(i, j) + Rl

• Re-normalise all weights:

  r*i = Σ_{m=1..n} [w+(i, m) + w-(i, m)]
  w+(i, j) ← w+(i, j) · ri / r*i
  w-(i, j) ← w-(i, j) · ri / r*i

• Compute q = (q1, … , qn) from the fixed-point
• Select Decision k such that qk > qi for all i ≠ k
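A compact sketch of the update above for one decision node with n candidate output links: the weights toward the previously chosen neuron are rewarded or punished, all weights are renormalised so that the firing rates ri are preserved, and the next decision is the neuron with the largest excitation probability. The rates, the reward stream and the network size are illustrative assumptions.

```python
import numpy as np

n = 4                                            # candidate output links = neurons
rng = np.random.default_rng(1)
w_plus  = rng.uniform(0.5, 1.0, (n, n)); np.fill_diagonal(w_plus, 0.0)
w_minus = rng.uniform(0.5, 1.0, (n, n)); np.fill_diagonal(w_minus, 0.0)
Lam, lam = np.full(n, 0.5), np.full(n, 0.1)      # external spike arrival rates
r = w_plus.sum(axis=1) + w_minus.sum(axis=1)     # total firing rates (kept fixed)
T, a = 1.0, 0.8                                  # reward threshold, smoothing factor

def excitation(wp, wm):
    """Fixed point of q_i = (Lam_i + sum_j q_j w+_ji) / (r_i + lam_i + sum_j q_j w-_ji)."""
    q = np.zeros(n)
    for _ in range(200):
        q = (Lam + q @ wp) / (r + lam + q @ wm)
    return q

def rl_update(wp, wm, j, R, T):
    """Reward (T <= R) or punish (T > R) the previously chosen link j, then renormalise."""
    others = [k for k in range(n) if k != j]
    if T <= R:
        wp[:, j] += R
        wm[:, others] += R / (n - 2)
    else:
        wp[:, others] += R / (n - 2)
        wm[:, j] += R
    np.fill_diagonal(wp, 0.0); np.fill_diagonal(wm, 0.0)
    scale = (r / (wp.sum(axis=1) + wm.sum(axis=1)))[:, None]
    return wp * scale, wm * scale, a * T + (1 - a) * R

for step in range(20):
    q = excitation(w_plus, w_minus)
    j = int(np.argmax(q))                        # route along the most excited neuron
    R = 1.0 / rng.uniform(0.5, 2.0)              # simulated reward R = 1/G
    w_plus, w_minus, T = rl_update(w_plus, w_minus, j, R, T)

print("chosen link:", j, " q =", np.round(q, 3))
```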

CPN Test-Bed Measurements: On-Line Route Discovery by Smart Packets
CPN Test-Bed Measurements: Ongoing Route Discovery by Smart Packets
Route Adaptation without Obstructing Traffic
Packet Round-Trip Delay with Saturating Obstructing Traffic at Count 30
Route Adaptation with Saturating Obstructing Traffic at Count 30
Packet Round-Trip Delay with Link Failure at Count 40
Average Round-Trip Packet Delay vs Percentage of Smart Packets

[Plot: average round-trip delay vs. percentage of smart packets, for SP, All, and DP traffic.]

RNN: Other Extensions to the Mathematical Model
o Model with resets – a node can reactivate its neighbours’ state if they are quiescent .. an idea about sustained oscillations in neuronal networks
o Model with synchronised firing, inspired by observations in vitro
o Extension of the product form result and of O(n³) gradient learning to networks with synchronised firing (2007)
o Hebbian and reinforcement learning algorithms
o Analytical annealing – links to the Ising Model of Statistical Mechanics
o A new ongoing chapter in queuing network theory, now called “G-networks”, extending the RNN
o Links with the Chemical Master Equations, Gene Regulatory Networks, Predator/Prey Population Models

Model Extensions: Synchronous Firing
Synchronous Firing: Solution
Gene Regulatory Networks
A Generalisation: the G-Network

G-Network Dynamics and Stationary Solution
Logical Interactions of Agents
Thresholding and the CNF

Obtaining the Complement – Exact Approach: Al = ∪ { Π Ai Π [not Aj] }

The G-Network model can provide the mathematical structure to include Boolean dependencies between agents.

Conjunctive Normal Form

AF = ∪ { Π Ai Π [not Aj] }   ⟹   qF = Σ { Π qi Π ρj }

Toy Example of four agents {A0,A1,A2,A3} Ai inhibits [A(i+1)mod4 and A(i+2)mod4]

Interpretation 1: q = 1/(1 + 2q) ⟹ q = 0.5.   Interpretation 2: q = (1 - q)(1 - q) ⟹ q ≈ 0.382.   (Both fixed points are checked numerically below.)

Thus the “semantics” we associate with a regulatory network model has to be precisely indicated so as to derive the appropriate probabilistic interpretation Logical Dependencies in Gene Regulatory Networks
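A one-line numerical check of the two fixed points quoted for the toy example above; this is plain arithmetic on the slide's two equations and involves no additional assumptions.

```python
import math

# Interpretation 1: q = 1 / (1 + 2q)   <=>   2q^2 + q - 1 = 0
q1 = (-1 + math.sqrt(1 + 8)) / 4
# Interpretation 2: q = (1 - q)^2      <=>   q^2 - 3q + 1 = 0
q2 = (3 - math.sqrt(5)) / 2
print(q1, q2)   # 0.5  0.3819...
```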

The Active Bidders’ Assumption: “Bidders are not Window Shoppers”

Income per unit time versus the decision rate of the seller, for high and low bid arrival rates

Sample of Publications
• EG. Random neural networks with negative and positive signals and product form solution. Neural Computation, 2:239-247, February 1990.
• EG. Learning in the recurrent random neural network. Neural Computation, 5:154-164, 1993.
• EG and C. Cramer. Oscillatory cortico-thalamic response to somatosensory input. Biosystems, 48(1-3):67-75, November 1998.
• EG and J.M. Fourneau. Random neural networks with multiple classes of signals. Neural Computation, 11(4):953-963, May 1999.
• EG, Z.H. Mao, and Y.D. Li. Function approximation with spiked random networks. IEEE Transactions on Neural Networks, 10(1):3-9, January 1999.
• EG and K. Hussain. Learning in the multiple class random neural network. IEEE Transactions on Neural Networks, 13(6):1257-1267, November 2002.
• EG, T. Koçak, and Rong Wang. Wafer surface reconstruction from top-down scanning electron microscope images. Microelectronic Engineering, 75(2):216-233, August 2004.
• EG, Z.H. Mao, and Y.D. Li. Function approximation by random neural networks with a bounded number of layers. J. Differential Equations and Dynamical Systems, 12(1-2):143-170, 2004.
• EG. Steady-state solution of probabilistic gene regulatory networks. Phys. Rev. E 76 (1), 2007.
• EG, S. Timotheou. Random neural networks with synchronised interactions. Neural Computation, 20(9), 2008.
• EG. Network of interacting synthetic molecules in steady-state. Proc. Roy. Soc. A, 2008.
• EG. Analysis of single and networked auctions. ACM Trans. Internet Tech., 2009.

Electronic Network <-> Random Neural Network
Future Work: Back to our Origins ??
o Very Low Power, Ultrafast “Pseudo-Digital” Electronics
o A network of interconnected probabilistic circuits
o Only pulsed signals, with negative or positive polarity
o Integrate-and-fire circuit = Neuron [RC circuit at input, followed by a transistor, followed by a monostable]
o When the RC circuit’s output voltage exceeds a threshold, the “Neuron’s” output pulse train is a sequence of pulses at the characteristic spiking rate (µ) of the neuron
o Frequency dividers (e.g. flip-flops) create appropriate pulse trains that emulate the appropriate neural network weights
o Threshold circuits (e.g. biased diodes and inverters) create appropriate positive or negative pulse trains for different connections

http://san.ee.ic.ac.uk