Product Form Solutions for Discrete Event Systems: Discovery or Invention?
Erol Gelenbe
Professor in the Dennis Gabor Chair
www.ee.imperial.ac.uk/gelenbe
Dept of Electrical and Electronic Engineering, Imperial College, London SW7 2BT

Discovery or Invention? An Ambiguous Relationship [the Turing Machine vs the Von Neumann Machine]
• Discovery - revealing something that exists: the basic activity of science. E.g. Columbus' "discovery" of America, or Brouwer's fixed-point theorem.
• Invention - finding an ingenious way, using existing (or assumed) knowledge, to solve a significant problem: the basic activity of engineering. E.g. the invention of the steam engine by Newcomen and Watt before Carnot, i.e. before the discovery of the laws of thermodynamics; or the microprocessor.

Discrete Event System: Discrete, often Unbounded, State Space + Continuous Time

Product Form Solution
• A stochastic system is defined over a space of random variables X(t), taking values in some state space Ξ and evolving over time t > 0.
• Assume that this evolution is described by a joint probability distribution function F_X[x,t] = Prob[X(t) ∈ x], for x ⊆ Ξ.
• Let there exist a non-trivial partition of Ξ, π = {Ξ_1, Ξ_2, ..., Ξ_N} (where Ξ_i ∩ Ξ_j = ∅ for i ≠ j), such that X(t) = (X_1(t), X_2(t), ..., X_N(t)) with X_i(t) ∈ Ξ_i.
• We say that the system has strict product form if
  F_X[x,t] = ∏_{i=1}^N Prob[X_i(t) ∈ x_i], for x_i ⊆ Ξ_i
  (a small numeric illustration follows below).
• If F_X[x,t] → F_X[x] as t → ∞, so that X(t) → X, the system has product form if
  F_X[x] = ∏_{i=1}^N Prob[X_i ∈ x_i], for x_i ⊆ Ξ_i.

Examples of Systems with Strict Product Form
E. Gelenbe, "Stochastic automata with structural restrictions", PhD Thesis, Polytechnic Institute of Brooklyn, Oct. 1969.
• Consider a Markov chain {X(t): t = 0, 1, 2, ...} with state space Ξ, and let F_X[x,t] = Prob[X(t) ∈ x], for x ⊆ Ξ.
• Suppose that it is lumpable on a non-trivial partition of its state space, π = {Ξ_1, Ξ_2, ..., Ξ_N}, so that X(t) = (X_1(t), X_2(t), ..., X_N(t)) with X_i(t) ∈ Ξ_i. Then each {X_i(t)} is also a Markov chain, and
  F_X[x,t] = ∏_{i=1}^N Prob[X_i(t) ∈ x_i], for x_i ⊆ Ξ_i.
• A Jackson queueing network is generally not lumpable; it does not have strict product form, but it does have product form.

Physically Meaningful Networks with Product Form
• Whittle Networks (1965)
• Jackson Queueing Networks (1967) and Gordon & Newell Closed Queueing Networks (1968) (see the traffic-equation sketch below)
• Buzen: closed QN state-probability computation is polynomial time (see the convolution sketch below)
• Baskett-Chandy-Muntz-Palacios Queueing Networks (1973), Kelly Networks (1980)
• Whittle Polymer, Potts & Population Models, ... (1994, 2004)
• Conditions: Reversibility, Quasi-Reversibility, the M ⇒ M Property
• G-Networks do not have "Quasi-Reversibility": Random Neural Networks, G-Networks, Gene Regulatory Networks
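To make the strict product form definition concrete, here is a minimal numeric check (illustrative code, not from the talk; all matrices are made up): two independent two-state Markov chains whose joint chain is assembled with a Kronecker product. The joint law then factorizes into its marginals at every step, which is exactly strict product form.

```python
import numpy as np

# Two independent 2-state chains; the joint 4-state chain has transition
# matrix kron(P1, P2), and its law factorizes at every step t.
P1 = np.array([[0.9, 0.1], [0.4, 0.6]])
P2 = np.array([[0.5, 0.5], [0.2, 0.8]])
P  = np.kron(P1, P2)                        # joint chain

p1 = np.array([1.0, 0.0])                   # initial marginal laws
p2 = np.array([0.0, 1.0])
p  = np.kron(p1, p2)                        # joint initial law

for t in range(20):
    p1, p2, p = p1 @ P1, p2 @ P2, p @ P
    # F_X[x,t] = product of the marginals: strict product form
    assert np.allclose(p, np.kron(p1, p2))
print("strict product form holds at every step")
```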
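For the open Jackson networks listed above, the product-form recipe is: solve the traffic equations λ = γ + λP for the effective arrival rates, then treat each queue i as an independent M/M/1 queue with utilisation ρ_i = λ_i/µ_i. A minimal sketch with made-up rates (not from the talk):

```python
import numpy as np

# Open Jackson network: external arrival rates gamma, routing matrix P,
# service rates mu (all values are illustrative).
gamma = np.array([1.0, 0.5, 0.0])
P = np.array([[0.0, 0.6, 0.2],
              [0.0, 0.0, 0.8],
              [0.1, 0.0, 0.0]])     # row sums < 1: remaining mass departs
mu = np.array([4.0, 3.0, 5.0])

# Traffic equations lam = gamma + lam @ P  =>  (I - P)^T lam = gamma
lam = np.linalg.solve(np.eye(3) - P.T, gamma)
rho = lam / mu
assert (rho < 1).all()              # stability condition

# Product form: Prob[n1, n2, n3] = prod_i (1 - rho_i) * rho_i^n_i
def prob(n):
    return np.prod((1 - rho) * rho ** np.asarray(n))

print("throughputs:", lam)
print("P[n = (2,1,0)] =", prob([2, 1, 0]))
```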
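Buzen's point, referenced above: in a closed Gordon-Newell network, the normalising constant G(N) can be computed in O(MN) operations by convolution over the M stations, instead of summing the product-form terms over an exponentially large state space. A sketch under the usual single-server assumptions, with made-up visit ratios and service rates:

```python
import numpy as np

def buzen_G(rho, N):
    """Normalising constants G(0..N) for a closed network of single-server
    stations with relative utilisations rho[m] = e_m / mu_m (Buzen's
    convolution algorithm: G_m(n) = G_{m-1}(n) + rho_m * G_m(n-1))."""
    G = np.zeros(N + 1)
    G[0] = 1.0
    for r in rho:                    # convolve one station at a time
        for n in range(1, N + 1):
            G[n] += r * G[n - 1]
    return G

# Illustrative visit ratios e, service rates mu, and population N
e, mu, N = np.array([1.0, 0.7, 0.4]), np.array([2.0, 1.0, 1.5]), 5
G = buzen_G(e / mu, N)

# Product form: pi(n1,..,nM) = prod_m rho_m^n_m / G(N); the standard
# throughput formula X_m(N) = e_m * G(N-1) / G(N) follows from it.
print("G:", G)
print("throughputs:", e * G[N - 1] / G[N])
```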
Random Neural Networks and G-Networks, 1989-2009

Random Neural Networks:
• E. Gelenbe, "Random neural networks with negative and positive signals and product form solution," Neural Computation, 1 (4): 502-511, 1989.
• E. Gelenbe, "Stability of the random neural network model," Neural Computation, 2 (2): 239-247, 1990.
• E. Gelenbe, A. Stafylopatis, A. Likas, "Associative memory operation of the random network model," Proc. Int. Conf. Artificial Neural Networks, Helsinki: 307-312, 1991.
• E. Gelenbe, "Learning in the recurrent random neural network," Neural Computation, 5 (1): 154-164, 1993.
• C. Cramer, E. Gelenbe, H. Bakircioglu, "Low bit rate video compression with neural networks and temporal sub-sampling," Proc. IEEE, 84 (10): 1529-1543, 1996.
• E. Gelenbe, T. Feng, K.R.R. Krishnan, "Neural network methods for volumetric magnetic resonance imaging of the human brain," Proc. IEEE, 84 (10): 1488-1496, 1996.
• E. Gelenbe, A. Ghanwani, V. Srinivasan, "Improved neural heuristics for multicast routing," IEEE J. Selected Areas in Communications, 15 (2): 147-155, 1997.
• E. Gelenbe, Z.H. Mao, Y.D. Li, "Function approximation with the random neural network," IEEE Trans. Neural Networks, 10 (1), 1999.
• E. Gelenbe, J.M. Fourneau, "Random neural networks with multiple classes of signals," Neural Computation, 11: 721-731, 1999.
• E. Gelenbe, Z.-H. Mao, Y.-D. Li, "Function approximation by random neural networks with a bounded number of layers," Differential Eqns. & Dyn. Syst., 12: 143-170, 2004.
• E. Gelenbe, S. Timotheou, "Random neural networks with synchronized interactions," Neural Computation, 20: 2308-2324, 2008.

G-Networks:
• E. Gelenbe, "Réseaux stochastiques ouverts avec clients négatifs et positifs, et réseaux neuronaux" [Open stochastic networks with negative and positive customers, and neural networks], Comptes-Rendus Acad. Sciences de Paris, t. 309, Série II: 979-982, 1989.
• E. Gelenbe, "Queueing networks with negative and positive customers," Journal of Applied Probability, 28: 656-663, 1991.
• E. Gelenbe, P. Glynn, K. Sigman, "Queues with negative arrivals," Journal of Applied Probability, 28: 245-250, 1991.
• E. Gelenbe, M. Schassberger, "Stability of product form G-Networks," Probability in the Engineering and Informational Sciences, 6: 271-276, 1992.
• E. Gelenbe, "G-networks with instantaneous customer movement," Journal of Applied Probability, 30 (3): 742-748, 1993.
• E. Gelenbe, "G-Networks with signals and batch removal," Probability in the Engineering and Informational Sciences, 7: 335-342, 1993.
• J.M. Fourneau, E. Gelenbe, R. Suros, "G-networks with multiple classes of positive and negative customers," Theoretical Computer Science, 155 (1): 141-156, 1996.
• E. Gelenbe, A. Labed, "G-networks with multiple classes of signals and positive customers," European J. Opns. Res., 108 (2): 293-305, 1998.
• E. Gelenbe, J.M. Fourneau, "G-Networks with resets," Performance Evaluation, 49: 179-191, 2002; also Performance '02 Conf., Rome, Italy, October 2002.
• J.M. Fourneau, E. Gelenbe, "Flow equivalence and stochastic equivalence in G-Networks," Computational Management Science, 1 (2): 179-192, 2004.

Generalisation of a Problem Suggested by Laszlo Pap, which Does Not Have Product Form
• N robots search in an unknown universe
• N packets travel in an unknown network, especially a wireless one
• Search by software robots for data in a very large distributed database
• Biological agents diffusing through a random medium until they encounter a docking point
• Particles moving in a random medium until they encounter an oppositely charged receptor
• Randomised gradient minimisation (e.g. simulated annealing) on parallel processors

Packets Travel, Get Lost, and Some Time Later ... Packet Retransmission
[Cartoon: a source sends a packet towards the destination through relay nodes A, B, C.]
• A relay node: "The packet has visited 6 hops; I'll drop it because it is too old!"
• The source: "6+M time units have elapsed: the packet must be lost. I'll send it again."

N Coupled Brownian Motions with a Dynamic Attractor
Searcher i ∈ {1, ..., N} starts at distance x_i = D from the destination at x_i = 0, and is either travelling (with density f_i(x_i,t)), lost (probability L_i(t)), waiting to be retransmitted (probability W_i(t)), or arrived (probability P_i(t)):

  ∂f_i/∂t = −b ∂f_i/∂x_i + (c/2) ∂²f_i/∂x_i² − (λ + r + a_i) f_i + [µ W_i(t) + P_i(t)] δ(x_i − D)

  dP_i(t)/dt = −P_i(t) + lim_{x_i→0+} [ (c/2) ∂f_i/∂x_i − b f_i ]

  dL_i(t)/dt = λ ∫_{0+}^∞ f_i dx_i − (r + a_i) L_i(t)

  dW_i(t)/dt = r ∫_{0+}^∞ f_i dx_i + r L_i(t) − (µ + a_i) W_i(t)

where the coupling rate a_i is the total rate at which the other searchers reach the destination,

  a_i = Σ_{j=1, j≠i}^N lim_{x_j→0+} [ (c/2) ∂f_j/∂x_j − b f_j ],

with the normalisation P_i(t) + L_i(t) + W_i(t) + ∫_{0+}^∞ f_i dx_i = 1 and the boundary condition lim_{x→0+} f_i = 0.

[Figure: the line from the destination x = 0 past the source x = D to infinity, with the auxiliary states P(t), W(t), L(t).]

Since an arrived searcher restarts the whole search (the δ term at x_i = D), the process has a stationary solution, which acts as a dynamic attractor, and E[T*] = P_i^{−1} − 1 is obtained from it.

Stationary Solution

  f_i(z) = A [e^{u_1 z} − e^{u_2 z}], 0 ≤ z ≤ D
  f_i(z) = A [e^{(u_1 − u_2) D} − 1] e^{u_2 z}, z ≥ D
  u_{1,2} = [b ± √(b² + 2c(λ + r + a))]/c

with a determined self-consistently from a_i = Σ_{j≠i} lim_{z_j→0+} [ (c/2) ∂f_j/∂z_j − b f_j(z_j) ].

[Figure: the stationary picture on the same line from x = 0 to x = D and beyond, with the states P(t), W(t), L(t).]

Time of First Arrival Among N Searchers, Conditioned on the Initial Distance D
T* = inf {T_1, ..., T_N}
• Drift b < 0 or b > 0; second-moment parameter c > 0
• Average time-out R = 1/r; average retransmission delay M = 1/µ. We obtain:

  E[T* | D] = (1/N) [ e^{−2D(λ+r+a) / (b − √(b² + 2c(λ+r+a)))} − 1 ] · (µ + r + a) / ((r + a) µ)

Effective Travel Time and Energy
T* = inf {T_1, ..., T_N}
• E[τ_eff | D] = [1 + E[T* | D]] · P[a searcher is travelling]
• J(N | D) = N · E[τ_eff | D], which reduces to

  J(N | D) = [ e^{−2D(λ+r+a) / (b − √(b² + 2c(λ+r+a)))} − 1 ] · 1/(λ + r + a)
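A small calculator for the two closed-form expressions above, as reconstructed here from the slides (so treat it as a sketch rather than a definitive implementation); the coupling rate a, which the model determines self-consistently, is taken as a plain input:

```python
import math

def mean_search_time(D, N, b, c, lam, r, mu, a):
    """E[T*|D] for N searchers, per the closed form on the slide.
    a is the coupling rate created by the other searchers (given here;
    in the model it is determined self-consistently)."""
    s = math.sqrt(b * b + 2 * c * (lam + r + a))
    growth = math.exp(-2 * D * (lam + r + a) / (b - s)) - 1.0
    return (growth / N) * (mu + r + a) / ((r + a) * mu)

def energy(D, b, c, lam, r, mu, a):
    """J(N|D): total effective travel time / energy. Note that it has no
    explicit dependence on N in this form."""
    s = math.sqrt(b * b + 2 * c * (lam + r + a))
    return (math.exp(-2 * D * (lam + r + a) / (b - s)) - 1.0) / (lam + r + a)

# Illustrative (made-up) values: distance 10, drift towards the target
print(mean_search_time(D=10, N=5, b=-0.5, c=1.0, lam=0.1, r=0.2, mu=1.0, a=0.05))
print(energy(D=10, b=-0.5, c=1.0, lam=0.1, r=0.2, mu=1.0, a=0.05))
```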
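To connect the closed forms back to the model, here is an illustrative Monte Carlo sketch (not from the talk) of the underlying dynamics: each searcher drifts and diffuses from x = D towards the destination at x = 0, gets lost at rate λ, times out at rate r, lost searchers move to the wait state at rate r, and waiting searchers are retransmitted from D at rate µ. It estimates E[T*|D] directly as the first time any of N independent copies reaches 0; the coupling rate a does not appear, because it belongs to the stationary-attractor analysis rather than to the first-passage problem itself.

```python
import random, math

def one_searcher_time(D, b, c, lam, r, mu, dt=1e-2, tmax=1e5):
    """First time one searcher reaches x <= 0, with loss/timeout/retransmit."""
    x, t, state = D, 0.0, "travel"          # states: travel, lost, wait
    sigma = math.sqrt(c)
    while t < tmax:
        t += dt
        if state == "travel":
            if random.random() < lam * dt:  state = "lost";  continue
            if random.random() < r * dt:    state = "wait";  continue
            x += b * dt + sigma * math.sqrt(dt) * random.gauss(0, 1)
            if x <= 0: return t             # destination reached
        elif state == "lost":
            if random.random() < r * dt:    state = "wait"
        else:                               # wait: resent from the source
            if random.random() < mu * dt:   state, x = "travel", D
    return tmax

def t_star(N, **kw):
    """T* = inf{T_1, ..., T_N} over N independent searchers."""
    return min(one_searcher_time(**kw) for _ in range(N))

random.seed(0)
samples = [t_star(N=5, D=10, b=-0.5, c=1.0, lam=0.1, r=0.2, mu=1.0)
           for _ in range(200)]
print("Monte Carlo E[T*|D] ≈", sum(samples) / len(samples))
```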
Average Travel Time and Energy Consumed vs Time-Out and Number of Searchers
[Figure slide: plots.]

Average Travel Time and Energy Consumed vs Loss Rate, Time-Out and Number of Searchers
[Figure slide: plots.]

Outline of what follows
• Biological inspiration
• Chapman-Kolmogorov equations, steady-state solution
• The O(n³) learning algorithm
• Function approximation
• Some applications of the RNN:
  – modelling biological neuronal systems
  – texture recognition and segmentation
  – image and video compression
  – multicast routing
  – network routing (the Cognitive Packet Network)
• G-Networks and their generalisations
• Gene regulatory networks
• Chemical reaction networks and population networks
• Networks of auctions

Random Spiking Behaviour of Neurons
[Figure slides: recordings of the random spiking behaviour of brain neurons (T. Sejnowski).]

Invention of the Random Neural Network: A Mathematical Model of Random Spiking Neurons
The inventor's agenda in 1990: the model should include
• Action potential "signals" in the form of spikes
• The electrical store-and-discharge (fire) behaviour of the soma
• Excitation and inhibition spikes
• Recurrent networks: feedforward networks were the norm then, and still are today
• Random delays between spikes
• Conveying information along axons via variable spike rates
• Reduction of the neuronal potential after firing
• The possibility of representing axonal delays between neurons
• Arbitrary network topology
• Approximation capability
• The ability to incorporate learning algorithms: Hebbian, gradient descent, reinforcement learning, ...
• Later: synchronised firing patterns, and logic in neural networks?

Technical Tool: Exploiting the Analogy with Queueing Networks
Discrete state space, typically continuous-time, stochastic models arising in the study of populations, dams, production systems, and communication networks. (A sketch of the steady-state computation this analogy yields for the RNN follows below.)
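As an instance of that analogy: the RNN steady state in the 1989 Neural Computation paper cited earlier has product form, with each neuron's excitation probability q_i = λ⁺_i / (r_i + λ⁻_i), where the excitatory and inhibitory arrival rates λ±_i themselves depend on the other q_j, so the solution is a fixed point. A minimal sketch of that fixed-point iteration, with made-up weights and external rates:

```python
import numpy as np

def rnn_steady_state(Wplus, Wminus, Lambda, lam, iters=1000, tol=1e-12):
    """Fixed-point iteration for the RNN steady state:
       q_i = lambda_plus_i / (r_i + lambda_minus_i), where
       lambda_plus_i  = Lambda_i + sum_j q_j W+_{ji}
       lambda_minus_i = lam_i    + sum_j q_j W-_{ji}
       r_i = sum_j (W+_{ij} + W-_{ij})   (total firing rate of neuron i)."""
    r = Wplus.sum(axis=1) + Wminus.sum(axis=1)
    q = np.zeros(len(r))
    for _ in range(iters):
        lp = Lambda + q @ Wplus            # incoming excitatory rates
        lm = lam + q @ Wminus              # incoming inhibitory rates
        q_new = np.minimum(lp / (r + lm), 1.0)  # pragmatic guard: q_i <= 1
        if np.max(np.abs(q_new - q)) < tol:
            break
        q = q_new
    return q

# Made-up 3-neuron example: spike-rate weights and external arrival rates
Wplus  = np.array([[0.0, 1.0, 0.5], [0.2, 0.0, 0.3], [0.0, 0.4, 0.0]])
Wminus = np.array([[0.0, 0.2, 0.1], [0.1, 0.0, 0.2], [0.3, 0.0, 0.0]])
Lambda = np.array([0.8, 0.2, 0.1])   # external excitatory spike rates
lam    = np.array([0.1, 0.1, 0.1])   # external inhibitory spike rates
print("q =", rnn_steady_state(Wplus, Wminus, Lambda, lam))
```

When every q_i < 1, the network is stable and the stationary distribution factorizes over the neurons, which is exactly the product form this talk is about.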