ARTICLE IN PRESS

Neurocomputing 70 (2007) 991–999 www.elsevier.com/locate/neucom

Criticality of lateral inhibition for edge enhancement in neural systems

Pradeep Arkachar, Meghanad D. Wagh*

Department of Electrical and Computer Engineering, Lehigh University, Bethlehem, PA 18015, USA

Received 19 December 2004; received in revised form 8 March 2006; accepted 20 March 2006
Communicated by V. Jirsa
Available online 13 October 2006

Abstract

Although the role of lateral inhibition in edge (contrast) enhancement is well known, it has not been parametrized. This paper investigates the imbalance between the lateral inhibitory and excitatory stimuli and its effect on edge enhancement and stability. It is shown that this imbalance can be expressed through γ, a ratio of inhibitory to excitatory weights in a neural network. Stability requires γ to be less than the critical ratio Θ. As γ approaches Θ, edge enhancement increases, the rise being the sharpest just before instability. The increase in edge enhancement is also accompanied by an increase in the lateral spread of perturbations across the neuron layer.
© 2006 Elsevier B.V. All rights reserved.

Keywords: Neural system; Lateral inhibition; Edge enhancement; Stability; Contrast enhancement

1. Introduction

Lateral inhibition refers to the phenomenon where a neuron in a neural network inhibits its neighbors, thus creating a competition between neurons. This phenomenon is observed in the visual cortex [9,5] as well as the auditory cortex [7,19]. The role of inhibition in contrast enhancement has been recognized for a long time. Ernst Mach, a 19th century physicist, observed that retinal inhibition accentuates contours and borders in the visual field. This finding received a boost from the experimental work of Hartline and Ratliff on the eyes of the horseshoe crab Limulus [9]. They observed that when illuminated, the retinal cells are excited by the light within a central area and inhibited by light in the surrounding area. This results in the enhancement of the edges or contours between the regions of different intensities. This effect is called edge or contrast enhancement.

A number of researchers have studied lateral inhibition and its role in edge enhancement and stability of neural networks. In 1990, Kelly showed that a large imbalance between the excitatory and inhibitory stimulus can cause instability in neural networks [12]. Later, Shadlen and Newsome [17,16] proved that the relative strengths of excitatory and inhibitory inputs are important to stabilize the neural response. Their research indicates that neurons avoid saturation by balancing excitation with inhibition. Their analysis of the high-input regime, where integrate-and-fire neurons receive a large number of both excitatory and inhibitory inputs, showed that such a balance is responsible for the dynamic range of neurons' response. Burkitt [3] has also shown that balanced inhibition with reversal potential can resolve the problem of dynamic range of the neural response. Amit and Brunel [1] have shown that a cortical network of integrate-and-fire neurons with balanced inhibition and excitation and stochastic background is stable. Simulations of Gutkin and Smith [8] have further strengthened the argument that the right amount of lateral inhibition is critical to the stability of cortical neural networks as well as to edge enhancement. Their work shows that a recurrent cortical neural network reaches a steady state and enhances the edges only in the presence of weak lateral inhibition.

Thus, the balance between the inhibitory and the excitatory characteristics of a neuron and its impact on the stability and edge enhancement is well documented. But no results pertaining to quantitative estimates of the imbalance that causes the instability are available. In this

*Corresponding author. Tel.: +1 610 7584142; fax: +1 610 7586279. E-mail addresses: [email protected] (P. Arkachar), [email protected] (M.D. Wagh).

doi:10.1016/j.neucom.2006.03.017

paper, we consider a planar neural network with lateral inhibition. We show that the stability and the edge enhancement of this network are governed by γ, the ratio of the inhibitory to excitatory strengths. In particular, when γ is less than the critical ratio Θ, the network is stable. The bound Θ is dependent only upon the model and the connectivity of the network. Further, when the network is stable, its edge enhancement increases with γ and becomes very large when γ nears Θ.

2. Model of the neural system

Neural network analysis and simulation is generally difficult because the number of neurons that influence each other is very large. Previous research has shown the columnar organization of neurons, i.e., groups of similar neurons behave like single neurons (see review article [15]). We take advantage of this fact to partition the retinal neurons within a layer into large clusters of neighboring neurons. Each of these clusters represents a single neuron in our model. Thus, the response of each of our neurons is an aggregate of the responses of all the neurons in the corresponding cluster. This model influences our analysis in two ways. Firstly, even though individual neurons may be of the integrate-and-fire type with spikes at the output, the composite output of a large number of such neurons can be considered continuous. Further, it also allows us to ignore the threshold while computing the neuron output, since that output is the result of the collective behavior of a large number of neurons. Thus the neurons in our model exhibit linear behavior.

There is a substantial amount of evidence that dendrites develop randomly and consequently real neural networks have a random structure which is extremely difficult to analyze except in a stochastic manner [3,4]. In our model, we hypothesize that the probability of neuron connectivity depends upon proximity. In other words, the number of connections across the clusters of neurons (which represent our aggregate neurons) is a function of the physical distance between the clusters. We realize the interconnection density between the clusters through the weight of the connections between our neurons. Thus, both the excitatory and the inhibitory weights of our neuron connections can be assumed to be functions of the distance between the clusters that the neurons represent.

Our model differs substantially from the models used by earlier researchers. In particular, our representation of neuron clusters as single neurons allows us to use a simpler (linear) model of an individual neuron, while allowing for a more detailed network topology. In much of the previous research in this area, the network topology was not a consideration. Burkitt, for example, deals with a stochastic distribution of neurons [3]. We use a simpler model of the cortical networks consisting of neurons laid out uniformly in space. Each neuron provides lateral inhibition to a certain range of its neighbors. We use a linear model of a neuron. Several previous papers [14,6,5] have considered neurons in the primary visual cortex to be linear cells that compute a weighted sum of light intensities falling on the receptive field. Evidence is also available showing that the P cells, a type of retinal ganglion cells, behave linearly for small variations in the stimulus [13,2]. Even though our network geometry closely resembles that of Gutkin and Smith [8] and Kelly [12], they use sigmoid or other non-linear functions to compute the neuron response against our linear function. Further, the connection matrix used in [12] is a cyclic Toeplitz matrix and is quite different from the one we have employed. Finally, unlike others, our weights are functions of the distance between the source and destination nodes, with the inhibitory weight of an edge of the same length being a constant γ times the weight of an excitatory edge.

Our model of the neuron network shown in Fig. 1 closely resembles the arrangement of retinal neurons. It could be considered a logical representation of the retinal pathway, with the excitatory input coming from the rods/cones and inhibitory inputs coming from lateral horizontal cells. The sole purpose of the horizontal cells is to inhibit the neurons they synapse with. This is responsible for a lateral inhibition in the cortical network in the eyes. The output of our neural system is then applied to bipolar cells and finally to ganglion cells.

Fig. 1. The logical representation of the neural network. The excitatory inputs (x_i's) come from the photoreceptors and the lateral inhibitory inputs represent the collections of horizontal cells. The y_i's represent the outputs of the neurons.

The assumptions and notation used in this paper are as follows.

- N denotes the number of identical neurons in the array, as shown in Fig. 1. Each of our neurons in Fig. 1 represents a cluster of real neurons.

- Each neuron has n_e excitatory input connections from the neurons in the lower layer and n_i inhibitory input connections from neurons in its own layer (lateral connections). We assume that the weight of a connection is a function of the distance between the source and the destination neurons. A neuron is affected the same way by neurons equidistant from it on either side. The excitatory weights are denoted by w_j, where j is the difference in the source and the destination neuron indices. The inhibitory weights are assumed to follow

the same function of distance as the excitatory weights. The proportions of the inhibitory and the excitatory neurons in the cluster are responsible for the relative strengths of the inhibitory and excitatory behavior of the entire cluster. To account for this, we scale the inhibitory weights by the weight ratio γ. The γ allows one to compare the populations of the inhibitory and the excitatory neurons within a cluster. Thus, the inhibitory weight of an edge between neurons j apart is denoted by γw'_j, where the part of the inhibitory weight that depends upon the distance is w'_j. Fig. 2 shows the weights of the connections from a single neuron.

- In the analysis we have ignored self-inhibition. In other words, w'_0 was set to 0. But it should be pointed out that the presence of self-inhibition does not change the overall behavior of the system, though it reduces the edge enhancement. All the simulations reported in Section 4 use self-inhibition.

- In order to minimize the edge effects due to a finite array of neurons, we reflect the connections at the end of the array. In other words, if the jth neighbor of a neuron is outside the range of neurons under consideration, then the weight of the connection from its mirror image (from the array edge) is doubled. As long as the discontinuities in the input stimulus are sufficiently away from the edge, this is sufficient to suppress the edge effects of the finite array.

- The input stimulus at time t is denoted by x_i(t), 1 ≤ i ≤ N.

- In order to produce a constant (and equal) output corresponding to a constant input, we scale the output of each neuron by a normalization factor η. It is shown later that η is the reciprocal of the difference between the sum of all the excitatory weights and γ times the sum of all the inhibitory weights.

- We assume a linear time-invariant neural system. Thus, our weights are constant over time and are independent of either the excitation or the output.

Based on the model described above, the discrete-time output y_i of the ith neuron at time t+1 is given by

    y_i(t+1) = \eta \Big[ \sum_{j=-r}^{r} w_{|j|} x_{i+j}(t) - \gamma \sum_{j=-\tau}^{\tau} w'_{|j|} y_{i+j}(t) \Big],    (1)

where r = (n_e − 1)/2 and τ = (n_i − 1)/2. It should be noted here that because of the reflection assumption stated above, if the index i+j of x or y in (1) goes out of its defined range of 1…N, then it needs to be reflected with respect to the neuron index, i.e., an out-of-range index i+j needs to be changed to i−j. For example, when n_e = n_i = 5, the output of the second neuron, y_2, is

    y_2(t+1) = \eta \big[ w_2 x_4(t) + w_1 x_1(t) + w_0 x_2(t) + w_1 x_3(t) + w_2 x_4(t)
               - \gamma \big( w'_2 y_4(t) + w'_1 y_1(t) + w'_1 y_3(t) + w'_2 y_4(t) \big) \big].

Eq. (1) can also be written in the following matrix form:

    Y(t) = \eta [ A X(t) - \gamma B Y(t-1) ],    (2)

where X = (x_1, x_2, …, x_N)^T, Y = (y_1, y_2, …, y_N)^T and A and B are N × N square matrices:

    A = \begin{pmatrix}
    w_0 & 2w_1 & 2w_2 & \cdots & 2w_r & 0 & \cdots & 0 \\
    w_1 & w_0 & w_1 & 2w_2 & \cdots & 0 & \cdots & 0 \\
    \vdots & & \ddots & & & & & \vdots \\
    0 & \cdots & w_r & \cdots & w_0 & \cdots & w_r & \cdots \\
    \vdots & & & & & \ddots & & \vdots \\
    0 & \cdots & \cdots & 2w_2 & w_1 & w_0 & w_1 \\
    0 & \cdots & 0 & 2w_r & \cdots & 2w_2 & 2w_1 & w_0
    \end{pmatrix},

    B = \begin{pmatrix}
    0 & 2w'_1 & 2w'_2 & \cdots & 2w'_\tau & 0 & \cdots & 0 \\
    w'_1 & 0 & w'_1 & 2w'_2 & \cdots & 0 & \cdots & 0 \\
    \vdots & & \ddots & & & & & \vdots \\
    0 & \cdots & w'_\tau & \cdots & 0 & \cdots & w'_\tau & \cdots \\
    \vdots & & & & & \ddots & & \vdots \\
    0 & \cdots & \cdots & 2w'_2 & w'_1 & 0 & w'_1 \\
    0 & \cdots & 0 & 2w'_\tau & \cdots & 2w'_2 & 2w'_1 & 0
    \end{pmatrix}.

Fig. 3 represents a linear feedback system based on Eq. (2). Because of our assumption that the weights and connectivity are constant, this is a time-invariant system. It will be shown in the subsequent sections that γ should be less than Θ for a stable output.
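As a concrete check of the model, the update (1)–(2) can be prototyped directly. The sketch below is in Python/NumPy rather than the MATLAB used for the paper's simulations; the helper names and the γ value are ours, chosen only for illustration. It builds the reflected connection matrices A and B and iterates Eq. (2) for the uniform-weight case.

```python
import numpy as np

def band_matrix(N, w):
    """N x N connection matrix from distance weights w[0..r].
    A neighbour falling outside the array is reflected about the
    neuron (index i+j becomes i-j), doubling the mirror-image weight."""
    r = len(w) - 1
    M = np.zeros((N, N))
    for i in range(N):
        for j in range(-r, r + 1):
            k = i + j
            if not 0 <= k < N:
                k = i - j
            M[i, k] += w[abs(j)]
    return M

def simulate(w_exc, w_inh, gamma, X, steps=500):
    """Iterate Y(t) = eta * (A X - gamma * B Y(t-1)), Eq. (2)."""
    N = len(X)
    A = band_matrix(N, w_exc)
    B = band_matrix(N, w_inh)
    # Eq. (4): eta from the (reflection-free) middle-row weight sums
    eta = 1.0 / (A[N // 2].sum() - gamma * B[N // 2].sum())
    Y = np.zeros(N)
    for _ in range(steps):
        Y = eta * (A @ X - gamma * B @ Y)
    return Y

# Uniform weights, n_e = n_i = 3 (r = tau = 1), no self-inhibition (w'_0 = 0)
X = np.where(np.arange(40) < 20, 55.0, 65.0)   # single edge, as in Section 4
Y = simulate([1.0, 1.0], [0.0, 1.0], gamma=0.6, X=X)
```

For a constant input the output settles at the same constant value, confirming the normalization of Eq. (4); for the edge input the steady state shows the characteristic overshoot and undershoot on either side of the discontinuity.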

Fig. 2. Connections to the ith neuron. The lateral inhibitory weight of the edges coming from neurons i+τ and i−τ is γw'_τ. The weight of the edges from inputs x_{i+τ} and x_{i−τ} is w_τ. There is no self-inhibition, i.e., w'_0 = 0.

Fig. 3. The neural network of Fig. 1 represented as a linear time-invariant feedback system from Eq. (2). The forward path applies the normalization η to AX(t); the feedback path, formed by the lateral inhibition, applies −γB to the output; stability requires γ < Θ.

3. Mathematical analysis of the neural system

3.1. Normalization

We use a constant normalization factor, η, in our analysis such that if the input stimulus, x_i, is constant over time at some value, then the output also settles at that same value. Let x_i = x for all i. Then, the output y of each neuron is obtained from (1) as

    y = \eta \Big[ x \sum_{j=-r}^{r} w_{|j|} - \gamma y \sum_{k=-\tau}^{\tau} w'_{|k|} \Big].    (3)

From (3) we get η as

    \eta = \frac{1}{\sum_{j=-r}^{r} w_{|j|} - \gamma \sum_{k=-\tau}^{\tau} w'_{|k|}}    (4)

         = \frac{1}{\text{sum of excitatory weights} - \text{sum of lateral inhibitory weights}}.    (5)

3.2. Critical ratio

Let λ_i, i = 1, …, N, denote the eigenvalues of B. The characteristic equation of B is given by

    |B - \lambda I| = 0.

Let us denote the sum of each row in B as S. In the above determinant, after adding the first N−1 columns to the last column, we get the equivalent equation

    \begin{vmatrix}
    -\lambda & 2w'_1 & \cdots & 2w'_\tau & \cdots & 0 & S-\lambda \\
    w'_1 & -\lambda & w'_1 & 2w'_2 & \cdots & 0 & S-\lambda \\
    \vdots & & & \ddots & & & \vdots \\
    \cdots & w'_\tau & \cdots & -\lambda & \cdots & w'_\tau & S-\lambda \\
    \vdots & & & & \ddots & & \vdots \\
    0 & \cdots & 2w'_2 & w'_1 & -\lambda & S-\lambda \\
    0 & 0 & 2w'_\tau & \cdots & 2w'_1 & S-\lambda
    \end{vmatrix} = 0.

One can see that by setting λ = S, the last column of the determinant becomes 0, indicating that one of the eigenvalues is

    \lambda_m = S.    (6)

From Gerschgorin's circle theorem [18], all the eigenvalues lie within or on the boundary of the circles whose radii are the sums of the absolute values of all entries in each row except the value on the principal diagonal. Applying this to matrix B, we get

    |\lambda_i| \le S, \quad i = 1, 2, \ldots, N.    (7)

From (6) and (7), λ_m is the maximum eigenvalue of B.

Consider now the system equation given by (2). Assume that the system outputs are zero before the stimulus is applied. This system is causal because, as shown in (2), the output of the system at time t is only dependent upon its inputs and outputs till time t. Let the stimulus X be constant over time. Then we have

    Y(0) = 0,
    Y(1) = \eta A X.

By using recursion for solving Eq. (2) we get

    Y(t) = \eta [A X - \gamma B Y(t-1)]
         = \eta [I - \gamma\eta B + \cdots + (-1)^{t-1}(\gamma\eta B)^{t-1}] A X
         = \eta C A X,    (8)

where

    C = [I - \gamma\eta B + \cdots + (-1)^{t-1}(\gamma\eta B)^{t-1}]
      = [I + (-1)^{t-1}(\gamma\eta B)^{t}][I + \gamma\eta B]^{-1},    (9)

provided none of the eigenvalues λ satisfy γηλ = −1. Eq. (9) can be verified by noting that

    C[I + \gamma\eta B] = [I + (-1)^{t-1}(\gamma\eta B)^{t}],

and that the matrix [I + γηB] is invertible if no eigenvalue λ of B equals −(γη)^{−1}. Combining (8) and (9) one gets

    Y(t) = \eta [I + (-1)^{t-1}(\gamma\eta B)^{t}][I + \gamma\eta B]^{-1} A X.

As t → ∞, the geometric progression, C, converges if and only if |γηλ_m| < 1. Therefore, the steady-state output is

    Y(\infty) = \eta [I + \gamma\eta B]^{-1} A X.

The neural system would be stable only if the output Y converges asymptotically. By substituting the values of η and λ_m from (4) and (6) in the inequality, we deduce that for stability, one should have

    \frac{\gamma \sum_{j=-\tau}^{\tau} w'_{|j|}}{\sum_{j=-r}^{r} w_{|j|} - \gamma \sum_{j=-\tau}^{\tau} w'_{|j|}} < 1.    (10)

We assume that the lateral inhibition effect is less than the overall excitatory effect, so as to produce a meaningful output. Since the ratio in (10) is positive, we get the condition for a stable output to be

    \gamma < \frac{\sum_{j=-r}^{r} w_{|j|}}{2 \sum_{j=-\tau}^{\tau} w'_{|j|}}.    (11)

We refer to the right-hand side of (11) as the critical ratio and denote it by Θ in the rest of the paper. The condition for the stability of the neural system then becomes

    \gamma < \Theta.    (12)

Since γw'_{|j|} is the jth inhibitory weight, an alternate way to express the stability condition (12) is

    \frac{\text{Sum of lateral inhibitory weights}}{\text{Sum of excitatory weights}} < \frac{1}{2}.

Thus, if the sum of the lateral inhibitory weights of a neuron is less than half the sum of its excitatory weights, the system is stable. Note that Θ, as defined by (11), is a function of only the network connectivity, specified by the constants τ and r, and the weights, determined by the relative distance between the source and the destination neurons. The constant nature of
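The stability condition (10)–(12) is easy to evaluate numerically. The following sketch (Python; the helper names are ours) computes Θ from Eq. (11) and checks the convergence condition |γηλ_m| < 1 directly.

```python
def weight_sums(w_exc, w_inh):
    """Full sums over j = -r..r and -tau..tau of the distance weights,
    given one-sided lists w[0..r] and w'[0..tau]."""
    s_e = w_exc[0] + 2.0 * sum(w_exc[1:])
    s_i = w_inh[0] + 2.0 * sum(w_inh[1:])
    return s_e, s_i

def critical_ratio(w_exc, w_inh):
    """Theta = (sum of excitatory weights) / (2 * sum of w'_|j|), Eq. (11)."""
    s_e, s_i = weight_sums(w_exc, w_inh)
    return s_e / (2.0 * s_i)

def is_stable(w_exc, w_inh, gamma):
    """Convergence condition |gamma * eta * lambda_m| < 1, with
    lambda_m = S, the common row sum of B (Eq. (6))."""
    s_e, s_i = weight_sums(w_exc, w_inh)
    eta = 1.0 / (s_e - gamma * s_i)          # Eq. (4)
    return abs(gamma * eta * s_i) < 1.0

# Uniform weights, n_e = n_i = 3, no self-inhibition:
theta = critical_ratio([1.0, 1.0], [0.0, 1.0])   # 3 / (2 * 2) = 0.75
```

Just below Θ the loop gain |γηλ_m| is barely under 1, which is why the transient dies out ever more slowly, and the edge enhancement grows, as γ → Θ.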

Θ allows us to characterize network stability rather easily in the rest of the paper.

4. Simulation results

We used MATLAB programs for all our simulations. In all simulations we used the same number of neurons, N = 40. In order to characterize the effect of input discontinuities on the output, we chose an input stimulus that has a single edge sufficiently far from the boundaries of the neural network. This edge was located between neurons 20 and 21, i.e., x_i(t) = 55 for 1 ≤ i ≤ 20 and x_i(t) = 65 for 21 ≤ i ≤ 40. We also assumed that there is only one layer of neurons. We used 500 iterations in each simulation run to allow the system to stabilize, if at all. We did two sets of simulations, with self-inhibition and without any self-inhibition, but report here only the results with self-inhibition. Systems without self-inhibition showed the same qualitative behavior. In particular, when the number of connections to a neuron (n_i or n_e) was large, both sets of simulations gave identical outputs. For a small number of connections, networks with little or no self-inhibition produced much larger edge enhancement.

We used three weight distributions that we believe are probable in biological systems. In each case, the connectivity of the neural system and the weight ratio γ were varied and their effect on the critical ratio and output was determined. In order to parametrize the edge enhancement, we define the edge enhancement as the ratio of the difference between the stable maximum and minimum outputs to the input edge amplitude. Simulations show that edge enhancement is independent of the value of the inputs or the magnitude of the edge.

The following subsections present the simulation results.

4.1. Uniform weight distribution

In this set of simulations, all the excitatory weights (w_j) were assumed to be equal and so were the inhibitory weights (γw'_j). Further, without loss of generality, these weights may be chosen to be 1, i.e.,

    w_j = 1 for −r ≤ j ≤ r,
    w'_j = 1 for −τ ≤ j ≤ τ.

Fig. 4 shows the input/output behavior of such a system with n_i = n_e = 3. From this figure one can see the criticality of the weight ratio in keeping the neural system stable. As soon as the weight ratio exceeds Θ, the system oscillates with disproportionate magnitude. In Fig. 4(d), a weight ratio a mere 1% above the critical ratio Θ causes the output, after 500 iterations, to oscillate about four orders of magnitude larger than the input discontinuity.

The dependence of the critical ratio on the number of connections is shown in Fig. 5. One can see that the critical ratio increases with an increasing number of excitatory connections and a decreasing number of inhibitory connections. But when the number of excitatory connections equals the number of inhibitory connections, Θ approaches 0.5 as the number of connections becomes larger.
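The Fig. 4 behavior is straightforward to reproduce. The following self-contained sketch (Python/NumPy in place of the original MATLAB; function name and step count are ours) runs the uniform-weight system just below and just above Θ = 0.75.

```python
import numpy as np

def run_uniform(N=40, gamma=0.6, steps=500):
    """Uniform weights, n_e = n_i = 3: excitation from inputs i-1, i, i+1,
    lateral inhibition from outputs i-1, i+1, reflected at the array ends."""
    A = np.zeros((N, N))
    B = np.zeros((N, N))
    for i in range(N):
        for j in (-1, 0, 1):
            k = i + j if 0 <= i + j < N else i - j   # reflect at the edge
            A[i, k] += 1.0
            if j != 0:
                B[i, k] += 1.0
    eta = 1.0 / (3.0 - 2.0 * gamma)                  # Eq. (4)
    X = np.where(np.arange(N) < N // 2, 55.0, 65.0)  # edge between 20 and 21
    Y = np.zeros(N)
    for _ in range(steps):
        Y = eta * (A @ X - gamma * B @ Y)
    return Y

theta = 0.75                                # critical ratio for this connectivity
stable = run_uniform(gamma=0.95 * theta)    # settles; the edge is enhanced
unstable = run_uniform(gamma=1.01 * theta)  # oscillation grows without bound
```

With γ = 0.95Θ the output converges and the output swing exceeds the 10-unit input edge (edge enhancement above 1); with γ = 1.01Θ the loop gain exceeds 1 and the oscillation amplitude explodes, as in Fig. 4(d).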

Fig. 4. Response of a neural system with uniform weights for n_i = n_e = 3; each panel plots the input and output amplitudes against the neuron array index. Notice how the system becomes unstable when γ exceeds Θ = 0.75. The edge enhancement increases with increase in γ. (a) γ = 0.8Θ, (b) γ = 0.95Θ, (c) γ = 0.99Θ and (d) γ = 1.01Θ.

Fig. 4 also shows that as the weight ratio approaches the critical ratio, the edge enhancement increases, the rise being the sharpest just before the system becomes unstable. This dependence of the edge enhancement on the weight ratio is shown in Fig. 6.

Fig. 5. Dependence of the critical ratio Θ on the number of inhibitory connections (n_i) and excitatory connections (n_e) for the uniform weight distribution. For n_e = n_i the curves approach Θ = 0.5.

Fig. 6. Edge enhancement as a function of γ in the case of the uniform weight distribution for two network connectivities (n_e = n_i = 3 and n_e = n_i = 5). Note that the edge enhancement increases with γ and is largest just before γ exceeds Θ.

4.2. Inversely linear weight distribution

Our second set of simulations assumed signal loss along the length of synapses and axons. We therefore used weights that are inversely proportional to the length of the connection. Assuming unit distance between neighboring neurons and unit distance between neuron layers, we got

    w_j = \frac{1}{\sqrt{1 + j^2}} for −r ≤ j ≤ r,

    w'_j = \frac{1}{|j|} for −τ ≤ j ≤ τ, j ≠ 0, and w'_0 = 1.5.

The behavior of this neural system with respect to stability was similar to the one described in Section 4.1 and shown in Fig. 4. The dependence of the critical ratio on n_e and n_i in this case is shown in Fig. 7. A neural system in this case has a much lower critical ratio than one with uniform weights and the same number of connections. Therefore, the signal losses within the neuron connections significantly reduce the range of the weight ratio γ over which the neural system remains stable. The dependence of the edge enhancement on γ for this weight distribution is shown in Fig. 8. One can see that for the same connectivity, the edge enhancement in this case is much smaller than in the case of uniform weights. Thus, loss of signal along the neural connections affects both the stability and the edge enhancement adversely.

Fig. 7. Dependence of the critical ratio Θ on the number of inhibitory connections (n_i) and excitatory connections (n_e) for the inversely linear weight distribution.

Fig. 8. Edge enhancement as a function of γ in the case of the inversely linear weight distribution for two network connectivities (n_e = n_i = 3, Θ = 0.3449, and n_e = n_i = 5, Θ = 0.3676). Note the similarity with the curves in Fig. 6.

4.3. Normal weight distribution

In the third set of simulations we assumed the weights to have a normal distribution. As before, we let the weights be
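The Θ values marked in Fig. 8 follow directly from Eq. (11) applied to the inversely linear profile above (including its w'_0 = 1.5 self-inhibition term). A short sketch (Python; helper names are ours):

```python
import numpy as np

def theta(w_exc, w_inh):
    """Critical ratio Theta = sum(w_|j|) / (2 * sum(w'_|j|)), Eq. (11),
    from one-sided weight lists w[0..r] and w'[0..tau]."""
    total = lambda w: w[0] + 2.0 * sum(w[1:])
    return total(w_exc) / (2.0 * total(w_inh))

def theta_inverse_linear(n):
    """Inversely linear weights of Section 4.2 with n_e = n_i = n."""
    r = (n - 1) // 2
    w_e = [1.0 / np.sqrt(1.0 + j * j) for j in range(r + 1)]
    w_i = [1.5] + [1.0 / j for j in range(1, r + 1)]
    return theta(w_e, w_i)

print(theta([1.0, 1.0], [0.0, 1.0]))  # uniform profile, n = 3: 0.75
print(theta_inverse_linear(3))        # about 0.3449, as marked in Fig. 8
print(theta_inverse_linear(5))        # about 0.3676, as marked in Fig. 8
```

This confirms numerically that the inversely linear profile roughly halves the stability region relative to the uniform profile for the same connectivity.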
functions of the distance between the neurons. In addition, we choose large r and τ so as to span all the neurons in a layer. However, with the standard deviations we chose, the distributions fall off sharply and there are no edge effects to consider. Thus, in this case, the weights become

    w_j = \frac{1}{\sqrt{2\pi}\,\sigma_e} e^{-(1+j^2)/2\sigma_e^2} for −r ≤ j ≤ r,

    w'_j = \frac{1}{\sqrt{2\pi}\,\sigma_i} e^{-j^2/2\sigma_i^2} for −τ ≤ j ≤ τ, j ≠ 0, and w'_0 = 0.5,

where σ_e and σ_i denote the standard deviations of the excitatory and inhibitory weight distributions, respectively. This allows us to use a connectivity strength that drops off gradually as we move further away from a neuron. This is in sharp contrast to the earlier two cases, where the connectivity strength dropped abruptly beyond a chosen number of neurons. In this case, the weight distribution can be made flatter or narrower by increasing or decreasing the standard deviation. This allows us to control the sphere of influence of each neuron on its neighbors to a greater degree than the other two distributions. When σ_e → ∞ and σ_i → ∞, the normal distribution degenerates into a uniform distribution.

The dependence of the critical ratio on the standard deviations σ_e and σ_i in the case of the normal weight distribution is shown in Fig. 9. This figure shows that increasing σ_i decreases the critical ratio. Thus, inhibitory influence from more neurons reduces the range of weight ratio over which the neural system is stable. On the other hand, increasing σ_e allows more excitatory connections to affect the output of a neuron and increases the critical ratio.

Fig. 9. Dependence of the critical ratio Θ on the standard deviations of the inhibitory weights (σ_i) and the excitatory weights (σ_e) in the case of the normal weight distribution.

The edge enhancement in the case of the normal weight distribution is shown in Fig. 10. It is interesting to note that one gets substantial edge enhancement in the case of the normal weight distribution. Further, this edge enhancement increases with smaller standard deviations of the weight profiles. This is in line with the earlier two weight distributions, in which a smaller number of connections tended to give a larger edge enhancement.

Fig. 10. Edge enhancement as a function of γ in the case of a normal weight distribution for two network connectivities (σ_e = σ_i = 0.4 and σ_e = σ_i = 0.5, with Θ = 0.1148 and Θ = 0.2040). Note the similarity with the curves in Figs. 6 and 8.

5. Other effects near the critical region

As has been discussed earlier, the relationship between γ and Θ determines the stability of the neural network of Fig. 1 as well as the value of the edge enhancement. Our simulations show that the value of γ relative to that of Θ plays a greater role in deciding many other properties of the neural networks as well. Interestingly, as γ → Θ, the system behavior changes rapidly and other phenomena such as hysteresis become evident. An example of this rapid change can be seen from the perturbations around the input edge as γ approaches the critical ratio in Fig. 4. These radical changes are even more prominent in the absence of self-inhibition. For example, in the case of the inversely linear weight distribution, with no self-inhibition and n_e = n_i = 3, the edge enhancement is about 1.3 when γ = 0.9Θ and 2.3 when γ = 0.99Θ.

An edge in the input produces perturbations in the system response (see Fig. 4). We also observe that these perturbations spread laterally over a larger number of neurons as γ increases, the increase being more dramatic as γ nears the critical ratio. Table 1 shows this behavior.

Table 1
Spread of perturbations in the system response because of an edge in the input pattern as γ approaches the critical ratio Θ

    γ        Number of neurons affected by perturbations
    0.3Θ     5
    0.5Θ     5
    0.8Θ     7
    0.99Θ    11

The case illustrated here uses uniform weights, n_i = n_e = 5.
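The lateral spread of Table 1 can be measured by counting the neurons whose steady-state output deviates from the input by more than a small tolerance. A sketch (Python/NumPy; the 0.1 tolerance and helper names are our choices, so the exact counts may differ slightly from Table 1):

```python
import numpy as np

def steady_state(N=40, n=5, gamma=0.25, steps=2000):
    """Uniform weights with self-inhibition, n_e = n_i = n,
    reflected boundaries, single 55/65 edge input."""
    r = (n - 1) // 2
    A = np.zeros((N, N))
    B = np.zeros((N, N))
    for i in range(N):
        for j in range(-r, r + 1):
            k = i + j if 0 <= i + j < N else i - j   # reflected boundary
            A[i, k] += 1.0
            B[i, k] += 1.0
    eta = 1.0 / (n - gamma * n)                      # Eq. (4)
    X = np.where(np.arange(N) < N // 2, 55.0, 65.0)
    Y = np.zeros(N)
    for _ in range(steps):
        Y = eta * (A @ X - gamma * B @ Y)
    return X, Y

def spread(gamma, tol=0.1):
    """Number of neurons perturbed away from the input by more than tol."""
    X, Y = steady_state(gamma=gamma)
    return int(np.sum(np.abs(Y - X) > tol))

theta = 0.5      # uniform weights with self-inhibition, n_i = n_e = 5
print(spread(0.3 * theta), spread(0.99 * theta))
```

The count grows as γ approaches Θ, reproducing the trend of Table 1.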

Fig. 11. The hysteresis in the neural system output when (a) γ = 0.8Θ and (b) γ = 0.99Θ. The case illustrated uses uniform weights with n_i = n_e = 5. The same input is applied to all the neurons. The solid line indicates the neural network output as the input is increased; the dotted line shows the output as the input is decreased.

Jacobs and Werblin [11] have observed experimentally that a high-illumination narrow spike stimulus produced ridges in the retinal output which spread laterally. This phenomenon can be explained in the context of our model as follows. It is known that stimulus intensity alters the balance between excitation and inhibition in the visual cortex, and in particular, it suppresses the excitatory response while enhancing the inhibitory response [20]. In other words, a high-illumination spike such as the one used by Jacobs and Werblin may indeed cause γ to increase, and even bring it close to the critical ratio Θ, thus producing a lateral spread of the output spike. As the ridge progresses, it changes the γ of the surrounding neurons, which in turn spreads the ridge further.

The γ-dependent feedback in the neural system (see Fig. 3) produces hysteresis effects, particularly when γ is close to the critical ratio. Fig. 11 shows a typical behavior of the system for two values of γ. During these simulations, all the neurons were applied the same input. The input was either linearly increased or decreased with time and the output of the neural system was recorded. Note the substantial difference in the output for the same value of the input depending upon whether the input was increasing or decreasing. Note also that for low values of γ, hysteresis effects are not observed.

Cai et al. have demonstrated hysteresis by applying kinetic theory to a mixture of simple (linear) and complex (non-linear) neurons [4]. Our relatively simple model also displays hysteresis, albeit only when γ is very close to Θ.

6. Conclusions

This paper deals with the imbalance between the strengths of the inhibitory and excitatory phenomena in the neuron. We assume that the effect of neighboring neurons on any neuron is dependent upon their distances from the neuron, but given the same distance, an inhibitory influence is γ times stronger than an excitatory influence. In terms of neural networks this implies that the weights of inhibitory connections are γ times larger than those of the excitatory connections. We have assumed only lateral inhibition. We have considered three very different weight functions, and the overall conclusions of this work seem to be independent of them.

From the mathematical analysis and the simulations we observe that in stable neural networks, γ has an upper bound which we call the critical ratio Θ. This bound Θ is dependent only on the network connectivity and the weight functions. When γ < Θ, the neural system is able to enhance any discontinuities in the input stimulus. If γ exceeds Θ even minimally, the system output instantly becomes erratic. The edge enhancement increases rapidly as the system approaches instability (i.e., as γ → Θ). The number of neurons affected by the input discontinuity also grows at the same time. The spatial discontinuity in the stimulus produces stable periodic patterns around the discontinuity region. Such periodic patterns have been observed by others in intracortical neuronal fields [10]. Simulations show that edge enhancement is independent of the actual values of the inputs.

The stability region (γ < Θ) depends on the localization of connections. If there are very few lateral inhibitory connections or a large number of excitatory connections from a lower layer, then the stability region is wider. In other words, in this case, a larger amount of imbalance between the excitatory and inhibitory strengths may be tolerated. For γ values close to Θ (but less than Θ), the system exhibits hysteresis. Many of the system properties change at a rapid rate as γ → Θ, creating unwanted anomalies. Determination of the stability region and the critical ratio Θ identified here will help avoid such situations and achieve a balance between the excitatory and inhibitory strengths for well-behaved neural networks.

Acknowledgment

The authors wish to thank the reviewers for suggesting several improvements to the manuscript.

References

[1] D.J. Amit, N. Brunel, Model of global spontaneous activity and local structured activity during delay periods in the cerebral cortex, Cereb. Cortex 7 (1997) 237–252.

[2] E.A. Benardete, E. Kaplan, The receptive field of the primate P retinal ganglion cell, I: linear dynamics, Visual Neurosci. 14 (1) (1997) 169–185.
[3] A.N. Burkitt, Balanced neurons: analysis of leaky integrate-and-fire neurons with reversal potentials, Biol. Cybern. 85 (4) (2001) 247–255.
[4] D. Cai, L. Tao, M. Shelley, D.W. McLaughlin, An effective kinetic representation of fluctuation-driven neuronal networks with application to simple and complex cells in visual cortex, Proc. Natl. Acad. Sci. 101 (20) (2004) 7757–7762.
[5] M. Carandini, D.J. Heeger, J.A. Movshon, Linearity and normalization in simple cells of the macaque primary visual cortex, J. Neurosci. 17 (21) (1997) 8621–8644.
[6] C. Enroth-Cugell, J.G. Robson, D.E. Schweitzer-Tong, A.B. Watson, Spatio-temporal interactions in cat retinal ganglion cells showing linear spatial summation, J. Physiol. 347 (1983) 297–307.
[7] R. Galambos, H. Davis, Inhibition of activity in single auditory nerve fibers by acoustic stimulation, J. Neurophysiol. 7 (1944) 287–304.
[8] B.S. Gutkin, C.E. Smith, Conditions for noise reduction and stable encoding of spatial structure by cortical neural networks, Biol. Cybern. 82 (6) (2000) 469–475.
[9] H. Hartline, F. Ratliff, Studies on Excitation and Inhibition in the Retina, The Rockefeller University Press, New York, 1974.
[10] A. Hutt, M. Bestehorn, T. Wennekers, Pattern formation in intracortical neuronal fields, Network: Comput. Neural Syst. 14 (2003) 351–368.
[11] A.L. Jacobs, F.S. Werblin, Spatiotemporal patterns at the retinal output, J. Neurophysiol. 80 (1) (1998) 447–451.
[12] D.G. Kelly, Stability in contractive nonlinear neural networks, IEEE Trans. Biomed. Eng. 37 (3) (1990).
[13] B.B. Lee, J. Pokorny, V.C. Smith, J. Kremers, Responses to pulses and sinusoids in macaque ganglion cells, Vision Res. 34 (23) (1994) 3081–3096.
[14] L. Maffei, A. Fiorentini, The visual cortex as a spatial frequency analyzer, Vision Res. 13 (7) (1973) 1255–1267.
[15] V.B. Mountcastle, The columnar organization of the neocortex, Brain 120 (1997) 701–722.
[16] M.N. Shadlen, W.T. Newsome, Noise, neural codes and cortical organization, Curr. Opinion Neurobiol. 4 (4) (1994) 569–579.
[17] M.N. Shadlen, W.T. Newsome, The variable discharge of cortical neurons: implications for connectivity, J. Neurosci. 18 (10) (1998) 3870–3896.
[18] R.S. Varga, Geršgorin and His Circles, Springer, Berlin, 2004.
[19] M. Wehr, A.M. Zador, Balanced inhibition underlies tuning and sharpens spike timing in auditory cortex, Nature 426 (2003) 442–446.
[20] M. Weliky, K. Kandler, D. Fitzpatrick, L.C. Katz, Patterns of excitation and inhibition evoked by horizontal connections in visual cortex share a common relationship to orientation columns, Neuron 15 (3) (1995) 541–552.

Pradeep Arkachar received his B.E. in Electronics and Communications from Mysore University in 2000. He received his M.S. in Electrical Engineering from Lehigh University in 2003. His interests are in the fields of signal processing, bioengineering and economics. He is currently working at Ease Diagnostics as an embedded electronics design engineer.

Meghanad D. Wagh got his B.Tech and Ph.D., both in Electrical Engineering, from the Indian Institute of Technology, Bombay. His research interests include algorithms and architectures for high-speed applications and alternate computing paradigms. He is currently an associate professor of Computer and Electrical Engineering at Lehigh University.