ON THE EXISTENCE OF HOPFIELD NEURAL NETWORKS: SYNTHESIS OF HOPFIELD TYPE ASSOCIATIVE MEMORIES

Garimella Rama Murthy* and Moncef Gabbouj**

*International Institute of Information Technology, Hyderabad, India **Tampere University of Technology, Tampere, Finland

ABSTRACT

In this research paper, the problem of existence of the associative memory synthesized by Hopfield is addressed and solved. Using a Hadamard matrix of suitable dimension, an algorithm to synthesize a real valued Hopfield neural network is discussed. The problem of existence and synthesis of a certain complex Hopfield neural network is addressed and solved. Also, the synthesis of real and complex Hopfield type associative memories is discussed. Novel complex Hadamard matrices are defined and studied. Using such matrices, a novel complex Hadamard transform is proposed. Also, complex Walsh functions are proposed. Certain interesting distance measures are defined.

1. INTRODUCTION: In the progress of science and technology, homo sapiens attempted the mathematical modeling of natural (physical/chemical/biological) systems. Specifically, there were attempts to model the biological neural network. McCulloch and Pitts proposed a model of the artificial neuron. Using that model of the neuron and its variations, researchers comprehensively built the theory of Artificial Neural Networks (ANNs). In this theory, some important milestones were the Single Layer Perceptron (SLP) and the Multi Layer Perceptron (MLP). The Multi Layer Perceptron is an example of a feedforward neural network that received considerable attention, and many applications continue to utilize such a neural network. Researchers were naturally motivated to investigate feedback neural networks. Hopfield effectively succeeded in proposing a feedback neural network model that acts as an associative memory. The Hopfield neural network model led to intense research activity on various types of associative memories, such as the Bi-directional Associative Memory (BAM). Researchers were also motivated to investigate neural networks in which the inputs, synaptic weights and outputs are allowed to be complex numbers. These research efforts led to the comprehensive research area of Complex Valued Neural Networks (CVNNs) [2, MGZ], [6]. The authors proposed a complex valued generalization of the Hopfield Neural Network (HNN) [3, RaP]. It is a different model from the other types of complex Hopfield neural networks investigated by other researchers. In this research paper, the authors investigate the question of existence and synthesis (algorithmic procedure) of the real valued associative memory proposed by Hopfield. The results are also generalized to a complex valued Hopfield neural network.

This research paper is organized in the following manner. In Section 2, a detailed explanation of the real valued Hopfield neural network is provided. Also, an interesting complex valued Hopfield neural network proposed by the authors is discussed. In Section 3, the existence of the real valued Hopfield neural network is discussed. In Section 4, the existence of a complex valued Hopfield neural network is discussed. In Section 5, the synthesis of Hopfield type associative memories is discussed. The research paper concludes in Section 6.

2. Hopfield Neural Network: Review of Research Literature: The Hopfield neural network is an artificial neural network model. It is a nonlinear dynamical system represented by a weighted, undirected graph. The nodes of the graph represent artificial neurons and the edge weights correspond to synaptic weights. At each neuron/node, there is a threshold value. Thus, in summary, a Hopfield neural network can be represented by a synaptic weight matrix M and a threshold vector T. The order of the network corresponds to the number of neurons [1, Hop]. Every neuron is in one of the two possible states +1 or -1. Thus, the state space of an N-th order Hopfield Neural Network (HNN) is the N-dimensional unit hypercube. Let the state of the i-th neuron at time t be denoted by $V_i(t) \in \{ +1, -1 \}$. Thus, the state of the nonlinear dynamical system is represented by the $N \times 1$ vector $\bar{V}(t)$. The state updation at the i-th node is governed by the following equation:

$$V_i(t+1) = \mathrm{Sign}\Big\{ \sum_{j=1}^{N} M_{ij}\, V_j(t) - T_i \Big\} \qquad \ldots (2.1)$$

where Sign(.) is the signum function. In other words, $V_i(t+1)$ is +1 if the term in the braces is non-negative; otherwise $V_i(t+1)$ is -1. Depending on the set of nodes at which the state updation (2.1) is performed at any time t, the Hopfield Neural Network (HNN) operation is classified into the following modes.

• Serial Mode: The state updation in equation (2.1) is performed at exactly one of the nodes/neurons at time t.

• Fully Parallel Mode: The state updation in equation (2.1) is performed simultaneously at all the N nodes/neurons at time t. (Both modes are sketched in code below.)
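To make the two modes of operation concrete, the following minimal sketch (Python/NumPy; the network, thresholds and initial state are arbitrary illustrative choices, not from the original formulation) implements the update rule (2.1) in both serial and fully parallel modes, with Sign(0) taken as +1.

```python
import numpy as np

def sign(x):
    # Signum function of equation (2.1), with Sign(0) defined as +1.
    return np.where(x >= 0, 1, -1)

def serial_step(M, T, v, i):
    # Serial mode: update exactly one neuron i at this time step.
    v = v.copy()
    v[i] = sign(M[i] @ v - T[i])
    return v

def parallel_step(M, T, v):
    # Fully parallel mode: update all N neurons simultaneously.
    return sign(M @ v - T)

# Illustrative run: arbitrary symmetric M with zero (hence non-negative) diagonal.
rng = np.random.default_rng(0)
N = 8
M = rng.integers(-2, 3, size=(N, N))
M = M + M.T
np.fill_diagonal(M, 0)
T = np.zeros(N)
v = sign(rng.standard_normal(N))
for t in range(100):                      # cyclic serial updates
    v = serial_step(M, T, v, t % N)
    if np.array_equal(v, sign(M @ v - T)):
        break                             # stable state reached (equation (2.2))
```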

In the state space of Hopfield Neural Network, a non-linear dynamical system, there are certain distinguished states, called the “stable states”.

• Definition: A state $\bar{V}(t)$ is called a stable state if and only if

$$\bar{V}(t) = \mathrm{Sign}\{\, M \bar{V}(t) - T \,\} \qquad \ldots (2.2)$$

Thus, once the Hopfield Neural Network (HNN) reaches a stable state, irrespective of the mode of operation of the network, the HNN will remain in that state forever. In other words, there is no further change of the state of the HNN once a stable state is reached. The following convergence theorem summarizes the dynamics of the Hopfield Neural Network (HNN). It characterizes the operation of the neural network as an associative memory.

Theorem 1: Let the pair N = ( M,T ) specify a Hopfield neural network. Then the following hold true:

[1] Hopfield : If N is operating in a serial mode and the elements of the diagonal of M are non-negative, the network will always converge to a stable state ( i.e. there are no cycles in the state space ).

[2] Goles: If N is operating in the fully parallel mode, the network will always converge to a stable state or to a cycle of length 2 (i.e. the cycles in the state space are of length $\le 2$).

Remark 1: It should be noted that in [4, Rama1], it is shown that the synaptic weight matrix can be assumed to be an arbitrary symmetric matrix.

• NOVEL COMPLEX HOPFIELD NEURAL NETWORK

In the research literature on artificial neural networks, there were attempts by many researchers to propose Complex Valued Neural Networks (CVNNs) in which the synaptic weights and inputs are necessarily complex numbers [2, MGZ]. In our research efforts on CVNNs, we proposed and studied one possible complex Hopfield neural network, first discussed in [3, RaP], [7-12]. The detailed description of such an artificial neural network requires the following concepts.

A. Complex Hypercube: Consider a vector of dimension N whose components assume values in the set { 1+j1, 1-j1, -1+j1, -1-j1 }. Thus there are $4^N$ such points, which constitute the corners of a set called the "complex hypercube".

B. Complex Signum Function: Consider a complex number a + jb. The "complex signum function" is defined as follows: Csign(a + jb) = Sign(a) + j Sign(b).
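A one-line sketch of the complex signum function (Python/NumPy, purely illustrative; Sign(0) is taken as +1) shows that every output lands on a corner of the complex hypercube:

```python
import numpy as np

def csign(z):
    # Csign(a + jb) = Sign(a) + j Sign(b), with Sign(0) defined as +1.
    sgn = lambda x: np.where(x >= 0, 1, -1)
    return sgn(z.real) + 1j * sgn(z.imag)

z = np.array([0.3 - 2.0j, -1.5 + 0.0j])
print(csign(z))   # [ 1.-1.j -1.+1.j] : corners of the complex hypercube
```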

Now we briefly summarize the Complex Hopfield Neural Network first proposed in [3,RaP ].

• The synaptic weight matrix of such an artificial neural network is a Hermitian matrix. The activation function in such a complex valued neural network is the complex signum function. The convergence theorem associated with such a neural network is provided below.

Theorem 2: Let N=( W, T) be a neural network, with W being a synaptic weight matrix which is a Hermitian matrix with nonnegative diagonal elements and T being the threshold vector; then the following hold.

1) If N is operating in a serial mode and the consequent state computation of any arbitrary node i is as follows:
$$V_i(t+1) = \mathrm{Csign}\big( H_i(t) \big), \quad \text{where } H_i(t) = \sum_{j=1}^{N} W_{i,j}\, V_j(t) - T_i,$$
then the network will always converge to a stable state.

2) If N is operating in a fully parallel mode and the consequent state computation of any arbitrary node (i) is similar to that of the serial mode, then the network will always converge to a stable state or to a cycle of length 2.

Remark 2: It has been shown in [5, RaN] that the diagonal elements of W can be arbitrary real numbers and need not necessarily be non-negative.

3. Existence of Real Valued Hopfield Neural Network:

• Linear Algebraic Structure of the Hopfield Neural Network: Hopfield synthesized a real valued synaptic weight matrix from the patterns to be stored in such a way that the network so obtained has these patterns as stable states. The weight matrix given by Hopfield is as follows:

$$W = \sum_{j=1}^{S} \big( X_j X_j^T - I \big) \qquad \ldots (3.1)$$

where S is the number of patterns to be stored (S < N), I is the identity matrix and $X_1, X_2, \ldots, X_S$ are the orthogonal real patterns (lying on the real hypercube) to be stored. Now, we would like to understand the linear algebraic structure of the symmetric synaptic weight matrix W. Some results related to W are as follows:

• Trace(W) is zero. Thus, the sum of the eigenvalues of W is zero.
• It is easy to see that

$$W X_k = (N - S)\, X_k \quad \text{with } S < N.$$

Now, we need the following definition in the succeeding discussion.

Definition: The local minimum vectors of the energy function associated with W are called anti-stable states. Thus, if $\bar{u}$ is an anti-stable state of the matrix W, then it satisfies the condition $\bar{u} = -\mathrm{Sign}(W\bar{u})$.

Using the definitions, we have the following general result

Lemma 1: If a corner of unit hypercube is an eigenvector of W corresponding to positive / negative eigenvalue, then it is also a stable / anti-stable state.

Proof: Follows from the utilization of definitions of eigenvectors, stable / anti-stable states. Q.E.D.

Now we arrive at the spectral representation of W using the above results. Let $\{ X_1, \ldots, X_S, Y_{S+1}, \ldots, Y_N \}$ be orthogonal vectors that are corners of the unit hypercube. It is easy to see that

$$W Y_k = (-S)\, Y_k \quad \text{for } (S+1) \le k \le N.$$

Thus, the corner $Y_k$ of the hypercube is also an eigenvector corresponding to the negative eigenvalue $-S$. Hence, the spectral representation of the connection matrix of the associative memory synthesized by Hopfield is given by

$$W = \sum_{j=1}^{S} (N-S)\, \frac{X_j}{\sqrt{N}} \frac{X_j^T}{\sqrt{N}} \;-\; \sum_{j=S+1}^{N} S\, \frac{Y_j}{\sqrt{N}} \frac{Y_j^T}{\sqrt{N}} \;=\; \sum_{j=1}^{S} \frac{(N-S)}{N}\, X_j X_j^T \;-\; \sum_{j=S+1}^{N} \frac{S}{N}\, Y_j Y_j^T.$$

Remark 3: Hopfield synthesized such a synaptic weight matrix to ensure that the diagonal elements of W are all zero (and hence necessarily non-negative), as required by his proof of the convergence theorem (i.e. Theorem 1). In the above synthesis of associative memory, Hopfield implicitly assumed that corners of the hypercube that are also "orthogonal" exist in all dimensions. The authors questioned this assumption. The effort of the authors on this issue is summarized in the following. We first propose some interesting distance measures.

• Some Interesting Distance Measures: The following "distance" definitions are useful in the succeeding discussion. Consider {+1, -1} vectors on the unit hypercube. Consider the mapping:

$$+1 \to 1, \qquad -1 \to 0.$$

Definition: The "Hamming-like" distance between any two vectors on the unit hypercube is the Hamming distance between them (after the above mapping).
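A minimal sketch of the Hamming-like distance (Python/NumPy; the two vectors are arbitrary illustrative choices):

```python
import numpy as np

def hamming_like(x, y):
    # Map +1 -> 1, -1 -> 0, then count the positions where the images differ.
    bx, by = (np.asarray(x) + 1) // 2, (np.asarray(y) + 1) // 2
    return int(np.sum(bx != by))

x = np.array([1, -1,  1, 1])
y = np.array([1,  1, -1, 1])
print(hamming_like(x, y))   # 2
```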

Remark: Consider any two vectors P, Q in the N-dimensional Euclidean space. By dividing each of the components of P and Q by the corresponding Euclidean norm, we arrive at vectors on the unit hypersphere (i.e. after normalization by the corresponding Euclidean norm). Using this approach, we now restrict consideration to vectors on the unit hypersphere obtained from the corresponding vectors in the N-dimensional Euclidean space.

Definition: Consider any two points X, Y on the unit hypersphere. Define

Z = Sign ( X )

W= Sign( Y )

i.e. the components of the vector Z are obtained as the signs of the corresponding components of X. It should be noted that the sign of a ZERO component is consistently defined as +1 or -1 (i.e. Sign(0) = +1 or -1). The "induced Hamming-like distance" between {X, Y}, denoted $d_H(X, Y)$, is defined as the Hamming-like distance between Z, W. Let the Euclidean distance between X, Y be denoted by $d_E(X, Y)$. The following properties of the distance measures can be easily verified:

(I) $d_E(X, Y) = 0$ implies that $d_H(X, Y) = 0$. But $d_H(X, Y) = 0$ does not imply that $d_E(X, Y) = 0$.

(II) The distances $d_E(X, Y)$ and $d_H(X, Y)$ cannot be consistently ordered.

The above discussion is now generalized to determine the "generalized induced Hamming distance" between two vectors in the N-dimensional Euclidean space. We consider two points/vectors $X_1, X_2$ in the "bounded" (in the sense of Euclidean norm) N-dimensional Euclidean space. We quantize the components of the two vectors using the ceiling function:

$$Y_1(j) = \mathrm{Ceiling}\,[\, X_1(j) \,] \quad \text{for } 1 \le j \le N, \quad \text{and}$$

$$Y_2(j) = \mathrm{Ceiling}\,[\, X_2(j) \,] \quad \text{for } 1 \le j \le N.$$

Note: In the above equations, we can use either the floor or the ceiling function. Hence, in effect, the components of the vectors are rounded off/truncated to the nearest integer. It should be noted that after quantization, the components of the vectors will be positive or negative integers. Boundedness of the vectors ensures that after quantization, all the components are bounded by a certain integer. Thus, the operation of quantization ensures that the vectors lie on a bounded lattice.

Definition: Consider any two bounded vectors $X_1, X_2$ lying in the N-dimensional Euclidean space. Let

$$Y_1 = \mathrm{Ceiling}( X_1 )$$

$$Y_2 = \mathrm{Ceiling}( X_2 ).$$

The "generalized induced Hamming distance" between the bounded vectors $X_1, X_2$ is defined as the Hamming distance between the vectors $Y_1, Y_2$.

Note: We can also define the "induced Manhattan distance" between $X_1, X_2$ as the Manhattan distance between the vectors $Y_1, Y_2$. In the above two definitions, a countable/finite set of vectors/points is extracted (through the process of appropriate quantization) from the uncountable set of vectors. Using the Hamming/Manhattan distance on the countable/finite set, the "induced" Hamming/Manhattan distance is defined.
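The two induced distances can be sketched as follows (Python/NumPy; the vectors are arbitrary illustrative choices):

```python
import numpy as np

def induced_distances(x1, x2):
    # Quantize the bounded real vectors onto the integer lattice with the
    # ceiling function, then measure Hamming and Manhattan distances there.
    y1, y2 = np.ceil(x1).astype(int), np.ceil(x2).astype(int)
    hamming = int(np.sum(y1 != y2))            # generalized induced Hamming distance
    manhattan = int(np.sum(np.abs(y1 - y2)))   # induced Manhattan distance
    return hamming, manhattan

x1 = np.array([0.2, -1.7, 3.1])                # ceilings: ( 1, -1, 4)
x2 = np.array([0.9, -1.1, 5.5])                # ceilings: ( 1, -1, 6)
print(induced_distances(x1, x2))               # (1, 2)
```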

Remark 4: Using a similar idea, distances between vectors on the complex unit hypersphere can be defined and studied. Details are avoided for brevity. First, we provide the following well known definition of orthogonality of vectors:

Definition: Two vectors X, Y lying on the unit hypercube are orthogonal if their inner product is zero i.e. < X, Y > = 0.

Note: We are interested in the existence of orthogonal binary vectors on the hypercube when the dimension is odd or even.

Lemma 2: If the dimension of unit hypercube is odd, then there are NO orthogonal vectors lying on the unit hypercube.

Proof: Consider two vectors X, Y lying on the unit hypercube. Let the "Hamming-like distance" between them be d. The inner product of X, Y is given by

< X , Y > = { Number of components where X, Y agree }–{ Number of components where X, Y disagree }

= ( N – d ) - d = N – 2 d .

Thus for the two vectors X , Y to be orthogonal, it is necessary and sufficient that “N” is an even number. Q.E.D.

Corollary : If the dimension is odd, there are no orthogonal vectors lying on any one of the countably many hypercubes.

Remark 5: When two real valued vectors X, Y lying on the unit hypercube are orthogonal to each other, the Hamming-like distance between them is $\frac{N}{2}$.

Remark 6: Let us consider the following hypercube (not necessarily the unit hypercube):

$$\{\, \bar{X} = ( x_1, x_2, \ldots, x_N ) \;:\; x_i = \pm K \ \text{for } 1 \le i \le N \,\}.$$

Let us denote it as the "K-th order hypercube". Suppose two vectors X, Y lie on the above hypercube. It readily follows, from an argument similar to the one employed in the above lemma, that

$$\langle X, Y \rangle = K^2\, ( N - 2d ),$$

where d is the "Hamming-like distance" between X, Y.

Thus, we can study the properties of the function f(d) = N - 2d and interpret it suitably. Suppose we fix X and vary the vector Y (among the other vectors on the unit hypercube). We have that

the number of vectors at a Hamming-like distance of k from the vector X is $\binom{N}{k}$ for $0 \le k \le N$. Thus there are $\binom{N}{k}$ vectors for which $\langle X, Y \rangle = N - 2k$. The above lemma sheds light on the structure of the unit hypercube when its dimension is even/odd. The generalization of the above lemma to higher dimensional {+1, -1} valued tensors (i.e. the inner product of two {+1, -1} valued tensors) is straightforward and is avoided for brevity.

• On The Existence of Associative Memory Synthesized by Hopfield:

Thus, in view of the above Lemma 2, we have the following result.

Lemma 3: Hopfield construction of associative memory exists only when the dimension of the hypercube is EVEN.

Proof: It follows from Lemma 2, since the construction (3.1) requires orthogonal corners of the hypercube. Q.E.D.

The above associative memory synthesized by Hopfield requires the choice of s corners of the hypercube that are mutually (i.e. any pair of them) orthogonal to one another. These s orthogonal corners of the hypercube are the desired stable states. In the literature, there is no algorithmic procedure to arrive at, say, s orthogonal {+1, -1} vectors. Thus, we need a constructive procedure to choose s corners of the hypercube that are mutually (any two corners) orthogonal to one another. In solving this problem, some results on Hadamard matrices are very relevant [13].

Definition: A Hadamard matrix of order m, denoted by $H_m$, is an $m \times m$ matrix of +1's and -1's such that

$$H_m H_m^T = m\, I_m,$$

where $I_m$ is the $m \times m$ identity matrix. This definition is equivalent to saying that any two rows of $H_m$ are orthogonal. In view of the above definition and Lemma 2, we have the following interesting result.

Lemma 4: Hadamard matrices of odd order do not exist.

Proof: Follows directly from Lemma 2. Q.E.D. Thus, we are interested in determining whether Hadamard matrices of any arbitrary EVEN order exist. A partial answer to this question is well known. Specifically, it is known that Hadamard matrices of order $2^k$ exist for all $k \ge 0$. The so-called Sylvester construction is provided below:

$$H_1 = [\,1\,], \qquad H_2 = \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix}, \qquad H_{2^{m+1}} = \begin{bmatrix} H_{2^m} & H_{2^m} \\ H_{2^m} & -H_{2^m} \end{bmatrix}.$$

Now we provide an algorithm to synthesize the Hopfield associative memory (a code sketch follows the steps):

Step 1: To choose s pairwise (any two) orthogonal corners, compute a Hadamard matrix $H_m$ (using the Sylvester construction) with $m = 2^k$ for some k such that $m \ge s$.

Step 2: Pick any s rows of $H_m$; they are pairwise orthogonal to one another.

Step 3: Synthesize Hopfield Associative memory as specified in equation (3.1).
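A compact sketch of Steps 1-3 (Python/NumPy; s = 3 and k = 3 are illustrative choices, not from the original text). The final assertion checks that each stored pattern satisfies the stable state condition (2.2) with zero thresholds.

```python
import numpy as np

def sylvester_hadamard(k):
    # Sylvester construction: Hadamard matrix of order 2**k.
    H = np.array([[1]])
    for _ in range(k):
        H = np.block([[H, H], [H, -H]])
    return H

def hopfield_weights(patterns):
    # Equation (3.1): W = sum_j (X_j X_j^T - I).
    N = patterns.shape[1]
    return sum(np.outer(x, x) - np.eye(N) for x in patterns)

s, k = 3, 3                              # Step 1: m = 2**k = 8 >= s
H = sylvester_hadamard(k)
X = H[:s]                                # Step 2: any s rows are pairwise orthogonal
W = hopfield_weights(X)                  # Step 3: synthesize the weight matrix

sign = lambda v: np.where(v >= 0, 1, -1)
assert all(np.array_equal(x, sign(W @ x)) for x in X)   # stored patterns are stable
```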

Remark: It is possible to synthesize a Hopfield network with desired stable states of odd dimension by moving to the next immediate even dimension.

4. Existence and Synthesis of a Complex Valued Hopfield Neural Network: Novel Complex Hadamard Matrices

We now consider the Complex Hopfield neural network discussed in Section 2. The existence and synthesis of such an artificial neural network is discussed below. First we derive the conditions for two vectors on the complex hypercube to be unitary to each other.

• Conditions on Unitary Vectors on Complex Hypercube: Now let us consider the complex unit hypercube. Let the two complex valued vectors lying on it be denoted by X, Y. They can be represented in the following manner:

X = A + jB, Y = C + jD, where A, B, C, D lie on the real valued unit hypercube.

• Definition: X and Y are unitary/orthogonal when $X^{*} Y = 0$, where $X^{*}$ denotes the conjugate transpose of X.

Now we find the conditions for unitarity / orthogonality of X, Y. We need the following definitions. Let

$d_1$ be the number of places where A, C differ,
$d_2$ be the number of places where B, D differ,
$d_3$ be the number of places where A, D differ, and
$d_4$ be the number of places where B, C differ.

Lemma 5: The vectors X and Y are unitary if and only if $d_1 = d_2 = d_3 = d_4 = \frac{N}{2}$.

Proof: It is easy to see that

$$X^{*} Y = ( A - jB )^T ( C + jD ) = ( A^T C + B^T D ) + j\,( A^T D - B^T C ).$$

Now, using the same idea as in Lemma 2, we have that

$$X^{*} Y = [\,( N - 2d_1 ) + ( N - 2d_2 )\,] + j\,[\,( N - 2d_3 ) - ( N - 2d_4 )\,].$$

Thus, we necessarily have that

$$X^{*} Y = [\, 2N - 2( d_1 + d_2 ) \,] + j\, 2( d_4 - d_3 ).$$

Thus, for X, Y to be unitary/orthogonal to one another, we must have that $N = ( d_1 + d_2 )$ and $d_3 = d_4$.

Note: One sufficient condition for unitarity/orthogonality of X, Y is that the vectors A, B, C, D lying on the real valued unit hypercube are such that $A^T C = 0$, $B^T D = 0$, $A^T D = 0$, $B^T C = 0$, i.e. the vector pairs (A,C), (B,D), (A,D), (B,C) are orthogonal. This can happen if $d_1 = d_2 = d_3 = d_4 = \frac{N}{2}$. Now, using the "identity swapping argument", it can easily be shown that the conditions $N = ( d_1 + d_2 )$ and $d_3 = d_4$ necessarily require that $d_1 = d_2 = d_3 = d_4 = \frac{N}{2}$. Hence this condition is necessary as well as sufficient. Q.E.D.
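The identity $X^{*}Y = [2N - 2(d_1+d_2)] + j\,2(d_4 - d_3)$ from the proof can be checked numerically (Python/NumPy; the random corners are illustrative):

```python
import numpy as np

def disagreements(p, q):
    # Number of places where two {+1,-1} vectors differ (Hamming-like distance).
    return int(np.sum(p != q))

rng = np.random.default_rng(1)
N = 8
A, B, C, D = (np.where(rng.standard_normal(N) >= 0, 1, -1) for _ in range(4))
X, Y = A + 1j * B, C + 1j * D
d1, d2 = disagreements(A, C), disagreements(B, D)
d3, d4 = disagreements(A, D), disagreements(B, C)
lhs = np.vdot(X, Y)                      # vdot conjugates the first argument: X* Y
rhs = (2 * N - 2 * (d1 + d2)) + 2j * (d4 - d3)
assert lhs == rhs
```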

Suppose X, Y lie on the K-th order complex hypercube. Then the inner product of X, Y is expressed by the following formula:

$$X^{*} Y = 2K^2\, [\, ( N - ( d_1 + d_2 ) ) + j\,( d_4 - d_3 ) \,].$$

Note: It should be noted that the above inner product is a complex number with even integer real and imaginary parts. It is a real number if and only if $d_3 = d_4$. The above discussion naturally leads to the study of structured matrices whose elements are from the set D = { 1+j1, 1-j1, -1+j1, -1-j1 }.

• NOVEL COMPLEX HADAMARD MATRICES :

Remark: In the literature, there is an effort to study "complex Hadamard matrices" in which the elements are $p$-th roots of unity. Motivated by the definition of the complex hypercube, we are naturally led to the study of "novel complex Hadamard matrices" whose rows and columns are corners of the complex hypercube. Formally, we have the following definition.

Definition: A novel complex Hadamard matrix of order m, denoted by $CH_m$, is an $m \times m$ matrix whose elements belong to the set D such that

$$CH_m^{*}\, CH_m = 2m\, I_m,$$

where $I_m$ is the $m \times m$ identity matrix and $CH_m^{*}$ is the conjugate transpose of $CH_m$. This definition is equivalent to saying that any two rows of $CH_m$ are unitary. In view of the above definition and Lemma 5, we have the following interesting result.

Lemma 6: The above novel complex Hadamard matrices of odd order do not exist.

Proof: Follows directly from Lemma 5. Q.E.D.

• Synthesis of Complex Hopfield Neural Networks: One way of synthesizing unitary vectors X, Y lying on the complex hypercube is based on the following idea:

X = A + jA ; Y = B + jB (or) X = A - jA ; Y = B - jB,

with the condition that A, B are real valued orthogonal vectors lying on the unit hypercube. Suppose, using a suitable Hadamard matrix, we are able to determine S vectors on the real unit hypercube that are pairwise orthogonal to each other. Let them be labelled as $Y_1, Y_2, \ldots, Y_S$. Based on the above discussion, we now synthesize unitary vectors on the complex hypercube in the following manner. Define

$$X_k = Y_k + j\, Y_k \quad \text{for } 1 \le k \le S.$$

We synthesize a complex valued Hermitian synaptic weight matrix from the patterns to be stored in such a way that the network so obtained has these patterns as stable states. The weight matrix is as follows:

$$W = \sum_{j=1}^{S} \big( X_j X_j^{*} - 2I \big),$$

where S is the number of patterns to be stored, I is the identity matrix and $X_1, X_2, \ldots, X_S$ are the complex patterns (unitary to each other and lying on the complex hypercube) to be stored. Thus, the synthesis of the Hermitian synaptic weight matrix requires the determination of vectors $X_1, X_2, \ldots, X_S$ on the complex hypercube that are unitary to each other.
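A minimal synthesis sketch (Python/NumPy; s = 3 and k = 3 are illustrative choices) combining the real Hadamard rows with $X_k = Y_k + jY_k$ and the Hermitian weight matrix above; the assertion checks stability under the complex signum function.

```python
import numpy as np

def sylvester_hadamard(k):
    H = np.array([[1]])
    for _ in range(k):
        H = np.block([[H, H], [H, -H]])
    return H

def csign(z):
    sgn = lambda x: np.where(x >= 0, 1, -1)
    return sgn(z.real) + 1j * sgn(z.imag)

s, k = 3, 3
Y = sylvester_hadamard(k)[:s]            # pairwise orthogonal real corners
X = Y + 1j * Y                           # unitary vectors on the complex hypercube

N = X.shape[1]
W = sum(np.outer(x, x.conj()) - 2 * np.eye(N) for x in X)   # Hermitian, zero diagonal

assert all(np.array_equal(x, csign(W @ x)) for x in X)      # stored patterns stable
```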

• Novel Research Directions related to Novel Complex Hadamard Matrices:

• A. Complex Walsh Functions: The rows of the complex Hadamard matrices (discussed above) are complex Walsh functions (they could also be called complex Rademacher functions). Any pair of such functions f(t), g(t) is orthogonal on a finite domain, say [0,1], in the sense that

$$\langle f(t), g(t) \rangle = \int_0^1 f(t)\, g^{*}(t)\, dt = 0,$$

where $g^{*}(t)$ is the complex conjugate of g(t). In a similar manner, complex Haar functions are defined and studied [14, RaM]. Also, one can consider complex valued functions of a complex variable, i.e. J(z), K(z), where z is a complex variable. The range of the functions { J(.), K(.) } is the set { 1+j1, 1-j1, -1+j1, -1-j1 }. Specifically, one can consider the domain of the functions J(.), K(.) to be the set D, i.e. D = { z : z = a + jb, where $a \in [0,1]$ and $b \in [0,1]$ }. Orthogonality of such functions is defined in a similar manner. Other interesting complex valued orthogonal functions are discussed in [14, RaM].

• B. Complex Hadamard Transform: From the discussion above and the definition of the real valued Hadamard transform, it is clear that the complex Hadamard transform is a complex Hadamard matrix scaled by a normalization factor, i.e.

$$NCH_m = \frac{1}{\sqrt{2m}}\, CH_m.$$

Thus, it is clear that $NCH_m$ is a unitary matrix. If such a matrix is constructed using the generalized Sylvester construction, it is also a Hermitian matrix. The order of such a matrix is a power of 2.

Definition: The complex Hadamard transform of a complex valued vector $\bar{U}$ is obtained in the following manner: $\bar{V} = NCH_m\, \bar{U}$. Also, it is easy to see that (since $NCH_m$ is unitary and Hermitian) the inverse complex Hadamard transform of $\bar{V}$ is given by $\bar{U} = NCH_m\, \bar{V}$.

• In view of the above discussion, it is easy to see that the complex Hadamard matrix $NCH_m$ (i.e. also the complex Hadamard transform) can be decomposed in the following manner:

$$NCH_m = G + j\, H,$$

where G and H are real Hadamard matrices ( i.e. real Hadamard transforms ).

• Consider two real input signals, i.e. two real valued vectors $\bar{A}, \bar{B}$. Let us form a complex signal $\bar{U}$ in the following manner: $\bar{U} = \bar{A} + j\bar{B}$. Now let us compute the complex Hadamard transform of the complex signal $\bar{U}$. It is easy to see that

$$\bar{V} = NCH_m\, \bar{U} = ( G\bar{A} - H\bar{B} ) + j\,( G\bar{B} + H\bar{A} ).$$

In a similar manner, form another complex signal: $\bar{W} = \bar{B} + j\bar{A}$. The complex Hadamard transform of the complex signal $\bar{W}$ is given by

$$\bar{Z} = NCH_m\, \bar{W} = ( G\bar{B} - H\bar{A} ) + j\,( G\bar{A} + H\bar{B} ).$$

Thus, the complex valued vectors $\bar{V}, \bar{Z}$ can be decomposed in the following manner:

$$\bar{V} = \mathrm{Re}( \bar{V} ) + j\, \mathrm{Im}( \bar{V} ), \qquad \bar{Z} = \mathrm{Re}( \bar{Z} ) + j\, \mathrm{Im}( \bar{Z} ).$$

Using the above equations, we get the following results:

(I) $G\bar{A} = \frac{1}{2}\, [\, \mathrm{Re}( \bar{V} ) + \mathrm{Im}( \bar{Z} ) \,]$.

(II) $H\bar{B} = \frac{1}{2}\, [\, \mathrm{Im}( \bar{Z} ) - \mathrm{Re}( \bar{V} ) \,]$.

(III) $G\bar{B} = \frac{1}{2}\, [\, \mathrm{Re}( \bar{Z} ) + \mathrm{Im}( \bar{V} ) \,]$.

(IV) $H\bar{A} = \frac{1}{2}\, [\, \mathrm{Im}( \bar{V} ) - \mathrm{Re}( \bar{Z} ) \,]$.

• Thus, using the complex Hadamard transforms of the complex signals $\bar{U}, \bar{W}$ (formed from the real signals $\bar{A}, \bar{B}$), we are able to compute the real Hadamard transforms $G\bar{A}$, $H\bar{B}$, $G\bar{B}$ and $H\bar{A}$; these identities are verified in the sketch below. The detailed properties and applications of the complex Hadamard transform are discussed in [14, RaM]. It should be noted that the real valued Hadamard transform finds many applications in research areas such as digital image processing [13].
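Results (I)-(IV) can be verified numerically. One concrete novel complex Hadamard matrix (an assumption chosen for illustration, not the paper's own construction) is $CH_m = (1+j) H_m$, since $CH_m^{*} CH_m = (1-j)(1+j) H_m^T H_m = 2m\, I_m$; for this choice $G = H = H_m / \sqrt{2m}$. Python/NumPy sketch:

```python
import numpy as np

def sylvester_hadamard(k):
    H = np.array([[1.0]])
    for _ in range(k):
        H = np.block([[H, H], [H, -H]])
    return H

m = 8
G = Hr = sylvester_hadamard(3) / np.sqrt(2 * m)   # CH_m = (1+j) H_m gives G = H
NCH = G + 1j * Hr                                 # normalized complex Hadamard transform

rng = np.random.default_rng(2)
A, B = rng.standard_normal(m), rng.standard_normal(m)
V = NCH @ (A + 1j * B)                            # transform of U = A + jB
Z = NCH @ (B + 1j * A)                            # transform of W = B + jA

assert np.allclose(G @ A, 0.5 * (V.real + Z.imag))    # (I)
assert np.allclose(Hr @ B, 0.5 * (Z.imag - V.real))   # (II)
assert np.allclose(G @ B, 0.5 * (Z.real + V.imag))    # (III)
assert np.allclose(Hr @ A, 0.5 * (V.imag - Z.real))   # (IV)
```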

• It is well known that every finite dimensional linear operator has a matrix representation, i.e. if an M-dimensional function $\bar{f}( X_1, X_2, \ldots, X_N )$ of N variables satisfies the superposition property, then it can be represented in the following form:

$$\bar{f}( X_1, X_2, \ldots, X_N ) = G\, \bar{X}, \quad \text{where G is an } M \times N \text{ matrix and } \bar{X} = [\, X_1, X_2, \ldots, X_N \,]^T.$$

Using the principle of threshold decomposition, the input $\bar{X}$ can be decomposed into binary signals. Suppose the components of the vector $\bar{X}$ can assume, say, at most L integer values. Then $\bar{X}$ can be decomposed in the following manner:

$$\bar{X} = \bar{Y}_1 + \bar{Y}_2 + \cdots + \bar{Y}_L,$$

where the $\bar{Y}_i$ are binary/Boolean vectors. Thus

$$\bar{f}( X_1, X_2, \ldots, X_N ) = G\bar{X} = G\bar{Y}_1 + G\bar{Y}_2 + \cdots + G\bar{Y}_L.$$

Hence such linear functions can be computed using Threshold decomposition. If G is a Hadamard matrix, then the computation on the right hand side involves only additions and subtractions.
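A small sketch of threshold decomposition (Python/NumPy; it assumes non-negative integer components, an illustrative simplification):

```python
import numpy as np

def threshold_decompose(x, L):
    # Y_i(j) = 1 if x(j) >= i, so that x = Y_1 + Y_2 + ... + Y_L.
    return [(x >= i).astype(int) for i in range(1, L + 1)]

G = np.array([[1, 1],
              [1, -1]])                  # a Hadamard matrix: only +/- of entries
x = np.array([3, 1])
layers = threshold_decompose(x, L=3)
assert np.array_equal(sum(layers), x)    # the binary layers reconstruct x

# Superposition: G x equals the sum of the transforms of the binary layers.
assert np.array_equal(G @ x, sum(G @ y for y in layers))
```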

Remark: It should be noted that finite impulse response linear filters involve a filtering operation on signal values in a finite length window. Thus, such linear filters can be implemented using threshold decomposition.

Note: It is well known that the concept of complex numbers is generalized to arrive at the concepts of quaternions and octonions. Thus, all the above results have natural generalizations for quaternions and octonions. Specifically, neural networks based on quaternions and octonions (and also Clifford algebras) have already been proposed and studied by researchers. Therefore, the generalization of the above results (e.g. unitary quaternions) naturally enables us to address the existence and synthesis of such associative memories in various dimensions.

5. Synthesis of Real and Complex Hopfield Type Associative Memories:

In view of the fact that the synaptic weight matrix can be an arbitrary symmetric matrix, we can generalize the Hopfield construction (of associative memory) in the following manner. It should be noted that the associative memory synthesized by Hopfield highly constrains the eigenvalues (and consequently the stable/anti-stable values). We do not have those constraints in the following synthesis procedure. Let $\{ \lambda_j \}_{j=1}^{S}$ be the desired positive eigenvalues (with $\lambda_j$ being the desired stable value) and let $\{ X_j \}_{j=1}^{S}$ be the desired stable states. Then it is easy to see that the following symmetric matrix constitutes the desired synaptic weight matrix of the Hopfield neural network (using the spectral representation of the symmetric matrix W):

$$W = \sum_{j=1}^{S} \lambda_j\, X_j X_j^T.$$

Once again, in this case N must be even. Also, it is clear that the synaptic weight matrix is positive semidefinite (positive definite when S = N).

Note: It should be noted that the $X_j$'s are mutually at a Hamming distance of $\frac{N}{2}$ from each other. Thus, the minimum distance between any two of them is $\frac{N}{2}$.

In the same spirit as above, we now synthesize a synaptic weight matrix with desired stable/anti-stable values and the corresponding stable/anti-stable states. Let $\{ X_j \}_{j=1}^{S}$ be the desired orthogonal stable states and $\{ Y_j \}_{j=1}^{L}$ be the desired orthogonal anti-stable states. Let the desired stable states be eigenvectors corresponding to positive eigenvalues and let the desired anti-stable states be eigenvectors corresponding to negative eigenvalues. The spectral representation of the desired synaptic weight matrix is given by

$$W = \sum_{j=1}^{S} \frac{\lambda_j}{N}\, X_j X_j^T \;-\; \sum_{j=1}^{L} \frac{\mu_j}{N}\, Y_j Y_j^T,$$

where the $\lambda_j$'s are the desired positive eigenvalues and the $-\mu_j$'s are the desired negative eigenvalues. Hence, the above construction provides a method of arriving at a desired energy landscape (with orthogonal stable/anti-stable states and the corresponding positive/negative energy values).
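A sketch of this spectral synthesis (Python/NumPy; the eigenvalue choices are arbitrary illustrative values, with S = 3 stable and L = 5 anti-stable states taken from Hadamard rows in dimension N = 8):

```python
import numpy as np

def sylvester_hadamard(k):
    H = np.array([[1.0]])
    for _ in range(k):
        H = np.block([[H, H], [H, -H]])
    return H

N = 8
rows = sylvester_hadamard(3)             # mutually orthogonal corners of the hypercube
X, Y = rows[:3], rows[3:]                # desired stable / anti-stable states
lam = [5.0, 4.0, 3.0]                    # desired positive eigenvalues
mu = [2.0, 2.0, 1.0, 1.0, 1.0]           # magnitudes of desired negative eigenvalues

W = sum(l / N * np.outer(x, x) for l, x in zip(lam, X)) \
  - sum(u / N * np.outer(y, y) for u, y in zip(mu, Y))

sign = lambda v: np.where(v >= 0, 1, -1)
assert all(np.array_equal(x, sign(W @ x)) for x in X)    # stable: x = Sign(W x)
assert all(np.array_equal(y, -sign(W @ y)) for y in Y)   # anti-stable: y = -Sign(W y)
```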

Remark: It can easily be shown that Trace(W) is a constant contribution (DC value) to the value of the quadratic form $X^T W X$ at all corners X of the unit hypercube. Thus, the location of the stable/anti-stable states is invariant under modification of the Trace(W) value. Thus, from the standpoint of the location/computation of stable/anti-stable states, Trace(W) can be set to zero. Let $\tilde{W}$ be the matrix obtained from W by setting all the diagonal elements of W to zero. It can easily be seen that $\tilde{W}$ is an indefinite matrix. In view of Lemma 2, when the dimension of the unit hypercube is odd, there are only the following two possibilities:

CASE A: Only one corner of the hypercube is an eigenvector in the spectral representation of W. Thus, the desired corner is a stable state or anti-stable state.

CASE B: None of the corners of the hypercube is an eigenvector in the spectral representation of W.

• In case A, if the corresponding eigenvalue is positive, then the corner is a stable state, else, the corner is an anti-stable state.

• In case B, if the spectral representation of W is of the following form (a rank one matrix)

$$W = \lambda\, f_i\, f_i^T,$$

with all the other eigenvalues being zero, then

$$F = \mathrm{Sign}( f_i )$$

is the global optimum stable state. The logical reasoning is fairly simple and is avoided for brevity.

• Synthesis of the Novel Complex Hopfield Type Associative Memory: As in the case of the real valued Hopfield type associative memory, we now synthesize the synaptic weight matrix of the complex Hopfield associative memory considered in this research paper. It is based on the spectral representation of a Hermitian matrix, i.e.

$$W = \sum_{j=1}^{S} \frac{\lambda_j}{2N}\, X_j X_j^{*} \;-\; \sum_{j=1}^{L} \frac{\mu_j}{2N}\, Y_j Y_j^{*}.$$

As in the real case, such an associative memory exists only in even dimensions. Once again, as in the real case, when the dimension is odd, there are only two possible cases:

CASE A: Only one corner of the complex hypercube is an eigenvector in the spectral representation of W. Thus, the desired corner is a stable state or anti-stable state.

CASE B: None of the corners of the hypercube is an eigenvector in the spectral representation of W.

Note: The synthesis of real as well as complex Hopfield type associative memories can be done using the corresponding real/complex Hadamard matrices (based on, say, the Sylvester construction). A detailed specification of the algorithm is avoided for brevity.

6. Conclusion: In this research paper, it is reasoned that the associative memory synthesized by Hopfield does not exist when the dimension of the associated synaptic weight matrix is odd. Using Hadamard matrices, an algorithm to synthesize a Hopfield neural network in even dimensions is proposed. The existence and synthesis of a complex Hopfield neural network (on the complex hypercube) is discussed. Also, the synthesis of real and complex Hopfield type associative memories is discussed.

References:

[1] [Hop] J. J. Hopfield, "Neural Networks and Physical Systems with Emergent Collective Computational Abilities," Proceedings of the National Academy of Sciences, USA, Vol. 79, pp. 2554-2558, 1982.

[2] [MGZ] Mehmet Kerem Muezzinoglu, Cuneyt Guzelis and Jacek M. Zurada, "A New Design Method for the Complex-valued Multistate Hopfield Associative Memory," IEEE Transactions on Neural Networks, Vol. 14, No. 4, July 2003.

[3] [RaP] G. Rama Murthy and D. Praveen, “Complex-valued Neural Associative Memory on the Complex hypercube,” IEEE Conference on Cybernetics and Intelligent Systems, Singapore, 1-3 December, 2004.

[4] [Rama1] G. Rama Murthy, "Optimal Signal Design for Magnetic and Optical Recording Channels," Bellcore Technical Memorandum, TM-NWT-018026, April 1, 1991.

[5] [RaN] G. Rama Murthy and B. Nischal, "Hopfield-Amari Neural Network: Minimization of Quadratic Forms," The 6th International Conference on Soft Computing and Intelligent Systems, Kobe Convention Center (Kobe Portopia Hotel), November 20-24, 2012, Kobe, Japan.

[6] Akira Hirose, "Complex Valued Neural Networks: Theories and Applications," World Scientific Publishing Co., November 2003.

[7] G. Rama Murthy and D. Praveen, "A Novel Associative Memory on the Complex Hypercube Lattice," 16th European Symposium on Artificial Neural Networks, April 2008.

[8] G. Rama Murthy, "Some Novel Real/Complex Valued Neural Network Models," Advances in Soft Computing, Springer Series on Computational Intelligence: Theory and Applications, Proceedings of the 9th Fuzzy Days, September 18-20, 2006, Dortmund, Germany.

[9] G. Jagadeesh, D. Praveen and G. Rama Murthy, "Heteroassociative Memories on the Complex Hypercube," Proceedings of the 20th IJCAI Workshop on Complex Valued Neural Networks, January 6-12, 2007.

[10] V. Sree Hari Rao and G. Rama Murthy, “Global Dynamics of a class of Complex Valued Neural Networks,” Special issue on CVNNS of International Journal of Neural Systems, April 2008.

[11] G. Rama Murthy, “Infinite Population, Complex Valued State Neural Network on the Complex Hypercube,” Proceedings of International Conference on Cognitive Science, 2004.

[12] G. Rama Murthy, “Multidimensional Neural Networks-Unified Theory,” New Age International Publishers. New Delhi, 2007.

[13] A. K. Jain, "Fundamentals of Digital Image Processing," Prentice Hall of India, New Delhi, 2009.

[14] [RaM] G. Rama Murthy and Moncef Gabbouj, "Complex Hadamard Transform and Applications," manuscript in preparation.