
Article

Reconstructing Damaged Complex Networks Based on Neural Networks

Ye Hoon Lee 1 and Insoo Sohn 2,*

1 Department of Electronic and IT Media Engineering, Seoul National University of Science and Technology, Seoul 01811, Korea; [email protected]
2 Division of Electronics and Electrical Engineering, Dongguk University, Seoul 04620, Korea
* Correspondence: [email protected]; Tel.: +82-2-2260-8604

Received: 2 October 2017; Accepted: 9 December 2017; Published: 9 December 2017

Abstract: Despite recent progress in the study of complex systems, the reconstruction of networks damaged by random and targeted attacks has not been addressed before. In this paper, we formulate the network reconstruction problem as the identification of a network structure from much reduced link information. Furthermore, a novel method based on the multilayer perceptron neural network is proposed as a solution to the network reconstruction problem. Simulation results demonstrate that the proposed scheme achieves very high reconstruction accuracy in the small-world network model and robust performance in the scale-free network model.

Keywords: network reconstruction; neural networks; small-world networks; scale-free networks

1. Introduction

Complex networks have received growing interest from various disciplines as a way to model and study the interactions between nodes within a modeled network [1–3]. One of the important problems actively studied in the area of complex networks is the robustness of a network under random failure of nodes and intentional attacks on the network. For example, it has been found that scale-free networks are more robust than random networks against random removal of nodes, but are more sensitive to targeted attacks [4–6]. Furthermore, many approaches have been proposed to optimize networks against random failures and intentional attacks, compared to conventional complex networks [7–10]. However, these approaches have mainly concentrated on designing the network topology, based on various optimization techniques, to minimize the damage to the network. So far, no work has been reported on techniques to repair and recover the network topology after the network has been damaged. Numerous solutions have been proposed for the reconstruction of spreading networks based on spreading data [11–13], but the problem of reconstructing networks damaged by random and targeted attacks has not been addressed before.

Artificial neural networks (NNs) have been applied to solve various problems in complex systems due to the powerful generalization abilities of NNs [14]. Some of the applications where NNs have been successfully used are radar waveform recognition [15], image recognition [16,17], indoor localization [18,19], and peak-to-average power reduction [20,21].

In this paper, we propose a novel network reconstruction method based on the NN technique. To the best of our knowledge, this work is the first attempt to recover the network topology after the network has been damaged and also the first attempt to apply the NN technique to complex network optimization. We formulate the network reconstruction problem as the identification of a network structure based on the much reduced amount of link information contained in the topology of the damaged network. The problem is especially challenging due to (1) the very large number of possible network configurations, on the order of 2^(N^2), where N is the number of nodes, and (2) the very small amount of node interaction information due to node removals.

Symmetry 2017, 9, 310; doi:10.3390/sym9120310 www.mdpi.com/journal/symmetry

We simplify the problem with the following assumptions: (1) the average number of real connections in a network is much smaller than the number of all possible link configurations, and (2) link information of M undamaged networks is available. Based on these assumptions, we chose the multiple-layer perceptron neural network (MLPNN), one of the most frequently used NN techniques, as the basis of our method. We evaluate the performance of the proposed method through simulations on two classical complex network models: (1) the small-world network and (2) the scale-free network, generated by the Watts and Strogatz model and the Barabási and Albert model, respectively.

The rest of the paper is organized as follows. Section 2 describes the small-world network model and the scale-free network model, followed by the network damage model. In Section 3, we propose the reconstruction method based on the NN technique. In Section 4, we present the numerical results of the proposed method, and conclusions are given in Section 5.

2. Model

2.1. Small-World Network

The small-world network is an important network model with low average path length and high clustering coefficient [22–24]. Small-world networks have a homogeneous network topology with a degree distribution approximated by the Poisson distribution. A small-world network is created based on a regular lattice such as a ring of N nodes, where each node is connected to its K nearest nodes. Next, the links are randomly rewired to one of the N nodes in the network with probability p. The rewiring process is repeated for all the nodes n = 1 ... N. By controlling the rewiring probability p, the network interpolates between a regular lattice (p = 0) and a random network (p = 1). Figure 1 shows the detailed algorithm for small-world network construction in pseudocode format.

1: Initialization:
2:   Set the total number of nodes N
3:   Set the rewiring probability p
4:   Set the node degree K
5: end initialization
6:
7: for each node n ∈ (1, N)
8:   Connect to nearest K nodes.
9: end for
10:
11: for each node n ∈ (1, N)
12:   for each link k ∈ (1, K)
13:     Generate a random number r ∈ (0, 1)
14:     if r < p
15:       Select a node m ∈ (1, N) randomly
16:       Rewire to node m
17:     end if
18:   end for
19: end for

Figure 1. Small-world network algorithm.
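To make the construction concrete, the following is a minimal Python sketch of the algorithm in Figure 1. It assumes an even node degree K, with K/2 neighbors connected on each side of the ring, and represents the network as a plain list of undirected edges; the function name and the seed argument are illustrative, not from the paper.

import random

def small_world_network(N, K, p, seed=None):
    """Small-world construction per Figure 1: ring lattice plus rewiring.

    Assumes K is even, with K/2 neighbors on each side of the ring.
    Returns a sorted list of undirected edges (i, j) with i < j.
    """
    rng = random.Random(seed)
    edges = set()
    for n in range(N):                        # regular ring lattice
        for k in range(1, K // 2 + 1):
            edges.add(tuple(sorted((n, (n + k) % N))))
    for edge in sorted(edges):                # rewire each link with probability p
        if rng.random() < p:
            n = edge[0]
            m = rng.randrange(N)
            new_edge = tuple(sorted((n, m)))
            if m != n and new_edge not in edges:
                edges.remove(edge)
                edges.add(new_edge)
    return sorted(edges)

For example, small_world_network(20, 4, 0.15) yields 40 edges, most on the ring and a few rewired shortcuts, which is exactly what makes the path length short while the clustering stays high.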

2.2. Scale-Free Network

Scale-free networks, such as the Barabási and Albert (BA) network, are evolving networks with a degree distribution following a power-law model [4–6]. Scale-free networks consist of a small number of nodes with a very high degree of connections and the rest of the nodes with low-degree connections. A scale-free network starts the evolution process with a small number of m0 nodes. Next, a new node is introduced to the network and attaches to m existing nodes with high degree k.

The process consisting of new node introduction and preferential attachment is repeated until a network with N = t + m0 nodes has been constructed. Figure 2 shows the detailed algorithm for scale-free network construction in pseudocode format.

1: Initialization:
2:   Set the initial number of nodes m0
3:   Set the total number of nodes N
4:   Set the node degree K
5: end initialization
6:

7: for each node n ∈ (1, m0)
8:   Connect to nearest K nodes.
9: end for
10:
11: for each new node
12:   Calculate the maximum degree probability as follows:

Π(k(h)) = k(h) / ∑_m k(m),

where (k(h)) is the probability of selecting node h, k(h) is the degree of node h, and k(m) is the total number of links in the network. m 13: for each link k  (1, K) 14: Connect to node m with max((k(m))) 15: end for 16: end for

Figure 2. Scale-free network algorithm.
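Similarly, here is a minimal Python sketch of the growth process in Figure 2. It assumes that the m0 seed nodes (m0 >= 2) start fully interconnected and that the degree-proportional rule Π(k(h)) is realized by roulette-wheel sampling; both details are left open by the pseudocode, so this is one possible reading, with illustrative names.

import random

def scale_free_network(N, m0, K, seed=None):
    """Scale-free (BA) growth per Figure 2: preferential attachment.

    Each new node attaches K links to existing nodes, selecting a node h
    with probability k(h) / sum_m k(m) via roulette-wheel sampling.
    """
    rng = random.Random(seed)
    edges = set()
    degree = [0] * N
    for a in range(m0):                       # fully connected seed network
        for b in range(a + 1, m0):
            edges.add((a, b))
            degree[a] += 1
            degree[b] += 1
    for new in range(m0, N):                  # growth: one node per step
        targets = set()
        while len(targets) < min(K, new):
            r = rng.uniform(0, sum(degree[:new]))
            acc = 0
            for node in range(new):           # roulette wheel over degrees
                acc += degree[node]
                if acc >= r:
                    targets.add(node)
                    break
        for t in targets:
            edges.add((t, new))               # t < new always holds here
            degree[t] += 1
            degree[new] += 1
    return sorted(edges)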

2.3. Network Damage Model

We simulate the damage process on complex networks by considering the random attack model. In the random attack model, nodes are randomly selected and removed. Note that when a node is removed, all the links connected to that node are also removed [25]. To evaluate the performance of the proposed reconstruction method, the difference in the number of links between the original network and the reconstructed network is used, represented as the probability of reconstruction error, P_RE, defined as follows:

P_RE = N_L,diff / N_L,   (1)

where N_L is the total number of existing links in the complex network that was damaged by the random attack and N_L,diff is the total number of links in the reconstructed network that differ from the links in the original network before any node removal. For example, let us assume that the number of nodes N = 4 and the node pair set in the original network is Eo = {(1, 2), (1, 4), (2, 3), (3, 4)}. If the reconstructed network has the node pair set Er = {(1, 2), (1, 3), (1, 4), (2, 3)}, then N_L,diff = 2 and N_L = 4, giving us P_RE = 0.5. Note that estimated links in the reconstructed network that were not in the original network, in addition to links that were not reproduced, are all counted as errors.
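Since N_L,diff counts both missing and spurious links, P_RE is simply the size of the symmetric difference between the two edge sets divided by N_L. A short Python sketch, reproducing the worked example above (the function name is illustrative):

def reconstruction_error(original_edges, reconstructed_edges):
    """P_RE from Equation (1): N_L,diff is the symmetric difference
    (missing plus spurious links), N_L the original link count."""
    original = set(original_edges)
    reconstructed = set(reconstructed_edges)
    return len(original ^ reconstructed) / len(original)

# Worked example from the text: N_L,diff = 2, N_L = 4, so P_RE = 0.5.
Eo = {(1, 2), (1, 4), (2, 3), (3, 4)}
Er = {(1, 2), (1, 3), (1, 4), (2, 3)}
print(reconstruction_error(Eo, Er))  # 0.5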

3. Reconstruction Method

3.1. Neural Network Model


Neural networks are important tools used for system modeling with good generalization properties. We propose to use an MLPNN employing backpropagation-based supervised learning in this work. An MLPNN has three types of layers: an input layer, an output layer, and multiple hidden layers in between the input and output layers, as shown in Figure 3. The input layer receives input data and passes it to the neurons, or units, in the hidden layer. Each hidden layer unit applies a nonlinear activation function to the weighted sum of the inputs from the previous layer. The output of the jth unit in the hidden layer, O(j), can be represented as [26]

A(j) = ∑_{i=1}^{m} w(j, i) O(i) − U(j),   (2)

O(j) = φ(A(j)) = 1 / (1 + e^{−A(j)}),   (3)

where A(j) is the activation input to the jth hidden layer unit, w(j, i) is the weight from unit i to unit j, O(i) is the output of unit i, U(j) is the threshold of unit j, and φ(·) is a nonlinear activation function such as the sigmoid function shown in Equation (3), the hardlimit function, the radial basis function, or the triangular function. The number of units in the output layer is equal to the dimension of the desired output data format. The weights on the network connections are adjusted using training input data and desired output data until the mean square error (MSE) between them is minimized. To implement the MLPNN, the feedforwardnet function provided by the MATLAB Neural Network Toolbox was utilized. Additionally, the MLPNN weights were trained using the Levenberg-Marquardt algorithm [27].


Figure 3. Multiple-layer perceptron neural network (MLPNN) model.
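To make Equations (2) and (3) concrete, the following is a small Python sketch of a single layer's forward pass, assuming the sigmoid of Equation (3) as the activation function; the toy weights and thresholds are placeholders, not values from the paper.

import math

def layer_forward(weights, thresholds, inputs):
    """Forward pass of one MLPNN layer per Equations (2) and (3):
    A(j) = sum_i w(j, i) O(i) - U(j), followed by the sigmoid."""
    outputs = []
    for w_row, u in zip(weights, thresholds):
        a = sum(w * o for w, o in zip(w_row, inputs)) - u   # Equation (2)
        outputs.append(1.0 / (1.0 + math.exp(-a)))          # Equation (3)
    return outputs

# Toy values: a layer with two inputs and two hidden units.
print(layer_forward([[0.5, -0.3], [0.8, 0.1]], [0.0, 0.2], [1.0, 0.0]))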

3.2. Neural Network Based Method

To reconstruct the network topology of a complex network damaged by a random attack, we apply the MLPNN as a solution to the complex network reconstruction problem. One of the key design issues in the MLPNN is the training process of the weights on the network connections such that the MLPNN is successfully configured to reconstruct damaged networks. Usually, a complex network topology is represented by an adjacency matrix describing the interactions of all the nodes in the network. However, an adjacency matrix is inappropriate as training input data for the MLPNN training process due to its complexity. Thus, we define a link list (LL) that contains binary elements representing the existence of node pairs among all N(N − 1)/2 possible node pairs, where N is the number of nodes in the network. For example, for a network with N = 4, the possible node pair set is E = {(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)}. If a network to be reconstructed has four nodes and four links given by E = {(1, 2), (1, 4), (2, 3), (3, 4)}, then LL = [1 0 1 1 0 1], where the 1s represent the existence of the four specific links among the six possible ones. To obtain the training input data, M networks are damaged by randomly removing f percent of the N nodes in each network. As shown in Figure 4, the proposed method


consists of the following modules: an adjacency matrix of damaged network input module, an adjacency matrix to link list transformation module, an MLPNN module, and a network index to adjacency matrix transformation module. In the second module, the adjacency matrices of the damaged networks are pre-processed into LLs that can be entered into the MLPNN. Note that the input dimension of the MLPNN is equal to the dimension of the training input data format; thus, the input dimension of the MLPNN is equal to N(N − 1)/2. As for the desired output data, which is the output of the MLPNN module, binary sequence numbers are used to represent the indices of the original complex networks that have been damaged. The number of MLPNN outputs depends on the number of training networks used to train the MLPNN; for example, eight binary outputs are sufficient to represent 256 complex networks. Based on the training input data and desired output data, representing the network topologies of different complex networks, the goal of the MLPNN is to identify, reconstruct, and produce the node pair information of the original network, among the numerous networks used to train the neural network. The detailed training algorithm of the MLPNN is described in Figure 5.


Figure 4. MLPNN based reconstruction method.
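The adjacency matrix to link list transformation can be stated compactly; a minimal Python sketch, reproducing the N = 4 example above (the function name is illustrative):

from itertools import combinations

def adjacency_to_link_list(adjacency):
    """Flatten an N x N adjacency matrix into the binary link list (LL),
    one entry per node pair (i, j), i < j, in lexicographic order, so the
    list has length N(N - 1)/2 as required by the MLPNN input layer."""
    n = len(adjacency)
    return [adjacency[i][j] for i, j in combinations(range(n), 2)]

# The example from the text: N = 4 with links {(1,2), (1,4), (2,3), (3,4)}.
A = [[0, 1, 0, 1],
     [1, 0, 1, 0],
     [0, 1, 0, 1],
     [1, 0, 1, 0]]
print(adjacency_to_link_list(A))  # [1, 0, 1, 1, 0, 1]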

1: Initialization:
2:   Generate a set of M complex networks.
3:   Set number of hidden layers.
4:   Set number of neurons in each hidden layer.
5:   Define activation function in hidden layers.
6:   Set number of training iterations IterationNum
7: end Initialization
8:
9: for i = 1 : IterationNum
10:   Randomly select a complex network m ∈ (1, M).
11:   Apply random attack procedure to the complex network m.
12:   Obtain adjacency matrix for the damaged complex network m.
13:   Transform adjacency matrix into LL.
14:   InputTrainSeq = LL(m)
15:   DesOutputTrainSeq = dec2bin(m).
16: end for
17:
18: while estimation error > threshold do
19:   for i = 1 : IterationNum
20:     MLPNN_OUT = MLPNN(InputTrainSeq).
21:     Compute Mean Square Error(MLPNN_OUT, DesOutputTrainSeq).
22:     Update weights w in each layer using LMS algorithm.
23:   end for
24: end while

Figure 5. MLPNN training algorithm.
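As a rough, self-contained stand-in for the training loop of Figure 5, the sketch below damages each network's LL with a random attack and trains a classifier to output the network index. The paper trains a MATLAB feedforwardnet with the Levenberg-Marquardt algorithm; here scikit-learn's MLPClassifier, with the two hidden layers (64 and 4 units) of Section 4.1 and its default solver, is assumed instead, and the helper names are illustrative.

import random
from itertools import combinations

import numpy as np
from sklearn.neural_network import MLPClassifier

def damage_link_list(ll, n_nodes, f, rng):
    """Random attack: zero every LL entry that touches a removed node.
    f is the fraction (not percent) of nodes removed."""
    removed = set(rng.sample(range(n_nodes), int(f * n_nodes)))
    pairs = combinations(range(n_nodes), 2)
    return [0 if i in removed or j in removed else bit
            for bit, (i, j) in zip(ll, pairs)]

def train_reconstructor(link_lists, n_nodes, f, samples_per_net=20, seed=0):
    """Training loop in the spirit of Figure 5: damaged LL in, index out."""
    rng = random.Random(seed)
    X, y = [], []
    for index, ll in enumerate(link_lists):
        for _ in range(samples_per_net):
            X.append(damage_link_list(ll, n_nodes, f, rng))
            y.append(index)                  # network index as the target
    model = MLPClassifier(hidden_layer_sizes=(64, 4), max_iter=2000)
    model.fit(np.array(X), np.array(y))
    return model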


4. Performance Evaluations

4.1. Simulation Environment

We study and evaluate the proposed reconstruction method based on the probability of reconstruction error P_RE described in Section 2. For the network damage model, we assume a random attack process, where nodes are randomly removed together with their attached links. The MLPNN used in our method has two hidden layers, with 64 neurons in the first layer and four neurons in the second layer. The nonlinear activation function in the hidden layers is chosen to be the triangular activation function. The number of inputs to the MLPNN depends on the number of nodes in the network. To train and test the MLPNN using complex networks with N = 10, N = 30, and N = 50, the number of inputs is set equal to the number of possible node pair combinations, which is 45, 435, and 1225, respectively. As for the number of outputs, eight are chosen to represent a maximum of 256 complex networks. The training input and output data patterns are randomly chosen from the LLs of M damaged complex networks, with different percentages f of failed nodes out of the total N nodes, and the corresponding indices of the complex networks.

4.2. Small-World Network

To evaluate the performance of the proposed method in the small-world network model, the network is implemented based on the algorithm described in Figure 1. Figure 6 and Table 1 show the reconstruction error probability as a function of the percentage of random node failures f. Furthermore, we study the influence of the number of nodes on the network reconstruction performance with N = 10, N = 30, and N = 50. The initial degree K of the network is set to two and the links are randomly rewired with probability p = 0.15. One can see that with the increase in the number of node failures, the reconstruction performance deteriorates for all different N, but for f = 0.1, P_RE is less than 0.35 and for f = 0.5, P_RE is less than 0.5. In other words, the proposed method can reconstruct close to 70% of the network topology for 10% node failures and more than 60% of the network topology for 50% node failures. Note that a lower reconstruction error probability is observed for a larger number of nodes. The reason for this result is the higher dimension of the input data to the MLPNN, e.g., 1225 for N = 50. Furthermore, from the figure, we observe that P_RE is less than what one might expect for the case where most of the nodes are destroyed, e.g., f = 0.7. This phenomenon is due to the large overlap in the node connections in the LLs among the M damaged networks, caused by the small rewiring probability p. To study how the rewiring probability affects the reconstruction accuracy, simulations are performed with p = 0.3, p = 0.5, and p = 0.7, as shown in Figure 7 and Table 2. From the figure, we can see that there is a significant deterioration in reconstruction accuracy with an increase in the rewiring probability p. This is because the small-world network topology becomes increasingly disordered as the rewiring probability increases, which reduces the ability of the proposed method to reproduce the original network topology.


Figure 6. Probability of reconstruction error for small-world networks with N = 10, N = 30, N = 50, M = 10, and p = 0.15.


Table 1. Probability of reconstruction error for small-world networks with N = 10, N = 30, N = 50, M = 10, and p = 0.15.

N/f 0.1   0.2   0.3   0.4   0.5   0.6   0.7   0.8
10  0.307 0.325 0.343 0.361 0.379 0.397 0.415 0.431
30  0.273 0.284 0.295 0.308 0.321 0.335 0.347 0.358
50  0.186 0.196 0.205 0.216 0.225 0.234 0.241 0.246


Figure 7. Probability of reconstruction error for small-world networks with p = 0.3, p = 0.5, p = 0.7, M = 10, and N = 50.

Table 2. Probability of reconstruction error for small-world networks with p = 0.3, p = 0.5, p = 0.7, M = 10, and N = 50.

P/f 0.1   0.2   0.3   0.4   0.5   0.6   0.7   0.8
0.3 0.315 0.319 0.334 0.356 0.373 0.399 0.431 0.477
0.5 0.430 0.438 0.457 0.477 0.494 0.524 0.566 0.616
0.7 0.534 0.529 0.534 0.556 0.590 0.653 0.705 0.770

However, even in the case of a high rewiring probability p = 0.5, 50% of the links can be successfully estimated. In Figure 8 and Table 3, we study the influence of the number of networks M that were used to train and test the MLPNN on the reconstruction performance. The number of nodes N is assumed to be 50 and the rewiring probability p is set to 0.5. It can be observed from the figure that there is a small degradation in performance with an increase in M, but P_RE remains less than 0.3.

Figure 8. Probability of reconstruction error for small-world networks with M = 10, M = 30, M = 50, N = 50, and p = 0.15.


Table 3. Probability of reconstruction error for small-world networks with M = 10, M = 30, M = 50, N = 50, and p = 0.15.

M/f 0.1   0.2   0.3   0.4   0.5   0.6   0.7   0.8
10  0.186 0.196 0.205 0.216 0.225 0.234 0.241 0.246
30  0.218 0.226 0.233 0.239 0.246 0.253 0.260 0.268
50  0.26  0.267 0.267 0.269 0.274 0.279 0.285 0.293

4.3. Scale-Free Network

The proposed method is also evaluated in the scale-free network model generated using the algorithm described in Figure 2. Figure 9 and Table 4 compare the reconstruction error probability for different numbers of nodes, N = 10, N = 30, and N = 50. The initial number of nodes m0 was set to two and the node degree K = 2 for the preferential attachment process. Figure 9 shows that the reconstruction accuracy in the scale-free network model is significantly lower compared to the small-world network. The reason for the poor performance is that the network topologies of the M scale-free networks are more complex compared to the small-world network models. Furthermore, the links in the LLs between the M damaged networks do not overlap as much as in the small-world network models. In Figure 10 and Table 5, the reconstruction error probability performance with N = 30 and m0 = 2 is shown for different numbers of networks, M = 10, M = 30, and M = 50. Compared to the small-world network environment, the reconstruction error probability values are quite high even at low percentages of node failures for M = 30 and M = 50. Due to the complex topology of the scale-free network model, an increase in M affects the link estimation ability of the MLPNN. Finally, Figure 11 and Table 6 show the reconstruction accuracy performance with different initial numbers of nodes in constructing the scale-free network model. One can observe that there is a small difference in reconstruction performance for high percentages of node failures, regardless of the initial number of nodes. This is because the link estimation difficulty is almost equal for the MLPNN, even if the networks have different degree distributions, when the number of hubs remains the same in scale-free networks.


Figure 9. Probability of reconstruction error for scale-free networks with N = 10, N = 30, N = 50, M = 10, and m0 = 2.

Table 4. Probability of reconstruction error for scale-free networks with N = 10, N = 30, N = 50, M = 10, and m0 = 2.

N/f 0.1   0.2   0.3   0.4   0.5   0.6   0.7   0.8
10  0.411 0.423 0.437 0.455 0.484 0.518 0.551 0.588
30  0.455 0.471 0.487 0.510 0.542 0.582 0.628 0.671
50  0.490 0.507 0.525 0.547 0.575 0.618 0.669 0.730

Figure 10. Probability of reconstruction error for scale-free networks with M = 10, M = 30, M = 50, N = 30, and m0 = 2.

Table 5. Probability of reconstruction error for scale-free networks with M = 10, M = 30, M = 50, N = 30, and m0 = 2.

M/f 0.1   0.2   0.3   0.4   0.5   0.6   0.7   0.8
10  0.455 0.471 0.487 0.510 0.542 0.582 0.628 0.671
30  0.643 0.652 0.669 0.685 0.702 0.717 0.730 0.738
50  0.708 0.718 0.729 0.735 0.745 0.755 0.761 0.766


andFigure M =11 11. 10. Probability. of reconstruction errorerror forfor scale-free scale-free networks networks with withm m0 0= = 2, 2,m m00= = 3,3,m m00 = 4, N = 30, 30, and M = 10 10.. Table 6. Probability of reconstruction error for scale-free networks with m0 = 2, m0 = 3, m0 = 4, N = 30,

Table 6. Probability of reconstruction error for scale-free networks with m0 = 2, m0 = 3, m0 = 4, N = 30, and M = 10.

mo/f 0.1   0.2   0.3   0.4   0.5   0.6   0.7   0.8
2    0.455 0.471 0.487 0.510 0.542 0.582 0.628 0.671
3    0.468 0.483 0.503 0.528 0.559 0.597 0.636 0.679
4    0.508 0.522 0.530 0.558 0.582 0.612 0.646 0.689

5. Conclusions

In this paper, we proposed a new method that efficiently reconstructs the topology of damaged complex networks based on the NN technique. To the best of our knowledge, our proposed method is the first known attempt in the literature to recover the network topology after the network has been damaged and also the first known application of the NN technique to complex network optimization. The main purpose of our work was to design a NN solution based on known damaged network topology for accurate reconstruction.

The proposed reconstruction method was evaluated based on the probability of reconstruction error in the small-world and scale-free network models. From the simulation results, the proposed method was able to reconstruct around 70% of the network topology for 10% node failures in small-world networks and around 50% of the network topology for 10% node failures in scale-free networks. Important topics to be considered in future work are the development of a new link list that can represent both unidirectional and bidirectional link information and of various performance metrics that can provide a deeper understanding of the network reconstruction performance.

Acknowledgments: This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (2017R1D1A1B03035522).

Author Contributions: Insoo Sohn conceived and designed the experiments and wrote the paper. Ye Hoon Lee analyzed the data and designed the experiments.

Conflicts of Interest: The authors declare no conflict of interest.

References

1. Wang, X.F.; Chen, G. Complex networks: Small-world, scale-free and beyond. IEEE Circuits Syst. Mag. 2003, 3, 6–20. [CrossRef]
2. Newman, M. Networks: An Introduction; Oxford University Press: Oxford, UK, 2010.
3. Sohn, I. Small-world and scale-free network models for IoT systems. Mob. Inf. Syst. 2017, 2017. [CrossRef]
4. Barabási, A.; Albert, R. Emergence of scaling in random networks. Science 1999, 286, 509–512. [PubMed]
5. Albert, R.; Barabási, A. Statistical mechanics of complex networks. Rev. Mod. Phys. 2002, 74, 47–97. [CrossRef]
6. Barabási, A.L. Scale-free networks: A decade and beyond. Science 2009, 325, 412–413. [CrossRef] [PubMed]
7. Crucitti, P.; Latora, V.; Marchiori, M.; Rapisarda, A. Error and attack tolerance of complex networks. Phys. A Stat. Mech. Appl. 2004, 340, 388–394. [CrossRef]
8. Tanizawa, T.; Paul, G.; Cohen, R.; Havlin, S.; Stanley, H.E. Optimization of network robustness to waves of targeted and random attacks. Phys. Rev. E 2005, 71, 1–4. [CrossRef] [PubMed]
9. Schneider, C.M.; Moreira, A.A.; Andrade, J.S.; Havlin, S.; Herrmann, H.J. Mitigation of malicious attacks on networks. Proc. Natl. Acad. Sci. USA 2011, 108, 3838–3841. [CrossRef] [PubMed]
10. Ash, J.; Newth, D. Optimizing complex networks for resilience against cascading failure. Phys. A Stat. Mech. Appl. 2007, 380, 673–683. [CrossRef]
11. Clauset, A.; Moore, C.; Newman, M.E.J. Hierarchical structure and the prediction of missing links in networks. Nature 2008, 453, 98–101. [CrossRef] [PubMed]
12. Guimerà, R.; Sales-Pardo, M. Missing and spurious interactions and the reconstruction of complex networks. Proc. Natl. Acad. Sci. USA 2010, 106, 22073–22078. [CrossRef] [PubMed]
13. Zeng, A. Inferring network topology via the propagation process. J. Stat. Mech. Theory Exp. 2013. [CrossRef]
14. Haykin, S. Neural Networks: A Comprehensive Foundation; Macmillan: Basingstoke, UK, 1994.
15. Zhang, M.; Diao, M.; Gao, L.; Liu, L. Neural networks for radar waveform recognition. Symmetry 2017, 9. [CrossRef]
16. Lawrence, S.; Giles, C.L.; Tsoi, A.C.; Back, A.D. Face recognition: A convolutional neural-network approach. IEEE Trans. Neural Netw. 1997, 8, 98–113. [CrossRef] [PubMed]
17. Lin, S.H.; Kung, S.Y.; Lin, L.J. Face recognition/detection by probabilistic decision-based neural network. IEEE Trans. Neural Netw. 1997, 8, 114–132. [PubMed]
18. Fang, S.H.; Lin, T.N. Indoor location system based on discriminant-adaptive neural network in IEEE 802.11 environments. IEEE Trans. Neural Netw. 2008, 19, 1973–1978. [CrossRef] [PubMed]
19. Sohn, I. Indoor localization based on multiple neural networks. J. Inst. Control Robot. Syst. 2015, 21, 378–384. [CrossRef]
20. Sohn, I. A low complexity PAPR reduction scheme for OFDM systems via neural networks. IEEE Commun. Lett. 2014, 18, 225–228. [CrossRef]

21. Sohn, I.; Kim, S.C. Neural network based simplified clipping and filtering technique for PAPR reduction of OFDM signals. IEEE Commun. Lett. 2015, 19, 1438–1441. [CrossRef]
22. Watts, D.; Strogatz, S. Collective dynamics of 'small-world' networks. Nature 1998, 393, 440–442. [CrossRef] [PubMed]
23. Kleinberg, J.M. Navigation in a small world. Nature 2000, 406, 845. [CrossRef] [PubMed]
24. Newman, M.E.J. Models of the small world. J. Stat. Phys. 2000, 101, 819–841. [CrossRef]
25. Holme, P.; Kim, B.J.; Yoon, C.N.; Han, S.K. Attack vulnerability of complex networks. Phys. Rev. E 2002, 65, 1–14. [CrossRef] [PubMed]
26. Johnson, J.; Picton, P. How to train a neural network: An introduction to the new computational paradigm. Complexity 1996, 1, 13–28. [CrossRef]
27. Marquardt, D.W. An algorithm for least-squares estimation of nonlinear parameters. J. Soc. Ind. Appl. Math. 1963, 11, 431–441. [CrossRef]

© 2017 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).