J.K. Periasamy et al., International Journal of Advanced Engineering Technology E-ISSN 0976-3945

Research Paper
AUGMENTATION WITH MULTI USER REPUDIATION AND DATA DE-DUPLICATION
1J.K. Periasamy, 2B. Latha

Address for Correspondence
1Research Scholar, Information and Communication Engineering, Anna University, Chennai & Computer Science and Engineering, Sri Sairam Engineering College, Chennai, Tamilnadu, India.
2Professor and Head, Computer Science and Engineering, Sri Sairam Engineering College, Chennai, Tamilnadu, India.
ABSTRACT
One of the major applications of Cloud computing is "Cloud Storage", where data is stored in virtual cloud servers provided by numerous third parties. De-duplication is a technique established in cloud storage for eliminating duplicate copies of repeated data. Data De-duplication reduces the storage space and increases the available bandwidth of the server. It is related to intelligent data compression and single-instance data storage. To take the complexity out of managing the Information Technology infrastructure, storage outsourcing has become a popular option. The latest techniques for solving the complications of protective and efficient public auditing of dynamic and shared data are still not secure against collusion, that is, an illegal agreement between the cloud storage server and revoked users, in workable cloud storage. Hence, to prevent the collusion attack present in existing systems and to provide effective global auditing and data integrity, group user repudiation is performed based on an ordered sequence of committed values, and a group signature is generated with a secure hash algorithm. The group user data is encrypted using block ciphers and bilinear transformation. This work also introduces a new approach in which each user holds an independent master key for encryption using the convergent keys technique and outsources the keys to the cloud. Storage optimization is achieved with the help of a messaging scheme between sender and receiver over the network, which reduces the overheads correlated with duplicate detection and query processing. The planned system also uses the binary diff technique to identify the unique data chunks that are stored in the cloud. Breaches of privacy and leakage of data can be prevented to an acceptable level. The data chunk size is set by the user. Moreover, this work also proposes a feasible technique to detect the storage of copyrighted and hazardous content in the Cloud.
KEYWORDS—Cloud Computing, Global auditing, Data integrity, User repudiation, Data De-duplication, Hashing, Data Chunks, Bundling, Copyrighted contents
I. INTRODUCTION
Storing data in the cloud has become an integral part of business organizations and other enterprise solutions. Some of the best cloud data storage services are provided by Mega, pCloud and OneDrive [12]. The authorization layer of cloud storage provides the determination of authenticated identity for a specified set of resources. Consolidation of data centers reduces the cost significantly. The management of cloud storage brings apparent new risks such as unavailability, breaches of privacy, integration of multiple organizations, leakage of data and compliance issues. The security aspects of cloud storage involve protection, privacy, recoverability, reliability and integrity. Effective security in the cloud can be achieved using data encryption services, authorization and management services, access control services, and monitoring and auditing services. Cloud storage is a virtualized infrastructure with instant scalability, elasticity, multi-tenancy and huge resources.
The attack surface increases with storage outsourcing. The data is distributed and stored in different locations, which increases the risk of unauthorized physical access. Because data is shared with multiple users in the cloud, a large number of keys is required for secure storage. The number of networks over which the data travels also increases. During transmission, the risk of the data being read can be mitigated by encryption technology; encryption protects the data that is transmitted in the cloud. Outsourced data in the cloud is more exposed to hackers and national security agencies. Sites which permit file sharing may also enable piracy and copyright infringement. Reliability and availability depend on the network and the service provider. An essential aspect of the cloud service provider is the security audit.
A. Data De-Duplication Process
Data De-duplication is one of the specialized data compression techniques for eliminating repeated data. It is used to enhance storage and network utilization by reducing the number of bytes sent during data transfers. During the De-duplication process, the transferred file is divided into a number of chunks based on a byte size specified dynamically by the user, and then unmatched chunks of data are identified and stored during the analysis process. In the analysis, the other chunks are compared to the already stored copy and, whenever a match occurs, the matched data chunk is replaced with a reference point to the already stored chunk. In a given file, the same bytes may occur in numerous chunks, so the amount of chunk storage and the time required can be reduced using this technique. The match frequency is calculated based on the chunk size.
This system is based on the in-line data De-duplication method, which occurs during data transfer. The transferred data is split into chunks, and each chunk is encrypted using convergent encryption, where the key for encryption is generated from the plain text of the chunk itself; the checksum generated for each chunk is used for naming the respective chunk, so that duplicates can easily be recognized. Once the duplicates are recognized, they are truncated and only the remaining chunks are sent to the storage. There are two main overheads in implementing this: (a) every duplication query brings one extra round-trip time of latency, and (b) TCP connections are terminated for non-duplicate chunks in the original data.
To overcome these shortcomings, a technique that combines an in-memory filter with the inherent data locality is proposed to reduce the frequency of disk lookups. The scattered chunks are bundled together into a single TCP connection; this method is named bundling. The duplicate chunks are usually clustered, and the data locality preserved for already detected chunks is utilized to reduce the number of duplication queries. Moreover, the messaging back scheme helps the sender recognize the duplicate hash values that have been sent back from the receiver. Redundant chunk transfers are reduced using the Messaging Back Scheme.
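The chunk-level De-duplication described in this subsection can be illustrated with a short sketch. The following Python example is only illustrative (the chunk size, function names and in-memory store are assumptions, not the system's actual implementation): it splits a byte stream into fixed-size chunks, names each chunk by its SHA-256 checksum, and stores only chunks whose checksum has not been seen before.
```python
import hashlib

CHUNK_SIZE = 4096          # user-configurable chunk size in bytes (illustrative)
chunk_store = {}           # checksum -> chunk bytes (stands in for the cloud store)

def deduplicate(data: bytes):
    """Split data into fixed-size chunks and return the list of chunk
    checksums; only previously unseen chunks are added to the store."""
    recipe = []
    for offset in range(0, len(data), CHUNK_SIZE):
        chunk = data[offset:offset + CHUNK_SIZE]
        checksum = hashlib.sha256(chunk).hexdigest()   # the chunk is named by its hash
        if checksum not in chunk_store:                # unmatched chunk: store it
            chunk_store[checksum] = chunk
        recipe.append(checksum)                        # duplicates keep only a reference
    return recipe

def reconstruct(recipe):
    """Rebuild the original data from the stored chunks."""
    return b"".join(chunk_store[c] for c in recipe)

if __name__ == "__main__":
    original = b"abc" * 10000
    recipe = deduplicate(original)
    assert reconstruct(recipe) == original
    print(len(recipe), "chunk references,", len(chunk_store), "unique chunks stored")
```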

B. Role of Dedup in Cloud
Backing up the data from a cell phone to the cloud is a routine process. In the back end, the service providers need to protect and store massive amounts of data within the data centers. So, the Cloud Service Providers (CSPs) have implemented the technology named De-duplication, shortly called Dedup. Dedup is used to eliminate the duplicate files that occupy the storage unnecessarily. With it, CSPs can enhance their storage and maintenance easily with utmost security and provide service to the customer at minimum cost.
C. In-line De-duplication
In-line De-duplication is pre-process De-duplication, which removes the duplicates before the data is written to permanent storage. A chunk is removed from the file while the file is being transferred, whenever the identified chunk matches a chunk already in the storage or at the receiver. With this method, the storage required to hold the file after transfer is much less than with post-process De-duplication, where De-duplication is done periodically after the files, duplicates included, have been stored. This system uses the in-line De-duplication method together with some added techniques like filters, Cache, disk chunks and the messaging back mechanism, which facilitate the process greatly.
D. Convergent Encryption
Convergent encryption is one of the encryption methods where the encryption key is generated from the plain text of the file itself. Files with the same plain text produce the same checksum, so the duplicates are recognized with no collision. This procedure has advantages when compared to other techniques, namely plain SHA-1 hashing, where there is a chance of collision and duplicate identification can be false: files with the same plain text may produce different checksums and, in other cases, files with different plain text may produce the same checksum. In such cases the system cannot eliminate the duplicates accurately and is said to be high in collision.
II. RELATED WORK
A scheme to realize efficient and secure data integrity auditing of shared dynamic data with multi-user modification was used. Techniques like vector commitment, Asymmetric Group Key Agreement (AGKA) and group signature with user revocation [1] were adopted to achieve the integrity auditing of the remote data.
An innovative design based on polynomial authentication tags, which allows composition of tags of different data blocks, was achieved. For system scalability, it additionally entrusts the cloud with the ability to aggregate authentication tags [2] from multiple writers into one while sending the integrity proof of the information to the verifier (who may be a general cloud user).
A new auditing mechanism for shared data with efficient user revocation in the cloud was provided. When a user in the group is revoked, the partially trusted cloud re-signs the blocks that were signed by the revoked user with proxy re-signatures [3]. The public or private verifier can audit the integrity of the shared data without retrieving the entire data from the cloud database, even if some of the blocks in the shared data have been re-signed by the cloud.
Group signatures are a central cryptographic primitive in which users can anonymously and accountably sign messages on behalf of the group they belong to. Several adequate constructions with security proofs in the standard model have appeared in recent years. Like standard PKIs, group signatures need a powerful revocation system to be practical. A new method for scalable revocation, related to the Naor-Naor-Lotspiech (NNL) broadcast encryption framework [4], that interacts easily with techniques for building group signatures in the standard model was proposed.
Data De-duplication removes the duplicates in the storage to optimize the storage. This is done based on a prescribed time interval in the data center, and initially it needs storage in plenty. It is unlike the in-line De-duplication process, where the duplicates are removed before storing. De-duplication over networks, that is, in-line De-duplication, brings more advantages for storage optimization, but these are achieved at the cost of various parameters like latency, disk lookups and TCP terminations. A few drawbacks remain in the existing systems, such as latency, unnecessary TCP terminations and unwanted disk lookups.
A method was developed for more protected De-duplication. It introduces a new key concept known as 'de-key', in which the master key of the user is shared among the key management service providers, so the master key cannot easily be compromised by hackers. Unfortunately, it is implemented as post-process De-duplication, which needs larger initial storage and may hold duplicate files in abundance [5].
A De-duplication method was designed for cloud backup, both local and global, especially for personal storage. This scheme is well suited for laptop, desktop and data backups, but is incompatible with smartphone and tablet data backups. It focuses on post-process De-duplication, where the initial storage is large and larger bandwidth is needed to transfer the data to the backup storage [6].
A system was proposed for content delivery acceleration and wide area network optimization using a pack junking algorithm. The drawbacks of this system are high computational cost and tedious efficiency [7,8].
A system was proposed for the avionics network, where the main work is to deal with satellite communication and other geographical message passing, so it obviously needs predictability, which increases the need for large bandwidth. Redundant data is generated by the real-time scheduling, which makes De-duplication difficult. So a De-duplication-aware Deficit Round Robin based scheduling was proposed, and a filtering method was used to speed up the De-duplication process [9].
A method was proposed for removing redundant data in cellular networks using various techniques such as a caching management mechanism, a fingerprinting technique, etc. The reported experimental results show 50% bandwidth savings in Wi-Fi networks and 60% savings in mobile data networks [11]. A genetic algorithm is used to store the records in the database in an efficient manner [13].
III. SYSTEM ARCHITECTURE
De-duplication of similar chunks within the incoming files on their way to the receiver or storage is done by several continuous processes that include Cache mechanisms at both the sender and the receiver. Here the sender de-duplicates the input file's similar chunks using a Cache that contains the duplicate chunks' hash values received from the receiver, and then transfers the input file to the receiver for further duplicate detection. The receiver in turn checks for duplicates with its own Cache and with the filter as well. Once chunks are recognized as unique, they are stored on disk with their supplementing details [10].
Bundling the sparsely distributed multiple chunks into one TCP connection removes the overhead of unnecessary TCP terminations that often occurs when a duplicate is seen. In addition, the duplicate queries are reduced in number by increasing the amount of data represented by the hash values in each query. This is achieved at the cost of large memory for caching the data being queried. The use of convergent encryption yields no collision in chunk recognition for duplicates.
In Fig. 1, the owner of the group is activated by the cloud service provider (CSP). The CSP offers the storage and application services available via a private or public network [14] and enables the users to access the necessary resources through the internet. To provide such resources, providers often fall back upon other providers in the cloud. The group owners are authenticated by the cloud service provider. Several group members register into a group, which is maintained by the group owner. There are two types of data stored in the cloud. The first kind is data that is created by the user and then uploaded into the cloud. The other kind is data that is created on the cloud platform. Data produced prior to being uploaded into a cloud platform may be governed by appropriate copyright laws depending on the cloud server, while data that is created after storage brings about a totally new dimension of ownership.

Fig. 1. System Architecture
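To make the sender/receiver interaction in Fig. 1 more concrete, the following sketch shows the messaging back idea: the receiver reports the hashes of chunks it already holds, and the sender caches them so that known duplicates are never transferred or queried again. The class names, message format and in-memory stores are assumptions for illustration, not the system's actual implementation.
```python
import hashlib

class Receiver:
    def __init__(self):
        self.stored = {}                     # hash -> chunk on the receiver's disk

    def accept(self, chunk: bytes) -> list:
        """Store a chunk if it is new and message back the hashes that are
        now known to be duplicates, so the sender can update its cache."""
        h = hashlib.sha256(chunk).hexdigest()
        duplicates = []
        if h in self.stored:
            duplicates.append(h)             # messaging back: report the duplicate hash
        else:
            self.stored[h] = chunk
        return duplicates

class Sender:
    def __init__(self, receiver: Receiver):
        self.receiver = receiver
        self.dup_cache = set()               # hashes reported back by the receiver

    def send(self, chunks):
        sent = 0
        for chunk in chunks:
            h = hashlib.sha256(chunk).hexdigest()
            if h in self.dup_cache:          # known duplicate: no query, no transfer
                continue
            self.dup_cache.update(self.receiver.accept(chunk))
            sent += 1
        return sent

if __name__ == "__main__":
    r = Receiver()
    s = Sender(r)
    chunks = [b"A" * 1024, b"B" * 1024, b"A" * 1024, b"A" * 1024]
    # the last 'A' chunk is skipped because its hash was messaged back earlier
    print("chunks transferred:", s.send(chunks))
```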

IV. IMPLEMENTATION
A. Splitting a Large File and Transferring it to the Receiver
In the baseline, the large file is sent from the sender to the receiver without splitting it into chunks and is received at the receiver as a whole. In this module, the large file is instead split into smaller chunks which are sent from the sender to the receiver. These chunks are received at the receiver as they are, merged back into a large file exactly as in the source file on the sender side, and then stored in the database.
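A small sketch of this module follows, under an assumed chunk size and assumed function names: the file is split into numbered chunks and merged back into the original file at the receiver, even if the chunks arrive out of order.
```python
import hashlib

CHUNK_SIZE = 2048   # chunk size chosen by the user (illustrative value)

def split_file(data: bytes):
    """Split a large byte string into numbered chunks for transfer."""
    return [(seq, data[i:i + CHUNK_SIZE])
            for seq, i in enumerate(range(0, len(data), CHUNK_SIZE))]

def merge_chunks(records):
    """Merge received (sequence, chunk) records back into the original file,
    regardless of the order in which they arrived."""
    return b"".join(chunk for _, chunk in sorted(records, key=lambda r: r[0]))

if __name__ == "__main__":
    source = bytes(range(256)) * 100          # a stand-in for the large file
    records = split_file(source)
    records.reverse()                         # simulate out-of-order arrival
    merged = merge_chunks(records)
    assert merged == source
    print("merged file matches source:", hashlib.sha256(merged).hexdigest()[:16])
```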

Fig. 2. Splitting a large file and transferring it to the receiver
From Fig. 2 it can be seen that files are sent from the sender to the receiver in a sequential process. On the sender side, the files are split into a number of chunks and these chunks are bundled. The chunks are transferred through one TCP connection, received and checked, and finally delivered to the receiver.
B. Encrypting the Chunks and Producing the Hash Value
While transferring the chunks, there is a possibility of transferring duplicate chunks; identifying these similar chunks with the help of convergent encryption is the concept of this procedure, as shown in Fig. 3. Each chunk is encrypted using convergent encryption, the type of encryption in which the key is generated from the plain text of the chunk itself and is then used to encrypt and decrypt the data in a secure manner. Any chunks having the same plain text will produce the same checksum as a result of convergent encryption, and this is what distinguishes it from other techniques.
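A minimal sketch of convergent encryption of a single chunk follows. The keystream construction is a simplified stand-in for the block cipher used in the paper, and the function names are illustrative; the point is only that the key is derived from the chunk's plain text, so identical chunks yield identical ciphertexts and locators.
```python
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    """SHA-256 in counter mode; a stand-in for a real block cipher,
    used here only to keep the sketch self-contained."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def convergent_encrypt(chunk: bytes):
    """Convergent encryption of one chunk: the key is derived from the chunk's
    own plain text, and the hash of the ciphertext serves as its name/locator."""
    key = hashlib.sha256(chunk).digest()                 # key = H(plaintext)
    cipher = bytes(a ^ b for a, b in zip(chunk, keystream(key, len(chunk))))
    locator = hashlib.sha256(cipher).hexdigest()         # checksum used to name the chunk
    return key, locator, cipher

def convergent_decrypt(key: bytes, cipher: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(cipher, keystream(key, len(cipher))))

if __name__ == "__main__":
    k1, loc1, c1 = convergent_encrypt(b"identical chunk contents")
    k2, loc2, c2 = convergent_encrypt(b"identical chunk contents")
    assert (k1, loc1, c1) == (k2, loc2, c2)   # same plain text -> same key, locator, ciphertext
    assert convergent_decrypt(k1, c1) == b"identical chunk contents"
    print("duplicate detected via identical locator:", loc1[:16])
```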

Fig. 3. Encrypting the chunks and producing the Hash Value
C. Bundling the Chunks in a Single TCP Session on the Sender Side
The bundling method is implemented to avoid TCP terminations and establishments for non-duplicated chunks that may be sparse in the original file. There are two bundling techniques to bundle sparsely distributed chunks into a large block of data to be carried in a single TCP session. This module uses the disk chunk, the Cache and the messaging back scheme to carry out the work efficiently.
D. Implementing the Messaging Back Technique from Receiver to Sender
In this system, the messaging back technique is implemented. With this technique, the sender recognizes the duplicate hash values of chunks that are already present at the receiver or in storage, so the sender does not send a duplicate query to the receiver in order to check whether the data chunk is already present in storage. This technique avoids numerous disk lookup operations and duplicate queries. The hash value of a duplicate chunk is identified by the receiver, which sends the hash value of that particular chunk back to the sender. The sender's Cache then helps the sender check the input file's hash values against this value. If the hash values are equal, the chunk of the input file is not transferred to the receiver. Otherwise, it is transferred, since it has a different hash value, and is checked for further duplicate detection at the receiver.
E. Implementing the Binary Diff Technique at the Receiver
Once the original chunks are received at the receiver, they are further split into unique chunks in accordance with the predetermined size using the Binary Diff technique. Binary Diff detects further duplicate chunks and highlights them after comparison with the stored chunks. The red mark indicates unique chunks and the black mark indicates duplicates.
V. TECHNIQUES INVOLVED
A. Bilinear Transformation
The ciphertext is produced through transformations from one domain to another domain; the bits of the original data are transformed into another set of bits. The constants a, b, c, d act as the key. The encryption uses pseudo random numbers to choose the index of the shared data: a random number generated in a specified range becomes the index of the position selected in the shared data. The text that is selected is transformed into a 128-bit hash code. This 128-bit hash code is separated into four parts, which are then taken as the constants. As illustrated in Fig. 4, a mapping of the form w = (az + b) / (cz + d), with ad − bc ≠ 0, where A denotes the plane (which may be complex), a, b, c, d ∈ A are constants, and z and w are the complex variables located in different planes, is called the bilinear transformation from the plane A to A. The bilinear transformation is a blend of rotation and translation. The cipher text is received by the third party auditor, who divides the shared data according to the pseudo random numbers; the cipher text is produced using the known mathematical function.
Fig. 4. Bilinear transformation from domain (x, y) to domain (u, v)
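As a hedged illustration of how a 128-bit hash can supply the four constants of the bilinear map, the sketch below splits an MD5 digest into a, b, c, d and applies w = (az + b)/(cz + d) to a complex value. The key derivation, the encoding of the shared data as a complex number and the use of MD5 are assumptions made only to keep the example self-contained; they are not the paper's exact construction.
```python
import hashlib

def constants_from_hash(data: bytes):
    """Split a 128-bit MD5 digest of the selected text into four 32-bit parts
    and use them as the constants a, b, c, d of the bilinear map."""
    digest = hashlib.md5(data).digest()                  # 128-bit hash code
    a, b, c, d = (int.from_bytes(digest[i:i + 4], "big") for i in (0, 4, 8, 12))
    if a * d == b * c:                                   # degenerate key: ad - bc must be non-zero
        d += 1
    return a, b, c, d

def bilinear(z: complex, a, b, c, d) -> complex:
    """w = (a*z + b) / (c*z + d), the bilinear (Mobius) transformation."""
    return (a * z + b) / (c * z + d)

def inverse_bilinear(w: complex, a, b, c, d) -> complex:
    """The inverse map z = (d*w - b) / (-c*w + a)."""
    return (d * w - b) / (-c * w + a)

if __name__ == "__main__":
    a, b, c, d = constants_from_hash(b"shared data selected by a pseudo-random index")
    z = complex(3.5, -1.25)                              # a value from the (x, y) domain
    w = bilinear(z, a, b, c, d)                          # its image in the (u, v) domain
    assert abs(inverse_bilinear(w, a, b, c, d) - z) < 1e-6
    print("z =", z, "-> w =", w)
```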
B. Vector Commitment
The commitment must hide the information without revealing it to an unauthorized user. The committed message allows position binding, so that an ordered sequence of values can be committed together rather than a single message [15]. Position binding states that no two values can be located at the same position. The commitment also needs to be updatable. Position binding is satisfied by the vector commitment, whose algorithms are as follows:
a. Key generation, which takes the security parameter and sets up the parameters for the vector that is to be committed.
b. Commitment, which takes the input sequence of messages and produces the commitment.
c. Update, which takes the old message and the updated (new) message and refreshes the commitment.
d. Verification, which accepts only if the supplied opening is a valid proof.
C. Group Signature
The original identity of the individual signer cannot be determined without the secret key of the group owner; this property is known as anonymity. The owner of the group must be able to trace the signer, and colluding group members cannot create a valid signature. The scheme consists of:
a. Setup: given a security parameter and the number of permitted users, choose bilinear groups of prime order with a generator.
b. Join: a protocol run between the group manager and the corresponding user.
c. Sign: to sign a message, generate a one-time signature.
D. Multi User Revocation
The group signature assigned to a particular group user is removed from the cloud database and the group user is no longer able to access the group. If the revoked user tries to access the group by hacking another user's signature, then an alert is provided to the owner and to the TPA to modify the hacked user's signature. Hence this user must re-sign into the group with a new signature.
a. First, find the subset of revoked users.
b. Verification of a valid proof must return 1, otherwise 0.
The signature check must be carried out by the third party auditor to investigate the validity of the group signature. It makes use of the verifier-local revocation algorithm [1] to check the validity.
E. Convergent Encryption
Convergent encryption is the method of generating the key for encryption from the plain text of the file itself, so that files that have the same text deliver the same checksum, from which the duplicates can be recognized easily. If it is implemented properly, it is a secure form of encryption that prevents those who do not already hold the data from obtaining it from the encrypted data. The algorithm proceeds as follows:
• Find the hash value for every chunk once the file is split. The key for encryption is this hash value.
• The encrypted data is then hashed. This hash value is termed the 'locator'.
• The receiver receives the locator value sent by the sender. If the server already has the data, it increments the reference count if desired. If the server does not, the sender uploads it.
• The sender need not send the key to the server. The server can validate the receiver without knowing the key, just by checking the hash of the encrypted data.
• A sender and a receiver can view the data from the storage using the key. The sender sends the locator to the server, the server searches the data for them and responds with it. Finally, decryption is done on the sender side using the key. The result is fully deterministic, so any users encrypting equivalent data generate an equivalent key, locator and encrypted data.
F. Cache
Cache is implemented at both the sender and the receiver, and identifies duplicates in two steps. First, the sender Cache identifies the duplicate chunks. Second, the duplicates of the file are identified by the Cache at the receiver and filtered out there.
G. Bundling
TCP connections would otherwise be terminated and established for each non-duplicate chunk that may be sparse in the original file. To avoid this frequent setting up and termination of TCP connections, there are two bundling modules to bundle the scattered, distributed chunks into a large block of data to be carried in a single TCP session.
H. Filtering
If no identical value is found in the Cache, the filter is queried to determine whether the incoming value is new. If the Bloom filter indicates that the probability of a duplicate is high, the receiver confirms the duplicate chunk on the hard drive. In the same way, copyrighted contents are not allowed to be stored if they already exist.
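The Bloom-filter step described in subsection H can be illustrated with a minimal sketch. The bit-array size, number of hash functions and probe values below are assumptions chosen for illustration; a production filter would be sized for the expected number of chunk hashes and an acceptable false-positive rate.
```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter for chunk-hash lookups: it may report a false
    positive (which is then confirmed on disk) but never a false negative."""
    def __init__(self, size_bits: int = 8192, num_hashes: int = 4):
        self.size = size_bits
        self.k = num_hashes
        self.bits = bytearray(size_bits // 8)

    def _positions(self, value: str):
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{value}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.size

    def add(self, value: str):
        for p in self._positions(value):
            self.bits[p // 8] |= 1 << (p % 8)

    def might_contain(self, value: str) -> bool:
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(value))

if __name__ == "__main__":
    bf = BloomFilter()
    stored_hashes = [hashlib.sha256(bytes([b]) * 64).hexdigest() for b in range(100)]
    for h in stored_hashes:
        bf.add(h)
    probe = stored_hashes[42]
    if bf.might_contain(probe):
        print("probable duplicate; confirm with a disk lookup before discarding the chunk")
    else:
        print("definitely new; write the chunk to disk")
```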
VI. RESULTS AND EVALUATIONS
The owner of the group must be activated by the CSP. As shown in Fig. 5, the services are available to the owner only when it has been activated by the CSP; the CSP authenticates the group owner by activating it in its portal. A group id is selected by each group owner during registration with the CSP. A secure code is mailed to the group users for their personal access. A number of group users register under each group owner, and they also receive a unique secret code. The secure code is generated using a polynomial-time algorithm.
The owner of the group generates two keys, as shown in Fig. 6: the public key is used by the users of the group to encrypt the file, and the private key is used by the public auditor to verify the file and to decrypt it.
The group signature for each encrypted block is generated and stored secretly as cipher text, as shown in Fig. 7.
In Fig. 8, the private key must be entered correctly by the authorized group user; if not, the user is immediately revoked. This prevents file access by robots.
This method can be implemented in various fields where a group of users is maintained by an owner. The applications include educational institutions, health care organizations, banking sectors and multinational companies, which make heavy use of cloud services.


Fig. 5. Activation of group owner by CSP

Fig. 6. Key Generation

Fig. 7. Generation of Group Signature

Fig. 8. Revoking Group User for Entering Incorrect Private Key
The evaluation also examines the behavior of the proposed system's data De-duplication result and identifies the copyrighted contents. As all the data chunks are flagged, the following value is calculated:

(m / n) × 100 ------> (1)

where m is the number of chunks that are flagged as copyrighted and n is the total number of data chunks. If the percentage given by equation (1) is greater than 50%, the system identifies the file as copyrighted content. The bandwidth performance increases and the delay in the storage is reduced. The plots are drawn based on the m and n values, with the value of "m" ranging from 1000 to 4000. In Fig. 9 it is observed that the greater the value of "n", the number of chunks the file is split into, the greater the accuracy of the results, which grows linearly.
Fig. 9. Comparison ratio of m and n values
The sample results of the Binary Diff technique, which is carried out in the Operating System, are shown in Fig. 10. Fig. 10 highlights the duplicate chunks after comparison with the stored chunks, as well as the unique chunks: the red marks indicate the unique chunks and the black marks indicate the duplicate chunks.
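As a worked illustration of equation (1), the short sketch below computes the flagged-chunk percentage and applies the 50% threshold; the values of m and n are assumed for the example only.
```python
def copyright_percentage(m: int, n: int) -> float:
    """Equation (1): percentage of data chunks flagged as copyrighted."""
    return (m / n) * 100.0

if __name__ == "__main__":
    m, n = 2600, 4000                     # illustrative counts of flagged and total chunks
    p = copyright_percentage(m, n)
    print(f"{p:.1f}% of chunks flagged")  # 65.0%
    if p > 50.0:
        print("file is identified as copyrighted content")
```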


Fig. 10. Sample results of Binary Diff Technique
VII. CONCLUSION
The verifiable database with frequent updates is an important way to solve the problem of verifiable outsourcing of storage. A scheme to achieve effective and secure data auditing for shared dynamic data with multi-user modification is proposed. The scheme adopts bilinear transformation, vector commitment, Asymmetric Group Key Agreement (AGKA) and group signatures with group user revocation to achieve the data integrity auditing of remote data. Besides the public data auditing, the aggregation of these primitives enables the scheme to outsource a cipher text database to the remote cloud and to support protective group user repudiation on shared data. Each and every chunk in the file is encrypted using convergent encryption so that similar chunks produce the same checksum, thereby allowing the duplicates to be identified. This has been implemented so far and will be used for the further development of the system, which uses Cache and filtering mechanisms to overcome the disk lookups. The Binary Diff technique identifies further duplicate chunks and allocates reference points. Consequently, copyrighted contents are not allowed to be stored if they already exist.
REFERENCES
[1] Jiang, Xiaofeng Chen and Jianfeng Ma, "Public integrity auditing for shared dynamic cloud data with group user revocation", IEEE Transactions, 2015, DOI 10.1109.
[2] Jiawei Yuan and Shucheng Yu, "Efficient public integrity checking for cloud data sharing with multi-user modification", in Proc. of IEEE INFOCOM, Toronto, Canada, 2014, pp. 2121-2129.
[3] Boyang Wang, Baochun Li and Hui Li, "Public auditing for shared data with efficient user revocation in the cloud", in Proc. of IEEE INFOCOM, Turin, Italy, 2013, pp. 2904-2912.
[4] Benoit Libert, Thomas Peters and Moti Yung, "Group signatures with almost-for-free revocation", in Proc. of CRYPTO 2012, USA, pp. 571-589.
[5] Jin Li, Xiaofeng Chen, Mingqiang Li, Jingwei Li, Patrick P.C. Lee and Wenjing Lou, "Secure De-duplication with Efficient and Reliable Convergent Key Management", IEEE Transactions on Parallel and Distributed Systems, Vol. 25, No. 6, June 2014, pp. 1615-1625.
[6] Yinjin Fu, Hong Jiang, Nong Xiao, Lei Tian, Fang Liu and Lei Xu, "Application-aware Local-global Source De-duplication for Cloud Backup Services of Personal Storage", IEEE Transactions on Parallel and Distributed Systems, Vol. 25, No. 5, May 2014, pp. 1155-1165.
[7] Yan Zhang and Nirwan Ansari, "On Protocol-independent Data Redundancy Elimination", IEEE Communications Surveys & Tutorials, Vol. 16, No. 1, First Quarter 2014, pp. 455-472.
[8] Bing Zhou and Jiangtao Wen, "Efficient File Communication via De-duplication over Networks with Manifest Feedback", IEEE Communications Letters, Vol. 18, No. 1, January 2014.
[9] Yinjin Fu, Hong Jiang, Nong Xiao, Lei Tian, Fang Liu and Lei Xu, "Scheduling Heterogeneous Flows with Delay-Aware De-duplication for Avionics Applications", IEEE Transactions on Parallel and Distributed Systems, Vol. 23, No. 9, Sept. 2012, pp. 1790-1802.
[10] Peter Christen, "A Survey of Indexing Techniques for Scalable Record Linkage and De-duplication", IEEE Transactions on Knowledge and Data Engineering, Vol. 24, No. 9, Sept. 2012, pp. 1537-1555.
[11] W.K. Ng, Y. Wen and H. Zhu, "Private Data De-duplication Protocols in Cloud Storage", in Proc. of the 27th Annual ACM Symposium on Applied Computing, S. Ossowski and P. Lecca, Eds., Mar. 2012, pp. 441-446.
[12] http://www.pcadvisor.co.uk/test-centre/internet/14-best-cloud-storage-services-2016-uk-copy-3614269/
[13] Moisés G. De Carvalho, Alberto H.F. Laender, Marcos André Gonçalves and Altigran S. Da Silva, "A genetic programming approach to record De-duplication", IEEE Transactions on Knowledge and Data Engineering, Vol. 24, Issue 3, March 2012.
[14] http://searchcloudprovider.techtarget.com/definition/cloud-provider
[15] D. Catalano and D. Fiore, "Vector commitments and their applications", in Public-Key Cryptography - PKC 2013, Nara, Japan, 2013, pp. 55-72.
