International Journal of Computer Science & Information Security


International Journal of Computer Science & Information Security (IJCSIS), Vol. 12 No. 7, July 2014
ISSN (online): 1947-5500
© IJCSIS PUBLICATION 2014

CALL FOR PAPERS
International Journal of Computer Science and Information Security (IJCSIS)
January-December 2014 Issues

Please consider contributing to, and/or forwarding to the appropriate groups, the following opportunity to submit and publish original scientific results. The topics suggested by this issue can be discussed in terms of concepts, surveys, state of the art, research, standards, implementations, running experiments, applications, and industrial case studies. Authors are invited to submit complete unpublished papers, which are not under review in any other conference or journal, in the following (but not limited to) topic areas. See the authors' guide for manuscript preparation and submission guidelines.

Indexed by Google Scholar, DBLP, CiteSeerX, Directory of Open Access Journals (DOAJ), Bielefeld Academic Search Engine (BASE), SCIRUS, Scopus Database, Cornell University Library, ScientificCommons, ProQuest, EBSCO and more.
Deadline: see web site. Notification: see web site. Revision: see web site. Publication: see web site.

Topics include (but are not limited to):
Context-aware systems; Agent-based systems; Networking technologies; Mobility and multimedia systems; Security in networks, systems, and applications; Systems performance; Evolutionary computation; Networking and telecommunications; Industrial systems; Software development and deployment; Knowledge virtualization; Autonomic and autonomous systems; Systems and networks on the chip; Bio-technologies; Knowledge for global defense; Knowledge data systems; Information Systems (IS); Mobile and distance education; IPv6 Today - Technology and deployment; Intelligent techniques, logics and systems; Modeling; Knowledge processing; Software Engineering; Information technologies; Optimization; Internet and web technologies; Complexity; Digital information processing; Natural Language Processing; Cognitive science and knowledge; Speech Synthesis; Data Mining.

For more topics and information, please visit the journal website (https://sites.google.com/site/ijcsis/).

Editorial Message from the Managing Editor

The International Journal of Computer Science and Information Security (IJCSIS) presents research, review and survey papers which offer a significant contribution to computer science knowledge, and which are of high interest to a wide academic/research/practitioner audience. Coverage extends to all mainstream and state-of-the-art branches of computer science, security and related information technology applications. As a scholarly open-access peer-reviewed journal, the IJCSIS mission is to provide an outlet for quality research publications. It aims to promote universal access, with equal opportunities for the international scientific community, to scientific knowledge, and the creation and dissemination of scientific and technical information. IJCSIS archives all publications in major academic/scientific databases.
Indexed by the following international agencies and institutions: Google Scholar, Bielefeld Academic Search Engine (BASE), CiteSeerX, SCIRUS, Cornell University Library, EI, Scopus, DBLP, DOI, ProQuest, EBSCO. Google Scholar reports an increasing number of cited papers published in IJCSIS (No. of Cited Papers: 520, No. of Citations: 981, Years: 5). Abstracting/indexing, editorial board and other important information are available online on the homepage.

This journal supports the Open Access policy of distribution of published manuscripts, ensuring "free availability on the public Internet, permitting any users to read, download, copy, distribute, print, search, or link to the full texts of [published] articles". The IJCSIS editorial board, consisting of international experts, ensures a rigorous peer-reviewing process. We look forward to your collaboration. For further questions please do not hesitate to contact us at [email protected].

A complete list of journals can be found at: http://sites.google.com/site/ijcsis/

IJCSIS Vol. 12, No. 7, July 2014 Edition
ISSN 1947-5500
© IJCSIS, USA

IJCSIS EDITORIAL BOARD
Dr. Yong Li - School of Electronic and Information Engineering, Beijing Jiaotong University, P. R. China
Prof. Hamid Reza Naji - Department of Computer Engineering, Shahid Beheshti University, Tehran, Iran
Dr. Sanjay Jasola - Professor and Dean, School of Information and Communication Technology, Gautam Buddha University
Dr. Riktesh Srivastava - Assistant Professor, Information Systems, Skyline University College, University City of Sharjah, Sharjah, PO 1797, UAE
Dr. Siddhivinayak Kulkarni - University of Ballarat, Ballarat, Victoria, Australia
Professor (Dr) Mokhtar Beldjehem - Sainte-Anne University, Halifax, NS, Canada
Dr. Alex Pappachen James (Research Fellow) - Queensland Micro-nanotechnology Center, Griffith University, Australia
Dr. T. C. Manjunath - HKBK College of Engg., Bangalore, India
Prof. Elboukhari Mohamed - Department of Computer Science, University Mohammed First, Oujda, Morocco

TABLE OF CONTENTS

1. Paper 30061401: Logical Analysis of an Accelerated Secure Multicast Authentication Protocol (pp. 1-10)
Full Text: PDF
Ghada Elkabbany, Informatics Dept., Electronics Research Institute, Cairo, Egypt
Mohamed Rasslan, Informatics Dept., Electronics Research Institute, Cairo, Egypt
Heba Aslan, Informatics Dept., Electronics Research Institute, Cairo, Egypt

Abstract — Multicast authentication is a challenging problem, because the receiver should verify received packets without assuming the availability of the entire original stream, while resisting many types of attacks, such as pollution attacks. Researchers have proposed many solutions in the literature, with the major drawbacks of high communication and computation overheads; others suffer from packet loss and pollution attacks. Recently, signature techniques were used to provide multicast authentication. Signcryption techniques have the advantage of achieving the basic goals of both encryption and signature schemes, but they cannot by themselves resist packet loss. In a previous work, we proposed a multicast authentication protocol based on signcryption techniques and an erasure code function to solve the packet loss problem. In this paper, we utilize a pipelining technique to reduce the computation overhead; pipelining is chosen because it suits the nature of the signcryption algorithm, and it reduces the computation time. Moreover, a verification of our protocol using BAN logic is performed. The analysis shows that it achieves the goals of authentication without bugs or redundancies. A comparison of multicast authentication protocols is carried out; the results show that the accelerated multicast authentication protocol resists packet loss and pollution attacks with low computation and communication overheads, and therefore could be used in real-time applications.
Keywords — Multicast Communication, Authentication, Signcryption, Erasure Code Functions, Parallel Pipelined, BAN Logic.

2. Paper 30061403: Image Zooming using Sinusoidal Transforms like Hartley, DFT, DCT, DST and Real Fourier Transform (pp. 11-16)
Full Text: PDF
Dr. H. B. Kekre, Senior Professor, Computer Engineering Department, MPSTME, NMIMS University, Vile Parle, Mumbai, India
Dr. Tanuja Sarode, Associate Professor, Computer Department, Thadomal Shahani Engg. College, Bandra, Mumbai 50, India
Shachi Natu, Ph.D. Research Scholar, Computer Engineering Department, MPSTME, NMIMS University, Vile Parle, Mumbai, India

Abstract — A simple method of resizing an image is proposed, using the relation between sampling frequency and zero padding in the frequency and time domains (or vice versa) of the Fourier transform. Padding zeroes in the frequency domain and then taking the inverse transform gives a zooming effect to the image. Transforms such as the Fourier transform, Real Fourier transform, Hartley transform, DCT and DST are used. Their performance is compared, and the Hartley transform is found to give better performance; as the image size increases, DCT starts giving better performance. The performance of all these transforms is also compared with another resizing technique called grid-based scaling, and transform-based resizing is observed to be better than grid-based resizing.

Keywords — Image zooming; DFT; Hartley Transform; Real Fourier Transform; DCT; DST

3. Paper 30061404: A Self-Training with Multiple CPUs Algorithm for Load Balancing using Time Estimation (pp. 17-21)
Full Text: PDF
Aziz Alotaibi, Fahad Alswaina, Department of Computer Science, 221 University Ave, University of Bridgeport, Bridgeport, CT, USA

Abstract — In this paper, we propose a self-training algorithm using two new parameters, execution time and priority type, to improve load balancing performance.
Load balancing uses information such as CPU load, memory usage, and network traffic, extracted from previous executions, to increase resource utilization. We include the execution time of each property individually, such as CPU-bound and memory-bound work, to balance the load between nodes. Priority type is taken into account to enhance and expedite the processing of high-priority requests.

Keywords — Cloud Computing, Load Balancing, Resource allocation.

4. Paper 30061405: Result-Oriented Approach for Websites Accessibility Evaluation (pp. 22-30)
Full Text: PDF
Marya Butt, School of Governance, Utrecht University, Utrecht, the Netherlands

Abstract — The paper attempts to devise a
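As a minimal illustration of the frequency-domain zero-padding zoom described in paper 30061403 (a sketch under stated assumptions: it uses NumPy's FFT rather than the paper's implementation, and the function name and DC-preserving scale factor are choices of this sketch, not the paper's):

```python
import numpy as np

def zoom_dft(img, factor):
    """Zoom a grayscale image by zero-padding its centered 2-D DFT."""
    h, w = img.shape
    F = np.fft.fftshift(np.fft.fft2(img))     # move DC to the center
    H, W = h * factor, w * factor
    P = np.zeros((H, W), dtype=complex)
    top, left = (H - h) // 2, (W - w) // 2
    P[top:top + h, left:left + w] = F         # pad zeroes around the spectrum
    # scale by factor**2 so mean intensity is preserved after the inverse
    return np.real(np.fft.ifft2(np.fft.ifftshift(P))) * factor**2

small = np.arange(16.0).reshape(4, 4)
big = zoom_dft(small, 2)   # shape (8, 8), mean intensity preserved
```

The same padding idea carries over to the other transforms the paper compares (Hartley, DCT, DST, Real Fourier); only the forward/inverse transform pair changes.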
Recommended publications
  • The Pillars of Lossless Compression Algorithms a Road Map and Genealogy Tree
    International Journal of Applied Engineering Research, ISSN 0973-4562, Volume 13, Number 6 (2018), pp. 3296-3414. © Research India Publications. http://www.ripublication.com
    The Pillars of Lossless Compression Algorithms: a Road Map and Genealogy Tree
    Evon Abu-Taieh, PhD, Information System Technology Faculty, The University of Jordan, Aqaba, Jordan
    Abstract — This paper presents the pillars of lossless compression algorithms, methods and techniques. The paper counted more than 40 compression algorithms. Although each algorithm is independent in its own right, these algorithms still interrelate genealogically and chronologically. The paper then presents the genealogy tree suggested by the researcher; the tree shows the interrelationships between the 40 algorithms, as well as the chronological order in which the algorithms came to life. The time relation shows the cooperation within the scientific society and how researchers amended each other's work. The paper presents the 12 pillars researched in this paper, and a comparison table is developed. The tree is presented in the last section of the paper, after presenting the 12 main compression algorithms, each with a practical example.
    The paper first introduces Shannon-Fano code, showing its relation to Shannon (1948), Huffman coding (1952), Fano (1949), Run Length Encoding (1967), Peter's Version (1963), Enumerative Coding (1973), LIFO (1976), FIFO Pasco (1976), Stream (1979), and P-Based FIFO (1981). Two examples are presented, one for Shannon-Fano code and the other for Arithmetic Coding. Next, Huffman code is presented with a simulation example and algorithm. The third is the Lempel-Ziv-Welch (LZW) algorithm, which hatched more than 24
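The excerpt above names Shannon-Fano coding as one of its pillars. A minimal sketch of that classic technique (not the paper's own example; the frequencies and function name here are illustrative): sort symbols by descending frequency, split the list into two groups of near-equal total frequency, prefix one group's codes with 0 and the other's with 1, and recurse.

```python
def shannon_fano(freqs):
    """Assign Shannon-Fano codes to a {symbol: frequency} mapping."""
    codes = {}

    def assign(symbols, prefix):
        if len(symbols) == 1:
            codes[symbols[0][0]] = prefix or "0"
            return
        total = sum(f for _, f in symbols)
        acc, split, best = 0, 1, total
        # choose the split point that best balances the two halves
        for i in range(1, len(symbols)):
            acc += symbols[i - 1][1]
            diff = abs(total - 2 * acc)
            if diff < best:
                best, split = diff, i
        assign(symbols[:split], prefix + "0")
        assign(symbols[split:], prefix + "1")

    assign(sorted(freqs.items(), key=lambda kv: -kv[1]), "")
    return codes

codes = shannon_fano({"A": 15, "B": 7, "C": 6, "D": 6, "E": 5})
# -> {'A': '00', 'B': '01', 'C': '10', 'D': '110', 'E': '111'}
```

Note the resulting code is prefix-free but, unlike Huffman coding, not guaranteed optimal, which is one reason the genealogy moves on to Huffman (1952).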
  • International Journal of Computer Science & Information Security
    IJCSIS Vol. 11 No. 11, November 2013, ISSN (online): 1947-5500. © IJCSIS PUBLICATION 2013. This issue opens with the same call for papers (January-December 2014 issues), topic list and indexing information as the July 2014 issue above.
  • LCH-Png-Chapter.Pdf
    1 Overview PNG, the Portable Network Graphics format, is a robust, extensible, general-purpose and patent-free image format. In this chapter we briefly describe the events that led to its creation in early 1995 and how the resulting design decisions affect its compression. Then we examine PNG’s compression engine, the deflate algorithm, quickly touch on the zlib stream format, and discuss some of the tunable settings of its most popular implementation. Since PNG’s compression filters are not only critical to its efficiency but also one of its more unusual features, we spend some time describing how they work, including several examples. Then, because PNG is a practical kind of a format, we give some practical rules of thumb on how to optimize its compression, followed by some “real-world” comparisons with GIF, TIFF and gzip. Finally we cover some of the compression-related aspects of MNG, PNG’s animated cousin, and wrap up with pointers to other sources for further reading. 2 Historical Background Image formats are all about design decisions, and such decisions often can be understood only in a historical context. This is particularly true of PNG, which was created during—and as part of—the “Wild West” days of the Web. Though PNG’s genesis was at the beginning of 1995, its roots go back much further, to the seminal compression papers by Abraham Lempel and Jacob Ziv in 1977 and 1978. The algorithms described in those papers, commonly referred to as “LZ77” and “LZ78” for obvious reasons, spawned a whole host of refinements.
  • Chapter 7 Lossless Compression Algorithms
    Chapter 7: Lossless Compression Algorithms (Fundamentals of Multimedia)
    7.1 Introduction
    7.2 Basics of Information Theory
    7.3 Run-Length Coding
    7.4 Variable-Length Coding (VLC)
    7.5 Dictionary-based Coding
    7.6 Arithmetic Coding
    7.7 Lossless Image Compression
    7.8 Further Exploration

    7.5 Dictionary-based Coding
    • LZW uses fixed-length codewords to represent variable-length strings of symbols/characters that commonly occur together, e.g., words in English text.
    • The LZW encoder and decoder build up the same dictionary dynamically while receiving the data.
    • LZW places longer and longer repeated entries into a dictionary, and then emits the code for an element, rather than the string itself, if the element has already been placed in the dictionary.

    History
    • Lempel-Ziv-Welch (LZW) is a universal lossless data compression algorithm created by Abraham Lempel, Jacob Ziv, and Terry Welch. It was published by Welch in 1984 as an improved implementation of the LZ78 algorithm published by Lempel and Ziv in 1978.
    • The algorithm is designed to be fast to implement but is not usually optimal because it performs only limited analysis of the data.
    • LZW is used in many applications, such as UNIX compress, GIF for images, and V.42bis for modems.

    ALGORITHM 7.2 - LZW Compression
    BEGIN
      s = next input character;
      while not EOF
      {
        c = next input character;
        if s + c exists in the dictionary
          s = s + c;
        else  // new word
        {
          output the code for s;
          add string s + c to the dictionary with a new code;
          s = c;
        }
      }
      output the code for s;
    END
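The LZW compression loop in the excerpt above can be sketched as runnable code (Python chosen for illustration; starting the dictionary from the 256 single-byte characters is an assumption of this sketch, matching common practice rather than the chapter's text):

```python
def lzw_compress(data):
    """LZW compression following the greedy loop of Algorithm 7.2:
    extend the current string s while s+c is in the dictionary,
    otherwise emit the code for s and register the new word s+c."""
    dictionary = {chr(i): i for i in range(256)}  # single-character strings
    next_code = 256
    s = ""
    output = []
    for c in data:
        if s + c in dictionary:
            s = s + c
        else:
            output.append(dictionary[s])
            dictionary[s + c] = next_code      # new word gets the next code
            next_code += 1
            s = c
    if s:
        output.append(dictionary[s])           # flush the final string
    return output

print(lzw_compress("ABABABA"))  # [65, 66, 256, 258]
```

Note how the repeated "AB" is emitted as code 256 after its first occurrence, and "ABA" as 258, which is exactly the "longer and longer repeated entries" behaviour the chapter describes.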
  • Compression Algorithm in Mobile Packet Core
    Master of Science in Computer Science, September 2020
    Compression Algorithm in Mobile Packet Core
    Lakshmi Nishita Poranki
    Faculty of Computing, Blekinge Institute of Technology, 371 79 Karlskrona, Sweden

    This thesis is submitted to the Faculty of Computing at Blekinge Institute of Technology in partial fulfilment of the requirements for the degree of Master of Science in Computer Science. The thesis is equivalent to 20 weeks of full-time studies. The author declares that they are the sole author of this thesis and that they have not used any sources other than those listed in the bibliography and identified as references. They further declare that they have not submitted this thesis at any other institution to obtain a degree.

    Contact Information: Author: Lakshmi Nishita Poranki, E-mail: [email protected]. University advisor: Siamak Khatibi, Department of Telecommunications. Industrial advisor: Nils Ljungberg, Ericsson, Gothenburg. Blekinge Institute of Technology, SE-371 79 Karlskrona, Sweden. Internet: www.bth.se. Phone: +46 455 38 50 00. Fax: +46 455 38 50 57.

    Abstract — Context: Data compression is a technique used for fast transmission of data and to reduce the storage size of the transmitted data. Data compression is a massive and ubiquitous technology: almost every communication company makes use of it. Data compression is categorized mainly into lossy and lossless compression. Ericsson is a telecommunication company that deals with millions of users' data, and all these data get compressed using the deflate compression algorithm. Due to its compression ratio and speed, the deflate algorithm is not optimal for Ericsson's present use case (compress twice and decompress once).
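The deflate ratio/speed trade-off the excerpt describes can be observed directly with Python's standard-library zlib bindings, which expose deflate's compression levels (the sample payload below is illustrative, not Ericsson's data):

```python
import zlib

# repetitive sample payload standing in for packet-core user data
data = b"mobile packet core payload " * 200

for level in (1, 6, 9):  # fastest, default, best compression
    compressed = zlib.compress(data, level)
    assert zlib.decompress(compressed) == data  # lossless round trip
    print(f"level {level}: {len(data)} -> {len(compressed)} bytes")
```

Higher levels spend more CPU searching for longer back-references; for a compress-twice, decompress-once workload, that per-compression cost is paid twice, which is the asymmetry the thesis targets.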