International Journal of Computer Science & Information Security

IJCSIS Vol. 11, No. 10, October 2013
ISSN (online): 1947-5500
© IJCSIS PUBLICATION 2013

CALL FOR PAPERS
International Journal of Computer Science and Information Security (IJCSIS)
January-December 2013 Issues

Please consider contributing to, and/or forwarding to the appropriate groups, the following opportunity to submit and publish original scientific results.

The topics suggested by this issue can be discussed in terms of concepts, surveys, state of the art, research, standards, implementations, running experiments, applications, and industrial case studies. Authors are invited to submit complete unpublished papers, which are not under review in any other conference or journal, in the following (but not limited to) topic areas. See the authors guide for manuscript preparation and submission guidelines.

Indexed by Google Scholar, DBLP, CiteSeerX, Directory of Open Access Journals (DOAJ), Bielefeld Academic Search Engine (BASE), SCIRUS, Cornell University Library, ScientificCommons, EBSCO, ProQuest and more.

Deadline: see web site
Notification: see web site
Revision: see web site
Publication: see web site

Topic areas include: Context-aware systems; Agent-based systems; Networking technologies; Mobility and multimedia systems; Security in networks, systems, and applications; Systems performance; Evolutionary computation; Networking and telecommunications; Industrial systems; Software development and deployment; Knowledge virtualization; Autonomic and autonomous systems; Systems and networks on the chip; Bio-technologies; Knowledge for global defense; Knowledge data systems; Information Systems (IS); Mobile and distance education; IPv6 Today - Technology and deployment; Intelligent techniques, logics and systems; Modeling; Knowledge processing; Software Engineering; Information technologies; Optimization; Internet and web technologies; Complexity; Digital information processing; Natural Language Processing; Cognitive science and knowledge; Speech Synthesis; Data Mining.

For more topics and information, please visit the journal website: https://sites.google.com/site/ijcsis/

Editorial Message from Managing Editor

The International Journal of Computer Science and Information Security (IJCSIS, established in May 2009) is a venue to disseminate research and development results of lasting significance in the theory, design, implementation, analysis, and application of computing and security. As a scholarly, open-access, peer-reviewed international journal, its primary objective is to provide the academic community and industry with a forum for ideas and for the submission of original research related to computer science and security. High-caliber authors are solicited to contribute to this journal by submitting articles that illustrate research results, projects, surveying works and industrial experiences that describe significant advances in computer science and security. IJCSIS archives all publications in major academic/scientific databases; abstracting/indexing, the editorial board and other important information are available online on the homepage. Indexed by the following international agencies and institutions: Google Scholar, Bielefeld Academic Search Engine (BASE), CiteSeerX, SCIRUS, Cornell University Library, EI, Scopus, DBLP, DOI, ProQuest, EBSCO.
Google Scholar reports a large number of cited papers published in IJCSIS. IJCSIS supports the Open Access policy of distribution of published manuscripts, ensuring "free availability on the public Internet, permitting any users to read, download, copy, distribute, print, search, or link to the full texts of [published] articles". The IJCSIS editorial board, consisting of international experts, solicits your contribution to the journal with your research papers, projects, surveying works and industrial experiences. IJCSIS is grateful for all the insights and advice from authors and reviewers. We look forward to your collaboration. For further questions please do not hesitate to contact us at [email protected].

A complete list of journals can be found at: http://sites.google.com/site/ijcsis/

IJCSIS Vol. 11, No. 10, October 2013 Edition
ISSN 1947-5500
© IJCSIS, USA

IJCSIS EDITORIAL BOARD

Dr. Yong Li, School of Electronic and Information Engineering, Beijing Jiaotong University, P. R. China
Prof. Hamid Reza Naji, Department of Computer Engineering, Shahid Beheshti University, Tehran, Iran
Dr. Sanjay Jasola, Professor and Dean, School of Information and Communication Technology, Gautam Buddha University
Dr. Riktesh Srivastava, Assistant Professor, Information Systems, Skyline University College, University City of Sharjah, Sharjah, PO 1797, UAE
Dr. Siddhivinayak Kulkarni, University of Ballarat, Ballarat, Victoria, Australia
Professor (Dr.) Mokhtar Beldjehem, Sainte-Anne University, Halifax, NS, Canada
Dr. Alex Pappachen James (Research Fellow), Queensland Micro-nanotechnology Center, Griffith University, Australia
Dr. T. C. Manjunath, HKBK College of Engineering, Bangalore, India
Prof. Elboukhari Mohamed, Department of Computer Science, University Mohammed First, Oujda, Morocco

TABLE OF CONTENTS

1. Paper 30091303: The Application of Data Mining to Build Classification Model for Predicting Graduate Employment (pp. 1-7)

Bangsuk Jantawan, Department of Tropical Agriculture and International Cooperation, National Pingtung University of Science and Technology, Pingtung, Taiwan
Cheng-Fa Tsai, Department of Management Information Systems, National Pingtung University of Science and Technology, Pingtung, Taiwan

Abstract — Data mining has been applied in various areas because of its ability to rapidly analyze vast amounts of data. This study builds a Graduate Employment Model using the classification task in data mining and compares several data-mining approaches, namely the Bayesian method and the Tree method. The Bayesian method includes five algorithms: AODE, BayesNet, HNB, NaiveBayes and WAODE. The Tree method includes five algorithms: BFTree, NBTree, REPTree, ID3 and C4.5. The experiment uses the classification task in WEKA, and we compare the results of each algorithm, where several classification models were generated. To validate the generated models, the experiments were conducted using real data collected from graduate profiles at Maejo University in Thailand. The model is intended to be used for predicting whether a graduate was employed, unemployed, or in an undetermined situation.

Keywords — Bayesian method; Classification model; Data mining; Tree method
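(Editor's illustrative sketch.) The comparison described in this abstract is the kind of workflow WEKA's Java API supports directly. The sketch below is only a minimal, hypothetical example of such a workflow, not the authors' code: the dataset file name graduates.arff, its attribute layout, and the use of 10-fold cross-validation are assumptions, and only NaiveBayes and J48 (WEKA's C4.5 implementation) are shown, since the other algorithms named in the abstract (AODE, HNB, WAODE, BFTree, NBTree, REPTree, ID3) are available as separate WEKA classes or add-on packages depending on the WEKA version.

import java.util.Random;

import weka.classifiers.Classifier;
import weka.classifiers.Evaluation;
import weka.classifiers.bayes.NaiveBayes;
import weka.classifiers.trees.J48;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class GraduateEmploymentModel {
    public static void main(String[] args) throws Exception {
        // Load the graduate-profile dataset (file name and attribute layout are assumed).
        Instances data = new DataSource("graduates.arff").getDataSet();
        // Assume the class attribute (employed / unemployed / undetermined) is the last one.
        data.setClassIndex(data.numAttributes() - 1);

        // One Bayesian and one tree classifier, as representatives of the two method families.
        Classifier[] candidates = { new NaiveBayes(), new J48() }; // J48 is WEKA's C4.5

        for (Classifier c : candidates) {
            Evaluation eval = new Evaluation(data);
            eval.crossValidateModel(c, data, 10, new Random(1)); // 10-fold cross-validation
            System.out.printf("%-12s accuracy: %.2f%%%n",
                    c.getClass().getSimpleName(), eval.pctCorrect());
        }
    }
}

Running such a sketch prints a cross-validated accuracy per classifier, which is the kind of figure a study like this compares across its candidate algorithms.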
2. Paper 30091323: Randomness Analysis of 128-bit Blowfish Block Cipher on ECB Mode (pp. 8-21)

(1) Ashwak ALabaichi, (2) Ramlan Mahmod, (3) Faudziah Ahmad
(1)(3) Information Technology Department, University Utara Malaysia, 06010 Sintok, Malaysia
(1) Department of Computer Science, Faculty of Sciences, Kerbala University, Kerbala 00964, Iraq
(2) Faculty of Computer Science and Information Technology, University Putra Malaysia, Serdang, Selangor 43400, Malaysia

Abstract — This paper presents statistical randomness testing of the 128-bit Blowfish block cipher, continuing our earlier papers "Randomness Analysis on Blowfish Block Cipher using ECB mode" and "Randomness Analysis on Blowfish Block Cipher using ECB and CBC Modes". Blowfish 128-bit is an extension of Blowfish 64-bit: the two ciphers are alike except that the original Blowfish algorithm has a 64-bit block size with operations on 32-bit words, whereas the extended version has a 128-bit block size with all operations based on 64-bit words. Blowfish 128-bit is a symmetric block cipher with variable key lengths from 64 bits up to a maximum of 896 bits, which increases the security of the algorithm. The randomness testing was performed using the NIST Statistical Test Suite on Blowfish 128-bit in ECB mode. Our analysis showed that the Blowfish 128-bit algorithm in ECB mode, like Blowfish 64-bit, is not appropriate for text and image files that contain a huge number of identical bytes, but performs better than Blowfish 64-bit with video files. C++ was used to implement Blowfish 128-bit, while the NIST suite was run under the Linux operating system.

Keywords: Block cipher, Blowfish algorithm 128-bit, ECB mode, randomness testing.

3. Paper 30091328: Relay Assisted Epidemic Routing Scheme for Vehicular Ad hoc Network (pp. 22-26)

Murlidhar Prasad Singh, Deptt. of CSE, UIT, RGPV Bhopal, MP, India
Dr. Piyush Kumar Shukla, Deptt. of CSE, UIT, RGPV Bhopal, MP, India
Anjna Jayant Deen, Deptt. of CSE, UIT, RGPV Bhopal, MP, India

Abstract — Vehicular ad hoc networks are networks in which no simultaneous end-to-end path exists. Typically, message delivery experiences long delays as a result of the disconnected nature of the network. In this form of network, our main goal is to deliver messages to the destination with minimum delay. We propose a relay-assisted epidemic routing scheme that uses relay nodes (stationary nodes) at intersections together with a varying number of mobile nodes; it differs from existing routing protocols in how routing decisions are made at road intersections where relay nodes are deployed. Vehicles keep moving while relay nodes are static. The purpose of deploying relay nodes is to increase contact opportunities, reduce delays and enhance the delivery rate. Various simulations have shown that relay nodes improve the message delivery probability rate and decrease
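(Editor's illustrative sketch.) The abstract above describes the relay-assisted scheme at the level of simulation results; the minimal Java sketch below illustrates only the general store-carry-forward idea that stationary relay nodes exploit, not the authors' protocol: on contact, two nodes exchange summary vectors (the IDs of the messages each already buffers) and copy whatever the other side is missing, so a relay at an intersection accumulates messages from passing vehicles and hands them to vehicles that arrive later. All class, field and message names are hypothetical.

import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Minimal store-carry-forward node; vehicles and stationary relays share the same buffer logic.
class EpidemicNode {
    final String id;
    final boolean isRelay;                              // true for a stationary node at an intersection
    final Map<String, String> buffer = new HashMap<>(); // messageId -> payload

    EpidemicNode(String id, boolean isRelay) {
        this.id = id;
        this.isRelay = isRelay;
    }

    // Summary vector: the set of message IDs this node already carries.
    Set<String> summaryVector() {
        return new HashSet<>(buffer.keySet());
    }

    // On contact, each side copies (not moves) the messages it is missing from the other.
    void exchangeWith(EpidemicNode peer) {
        for (String msgId : peer.summaryVector()) {
            buffer.putIfAbsent(msgId, peer.buffer.get(msgId));
        }
        for (String msgId : summaryVector()) {
            peer.buffer.putIfAbsent(msgId, buffer.get(msgId));
        }
    }
}

public class RelayAssistedDemo {
    public static void main(String[] args) {
        EpidemicNode vehicleA = new EpidemicNode("vehicleA", false);
        EpidemicNode relay    = new EpidemicNode("relay-at-junction", true);
        EpidemicNode vehicleB = new EpidemicNode("vehicleB", false);

        vehicleA.buffer.put("m1", "hazard report");  // message created on vehicle A
        vehicleA.exchangeWith(relay);                // A passes the intersection: the relay stores m1
        vehicleB.exchangeWith(relay);                // later, B passes: the relay hands m1 over
        System.out.println("vehicleB received m1: " + vehicleB.buffer.containsKey("m1"));
    }
}

In a real VANET simulation the exchange would be triggered by radio contact events and bounded by buffer size and message lifetime; those details are omitted in this sketch.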