CFS/EDT for UNIX User Manual

CFS - Catalog & File Services and EDT for UNIX
User Manual
Edition August 2021 (CFSX V1.678)

Passing on or copying this document and using or communicating its contents are not permitted unless expressly authorized. Violations create an obligation to pay damages. In the course of the program's development, features may be added, changed, or removed without prior notice.

Copyright © OPG Online-Programmierung GmbH, 1993 - 2021
Sendlinger Str. 28, 80331 München, Tel. 089/267831, Fax 089/260 9929
E-Mail [email protected], http://www.opg.de
All rights reserved.

Preface

What does this manual contain?

This manual describes the program CFS (Catalog & File Services), which runs on all systems with the operating systems SCO-UNIX, SINIX, and UNIX. The manual consists of the following chapters:

1       Introduction
        An overview of the functions of CFS together with a selection of simple usage examples.
2       Terminology
        This chapter explains the metacharacters used in this description as well as the most frequently used technical terms.
3       The most important CFS screen formats
        An introduction to working with CFS based on the screen formats (masks) it uses.
4 - 8   Description of the input options and commands.
9 - 11  Consolidated presentation of selected topics: keyboard layout, command memory, ...
12      Changing parameters
        Description of the commands for changing CFS-internal parameters. These parameters influence the program's behavior.
13      Hardcopy
14      Help system
        Description of the online help system in CFS. Through this system, all the information printed in this manual is also available on screen.
15      File transfer
16      Parameter file
        Switches, constants, key assignments, color attributes, key codes.
17      Procedure language
18      Installation
19      Files and variables used by CFS
20      Special considerations for POSIX on BS2000/OSD
S       Index

What prior knowledge is required?

Working with the program CFS requires basic knowledge of the UNIX operating system. Terms such as file, path, directory, and environment variable should be familiar to you.

How do you find your way around this manual?

Via the table of contents you can go directly to a command whose name you already know. The table of contents is structured so that you can also locate a function by its description. Beyond that, the index provides cross-references to all CFS-specific terms throughout the manual. Chapters 9 to 11 contain consolidated explanations of selected topics.

If you have suggestions for improvement or other feedback on this manual, please call us or write to us using the form provided for this purpose.

OPG Online-Programmierung GmbH
Sendlinger Str. 28, 80331 München
Tel. 089/267831, Fax 089/260 9929
E-Mail [email protected], http://www.opg.de

CFSX Change Log

August 2021:    Support for ZIP and TAR archives (.zip, .gz, .tar, .gtar) via the action codes D (Display) and NP (New Parameters).

December 2017:  Extended parameters for the EDT commands REFORMAT (p. 9-46) / UNFORMAT (p. 9-62): LF, LF2, LF2+hhhh, LF4.

June 2012:      New EDT commands: @+ [:text] (p. 9-71) increments the current line number; @- [:text] (p. 9-71) decrements the current line number; EDIT SEQUENTIAL (p. 9-26) switches SEQUENTIAL mode on/off. The EDT command @ln(inc) (p. 9-71) was extended by the parameter :text.
November 2011:  New EDT command RUN (p. 9-48): execute a user routine. Extension of the PROC FREE|USED command (p. 9-80): information about work areas (the current data work area / procedure work area, or previously active work areas) can be transferred into a variable. Extension of the PROC n command (p. 9-80): to switch to another work area, the number of the work area can also be given as an integer variable. DO command (p. 72): the number of the work area can also be given as an integer variable.

May 2011:       When reading, writing, and copying large files, the percentage of data processed so far is shown in the status line.

February 2011:  New option for the EDT command HALT (p. 9-29): a return code can additionally be specified, which is made available in the variable $? after the program ends.

November 2010:  Previously, empty records created by the commands ON&CHANGE..., ON&DEL, and DEL:..: were deleted automatically. It is now possible to prevent the deletion of empty records. For this purpose the parameter set_edt_auto_erd (p. 16-8) was introduced in the parameter file: SET_EDT_AUTO_ERD=ON deletes empty records (the default if the parameter is missing), SET_EDT_AUTO_ERD=OFF keeps them (see the parameter-file sketch at the end of this change log).

October 2010:   New variant of the IF command (p. 9-74) for querying errors from the COMP command: IF COMPERR and IF NO COMPERR.

January 2010:   New EDT system variable !%umgebungsvariable% (umgebungsvariable = name of an environment variable): the EDT system variables (p. 9-106) can be used in any character string, e.g. create1:'!%HOME%/datei1'. Variable substitution must be switched on (see the command PAR VARSUBST=YES and the variable-substitution sketch at the end of this change log). With the new parameter "TO line[(inc)]" for the SYSTEM command (p. 9-59), the command's output can be written into the current work area (line = line number, inc = increment).

December 2009:  New parameter CODE for the commands READ (p. 9-41) and WRITE (p. 9-66). It causes the data to be recoded when reading and writing; processing of the data always takes place in the operating system's standard code. With the start parameter "-t" (p. 9-1), a default recoding can already be specified when EDT is loaded; the action code EDT[n]T (p. 9-13) achieves the same default recoding. The CODE command (p. 9-17) was extended by the code variant EDF03DRV. The translate tables were adapted to ISO88591 and EDF041.

November 2009:  New user option NO (Names Only) (p. 4-26): only the names are shown in the file list, which speeds up building the list for very large directories. For more convenient input, the user option can also be given as the suffix ",na" to the file name. The option can additionally be specified with the action codes NP (NPNA) (p. 6-18) and AL (ALNA) (p. 6-6).

August 2009:    New EDT command Cnnn (p. 9-16) for positioning to a column. Previously, the column range could only be shifted left and right relative to the current position with the ">" and "<" commands. The command is available in EDT and in the CFS display/editor. In EDT, for files with mixed record-end characters, the record-end character is displayed as in EDTW (u/d: u = Unix, d = Dos). The file edt.lrf is now only created if LRF mode (p. 16-11) is switched on. New options *DOS / *UNIX / *NO for the ON..FIND command (p. 9-36) in EDT and for the variable action FIND (p. 5-24) in CFSX: these make it possible to search for DOS or Unix record-end characters and for records without a record-end character.
January 2009:   The file selection by date/age (p. 4-11) was extended with the option to select by date/time, date/time to date/time, or date to date / time to time. The additional selection options also apply to the user options LSTA (p. 25) and LACC (p. 24). CFS command REWRITE (p. 7-25): after files are updated, the original access rights are preserved, as with the EDT command REWRITE. S command in the CFS editor: if a record contains several hits, the display automatically positions to the next hit after showing the first one, as in the BS2000 CFS. The automatic adjustment of rows and columns to the window size now also works for POSIX under BS2000.

May 2008:       EDT command COMP (p. 20): the routine for comparing two work areas was optimized. Previously, in certain constellations the comparison could be aborted; in that case the message "Spurious match. Output is not possible." was issued.

February 2007:  The EDT command SEQUENCE was extended by variant 3 (as in BS2000) (p. 9-55). The log of the ON&FIND command (p. 5-24) now also reports the number of hits in addition to the number of records containing hits.

January 2007:   Extension of the SET command for arithmetic with dates and times. Date and time values are converted into seconds since 1.1.1970; the result can then be modified and converted back into a character string (see the date/time sketch at the end of this change log):
                a) SET intvar = TIME [string] (p. 9-84)
                b) SET intvar = DAY [intvar] (p. 9-85)
                c) SET strvar|linevar = DATE|TIME [intvar] (p. 9-93)

December 2006:  The maximum record length for displaying files with the CFS editor (action code D, see p. 8-1) was increased from 1,024 to 32,752. This avoids splitting records into several parts. Files compressed with compress or gzip (suffix ".Z" or ".gz") are automatically decompressed for the action code EDT, the EDT command READ (p. 9-41), and the action code D (p. 8-1), and are read into the work area or opened as a temporary file. Writing back and modify mode are not permitted for such files.

February 2005:  The automatic adjustment of rows and columns to the ...
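
For illustration, a minimal parameter-file sketch for the empty-record behaviour described in the November 2010 entry. Only the SET_EDT_AUTO_ERD values are taken from that entry; the "#" comment line is an illustrative assumption about the parameter file layout (see chapter 16 for the actual syntax):

    # Keep empty records created by ON&CHANGE..., ON&DEL and DEL:..:
    SET_EDT_AUTO_ERD=OFF

Setting SET_EDT_AUTO_ERD=ON, or omitting the parameter, restores the previous behaviour and deletes the empty records.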
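
A short sketch of the environment-variable substitution added in January 2010; both command lines are taken directly from that entry, only their combination into one sequence is assumed:

    PAR VARSUBST=YES
    create1:'!%HOME%/datei1'

The first command switches variable substitution on; the second creates line 1 from the string '!%HOME%/datei1', with !%HOME% replaced by the value of the HOME environment variable.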
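
Finally, a hedged sketch of the date/time arithmetic from the January 2007 entry. Only the SET ... TIME and SET ... DATE forms are taken from that entry; the variable names #I1 and #S1, the arithmetic in the second line, and the value 86400 (seconds per day) are illustrative assumptions:

    SET #I1 = TIME
    SET #I1 = #I1 + 86400
    SET #S1 = DATE #I1

Read as: take the current time in seconds since 1.1.1970, add one day, and convert the result back into a date string (forms a and c above).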