Databridge ETL Solution Datasheet

DATASHEET
Extract and Transform MCP Host Data for Improved Analysis and Decision Support

KEY FEATURES
• Client configuration tool for easy customization of table layout.
• Dynamic before-and-after images (BI-AI) based on key changes.
• 64-bit clients.
• Client-side management console.
• Ability to run the client as a service or a daemon.
• Multi-threaded clients to increase processing speed.
• Support for Windows Server 2012.
• Secure automation of Unisys MCP data replication.
• Seamless integration of both DMSII and non-DMSII data with Oracle, Microsoft SQL, and other relational databases.
• Support for a variety of database client platforms, including Windows, AIX, Solaris, HP-UX, SUSE Linux, and Red Hat Linux.
• Audit mirroring for real-time data recovery.
• High-speed file transfer to lower the use of mainframe resources.

Fast, well-informed business decisions require access to your organization's key performance indicators residing on critical database systems. But the prospect of exposing those systems inevitably raises concerns around security, data integrity, cost, and performance.

For organizations using the Unisys ClearPath MCP server and its non-relational DMSII database, there's an additional challenge: most business intelligence tools support only relational databases.

The Only True ETL Solution for DMSII Data
That's why businesses like yours are turning to Attachmate® DATABridge™. It's the only true Extract, Transform, Load (ETL) solution that securely integrates Unisys MCP DMSII and non-DMSII data into a secondary system.

With DATABridge, you can easily integrate production data into a relational database or another DMSII database located on an entirely different Unisys host system. And because DATABridge clients for Oracle and Microsoft SQL Server support a breadth of operating environments (including Windows 7, Windows Server 2012, Windows Server 2008, UNIX, AIX, SUSE Linux, and Red Hat Linux), DATABridge solutions fit seamlessly into your existing infrastructure. Organizational decision making has never been more streamlined and effective.

Here, the non-relational DMSII database is replicated to a target relational database on a Windows, UNIX, or AIX server.

Product Highlights

On-the-Fly Integration of DMSII Data
Quick access to company data provides a definite competitive edge, but access is often difficult. Furthermore, customized reporting ties up MIS and uses mainframe resources that could otherwise handle transaction processing.

Using DATABridge to dynamically integrate DMSII data with a SQL Server or Oracle database, you can provide relational data that can be manipulated more easily and completely—without the drawbacks of customized reporting. Unlike other solutions that support ETL technology via separate modules, DATABridge delivers an entirely cohesive, all-in-one solution that allows simultaneous ETL, without the use of additional utilities. The DATABridge approach is direct and real-time, so you don't have to create additional files that increase back-end overhead.

Complete Control and Reliable Results
Database administrators have complete control over what gets extracted because DATABridge provides granular filtering, both at the data source and the data target. DATABridge also supports cascade filtering, so administrators can ensure that each functional group in an organization receives data entirely customized for their needs.
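To make the filtering idea concrete, here is a minimal sketch of the kind of row and column filtering a DBA might layer onto the relational side so that each functional group sees only its own slice of the replicated data. Everything in it is illustrative: the customer table, its columns, and the per-group views are hypothetical names, sqlite3 stands in for the actual SQL Server or Oracle target, and none of this is DATABridge configuration syntax.

```python
import sqlite3

# Stand-in for the replicated relational target; in practice this would be
# the SQL Server or Oracle database that DATABridge loads from DMSII.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- Hypothetical replicated dataset (names are illustrative only).
    CREATE TABLE customer (
        cust_id      INTEGER PRIMARY KEY,
        name         TEXT,
        region       TEXT,
        credit_limit REAL,
        ssn          TEXT            -- sensitive item a DBA may filter out
    );

    -- Row filtering: the West-region sales group sees only its own rows.
    CREATE VIEW customer_sales_west AS
        SELECT cust_id, name, region
        FROM customer
        WHERE region = 'WEST';

    -- Column filtering: finance sees credit data but never the SSN item.
    CREATE VIEW customer_finance AS
        SELECT cust_id, name, credit_limit
        FROM customer;
""")

conn.execute("INSERT INTO customer VALUES (1, 'Acme',   'WEST', 50000, '000-00-0000')")
conn.execute("INSERT INTO customer VALUES (2, 'Globex', 'EAST', 75000, '111-11-1111')")

print(conn.execute("SELECT * FROM customer_sales_west").fetchall())  # [(1, 'Acme', 'WEST')]
print(conn.execute("SELECT * FROM customer_finance").fetchall())
```

The same idea extends to cascade filtering: a first filter narrows what leaves the host, and further filters or views on the target narrow it again for each group downstream.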
With DATABridge, you can clone a DMSII database in its entirety, limit replication to selected datasets, or further filter down to individual records and items within specific datasets. The cloned database is refreshed at user-defined intervals. Since only changed records are replicated from the original to the destination database, updates can be executed more frequently, with minimal host-system overhead. This information can then be reported on, queried, or analyzed.

The DATABridge client-side management console provides a new user interface for ease of use.

DATABridge also supports automatic, event-driven updates. Because it can read and decipher DMSII audit trails, you can sync up the source and destination databases whenever necessary. This capability provides ideal synchronization of relational databases with DMSII end-of-day processing or other scheduled events, so you can help ensure the integrity of replicated data.

Convenient Access Via Web Browser
Publishing MCP host data to an intranet or extranet gives users the convenience they need. And when you use DATABridge to replicate DMSII data onto a relational database for web access, you can front-end that data with any of the web publishing tools available today.

You control access to the replicated DMSII host data, so customers, partners, and prospects can see selected information without compromising the host system. Moreover, your host security is protected from unknown users.

Improved Application Development with DMSII Replication
If you replicate MCP host data to a development platform, you can safeguard the transaction processing of the production system; you can also protect data integrity by letting developers access a mirror image of production data. Developers can further protect sensitive data by configuring DATABridge to replicate only selected portions of the database.

Data replication improves the application-development process by filtering and formatting DMSII data into a form that is usable by off-the-shelf applications, which are typically based on relational database models. Using DATABridge to replicate information from a DMSII database to a relational database means that the data will be easier for developers to manipulate.

Components to Simplify Every Task
In addition to the product highlights above, DATABridge offers these resource-saving components:

DATABridge Audit Mirroring for real-time data recovery
This component can aid in disaster recovery by mirroring audit files in the background to a Windows platform or a secondary MCP. You now have the ability to get a quicker recovery point objective for your data, in real time.

DATABridge high-speed file transfer to reduce the use of mainframe resources
You can significantly minimize use of your mainframe resources when transferring files with DATABridge. The high-speed file-transfer component helps eliminate most MCP overhead to transfer text, printer, and data files.

DATABridge Plus for near real-time replication of DMSII databases
This component provides continuous database updates between DMSII audit file closures. DATABridge Plus unobtrusively reads the current audit file and immediately delivers updates to the relational database or replicated DMSII database. Small, periodic audit file updates eliminate uneven strains on network bandwidth.
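The sketch below shows, in schematic form, why replicating only changed records keeps host overhead low: each cycle applies a small batch of change records (inserts, updates, and deletes keyed on the primary key) to the relational copy instead of recloning whole datasets. It is a conceptual illustration only; the change-record tuples and the apply_changes helper are hypothetical, sqlite3 again stands in for the real target database, and DATABridge derives its changes internally from the DMSII audit trail rather than from a structure like this.

```python
import sqlite3

def apply_changes(conn, changes):
    """Apply a batch of hypothetical change records to the replicated copy.

    Each record is (op, cust_id, name, balance), where op is 'I' (insert),
    'U' (update), or 'D' (delete), keyed on the primary key cust_id.
    """
    for op, cust_id, name, balance in changes:
        if op in ("I", "U"):
            # Upsert keeps the target in step without recloning the dataset.
            # (SQLite UPSERT syntax requires SQLite 3.24 or later.)
            conn.execute(
                "INSERT INTO account (cust_id, name, balance) VALUES (?, ?, ?) "
                "ON CONFLICT(cust_id) DO UPDATE SET name = excluded.name, "
                "balance = excluded.balance",
                (cust_id, name, balance),
            )
        elif op == "D":
            conn.execute("DELETE FROM account WHERE cust_id = ?", (cust_id,))
    conn.commit()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE account (cust_id INTEGER PRIMARY KEY, name TEXT, balance REAL)")

# Initial clone, then an incremental cycle containing only the changed records.
apply_changes(conn, [("I", 1, "Acme", 100.0), ("I", 2, "Globex", 200.0)])
apply_changes(conn, [("U", 2, "Globex", 250.0), ("D", 1, None, None)])

print(conn.execute("SELECT * FROM account").fetchall())  # [(2, 'Globex', 250.0)]
```

Because each cycle touches only the rows that changed since the last audit-file closure, updates can run frequently, or be tied to end-of-day processing, without re-extracting the full dataset from the host.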
The new DATABridge client configuration tool gives you control over your cloning options via an easy-to-use interface.

DATABridge FileXtract for non-DMSII data replication
To perform effective processing and analysis, organizations need the ability to replicate non-DMSII information from Unisys MCP hosts. DATABridge FileXtract empowers organizations to analyze file data within application audit files, Communications Transaction Trails, system summary logs, and others. Replicating data to a secondary system (such as a UNIX or Windows server) means the production data source is never impacted. With FileXtract you can leverage industry-standard reporting tools, including Microsoft Access, Excel, and Crystal Reports, for effortless reporting projects.

DATABridge Enterprise Server for increased processing efficiency
DATABridge Enterprise Server (DBE) significantly conserves mainframe resources by offloading current host processing to a Windows Server. You can install DBE on either the server portion of a Unisys MCP-hosted mainframe or on a separate machine that has visibility to the mainframe host disk units. DBE resides between the […] operations before sending data on to client systems. DBE accelerates data cloning and updates as a direct result of the reduction in host overhead. It processes an audit file only once before parsing data out to multiple clients.

DATABridge Twin Client for host-to-host cloning of an entire DMSII database
DATABridge Twin Client replicates all of the datasets from the primary database and performs continuous updates to the secondary database. Because it makes an identical (vs. selective) clone of the primary database, host applications written for the primary database can seamlessly be used on the secondary one.

DATABridge DMSII Client for selective cloning
The DMSII Client performs selective cloning while making replications and updates, to create a unique secondary database different from the primary. It supports row and column filtering as well as the ability to reclone a structure after reorganization on the primary database. Using this client, you can send customized data from one host to multiple hosts, potentially servicing hundreds or even thousands of end users.

A Modern Breakthrough in Business Decision Making
DATABridge sets a new standard for enabling comprehensive, integrated data analysis. By moving selected data from the host to a relational database (or multiple databases), organizations can combine data from several external sources, perform trend analysis, and generate a wide variety of reports for improved decision support. With DATABridge, you can also store large amounts of historical data on a UNIX or Windows system. Offloading storage of historical data means maximizing transaction processing performance. And thanks to its native architecture, DATABridge […]