Trustworthy Whole-System Provenance for the Linux Kernel


Adam Bates, Dave (Jing) Tian, Kevin R.B. Butler
University of Florida
{adammbates,daveti,butler}@ufl.edu

Thomas Moyer
MIT Lincoln Laboratory
[email protected]

The Lincoln Laboratory portion of this work was sponsored by the Assistant Secretary of Defense for Research & Engineering under Air Force Contract #FA8721-05-C-0002. Opinions, interpretations, conclusions and recommendations are those of the author and are not necessarily endorsed by the United States Government.

Abstract

In a provenance-aware system, mechanisms gather and report metadata that describes the history of each object being processed on the system, allowing users to understand how data objects came to exist in their present state. However, while past work has demonstrated the usefulness of provenance, less attention has been given to securing provenance-aware systems. Provenance itself is a ripe attack vector, and its authenticity and integrity must be guaranteed before it can be put to use.

We present Linux Provenance Modules (LPM), the first general framework for the development of provenance-aware systems. We demonstrate that LPM creates a trusted provenance-aware execution environment, collecting complete whole-system provenance while imposing as little as 2.7% performance overhead on normal system operation. LPM introduces new mechanisms for secure provenance layering and authenticated communication between provenance-aware hosts, and also interoperates with existing mechanisms to provide strong security assurances. To demonstrate the potential uses of LPM, we design a Provenance-Based Data Loss Prevention (PB-DLP) system. We implement PB-DLP as a file transfer application that blocks the transmission of files derived from sensitive ancestors while imposing just tens of milliseconds overhead. LPM is the first step towards widespread deployment of trustworthy provenance-aware applications.

1 Introduction

A provenance-aware system automatically gathers and reports metadata that describes the history of each object being processed on the system. This allows users to track, and understand, how a piece of data came to exist in its current state. The application of provenance is presently of enormous interest in a variety of disparate communities including scientific data processing, databases, software development, and storage [43, 53]. Provenance has also been demonstrated to be of great value to security by identifying malicious activity in data centers [5, 27, 56, 65, 66], improving Mandatory Access Control (MAC) labels [45, 46, 47], and assuring regulatory compliance [3].

Unfortunately, most provenance collection mechanisms in the literature exist as fully-trusted user space applications [28, 27, 41, 56]. Even kernel-based provenance mechanisms [43, 48] and sketches for trusted provenance architectures [40, 42] fall short of providing a provenance-aware system for malicious environments. The problem of whether or not to trust provenance is further exacerbated in distributed environments, or in layered provenance systems, due to the lack of a mechanism to verify the authenticity and integrity of provenance collected from different sources.

In this work, we present Linux Provenance Modules (LPM), the first generalized framework for secure provenance collection on the Linux operating system. Modules capture whole-system provenance, a detailed record of processes, IPC mechanisms, network activity, and even the kernel itself; this capture is invisible to the applications for which provenance is being collected. LPM introduces a gateway that permits the upgrading of low-integrity workflow provenance from user space. LPM also facilitates secure distributed provenance through an authenticated, tamper-evident channel for the transmission of provenance metadata between hosts. LPM interoperates with existing security mechanisms to establish a hardware-based root of trust to protect system integrity.
Achieving the goal of trustworthy whole-system provenance, we demonstrate the power of our approach by presenting a scheme for Provenance-Based Data Loss Prevention (PB-DLP). PB-DLP allows administrators to reason about the propagation of sensitive data and control its further dissemination through an expressive policy system, offering dramatically stronger assurances than existing enterprise solutions, while imposing just milliseconds of overhead on file transmission. To our knowledge, this work is the first to apply provenance to DLP. Our contributions can thus be summarized as follows:

• Introduce Linux Provenance Modules (LPM). LPM facilitates secure provenance collection at the kernel layer, supports attested disclosure at the application layer, provides an authenticated channel for network transmission, and is compatible with the W3C Provenance (PROV) Model [59]. In evaluation, we demonstrate that provenance collection imposes as little as 2.7% performance overhead.

• Demonstrate secure deployment. Leveraging LPM and existing security mechanisms, we create a trusted provenance-aware execution environment for Linux. Through porting Hi-Fi [48] and providing support for SPADE [29], we demonstrate the relative ease with which LPM can be used to secure existing provenance collection mechanisms. We show that, in realistic malicious environments, ours is the first proposed system to offer secure provenance collection.

• Introduce Provenance-Based Data Loss Prevention (PB-DLP). We present a new paradigm for the prevention of data leakage that searches object provenance to identify and prevent the spread of sensitive data. PB-DLP is impervious to attempts to launder data through intermediary files and IPC. We implement PB-DLP as a file transfer application, and demonstrate its ability to query object ancestries in just tens of milliseconds (a sketch of such an ancestry check follows this list).
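The PB-DLP contribution above reduces to an ancestry query over recorded provenance: before a file leaves the host, its lineage is searched for any object an administrator has labeled sensitive. The following is a minimal sketch of that idea, not the paper's implementation; the in-memory parents map, the SENSITIVE label set, and the function names are hypothetical stand-ins for LPM's actual provenance store and policy engine.

```python
from collections import deque

# Hypothetical in-memory provenance store: each object maps to the set of
# objects it was directly derived from (its parents in the provenance DAG).
parents = {
    "report.pdf": {"draft.tex", "figures.tar"},
    "draft.tex": {"/home/alice/secret/plan.txt"},
    "figures.tar": set(),
}

# Hypothetical policy: objects an administrator has labeled sensitive.
SENSITIVE = {"/home/alice/secret/plan.txt"}

def derives_from_sensitive(obj):
    """Breadth-first walk of the object's ancestry; True if any ancestor is sensitive."""
    seen, queue = set(), deque([obj])
    while queue:
        cur = queue.popleft()
        if cur in SENSITIVE:
            return True
        for parent in parents.get(cur, ()):
            if parent not in seen:
                seen.add(parent)
                queue.append(parent)
    return False

def pb_dlp_send(path):
    """Transfer guard: refuse to transmit files with sensitive ancestry."""
    if derives_from_sensitive(path):
        raise PermissionError(f"PB-DLP: {path} is derived from sensitive data")
    print(f"transmitting {path} ...")  # placeholder for the actual file transfer

for f in ("figures.tar", "report.pdf"):
    try:
        pb_dlp_send(f)
    except PermissionError as err:
        print(err)
```

In this toy run, figures.tar is transmitted because no ancestor is sensitive, while report.pdf is blocked because it derives from plan.txt via the intermediary draft.tex; laundering through intermediaries does not hide the lineage.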
2 Background

Data provenance, sometimes called lineage, describes the actions taken on a data object from its creation up to the present. Provenance can be used to answer a variety of historical questions about the data it describes. Such questions include, but are not limited to, "What processes and datasets were used to generate this data?" and "In what environment was the data produced?" Conversely, provenance can also answer questions about the successors of a piece of data, such as "What objects on the system were derived from this object?" Although potential applications for such information are nearly limitless, past proposals have conceptualized provenance in different ways, indicating that a one-size-fits-all solution to provenance collection is unlikely to meet the needs of all of these audiences. We review these past proposals for provenance-aware systems in Section 8.

The commonly accepted representation for data provenance is a directed acyclic graph (DAG). In this work, we use the W3C PROV-DM specification [59] because it is pervasive and facilitates the exchange of provenance between deployments. An example PROV-DM graph of a malicious binary is shown in Figure 1. This graph describes an attack in which a binary running with root privilege reads several sensitive system files, then edits those files in an attempt to gain persistent access to the host. Edges encode relationships between nodes, pointing backwards into the history of system execution. Writing to an object triggers the creation of a second object node with an incremented version number. This particular provenance graph could serve as a valuable forensics tool, allowing system administrators to better understand the nature of a network intrusion.

[Figure 1: A provenance graph showing the attack footprint of a malicious binary. Edges encode relationships that flow backwards into the history of system execution, and writing to an object creates a second node with an incremented version number. Here, we see that the binary has rewritten /etc/rc.local, likely in an attempt to gain persistence after a system reboot.]
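To make the structure of Figure 1 concrete, the sketch below shows one simple way such a versioned provenance graph could be represented: each write creates a new object node with an incremented version number, and edges such as Used, WasGeneratedBy, and WasControlledBy point backwards into history. This is an illustrative simplification under assumed naming conventions, not the PROV-DM serialization or LPM's actual format.

```python
# Edges point backwards in time: (subject, relation, object) means the
# subject's state depends on the object earlier in system history.
edges = []           # e.g. ("Malicious Binary", "Used", "/etc/passwd:0")
latest_version = {}  # current version number of each file object

def node(path):
    """Name of the current version node for a file, e.g. '/etc/shadow:0'."""
    return f"{path}:{latest_version.setdefault(path, 0)}"

def read(process, path):
    edges.append((process, "Used", node(path)))

def write(process, path):
    # A write creates a *new* version node that was generated by the process.
    latest_version[path] = latest_version.get(path, 0) + 1
    edges.append((node(path), "WasGeneratedBy", process))

def run_as(process, user):
    edges.append((process, "WasControlledBy", user))

# Rebuild the attack footprint of Figure 1: a root-controlled binary reads,
# then rewrites, several sensitive system files.
run_as("Malicious Binary", "root")
for f in ["/etc/rc.local", "/bin/ps", "/var/spool/cron/root",
          "/etc/passwd", "/etc/shadow"]:
    read("Malicious Binary", f)
    write("Malicious Binary", f)

for e in edges:
    print(e)
```

Traversing these backward-pointing edges from any version-1 node reaches both the version-0 file it overwrote and the controlling user, which is exactly the forensic walk an administrator would perform.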
2.1 Data Loss Prevention

Data Loss Prevention (DLP) is enterprise software that seeks to minimize the leakage of sensitive data by monitoring and controlling information flow in large, complex organizations [1].¹ In addition to the desire to control intellectual property, another motivator for DLP systems is demonstrating regulatory compliance for personally-identifiable information (PII),² as well as directives such as PCI, HIPAA, SOX, or E.U. Data Protection. As encryption can be used to protect data at rest from unauthorized access, the true DLP challenge involves preventing leakage at the hands of authorized users, both malicious and well-meaning agents. This latter group is a surprisingly big problem in the fight to control an organization's intellectual property; a 2013 study conducted by the Ponemon Institute found that over half of companies' employees admitted to emailing intellectual property to their personal email accounts, with 41 percent admitting to doing so on a weekly basis [2]. It is therefore important for a DLP system to be able to exhaustively explain which pieces of data are sensitive, where that data …

¹ Our overview of data loss prevention is based on review of publicly available product descriptions for software developed by Bit9, CDW, Cisco, McAfee, Symantec, and Titus.
² See NIST SP 800-122.

3.1 Defining Whole-System Provenance

In the design of LPM, we adopt a model for whole-system provenance that is broad enough to accommodate the needs of a variety of existing provenance projects. To arrive at a definition, we inspect four past proposals that collect broadly scoped provenance: SPADE [29], LineageFS [53], PASS [43], and Hi-Fi [48]. SPADE provenance is structured around primitive operations of system activities with data inputs and outputs. It instruments file and process system calls, and associates each call to a process ID (PID), user identifier, and network address.
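The SPADE description above suggests what a single collected event might carry: the primitive operation, the process and user that performed it, the network endpoint if one was involved, and the data inputs and outputs. A minimal record of that shape is sketched below; the field names and the example event are assumptions for illustration and do not reflect SPADE's or LPM's actual schema.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ProvenanceEvent:
    """One whole-system provenance record for a primitive system activity."""
    operation: str                     # e.g. "read", "write", "connect"
    pid: int                           # process that performed the call
    uid: int                           # user identifier it ran as
    net_address: Optional[str] = None  # remote endpoint, if any
    inputs: List[str] = field(default_factory=list)   # data objects consumed
    outputs: List[str] = field(default_factory=list)  # data objects produced

# Example: a root-owned process rewriting /etc/rc.local, producing version 1.
event = ProvenanceEvent(
    operation="write",
    pid=4242,
    uid=0,
    inputs=["/etc/rc.local:0"],
    outputs=["/etc/rc.local:1"],
)
print(event)
```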