Hardware-Based Performance Monitoring with VTune Performance Analyzer Under Linux


Hassan Shojania
[email protected]

Abstract

All new modern processors have hardware support for monitoring processor performance. In this project, we explore the use of VTune Performance Analyzer for hardware-based performance monitoring of a Linux cluster of Pentium 4 Xeon processors.

1 Introduction

Performance measurement of any high-performance cluster system is critical for the development and deployment of efficient applications for such a system. All new modern processors have special hardware to monitor processor performance. This hardware-based performance measurement has many advantages over traditional intrusive methods based on adding code that probes the execution time of portions of a program. For example, data collected by this hardware provides performance information on applications, the operating system, and the processor. These data can guide performance improvement efforts by helping programmers tune the algorithms they use and the code sequences that implement those algorithms [1].

In this project, we intend to explore the VTune Performance Analyzer tool set from Intel for performance measurement of standard MPI/OpenMP-based benchmarks on our cluster of Dell PowerEdge 2650/6650 machines (2/4-way SMP systems with Xeon processors) running Linux. As a first step towards VTune, we target mainly the proper system setup/configuration, with limited trials of a few benchmarks. This should provide enough background for more in-depth analysis of other benchmarks.

This report is organized as follows: First we overview the basics of performance monitoring hardware in Section 2. Features of the Pentium 4 performance monitoring hardware are overviewed in Section 3. In Section 4 we describe the data collection in VTune with the help of the concepts presented in Sections 2 and 3. Section 5 explains different deployment choices of VTune and the installation steps. Section 6 provides some sample test results. In Section 7 we present our conclusions.

Sections 2 and 3 draw heavily on [1] and [2], which are excellent sources for more information about performance monitoring hardware and the features of the Pentium 4 in this area. Of course, [3] provides the raw register-level information for programming the Pentium 4 performance monitoring hardware.

2 Basics of performance monitoring hardware

There are different approaches for collecting processor performance data:

1. Modifying the application to add instrumentation code that collects various data such as instruction traces and memory reference data. This requires either rebuilding the application from source code or modifying its executable; neither is usually favorable (especially for operating system code). Also, these approaches can disturb the application's behavior, raising questions about the validity of the collected data.

2. Another way to collect processor performance data is to use a simulator to model the processor as it executes the application. This simulation approach can yield detailed data on processor blocks such as pipeline stalls, branch prediction, cache performance, and so on. However, processor manufacturers do not usually provide simulators for advanced processor designs, and third parties do not know enough about the hardware details to build such a simulator.

3. Using performance-monitoring hardware has several distinct advantages over the previous approaches. Having the processor itself collect performance data as it executes an application has several benefits. First, the application and operating system remain largely unmodified. Second, the accuracy of the collected event counts is much higher than with loose simulators that cannot reproduce exact hardware behavior. Third, performance-monitoring hardware collects data on the fly as the application executes, avoiding slow simulation-based approaches. Fourth, this approach can collect data for both the application and the operating system. These advantages often make hardware performance monitoring the preferred, and sometimes only, choice for collecting processor performance data.

Performance-monitoring hardware typically has two components: performance event detectors and event counters. Users can configure a performance event detector to detect any one of several performance events (for example, cache misses or branch mispredictions). Often, event detectors have an event mask field that allows further qualification of the event: for example, by the processor's privilege mode (user/supervisor), to separate events generated by the application from those generated by operating system code, or by filtering accesses to the L2 cache based on the cache line's specific state (i.e. modified, shared, exclusive, or invalid).

Further configuration is usually possible by enabling event counters only under certain edge and threshold conditions. The edge detection feature is most often used for events that detect the presence or absence of certain conditions every cycle, like a pipeline stall. The threshold feature lets the event counter compare the value it reports each cycle to a threshold value and increment the counter only when that threshold is exceeded. The threshold feature is only useful for performance events that can report values greater than one per cycle; for example, for an "Instructions Completed" event, the number of cycles in which three or more instructions were completed (in one cycle) can be counted by using a threshold of two.
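This report collects such data through VTune's own kernel driver. Purely as an illustration of how a single event detector/counter pair is programmed, the sketch below uses the perf_event_open(2) interface of modern Linux kernels, which postdates this report and is not what VTune uses. It counts last-level cache misses for the calling process with a user-mode-only qualifier, the same kind of privilege-mode event masking described above.

/* Minimal sketch (not VTune's mechanism): program one hardware event
 * detector/counter from user space via Linux perf_event_open(2), count
 * last-level cache misses for this process, qualified to user mode only. */
#define _GNU_SOURCE
#include <linux/perf_event.h>
#include <sys/syscall.h>
#include <sys/ioctl.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

static long perf_event_open(struct perf_event_attr *attr, pid_t pid,
                            int cpu, int group_fd, unsigned long flags)
{
    return syscall(__NR_perf_event_open, attr, pid, cpu, group_fd, flags);
}

int main(void)
{
    struct perf_event_attr attr;
    memset(&attr, 0, sizeof(attr));
    attr.type = PERF_TYPE_HARDWARE;
    attr.size = sizeof(attr);
    attr.config = PERF_COUNT_HW_CACHE_MISSES; /* the performance event */
    attr.disabled = 1;                        /* start stopped */
    attr.exclude_kernel = 1;                  /* event mask: user mode only */
    attr.exclude_hv = 1;

    int fd = perf_event_open(&attr, 0 /* this process */, -1 /* any CPU */, -1, 0);
    if (fd == -1) { perror("perf_event_open"); exit(EXIT_FAILURE); }

    ioctl(fd, PERF_EVENT_IOC_RESET, 0);
    ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);

    /* Workload to be measured: touch a buffer larger than the caches. */
    size_t n = 64 * 1024 * 1024;
    volatile char *buf = malloc(n);
    for (size_t i = 0; i < n; i += 64)
        buf[i]++;

    ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);

    uint64_t count;
    if (read(fd, &count, sizeof(count)) == sizeof(count))
        printf("user-mode cache misses: %llu\n", (unsigned long long)count);

    close(fd);
    return 0;
}

Restricting the count to user mode here mirrors the user/supervisor bit of the hardware event mask; clearing exclude_kernel would fold in events generated by operating system code as well.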
2.1 Performance event monitoring

Performance events can be grouped into five categories: program characterization, memory accesses, pipeline stalls, branch prediction, and resource utilization. Program characterization events show largely processor-independent attributes of a program, such as the number and type of instructions (for example, loads, stores, floating point, branches, and so on) completed by the program. Memory access events aid performance analysis of the processor's memory hierarchy, such as references and misses to the various caches and transactions on the processor memory bus. Pipeline stall event information helps users analyze how well the program's instructions flow through the pipeline. Branch prediction events show the performance of the branch prediction hardware. Resource utilization events can monitor the usage of certain resources, such as the number of cycles spent using a floating-point divider.

2.2 Performance Profiles

Though hardware performance counters reveal much information about the software, this information is very low level and mainly exposes the global state of the processor rather than the behavior of a particular application. As long as the source of the monitored behavior is not identified, the user cannot improve the hardware/software performance. Two common approaches are:

Time-based Sampling (TBS) tries to capture the percentage of time an application spends in its different sections by exposing the most frequently executed portions of the code. It is usually implemented by interrupting the application's execution at regular time intervals and recording the program counter. At the end of the application, a histogram shows the number of samples collected for each section of code.

Event-based Sampling (EBS) produces a histogram of performance event counts based on code location. Here the application is interrupted after a specific number of performance events (a counter reaching some threshold) rather than at regular time intervals as in TBS. Similar to a time-based profile's indication of the most frequently executed code locations, an event-based profile indicates the most frequently executed code locations that cause a particular performance event. This requires performance-monitoring hardware support for generating a performance monitor interrupt when a performance event counter overflows. The Interrupt Service Routine (ISR) handler then captures sample data from the program (e.g. the program counter) and re-arms the counter for another interrupt.
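To make the TBS mechanism concrete, the following sketch (an illustration only, not how VTune implements it) arms a profiling timer, samples the interrupted program counter in the signal handler, and accumulates a coarse histogram over code addresses. The REG_RIP extraction assumes x86-64 Linux/glibc; a real profiler would map the sampled addresses back to functions and modules using symbol information.

/* Minimal time-based sampling (TBS) sketch: interrupt the program at regular
 * intervals with a profiling timer and histogram the sampled program counter. */
#define _GNU_SOURCE
#include <signal.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/time.h>
#include <ucontext.h>

#define NBUCKETS 64
static unsigned long histogram[NBUCKETS];    /* samples per code-address bucket */
static uintptr_t code_base, bucket_size = 1;

static void tbs_handler(int sig, siginfo_t *si, void *uc)
{
    (void)sig; (void)si;
    uintptr_t pc = ((ucontext_t *)uc)->uc_mcontext.gregs[REG_RIP];
    if (pc >= code_base) {
        uintptr_t b = (pc - code_base) / bucket_size;
        if (b < NBUCKETS)
            histogram[b]++;                  /* one more sample in this region */
    }
}

/* Two toy workload phases so the histogram has something to show. */
static volatile double sink;
static void phase_a(void) { for (long i = 0; i < 50000000; i++) sink += i * 0.5; }
static void phase_b(void) { for (long i = 0; i < 150000000; i++) sink += 1.0 / (i + 1); }

int main(void)
{
    code_base = (uintptr_t)&phase_a;         /* crude: bucket addresses relative to phase_a */
    bucket_size = 256;                       /* 256-byte code ranges per bucket */

    struct sigaction sa;
    memset(&sa, 0, sizeof(sa));
    sa.sa_sigaction = tbs_handler;
    sa.sa_flags = SA_SIGINFO | SA_RESTART;
    sigaction(SIGPROF, &sa, NULL);

    struct itimerval it = { {0, 1000}, {0, 1000} };  /* 1 ms sampling interval */
    setitimer(ITIMER_PROF, &it, NULL);

    phase_a();
    phase_b();

    for (int i = 0; i < NBUCKETS; i++)
        if (histogram[i])
            printf("bucket %2d: %lu samples\n", i, histogram[i]);
    return 0;
}

EBS works the same way except that the interrupt is raised by a performance counter overflow (the PMI described above) rather than by a timer, so the histogram attributes a specific hardware event, not elapsed time, to code locations.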
2.3 Limitations

Mainstream processors suffer from a common set of problems:

Too few counters. The limited number of counters restricts monitoring multiple events concurrently.

Speculative counts. Some processors cannot distinguish performance events for instructions that do not complete (speculative execution) from those for instructions that really complete. This is important because performance events (like cache misses) are still generated even for speculatively executed instructions that never retire.

Sampling delay. Sampling the program counter when a hardware event counter overflows (commonly used with EBS) cannot identify the exact instruction that caused the overflow. This degree of accuracy is not required all the time (e.g. when the associated module or thread is of more interest), but there are cases when an event frequency needs to be related to a particular type of instruction. The program counter is sampled by a performance monitor interrupt (PMI) after an event counter overflows. However, the current instruction is usually not the instruction that caused the overflow, as the processor's pipeline introduces an arbitrary delay between the counter overflow and the signaling of the interrupt.

Lack of data-address profiling. Usually there is no support for collecting profiles of the data addresses generated by the CPU when a memory hierarchy performance event happens.

[Figure 1: The general structure of the Pentium 4 event counters and detectors (from [2])]

3 Pentium 4 Performance-monitoring Features

Intel Pentium 4 has tried