Performance Profiling with Cactus Benchmarks

Performance Profiling with Cactus Benchmarks

Sasanka Madiraju
April 6th, 2006

System Science Master's Project Report
Department of Computer Science
Louisiana State University

Acknowledgment

This project has evolved out of a cumulative effort at the Center for Computation & Technology towards the proposal written in reply to the "bigiron" solicitation from NSF. Over a span of six months, there was a great deal of activity to collect and organize data and to participate in rigorous analysis. I would like to thank all the people who helped me during the course of this project. I would like to thank my committee members Dr. Gabrielle Allen, Dr. Edward Seidel and Dr. Gerald Baumgartner for their support and suggestions. I would also like to offer my appreciation to Dr. Thomas Sterling and Dr. Ram Ramanujam for their constant encouragement and help. I would especially like to thank Dr. Erik Schnetter and Dr. Maciej Brodowicz for their constant advice and support, and I am grateful to them for their exceptional guidance. I am also grateful to Mr. Yaakoub El-Khamra and Dr. Steven Brandt for their help and suggestions. Finally, I would like to thank all the people at CCT for being a source of inspiration.

Sasanka Madiraju

Contents

1 Introduction
  1.1 Project Scope
  1.2 Goals
    1.2.1 Phase 1
    1.2.2 Phase 2
  1.3 Contributions and People Involved
2 Cactus Toolkit: Benchmarks
  2.1 Cactus Toolkit
  2.2 Types of Benchmarks
  2.3 Bench BSSN PUGH
  2.4 Bench Whisky Carpet
  2.5 Duration of Run
  2.6 Problems with IPM Results
3 Phase 1: Analysis of Results
  3.1 IBM Power5 Architecture
  3.2 Intel Itanium2 Architecture
  3.3 Intel Xeon Architecture
  3.4 Details of Machines Used
  3.5 Classification of Results
    3.5.1 Single Processor Performance Results
    3.5.2 Scaling Results for Different Machines
4 Phase 2: Profiling Bench BSSN PUGH
  4.1 Reason for Profiling
  4.2 Tools Used
    4.2.1 PAPI: Performance Application Programming Interface
    4.2.2 Valgrind: A Linux-based Profiling Tool
    4.2.3 Valkyrie: A GUI-based Tool for Valgrind
    4.2.4 gprof: GNU Tool for Profiling
    4.2.5 VTune: Intel Performance Analyzer
    4.2.6 Oprofile: A Linux-based Profiling Tool
5 Conclusion

Chapter 1

Introduction

1.1 Project Scope

Modeling real-world scenarios requires high-performance systems capable of performing an extremely large number of concurrent operations. The increasing complexity of these models implies that there will always be a demand for faster and more robust computing resources. There is a cyclic dependency between the applications and the resources: the applications are driven by the presence of the resources, and the resources are enhanced in order to accommodate the demands of the applications. Supercomputers and cluster systems are used for performing operations on similar kinds of data, but on a gigantic scale. For example, an average desktop system can perform several million complex calculations per second. Supercomputers and clusters, on the other hand, can perform an extremely large number of operations per second when compared to a standalone system.
In a very simplistic sense, a supercomputer is a collection of a large number of smaller computers put together to pool their computational power. It is a well known fact that, over the past few decades, the field of computer engineering has shown exponential growth in microprocessor technology, resulting in the overall development of computing resources and architectures. Hence there is a need to understand and improve the performance of the existing computing resources to accommodate the requirements of these applications.

One tricky aspect of using supercomputers is coming up with applications that can maximize the utilization of the available resources. This is not easy, as there are a number of limiting factors that prevent maximal utilization of the existing resources. There are two solutions to this problem: understand the limiting factors and find ways to work around them, or add resources to the system in a measured way to improve its performance. The major goal of this project is to understand the limiting factors and come up with a detailed analysis of the results.

In order to write better code, it is imperative to understand the behavior of the application on the given architecture. This is not a simple task, as the number of possible architectures and machine configurations is quite large, bordering on limitless. Each computational resource is unique in terms of its processor architecture, peak performance, type of interconnect, available memory and the I/O system used.

The aim of this project is to study the performance characteristics of Cactus [6] benchmarks on different architectures and analyze the results obtained from the benchmark runs. Cactus [6] is a computational toolkit used for problem solving in different scenarios by scientists and engineers. The reason Cactus [6] was picked for profiling is that it is a good example of software with excellent modularity and portability. It is known to run on different architectures and on machines of different sizes. For example, these benchmarks can be run on a laptop or standalone system, or they can just as easily be run on a supercomputer or a cluster. Cactus [6] also supports many technologies such as HDF5 parallel file I/O, the PETSc scientific library, adaptive mesh refinement, web interfaces and advanced visualization tools.

The study of profiling results also helps in understanding the different aspects of the system architecture that affect the performance of applications. The project also aims to turn on tracing in these benchmarks so that the time spent in different operations can be measured. A number of profiling tools, such as PAPI [1], IPM [2], gprof [8], Valgrind [4] and Oprofile [3], are used to collect information about the total number of floating-point operations performed; the number of Level 1, Level 2 and sometimes Level 3 data and instruction cache misses; and call-graph information. This information can be used to tweak the performance of the benchmark. The results of the first phase of the project show a need to understand the benchmark in detail. Hence this project not only tries to understand the performance of the code on different architectures, but also tries to find avenues to optimize the benchmark code.
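To make concrete the kind of data these hardware counters provide, the following is a minimal sketch of reading two counters through PAPI's C API. It assumes PAPI is installed and that the preset events PAPI_FP_OPS and PAPI_L2_DCM are actually available on the processor (not every platform exposes both); the do_work loop is an invented stand-in for an instrumented region, not code from the Cactus benchmarks.

#include <stdio.h>
#include <stdlib.h>
#include <papi.h>

/* Stand-in compute kernel; in practice the counted region would be a
 * time-stepping routine of the benchmark being profiled. */
static double do_work(int n)
{
    double s = 0.0;
    for (int i = 1; i <= n; i++)
        s += 1.0 / ((double)i * i);
    return s;
}

int main(void)
{
    int event_set = PAPI_NULL;
    int events[2] = { PAPI_FP_OPS, PAPI_L2_DCM };  /* FP ops, L2 data-cache misses */
    long long counts[2];

    /* Initialize the library and build an event set with the two preset events. */
    if (PAPI_library_init(PAPI_VER_CURRENT) != PAPI_VER_CURRENT) {
        fprintf(stderr, "PAPI initialization failed\n");
        return EXIT_FAILURE;
    }
    if (PAPI_create_eventset(&event_set) != PAPI_OK ||
        PAPI_add_events(event_set, events, 2) != PAPI_OK) {
        fprintf(stderr, "requested PAPI events not available on this CPU\n");
        return EXIT_FAILURE;
    }

    PAPI_start(event_set);
    double result = do_work(10 * 1000 * 1000);
    PAPI_stop(event_set, counts);

    printf("result              = %.12f\n", result);
    printf("floating-point ops  = %lld\n", counts[0]);
    printf("L2 data misses      = %lld\n", counts[1]);
    return EXIT_SUCCESS;
}

Linking is typically done against the PAPI library (for example, cc counters.c -lpapi, though the exact flags depend on the installation); tools such as IPM and Oprofile collect similar counts without modifying the source, which is why several of these tools are used side by side in the later chapters.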
1.2 Goals

This project aims to understand the performance characteristics of the Cactus [6] benchmark Bench BSSN PUGH on different architectures. This analysis helps in understanding the architectural aspects of such complex systems. This study also aims to provide a basis for tweaking the performance of the actual benchmark so that it can become more efficient. The project is divided into two phases, and the following sections explain the details of each.

1.2.1 Phase 1

The first phase of the project deals with running Cactus [6] benchmarks on different architectures and collecting the run statistics. These results are analyzed and the reasons for the differences in wall times are studied in detail. This phase of the project was important for collecting results for the recent CCT proposal in reply to the NSF "bigiron" solicitation. In this phase, it was found that among the different NSF benchmarks, the Cactus [6] benchmarks Bench BSSN PUGH and Bench Whisky Carpet were dominated by load and store operations. The reasons for this result are studied in detail in the second phase of the project.

1.2.2 Phase 2

The second phase of the project involves profiling the Cactus [6] benchmark Bench BSSN PUGH. This benchmark was chosen over the Bench Whisky Carpet benchmark as it is already known to scale well and shows more consistent behavior. The Bench Whisky Carpet benchmark also shows consistent results, but when it is run on a large number of processors its performance deteriorates rapidly, making it unsuitable for profiling. Moreover, both benchmarks have similar code, the main difference being the driver used: one uses PUGH (structured unigrid) and the other uses Carpet (adaptive mesh refinement). Profiling tools such as IPM [2], PAPI [1], gprof [8], Valgrind [4] and Oprofile [3] were used to investigate the reasons for the large number of load and store operations. These load and store operations become expensive when there are many L2 (or L3, if present) cache misses, which force the processor to fetch data from the lower levels of the memory hierarchy. Hence, the profiling tools are used to identify those parts of the code that are responsible for the majority of cache misses at L2 or L3 (an illustrative sketch of such a cache-sensitive loop appears at the end of this chapter).

1.3 Contributions and People Involved

My contributions to the proposal submitted by CCT for the NSF "bigiron" solicitation have been compiled, and further analysis has been added as part of this project. A lot of data and results from the benchmark runs went into the proposal. There were a number of other benchmarks run by other members of the team which are not part of this project. Results from the Cactus [6] benchmark runs on the NERSC machine called Bassi were provided to me by one of our team members, Dr. Erik Schnetter. Moreover, a lot of information about the architectures and the kinds of runs that needed to be performed, as well as invaluable suggestions about the analysis of the data, were provided to me by Dr.
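As an illustration of the kind of code pattern such cache-centric profiling typically uncovers, the sketch below compares two traversal orders over a 3D grid, similar in spirit to the structured-grid sweeps in a unigrid benchmark. This is a hypothetical, self-contained example, not code from Bench BSSN PUGH; the grid sizes, array names and the averaging kernel are invented for illustration. Sweeping the contiguous index in the innermost loop reuses cache lines and keeps L2 data misses low, while the reversed order strides through memory and misses far more often.

#include <stdio.h>
#include <stdlib.h>

#define NX 192
#define NY 192
#define NZ 192
/* Row-major layout: k is the contiguous (fastest-varying) index. */
#define IDX(i, j, k) ((size_t)(i) * NY * NZ + (size_t)(j) * NZ + (size_t)(k))

/* Cache-friendly sweep: the innermost loop walks the contiguous k index,
 * so consecutive iterations touch adjacent memory locations. */
static double sweep_contiguous(const double *u, double *v)
{
    double sum = 0.0;
    for (int i = 1; i < NX - 1; i++)
        for (int j = 1; j < NY - 1; j++)
            for (int k = 1; k < NZ - 1; k++) {
                v[IDX(i, j, k)] = 0.5 * (u[IDX(i, j, k - 1)] + u[IDX(i, j, k + 1)]);
                sum += v[IDX(i, j, k)];
            }
    return sum;
}

/* Cache-hostile sweep: the innermost loop walks the i index, whose elements
 * are NY*NZ doubles apart in memory, so most accesses miss in the caches. */
static double sweep_strided(const double *u, double *v)
{
    double sum = 0.0;
    for (int k = 1; k < NZ - 1; k++)
        for (int j = 1; j < NY - 1; j++)
            for (int i = 1; i < NX - 1; i++) {
                v[IDX(i, j, k)] = 0.5 * (u[IDX(i, j, k - 1)] + u[IDX(i, j, k + 1)]);
                sum += v[IDX(i, j, k)];
            }
    return sum;
}

int main(void)
{
    size_t n = (size_t)NX * NY * NZ;
    double *u = malloc(n * sizeof *u);
    double *v = malloc(n * sizeof *v);
    if (!u || !v) return 1;
    for (size_t s = 0; s < n; s++) u[s] = (double)s;

    /* Profile each variant separately to compare the cache misses
     * attributed to the two loop nests. */
    printf("contiguous sweep: %g\n", sweep_contiguous(u, v));
    printf("strided sweep:    %g\n", sweep_strided(u, v));

    free(u);
    free(v);
    return 0;
}

Running the two variants under valgrind --tool=cachegrind, or in an Oprofile session, typically attributes far more data-cache misses to the strided sweep; this per-loop attribution is exactly the kind of evidence the profiling phase of this project relies on.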