Isn't an Enterprise Test Strategy Just for an Enterprise Project Release? CHICAGO, April 18th - April 22nd, Renaissance Hotel, 1 West Wacker Drive, Chicago, IL 60601


Speaker: Susan Schanta, Director, Cognizant Technology Solutions
When: April 22, 2016, 10:00 am - 11:00 am

Why Do I Need an Enterprise Test Strategy?

Because quality can't be tested into the product. We need a shared vision to achieve quality.

What Is the Challenge We're Solving For?

- Lack of shared understanding regarding the role of the Quality organization:
  - We pay a lot for QA; why isn't quality better?
  - Why do we need QA so early in the lifecycle?
  - QA just tests at the end, right?
  - Why does it take QA so long to test?
- How do I change our persona from a Testing Department to a Quality Center of Excellence?
  - Does the organization understand QA's role in the software development lifecycle?
  - Have I set expectations for how and when project stakeholders should engage QA?
  - How can I establish a collaborative relationship where cross-functional teams understand the interdependencies between their work product and the QA work product?

What Is an Enterprise Test Strategy?

The Enterprise Test Strategy establishes a framework for how the QA organization operates and interacts with project team members. When collaboratively developed with Project Management, Business Analysts and Development, it provides a foundation for how the organization will build quality into product releases while reducing the cost of quality. It:

- Establishes standards for test analysis, planning and validation
- Helps drive behavioral change in QA and cross-functional teams
- Introduces shared responsibility for best practices
- Drives defect reduction in the Requirements, Design and Construction phases of the lifecycle
- Institutes a shared and disciplined approach to automation standards to achieve automation sustainability
- Defines performance standards for critical applications to address operational and business continuity goals
- Aligns test data management to corporate security policies
- Creates strategies where gaps exist for specialty testing such as mobility, big data, automation and performance

The Value Proposition

The Enterprise Test Strategy provides a framework for how QA drives quality throughout the lifecycle, and a foundation for cross-team collaboration, quality disciplines and operating guidelines. The ultimate goal is to deliver the best quality while driving down the cost of quality.

- Limits scope creep
- Increases test coverage
- Increases velocity of test case creation
- Increases requirements traceability to test cases
- Reduces defect leakage to production
- Reduces cost of maintenance
- Reduces rework

An Examination of Strategy Types

Enterprise Test Strategy:

- Mission Statement
- Standards for test analysis, planning and validation
- Definition of Test Types
- RACI for test phases, Unit through UAT
- Tiered approach for test documentation
- Automation disciplines: ROI-driven automation, development disciplines
- Performance disciplines: load/stress, usability and business continuity
- Defect Management guidelines
- Test Data Management guidelines
- Test Environment Management
- Test tools
- Metrics beyond defect rates

Program / Project Test Strategy:

- Project scope and objectives
- Risks and mitigation
- Assumptions, dependencies and constraints
- Test scope: what will be tested
- Limitations to testing based on tools needed
- Limitations to testing based on environment availability
- Test approach: manual test activities, automated test activities
- Test environment requirements: required hardware, software and licenses
- Test data requirements: data extraction requirements from production, and parameters for manipulation of test data such as aging (see the sketch below)
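As an illustration of that last item, here is a minimal sketch of test data aging: shifting date fields in a production extract so that time-sensitive test conditions (expirations, billing cycles) can fire on demand. The record layout, field names and the one-year shift are hypothetical choices for illustration, not prescriptions from the deck.

```python
from datetime import date, timedelta

# Hypothetical extract; a real one would come from a production snapshot
# already masked in line with corporate security policies.
policies = [
    {"policy_id": "P-1001", "issued": date(2015, 3, 1), "expires": date(2016, 3, 1)},
    {"policy_id": "P-1002", "issued": date(2015, 7, 15), "expires": date(2016, 7, 15)},
]

def age_records(records, days, date_fields=("issued", "expires")):
    """Shift every date field forward by `days`, so a stale production
    extract lines up with the scenario dates a test case expects."""
    aged = []
    for rec in records:
        copy = dict(rec)
        for field in date_fields:
            copy[field] = copy[field] + timedelta(days=days)
        aged.append(copy)
    return aged

# Age the extract by one year so "about to expire" scenarios fire today.
for rec in age_records(policies, days=365):
    print(rec["policy_id"], rec["expires"])
```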
Establishing the Enterprise Quality Framework

[Framework diagram]

Defining the Mission

Create a definitive statement that defines your organizational goals:

"Quality Assurance is dedicated to reducing the cost of quality by improving overall product reliability. Our mission to attain quality lies in defect prevention, establishing a precisely measurable process to ensure conformance to requirements. We believe that quality is the all-important catalyst that makes the difference between success and failure. Our mission is to control, manage and drive all testing services; ensure quality in our software solutions; deliver on-time, on-budget, goal-oriented and cost-effective solutions for the business; satisfy customer requirements; strive for continuous process improvement; and contribute to the overall growth of the organization through streamlined, efficient and best-in-class testing practices and a governance model with highly skilled and motivated people."

Tiered Approach to Test Documentation

After defining the full library of QA artifacts, determine mandatory use based on set criteria:

- project size and complexity
- project duration
- test scope

| #  | Deliverable                          | SDLC Phase   | PMO | Maintenance Releases |
|----|--------------------------------------|--------------|-----|----------------------|
| 1  | L0 QA Project Estimate               | Initiation   | Y   | Y                    |
| 2  | QCOE Project Plan (Schedule)         | Requirements | Y   | N                    |
| 3  | L1 QA Project Estimate               | Requirements | Y   | N                    |
| 4  | Requirements Traceability Matrix     | Requirements | Y   | N                    |
| 5  | Test Scenario                        | Requirements | Y   | Y                    |
| 6  | Master Test Plan                     | Requirements | Y   | N                    |
| 7  | L2 QA Project Estimate               | Design       | Y   | N                    |
| 8  | Project Level Test Plan              | Design       | Y   | N                    |
| 9  | Regression Test Case Selection       | Design       | Y   | Y                    |
| 10 | Test Case                            | Design       | Y   | Y                    |
| 11 | Test Data Requirements Template      | Design       | Y   | Y                    |
| 12 | Test Environment Readiness Checklist | Construction | Y   | N                    |
| 13 | Defect Report Template               | Validation   | Y   | Y                    |
| 14 | Productivity Loss Log                | Validation   | Y   | Y                    |
| 15 | Test Summary & Closure Report        | Validation   | Y   | N                    |
| 16 | Lessons Learnt Document              | Validation   | Y   | N                    |

Communication Guidelines

| Communication Deliverable | Objective | Owner | Frequency |
|---|---|---|---|
| Business/Test Scenario | Test conditions based on requirements, business rules, constraints, etc. | QA Lead | Once, during Requirements Phase |
| Business/Test Scenario Stakeholder Review | Stakeholder feedback collected and incorporated | QA Manager | As scheduled |
| Code Turnover Notes | Code turnover notes of functions ready to test, workarounds and open defects | Dev Lead | For each code turnover |
| L0 QCOE Project Estimate | Estimate based on business case and discussion; no requirements defined | QA Manager | Once, in Proposal Phase |
| L1 QCOE Project Estimate | Estimate based on elicitation sessions and requirements | QA Lead | Once, in Requirements Phase |
| Requirements Traceability Matrix | Trace requirements to test cases to defects (see the sketch below) | QA Lead | Weekly, from Test Design forward |
| Test Case | Steps to validate a test condition based on business, user, functional/nonfunctional requirements | QA Lead | Once; stored in Test Repository |
| Test Case Peer Review | Peer feedback collected and incorporated | QA Lead | As scheduled |
| Test Plan | Defines the tactical and operational approach to validation of test conditions | QA Lead | Initiated in Requirements; updated as needed |
| Test Plan Stakeholder Review | Stakeholder feedback collected and incorporated | QA Manager | As scheduled |
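The Requirements Traceability Matrix deliverable above traces requirements to test cases to defects. Below is a minimal sketch of how such a matrix could be represented and checked for coverage gaps before the weekly update; the requirement, test case and defect IDs are invented for illustration.

```python
# Hypothetical RTM entries: requirement -> traced test cases and defects.
rtm = {
    "REQ-001": {"test_cases": ["TC-101", "TC-102"], "defects": ["DEF-9"]},
    "REQ-002": {"test_cases": ["TC-103"], "defects": []},
    "REQ-003": {"test_cases": [], "defects": []},  # not yet covered
}

def coverage_report(rtm):
    """Flag requirements with no traced test cases and summarize coverage,
    the kind of check a QA Lead might run before the weekly RTM update."""
    uncovered = [req for req, links in rtm.items() if not links["test_cases"]]
    total = len(rtm)
    print(f"Requirements covered: {total - len(uncovered)}/{total}")
    for req in uncovered:
        print(f"  GAP: {req} has no test cases traced to it")

coverage_report(rtm)
```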
Definition of Test Types

| Test Type | Definition |
|---|---|
| Unit Test | Performed by the developer to validate units of source code, modules and functions using controlled data, verifying that each unit of code operates as designed. |
| Integrated Unit Test | A logical extension of unit testing. In its simplest form, two units that were individually tested are combined into a component and the interface between them is tested. A component, in this sense, is an integration of one or more units. |
| Unit Regression Test | When developers modify code, regression unit testing is required to evaluate code quality. The developer can reuse the original unit test cases, modifying them or creating new ones depending on the extent of the changes. |
| Smoke Test | A preliminary evaluation to identify failures severe enough to reject a prospective code turnover; a subset of test cases covering the most important functionality. |
| System Test | Compares a program's behavior against the functional business and technical specifications. Positive system testing verifies that users can successfully exercise all paths of functionality with expected results; negative system testing checks that users are not allowed to use improper paths. |
| System Integration Test | Performed on the complete, fully integrated system with a focus on role-based testing to emulate real-life scenarios; sometimes called end-to-end testing. Its purpose is to detect inconsistencies between the functions that are integrated together. |
| Regression Test | Any type of software testing that seeks to uncover errors by partially retesting a modified program. The intent is to provide general assurance that no additional errors were introduced in the process of fixing other problems. |
| User Acceptance Test | User Acceptance Testing (UAT) is a process to obtain confirmation that a system meets mutually agreed-upon requirements. The UAT acts as a final verification |
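To ground the Unit Test and Smoke Test definitions, here is a minimal pytest-style sketch. The `apply_discount` function is a hypothetical unit under test, and marking the smoke subset with `@pytest.mark.smoke` is an assumed project convention (the marker would need to be registered in pytest.ini to avoid warnings), not something prescribed by the deck.

```python
import pytest

# Hypothetical unit under test.
def apply_discount(price: float, percent: float) -> float:
    """Return price reduced by percent; rejects out-of-range input."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

@pytest.mark.smoke
def test_basic_discount():
    # Smoke subset: the most important happy-path behavior.
    assert apply_discount(100.0, 10) == 90.0

def test_zero_discount():
    assert apply_discount(50.0, 0) == 50.0

def test_negative_percent_rejected():
    # Negative test: improper input paths must be blocked.
    with pytest.raises(ValueError):
        apply_discount(100.0, -5)
```

Running `pytest -m smoke` would exercise only the turnover-gating subset, while the full module doubles as the unit regression pack when the code changes later.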