SSfM BPG 1: Validation of Software in Measurement Systems
Total Pages: 16
File Type: PDF, Size: 1020 KB
Recommended publications
Measuring the Software Size of Sliced V-Model Projects
2014 Joint Conference of the International Workshop on Software Measurement and the International Conference on Software Process and Product Measurement. Andreas Deuter (PHOENIX CONTACT Electronics GmbH, Dringenauer Str. 30, 31812 Bad Pyrmont, Germany, Email: [email protected]) and Gregor Engels (University of Paderborn, Zukunftsmeile 1, 33102 Paderborn, Germany, Email: [email protected]).
Abstract: Companies expect higher productivity of their software teams when introducing new software development methods. Productivity is commonly understood as the ratio of output created to resources consumed. Whereas the measurement of the resources consumed is rather straightforward, there are several definitions for counting the output of a software development effort. Source code-based metrics create a set of valuable figures directly from the heart of the software: the code. However, depending on the chosen process model, software developers and testers also produce a fair amount of documentation. Up to now this output remains uncounted, leading to an incomplete view of the development output. This article addresses this open …
From the first page: But the "manufacturing" of software within the standard production process is just a very short step of bringing the binaries of the software onto the device. This step is hard to optimize and does not reflect the "production of software" at all. The creation of the software is actually done in the development teams by applying typical software engineering methods. However, to keep up with the high demands on implementing new functionality (e.g. for PLCs), the software development process within these companies must improve. Therefore, they start to analyze their software processes in detail and to identify productivity drivers.
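The underlying productivity ratio is simple to express. Below is a minimal sketch, not the authors' counting method, of a figure that treats documentation as countable output alongside code; the conversion factor from documentation pages to LOC-equivalents is an invented illustration.

```python
# A rough productivity figure that counts documentation as output alongside code.
# The 30-LOC-per-page conversion factor is an invented illustration, not a value
# from the paper.

def productivity(loc_added, doc_pages, person_hours, loc_per_doc_page=30):
    """Output per person-hour, with documentation converted to LOC-equivalents."""
    output = loc_added + doc_pages * loc_per_doc_page
    return output / person_hours

# Code-only view vs. a view that also counts design and test documentation:
print(productivity(loc_added=4500, doc_pages=0, person_hours=600))    # 7.5
print(productivity(loc_added=4500, doc_pages=40, person_hours=600))   # 9.5
```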
Software Error Analysis Technology
NIST Special Publication 500-209, Software Error Analysis (Computer Systems Technology series). Wendy W. Peng and Dolores R. Wallace, U.S. Department of Commerce, Technology Administration, National Institute of Standards and Technology, 1993.
The National Institute of Standards and Technology was established in 1988 by Congress to "assist industry in the development of technology ... needed to improve product quality, to modernize manufacturing processes, to ensure product reliability ... and to facilitate rapid commercialization ... of products based on new scientific discoveries." NIST, originally founded as the National Bureau of Standards in 1901, works to strengthen U.S. industry's competitiveness; advance science and engineering; and improve public health, safety, and the environment. One of the agency's basic functions is to develop, maintain, and retain custody of the national standards of measurement, and provide the means and methods for comparing standards used in science, engineering, manufacturing, commerce, industry, and education with the standards adopted or recognized by the Federal Government. As an agency of the U.S. Commerce Department's Technology Administration, NIST conducts basic and applied research in the physical sciences and engineering and performs related services. The Institute does generic and precompetitive work on new and advanced technologies. NIST's research facilities are located at Gaithersburg, MD 20899, and at Boulder, CO 80303.
Software Metrics: Successes, Failures and New Directions
Norman E. Fenton, Centre for Software Reliability, City University, Northampton Square, London EC1V 0HB, United Kingdom (Tel: 44 171 477 8425, Fax: 44 171 477 8585, [email protected], www.csr.city.ac.uk/people/norman.fenton/) and Agena Ltd, 11 Main Street, Caldecote, Cambridge CB3 7NU, United Kingdom (Tel: 44 1223 263753 / 44 181 530 5981, Fax: 01223 263899, [email protected], www.agena.co.uk).
Abstract: The history of software metrics is almost as old as the history of software engineering. Yet the extensive research and literature on the subject has had little impact on industrial practice. This is worrying given that the major rationale for using metrics is to improve the software engineering decision-making process from a managerial and technical perspective. Industrial metrics activity is invariably based around metrics that have been around for nearly 30 years (notably Lines of Code or similar size counts, and defect counts). While such metrics can be considered massively successful given their popularity, their limitations are well known, and mis-applications are still common. The major problem is in using such metrics in isolation. We argue that it is possible to provide genuinely improved management decision support systems based on such simplistic metrics, but only by adopting a less isolationist approach. Specifically, we feel it is important to explicitly model: a) cause and effect relationships, and b) uncertainty and combination of evidence. Our approach uses Bayesian Belief nets, which are increasingly seen as the best means of handling decision-making under uncertainty.
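To make the last point concrete, here is a minimal sketch, not Fenton's actual model, of a tiny Bayesian belief net evaluated by brute-force enumeration: evidence about defects found is combined with knowledge of the testing effort to update belief about residual defects. All probabilities are invented illustrative values.

```python
# A three-node toy Bayesian belief net: latent "defects introduced", observed
# "defects found", and the testing effort that links them. Evaluated by
# enumeration; the probabilities are made-up illustrative values.

P_D = {"few": 0.7, "many": 0.3}                      # prior over defects introduced

P_F = {                                              # P(defects found | introduced, effort)
    ("few",  "low"):  {"few": 0.95, "many": 0.05},
    ("few",  "high"): {"few": 0.90, "many": 0.10},
    ("many", "low"):  {"few": 0.60, "many": 0.40},
    ("many", "high"): {"few": 0.15, "many": 0.85},
}

def posterior_defects(found, effort):
    """P(defects introduced | defects found, testing effort) by enumeration."""
    joint = {d: P_D[d] * P_F[(d, effort)][found] for d in P_D}
    z = sum(joint.values())
    return {d: p / z for d, p in joint.items()}

# Few defects found under low testing effort is weak evidence of a clean product:
print(posterior_defects("few", "low"))
# Few defects found under high testing effort is much stronger evidence:
print(posterior_defects("few", "high"))
```

In practice, models of realistic size are built and evaluated with dedicated BBN tools rather than by hand-coded enumeration.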
Data Validation As a Main Function of Business Intelligence
E-Leader Berlin 2012. PhDr. M.A. Dipl.-Betriebswirt (FH) Thomas H. Lenhard, PhD, University of Applied Sciences Kaiserslautern, Campus Zweibruecken, Zweibruecken, Germany.
Abstract: When thinking about Business Intelligence, Data Mining or Data Warehousing, it seems clear that we generally use three levels to turn data into information: validation of input data, transformation, and making the output data available. This paper gives a short overview of the importance of validation and of the validation level of a Data Warehouse or any BI system. Several examples show that validation is not only important for producing proper and useful output data; in some combinations of circumstances, the validation itself is the result that is requested. After an introduction to the levels of generating information and the necessary basics of BI, different methods of validation are discussed, as well as different ways of using the validation level or the validated data. Examples from the practice of Business Intelligence in general, and from Health Care IT in particular, support the thesis that the validation level must be considered the most important one. Especially in the area of operational management, it is shown that a profound use of data validation can create benefits or even prevent losses. It therefore appears to be a method for increasing efficiency in business and production.
The Importance of Quality in Business Intelligence: Business Intelligence (BI) is a non-exclusive collection of methods that are used as a transformation process to generate information out of data.
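As an illustration of the validation level described above, the sketch below places a field-level validation step in front of transformation and loading. The record format, field names and rules are invented for the example and are not taken from the paper.

```python
# A minimal input-validation level for a BI load: records either pass all
# field rules or are rejected with reasons, before any transformation runs.
from datetime import datetime

def _is_date(value):
    try:
        datetime.strptime(value, "%Y-%m-%d")
        return True
    except (TypeError, ValueError):
        return False

RULES = {                                   # field-level rules (illustrative)
    "patient_id": lambda v: isinstance(v, str) and v.strip() != "",
    "admission_date": _is_date,
    "cost": lambda v: isinstance(v, (int, float)) and 0 <= v < 1_000_000,
}

def validate(records):
    """Split incoming records into (valid, rejected-with-reasons)."""
    valid, rejected = [], []
    for rec in records:
        errors = [field for field, rule in RULES.items() if not rule(rec.get(field))]
        if errors:
            rejected.append((rec, errors))
        else:
            valid.append(rec)
    return valid, rejected

ok, bad = validate([
    {"patient_id": "P001", "admission_date": "2012-05-03", "cost": 1250.0},
    {"patient_id": "", "admission_date": "03/05/2012", "cost": -10},
])
print(len(ok), "accepted;", len(bad), "rejected:", bad)
```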
Software Measurement: a Necessary Scientific Basis Norman Fenton
IEEE Transactions on Software Engineering, Vol. 20, No. 3, March 1994, p. 199. Norman Fenton.
Abstract: Software measurement, like measurement in any other discipline, must adhere to the science of measurement if it is to gain widespread acceptance and validity. The observation of some very simple, but fundamental, principles of measurement can have an extremely beneficial effect on the subject. Measurement theory is used to highlight both weaknesses and strengths of software metrics work, including work on metrics validation. We identify a problem with the well-known Weyuker properties, but also show that a criticism of these properties by Cherniavsky and Smith is invalid. We show that the search for general software complexity measures is doomed to failure. However, the theory does help us to define and validate measures of specific complexity attributes. Above all, we are able to view software measurement in a very wide perspective, rationalising and relating its many diverse activities.
Index Terms: Software measurement, empirical studies, metrics …
II. Measurement Fundamentals. In this section, we provide a summary of the key concepts from the science of measurement which are relevant to software metrics. First, we define the fundamental notions (which are generally not well understood) and then we summarise the representational theory of measurement. Finally, we explain how this leads inevitably to a goal-oriented approach.
A. What is Measurement? Measurement is defined as the process by which numbers or symbols are assigned to attributes of entities in the real world in such a way as to describe them according to clearly defined rules [13], [36].
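The "clearly defined rules" amount to the representation condition: the assigned numbers must preserve the empirical relations among the attributes. The sketch below checks that condition for an ordinal scale using an invented ordering of failure severities; it is an illustration of the theory, not code from the paper.

```python
# Representation condition for an ordinal measure: a mapping M from entities to
# numbers measures "severity" only if the empirical relation "more severe than"
# is preserved by ">" on the numbers. The severity ordering is an invented example.

EMPIRICAL_ORDER = ["data-loss", "crash", "wrong-result", "cosmetic"]  # most to least severe

M = {"data-loss": 4, "crash": 3, "wrong-result": 2, "cosmetic": 1}    # candidate measure

def preserves_order(measure, order):
    """Check that every empirically 'more severe' pair maps to a larger number."""
    return all(measure[a] > measure[b]
               for i, a in enumerate(order)
               for b in order[i + 1:])

print(preserves_order(M, EMPIRICAL_ORDER))                # True: valid ordinal measure
print(preserves_order({"data-loss": 1, "crash": 3,        # False: order not preserved
                       "wrong-result": 2, "cosmetic": 4}, EMPIRICAL_ORDER))
```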
Complete Report
A Report on The Fifteenth International TOVS Study Conference, Maratea, Italy, 4-10 October 2006.
Conference sponsored by: IMAA / CNR; Met Office (U.K.); University of Wisconsin-Madison / SSEC; NOAA NESDIS; EUMETSAT; World Meteorological Organization; ITT Industries; Kongsberg Spacetec; CNES; ABB Bomem; Dimension Data; LGR Impianti; Cisco Systems; and VCS.
Report prepared by: Thomas Achtor and Roger Saunders (ITWG Co-Chairs); Leanne Avila and Maria Vasys (Editors). Published and distributed by: University of Wisconsin-Madison, 1225 West Dayton Street, Madison, WI 53706, USA. ITWG Web Site: http://cimss.ssec.wisc.edu/itwg/. February 2007.
International TOVS Study Conference-XV Working Group Report, Foreword: The International TOVS Working Group (ITWG) is convened as a sub-group of the International Radiation Commission (IRC) of the International Association of Meteorology and Atmospheric Physics (IAMAP). The ITWG continues to organise International TOVS Study Conferences (ITSCs), which have met approximately every 18 months since 1983. Through this forum, operational and research users of TIROS Operational Vertical Sounder (TOVS), Advanced TOVS (ATOVS) and other atmospheric sounding data have exchanged information on data processing methods, derived products, and the impacts of radiances and inferred atmospheric temperature and moisture fields on numerical weather prediction (NWP) and climate studies. The Fifteenth International TOVS Study Conference (ITSC-XV) was held at the Villa del Mare near Maratea, Italy, from 4 to 10 October 2006. This conference report summarises the scientific exchanges and outcomes of the meeting. A companion document, The Technical Proceedings of The Fourteenth International TOVS Study Conference, contains the complete text of ITSC-XV scientific presentations. The ITWG Web site (http://cimss.ssec.wisc.edu/itwg/) contains electronic versions of the conference presentations and publications.
Review Article: Applied Machine Learning and Artificial Intelligence in Rheumatology
Rheumatology Advances in Practice, doi:10.1093/rap/rkaa005, pp. 1-10. Review article. Maria Hügle, Patrick Omoumi, Jacob M. van Laar, Joschka Boedecker and Thomas Hügle.
Abstract: Machine learning as a field of artificial intelligence is increasingly applied in medicine to assist patients and physicians. Growing datasets provide a sound basis with which to apply machine learning methods that learn from previous experiences. This review explains the basics of machine learning and its subfields of supervised learning, unsupervised learning, reinforcement learning and deep learning. We provide an overview of current machine learning applications in rheumatology, mainly supervised learning methods for e-diagnosis, disease detection and medical image analysis. In the future, machine learning will be likely to assist rheumatologists in predicting the course of the disease and identifying important disease factors. Even more interestingly, machine learning will probably be able to make treatment propositions and estimate their expected benefit (e.g. by reinforcement learning). Thus, in future, shared decision-making will not only include the patient's opinion and the rheumatologist's empirical and evidence-based experience, but it will also be influenced by machine-learned evidence.
Key words: machine learning, neural networks, deep learning, rheumatology, artificial intelligence
Key messages:
- Deep learning can assist rheumatologists by disease classification and prediction of individual disease activity.
- Artificial intelligence has the potential to empower autonomy of patients, e.g. by providing individual treatment propositions.
- Automated image recognition and natural language processing will be likely to pioneer implementation of artificial intelligence in rheumatology.
Data Migration
Science and Technology 2018, 8(1): 1-10. DOI: 10.5923/j.scit.20180801.01. Simanta Shekhar Sarmah, Business Intelligence Architect, Alpha Clinical Systems, USA.
Abstract: This document gives an overview of the processes involved in data migration. Data migration is a multi-step process that begins with an analysis of the legacy data and culminates in the loading and reconciliation of data into new applications. With the rapid growth of data, organizations are in constant need of data migration. The document focuses on the importance of data migration and its various phases. Data migration can be a complex process, in which testing must be conducted to ensure the quality of the data. Testing scenarios for data migration and the risks involved are also discussed in this article. Migration can be very expensive if best practices are not followed and the hidden costs are not identified at an early stage. The paper outlines the hidden costs and also provides strategies for rollback in case of any adversity.
Keywords: Data Migration, Phases, ETL, Testing, Data Migration Risks and Best Practices
1. Introduction. Migration is a process of moving data from one platform/format to another platform/format. It involves migrating data from a legacy system to the new system without impacting active applications and finally redirecting all input/output activity to the new device. Typical drivers include:
- Data needs to be transportable from physical and virtual environments, for concepts such as virtualization
- The need for clean and accurate data for consumption
A data migration strategy should be designed in an effective way that ensures tomorrow's purchasing decisions fully meet both present and future business needs and that the business realises the maximum return on …
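Reconciliation after loading is one of the steps mentioned in the abstract. The sketch below compares row counts and checksums between a source and a target table; the table and column names are illustrative, and the in-memory databases merely stand in for real legacy and target systems.

```python
# Post-load reconciliation: compare row counts and a checksum over ordered
# column values between the legacy source and the migrated target.
import hashlib
import sqlite3

def table_profile(conn, table, columns):
    """Row count plus a checksum over the ordered column values."""
    query = f"SELECT {', '.join(columns)} FROM {table} ORDER BY {columns[0]}"
    rows = conn.execute(query).fetchall()
    checksum = hashlib.sha256(repr(rows).encode("utf-8")).hexdigest()
    return len(rows), checksum

def reconcile(source, target, table, columns):
    """Report MATCH or MISMATCH for one table after the load."""
    src = table_profile(source, table, columns)
    tgt = table_profile(target, table, columns)
    status = "MATCH" if src == tgt else "MISMATCH"
    print(f"{table}: source={src[0]} rows, target={tgt[0]} rows -> {status}")
    return src == tgt

# Self-contained demonstration with two in-memory databases standing in for
# the legacy source and the migrated target.
source, target = sqlite3.connect(":memory:"), sqlite3.connect(":memory:")
for conn in (source, target):
    conn.execute("CREATE TABLE patients (id INTEGER, name TEXT)")
    conn.executemany("INSERT INTO patients VALUES (?, ?)", [(1, "A"), (2, "B")])
reconcile(source, target, "patients", ["id", "name"])
```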
DS18B20 Datasheet
DS18B20 Programmable Resolution 1-Wire Digital Thermometer
DESCRIPTION: The DS18B20 digital thermometer provides 9-bit to 12-bit Celsius temperature measurements and has an alarm function with nonvolatile user-programmable upper and lower trigger points. The DS18B20 communicates over a 1-Wire bus that by definition requires only one data line (and ground) for communication with a central microprocessor. It has an operating temperature range of -55°C to +125°C and is accurate to ±0.5°C over the range of -10°C to +85°C. In addition, the DS18B20 can derive power directly from the data line ("parasite power"), eliminating the need for an external power supply. Each DS18B20 has a unique 64-bit serial code, which allows multiple DS18B20s to function on the same 1-Wire bus. Thus, it is simple to use one microprocessor to control many DS18B20s distributed over a large area. Applications that can benefit from this feature include HVAC environmental controls, temperature monitoring systems inside buildings, equipment, or machinery, and process monitoring and control systems.
- User-Definable Nonvolatile (NV) Alarm Settings
- Alarm Search Command Identifies and Addresses Devices Whose Temperature is Outside Programmed Limits (Temperature Alarm Condition)
- Available in 8-Pin SO (150 mils), 8-Pin µSOP, and 3-Pin TO-92 Packages
- Software Compatible with the DS1822
- Applications Include Thermostatic Controls, Industrial Systems, Consumer Products, Thermometers, or Any Thermally Sensitive System
PIN CONFIGURATIONS: [pin diagrams; 8-pin SO/µSOP: pins 1-2 and 6-8 N.C., pin 3 VDD, pin 4 DQ, pin 5 GND; 3-pin TO-92 package also shown]
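Because each sensor carries a unique 64-bit ROM code, several DS18B20s can share one bus and be read individually by the host. The sketch below reads one sensor through the Linux 1-Wire subsystem and assumes the w1-gpio and w1-therm kernel modules are loaded (e.g. on a Raspberry Pi); this host-side code is not part of the datasheet.

```python
# Read a DS18B20 via the Linux w1 sysfs interface exposed by the w1-therm driver.
import glob
import time

def read_ds18b20(device_dir=None):
    """Return the temperature in degrees Celsius from a DS18B20 on the 1-Wire bus."""
    if device_dir is None:
        # DS18B20 devices enumerate under family code 28-xxxxxxxxxxxx.
        device_dir = glob.glob("/sys/bus/w1/devices/28-*")[0]
    with open(device_dir + "/w1_slave") as f:
        lines = f.read().splitlines()
    if not lines[0].strip().endswith("YES"):      # CRC result reported by the driver
        raise IOError("DS18B20 CRC check failed")
    millidegrees = int(lines[1].split("t=")[1])   # e.g. "... t=23125" -> 23.125 °C
    return millidegrees / 1000.0

if __name__ == "__main__":
    while True:
        print(f"{read_ds18b20():.3f} °C")
        time.sleep(1.0)
```

At 12-bit resolution a temperature conversion takes up to 750 ms, so polling much faster than about once per second gains little.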
Objectives and Context of Software Measurement, Analysis, and Control Michael A
Michael A. Cusumano, Sloan School of Management, Massachusetts Institute of Technology, Cambridge, Massachusetts 02142, USA. Working Paper #3471-92/BPS, October 8, 1992.
Abstract: This paper focuses on the what and why of measurement in software development, and the impact of the context of the development organization or of specific projects on approaches to measurement, analysis, and control. It also presents observations on contextual issues based on studies of the major Japanese software producers.
1 Introduction. This paper focuses on the what and why of measurement in software development, and the impact of the context of the development organization or of specific projects on approaches to measurement, analysis, and control. It begins by discussing three basic questions: (1) Why do we measure? (2) What is it we want to know? And (3) how does the context affect the ability to measure? The second section presents observations on contextual issues based on my ongoing study of the Japanese [3]. This paper also relies heavily on extensive interviews conducted at Hitachi, Fujitsu, NEC, and Toshiba during May, August, and September 1992.
2 Basic Questions. Why we measure aspects of the software development process and conduct analytical exercises or experiments, as well as what we try to do with this information, depends very much on who is doing or requesting the measuring and analysis. There already exists a broad literature on technical aspects of software measurement and the relationship to process management, which will not be reviewed here [1, 2, 5, 6].
12 Steps to Useful Software Metrics
Linda Westfall, The Westfall Team, [email protected], PMB 101, 3000 Custer Road, Suite 270, Plano, TX 75075; 972-867-1172 (voice), 972-943-1484 (fax).
Abstract: 12 Steps to Useful Software Metrics introduces the reader to a practical process for establishing and tailoring a software metrics program that focuses on goals and information needs. The process provides a practical, systematic, start-to-finish method of selecting, designing and implementing software metrics. It outlines a cookbook method that the reader can use to simplify the journey from software metrics in concept to delivered information.
Bio: Linda Westfall is the President of The Westfall Team, which provides Software Metrics and Software Quality Engineering training and consulting services. Prior to starting her own business, Linda was the Senior Manager of Quality Metrics and Analysis at DSC Communications, where her team designed and implemented a corporate-wide metrics program. Linda has more than twenty-five years of experience in real-time software engineering, quality and metrics. She has worked as a Software Engineer, Systems Analyst, Software Process Engineer and Manager of Production Software. Very active professionally, Linda Westfall is a past chair of the American Society for Quality (ASQ) Software Division. She has also served as the Software Division's Program Chair and Certification Chair and on the ASQ National Certification Board. Linda is a past chair of the Association for Software Engineering Excellence (ASEE) and has chaired several conference program committees. Linda Westfall has an MBA from the University of Texas at Dallas and a BS in Mathematics from Carnegie Mellon University.
17.0 Data Review, Verification and Validation
QA Handbook Vol II, Section 17.0, Revision No: 1, Date: 12/08, Page 1 of 7.
Data review, verification and validation are techniques used to accept, reject or qualify data in an objective and consistent manner. Verification can be defined as confirmation, through provision of objective evidence, that specified requirements have been fulfilled [1]. Validation can be defined as confirmation, through provision of objective evidence, that the particular requirements for a specific intended use are fulfilled. It is important to describe the criteria for deciding the degree to which each data item has met its quality specifications as described in an organization's QAPP. This section will describe the techniques used to make these assessments. In general, these assessment activities are performed by persons implementing the environmental data operations as well as by personnel "independent" of the operation, such as the organization's QA personnel, and at some specified frequency. The procedures, personnel and frequency of the assessments should be included in an organization's QAPP. These activities should occur prior to submitting data to AQS and prior to the final data quality assessments that will be discussed in Section 18. Each of the following areas of discussion should be considered during the data review/verification/validation processes. Some of the discussion applies to situations in which a sample is separated from its native environment and transported to a laboratory for analysis and data generation; others are applicable to automated instruments. The following information is an excerpt from EPA G-5 [2]: Sampling Design - How closely a measurement represents the actual environment at a given time and location is a complex issue that is considered during development of the sampling design.
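Many of these review checks can be automated as screening rules applied before manual validation. The sketch below flags records that fail simple criteria; the thresholds and field names are invented placeholders, not values from the QA Handbook.

```python
# Automated screening of hourly measurements ahead of manual data validation.
# Thresholds and field names are illustrative placeholders only.

def screen_hourly_ozone(records):
    """Flag hourly ozone records that fail simple screening criteria."""
    flagged = []
    previous = None
    for rec in records:                      # rec: {"hour": int, "ppb": float or None}
        reasons = []
        if rec["ppb"] is None:
            reasons.append("missing value")
        else:
            if not (0 <= rec["ppb"] <= 500):
                reasons.append("outside plausible range")
            if previous is not None and abs(rec["ppb"] - previous) > 100:
                reasons.append("implausible hour-to-hour jump")
            previous = rec["ppb"]
        if reasons:
            flagged.append((rec["hour"], reasons))
    return flagged

print(screen_hourly_ozone([
    {"hour": 1, "ppb": 42.0},
    {"hour": 2, "ppb": None},
    {"hour": 3, "ppb": 650.0},
]))
```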