Bringing Open Systems and Open Source Software to Large-Scale Enterprise Systems – for Better Business Computing


Bringing Open Systems and Open Source Software to Large-Scale Enterprise Systems – for Better Business Computing

Chu J. Jong and Kyoungwon Suh
School of Information Technology, Illinois State University
Normal, IL 61790, U.S.A.
[email protected], [email protected]

Abstract - Open systems and open source software have made significant contributions to the information technology field. The importance of their implementation, along with the number of IT shops using open systems and open software, continues to increase. The growing body of open software challenges traditional mainframe-based enterprise systems, which mainly host proprietary applications. Mainframe engineers recognized the inevitable trend toward openness and started to accept open applications and to implement open systems to host those applications. Moving toward open systems and open source software not only reduces costs across the software development life cycle but also improves the quality of service. However, there exists a gap between mainframe system programmers and open system programmers. To bridge the gap, we need to bring openness to the mainframe computing field by educating both groups of programmers on the fundamental differences between the two systems. In this paper we present our efforts to bring them together by integrating open systems/sources courses into the mainframe computing education environment. Our initial attempts to bridge the gap include two open systems and open sources course proposals and an internship plan for our Open-ECS development.

Keywords: mainframe, enterprise computing, Open-ECS, ECS

1 Introduction

During the past two decades mainframes were pronounced dead many times, and many people simply believed that the good old days of mainframe computing were gone. The history of mainframes [1] dates back to the 1940s. First-generation computers were made of vacuum tubes. In the 1950s and 1960s, transistor and multiprocessing technologies brought evolutionary changes to the computer industry, which became dominated by mainframes. In the late 1970s, minicomputers such as the PDP-11 started to take a small portion of the computing market, but they did not last long and were quickly replaced by microprocessor-based workstations in the mid-1980s. In the 1990s, the Internet revolution further weakened mainframe systems. Most mainframe manufacturers ran out of resources and eventually abandoned the business. In the meantime, the mainstream computing industry moved to client-server based systems, followed by workstation-based systems and distributed cluster systems. Today only a few mainframe manufacturers remain, and IBM is the one still actively producing improved mainframe systems [2].

Mainframes maintained their long popularity until the 1980s because they not only offered better resource sharing but also provided maintenance-free service to their clients. In users' minds, convenience, security, and robustness are always high on the priority list. Mainframes are mainly designed for centralized computing environments in which users connect to their computing facility via a remote device. Information (applications and data) is stored in a central area to serve the users who log on to the system. Users who have access rights to the mainframe computing facility can perform the tasks needed to accomplish their business objectives, regardless of when they need to and where they are located. Mainframe users normally leave supporting and maintenance jobs such as backup, security, recovery, upgrades, virus detection, and even printer cartridge replacement to their centralized system administrator.

Mainframe systems cannot be easily replaced by other kinds of computing systems, mainly because they are backed by many legacy business applications. According to recent studies, there are 200 billion lines of COBOL code currently in use in today's business applications, with several billion lines of new code added annually [3, 4]. These applications, which are mainly used by major corporations for their daily business operations, are called legacy software, and their hosting systems are called legacy computers (mainframes). The emerging commodity computing systems and the glories of the Internet revolution pushed mainframe systems out of mainstream computing. Mainframe programmers and technicians were quieted by the overwhelming "fun stuff" such as parallel computing, Internet applications, game development, web services, and so on. In the late 1990s, the bursting of the inflated Internet bubble brought people out of the hype. People started to realize that a solid machine that runs forever is the one showing real beauty, and mainframes continued to survive.

Though the demand for mainframe systems is not diminishing significantly, we expect to fall short on personnel who can develop applications for mainframes and maintain them in the near future. Studies of local and national Information Technology (IT) businesses indicate that current mainframe system programmers and system administrators are reaching retirement age, yet there are not enough younger programmers equipped with sufficient knowledge to fill the gap in operating mainframe systems [5, 6]. We expect that the demand to replace these employees will be high. In addition, a study of global computing markets shows that mainframe usage continues to rise. From both economic (e.g., power, space, installation, and maintenance costs) and business integration (e.g., centralized computing and storage systems with distributed recovery strategies) perspectives, the number of integrated mainframe systems will continue to grow in both major corporations and small to medium size businesses.

Although these are good indications that the old mainframe programming is coming back, we strongly believe that the new business computing systems will take the shape of mainframes integrated with open systems. The evolutionary growth of IT has already changed the way people communicate and the manner in which they purchase goods and services. In particular, it has required businesses to adapt their operational strategies to fast-changing IT trends. These phenomena have had the greatest impact on transaction-based businesses, which use mainframes to conduct their daily operations. Based on our study of various mainframe computing paradigms, we believe that integrated computing systems composed of mainframes, multi-platform servers and workstations, high-speed networks, and heterogeneous storage devices will be widely used by enterprises to conduct their daily business. Based on this belief, we have developed a series of courses for the Enterprise Computing Systems (ECS) program at Illinois State University (ISU), which includes two undergraduate sequences and a graduate sequence, plus a graduate certificate program. The aim is to fulfill the demand for mainframe IT personnel, educate engineers on large-scale enterprise computing systems, and prepare students to take on the challenges of the growth of future integrated large-scale enterprise computing systems.

Starting at the end of 2004, the School of Information Technology (ITK) at ISU met with local and regional companies regarding how to address the demand, produce […] fall semester 2006, ITK has offered three ECS courses: Introduction to Enterprise Computing Systems (three times); Operating, Data Communications, Networking, and Security of Enterprise Systems (three times); and System Programming and System Administration in Enterprise Computing Systems (twice). Online course offerings are also under consideration.

2 Objectives

The major goals of the ECS program are to fulfill the demand for mainframe IT personnel, to educate engineers on Integrated Large-scale Enterprise Computing Systems (ILECS), and to develop the next-generation ILECS. An Integrated Large-scale Enterprise Computing System is made up of a group of computing entities (including mainframes, servers, storage, and peripheral devices) which are interconnected by a network, forming a virtual centralized computing facility. It is a computing system comprised of a set of computer technologies (hardware, software, and practices) used in integrated large-scale systems. These integrated computing systems are mainly for transaction-based businesses and are widely used by service-oriented enterprises for their business operations.

The goals of the ECS program have been carefully examined to ensure their feasibility and deliverability. They should be implemented in a sequence of manageable steps. One step toward the goal of educating engineers on ILECS is to focus college education on the multi-platform computing systems integrated into mainframe-based systems, in particular the IBM zSeries mainframe, for enterprises.

In order to integrate multi-platform systems into mainframe computing, we need to bring open systems and open sources into the mainframe enterprise environment, which is the main focus of this paper. We started with an education plan for the Linux operating system (the most popular open source operating system) when we began offering our first ECS course in fall 2006. At the end of 2007 a Linux course development strategy team was formed, and it later became the open systems and open sources integration steering committee. The goal of this committee is to develop a curriculum for open systems and open sources in the area of mainframe-based enterprise systems. We named this course and curriculum development effort Open-ECS. The goal of Open-ECS is to oversee the future demand of integrated large-scale enterprise computing for better