How to Run SQL Queries Against the Database? How SELECT Works?


How to run SQL queries against the database? Like most databases in the world, our database uses a special language named SQL (Structured Query Language) to execute queries. SQL supports many different operations, but the most common one is retrieving data, and for this purpose the SELECT statement is used.

How does SELECT work? It is quite intuitive once you know the basics. First, you need to know:

- the name of the table that you want to query, and
- the structure of the table (fields, data types, ...).

Let's start with an example: we want to retrieve all the data contained in the table all_info. The list of available tables is shown in the drop-down menu Table Info. Select the option "all_info", and the description of the table will appear (scroll down to see all of it). It shows that the table "all_info" has 13 fields (columns):

cluster_id: unique cluster identifier (ranges from 1 to 29608)
cluster_name: long name of the cluster (rA_c02_1 = re-assembly number 02, cluster 1)
cluster_seq: DNA sequence of the cluster
cluster_length: length of the sequence (in nucleotides)
P_reads: number of reads in the POLYP stage
S_reads: number of reads in the STROBILA stage
E_reads: number of reads in the EPHYRA stage
total_reads: total number of reads across all stages
pep_seq: predicted peptide sequence (or a "No Prediction" tag)
pep_length: length of the peptide (in amino acids)
score: peptide prediction score (given by ESTScan)
start_nuc: position of the first nucleotide of the ORF
stop_nuc: position of the last nucleotide of the ORF

The table description also indicates the data type of each field: numeric (int or smallint) or text (varchar, mediumtext).

Now we are going to construct and run our first query, which retrieves all the data from a given table. Just type the following in the query field:

SELECT * FROM all_info

This means: retrieve (SELECT) all the fields (*) FROM the table named all_info.

NOTE: The syntax is case insensitive, so "SELECT" is the same as "select" or "SElecT".
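The query can also be tried outside the web interface. Here is a minimal sketch using Python's built-in sqlite3 module and a toy version of all_info (only a few of its columns, with invented rows); the web interface itself runs MySQL, but a plain SELECT behaves the same way:

```python
import sqlite3

# A toy stand-in for the all_info table, so the query can be tried locally.
# Column names follow the tutorial; the rows are invented for illustration.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE all_info (
    cluster_id   INTEGER,
    cluster_name TEXT,
    P_reads      INTEGER,
    S_reads      INTEGER,
    E_reads      INTEGER,
    total_reads  INTEGER)""")
con.executemany(
    "INSERT INTO all_info VALUES (?, ?, ?, ?, ?, ?)",
    [(1, "rA_c02_1", 120, 30, 5, 155),
     (2, "rA_c02_2", 2, 1900, 110, 2012)])

# SELECT * retrieves every field of every row.
rows = con.execute("SELECT * FROM all_info").fetchall()
for row in rows:
    print(row)
```
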
The same applies to the column names and symbols. Press the button, and a table with the results should appear (if not, check that you have written the query correctly).

IMPORTANT: The results are limited to 1000 rows by default. Depending on the query, the results can be very large and memory consuming. Type "0" in the limit field to use no limit.

Saving results as CSV
To export the result to a file, use the button. This saves the results, with any filters applied to the table, in a CSV file that you can download to your computer and open in MS Excel, for example.

Hiding Columns
Question: How can I retrieve only the data from the columns cluster_id, P_reads, S_reads, E_reads and total_reads, for example?
Answer: Just replace the * with the field names, separated by commas:

SELECT cluster_id, P_reads, S_reads, E_reads, total_reads FROM all_info

Ordering Data
Question: How do I order the data in ascending or descending order?
Answer: There are two possibilities, with a slight difference:
1. Click on the column's header. This orders the values of that column in ascending or descending order, but it only affects the values currently displayed. Here we sorted all the rows according to the expression level in strobila (see the small arrowhead near "S_reads").
2. Use the ORDER BY clause in your query, with ASC or DESC to indicate the order. This affects all the values in the database table:

SELECT cluster_id, P_reads, S_reads, E_reads, total_reads FROM all_info ORDER BY S_reads DESC

Filtering Data
Question: How do I filter the data?
Answer: There are two possibilities:
1. Use the filter fields in the column header (for text or numeric data). But, once more, this only affects the values displayed: if your result is larger than the specified limit (1000 by default), some matching rows may be missing from the results table.
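The difference between naming columns and using * can be seen in the same local sqlite3 sketch (toy table, invented rows; the real database is MySQL, but column selection and ORDER BY behave identically):

```python
import sqlite3

# Toy all_info table (invented rows) to demonstrate column selection
# and ORDER BY, as in the two queries above.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE all_info (
    cluster_id INTEGER, P_reads INTEGER, S_reads INTEGER,
    E_reads INTEGER, total_reads INTEGER)""")
con.executemany(
    "INSERT INTO all_info VALUES (?, ?, ?, ?, ?)",
    [(1, 120, 30, 5, 155),
     (2, 2, 1900, 110, 2012),
     (3, 40, 600, 20, 660)])

# Naming the fields instead of * hides all other columns;
# ORDER BY ... DESC sorts the whole table, not just the rows displayed.
rows = con.execute(
    "SELECT cluster_id, P_reads, S_reads, E_reads, total_reads "
    "FROM all_info ORDER BY S_reads DESC").fetchall()
print(rows)
```
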
For numeric values the following filter operators are available:

equal to: = N
greater than: > N
less than: < N
less than or equal to: <= N
greater than or equal to: >= N
range of values: N1 .. N2

2. Use the WHERE clause followed by your filter condition (this allows more complex searches). This is the recommended way:

SELECT cluster_id, P_reads, S_reads, E_reads, total_reads FROM all_info WHERE S_reads>1000 and E_reads<1000

Example with arithmetical and logical operations:

SELECT cluster_id, P_reads, S_reads, E_reads, total_reads FROM all_info WHERE (S_reads+1) / (P_reads+1) >= 500

SELECT cluster_id, P_reads, S_reads, E_reads, total_reads FROM all_info WHERE (S_reads+1) / (P_reads+1) >= 500 and total_reads>2000

Adding an ORDER BY:

SELECT cluster_id, P_reads, S_reads, E_reads, total_reads FROM all_info WHERE (S_reads+1) / (P_reads+1) >= 500 and total_reads>2000 ORDER BY E_reads DESC

Selecting sub-sets of genes
Question: How can I retrieve the list of strobila-specific genes? For example, we want to see only the clusters where more than 80% of all reads originate from the strobila stage. Moreover, we want to retrieve only the clusters where the total number of reads is >= 10.
Answer: We have to add additional conditions to our previous query. The 80% threshold means that dividing the number of reads in strobila (S_reads) by the total number of reads (total_reads) should give values >= 0.8, and the total read number (total_reads) should be >= 10. So we need a WHERE clause with two conditions:

SELECT cluster_id, P_reads, S_reads, E_reads, total_reads FROM all_info WHERE S_reads / total_reads >= 0.8 and total_reads >= 10

As a result we get a list of 345 strobila-specific clusters (see Fuchs et al. Fig. 2B). And now let us get the list of the polyp-specific genes.
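The strobila-specific filter can be sketched locally the same way (toy table, invented rows). One caveat if you try it in SQLite rather than MySQL: SQLite truncates the division of two INTEGER columns (30/155 gives 0), so the sketch multiplies by 1.0 to force real division:

```python
import sqlite3

# Toy all_info table with invented rows chosen to exercise both conditions.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE all_info (
    cluster_id INTEGER, P_reads INTEGER, S_reads INTEGER,
    E_reads INTEGER, total_reads INTEGER)""")
con.executemany(
    "INSERT INTO all_info VALUES (?, ?, ?, ?, ?)",
    [(1, 120, 30, 5, 155),     # polyp-biased: fails the 80% test
     (2, 2, 1900, 110, 2012),  # strobila-specific: 1900/2012 > 0.8
     (3, 1, 7, 0, 8)])         # passes the 80% test but total_reads < 10

# >= 80% of all reads from strobila, and at least 10 reads in total.
# 1.0 * forces floating-point division in SQLite; MySQL's / divides exactly.
rows = con.execute(
    "SELECT cluster_id FROM all_info "
    "WHERE 1.0 * S_reads / total_reads >= 0.8 "
    "AND total_reads >= 10").fetchall()
print(rows)
```
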
We need to change just one parameter (P_reads instead of S_reads), and our new query will be:

SELECT cluster_id, P_reads, S_reads, E_reads, total_reads FROM all_info WHERE P_reads / total_reads >= 0.8 and total_reads >= 10

As a result we get a list of 336 polyp-specific clusters (see Fuchs et al. Fig. 2B). Getting the list of the ephyra-specific genes is easy now; here is the corresponding query:

SELECT cluster_id, P_reads, S_reads, E_reads, total_reads FROM all_info WHERE E_reads / total_reads >= 0.8 and total_reads >= 10

Working with the microarray data
The advantage of a relational database (such as MySQL) is that it can handle large data sets and gives complete flexibility in "asking" questions of any level of complexity. It is easy to link different data types together, for example sequence data with the corresponding expression values, peptide predictions, images and so on. To analyse the data, one needs to express the "question" as a set of mathematical and logical operators (much as in R, MATLAB and similar programs). In the following examples we will use the table "array_normalized". This table contains mean signal values from independent experiments ((replicate_1 + replicate_2 + replicate_3) / 3). Mean signal values across the stages (polyp, 14 days 5-Aza-cytidine, 14 days control, ..., ephyra) have been normalized to the expression level of elongation factor 1-alpha (EF1α). The values in the table have not been log-transformed: log2 or log10 transformation is important for presenting data as a heat map, but it is not necessary for comparing expression levels.
The table "array_normalized" contains 12 fields (columns):

id_entry: unique entry identifier (primary key)
id_oligo: unique oligonucleotide name
P_signal: expression in POLYP (24 h at 10°C)
AZA14_signal: expression in POLYP (14 days at 10°C, incubated in 5-Aza-cytidine)
CON14_signal: expression in POLYP (14 days at 10°C, DMSO control)
AZA16_signal: expression in POLYP (16 days at 10°C, incubated in 5-Aza-cytidine)
CON16_signal: expression in POLYP (16 days at 10°C, DMSO control)
ES_signal: expression in STROBILA with 1 segment
LS_signal: expression in STROBILA with 5 segments
E_signal: expression in EPHYRA (freshly detached)
cl_name: long name of the cluster (1-RA_1 = cluster 1, rc_8-RA_8 = cluster 8)
cl_id: unique cluster identifier (ranges from 1 to 29608)

1) To view all the values in the table, type:

SELECT * FROM array_normalized

The table with the results should appear (if not, please check that the query has been written correctly).

IMPORTANT: The results are limited to 1000 rows by default. Depending on the query, the results can be very large and memory consuming. Type "0" in the limit field to use no limit.

2) To find all the genes whose expression in the early strobila is at least 100 times stronger than in the polyp, type:

SELECT * FROM array_normalized WHERE ES_signal / P_signal >= 100 ORDER BY cl_id ASC

The results will be sorted by cluster identifier in ascending order (ORDER BY cl_id ASC). You can also sort the results by clicking on the column headers.
3) To identify genes that are up-regulated during temperature induction and might function as a strobilation inducer, one needs a somewhat more complex query with many conditions (here we describe the hypothetical model in Fig. 4A):

SELECT * FROM array_normalized WHERE (P_signal+AZA14_signal+CON14_signal+AZA16_signal+CON16_signal+ES_signal+LS_signal+E_signal >= 100) and P_signal < 50 and CON14_signal > AZA14_signal and CON16_signal > AZA16_signal and ES_signal / P_signal >= 5 and LS_signal / P_signal >= 10 and LS_signal >= 1000 ORDER BY cl_id ASC

Here is a short explanation of the query:
1) We want to select genes that are not extremely weakly expressed, so the cumulative expression must be >= 100: (P_signal+AZA14_signal+CON14_signal+AZA16_signal+CON16_signal+ES_signal+LS_signal+E_signal >= 100)
2) The expression in the polyp stage must be weak: and P_signal < 50
3) We check that the genes are 5-Aza-cytidine sensitive and that their expression increases at cold temperature: and CON14_signal > AZA14_signal and CON16_signal > AZA16_signal and ES_signal / P_signal >= 5 and LS_signal / P_signal >= 10
4) Expression in the late strobila must be relatively high: and LS_signal >= 1000
5) Finally, we order the results by cluster ID (gene identifiers): ORDER BY cl_id ASC

As a result we get a list of potential strobilation inducers (27 clusters), represented by the heat map in Fig. 4B of Fuchs et al.
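The same multi-condition WHERE clause can be sketched locally with a toy array_normalized table. The two rows below are invented: one is engineered to satisfy every condition of the inducer query, the other fails on the polyp-expression condition:

```python
import sqlite3

# Toy array_normalized table (invented rows) for the inducer query above.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE array_normalized (
    cl_id INTEGER, P_signal REAL,
    AZA14_signal REAL, CON14_signal REAL,
    AZA16_signal REAL, CON16_signal REAL,
    ES_signal REAL, LS_signal REAL, E_signal REAL)""")
con.executemany(
    "INSERT INTO array_normalized VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?)",
    # cl_id, P,  AZA14, CON14, AZA16, CON16,  ES,   LS,   E
    [(7,     10, 50,    200,   60,    300,    80,  1500,  400),
     (8,    400, 50,    200,   60,    300,  2500,  5000,  400)])  # P_signal too high

# Each AND condition mirrors one numbered step of the explanation above.
rows = con.execute("""
    SELECT cl_id FROM array_normalized
    WHERE (P_signal + AZA14_signal + CON14_signal + AZA16_signal
           + CON16_signal + ES_signal + LS_signal + E_signal >= 100)
      AND P_signal < 50
      AND CON14_signal > AZA14_signal
      AND CON16_signal > AZA16_signal
      AND ES_signal / P_signal >= 5
      AND LS_signal / P_signal >= 10
      AND LS_signal >= 1000
    ORDER BY cl_id ASC""").fetchall()
print(rows)
```
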