A P L \ 1 1 3 0 Development Environment

A Guide to Building APL\1130 from Source

September 2011

**********************************************************************

 This document applies to APL\1130 Release 2, published in May 1969.
 It describes a method to assemble and test APL\1130 and to build an
 APL\1130 binary distribution under the IBM 1130 simulator published
 at http://ibm1130.org. The sources used are from file apl_source.zip
 found in the download section of IBM1130.org. This file contains the
 complete sources for the APL\1130 system itself, but not for all
 utilities needed to build it. Some, but not all, of the missing
 utility components can be extracted in binary form from an APL\1130
 binary distribution found in file aplsetup.zip on ibm1130.org. For
 the remaining missing utilities, workarounds have been put in place
 to support running the full APL\1130 development cycle (assemble,
 test and build distribution) under the IBM 1130 simulator. An
 extensive analysis of the differences between the APL\1130 system as
 built from source and the binary distribution is provided.

**********************************************************************

 A main source of information on APL\1130 is the User’s Manual dated
 5/5/69. It can be found on bitsavers at the following link:
 http://bitsavers.org/pdf/ibm/1130/lang/1130-03.3.001_APL_1130_May69.pdf
 Information from the User’s Manual will be referenced throughout
 this guide.

**********************************************************************

 Jürgen Winkelmann, September 2011
 [email protected]

**********************************************************************

Table of Contents

A P L \ 1 1 3 0 Development Environment ............................. 4
Introduction ........................................................ 4
How to use this Guide ............................................... 4
Usage of the APL\1130 Development Environment ....................... 5
Assemble APL\1130 and build the binary installation decks ........... 5
Run the APL\1130 system directly from the APL development disk ...... 7
Switch to DMS Mode and Assemble an Updated Source Program ........... 8
APL\1130 Distributions and Installation ............................. 9
Binary Distribution ................................................. 9
Source Distribution ................................................ 10
APL\1130 Disks ..................................................... 14
APL runtime disk ................................................... 14
APL development disk ............................................... 14
Disk Layouts ....................................................... 15
Why create Installation Card Decks at All? ......................... 15
Was something like an APL development disk ever used in reality? ... 15
Components of the APL\1130 Development Environment and their Usage . 17
Dump Utility Control Cards ......................................... 21
Generate the APL\1130 Development Environment ...................... 23
IBM 1130 Simulator Issues .......................................... 28
Card Reader ........................................................ 28
APL\1130 Version Considerations .................................... 29
Code Analysis ...................................................... 29
Code Differences ................................................... 34
Summary of Code Analysis and Differences ........................... 41
Version Timeline ................................................... 42
Appendix ........................................................... 44
Core Map ........................................................... 44
Disk Map ........................................................... 45
Download Links to ZIP Archives ..................................... 46

A P L \ 1 1 3 0 Development Environment

Introduction

Norm Aleks and Brian Knittel, two IBM 1130 enthusiasts, have created a fully functional IBM 1130 simulator, which they published around 2002 on their website http://ibm1130.org. Besides the emulator, this website contains tons of authentic materials around the IBM 1130 system.

Having no idea what an IBM 1130 was, or even that such a machine once existed, I was browsing the web in search of vintage APL versions, because I wanted to find an APL version that would work on an MVS 3.8j system that I’m currently running on an emulated IBM S/370. This search brought me to IBM1130.org, and it was most surprising for me to find there a fully functioning binary version of the original APL\1130 Release 2 system and a source deck of it. But apparently no one had so far succeeded in compiling the source deck and building a working APL\1130 system from it.

APL\1130 Release 2 was published in May 1969 as one of the first publicly available APL terminal systems. Although designed to run on an IBM 1130 with only 8 kW (= 16 Kbytes) of core memory, it supported a rich set of APL operators. Only a very few weren’t available, like for example the “circle” operator for trigonometric and hyperbolic functions.

From further research on the web I learned that APL\1130 and APL\360 were quite close relatives. Thus, in the absence of a usable APL implementation for the 360 or 370 architecture, I became interested in learning a bit more about APL\1130, which finally led me to trying to get it compiled and built from the source deck found on IBM1130.org.

As I’ve never seen or used a real IBM 1130 system (in fact, I was of kindergarten age when IBM started marketing these systems), it was an absolutely thrilling experience to be beamed, by this excursion to the roots of APL\1130, a decade further into the past than the era of S/370 systems running OS/VS2 MVS I “grew up” with.

Fully sharing Brian’s and Norm’s opinion on “Preserving Historic Software”, as expressed on page http://ibm1130.org/sim/resurrecting-dms, I’ve created this guide to provide and hopefully preserve the information necessary to build that historic piece of software from source on the IBM 1130 simulator.

How to use this Guide

Due to time constraints I’m not able to fully structure this guide into a “User’s Guide” part and a reference part containing background information, assumptions and rationales that led to the current layout of the APL development environment. So, usage and background information is often somewhat intermixed.

If you really want to know what I did and why I did it this way, it probably will be necessary to read the whole guide (sorry about that!). But if you’re only interested in building APL\1130 from source and creating the three binary installation card decks from the newly built APL\1130 system, reading the chapter “Usage of the APL\1130 Development Environment” should be sufficient.

Before using APL\1130 (be it a version built from source as documented in this guide or the binary version found in aplsetup.zip or aplpreview.zip on IBM1130.org) I strongly recommend reading “Card Reader” on page 28.

Usage of the APL\1130 Development Environment

The APL development environment is a set of tools supporting the following operations:

 Assembly of the APL\1130 system under DMS using a disk with a special layout of files in the user area (UA). That disk is called the APL development disk.

 Build the three installation card decks that comprise the binary distribution of APL\1130 from the newly compiled APL\1130 system under DMS.

 Run the assembled APL\1130 system from the APL development disk it was compiled on by switching that disk to “APL mode”, thus bypassing the need to punch the three installation decks and install APL\1130 on its own dedicated disk.

 Switch the APL development disk back to “DMS mode”, for example to change and reassemble a source module that can then be tested immediately by switching the disk to APL mode again.

Switching back and forth between DMS and APL mode on the same APL development disk is supported as long as there are no more than 10 APL workspaces assigned to users. Once more workspaces have been assigned, they overlay DMS components on the disk, and switching back to DMS mode is no longer possible. The disk is then equivalent to a dedicated APL disk as installed by loading the three installation decks onto a DCIP-initialized but otherwise empty disk.

The following sections give step-by-step instructions on how to perform the above-mentioned operations. Before proceeding, please extract folder APL_devel from the APL_1130_Development_Environment.zip archive to an arbitrary location on your Windows system. The logs of sample sessions referenced by the step-by-step instructions assume that you’ve just dragged and dropped the folder to your desktop.

Assemble APL\1130 and build the binary installation decks

 Open a command prompt window and change directory to the folder containing the APL development environment.

 Enter the following command at the command prompt:

ibm1130 assemble_all

Once the assemblies are completed you’ll have a ready-to-run APL\1130 system with an empty user and workspace directory on your APL development disk (APL_devel.dsk).

The listing of the assemblies is placed in file assemble_all.lst. A console log of a typical assemble_all session is provided as file assemble_all.log in the APL_1130_Development_Environment.zip archive for reference purposes.

If you want to create the three binary installation decks this should be done now, i.e. before using the newly created APL\1130 system for the first time, to ensure that the directory contained in the “Empty Directories” deck really is empty.

To create the installation decks perform the following steps:

 If the command window used for running the assemblies is still open you can continue using it. Otherwise open a command prompt window and change directory to the folder containing the APL development environment.

 Enter the following command at the command prompt:

makedeck IPL aplload1.bin

This makedeck command will start an IBM 1130 simulator session set up to punch the requested deck as defined by the dump utility control card IPL.DMP_control_card, complete it by prepending a coldstart card and a standalone loader, and place it in file aplload1.bin.

To ensure that the dump utility will be able to reliably read the control card, it is necessary to fully separate the monitor control stream from the control card. This is done by using the // TYP monitor control statement to switch the input device from the card reader to the console before placing the control card in the reader; consequently, you will be prompted to provide monitor control input from the console. For each monitor control statement needed, the makedeck command will display a message describing what to key in. Please enter these statements exactly as requested (you will be prompted for two statements, one // PAUS statement and one // XEQ DMP statement).

 Repeat the above step using the commands:

makedeck DIR aplload2.bin
makedeck APL aplload3.bin

Note that of course any other names may be chosen for the files to place the decks in. The reason for choosing aplload[1-3].bin here is simply that the decks created can then directly replace the corresponding ones from the binary distribution contained in aplsetup.zip, and the loadapl script provided there will then generate an APL runtime disk of the newly assembled system without having to change any filenames.
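Internally, a makedeck run amounts to concatenating three binary card images: coldstart card, standalone loader, and the dumped deck. The following Python sketch is purely illustrative and not part of the environment; the filenames and placeholder file contents are made up for the example:

```python
# Illustrative sketch only: prepend a coldstart card and a standalone
# loader to a dumped deck, the way makedeck completes an installation
# deck. Placeholder byte strings stand in for real binary card images.

def build_deck(coldstart: str, loader: str, dumped: str, out: str) -> None:
    """Concatenate three binary card image files into one installation deck."""
    with open(out, "wb") as dst:
        for part in (coldstart, loader, dumped):
            with open(part, "rb") as src:
                dst.write(src.read())

# Create tiny placeholder "card images" so the sketch is runnable as-is.
for name, content in (("coldstart.bin", b"COLDSTART"),
                      ("loader.bin", b"LOADER"),
                      ("ipl_dump.bin", b"IPLDECK")):
    with open(name, "wb") as f:
        f.write(content)

build_deck("coldstart.bin", "loader.bin", "ipl_dump.bin", "aplload1.bin")
```

The real decks are of course produced card by card by the dump utility running inside the simulator; the sketch only shows the prepend-and-write step.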

A console log of a typical session creating the installation decks is provided in file makedeck.log in the APL_1130_Development_Environment.zip archive for reference purposes.

To verify the integrity of the newly created installation decks I recommend testing them using the binary distribution in aplsetup.zip as a starting point: Replace the original decks with the newly created ones and run the loadapl installation procedure as pointed out in file setup.txt. Before running loadapl be sure you have read, understood and acted upon “Card Reader” on page 28.

Two very simple APL workspaces are provided in folder APL_test_programs of the APL_1130_Development_Environment.zip archive. These are by no means meant to be real test cases, but they nonetheless allow verifying the basic operability of the system.

A console log of a typical session loading the newly created binary installation decks using loadapl to a new APL\1130 runtime disk and testing it using one of the workspaces from APL_test_programs is provided in file testdecks.log in the APL_1130_Development_Environment.zip archive for reference purposes.

To keep things simple, the APL test session logged in testdecks.log was conducted directly from the command prompt window, which isn’t able to display the full APL character set. Consequently, no commands using or displaying special APL symbols are used. Use, for example, the apl script from aplpreview.zip to run APL\1130 with a terminal capable of displaying the full APL character set.

Although the APL session in testdecks.log doesn’t use special APL symbols, it should be noted that even entering digits or parentheses is somewhat special with APL\1130. Refer to “Getting Started with APL.pdf” from aplpreview.zip if you are not yet accustomed to the 2-way shift system used for input at the IBM 1130 console keyboard.

Run the APL\1130 system directly from the APL development disk

 Open a command prompt window and change directory to the folder containing the APL development environment. The environment should be in the state as after having performed the assemble_all operation described in “Assemble APL\1130 and build the binary installation decks” on page 5 (of course it doesn’t matter if the installation decks have also been created, as this doesn’t change the environment). If you have in the meantime extracted a new clean environment from the APL_1130_Development_Environment.zip archive, please run the assemble_all script as described in the chapter mentioned above.

 To switch the APL development disk to APL mode, enter the following command at the command prompt:

ibm1130 load_deck DMS2APL APL_devel.dsk

You can now boot APL\1130 from APL_devel.dsk using any method of your choice.

A console log of a typical session switching the APL development disk to APL mode and testing it using one of the workspaces from APL_test_programs is provided in file dms2apl.log in the APL_1130_Development_Environment.zip archive for reference purposes.

Note that after this sample session a user has been created with two workspaces assigned. Thus the directory is no longer empty and shouldn’t be used any more to create a binary “Empty Directories” installation deck. Should this deck ever need to be created again, a clean environment should be used (I can’t imagine that one would ever want to recreate the “Empty Directories” deck, though).

Switch to DMS Mode and Assemble an Updated Source Program

 Open a command prompt window and change directory to the folder containing the APL development environment. APL mode should already have been entered and tested successfully.

 To switch the APL development disk to DMS mode, enter the following command at the command prompt:

ibm1130 load_deck APL2DMS APL_devel.dsk

 To provide an example of an updated source program without risking unwanted changes to functionality, I created the easiest modification possible: adding a hardcoded assembly date to the APL\1130 signon message.

This message is issued by the APSC (APL System Command) processor, the source of which is found in APSC.asm in the src folder of the APL development environment. The APL development environment contains a folder named modsrc, which is intended to be used to store new or modified source, thus avoiding interference with the original source.

File APSC_with_hardcoded_asm_date.asm in the modsrc folder is a changed version of APSC.asm displaying the build date of the APL development environment (05 SEP 2011) in the signon message. Feel free to modify this file to display any message you want (of course taking care of a few things should you dare to also change the length of the message). To point you quickly to the location of the message in the source, I’ve changed the sequence numbers of the modified part to start with VER0 instead of APSC, so just search for VER0 and you’re there.

 Once you’re satisfied with your changes enter the following command at the command prompt:

ibm1130 assemble modsrc\APSC_with_hardcoded_asm_date.asm

This assembles the updated program and places it at its designated location in the otherwise unchanged APL\1130 system, which is now ready for immediate testing of the modification.

You can now conduct yet another APL session as described in “Run the APL\1130 system directly from the APL development disk” on page 7 to test the modification. This cycle of changing operating mode, modifying and assembling source(s), changing operating mode and testing can be repeated as often as desired.

It should be noted that, should you ever want to build a set of binary installation decks of your modified APL\1130 system, it is sufficient to create the APL system deck. The IPL sector deck and the Empty Directories deck can always be reused from the first build; they are absolutely static.

A console log of a typical session switching the APL development disk to DMS mode, assembling the modified APSC processor, switching it back to APL mode and testing it using the sample workspaces from folder APL_test_programs is provided in file apl2dms.log in the APL_1130_Development_Environment.zip archive for reference purposes.

Note that I made some value assignments (left arrow operator) in this session. As mentioned earlier, the command prompt window doesn’t support special APL characters, which is why these left arrows don’t display correctly. On my system they are displayed as the German umlaut “ä”. For easier readability I edited the log file and changed the “ä” to the standard UTF-8 left arrow “←”. Thus, use a program able to handle UTF-8 encoding to display this log file (the Notepad editor of not-too-old Windows systems, for example, does this flawlessly).
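The manual log repair described here can also be scripted. The snippet below is a hypothetical helper, assuming the same mis-mapping I observed (the left arrow coming through as “ä”); on another system the substitute character may well differ:

```python
# Hypothetical helper: repair an APL session log in which the APL left
# arrow was rendered by the command prompt window as the German umlaut.
def fix_log(text: str) -> str:
    # Replace every mis-mapped character with the proper UTF-8 left arrow.
    return text.replace("ä", "←")

sample = "A ä 5"
print(fix_log(sample))  # A ← 5
```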

For regular APL\1130 use you would of course choose a terminal emulation that supports the full APL character set instead of the command prompt window (for example HyperTerminal, as suggested by the sample apl script in aplpreview.zip).

APL\1130 Distributions and Installation

The information on page 3 of the User’s Manual implies that there were two distinct distributions of APL\1130:

 The standard distribution was on punched cards. It contained all components in binary form that were needed to install and run APL. This distribution didn’t contain any source materials.
 A source distribution on tape was available as optional material which was delivered upon request only.

Binary Distribution

This was the standard distribution of APL\1130. It was originally delivered on 526 punched cards:

 The five-card “IPL” deck
 The 438-card “APL” deck
 The nine-card “DIR” deck
 Three identical “1442 coldstart cards” to boot standalone utilities from an IBM 1442 card reader
 Three identical “2501 coldstart cards” to boot standalone utilities from an IBM 2501 card reader
 Three identical decks of the eight-card standalone “1442 loader” for use on an IBM 1442 card reader
 Three identical decks of the 14-card standalone “2501 loader” for use on an IBM 2501 card reader
 One “APLIPL” coldstart card to boot APL\1130 for day-to-day operations
 One “APLIPLPR” coldstart card to boot APL\1130 in privileged mode for user and workspace administration
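As a quick sanity check, the individual card counts listed above can be added up against the stated total of 526 cards:

```python
# Sanity check: the individual card counts of the binary distribution
# must add up to the 526 cards stated above.
decks = {
    "IPL deck": 5,
    "APL deck": 438,
    "DIR deck": 9,
    "1442 coldstart cards": 3,
    "2501 coldstart cards": 3,
    "1442 loaders (3 x 8 cards)": 3 * 8,
    "2501 loaders (3 x 14 cards)": 3 * 14,
    "APLIPL coldstart card": 1,
    "APLIPLPR coldstart card": 1,
}
total = sum(decks.values())
print(total)  # 526
```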

From these cards three installation decks named

 “IPL sector deck”
 “APL system deck”
 “Empty Directories deck”

were put together, depending on the system configuration:

 In systems with an IBM 1442 card reader, the installation decks consisted of the 1442 coldstart card followed by the 1442 loader and the IPL, APL or DIR deck, respectively.
 In systems with an IBM 2501 card reader, the installation decks consisted of the 2501 coldstart card followed by the 2501 loader and the IPL, APL or DIR deck, respectively.

It should be noted that only these three decks were then needed to install APL, plus the two coldstart cards APLIPL and APLIPLPR to operate it. So it wouldn’t be surprising if on a system with a 1442 reader the 2501 coldstart and loader cards got lost over time, and vice versa. The binary APL\1130 distribution found in file aplsetup.zip on IBM1130.org is such an example: It only contains the three installation decks as put together for an IBM 1130 with a 1442 reader (named aplload[1-3].bin there), but none of the 2501-specific cards. This isn’t a problem when running under the IBM 1130 simulator, but these decks can of course not be used on a real IBM 1130 equipped with a 2501 card reader.

APL\1130 is installed on a dedicated IBM 2315 disk cartridge, which needs to be initialized using DCIP (DMS’ Disk Cartridge Initialization Program) before installing APL on it. Once APL is installed on such a dedicated disk, it is called an “APL runtime disk”.

The installation procedure is as simple as booting the 1130 three times, once with each of the installation decks in the card reader. This procedure loads the code from the IPL, APL and DIR decks to their appropriate locations on the APL runtime disk.

Source Distribution

APL\1130 is written completely in IBM 1130 assembler language. The source code was available on tape as optional material which was delivered upon request only.

The reason for not shipping the source code on cards together with the binary card distribution might have been the sheer amount of cards: It would have been roughly 17,500 cards (the exact number is unknown because the source deck from IBM1130.org isn’t 100% complete).

To make use of the source, one not only would have had to order the optional tape, but one also needed some equipment to read the tape and punch the source to cards (to my knowledge the IBM 1130 didn’t support any tape drives). From this it becomes clear that there were potentially many more sites having only the binary distribution at hand (more specifically, the part of it matching their card reader configuration) than sites that also had the source tape and archived a card deck or printed copy of it.

A reel tape from the ’60s probably would be too deteriorated today to still be readable, so presumably none of the source distribution tapes delivered have survived. To me it almost sounds like a miracle that one of those probably very scarce source decks copied from tape to cards 40 years ago, or listings of them, have made it through the times intact and finally found their way to IBM1130.org. From this point of view it is well worth putting some effort into getting it compiled and conserving it together with the information needed to compile it.

The User’s Manual doesn’t contain any information on using the source, except that on the tape there would be a complete job stream with all control cards needed to assemble the APL\1130 system under DMS Version 2. The source deck found on IBM1130.org, however, cannot be the “complete job stream” mentioned in the User’s Manual: “Newbie mode on”: It runs lots of assemblies and finally some utilities, but neither the installation card decks nor an otherwise usable APL\1130 system result from running it. “Newbie mode off”.

The source deck obviously was modified somehow from the original version on the tape. An explanation for this could be similar to what I already assumed for the binary distribution: To minimize the amount of cards that needed to be stored and to avoid confusion about which cards to use, one simply removed the cards that didn’t match the hardware configuration in use. It’s even conceivable that documentation accompanying the tape recommended this.

Additionally, one could have been tempted to remove everything considered “static” and/or “transient”, i.e. the sources for the coldstart cards. Although needed to get APL\1130 installed and to IPL it, these cards don’t contribute anything to the system’s functionality and are completely nonexistent when it’s running.

Looking at it from this point of view, it becomes clear immediately that we have here a source deck matching an IBM 1130 configuration with a 2501 card reader, minimized according to the above assumptions. The components removed from the original deck add up to a difference of about 1,000 – 1,500 cards, which could well have been worth the effort of sorting them out.

In fact, the deck contains the complete sources to create the APL, IPL and DIR decks (comprising the full APL\1130 system!), plus the sources of the utilities needed to dump and restore them using a 2501-based configuration, but without the sources for the coldstart cards and the 1442-specific utilities. Also missing are the control cards needed by the dump utility.

While the control cards for the dump utility can easily be recreated, the 2501 coldstart card is a problem: It cannot even be taken from the binary distribution, as this only contains the 1442 coldstart card (it’s really a pity that the IBM 1130 installation from which the binary distribution originated was a 1442-based configuration, while the one the source deck came from obviously was 2501-based).

So, the 2501 coldstart card is really missing. No disassembler, nothing at all, can bring it back, short of rewriting it, which probably wouldn’t be too complicated when using a disassembly of the 1442 coldstart card as a starting point.

For the time being, until the 2501 coldstart card pops up from somewhere or gets rewritten by someone, I’ve put workarounds in place to get the job done without needing that card. These workarounds use the fact that the IBM 1130 simulator allows swapping the card reader between 1442 and 2501 mode on the fly. It should be noted, however, that the use of this capability of the simulator makes it impossible to execute the build procedure on a real IBM 1130 (unless there existed a configuration with both a 1442 and a 2501 reader at the same time, and I don’t know whether that was technically possible at all).

To make a long story short: Analogous to the list of components that, as confirmed by the User’s Manual, make up the binary distribution in “Binary Distribution” on page 9, I’m trying to provide here a list of components that the complete source distribution might have consisted of:

 Source of the APL\1130 system, consisting of 12 (from an assembler standpoint independent) programs, each of them wrapped in an envelope which writes it to one or more disk areas when executed under DMS. These areas build a compact block on the disk and would later be dumped as a whole to create the APL deck. One of these areas, the Disk I/O Routine DADSK, is used as the IPL sector of the APL runtime disk.
 Source of a program that creates an empty user and workspace directory structure on disk when executed under DMS. That directory structure would later be dumped to create the nine-card DIR deck.
 Sources for the four coldstart cards (1442, 2501, APLIPL and APLIPLPR), each probably enveloped such that execution under DMS would punch it directly to cards (pure guesswork, none of these sources is available).
 Sources of the standalone 1442 and 2501 loaders, each wrapped in an envelope which punches it to cards when executed under DMS.
 Sources of a standalone “1442 dump utility” and a standalone “2501 dump utility”, each wrapped in an envelope which punches it to cards when executed under DMS. It should be noted that the dump utilities depend on the card reader’s device type, because they need to read a control card specifying what they should dump and to which location on the target disk the resulting card deck is to be restored.
 Three dump utility control cards: These direct the dump utility to punch the IPL, DIR or APL deck.

This list is based on assumptions noted earlier in this chapter and on experiences made while experimenting with the incomplete source deck. Compared with the binary distribution list, the only new components are the device-dependent dump utilities and control cards, which are needed to create the IPL, APL and DIR decks for the binary distribution.

The following table gives an overview of which components are available through the binary (aplsetup.zip) and the source (apl_source.zip) distributions on IBM1130.org and their relevance to the build process:

Component                          Binary  Source  Build Process Remarks
------------------------------------------------------------------------
APL\1130 system                      √       √     from source
Empty User and Workspace Directory   √       √     from source
IPL program                          √       √     from source
APL coldstart card                   √             not used
privileged APL coldstart card        √             not used
1442 coldstart card                  √             use binary version
2501 coldstart card                                use 1442 binary version
                                                     and coldstart workarounds
1442 loader                          √             use binary version
2501 loader                                  √     not used, but built from
                                                     source for completeness
1442 dump utility                                  not used
2501 dump utility                            √     from source, usable with
                                                     coldstart workarounds
Three dump utility control cards                   reconstructed

APL\1130 Disks

I’m using two kinds of disks with APL\1130:

APL runtime disk

In “Binary Distribution” on page 9 the notion of an APL runtime disk was introduced, which is created by loading the three installation decks onto an empty, initialized IBM 2315 disk cartridge. This is considered to be the standard disk for using APL\1130. It provides space for the APL\1130 system itself and 40 saved workspaces. This disk was used in day-to-day operations in APL\1130’s time.

APL development disk

Developing and testing APL\1130 code means assembling the new code on a DMS system and then testing it on an APL\1130 system. If the APL\1130 system is on a dedicated APL runtime disk, each development and test cycle would involve punching a new APL deck on the DMS system and then loading it to the APL runtime disk. On real IBM 1130 systems this would have been quite a time- and material-consuming (400+ cards) process. When running on the simulator it’s still not very efficient to work this way.
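To put a rough number on the card consumption of one such cycle, assuming the 1442-configuration deck sizes from “Binary Distribution” on page 9 (coldstart card plus standalone loader plus the APL system deck):

```python
# Rough estimate: cards punched per development/test cycle when working
# with a dedicated APL runtime disk (1442 configuration assumed).
COLDSTART_CARD = 1    # 1442 coldstart card
LOADER_CARDS   = 8    # standalone 1442 loader
APL_DECK_CARDS = 438  # the APL system deck itself

cards_per_cycle = COLDSTART_CARD + LOADER_CARDS + APL_DECK_CARDS
print(cards_per_cycle)  # 447
```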

It is possible to have DMS and APL coexist on the same disk for development purposes, when a few requirements are met:

 Provision must be made to enable swapping sector 0 (the IPL sector) of the combined disk between the APL\1130 IPL sector (as created by the IPL sector deck) and the DMS IPL sector. This can easily be done by creating a fourth “installation” deck containing the DMS IPL sector, analogous to the IPL sector deck of APL\1130.

 Filler files must be created to extend UA such that the boundary between UA and WS is at least as far above sector /340 as there is space between the last real file in UA and sector /280. This requirement comes from APL\1130’s way of saving workspaces symmetrically around its system code and work area, which occupies the center of the disk platter.

Using the DMS disk distributed with the IBM 1130 simulator, avoiding having the last real file in UA located lower than necessary, and filling UA to move the WS boundary to sector /3E0 allows saving up to 10 workspaces before DMS working storage starts overwriting APL workspaces and APL workspaces start overwriting DMS user area files.
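The symmetry requirement can be expressed numerically. The sketch below is an illustration only; the sector addresses are the ones used here (system area /280–/340, last real UA file ending at /1E0, UA/WS boundary at /3E0):

```python
# Illustration of APL\1130's symmetric workspace placement around the
# system area (sectors /280-/340): the room above /340 (up to the UA/WS
# boundary) must be at least as large as the room below /280 (down to
# the end of the last real file in UA).
SYSTEM_LOW, SYSTEM_HIGH = 0x280, 0x340

def coexistence_ok(ua_end: int, ws_boundary: int) -> bool:
    """True if DMS and APL can safely coexist with this UA/WS split."""
    room_below = SYSTEM_LOW - ua_end        # workspace room below the system
    room_above = ws_boundary - SYSTEM_HIGH  # workspace room above the system
    return room_above >= room_below

# Values of the APL development disk: UA filled up to /1E0, boundary /3E0.
print(coexistence_ok(0x1E0, 0x3E0))  # True
```

With the UA filled only up to /110, for example, the /3E0 boundary would no longer be sufficient, which is exactly why the filler files are needed.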

A disk constructed such that APL and DMS can coexist is called “APL development disk”. Depending on which system’s sector 0 actually is installed on an APL devel- opment disk, it is in “APL mode” or in “DMS mode”. Thus it provides a convenient way to develop and test APL\1130 code without the need to punch a single card.

Disk Layouts

The following table shows the layouts of the APL disks:

Sector     APL runtime disk         APL development disk      Filename (LET)
address
000        APL IPL sector           APL or DMS IPL sector
001        not used                 standard DMS layout
                                      (sectors /020-/11F)
110        room for 19 workspaces   start of DMS user area
1BE                                 2501 dump utility         DMP
1C1                                 not used                  FILLR
1E0                                 room for 5 workspaces     WSBLW
280        APL\1130 system work     APL\1130 system work      see “Disk Map”
           area and code            and code area               on page 45
2EC        user and workspace       user and workspace        DADRU
           directory                directory
2EE        MSP swap out area        MSP swap out area         TMTRX
2F4        not used                 not used                  NUSD2
340        room for 21 workspaces   room for 5 workspaces     WSABV
3E0                                 DMS working storage
5E0        unused
63F        End of Disk

Why create Installation Card Decks at All?

It should be noted that once the APL system and the empty directories have been compiled and written to disk, an APL development disk in APL mode is fully equivalent to an APL runtime disk (i.e. it is not limited to holding a maximum of 10 workspaces) as long as it is never switched back to DMS mode. This makes it possible to install APL\1130 from source by using an APL development disk in DMS mode to assemble the APL and empty directory sources and then switching it once and forever to APL mode. Building APL\1130 this way eliminates the need to punch the binary APL and DIR installation decks and thus also eliminates the problems caused by the missing 2501 coldstart card.

Why don’t I promote this as the method of choice to build APL\1130 from source and completely ignore the problems around the 2501 dump utility and its missing coldstart card? Simply because I wanted to get as close as possible to the assumed original process, which for sure included, at least as an option, the final step of punching the IPL, APL and DIR decks.

Was something like an APL development disk ever used in reality?

I didn’t find any hint in the source or on the web that an APL development disk or a similar construct was ever used in reality. Such a construct makes it necessary that

the “write to disk” envelopes of the APL programs write below the DMS File Protected Address ($FPAD), which is normally prevented by DISK1. I get around this by storing the desired value to $FPAD using the simulator’s deposit command whenever needed. But as far as I could see from a quick look at the hardware manuals, the real IBM 1130 didn’t provide a direct way to store an arbitrary value at an arbitrary core address from the console. If an “APL development disk”-like construct that keeps the compiled code in UA was used, then there also had to be a method to obtain the proper write permission (perhaps a patch to make DISK1 skip the check against $FPAD, trivial enough not to get mentioned anywhere? Or did I miss some special console switch setting allowing writes below $FPAD when using DISK1?).

On the other hand: When assembling and executing the envelope programs just as suggested in the source deck, the modules get placed unprotected into DMS working storage. I did several tests lowering or raising the working storage boundary such that the APL\1130 code still remains in WS, and every time it either gets hit from “below” by the output modules generated by the assembler or from “above” by fragments of auxiliary files (from listings, code parsing, etc.) created by the assembler.

If the APL\1130 developers really placed the modules in DMS working storage, they were either extremely lucky that their code didn’t get hit by other uses of WS or true geniuses (or both) to find an unused gap in WS large enough to accommodate the APL\1130 code. So, if the (incomplete) jobstream from the source distribution didn’t suggest that the original process stored the compiled and preloaded programs “in the wild” (amidst working storage), I’d never have assumed that they really might have done it this way.

Besides using some kind of an APL development disk on the IBM 1130 or storing the programs “in the wild”, another possibility for the original development cycle comes to mind: As is already known from the resurrection work on DMS done by Brian and Norm, there existed a cross assembler for the 1130 on System/360. Given additionally the fact that the original source distribution tape was, according to the User’s Manual, written using the S/360 DEBE utility, one could assume that all development work was done on an S/360 using the cross assembler. Although I don’t know for sure, I assume that it was possible to attach an IBM 2310 drive to the S/360. Then the development cycle would have been as simple as inserting an IBM 2315 disk cartridge into the 2310, running the source through the cross assembler and some kind of preloader, and writing it directly to the disk cartridge (the envelope code might then have been S/360 code actually doing the preload and disk write operation, for example). To test it, one then would have removed the cartridge from the 2310 and inserted it into the 1130’s disk drive.

Anyway, this now tends to become philosophical. One could easily invent yet another dozen possible scenarios for the original development process.

My solution is based on the concept of an APL development disk and, most importantly: It works. More cannot be expected as long as no additional information on the original process can be found. But perhaps one of the original developers is reading this and would kindly comment on how they really did it?

Components of the APL\1130 Development Environment and their Usage

A ready to use APL\1130 development environment is contained in folder APL_devel of archive APL_1130_Development_Environment.zip. To use it, simply unpack the APL_devel folder to an arbitrary folder on your Windows system. It consists of the following components:

src        This folder contains the source from apl_source.zip as found on IBM1130.org. All monitor and assembler control cards have been removed and the job stream has been split into one source file per program for easier handling. The file names have been created from the fixed part of the sequence numbers (columns 73-75 or 73-76). Except for the removal of the monitor and assembler control cards, this is the unmodified source from IBM1130.org.

modsrc     Modified or added source files should be placed in this folder to prevent changing the original source. As delivered there are two files in modsrc:

- File DMP_without_END.asm is file DMP.asm from the src folder without its END statement. Depending on which END statement one adds to it at assembly time, either the original standalone 2501 dump utility is created or a version that can be called from DMS UA (DSF format). As the DSF version can be used without the missing 2501 coldstart card, it removes the dependency on that coldstart card when dumping the IPL, APL and DIR decks from an APL development disk operating in DMS mode.

- File APSC_with_hardcoded_asm_date.asm is a modification of file APSC.asm (the APL System Command processor) from the src folder. It adds a hardcoded assembly date (05 SEP 2011) to the APL signon message and is used as an example of how to assemble and test a source modification. Feel free to replace the hardcoded date with your own assembly date if you dare (search for sequence numbers starting with VER0 to find it).

src_regressed_to_level_of_binary_dist
           The code resulting from assembling the source in folder src isn’t identical to the binary distribution found in file aplsetup.zip on IBM1130.org: The source is at a higher maintenance level than the binary distribution. In “APL\1130 Version Considerations” on page 29 the differences are explained in detail. Folder

src_regressed_to_level_of_binary_dist contains the regressed source files APIN.asm, APIX.asm and APOV.asm that need to be assembled instead of the corresponding ones in folder src to obtain exactly the same code as in the binary distribution.

In addition this folder contains the following files to support running the assemblies and to document the differences between the regressed source and the one distributed in aplsetup.zip:

assemble_all_regressed DO script for the IBM 1130 simulator to assemble and write to disk the complete regressed APL\1130 system. The assembler listing will be saved as assemble_all_regressed.lst.

assemble_all_regressed.deck Card deck used by the assemble_all_regressed script.

regression_patch A patch in Linux/Unix syntax. Applying this patch to the APIN.asm, APIX.asm and APOV.asm versions from the src folder will create the respective regressed versions. Thus this patch documents all differences between the two versions.

APL_devel.dsk An APL development disk in DMS mode. The user area (UA) files listed in the “Filename” column of the “Disk Layouts” table on page 15 are initialized to binary zeros except:

DMP The 2501 dump utility in DSF format, created by assembling file DMP_without_END.asm from the modsrc folder with an alternate END statement and DUP *STOREing it in UA. As mentioned in chapter “Source Distribution”, all APL\1130 programs are enveloped, which means that when loaded and executed under DMS, not the program itself gets control, but an envelope code which writes the program to its target location. As the standalone utilities’ target location is the card punch, they punch themselves to a card deck when executed under DMS. By changing the assembler END statement of a utility to point to its real entry point instead of the envelope’s one, it can be made DMS callable, and exactly that has been done with the dump utility stored in UA file DMP.

This version of the dump utility is used to punch the IPL, APL and DIR decks from their respective locations on the APL development disk in DMS mode to be independent from the missing 2501 coldstart card.

Of course, this version of the dump utility isn’t usable for an APL runtime disk or an APL development disk in APL mode, and, to be on the safe side, also not usable to dump sector 0 of the DMS system it’s running under.

coldstart_card The 1442 coldstart card. It has been extracted from the IPL sector deck (file aplload1.bin) of the binary distribution (file aplsetup.zip from IBM1130.org), because its source isn’t available.

1442_loader The eight card standalone 1442 loader deck. It has been extracted from the IPL sector deck (file aplload1.bin) of the binary distribution (file aplsetup.zip from IBM1130.org), because its source isn’t available.

2501_loader The 14 card standalone 2501 loader deck created by assembling and executing file 2501.asm from the src folder.

Due to the missing 2501 coldstart card the 2501 loader isn’t used at all in the APL development environment. It is provided for completeness only.

2501_dmp The 14 card standalone 2501 dump utility deck created by assembling and executing file DMP_without_END.asm from the modsrc folder with its original END statement (which is identical to assembling the original source DMP.asm from the src folder).

Due to the missing 2501 coldstart card the 2501 dump utility isn’t used at all in the APL development environment. It is provided for completeness only.

APL.DMP_control_card Control card used to instruct the dump utility to punch the APL deck.

DIR.DMP_control_card Control card used to instruct the dump utility to punch the DIR deck.

IPL.DMP_control_card Control card used to instruct the dump utility to punch the IPL deck.

DMS2APL IPL sector deck used to switch the APL development disk from DMS to APL mode. It is created by concatenating the 1442 coldstart card, the standalone 1442 loader and the IPL deck created by assembling src\APDK.asm and then dumping UA file DADSK from the APL development disk using the DSF version of the 2501 dump utility. This file is in fact the IPL sector installation deck, created from source.

To switch the APL development disk from DMS to APL mode open a command prompt window, change directory to the folder in which the APL development environment has been placed and enter

ibm1130 load_deck DMS2APL APL_devel.dsk

APL2DMS IPL sector deck used to switch the APL development disk from APL to DMS mode. It is created by concatenating the 1442 coldstart card, the standalone 1442 loader and the deck created by dumping sector 0 of the APL development disk (in DMS mode) using the standalone version of the 2501 dump utility.

To switch the APL development disk from APL to DMS mode open a command prompt window, change directory to the folder in which the APL development environment has been placed and enter

ibm1130 load_deck APL2DMS APL_devel.dsk

dump         DO script for the IBM 1130 simulator to run the DSF version of the dump utility under DMS. This script is called by makedeck.bat to create the IPL, APL or DIR deck as part of building the IPL sector, APL system or Empty Directories installation deck. The script takes the name of the dump utility control card as first and the name of the disk to dump from as second parameter. The script will request two DMS monitor control statements to be typed in at the console.

load_deck    DO script for the IBM 1130 simulator to load an installation deck by placing it into the card reader and booting. The script takes the name of the installation deck as first and the name of the disk onto which to load that deck as second parameter.

assemble     DO script for the IBM 1130 simulator to assemble and execute a single source file. The name of the source file is to be passed as first and only argument to that script. The assembler listing will be saved under the source name appended with .lst.

assemble_all DO script for the IBM 1130 simulator to assemble and write to disk the complete APL\1130 system (except the standalone utilities, of course) from the original source in folder src. The assembler listing will be saved as assemble_all.lst.

Running this script and then switching the disk to APL mode is all that needs to be done to compile the whole APL\1130 system and start using it. If no code development or modification is intended, there is no need to worry about staying within 10 workspaces to keep the disk switchable back to DMS mode.

assemble.deck
assemble_all.deck
             Card decks used by the assemble and assemble_all scripts.

jobcard.cntl
exec.cntl    Bear with me, I’m an MVS guy and need my job control. This is of course no JCL, but monitor control cards used by the decks above.

asm.cntl
type.cntl    Additional monitor control cards used by the decks above.

ibm1130.exe  The IBM 1130 simulator.

dd.exe       A GPL licensed Windows version of the well known Unix/Linux dd command. It’s used here to extract parts from card decks, as for example the 1442 coldstart card from the binary aplload1.deck.

dd_copying.txt
             GPL boilerplate for the dd command.

makedeck.bat Windows script to create an installation deck. The name (IPL, APL or DIR) of the deck to be created is to be passed as first, the desired filename of the deck as second parameter. To execute the script the APL development disk must be in DMS mode. The script runs the IBM 1130 simulator to execute the DO script “dump” (note that the “dump” script will request two DMS monitor control statements to be typed in at the console). The resulting deck is then appended to the 1442 coldstart card and the 1442 loader. For example entering

makedeck APL aplload3.bin

creates the APL system deck with the same filename as used in the binary distribution archive aplsetup.zip found on IBM1130.org, i.e. an installation deck created this way can directly replace the corresponding deck in aplsetup.zip if one wants to distribute a binary version of the assembled APL\1130 system.

Dump Utility Control Cards

The dump utility used to punch the IPL, DIR and APL decks reads a control card that defines the deck to be dumped. None of these control cards are contained in the source deck found in archive apl_source.zip on IBM1130.org.

Fortunately the source contains a version of the standalone dump utility and of the standalone loader, i.e. the programs punching and reading these decks. And even more fortunately, the dump utility source has a comment section describing the control card layout and the loader source has a comment section describing the format of the cards punched.

Control Card Format

The control card for the dump utility is formatted as follows:

----+----1----+----2----+----3----+----4----+----5----+----6----+----7----+----8
cccsssnnnttt

where

ccc  is the three character sequence number prefix (i.e. the “ID”) of the cards to be punched.
sss  is the sector address defining the begin of the area to be dumped (i.e. the source starting sector).
nnn  is the number of sectors to be dumped.
ttt  is the sector address defining the begin of the area at which the data is to be restored when later loading the dumped deck using the standalone loader (i.e. the target starting sector).

The sss, nnn and ttt fields are the BCD encoded hexadecimal representations of the respective values. All punches in columns 13 to 80 are ignored.
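To illustrate the layout, here is a hypothetical Python helper (not part of the original toolchain; the function name and validation are mine) that composes such a control card image:

```python
# Hypothetical helper composing a dump utility control card image from
# its four fields. Not part of the original toolchain.
def make_control_card(ccc, sss, nnn, ttt):
    if len(ccc) != 3:
        raise ValueError("card ID must be exactly three characters")
    # sss, nnn and ttt are punched as three-digit hexadecimal numbers.
    card = f"{ccc}{sss:03X}{nnn:03X}{ttt:03X}"
    return card.ljust(80)  # columns 13-80 are ignored by the utility

print(make_control_card("APL", 0x2B4, 0x38, 0x2B4)[:12])  # APL2B40382B4
```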

Deck Card Format

Each card punched by the dump utility is formatted as follows:

----+----1----+----2----+----3----+----4----+----5----+----6----+----7----+----8
<------data ------>tisn cccnnnnn

where

data   Columns 1-64 are 48 data words in IBM 1130 card data format (CDF).
t      specifies the sector address to which the data is to be restored, 12-bit binary encoded.
i      is an index in sector t, 12-bit binary encoded.
sn     is the sequence number, 24-bit binary encoded.
ccc    is the three character ID as specified on the control card used to punch the deck.
nnnnn  is the sequence number in BCD encoding.

Punches in columns 69 to 72 are ignored.
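As an illustration, here is a hypothetical Python sketch (names mine) of pulling these trailer fields out of one card, assuming the 80 column images are already available as 12-bit integers (how punched rows map to bits is left aside here):

```python
# Sketch of decoding the trailer of one dumped card, given its 80 column
# images as 12-bit integers. Not part of the original toolchain.
def decode_trailer(cols):
    assert len(cols) == 80
    t  = cols[64]                     # column 65: target sector address
    i  = cols[65]                     # column 66: index within sector t
    sn = (cols[66] << 12) | cols[67]  # columns 67-68: 24-bit sequence no.
    return t, i, sn

# First card of a hypothetical DIR dump: restore to sector /2EC, index 0.
card = [0] * 64 + [0x2EC, 0x000, 0x000, 0x001] + [0] * 12
print(decode_trailer(card))  # (748, 0, 1)
```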

Control Card Reconstruction

From the source it can easily be seen that the APL programs and the user and workspace directory occupy the same areas on the disk on which they are generated as on the disk from which they are used. Thus the control cards for the APL and the DIR decks each must have identical sss and ttt values. These can directly be taken from

the source, but I nonetheless crosschecked them with the binary decks to be sure that they are correct (although these values are not easily extractable from the binary code itself, they are present in the respective decks by simply decoding columns 65 and 66 of the first card).

The nnn values of the APL and DIR control cards also have been taken from the source: For the user and workspace directory it can directly be seen that it’s two sectors in length. For the APL deck one simply needs to subtract the lowest envelope start write address from the highest one, add the length of the program located at the highest address to this difference, and finally round this up to a full sector.
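The APL deck computation can be sketched as follows (Python, with made-up numbers; the assumption of 320 data words per disk sector is mine):

```python
# The nnn computation for the APL deck, sketched with made-up numbers.
# 320 data words per 1130 disk sector is my assumption.
SECTOR_WORDS = 320

def apl_deck_sectors(lowest_start, highest_start, highest_length):
    words = (highest_start - lowest_start) + highest_length
    return -(-words // SECTOR_WORDS)  # round up to full sectors

# A hypothetical span of 17900 words would give /038 (56) sectors.
print(apl_deck_sectors(0, 17000, 900))  # 56
```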

Reconstruction of the IPL sector deck control card is easy: The nnn and ttt values are obviously one and zero. The sss value is the envelope start write location of the DADSK (APDK) program, which is not only the Disk I/O Routine of the APL\1130 system, but also serves as the IPL sector program.

This leads to the following three dump utility control cards:

To dump the IPL deck:

IPL2C4001000

To dump the DIR deck:

DIR2EC0022EC

To dump the APL deck:

APL2B40382B4
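These three cards can be decoded back into their fields with a small Python sketch (parser name is mine), following the control card format given earlier:

```python
# Decoding the three reconstructed control cards per the format given in
# "Control Card Format". Not part of the original toolchain.
def parse_control_card(card):
    ccc = card[0:3]
    sss, nnn, ttt = (int(card[i:i + 3], 16) for i in (3, 6, 9))
    return ccc, sss, nnn, ttt

for card in ("IPL2C4001000", "DIR2EC0022EC", "APL2B40382B4"):
    ccc, sss, nnn, ttt = parse_control_card(card)
    print(f"{ccc}: {nnn} sector(s) from /{sss:03X}, reload at /{ttt:03X}")
```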

Generate the APL\1130 Development Environment

Folder make_APL_devel of archive APL_1130_Development_Environment.zip contains the tools needed to generate the APL\1130 development environment, using a standard DMS disk, the APL\1130 source from the source distribution (apl_source.zip) and the IPL sector deck aplload1.bin from the binary distribution (aplsetup.zip) as a starting point.

The generated environment is identical to the ready to use one provided in folder APL_devel of archive APL_1130_Development_Environment.zip. Thus running the generation process is not necessary if one just wants to use the APL development environment to assemble and build APL\1130.

In fact, only the components

- APL_devel.dsk
- coldstart_card
- 1442_loader
- 2501_loader
- 2501_dmp

- DMS2APL
- APL2DMS

of the APL development environment are generated.

Other components (scripts, card decks, control cards for the dump utility, modified DMP and APSC sources) can of course not be generated; they were created manually.

The Windows dd command (dd.exe) is copied from http://www.chrysocome.net/dd.

The IBM 1130 simulator (ibm1130.exe) and the original DMS disk (dms.dsk) are extracted from archive ibm1130.zip, the APL\1130 sources are extracted from archive apl_source.zip from IBM1130.org.

Note that the 1442 coldstart card and the 1442 standalone loader, both extracted from aplload1.bin, need to be taken from the binary distribution because the source of the 2501 coldstart card isn’t contained in the source deck found in apl_source.zip.

Once the source of the 2501 coldstart card (or alternatively the sources of all three: the 1442 coldstart card, the 1442 standalone loader and the 1442 standalone dump utility) becomes available, a 100% APL\1130 system generation from source will be possible.

It should also be noted that the sources for the APLIPL and APLIPLPR coldstart cards are missing too. Consequently the newly generated system cannot be booted without using these cards from the binary distribution. As these are included in binary form in the IBM 1130 simulator this isn’t very visible, but it still needs to be stated for the sake of completeness.

So, to build the complete APL\1130 system from source (system generation and execution) the minimum requirement would be that the sources of all three coldstart cards become available: The 2501 coldstart card, the APLIPL coldstart card and the APLIPLPR coldstart card.

Folder make_APL_devel contains all components of the APL development environment, except those to be generated as listed above. Additionally it contains the following components that are needed for the generation process only but not for using the APL development environment:

nnnn_null_cards    These files exist for several 4 digit values of nnnn. They represent card decks of nnnn cards containing 80 NUL character punches (12-0-1-8-9) each, used to initialize various UA files to binary zeros using the *STOREDATAE function of the DUP utility. These decks were created using the punches.exe utility.

aplload1.bin       IPL sector deck from the binary distribution. The 1442 coldstart card and the 1442 loader are extracted from this deck and used to work around the missing 2501 coldstart card.

assemble_2501.deck Card deck used to assemble and punch the standalone 2501 loader. This is done for completeness only. Due to the missing 2501 coldstart card the 2501 loader doesn’t get used at all in the APL development environment. The 1442 loader from the binary distribution is used instead.

assemble_DMP.deck  Card deck used to assemble the 2501 dump utility. Two versions are created: A standalone version is punched to cards, while a DMS callable version in DSF format is stored in UA.

To generate the APL development environment the standalone version is used to dump the IPL sector (sector 0) of the APL development disk in DMS mode to enable later swapping back from APL to DMS mode.

Due to the missing 2501 coldstart card the 1442 coldstart card is used to boot the 2501 standalone dump utility. After booting, the card reader type is changed on the fly from 1442 to 2501 mode to enable the utility to read its control card. This reader type change makes the utility behave a bit shakily, which is why the APL development environment is designed to be completely independent of it by using the DMS callable version only. The DMS version of the dump utility works absolutely reliably.

Nonetheless, and for completeness only, the standalone 2501 dump utility deck is part of the APL development environment. As with the 2501 loader, it doesn’t get used at all in the APL development environment.

DMP.deck           Card deck used to boot the 2501 dump utility using the 1442 coldstart card.

dump_DMS_sector_0.DMP_control_card
                   Control card used to instruct the standalone 2501 dump utility to punch sector 0 of the APL development disk in DMS mode for later relocation to UA file DMS. This extraction is necessary to be able to switch the APL development disk from APL to DMS mode.

dms.dsk            Original DMS disk as copied from the IBM 1130 simulator distribution on IBM1130.org. This is the starting point for the creation of the APL development disk.

init_APL_devel_disk and init_APL_devel_disk.deck
                   An IBM 1130 simulator DO script and a card deck used to create the files in UA as listed in the “Filename” column of the “Disk Layouts” table on page 15. To provide enough free space in UA, FORTRAN is removed. Both versions of the 2501 dump utility and the standalone 2501 loader are generated by this script using the decks assemble_2501.deck and assemble_DMP.deck.

make_APL_devel_environment.bat
                   Windows script to run the complete generation process. It takes the folder name of the new APL development environment as first and only argument. This folder must not exist already; it is created by the script.

run_standalone_dump
                   DO script for the IBM 1130 simulator to run the standalone 2501 dump utility. Because the 2501 coldstart card isn’t available, the 2501 dump utility is loaded using the 1442 coldstart card. This forces a change of the card reader type on the fly from 1442 to 2501 once the utility is loaded, so that it can read its control card from the 2501 reader.

This change of reader type is a bit shaky, which is the reason why the standalone 2501 dump utility is used only to dump the IPL sector of the APL development disk in DMS mode, for the purpose of creating an IPL sector deck usable to switch the disk back from APL to DMS mode when needed. All other dump operations are done using the DMS callable version of the dump utility, which operates absolutely reliably.

void_fortran.cntl  Monitor and DUP control statements to remove FORTRAN from the DMS disk. This is done to provide the space needed to allow for DMS and APL coexistence on the same disk.
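For illustration, here is a Python sketch of what a punches.exe-style generator for the nnnn_null_cards decks described above might look like. The 2-bytes-per-column little-endian deck layout and the row-to-bit mapping used here are assumptions about the simulator’s binary deck format, not documented facts:

```python
# Sketch of a punches.exe-style generator for null-card decks.
# Assumptions (mine): each of the 80 columns is stored as a 16-bit
# little-endian value holding the 12 punch rows, row 12 in bit 11 down
# to row 9 in bit 0.
ROW_BIT = {"12": 0x800, "11": 0x400, "0": 0x200, "1": 0x100,
           "2": 0x080, "3": 0x040, "4": 0x020, "5": 0x010,
           "6": 0x008, "7": 0x004, "8": 0x002, "9": 0x001}

def null_deck(n_cards):
    nul = 0
    for row in ("12", "0", "1", "8", "9"):  # the 12-0-1-8-9 NUL punch
        nul |= ROW_BIT[row]
    card = nul.to_bytes(2, "little") * 80   # 80 identical NUL columns
    return card * n_cards

print(len(null_deck(4)))  # 640 bytes: 4 cards x 80 columns x 2 bytes
```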

To generate an APL development environment use the following procedure:

- Extract folder make_APL_devel from archive APL_1130_Development_Environment.zip to an arbitrary location on your Windows system.
- Open a command prompt window and change directory to the extracted make_APL_devel folder.
- Enter the command

make_APL_devel_environment target_folder

where target_folder is the folder to place the newly created APL development environment in. This folder must not exist already, it will be created by the script.

The generation process calls the IBM 1130 simulator a few times. Some of these simulator sessions require manual input, which can be one of the following:

- Enter // PAUS monitor control statement, followed by return
- Enter // XEQ monitor control statement, followed by return
- Enter // XEQ DMP monitor control statement, followed by return
- Push the IMM STOP button

The request for manual input is always announced by a console message.

For the requests to enter a monitor control statement, the “lights” on the GUI stop flashing and the wait state indicator comes up, indicating clearly that the input is needed now. Due to the way the scripts work, it may well be that the message requesting an input is followed by a few more lines of output before the system enters the wait state. So, examine the console messages carefully when the system enters a wait state to identify which input is requested. Then enter the statement exactly as requested.

The request to push IMM STOP occurs only once in the generation process. The corresponding console message will instruct you to wait a few seconds and then press the button once. The system will not enter a wait and the lights keep on flashing, so there is no clear indication of the “correct” moment to press the button… just wait a good few seconds (say 15 to 30), then push the button.

The request to push IMM STOP is issued whenever the standalone 2501 dump utility is used. It enables the script running the utility to change the reader type on the fly after having booted it using the 1442 coldstart card, which is necessary due to the missing 2501 coldstart card.

During my experiments to find out how to circumvent the need for the 2501 coldstart card to boot the 2501 standalone dump utility it became clear that the whole thing is a bit shaky. Especially when it comes to dumping more than one or two sectors (i.e. building the APL system deck), incomplete decks sometimes result for whatever reason. On the other hand, it seems to work stably for the one single-sector dump that must be done standalone: The extraction of the IPL sector from the APL development disk in DMS mode, for use in an IPL sector deck that enables switching the disk back from APL to DMS mode as needed. All other dumps are obtained using the DMS callable version of the dump utility.

The shaky behavior of the standalone 2501 dump utility when used with the 1442 coldstart card is the main reason why I banned it from the APL development environment. It is used only on the single occasion mentioned above during the generation of the environment. In the APL development environment itself only the DMS callable version of the utility (DSF program DMP in UA) is used, which is absolutely reliable.

File make_APL_devel_environment.log in the APL_1130_Development_Environment.zip archive is a console log from running the generation process and can be used as a reference for how it should look, in case problems occur or things are unclear.

Once the generation process has completed successfully, the specified target_folder will contain a ready to use APL development environment which is identical to the one provided in folder APL_devel of archive APL_1130_Development_Environment.zip.

IBM 1130 Simulator Issues

Card Reader

APL\1130 supports reading statements and commands from an IBM 1442 or an IBM 2501 card reader attached to the IBM 1130 system. The IBM 1130 simulator supports both card reader types, switchable by a “set cr type” simulator command, with type being 1442 or 2501 and a default of 1442.

With the (as of August 2011) current version 3.3-1 of the IBM 1130 simulator, APL\1130 cannot read from an IBM 1442 card reader. An attempt to read cards from a 1442 reader using the )CARD system command results in the simulator throwing an

Invalid command, IAR: 00000002 (9000 S 0003 )

error. This problem is not specific to the version of APL\1130 assembled from source; it occurs with the binary distribution from archives aplpreview.zip or aplsetup.zip (as found on IBM1130.org) too.

The loadapl script used to install the binary distribution from archive aplsetup.zip reads a few )ASSIGN commands from the card reader as the final installation step, with the reader not being specified explicitly and thus defaulting to an IBM 1442. It terminates with exactly that same card reader error. It should of course be assumed that it did work correctly at the time it was created (Nov 19th, 2003). So, probably some version update to the IBM 1130 simulator between then and today broke 1442 support for APL\1130.

The circumvention is easy: Issue simulator command set cr 2501 before booting the APL\1130 disk.

In the case of the loadapl installation script from aplsetup.zip this command can be inserted anywhere before the boot -q -a -p dsk command (which boots the APL disk) but after the last boot cr command (which loads an installation deck and requires the reader to be a 1442).

In the case of the apl script from aplpreview.zip the set cr 2501 command can be inserted anywhere before the boot -a dsk command.

APL\1130 Version Considerations

The APL\1130 system has no command to display a version, a maintenance level or a build date, and there are no comments in the source that would tell at which level it is. It is totally unclear to me whether different builds existed over time (I assume so) and how one would be able to distinguish them.

The main purpose of this chapter is to analyze the relationship between the binary distribution as found in aplsetup.zip and the source distribution as found in apl_source.zip on IBM1130.org. These are called the “binary version” and “source version”, respectively, throughout this chapter.

Because the number of cards in the different installation card decks of the binary version exactly matches the figures given on page 3 of the User’s Manual, one could as a first guess assume that the binary version is the build documented by the User’s Manual, and thus the first published build of APL\1130 Release 2. But it will turn out later that this assumption is false.

For the source version not even such a first assumption can be made: It could be the source of the first build or of a later one, it could be in its original state or have been modified, and it could be identical to the binary version or not. Because the source doesn’t contain any comments concerning version or maintenance level, and because there is no release documentation at all, these and many more assumptions are possible. Only a comparison of the source against a reference source at a known level, or a comparison of the binary code resulting from assembling the source against a binary reference version at a known level, can help confirm or rule out a given assumption.

Unfortunately no reference versions exist. So the only thing that can be done is to compare the source and the binary version as exactly as possible and to draw a few conclusions about their relationship.

Code Analysis

At first sight the task seems simple: Just compare the installation decks of the binary version with those generated from the assembled source version. Clearly, if these decks were equal the versions would be identical. But, unfortunately, the decks are not identical at all. They don’t even contain the same number of cards. As will be shown later, however, from the decks not being identical it cannot be concluded that the source and the binary version are not identical.

Well then, next guess: Just compare the disk areas where the code gets loaded. Again it is clear that if these areas were identical the versions would be identical too. But, bad luck again, the disk areas are not identical. And, as with the card decks, the disk areas being identical is a sufficient but not a necessary condition for the versions being identical.

So, from comparing the card decks and the disk areas it cannot be concluded whether the source and the binary versions are identical or not.

The reason why one cannot simply compare the disk areas or the installation decks from the assembled source version with those from the binary version lies in the way the APL\1130 system is generated:

 Each APL source program is wrapped in an envelope. When executed under DMS the entry point of that envelope gets control and writes its contents (i.e. the APL program it contains) to a predetermined disk area. That location always starts at a sector boundary. The data written starts at the origin (load address) of the first word of the APL program. Its length is at least the assembled length of the program, in most cases rounded up to the full sector.

 Because the length of the programs written from core to disk is rounded up to a full sector in most cases, any garbage that might have been in core between the ends of the programs and the rounded lengths gets written to disk and becomes part of the respective decks.

In most cases the envelope code is located just after the last word of a program. Thus these sector round-up areas start with the envelope code followed by garbage until the end of the disk area written.

 The disk areas to which the APL programs are written are adjacent with one exception: Sector 2EA is unused. This means that any garbage left from previous usages in this sector of the disk becomes part of the APL deck.

 Not all data areas of the APL programs are initialized explicitly during assembly. Any garbage in these uninitialized data areas gets written to disk and becomes part of the respective decks.

 The dump utility doesn’t punch a card that would contain only words of binary zeros, probably to reduce the number of cards that need to be punched. A single word of garbage amidst an otherwise suppressible series of zeros causes that whole area (zeros plus garbage) to be punched. This means that, for exactly the same source, the size of the decks can vary significantly, simply depending on the amount of core and disk garbage picked up from the system driving the assemblies.

This makes clear that disk areas and installation decks generated using the same source will never be identical when the source wasn’t assembled under exactly the same conditions (garbage amounts, contents and location). The installation decks will not even have the same number of cards.

The binary version probably was generated in 1969 on a real IBM 1130 system that was permanently in heavy use, perhaps not power cycled for days, and using a disk that had already seen many other uses. On the other hand, the APL\1130 system generated today uses a purpose-made APL development environment on an IBM 1130 simulator with probably quite a clean core and a target disk area initialized to binary zeros. Obviously these are significantly changed conditions.

So it cannot be expected at all to get identical or even close to identical disk areas and installation decks, even if the binary and the source version were absolutely identical.

Note that the above is true for the APL system and the IPL sector deck and their respective disk areas only. The Empty Directories get 100% initialized by the envelope code before being written to disk and exactly fit two sectors (i.e. no gap for garbage), which makes this deck and disk area identical in each version (except for a dummy last card with arbitrary contents appended by the dump utility at the end of each deck).

How to shed some more light on the issue? A few statistics:

 The APL system deck from the binary version comprises 438 cards.

 With the current APL development environment (which initializes the target disk area to binary zeros) the APL system deck from the source version is 425 cards in size.

 When changing the target disk initialization from binary zeros to ones (i.e. changing from the least to the highest disk garbage density) I get 432 cards for the source version, still six cards less than the binary version.

The difference of seven cards between the all-zeros and the all-ones initialization is exactly the unused sector 2EA: One card in the dump utility format stores 48 words, thus dumping that full sector requires seven cards. Beyond identifying the data stored in this unused sector as garbage, global considerations like the number of cards in the installation decks don’t help to further analyze potential differences between the source and the binary version.
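The card arithmetic above can be sketched in a few lines of Python. This is only a sketch; it assumes, as described above, 48 data words per card, 320 words per disk sector, and that the dump utility suppresses cards on a per-card basis:

```python
WORDS_PER_CARD = 48     # dump utility format: 48 data words per card
WORDS_PER_SECTOR = 320  # IBM 1130 disk sector size in words

def cards_for_sector(words):
    """Cards needed to dump one sector, assuming the dump utility
    suppresses any card that would contain only binary zeros."""
    chunks = [words[i:i + WORDS_PER_CARD]
              for i in range(0, len(words), WORDS_PER_CARD)]
    return sum(1 for chunk in chunks if any(chunk))

all_zeros = [0] * WORDS_PER_SECTOR
all_ones = [1] * WORDS_PER_SECTOR
one_garbage_word = [0] * WORDS_PER_SECTOR
one_garbage_word[100] = 0x1234  # a single word of garbage mid-sector

print(cards_for_sector(all_zeros))         # 0: the sector is fully suppressed
print(cards_for_sector(all_ones))          # 7: six full cards plus a partial one
print(cards_for_sector(one_garbage_word))  # 1: one word forces a whole card
```

This reproduces the seven-card difference: an all-zeros sector 2EA costs no cards, while a sector with garbage in every card position costs seven.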

So, reliable statements can be made only after comparing the disk areas of the two versions word by word. Fortunately a first quick scan shows that the disk area sizes and structures are identical in both versions, which means that one can compare the words stored at exactly the same disk addresses with each other.

To correlate a word on disk with the source instruction from which it was assembled it is necessary to find out which source module the word belongs to and which core location it will get loaded to at execution time.

To facilitate finding the correct source module I’ve created files in DMS UA for each source module, mapping to the parts of the total “APL\1130 system work and code area” (as shown in “Disk Layouts” on page 15) its assemblies get written to. The respective filenames are shown in the detailed disk map on page 45 for sectors /2B4 to /2EB. This area contains the full APL\1130 code. Note that the IPL sector (sector 0) gets assembled to file DADSK (sector /2C4), which is inside this area and thus doesn’t need to be compared explicitly. So, to determine the relation between the binary and the source version, it suffices to compare the disk area ranging from sector /2B4 to /2EB. This area will be called the “code area” in the rest of this chapter.
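As an illustration, the word-by-word comparison of the code areas could be sketched like this. The read_code_area helper is hypothetical: it assumes a raw image file storing 320 little-endian 16-bit words per sector, sectors in ascending order, which is not necessarily the layout of the simulator's .dsk files:

```python
import struct

WORDS_PER_SECTOR = 320
FIRST_SECTOR, LAST_SECTOR = 0x2B4, 0x2EB  # the "code area": 56 sectors

def read_code_area(path):
    """Extract the code area from a raw disk image (layout assumed,
    see above): returns a tuple of 56 * 320 = 17920 words."""
    n_words = (LAST_SECTOR - FIRST_SECTOR + 1) * WORDS_PER_SECTOR
    with open(path, 'rb') as f:
        f.seek(FIRST_SECTOR * WORDS_PER_SECTOR * 2)
        return struct.unpack('<%dH' % n_words, f.read(n_words * 2))

def diff_words(area_a, area_b):
    """Yield (sector, word offset, word_a, word_b) for every mismatch."""
    for i, (a, b) in enumerate(zip(area_a, area_b)):
        if a != b:
            yield (FIRST_SECTOR + i // WORDS_PER_SECTOR,
                   i % WORDS_PER_SECTOR, a, b)
```

Counting the tuples yielded by diff_words over the code areas of the two versions is what produces the mismatch statistics discussed in this chapter.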

The detailed core map on page 44 provides an overview of the core layout to ease identifying the addresses the modules get loaded to. The map shows one box per assembly, drawn at its location in core, and filled with the following information:

 Upper left: Name as derived from columns 73 to 76 of the source (the ID part of the sequence numbers).

 Upper right: Name of the EQUate symbol that defines the origin (load address) of the assembly at APL\1130 run time. The value of this symbol is always the address at which the assembly’s box starts.

In most cases this address is identical to the address where the module is loaded under DMS before it gets written to disk by its envelope, which means that the addresses from the assembly listing are equal to the core addresses at APL runtime.

In a few cases (APDK/ASMDK, APT2/ASMT2, APTA/BEGIN, APCT/ASMCT) loading the modules at the same locations under DMS as at APL run time would cause DMS code to get overwritten in core. In these cases an EQUate symbol named ORG is defined to provide for an offset. The value of this symbol needs to be added to the addresses in the assembly listings to get the runtime core addresses. Note that this symbol is defined as a negative value, which is why it needs to be added to the assembled addresses and not subtracted as one would assume at first sight.

 Lower left: Name of the EQUate symbol defining the overlay number of the assembly. Most of the APL assemblies are organized in an overlay structure that is managed by table OVLST in the ASMCT (ctray) assembly. The entries in this table are addressed by an “overlay number” which is a multiple of four, as each table entry is four words in length.

 Lower right: The overlay number of the assembly, i.e. the value of the overlay name EQUate.

The following procedure is used to categorize a word in the code area of the binary version that isn’t identical with the same word in the code area of the source version as being garbage or a code difference:

1. Locate the source statement this word belongs to in the source version with the help of the assembly listing, the core map and the disk map.

2. a) If the word is located within an area reserved by a BSS statement or is skipped due to alignment requirements, and if it is preceded and followed by identical code sequences in both versions, it is considered garbage.

2. b) In all other cases it is considered a code change, and the source version is changed to assemble to the code found in the binary version. This changed source version is then used to continue the analysis.

This procedure is continued until all words not being identical are categorized either as garbage or as code changes, and for all code changes a source change that implements the code of the binary version has been created. The changed source version is then finally assembled and rechecked as above to verify that every word still not identical with the binary version now gets categorized as garbage.

Comparing the code areas of the binary and the source version word by word reveals:

                           complete     code area without
                           code area    unused sector 2EA
total sectors                     56                   55
total words                    17920                17600
total words not identical       4859                 4554
garbage words                   1693                 1388
words with changed code         3166                 3166

There is a code difference of 3166 words, or 18% of the used part (i.e. without sector 2EA) of the code area, and additionally the complete code area contains at least 1693 words of garbage (“at least” because garbage words accidentally having the same value in both versions are not detected).

Because sector 2EA is unused it doesn’t need to be considered from a code point of view. It is all zeros in the source version (because the code area is initialized to binary zeros in the APL Development Environment), while in the binary version it contains whatever was written to that disk cartridge before it was used to generate the system from which the binary installation decks were punched. So comparing this sector basically identifies all nonzero words that were stored there at system generation time… we really do have here 305 words of authentic garbage and 15 authentic zeros from the summer of ’69 (or from whenever that binary version was created).

Although a difference of 18% of the code seems quite significant at first sight, the code changes needed to transform the source version into the binary version are minimal. There are

 10 paired changes (reformatted and paired inserts and deletes)
 22 non-paired inserts
 3 non-paired deletes

summing up to a total of 35 changes. See “Code Differences” on page 34 for descriptions and delta listings of all changes.

The reason for the discrepancy between the large number of differing code words and the minimal actual source changes needed to implement that difference is the effect of small code shifts: Two of the source changes cause a shift of almost the whole code of their respective assemblies by two words. The majority of the code differences result from these small shifts.

Code Differences

The word for word comparison of the code areas of the source and the binary version revealed differences in these assemblies:

 DAEDT from source APIN
 DAINP from source APIN
 DAIDX from source APIX
 DAEOS from source APOV

The following sections present delta listings describing these differences as viewed from the source version, because the sources for the binary version of the assemblies have been constructed by modifying the source version. That means that the source version is defined to be the “old” version and the binary version is defined to be the “new” version in the nomenclature of the compare program used.

This definition of course doesn’t imply that the binary version is newer than the source version. In fact an analysis of the differences in the APIN and the APIX sources shows that it’s just the other way round: The source version is newer than the binary version.

In the APL Development Environment the constructed sources of the differing assemblies of the binary version can be found in folder src_regressed_to_level_of_binary_dist. This folder also contains the following files:

assemble_all_regressed
    DO script for the IBM 1130 simulator to assemble and write to disk the APL\1130 system using the constructed sources of the differing assemblies instead of the original ones from folder src. The assembler listing will be saved in file assemble_all_regressed.lst.

The script is called from the folder containing the APL Development Environment using the command

ibm1130 src_regressed_to_level_of_binary_dist\assemble_all_regressed

Running the assemble_all_regressed script will create the exact same APL\1130 system from source as is provided through the binary distributions in aplpreview.zip and aplsetup.zip on IBM1130.org.

assemble_all_regressed.deck
    Card deck used by the assemble_all_regressed script.

regression_patch
    A patch in Unix/Linux patch command syntax to create the constructed APIN, APIX and APOV sources found in this folder from the original ones in folder src. This patch is provided for documentation only. It doesn’t need to be applied, because the constructed sources resulting from applying that patch are already provided here.

To fully understand the delta listings in the following sections it is necessary to have the assembler listings of the APIN.asm, APIX.asm and APOV.asm assemblies at hand, in both the source version from the src folder and the constructed binary version from folder src_regressed_to_level_of_binary_dist. So, before reading on, please assemble both versions using the assemble_all and assemble_all_regressed scripts (or use the assemble script to run the assemblies individually).

Note that the O-LN# and N-LN# columns in the delta listings denote absolute old and new line numbers. These differ from the sequence numbers, because the sequence numbers don’t start at APIN0001, APIX0001 and APOV0001, respectively.

APIN

NEW: JUERGEN.APL.BINDIST(APIN) OLD: JUERGEN.APL.SRCDIST(APIN)

LISTING OUTPUT SECTION (LINE COMPARE)

ID SOURCE LINES                                               TYPE LEN  N-LN# O-LN#
   ----+----1----+----2----+----3----+----4----+----5----+----6----+----7----+----8
                                                              MAT=  237
I - SDS02 BSC L  SDS03                              APIN0242  INS=    2 00238 00238
I - SDS04 STO L  1                                  APIN0245              00239
D - SDS02 A   1  4      ADDR OF LAST LINE FOR DISP  APIN0242  DEL=    4 00238 00238
D -       STO L  2                                  APIN0243              00239
D -       LD  1  4                                  APIN0244              00240
D -       STO L  1                                  APIN0245              00241
                                                              MAT= 1513
I - SDS03 A   1  4      ADDR OF LAST LINE FOR DISP  API17581  INS=    4 01753 01755
I -       STO L  2                                  API17582              01754
I -       LD  1  4                                  API17583              01755
I -       BSC L  SDS04                              API17584              01756
                                                              MAT=  540
I - EDS03 BSC L  START                              APIN2299  INS=    1 02297 02295
D - EDS03 MDX L  R14,1                              APIN2299  DEL=    2 02297 02295
D -       BSI L  EDFND                              APIN2300              02296
                                                              MAT= 1264
I - WRITE BSC L  PATCH                              APIN3565  RPL=    1 03562 03561
D -       LDX L1 ASMIN-2                            APIN3565
                                                              MAT=   22
I - PATCH LDX 1  NCDEE-NCDES                        AP358702  INS=   16 03585 03584
I - MVC2  LDD L1 NCDES                              AP358704              03586
I -       STD L1 START-1                            AP358706              03587
I -       MDX 1  -2                                 AP358708              03588
I -       MDX    MVC2                               AP358710              03589
I -       LD     OCODE                              AP358712              03590
I -       STO    WRITE                              AP358714              03591
I -       LD     OCODE+1                            AP358716              03592
I -       STO    WRITE+1                            AP358718              03593
I -       MDX    WRITE                              AP358720              03594
I -       BSS E  1                                  AP358722              03595
I - NCDES BSS    1                                  AP358724              03596
I -       MDX L  R14,1                              AP358726              03597
I -       BSI L  EDFND                              AP358728              03598
I - NCDEE BSC L  GNXST                              AP358730              03599
I - OCODE LDX L1 ASMIN-2                            AP358732              03600
                                                              MAT=    1

These changes modify the DAINP and the DAEDT assemblies of the APIN program. The changes in both assemblies are similar, but those in the DAINP part (labels SDS…) are easier to understand than the DAEDT ones (labels EDS…), because the latter required a special implementation to avoid a conflict with the envelope code that writes the APIN assemblies to disk.

The DAINP Modification

The DAINP modification inserts four lines of code before O-LN# 1755. The last of these lines is an unconditional branch to label SDS04, which is added to the originally unlabeled O-LN# 241. The first three of the inserted lines are identical to O-LN# 238-240, which are replaced by an unconditional branch to the inserted lines, addressed using label SDS03.

Note that the label names SDS03 and SDS04 are introduced for readability and are derived from continuing the numbering scheme used in the source. They can of course not be reconstructed from anywhere and could in reality have been anything from absolute addresses to totally different names.

In total this change

 moves the three instructions at lines 238 to 240 towards the end of the assembly, just before the ORG instruction in line 1755, where they get located in the unused core area skipped by that ORG instruction

 includes a branch from the original location of the three instructions to the new location

 includes after the new location of the three instructions a branch back to the original location.

The three instructions moved are in total four words in length, while the branch instruction that replaces them is only two words in length. Thus all code between lines 239 and 1754 gets relocated two words below its original address, i.e. almost the whole assembly gets readdressed. But, in fact, both code sequences work 100% identically, despite the big change this modification creates in the assembled code.
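The ripple effect can be illustrated with a small sketch. The instruction lengths are taken from the delta listing above (the three moved instructions occupy 1+2+1 = 4 words, the replacing long-format branch occupies 2); the tail length is an arbitrary stand-in:

```python
def layout(lengths, origin=0):
    """Assign a start address to each instruction, given word lengths."""
    addrs, addr = [], origin
    for length in lengths:
        addrs.append(addr)
        addr += length
    return addrs

TAIL = 1000  # stand-in for the code following the changed spot

source_lens = [1, 2, 1] + [1] * TAIL  # A / STO L / LD, then the rest
binary_lens = [2] + [1] * TAIL        # a single BSC L branch, then the rest

source_tail = layout(source_lens)[3:]  # addresses of the unchanged tail code
binary_tail = layout(binary_lens)[1:]

shifts = {a - b for a, b in zip(source_tail, binary_tail)}
print(shifts)  # {2}: every following instruction sits two words lower
```

A two-word difference at a single spot is enough to readdress, and therefore make byte-different, every word of code that follows it.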

So, what is this? Move three instructions all over the program right to the end without changing anything? Why would one ever want to do this? Of course this doesn’t make any sense. But if one changes the direction of view it makes sense immediately:

Look at it as a change from the constructed source of the binary version to the source version. In this case the three “outplaced” instructions get moved to the location in the code where they belong and two “unnecessary” unconditional branches get removed, without changing any functionality. This of course already makes sense in itself (changing a “bad” code structure to a “good” code structure), but it is also a typical “business as usual” thing in code maintenance:

 A bug is reported.

 Analysis of the error shows that two words of code need to be inserted right at the beginning of a large module, which basically would require sending out either a completely new distribution or quite a “heavy” binary patch of some 40 cards. Both variants aren’t optimal for such a small change.

 So, a size optimized binary patch (hotfix, ZAP, or whatever it was called in those times) is created, using some unused core at the end of the module as a “patch area”: place the required instructions there, replace the beginning of the sequence that needs to be changed with a branch into that patch area, and end the patch code with a branch back into the original code sequence. In the given case this would require sending out only two cards instead of about 40.

 On the next regular maintenance cycle, or when enough patch type changes have accumulated that it makes sense to send out a new full distribution, the patch code is integrated into the source at the location where it belongs, thus cleaning up the code structure and freeing the patch area. Whether a customer ordering a source tape in the meantime (i.e. between the release date of the patch and the date of the next full distribution which has the patch integrated) would receive the old source plus a source patch or the new source with the integrated patch depends on the source and version control model used and cannot be determined from code analysis.

So, to summarize, the code difference in the DAINP assembly

 doesn’t change anything in the program’s functionality,
 doesn’t make any sense when seen as an update from the source version to the binary version,
 but is clearly identifiable as an inline integration into the source version of a patch that is applied to the binary version.

The DAEDT Modification

When looking at the modification of the DAEDT assembly as it appears in the code areas on disk of the binary and the source distribution, it can be seen that it is of the same type as in DAINP: A displacement of a set of instructions, making no sense in one direction and being identifiable as the integration of a patch into the code base in the other direction. In particular, it too doesn’t change anything in the program’s functionality.

But there is a structural difference: The core region where the patch code is placed (the “patch area”) is unused at APL runtime only. At assembly and write to disk time it is used by the first five instructions of the APIN envelope code (lines 3524 to 3528 of the unmodified APIN.asm in the src folder).

The APIN envelope code is special when compared with the envelopes of other assemblies: Prior to writing the loaded assembly to disk it initializes a data area (the “Card Code Index and Chain”). This code would get overwritten when constructing the source of the binary version the same way as was done with DAINP above, and consequently the envelope execution would fail. This problem comes up only when trying to create a source modification such that exactly the same code results as after applying the binary patch to the binary version, and might have been a reason for never distributing such a source. I.e. “our” source version, with the patch already integrated inline at its final location, was probably the only way this patch was distributed in source.

In the original context an unpatched binary version (i.e. a binary version distributed earlier than “our” binary version and not yet having the patch applied) is already installed on disk and thus the envelope code doesn’t get used any more. It just sits there accidentally, because it wrote itself there due to the sector round up at disk write time. In this context only the disk area that corresponds to the desired core location needs to be patched, such that upon the next use of DAEDT the patch code gets loaded into core instead of the “dead” envelope code. So, from the binary point of view this patch is just a question of loading two cards (one to patch the original location and one to load the patch instructions to the patch area) as described above for DAINP.

Given the procedure used to identify the differences between the binary and the source version, the goal here was not to create that binary patch (in fact, we don’t even have that earlier binary version to which it would have been applied) but to implement a regression of the source version that delivers the same code as the application of the binary patch would have done, thus ending up exactly at “our” binary version.

So, to make a long story short, this was done simply by assembling the instructions that need to go into the patch area temporarily at the end of the envelope code, and modifying the envelope code such that it copies these instructions to their final locations just after the code they are overlaying has been executed, but before executing the write to disk operation. In the delta listing this means:

 O-LN# 2295 and 2296 are replaced by a branch to the patch area located at label START, which at the same time is the entry point of the envelope code to be overwritten later.

 The patch code consists, as in the DAINP case, of the replaced instructions plus an instruction to branch back. These are the three instructions placed in N-LN# 3597 to 3599, plus the two preceding BSS instructions to ensure proper (odd!) alignment.

 The original envelope code is “patched” at O-LN# 3561 to patch itself by copying the three patch instructions to label START. This is done by replacing O-LN# 3561 with a branch to label PATCH.

 The code at label PATCH from N-LN# 3585 to 3594 does the actual copying of the three patch instructions to label START, restores the envelope code at O-LN# 3561 to its original state and then branches there to continue writing the now fully patched assemblies to disk as if nothing had happened in between.

Using this method, the resulting code on disk is (modulo garbage, of course) identical to that of the binary version.

For completeness it should be noted that the word at address /1F06 of the DAEDT assembly, which is initialized to binary zeros, has, after running the envelope code, the value of XR3 upon envelope entry from DMS, and gets written to disk with that value. This has nothing to do with the patch. It’s just the logic of the envelope code, which stores XR3 there for later restoration. This word thus can differ in the code areas on disk of the different versions, although it’s no garbage. Because this word is part of the APIN envelope code its contents don’t matter at APL runtime, which is why in the comparisons it was accepted as identical regardless of its value.

APIX

NEW: JUERGEN.APL.BINDIST(APIX) OLD: JUERGEN.APL.SRCDIST(APIX)

LISTING OUTPUT SECTION (LINE COMPARE)

ID SOURCE LINES                                               TYPE LEN  N-LN# O-LN#
   ----+----1----+----2----+----3----+----4----+----5----+----6----+----7----+----8
                                                              MAT=  432
I -       STO    STVEC   SAVE LENGTH OF RHS         APIX0436  RPL=    1 00433 00433
D -       STO    DIMNS   SAVE LENGTH OF RHS         APIX0436
                                                              MAT=    7
I -       LD     STVEC   FIND SPACE REQD            APIX0444  RPL=    1 00441 00441
D -       LD     DIMNS   FIND SPACE REQD            APIX0444
                                                              MAT=    9
I -       LD     STVEC   SPACE FOR LHS              APIX0454  RPL=    1 00451 00451
D -       LD     DIMNS   SPACE FOR LHS              APIX0454
                                                              MAT=   10
I -       MDX L  STVEC,-2                           APIX0465  RPL=    1 00462 00462
D -       MDX L  DIMNS,-2                           APIX0465
                                                              MAT=  217

As we’ve learned from the APIN modification, the code differences are to be interpreted in the direction from the binary to the source version. Thus the change implemented here modifies some instructions to use a word named DIMNS instead of STVEC.

Looking at the APIX source shows that STVEC is the first word of a subroutine. Using STVEC as a location to store and retrieve data may be acceptable in a core constrained system, although it has a certain bug potential. But it should be avoided if another location close by can be used instead, and DIMNS is such a location: It already exists (i.e. no additional core is required as compared with using STVEC) and it can easily be seen that the original use of DIMNS and the additional use instead of STVEC will never create a conflict.

The use of STVEC as a data word has a bug potential: If, between a store and a load to that location, the STVEC subroutine is called, then the data stored there gets corrupted by the return address from the STVEC call. Checking the code sequence from line 433 to line 462 (everything between the first and the last use of STVEC as a data word) shows that in this range the STVEC subroutine is neither called directly nor by any other subroutine called from there. So, obviously, the use of STVEC in that code sequence was no bug.
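The bug potential comes from the IBM 1130 subroutine linkage: a BSI to a subroutine stores the return address in the subroutine's first word and continues execution at the following word. A tiny Python model (the addresses are hypothetical) shows what would go wrong if STVEC were called while its first word holds data:

```python
memory = {}
STVEC = 0x0500  # hypothetical address of the STVEC subroutine entry word

def bsi(subroutine, return_address):
    """Model of the 1130 BSI linkage: the return address is stored in
    the subroutine's first word; execution continues at entry + 1."""
    memory[subroutine] = return_address

memory[STVEC] = 1234   # STO to STVEC: use the entry word as scratch data
bsi(STVEC, 0x0321)     # an intervening call to the STVEC subroutine...
print(hex(memory[STVEC]))  # 0x321: the scratch value has been overwritten
```

Lines 433 to 462 avoid exactly this situation, which is why the original use of STVEC as a data word was not a bug.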

Thus it can be concluded that the APIX modification is just a code cleanup. It doesn’t change anything in the program’s functionality.

APOV

NEW: JUERGEN.APL.BINDIST(APOV) OLD: JUERGEN.APL.SRCDIST(APOV)

LISTING OUTPUT SECTION (LINE COMPARE)

ID SOURCE LINES                                               TYPE LEN  N-LN# O-LN#
   ----+----1----+----2----+----3----+----4----+----5----+----6----+----7----+----8
                                                              MAT=  138
I -       BSI    SUTOS   AND SET UP TOP OF STACK    APOV0143  RPL=    1 00139 00139
D -       BSI L  SUTOS   AND SET UP TOP OF STACK    APOV0143
                                                              MAT=   21
I -       LD  2  2                                  APO01641  INS=    2 00161 00161
I -       SRT    8                                  APO01642              00162
                                                              MAT=    8
I -       BSI    SUTOS   SET UP TOP OF STACK        APOV0173  RPL=    1 00171 00169
D -       BSI L  SUTOS   SET UP TOP OF STACK        APOV0173
                                                              MAT= 1153

This one is a removal of unnecessary code, and it reconfirms that the update direction is from the binary to the source version: The LD and the SRT instructions in N-LN# 161 and N-LN# 162 are unnecessary, because the code branches after the SRT instruction to label XQNXL in the ctray assembly, where exactly the same LD instruction is the next instruction issued. The change of the two nearby BSI instructions from short to long format compensates for the two instructions eliminated, i.e. for whatever reason one wanted to avoid a code shift.

This modification doesn’t change anything in the program’s functionality.

Summary of Code Analysis and Differences

The word by word comparison of the binary and the source version reveals a difference of 3166 words in 3 of the 12 source programs, or 18% of the total amount of code:

        Assembled   Words       % of code   Code     Real source
        length      differing   differing   shifts   differences
APIN         3753        3128          83     3737            16
APIX          637           4           1        0             4
APOV         1110          34           3       28             6

The analysis of these differences shows:

Source  Functional   Remarks
        differences
APIN    none         The binary version has a binary patch applied to two
                     patch areas. The source version integrates that same
                     patch inline, i.e. without branching into the patch
                     areas, thus causing the massive amount of code shifts
                     listed above. The code executed is identical in both
                     versions, of course except the branches into and out
                     of the patch areas. A one to one (i.e. not inline,
                     but using the patch areas) implementation of the
                     binary patch in source causes a conflict with APIN’s
                     envelope code, unless the envelope is modified such
                     that it patches itself, and thus probably was never
                     published. Nonetheless this has been done here to
                     verify the findings from the binary comparison.
APIX    none         The binary and the source version use a different
                     data word to temporarily store the length of the
                     right hand side of a simple assignment. Both words
                     are conflict free, so this is obviously a code
                     cleanup only.
APOV    none         The binary version contains two unnecessary
                     instructions, which are removed in the source
                     version. This is interpreted as a code cleanup only,
                     as the two instructions don’t seem to be in a high
                     traffic region and thus no significant performance
                     gain will result from the removal.

Most interestingly, the result of this analysis is that both versions are 100% equivalent. Functionally they cannot be distinguished from each other, although their code differs by 18% when compared word by word!

Version Timeline

To close I’d like to present a possible version timeline, which of course has the binary and the source version from IBM1130.org as the only fixed points known as of yet.

From the code differences it can be concluded:

 The binary version has at least one patch applied and thus is not the original distribution from May 1969 (exception: the original binary distribution was a patched version already, which from my point of view is improbable).

 The source version is newer than the binary version.

 The source version integrates at least one fix into the code base that previously had been distributed as a binary patch for an unpatched earlier version. No statement can be made on the patch level of that earlier version and whether it was the original distribution from May 1969 or not.

This leads to the following possible timeline:

    1969-05-05   APL\1130 Release 2, Level 0        (green)
    24.11.1970   Level n-1                          (green)
    29.05.1972   Binary APIN patch for Level n-1    (green)
    13.09.1973   Binary Version from IBM1130.org    (red)
    01.01.1975   Level n                            (green)
    31.12.1975   Source Version from IBM1130.org    (red)

    24.11.1970 - 01.01.1975   Source Cleanups APIX and APOV

In this diagram the red markers denote the delivery or last modification dates of the source and the binary versions from IBM1130.org, while the green markers are release dates:

 The source version from IBM1130.org was delivered after the release of level n and is in its original unmodified state.

 The binary version from IBM1130.org was patched at some point in time between the release of the binary APIN patch and level n. It was delivered as a level n-1 distribution between the release dates of levels n-1 and n.

 Level n-1 is the level to which the binary APIN patch was to be applied, i.e. this patch was released between level n-1 and level n.

 The source cleanups for APIX and APOV were created at some points between level n-1 and level n. Because they didn’t have any functional impact they probably were not released before level n.

In the special case of n=1 the binary APIN patch would have been applied to the original distribution from May 1969.

Probably a lot more happened between levels n-1 and n than the APIN patch and the two code cleanups, but additional patches introduced in that timeframe are invisible if they were applied to the binary version and their source representation matches the binary one. For example, if the APIX code cleanup had been applied to the binary version as a patch, the binary and the source versions of APIX would be identical, and the APIX cleanup would thus be invisible.

So one can simply see it as: the binary version was delivered at a maintenance level x and the source version at a higher maintenance level y. All changes between levels x and y, except the "clean up only" ones to APIX and APOV, had been made available as binary patches to level x. All of these patches were applied at the installation where the binary version was in use, thus making it functionally equivalent to the source version.

Appendix

Core Map

The original presents the core map as a multi-column diagram; the information recoverable from it is rearranged below. Addresses are hexadecimal, stored contents (where shown) are given in parentheses, and regions whose labels could not be recovered are marked "…".

Resident programs (program / assembly source):

  0000-00DD  APDK  (ASMDK)
  00DE-021D  APT2  (ASMT2), APTA (BEGIN)
  021E-072E  APCT  (ASMCT; ASMSN, the last sector of ASMCT, at 071E-072E)
  0730-0799  APIPL, APCP (ASMCP)
  0799-0C3B  APSC  (ASMSC), APPH (ASMPH)
  0C3B-0D57  APIN  (ASMIN)
  0D57-0E70  APXQ  (ASMXQ)
  0E70-0F9D  …
  0F9E-0FEE  …
  0FEF-0FF7  Buffer
  0FF8-0FFC  …

Overlays (name and number):

  INPOV 4, SCMOV 8, PCHOV 12, CPYOV 16, SGNOV 20, SYNOV 24, EDTOV 28, IDXOV 32, EOSOV 36

  Overlay sources: APIX (ASMIX), APFN (ASMFN), APIN (ASMED), APOV (ASMES).

System pointers and variables:

  0FFD       CPTR
  0FFE       LOCOR-2 (1000)
  0FFF       LOCOR-1 (1FFF)
  1000       LOCOR (0000)
  1001       LC Register 1 (0001)
  1002-100F  LC Registers 2-15
  1010       unused
  1011       LENGL/MATRX (0005)
  1012       NUMGL (0000)
  1013       MSTRT (104A)
  1014       MNEXT (104A)
  1015       STUAD (2080)
  1016       SOLPT (2080)
  1017       PAREL (178C, APIPL)
  1018       GRBCL (0000)
  1019       LUNPL (0000, APIN)
  101A       (0000)
  101B       RAND (41A7)
  101C-1035  GLSTB (0000)
  1036-1049  GLBTB (0000)
  104A       MSTRT points here
  104A-1150  …
  1151-178B  …
  178C       STKOR (4000, APIPL)
  1790-18D0  IMTRX (inactive MSP overlays into 28/32/36 when needed)

High core (initialized by APIPL):

  18D1-1B4D  … (LWKSP)
  1B4E       …
  1B4F       unused
  1B50-1C69  …
  1C69-1ED1  …
  1ED1-1FA6  …
  1FA7       unused
  1FA8-1FE7  Basic Symbol Table
  1FE8-1FE9  unused
  1FEA       RSEND
  1FEB       unused
  1FEC       FINDPL (LOCOR+17)
  1FED       CLEAR WS
  1FEE       unused
  1FEF       …
  1FF0       FINDPL+4
  1FF1       FINDPL+5
  1FF2       FINDPL+6
  1FF3       ISBRN
  1FF4       MODE (0000, APIPL)
  1FF5       USER
  1FF6       SINON
  1FF7       FULST
  1FF8       ATTN
  1FF9       CHRCT (0001, APIPL)
  1FFA       GTSPL
  1FFB       GTSPL+1
  1FFC       GTSPL+2
  1FFD       GTSPL+3
  1FFE       MGCOL
  1FFF       FGCOL

Disk Map

Start  End  Count  Usage / Filename  Source  Comment
000    000      1                            IPL sector
001    01F     31  not used
020    03F     32  Workspace #38
040    05F     32  Workspace #36
060    07F     32  Workspace #34
080    09F     32  Workspace #32
0A0    0BF     32  Workspace #30
0C0    0DF     32  Workspace #28
0E0    0FF     32  Workspace #26
100    11F     32  Workspace #24
120    13F     32  Workspace #22
140    15F     32  Workspace #20
160    17F     32  Workspace #18
180    19F     32  Workspace #16
1A0    1BF     32  Workspace #14
1C0    1DF     32  Workspace #12
1E0    1FF     32  Workspace #10
200    21F     32  Workspace #8
220    23F     32  Workspace #6
240    25F     32  Workspace #4
260    27F     32  Workspace #2
280    299     26  STTRK                     Active FSP
29A    2B3     26  TMTRK                     Inactive FSP
2B4    2B8      5  DACMD             APSC
2B9    2BE      6  DAPCH             APPH
2BF    2C3      5  DACPY             APCP
2C4    2C4      1  DADSK             APDK    IPL Sector
2C5    2C5      1  DATYP             APT2
2C6    2CA      5  DACTY             APCT
2CB    2D1      7  DAINP             APIN
2D2    2D6      5  DAEDT             APIN
2D7    2DD      7  DASYN             APXQ
2DE    2E1      4  DAFUN             APFN
2E2    2E3      2  DAIDX             APIX
2E4    2E7      4  DAEOS             APOV
2E8    2E9      2  DAT41             APTA
2EA    2EA      1  NUSD1                     not used
2EB    2EB      1  DACLN             APIN    CLEAR WS data
2EC    2ED      2  DADRU             APWD    Directory
2EE    2F3      6  TMTRX                     MSP swap out area
2F4    33F     76  NUSD2                     not used
340    35F     32  Workspace #1
360    37F     32  Workspace #3
380    39F     32  Workspace #5
3A0    3BF     32  Workspace #7
3C0    3DF     32  Workspace #9
3E0    3FF     32  Workspace #11
400    41F     32  Workspace #13
420    43F     32  Workspace #15
440    45F     32  Workspace #17
460    47F     32  Workspace #19
480    49F     32  Workspace #21
4A0    4BF     32  Workspace #23
4C0    4DF     32  Workspace #25
4E0    4FF     32  Workspace #27
500    51F     32  Workspace #29
520    53F     32  Workspace #31
540    55F     32  Workspace #33
560    57F     32  Workspace #35
580    59F     32  Workspace #37
5A0    5BF     32  Workspace #39
5C0    5DF     32  Workspace #40
5E0    63F     96  not used
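The workspace placement in the disk map follows a regular pattern: the even-numbered workspaces #38 down to #2 occupy sectors 020-27F in descending order, the odd-numbered workspaces #1 to #39 occupy sectors 340-5BF in ascending order, and #40 directly follows #39. Each workspace is 32 (0x20) sectors long. The pattern can be expressed as a small Python sketch; the helper function is hypothetical, for illustration only:

```python
def workspace_start_sector(n: int) -> int:
    """Return the starting disk sector of APL workspace #n (1..40),
    following the layout shown in the disk map."""
    if not 1 <= n <= 40:
        raise ValueError("workspace numbers run from 1 to 40")
    if n == 40:
        # workspace #40 is the exception: it follows #39 at the end
        return 0x5C0
    if n % 2 == 0:
        # even workspaces #38..#2 descend from sector 0x020
        return 0x020 + ((38 - n) // 2) * 0x20
    # odd workspaces #1..#39 ascend from sector 0x340
    return 0x340 + ((n - 1) // 2) * 0x20
```

For example, `workspace_start_sector(2)` yields `0x260` and `workspace_start_sector(39)` yields `0x5A0`, matching the table above.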

Download Links to ZIP Archives

As of September 2011 the zip archives mentioned in this document can be found at the following links:

APL_1130_Development_Environment.zip:
    http://wotho.ethz.ch/APL-1130/APL_1130_Development_Environment.zip

ibm1130.zip:
    http://media.ibm1130.org/sim/ibm1130.zip

aplpreview.zip:
    http://media.ibm1130.org/sim/aplpreview.zip

aplsetup.zip:
    http://media.ibm1130.org/sim/aplsetup.zip

apl_source.zip:
    http://media.ibm1130.org/sim/apl_source.zip
