US005619682A

United States Patent (19)                    (11) Patent Number: 5,619,682
Mayer et al.                                 (45) Date of Patent: Apr. 8, 1997

(54) EXECUTING NETWORK LAYERED COMMUNICATIONS OF A FIRST SYSTEM ON A SECOND SYSTEM USING A COMMUNICATION BRIDGE TRANSPARENT TO THE DIFFERENT COMMUNICATION LAYERS

(75) Inventors: Bruce D. Mayer, Arlington; Martin Berkowitz, Newton; Sudershan K. Sharma, Brookline, all of Mass.

(73) Assignee: Bull HN Information Systems Inc., Billerica, Mass.

(21) Appl. No.: 127,925

(22) Filed: Sep. 28, 1993

(51) Int. Cl.: G06F 3/00
(52) U.S. Cl.: 395/500; 364/264.3; 364/280.9; 364/280; 364/DIG. 1
(58) Field of Search: 395/500, 2.86, 700, 650, 882, 892

(56) References Cited

U.S. PATENT DOCUMENTS

4,727,480   2/1988   Albright et al.
4,812,975   3/1989   Adachi et al. .............. 395/500
5,136,709   8/1992   Shirakake et al. ........... 395/700
5,179,666   1/1993   Rimmer et al. .............. 395/882
5,210,832   5/1993   Maier et al. ............... 395/375
5,265,252  11/1993   Rawson, III et al. ......... 395/700
5,416,917   5/1995   Adair et al. ............... 395/500

FOREIGN PATENT DOCUMENTS

1244935   8/1991   Japan

OTHER PUBLICATIONS

"Bull répond à ses utilisateurs", 01 Informatique, Jun. 12, 1992.

Bergh, Arndt B., et al., "HP 3000 Emulation on HP Precision Architecture Computers", Hewlett-Packard Journal, Dec. 1987, pp. 87-89.

Hartig et al., "Operating System(s) on Top of Persistent Object Systems - The Birlix Approach", Jan. 1992, pp. 790-799, IEEE.

Primary Examiner: Thomas C. Lee
Assistant Examiner: Sang Hui Kim
Attorney, Agent, or Firm: Gary D. Clapp; Faith F. Driscoll; John S. Solakian

(57) ABSTRACT

A layered communications bridge mechanism connected between an upper communications layer of a first communications layer mechanism executing in a user level process and a layered communication kernel process of a second system corresponding to the next lower layers of the first communications layer mechanism. The bridge includes an upper bridge mechanism operating to appear to the lowest layer of the layers of the first communications layer mechanism to be the next lower layer of the first layered communications mechanism and a lower bridge mechanism operating to appear to the upper communications layer of the second system kernel process to be the next higher layer of the communications layers of the second system, and the upper and lower bridge mechanisms operate to map between the operations of the lower layer of the first communications layer mechanism and the upper layer of the layered communications layers of the second system. The upper bridge mechanism executes in the second system user process and the lower communications layer bridge mechanism executes in an emulator executive level.

7 Claims, 6 Drawing Sheets

[Drawing sheets 1 through 6, containing FIGS. 1 through 8, are not reproduced here; of the sheet images only scattered labels were recoverable, among them the caption "FIG. 4" and the legend "100 SEMAPHORE" on Sheet 4 of 6.]

EXECUTING NETWORK LAYERED COMMUNICATIONS OF A FIRST SYSTEM ON A SECOND SYSTEM USING A COMMUNICATION BRIDGE TRANSPARENT TO THE DIFFERENT COMMUNICATION LAYERS

CROSS REFERENCES TO RELATED APPLICATIONS

The present patent application is related to:

U.S. patent application Ser. No. 08/128,456, filed Sep. 28, 1993, for Executing Programs Of A First System On A Second System by Richard S. Bianchi et al., pending;

U.S. patent application Ser. No. 08/127,397, filed Sep. 28, 1993, for Emulation Of Disk Drivers Of A First System On A Second System by Richard S. Bianchi et al., now U.S. Pat. No. 5,373,984; and

U.S. patent application Ser. No. 08/128,391, filed Sep. 28, 1993, for Emulation Of The Memory Functions Of A First System On A Second System by Marek Grynberg et al., which issued as U.S. Pat. No. 5,515,525.

FIELD OF THE INVENTION

The present invention relates to a method and apparatus for executing programs of a first system on a second system and, more particularly, to a method and apparatus for emulating a first operating system and hardware platform on a second operating system and hardware platform.

BACKGROUND OF THE INVENTION

A recurring problem in computer systems is that of executing, or running, programs written for a first computer system having a first hardware platform, that is, processor, memory and input/output devices, on a second computer system having a second and different hardware platform. The problem is compounded when the second computer system, as is frequently the case, uses a second operating system which may be substantially different from the operating system of the first system.

This problem usually occurs when a user or a manufacturer of computer systems is attempting to move application programs from a first system to a second system to upgrade or update the computer system while, at the same time, preserving the user's investment in application programs and data created through the application programs. This situation may arise, for example, when moving application programs from one proprietary system, that is, a system having an operating system and hardware platform which is particular to one manufacturer, to another proprietary system, or when moving application programs from a proprietary system to a "commodity" system, that is, a system having a hardware platform and operating system which is used by many manufacturers.

The problems arising from moving application programs from a first system to a second system arise from the fundamental functional structure of the systems and from the interactions and interrelationships of the functional elements of the systems.

Computer systems are constructed as layered levels of functionality wherein the three principal layers in any system are, from top to bottom, the user programs, the operating system and the hardware "platform". The user programs provide the primary interface to the users and provide the functions and operations to control the system in performing the specific operations desired by the user to perform the user's work, such as word processing, spreadsheets, and so forth. The hardware is comprised of the central processing unit, the memory and the input/output devices, such as displays, printers, disk drives and communications devices, which actually perform the required operations at the detailed level.

The operating system is functionally located "between" the user programs and the hardware and is comprised of a set of programs and routines that control the overall operations of the system and a set of routines that control the detailed operations of the hardware as necessary to manage and execute the operations directed by the application programs. In this regard, the operating system is frequently comprised of two functional layers. One layer, frequently referred to, for example, as the "executive" level, interfaces with the application programs and is comprised of a set of programs, routines and data structures which create operations referred to as "processes" or "tasks" which execute, at a high level, the operations required by the user programs. The "executive" level also includes a set of programs, routines and data structures that are used to manage and execute the operations required by the application programs and which generate requests to the lower level of the operating system.

The lower level of the operating system, frequently referred to as the "kernel", interfaces with the hardware elements of the system and is comprised of a set of routines, frequently referred to as "drivers" or "servers", for detailed control of the operations of the system hardware. The kernel routines receive the requests for operations from the executive level and in turn direct the detailed operations of the system hardware elements.

The basic problem in moving an application program from a first system to a second system arises because, although the system is comprised of separate functional layers, the characteristics of each functional layer and of the functions and operations performed by each functional layer are affected by the characteristics and functions of at least the next lower layer. That is, the application programs are written to take maximum advantage of the characteristics and features of the executive level of the operating system. The executive level of the operating system, in turn, is designed to take maximum advantage of the characteristics and features of the kernel level of the operating system, while the kernel level is similarly designed not only to carry out the operations and functions required by the executive level but is influenced by the characteristics and functional features of the system hardware devices.

It is apparent, therefore, that the characteristics of a system as viewed by an application program are influenced by features and functions of the system from the executive level of the operating system down to the actual hardware elements of the system. As a consequence, and even though systems are designed to maintain the maximum clear separation and independence between functional layers, a functional layer created for one system, such as an application program or an operating system, will rarely be compatible with or function with a functional layer from another system.

The two primary approaches taken in the prior art for moving an application program from a first system to a second system are the recompilation of the application program to run on the second system directly and the emulation of the first system on the second system so that the application program can be run unchanged on the second system.
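Although the patent describes this layering only in prose, the delegation it describes, user programs calling the executive level, and the executive level issuing requests downward to kernel drivers, can be pictured in code. The following is a minimal illustrative sketch in C; every function name here is hypothetical and appears nowhere in the patent.

```c
/* Illustrative only: a user request delegated downward through the
 * three layers described above.  All names are hypothetical. */
#include <stddef.h>

/* Kernel level: a "driver" routine controls the hardware directly. */
static int kernel_disk_write(unsigned sector, const void *buf, size_t len) {
    /* ... program the disk controller at the detailed level ... */
    (void)sector; (void)buf; (void)len;
    return 0;
}

/* Executive level: manages the operation at a high level and
 * generates a detailed request to the kernel level below it. */
static int exec_file_write(const char *file, const void *buf, size_t len) {
    unsigned sector = 0;   /* ... resolve the file to device sectors ... */
    (void)file;
    return kernel_disk_write(sector, buf, len);
}

/* User level: the application asks for work in its own terms. */
int app_save_document(const void *doc, size_t len) {
    return exec_file_write("report.txt", doc, len);
}
```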
While it is very common for an application program to be recompiled to run on a second system, this approach frequently essentially requires the recreation or rewriting of the application program if the two systems are sufficiently dissimilar, which requires a very substantial investment in man-hours. In addition, many application programs cannot be successfully recompiled onto a second system because the second system simply cannot support the operations required by the application program.

The present invention is concerned, however, with the second approach to moving an application program from a first system to a second system, that is, the emulation of the functionality of the first system on the second system in such a manner as to allow the application program to run unchanged on the second system as if the second system were, in fact, the first system.

The systems of the prior art have in general taken two approaches to emulating a first system on a second system, wherein the two approaches differ in the level of the system at which the emulation is performed, that is, the level of the second system at which the transition occurs between the functionality of the first system and the functionality of the second system.

In the first approach, a layer of interpretive programs is interposed between the application programs and the operating system of the second system, that is, between the application programs and the executive level of the second operating system. The interpretive programs operate to translate each call, command or instruction of an application program into an operation or series of operations of the second operating system which are the equivalent of the operations of the first operating system that would have been performed in response to the same calls, commands or instructions from the application program.

While this approach seems straightforward, it frequently results in severe performance penalties because all operations must now be performed through yet another layer of programs, with the resulting increase in time required to perform each operation. In addition, many operations that would have been performed as a single operation in the first operating system may have to be performed by several operations in the second operating system, again resulting in a performance penalty.

In the second approach, the transition between the functionality of the first operating system and the functionality of the second operating system is made at a very low level in the second system by moving the executive level and the upper portions of the kernel level of the first operating system onto the second system and providing new kernel level routines to interface with the hardware elements of the second system. This approach again frequently results in significant performance penalties because of the added layer of programs, this time at the interface between the first operating system kernel level and the second system hardware elements, and because operations that the first kernel may have performed as a single operation with respect to a first system hardware element may now have to be performed by many operations with respect to the second system hardware elements.

SUMMARY OF THE INVENTION

The present invention is directed to a method and a layered communications mechanism for executing the layered communications operations of a first system on a second system. The first system includes a user level, an executive level, an input/output level and a hardware platform, and the user level includes at least one user program and at least one executive program for managing operations of the first data processing system, while the hardware platform includes a layered communications device. The executive level includes at least one user task performing user level program operations and at least one executive task performing executive program operations, and the user and executive tasks generate requests for first system layered communications operations. A layered communications mechanism of the first system is responsive to the requests for executing the layered communications operations of the first system, wherein the first layered communications mechanism includes hierarchically organized layers for performing communications layer operations. In the first system, the input/output level includes an input/output task responsive to the first layered communications mechanism for controlling the first system input/output device in performing layered communications operations.

The layered communications mechanism and method execute on the second system and a second system user level process executing in a user level of the second system, the user level process including the first system user level program, the first system executive program, the first system user and executive tasks, and at least one upper communications layer of the first communications layer mechanism. The second system also includes a kernel level, which includes a layered communication kernel process executing layered communications layers of the second system corresponding to all layers of the layered communications mechanism below the at least one upper communications layer of the first communications layer mechanism executing in the user level process.

The present invention provides a layered communications bridge mechanism connected between the at least one upper communications layer of the first communications layer mechanism executing in the user level process and the layered communication kernel process. The layered communications bridge mechanism includes an upper communications layer bridge mechanism connected from the at least one upper communications layer of the first communications layer mechanism executing in the user level process and operating to appear to the lowest layer of the at least one upper communications layer of the first communications layer mechanism to be the next lower layer of the first layered communications mechanism. The bridge mechanism also includes a lower communications layer bridge mechanism connected between the upper communications layer emulation mechanism and the layered communication kernel process and operating to appear to the upper layer of the layered communications layers of the second system executing in the communications kernel process to be the next higher layer of the layered communications layers of the second system.

The upper communications layer bridge mechanism and the lower communications layer bridge mechanism operate to map between the operations of the lowest layer of the at least one upper communications layer of the first communications layer mechanism and the upper layer of the layered communications layers of the second system executing in the communications kernel process. The second system layered communications input/output device is in turn responsive to the layered communication kernel process for executing the layered communications operations.
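The two bridge halves just summarized can be pictured as a pair of adapters: the upper half presents, to the first system's retained layers, the interface those layers expect of their next lower layer, while the lower half presents, to the second system's kernel communications layers, the interface they expect of their next higher layer, with a mapping between the two request formats. The following is a minimal sketch under assumed request structures; every identifier here is an illustration, not the patent's code.

```c
/* Hedged sketch of the bridge mechanism summarized above. */

/* Request format used by the first system's lowest retained layer
 * (hypothetical). */
struct fs_request { int opcode; void *data; unsigned len; };

/* Request format expected by the second system's top kernel
 * communications layer (hypothetical). */
struct ss_request { int cmd; void *buf; unsigned nbytes; };

/* Lower bridge half: appears to the second system's kernel layers
 * to be their next higher layer. */
static void lower_bridge_submit(const struct ss_request *rq) {
    /* ... hand rq to the layered communication kernel process ... */
    (void)rq;
}

/* Upper bridge half: appears to the first system's stack to be its
 * next lower layer, and maps each operation across the boundary. */
void upper_bridge_request(const struct fs_request *rq) {
    struct ss_request out;
    out.cmd    = rq->opcode;   /* map operation codes           */
    out.buf    = rq->data;     /* map buffer conventions        */
    out.nbytes = rq->len;
    lower_bridge_submit(&out);
}
```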
Further according to the present invention, the second system further includes an emulator level interposed between the second system user level process and the kernel level, wherein the upper communications layer bridge mechanism executes in the second system user process and the lower communications layer bridge mechanism executes in the emulator level.

The communications mechanism in the second system may also include a pseudo device driver executing in the emulation level between the upper communications layer bridge mechanism and the lower communications layer bridge mechanism for communicating layered communications operation requests between the upper communications layer bridge mechanism and the lower communications layer bridge mechanism.

Other features, objects and advantages of the present invention will be understood by those of ordinary skill in the art after reading the following descriptions of a present implementation of the present invention, and after examining the drawings, wherein:

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a block diagram of certain aspects of a first system which is to be emulated on a second system;

FIG. 2 is the emulation mechanism of the present invention as implemented on a second system;

FIG. 3 presents details of the pseudo device driver mechanisms of the present invention;

FIG. 4 presents the internal structure of the queues of the emulation mechanisms of the present invention;
FIG. 5 represents the memory spaces of the first system as implemented on the emulating second system;

FIG. 6 represents a virtual address of the first system;

FIG. 7 represents the mapping of the memory spaces of the first system into the memory spaces of the second system; and,

FIG. 8 is the address translation mechanism and memory space mapping mechanism of the emulation mechanism.

DETAILED DESCRIPTION

Referring to FIG. 1, therein are illustrated certain aspects of a first system which is to be emulated on a second system. The system represented in FIG. 1 and in the following discussions may be, for example, a DPS6 system running the GCOS6 operating system, and the second system, upon which the first system is to be emulated, may be, for example, a DPX/20 system running the AIX* or BOS/X operating systems, which are derived from the UNIX* operating system. The DPS6 system with GCOS6 and the DPX/20 with BOS/X are available as products from Bull HN Information Systems Inc. of Billerica, Mass., while AIX is the International Business Machines Corporation version of the UNIX operating system.

*AIX is a registered trademark of International Business Machines Corporation.
*UNIX is a registered trademark of X/Open Co. Ltd.

A. General Description Of A System To Be Emulated (FIG. 1)

As represented in FIG. 1, a First System 10 is a multi-layered mechanism comprised of a User Level 12, a First System Operating System Level (FOSL) 14 comprised of a First System Executive Level (FEXL) 16 and a First System Input/Output Level (I/O Level) 18, and a First System Hardware Platform Level (FHPL) 20. User Level 12 is comprised of the Application Programs (APPs) 22 and various user visible System Administrative (SADs) programs 24, such as the programs used to administer First System 10 by a system administrator and maintenance and fault isolation programs. It is well known to those of ordinary skill in the art that the System Administrative Programs (SADs) 24 are a part of the operating system and thus execute below the user programs and are not actually a part of User Level 12 indicated herein. System Administrative Programs (SADs) 24 are grouped together with Application Programs (APPs) 22, that is, with the user programs, for convenience in the present description, and User Level 12 is used to generally represent all levels of the system above the First System Executive Level (FEXL) 16. First System Hardware Platform Level (FHPL) 20 is comprised of the system Hardware Elements (HE) 26, which include a Central Processing Unit (CPU) 26a, physical Memory 26b, and Input/Output Devices (IODs) 26c, such as displays, workstations, disk drives, printers and communications devices and links.

1. FIRST SYSTEM EXECUTIVE LEVEL (FEXL) 16

As indicated in FIG. 1, First System Executive Level (FEXL) 16 includes a plurality of Executive Program Tasks (EXP Tasks) 28 which operate to manage the operations of First System 10, including directing the overall operations of First System 10, scheduling and managing the operations executed by First System 10 on behalf of Application Programs (APPs) 22 and System Administrative Programs (SADs) 24, and managing the resources of First System 10, such as assigning memory space for operations and carrying out data and program protection functions.

The operations performed in First System 10 in execution of an Application Program (APP) 22 or a System Administrative Program (SAD) 24 are executed through a plurality of Tasks 30, and any program executing on First System 10 may spawn one or more Tasks 30. A Task 30 may be regarded as being analogous to a process, wherein a process is generally defined as a locus of control which moves through the programs and routines and data structures of a system to perform some specific operation or series of operations on behalf of a program. There is a Task Control Block (TCB) 32 associated with each Task 30, wherein the Task Control Block (TCB) 32 of a Task 30 is essentially a data structure containing information regarding and defining the state of execution of the associated Task 30. A Task Control Block (TCB) 32 may, for example, contain information regarding the state of execution of tasks or operations that the Task 30 has requested be performed, and the information contained in a Task Control Block (TCB) 32 is available, for example, to the programs of Executive Program Tasks (EXP Tasks) 28 for use in managing the execution of the Task 30. Each Task 30 may also include an Interrupt Save Area (ISA) 34 which is used to store hardware parameters relevant to the Task 30.

Any Task 30 may issue requests for operations to be performed by First System 10 on behalf of the Task 30 to Executive Program Tasks (EXP Tasks) 28, and Executive Program Tasks (EXP Tasks) 28 will respond to each such request by issuing a corresponding Indirect Request Block (IRB) 36, wherein an Indirect Request Block (IRB) 36 is essentially a data structure containing the information necessary to define the operation requested by the Task 30 and will generally include pointers or other indicators identifying the corresponding Task 30 and its associated Task Control Block (TCB) 32.
One form of request that can be issued by a Task 30 is a request for an input/output operation, that is, a transfer of data to or from an input/output device (IOD) 26c, and a Task 30 will generate a request for an input/output operation in the form of an Input/Output Request Block (IORB) 38, wherein each Input/Output Request Block (IORB) 38 contains information defining the data to be transferred. In this instance, the corresponding Indirect Request Block (IRB) 36 will include a pointer or other indicator identifying the Input/Output Request Block (IORB) 38 which initiated the generation of the Indirect Request Block (IRB) 36.

In general, Task Control Blocks (TCBs) 32 are distinguished from Input/Output Request Blocks (IORBs) 38 in that Input/Output Request Blocks (IORBs) 38 are primarily concerned with input/output operations and may thus be passed to processes for subsequent handling, thereby effectively removing Input/Output Request Blocks (IORBs) 38 from the set of pending operations to be performed by the First System 10 tasks. Task Control Blocks (TCBs) 32 are primarily concerned with the internal or inter-task operations of First System 10 and generally must be handled by the First System 10 tasks and cannot be passed off. As such, Input/Output Request Blocks (IORBs) 38 are generally given a higher priority than Task Control Blocks (TCBs) 32, thus clearing First System 10's operations to handle Task Control Blocks (TCBs) 32. Exceptions may be made, however, for example, for clock and task inhibit Task Control Blocks (TCBs) 32, which must be given the highest priority.
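The relationships among these control blocks, a TCB 32 describing a task's execution state, an IRB 36 defining a requested operation with pointers back to the originating Task 30 and its TCB 32, and, for input/output requests, an IORB 38 describing the transfer, can be sketched as C structures. The specific fields below are illustrative assumptions; the patent specifies only the kinds of information carried and the pointers between blocks.

```c
/* Hedged sketch of the control blocks described above.  Field names
 * and types are illustrative assumptions. */
struct tcb;                         /* Task Control Block (TCB) 32     */

struct iorb {                       /* I/O Request Block (IORB) 38     */
    int      device;                /* target input/output device      */
    void    *buffer;                /* data to be transferred          */
    unsigned length;
};

struct irb {                        /* Indirect Request Block (IRB) 36 */
    struct tcb  *tcb;               /* originating task's TCB          */
    struct iorb *iorb;              /* non-NULL only for I/O requests  */
    int          operation;         /* operation requested by the task */
};

struct tcb {                        /* state of the associated Task 30 */
    int   task_id;
    int   state;                    /* execution state of the task     */
    void *isa;                      /* Interrupt Save Area (ISA) 34    */
};
```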
It is to be understood in the following descriptions of the present invention that the emulation of a First System 10 on a second system will include emulation of requests that are represented by Indirect Request Blocks (IRBs) 36 as the emulation of First System 10 operations and is not limited solely to system input/output requests, although system input/output requests are the primary form of emulation discussed in the following. All references in the following to Input/Output Request Block (IORB) operations or Indirect Request Block (IRB) operations are to be taken to refer interchangeably to both types of operations, that is, to both Indirect Request Block (IRB) requests and Input/Output Request Block (IORB) requests.

First System Executive Level (FEXL) 16 will further include a set of data structures referred to as Resource Control Tables (RCTs) 40 which are used to store information describing the resources of First System 10, such as Input/Output Devices (IODs) 26c, the allocation of Memory 26b space, and so forth. The internal structure of the Resource Control Tables (RCTs) 40 is generally flexible, except for having a defined header structure through which programs and routines executing in First System 10 may access the contents of the Resource Control Tables (RCTs) 40. A given Resource Control Table (RCT) 40 may contain information defining the characteristics of, for example, a communications link or processor or the characteristics of a disk drive, while another Resource Control Table (RCT) 40 may also contain information regarding the tasks or requests being executed by a corresponding resource, such as a communications link, or pointers or addresses to other data structures containing such information.

Finally, First System Executive Level (FEXL) 16 will include a plurality of queue structures, indicated as Queues 42a through 42n, the function of which is to pass requests for operations on behalf of the Tasks 30 to I/O Level 18 and to receive back from I/O Level 18 the responses indicating the results of the operations of I/O Level 18 in response to the requests passed from First System Executive Level (FEXL) 16. Each Queue 42 corresponds to and is associated with a Driver 44 of First System 10's I/O Level 18, wherein there is at least one Driver 44 for and corresponding to each Hardware Element (HE) 26 of FHPL 20 for controlling operations of the corresponding Hardware Element (HE) 26, and wherein each Queue 42 stores pending requests for operations by the corresponding Driver 44 and Hardware Element (HE) 26.

Requests may be enqueued in Queues 42 in the form of Indirect Request Block (IRB) 36 pointers, wherein an Indirect Request Block Pointer (IRBP) 36p indicates the location in the system of the corresponding Indirect Request Block (IRB) 36. The requests, that is, the pointers, will be read from each Queue 42 by the corresponding server and driver routines of I/O Level 18, described further below, which will operate upon the requests. The responses from I/O Level 18 resulting from the operations performed in execution of the requests are Indirect Request Blocks (IRBs) 36 and are enqueued in the Queues 42, which will be described in further detail below, and the pointers may then be read from Queues 42 by Executive Program Tasks (EXP Tasks) 28 to locate the data structures containing the returned results of the operations.

It should be noted with regard to the above description of First System 10 that the interface by which requests and responses are passed between First System Executive Level (FEXL) 16 and I/O Level 18 may take many forms, depending upon the implementation chosen by the designer. For example, requests may be passed directly, as requests, to the hardware element servers and drivers of I/O Level 18, and the information used by the servers and drivers of I/O Level 18 in executing the requests may be stored in a Queue 42 to be read by the servers and drivers of I/O Level 18 as necessary. The First System Executive Level (FEXL) 16 / I/O Level 18 interface may be implemented in other ways, such as with a single Queue 42, with the drivers and server routines of I/O Level 18 reading requests from the single Queue 42 and passing the results of the request operations back to Tasks 30 through the single Queue 42, and with a queue manager task for controlling the writing and reading of requests to and from the single Queue 42.
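The Queue 42 mechanism just described, executive tasks enqueueing Indirect Request Block pointers and driver routines reading them and enqueueing responses, can be sketched as a simple ring of pointers. This is a hedged illustration, not the patent's implementation; the fixed depth and all names are assumptions.

```c
/* Hedged sketch of one Queue 42: a queue of IRB pointers written by
 * executive tasks and read by a driver, with responses returned the
 * same way. */
#define QDEPTH 32

struct irb;                              /* as sketched above */

struct queue42 {
    struct irb *slot[QDEPTH];
    unsigned    head, tail;              /* read and write positions */
};

static int q42_put(struct queue42 *q, struct irb *irbp) {
    if (q->tail - q->head == QDEPTH) return -1;   /* queue full   */
    q->slot[q->tail++ % QDEPTH] = irbp;           /* enqueue IRBP */
    return 0;
}

static struct irb *q42_get(struct queue42 *q) {
    if (q->head == q->tail) return 0;             /* queue empty  */
    return q->slot[q->head++ % QDEPTH];           /* dequeue IRBP */
}

/* Driver side: consume pending requests, then post each completed
 * IRB back so the executive tasks can read the returned results. */
void driver_service(struct queue42 *requests, struct queue42 *responses) {
    struct irb *irbp;
    while ((irbp = q42_get(requests)) != 0) {
        /* ... direct the hardware element as the request demands ... */
        q42_put(responses, irbp);
    }
}
```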
2. I/O Level 18

Referring now to I/O Level 18, as described above, I/O Level 18 includes a plurality of driver programs and routines, indicated generally in FIG. 1 as Drivers 44, wherein there are one or more Drivers 44 for each element of First System Hardware Platform Level (FHPL) 20 for controlling the operations of the elements of First System Hardware Platform Level (FHPL) 20.

As indicated in FIG. 1, requests to I/O Level 18 for an input/output operation by an element of I/O Level 18 are handled by a Driver Task (DTask) 46 corresponding to and associated with the Hardware Element (HE) 26 identified by the request, and each Driver Task (DTask) 46 includes a corresponding Kernel Control Block (KCB) 48 which is generally used in the execution of I/O Level 18 operations in a manner similar to the use of Tasks 30 and Task Control Blocks (TCBs) 32 in First System Executive Level (FEXL) 16. It should be noted that Driver Tasks (DTasks) 46 and Kernel Control Blocks (KCBs) 48 are structured to meet the needs of I/O Level 18 operations and thus generally are not and need not be similar in detail to Tasks 30 and Task Control Blocks (TCBs) 32, and, in certain implementations of I/O Level 18, these functions may be performed by other data and control structures. For example, Drivers 44 may have access to and make use of Task Control Blocks (TCBs) 32, Indirect Request Blocks (IRBs) 36 and Input/Output Request Blocks (IORBs) 38 for these purposes.

Finally, I/O Level 18 will include Kernel Resource Control Tables (KRCTs) 50 for storing device and system information used by Drivers 44 in executing requests from First System Executive Level (FEXL) 16. Again, while Kernel Resource Control Tables (KRCTs) 50 are similar in function to Resource Control Tables (RCTs) 40, Kernel Resource Control Tables (KRCTs) 50 are structured to meet the needs of I/O Level 18 operations and thus generally need not be identical in detail to Resource Control Tables (RCTs) 40, and, in certain implementations of I/O Level 18, these functions may be performed by other data and control structures. For example, Drivers 44 may instead have access to and make use of Resource Control Tables (RCTs) 40 for these purposes.

3. Layered Communications Facilities

Lastly, First System 10 may provide one or more layered communications facilities, such as the OSI/DSA networking and network terminal drivers and concentrators available from Bull HN Information Systems Inc. of Billerica, Mass. As is well known, many such communications facilities, represented in FIG. 1 by Layered Communications Facilities (LCF) 52, are essentially comprised of a plurality of well defined functional levels wherein the upper levels correspond to, or are implemented as, Tasks 30, and wherein the lower levels, which perform more detailed communications operations, correspond to Driver Tasks (DTasks) 46 and control various communications drivers, such as certain of Hardware Element (HE)-Input/Output Devices (IODs) 26c. As indicated in FIG. 1, Layered Communications Facilities (LCF) 52 may be represented as being comprised of Upper Communications Facilities Layers (UCFLs) 52a, which execute in First System Executive Level (FEXL) 16, or in User Level 12, and which communicate with Lower Communications Facilities Layers (LCFLs) 52b, which execute in I/O Level 18 and which in turn control corresponding communications devices of Hardware Element (HE)-Input/Output Devices (IODs) 26c.

4. Alternate Systems and Division of Systems Into Functional Levels

Finally, it should be noted with regard to the above described separation of First System 10's operating levels into a First System Executive Level (FEXL) 16 level and an I/O Level 18 that not all First Systems 10 will have a formal separation of the functions of the system into distinctly defined levels, and another First System 10 may in fact architecturally regard the various tasks as essentially peer tasks. In any system, however, even one in which all tasks are regarded as peers, certain tasks will be involved in higher level operations while other tasks will be involved in more detailed tasks, and it will be possible to draw a boundary between the tasks separating the higher level tasks from the detail level tasks.

The above described separation of a First System 10 into a First System Executive Level (FEXL) 16 level and an I/O Level 18 should therefore not be regarded as an architectural requirement imposed on the First System 10, but instead as a recognition that certain tasks or processes perform operations at a more detailed level than others and that a boundary between the types of tasks may be drawn for the purposes of the present invention, even if not actually imposed by the architecture of the particular First System 10.
B. General Description, Emulation Of A First System On A Second System (FIG. 2)

1. Second System 54 Functional Levels

FIG. 2 illustrates the layered mechanisms of a Second System 54 that is emulating a First System 10 according to the present invention.

As shown, Second System 54 includes the native Second System Hardware Platform (SHPL) 56, which is comprised of the native Hardware Elements (HEs) 58 of Second System 54. As in First System 10, Hardware Elements 58 of Second System 54 include a Central Processing Unit (CPU) 58a, a physical Memory 58b, and Input/Output Devices (IODs) 58c, such as displays, workstations, disk drives, printers and communications devices and links.

As has been described, Second System 54 is, in the present implementation of the invention, a UNIX based system and, as such and according to the usual conventions of UNIX based systems, the Second System Levels (SSLs) 60 executing on Second System Hardware Platform (SHPL) 56 are comprised of a User Level 62 and a Second System Kernel Level (SKernel) 64. In the present invention, User Level 62 will include Application Programs (APPs) 22 and System Administrative Programs (SADs) 24, which were executing on First System 10, and First System Executive Level (FEXL) 16, which was executing on First System 10. As has been described above, it is unlikely that First System Executive Level (FEXL) 16 and Second System Kernel Level (SKernel) 64 will be able to communicate or operate with each other to any useful degree.

The bridge and interface between First System Executive Level (FEXL) 16 and Second System Kernel Level (SKernel) 64, and therefore the bridge and interface between the functions and operations of First System 10 in emulation on Second System 54 and the functions and operations of Second System 54, which allows Application Programs (APPs) 22, System Administrative Programs (SADs) 24 and First System Executive Level (FEXL) 16 of First System 10 to execute on Second System 54, is provided through an Emulator Executive Level (EEXL) 68. Emulator Executive Level (EEXL) 68 resides and executes in Second System 54's User Level 62, between First System Executive Level (FEXL) 16 of First System 10 and Second System Kernel Level (SKernel) 64 of Second System 54.

As will be described in further detail in the following descriptions of Emulator Executive Level (EEXL) 68, Emulator Executive Level (EEXL) 68 does not comprise a new, separate layer or level of functionality in Second System Levels (SSLs) 60. Emulator Executive Level (EEXL) 68 is instead essentially comprised of certain elements of First System Executive Level (FEXL) 16 which have been transformed into new mechanisms which appear, to the remaining, unchanged elements of First System Executive Level (FEXL) 16, to operate in the same manner as the original, untransformed elements of First System Executive Level (FEXL) 16. At the same time, these new mechanisms of Emulator Executive Level (EEXL) 68 appear to the mechanisms of Second System Kernel Level (SKernel) 64 to be the native mechanisms of Second System 54's User Level 62 with which Second System Kernel Level (SKernel) 64 is accustomed to operate.

The following will initially describe the present invention from the functional viewpoint of First System 10, that is, will discuss the structure and operations of the emulation mechanisms of the present invention primarily from the viewpoint of First System 10's functions and operations. The following will then discuss the emulation of First System 10, including the First System 10 programs and tasks being executed on Second System 54 and the emulation mechanisms, from the structural and operational viewpoint of Second System 54, that is, as user programs and structures executing in Second System 54.

2. First System Executive Level (FEXL) 16 and Second System Kernel Level (SKernel) 64
Referring first to First System Executive Level (FEXL) 16, First System Executive Level (FEXL) 16 as executing on Second System 54 again includes Executive Program Tasks (EXP Tasks) 28, the Tasks 30 spawned by the programs of Executive Program Tasks (EXP Tasks) 28, Application Programs (APPs) 22 and System Administrative Programs (SADs) 24, the Task Control Blocks (TCBs) 32 associated with the Tasks 30, the Indirect Request Blocks (IRBs) 36 and Input/Output Request Blocks (IORBs) 38 created as a result of requests for operations by the programs of Executive Program Tasks (EXP Tasks) 28, Application Programs (APPs) 22 and System Administrative Programs (SADs) 24, and the Resource Control Tables (RCTs) 40 that these elements of First System Executive Level (FEXL) 16 are accustomed to operating with. These elements of First System Executive Level (FEXL) 16 will continue to operate in the same manner as in First System 10, thereby providing, at this level, the operating environment necessary for the execution of Application Programs (APPs) 22 and System Administrative Programs (SADs) 24 in their original forms. As will be described further below, the functions of Queues 42 and the First System Executive Level (FEXL) 16 interfaces to First System 10's Kernel 18 have been absorbed into the mechanisms of Emulator Executive Level (EEXL) 68.

The Second System Kernel Level (SKernel) 64 processes are represented in FIG. 2 by Second System Kernel Processes (SKPs) 66 and, for purposes of the present invention, Second System Kernel Level (SKernel) 64 will, as described further below, contain a Second System Kernel Process (SKP) 66 for each Driver Task (DTask) 46 and associated Driver 44 of First System 10 which is to be emulated in Second System 54. As also indicated, Second System Kernel Level (SKernel) 64 includes a Kernel Process Manager process (KPM) 70, which serves to manage Second System Kernel Processes (SKPs) 66.

Second System Kernel Level (SKernel) 64 is essentially comprised of Second System 54 mechanisms and functions which are generally analogous to those of First System 10's Kernel 18, but are in the forms which are native to Second System 54. For example, Second System 54 has been described as possibly being a UNIX based system and, in this instance, the functions and operations performed by Driver Tasks (DTasks) 46 and Drivers 44 of First System 10's I/O Level 18 will be performed by Second System 54 Second System Kernel Level (SKernel) 64 processes.
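The patent states that SKernel 64 contains one Second System Kernel Process (SKP) 66 per emulated Driver Task (DTask) 46 and a Kernel Process Manager (KPM) 70 that manages them, and notes further below that processes are spawned to execute existing kernel services. One plausible shape of that arrangement on a UNIX based second system is sketched here using fork(); the device list, function names and the use of fork() are all assumptions, not the patent's code.

```c
/* Hedged sketch: a kernel-process manager forking one UNIX process
 * per emulated first-system driver.  Purely illustrative. */
#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>

static void skp_main(const char *device) {
    /* ... serve requests for this emulated device until shutdown ... */
    (void)device;
}

static const char *emulated_devices[] = { "disk0", "tape0", "console" };

int kpm_spawn_all(void) {
    unsigned i;
    for (i = 0; i < sizeof emulated_devices / sizeof *emulated_devices; i++) {
        pid_t pid = fork();
        if (pid < 0) { perror("fork"); return -1; }
        if (pid == 0) {                /* child: run the kernel process */
            skp_main(emulated_devices[i]);
            _exit(0);
        }
        /* parent (the manager) records the pid for later supervision */
        printf("spawned SKP for %s: pid %ld\n",
               emulated_devices[i], (long)pid);
    }
    return 0;
}
```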
3. Emulator Executive Level (EEXL) 68

As represented in FIG. 2, Emulator Executive Level (EEXL) 68 includes an INTERPRETER 72 which interprets the First System 10 instructions into equivalent Second System 54 instructions, thereby allowing Second System 54's CPU 58a, Memory 58b, and other elements of Second System 54 to emulate the operations of the corresponding elements of First System 10.

Emulator Executive Level (EEXL) 68 further includes a plurality of Pseudo Device Drivers (PSDDs) 74, wherein there is a Pseudo Device Driver (PSDD) 74 for each input/output device or type of input/output device or other functionality of First System 10 which appeared in First System Hardware Platform Level (FHPL) 20 and which is to be emulated in Second System 54. As such, Pseudo Device Drivers (PSDDs) 74 will include Pseudo Device Drivers (PSDDs) 74 for terminals, for disk drives, for tape drives, for displays, and for certain communication devices.

As indicated in FIG. 2, there will be a Second System Kernel Process (SKP) 66 for and corresponding to each Pseudo Device Driver (PSDD) 74. In this regard, it should be noted that the term Pseudo Device Driver as used with regard to FIG. 2 is a designation which reflects First System Executive Level (FEXL) 16's view of the functions and operations performed by these elements of Emulator Executive Level (EEXL) 68. That is, to First System Executive Level (FEXL) 16, and to Application Programs (APPs) 22, System Administrative Programs (SADs) 24 and Tasks 30, each Pseudo Device Driver (PSDD) 74 and associated Second System Kernel Process (SKP) 66 appears to function in a manner that is equivalent to Drivers 44 and Driver Tasks (DTasks) 46 of First System 10's I/O Level 18. As has been described briefly above, and as described further below, these same mechanisms of Emulator Executive Level (EEXL) 68 appear to Second System Kernel Level (SKernel) 64 to be native Second System 54 User Level 62 functions and mechanisms, and there will be a Second System Kernel Process (SKP) 66 for and corresponding to each Pseudo Device Driver (PSDD) 74, that is, for each device or function of First System 10 which is to be emulated in Second System 54. The present invention does not require the modification of Second System Kernel 64 and does not require the creation of new drivers for the purposes of the present invention. The present invention spawns processes to execute existing Second System Kernel Processes (SKPs) 66.

6. Emulation of Communications Link Layers

The communications operations of First System 10 are emulated in Second System 54 in a manner corresponding to the emulation of First System 10 input/output devices, but with the specific form of emulation depending upon the specific type of communications operations. For example, in the present invention certain communications devices of First System 10 are emulated by porting the driver programs and routines from the native First System 10 code into native Second System 54 code, or alternatively by providing equivalent Second System 54 Second System Kernel Processes (SKPs) 66, which are called by First System Executive Level (FEXL) 16 through a corresponding Pseudo Device Driver (PSDD) 74 and executed as native Second System 54 processes.

Layered network communications, such as OSI/DSA, may be executed through the usual layered communications mechanisms, but wherein certain of the higher communications layers reside in First System Executive Level (FEXL) 16 or in User Level 12 in Second System 54 in their native First System 10 form, that is, as originally implemented in First System 10, while the lower communications layers are implemented in Emulator Executive Level (EEXL) 68, that is, as native Second System 54 program layers, and use the Second System Kernel Processes (SKPs) 66 provided by Second System Kernel Level (SKernel) 64 and Input/Output Devices (IODs) 58c provided in Second System Hardware Platform Level (SHPL) 56 in place of the drivers and devices provided in First System 10. This is illustrated in FIG. 2, wherein Layered Communications Facilities (LCF) 52 is shown as being emulated by Upper Communications Facilities Layers (UCFLs) 52a residing and executing in First System Executive Level (FEXL) 16 or User Level 12 as native First System 10 program layers and Lower Communications Facilities Layers (LCFLs) 52b residing and executing in Second System Kernel Level (SKernel) 64 as native Second System 54 processes, identified in FIG. 2 as Lower Communications Facilities Layer Processes (LCFLPs) 78.
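Because each Pseudo Device Driver (PSDD) 74 must appear to Tasks 30 to behave exactly as a Driver 44 and Driver Task (DTask) 46 would, one way to picture it is as a driver interface whose entry points forward requests to a Second System Kernel Process (SKP) 66 rather than to first-system hardware. The entry-point set below is an illustrative assumption; the patent does not define a specific driver interface.

```c
/* Hedged sketch: a pseudo device driver exporting the same entry
 * points a first-system driver would, so that tasks cannot tell the
 * difference. */
struct iorb;

struct driver_ops {                      /* interface the Tasks 30 see */
    int (*start_io)(struct iorb *rq);    /* begin an I/O request       */
    int (*cancel)(struct iorb *rq);      /* abort a pending request    */
};

/* Instead of programming first-system hardware, each entry point
 * forwards the request to the corresponding second-system kernel
 * process (the forwarding body is omitted here). */
static int psdd_start_io(struct iorb *rq) { (void)rq; return 0; }
static int psdd_cancel(struct iorb *rq)   { (void)rq; return 0; }

const struct driver_ops terminal_psdd = { psdd_start_io, psdd_cancel };
```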
As shown in FIG. 2, Upper Communications Facilities Layers (UCFLs) 52a and Lower Communications Facilities Layer Processes (LCFLPs) 78 are functionally interconnected and communicate through a new layer, referred to as the Layered Communications Emulation Bridge (LCEB) 76, which is comprised of two cooperative modules indicated in FIG. 2 as Pseudo Network Layer (PNL) 76a, residing and executing in First System Executive Level (FEXL) 16 as a native First System 10 program module, and Pseudo Network Driver (PND) 76b, residing and executing in Second System Kernel (SKernel) 64 as a native Second System 54 program module.

According to the present invention, therefore, Upper Communications Facilities Layers (UCFLs) 52a, which are the layered communications levels with which Tasks 30 communicate directly in First System 10, are retained in Second System 54 and execute in Emulator Executive Level (EEXL) 68 or in User Level 12, so that Tasks 30 may execute layered communications operations as if they were executing in First System 10.

In turn, Lower Communications Facilities Layers (LCFLs) 52b are replaced by corresponding native Second System 54 communications layers, referred to in FIG. 2 as Lower Communications Facilities Layer Processes (LCFLPs) 78, which execute the functions and operations that were executed in First System 10 by the native Lower Communications Facilities Layers (LCFLs) 52b of First System 10. As shown, Lower Communications Facilities Layer Processes (LCFLPs) 78 perform essentially the same functions as Lower Communications Facilities Layers (LCFLs) 52b and the functions and operations that were performed in First System 10 by the Driver Tasks (DTasks) 46 and Drivers 44, including controlling the Second System 54 Hardware Element (HE)-Input/Output Devices (IODs) 58c which correspond to the layered communications devices Hardware Element (HE)-Input/Output Device (IOD) 26c of First System 10.

The bridge between Upper Communications Facilities Layers (UCFLs) 52a and Lower Communications Facilities Layer Processes (LCFLPs) 78 is, as described above, provided by the new Layered Communications Emulation Bridge (LCEB) 76 comprised of the cooperative modules Pseudo Network Layer (PNL) 76a, executing in First System Executive Level (FEXL) 16, that is, in the First System 10 operating environment, and Pseudo Network Driver (PND) 76b, in Emulator Executive Level (EEXL) 68, in the Second System 54 operating environment.

In the exemplary implementation of the present invention as described herein, Layered Communications Facilities (LCF) 52 are divided between layer 4, the transport layer, and layer 3, the network layer, of the seven layer ISO model, so that layers 7 through 4 comprise Upper Communications Facilities Layers (UCFLs) 52a executing in First System Executive Level (FEXL) 16 while layers 3 through 1 comprise Lower Communications Facilities Layer Processes (LCFLPs) 78 executing in Second System Kernel (SKernel) 64 and in Second System Hardware Platform Level (SHPL) 56.

According to the present invention, Pseudo Network Layer (PNL) 76a emulates and appears to Upper Communications Facilities Layers (UCFLs) 52a as the X.25 network layer of the seven layer OSI model and transforms requests from the transport layer into First System 10 input/output requests. Pseudo Network Driver (PND) 76b appears to Lower Communications Facilities Layer Processes (LCFLPs) 78 as the transport layer of the seven layer OSI model and maps requests from Pseudo Network Layer (PNL) 76a into UNIX API requests that may be executed by Lower Communications Facilities Layer Processes (LCFLPs) 78 and Hardware Element (HE)-Input/Output Devices (IODs) 58c executing layered communications operations in Second System 54.

Lastly, Pseudo Network Driver (PND) 76b includes the internal structure of a Pseudo Device Driver (PSDD) 74, which will be described fully in the following descriptions, and for these purposes the descriptions of Pseudo Device Drivers (PSDDs) 74 should be regarded as applying equally to Pseudo Network Driver (PND) 76b as regards the structures and operations of Pseudo Device Drivers (PSDDs) 74.

According to the present invention, therefore, a new communications bridge layer is interposed between an upper communications layer executing in the First System 10 environment and a next lower communications layer executing in the Second System 54 environment. The bridge layer is comprised of an upper module executing in the First System 10 environment and appearing to the upper communications layer to be the next lower layer, and a lower module executing in the Second System 54 environment and appearing to the next lower communications layer to be the upper communications layer. This invention may be implemented between any two communications layers having a hierarchical relationship and, because neither of the two bridge modules is responsible for peer to peer network protocols, the integrity of the layered communications facilities is preserved.

7. First System 10 and the Emulation Mechanism As Second System 54 Processes

As has been described previously, Second System 54 is a UNIX based system and, as is well known, UNIX based systems may generally be regarded as comprising two levels executing above the hardware platform level, generally referred to as the User Level and the Kernel Level, indicated in FIG. 2 as User Level 62 and Kernel Level 64. User Level 62 generally comprises the user accessible functions and operations of the system and Kernel Level 64 generally comprises the functions and operations that are "internal" to the system and are not usually accessible to the users. As is also well understood, all operations in a UNIX based system, whether in User Level 62 or in Kernel Level 64, are executed within UNIX processes.

According to the present invention, the Executive Program Tasks (EXP Tasks) 28 and Tasks 30 being executed on behalf of Application Programs (APPs) 22 and System Administrative Programs (SADs) 24, Upper Communications Facilities Layers (UCFLs) 52a with Pseudo Network Layer (PNL) 76a, and INTERPRETER 72 are to be executed in Second System 54 in a manner so as to appear to Second System 54 to be "native" to Second System 54. Accordingly, and as indicated in FIG. 2, Executive Program Tasks (EXP Tasks) 28 and Tasks 30 being executed on behalf of Application Programs (APPs) 22 and System Administrative Programs (SADs) 24, Upper Communications Facilities Layers (UCFLs) 52a with Pseudo Network Layer (PNL) 76a, and INTERPRETER 72 are executed in the Second System 54 of the present implementation in a First System Process (FSP) 80, wherein First System Process (FSP) 80 is one or more user processes according to the conventions of the UNIX based operating system executing on Second System 54.
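The patent says only that Pseudo Network Driver (PND) 76b maps requests arriving from Pseudo Network Layer (PNL) 76a into "UNIX API requests"; it does not name the API. As one hedged possibility, a transport-level connect request could be mapped onto the BSD socket interface, as sketched below; the request format and the choice of sockets are assumptions.

```c
/* Hedged sketch of the lower bridge module mapping a connect request
 * from the pseudo network layer onto a UNIX API call. */
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

/* A transport-layer connect request as it might arrive from PNL 76a
 * (hypothetical format). */
struct pnl_connect { unsigned long remote_addr; unsigned short port; };

int pnd_connect(const struct pnl_connect *rq) {
    struct sockaddr_in sa;
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) return -1;
    memset(&sa, 0, sizeof sa);
    sa.sin_family      = AF_INET;
    sa.sin_port        = htons(rq->port);
    sa.sin_addr.s_addr = htonl(rq->remote_addr);
    if (connect(fd, (struct sockaddr *)&sa, sizeof sa) < 0) {
        close(fd);
        return -1;
    }
    return fd;   /* the descriptor stands in for the emulated link */
}
```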
It should be noted that, while FIG. 2 illustrates a single instance of a First System 10 being emulated on Second System 54, it is possible for multiple instances of a First System 10 to be concurrently emulated on Second System 54, or even for multiple instances of different First Systems 10 to be concurrently implemented on a Second System 54, so long as Second System 54 is a multi-tasking capable system. In such instances, each instance of a First System 10 will be executed in the Second System 54 as a different set of First System Processes (FSPs) 80 executing in the Second System 54.

In addition, each Pseudo Device Driver (PSDD) 74, with its associated Second System Kernel Process (SKP) 66 and Second System 54 hardware device or devices, such as a Hardware Element (HE)-Input/Output Device (IOD) 58c, comprises a Second System 54 process, indicated in FIG. 2 as Second System Processes (SSPs) 82. In a similar manner, each instance of a Pseudo Network Driver (PND) 76b with a Lower Communications Facilities Layer Process (LCFLP) 78 and one or more associated Hardware Element (HE)-Input/Output Devices (IODs) 58c is implemented as a Second System Process (SSP) 82.

Executive Program Tasks (EXP Tasks) 28, Tasks 30, Upper Communications Facilities Layers (UCFLs) 52a, and INTERPRETER 72 may therefore communicate among themselves and interoperate according to the conventions of First System 10, so that Executive Program Tasks (EXP Tasks) 28, Tasks 30, Upper Communications Facilities Layers (UCFLs) 52a, and INTERPRETER 72 appear to one another to be native First System 10 tasks and may therefore execute among themselves as if they were in fact executing on First System 10. In this regard, it must be remembered that INTERPRETER 72 emulates First System 10's central processing unit and memory and thus appears to Executive Program Tasks (EXP Tasks) 28, Tasks 30, and Upper Communications Facilities Layers (UCFLs) 52a to be First System 10's central processing unit and memory.

At the same time, First System Process (FSP) 80 may communicate and interoperate with the other processes executing in Second System 54, such as Second System Processes (SSPs) 82, according to the conventions of the UNIX based operating system executing in Second System 54 and thereby appear to Second System 54 to be a native Second System 54 user process.

As also indicated in FIG. 2, First System Process (FSP) 80, which includes Executive Program Tasks (EXP Tasks) 28 and Tasks 30 being executed on behalf of Application Programs (APPs) 22 and System Administrative Programs (SADs) 24, Upper Communications Facilities Layers (UCFLs) 52a with Pseudo Network Layer (PNL) 76a, and INTERPRETER 72, and Second System Processes (SSPs) 82 all execute within User Level 62 of Second System 54, so that First System Process (FSP) 80 and the Second System Processes (SSPs) 82 appear to Second System 54 to be Second System 54 user level processes. The interface between the First System 10 operations and functions that are being emulated on Second System 54 and the native operations and functions of Second System 54 which are used by the emulated elements of First System 10 thereby occurs at the boundary between Second System 54's User Level 62 and Second System 54's Kernel Level 64.

In summary, therefore, the present invention implements the emulated operations and functions of First System 10 in such a manner that the emulated operations and functions of First System 10 may interoperate among themselves in the same manner as in First System 10 and, therefore, effectively within the First System 10 native environment. At the same time, the processes in which the emulated First System 10 operations and functions are executing, and the processes emulating First System 10 input/output operations, are native Second System 54 processes, and thus may interoperate with one another and with other processes native to Second System 54 in a manner which is native to Second System 54.

In addition, the interface between the emulated First System 10 functions and operations and the native Second System 54 processes and functionality falls at the boundary between Second System 54's user level processes and kernel level processes, and thus at a well defined interface, so that the functional integrity of Second System 54's architecture is preserved.

As such, the method of emulation of the present invention retains unchanged the most significant aspects of the functionality of both the emulated and the emulating systems and places the interface between the emulated and emulating systems at a clearly defined and controlled boundary, so that the interface between the emulated and emulating systems is substantially simplified and the functional and operational integrity of both systems is preserved.

C. Emulator Executive Level (EEXL) 68, Memory Queues, and the Memory Queue Interface (FIG. 3)

1. General Description of Emulator Executive Level (EEXL) 68 and Shared Memory Space Mechanisms

Referring to FIG. 3, therein is presented a diagrammatic representation of the structures and mechanisms of Emulator Executive Level (EEXL) 68, a representative First System Process (FSP) 80 and Second System Kernel Level (SKernel) 64 with Second System Kernel Processes (SKPs) 66, concentrating upon the Emulator Executive Level (EEXL) 68 structures and mechanisms comprising the bridge and interface between First System Process (FSP) 80 and Second System Kernel Level (SKernel) 64 and, in particular, Pseudo Device Drivers (PSDDs) 74. The other data structures and mechanisms of First System Process (FSP) 80, Emulator Executive Level (EEXL) 68 and Second System Kernel Level (SKernel) 64 will be understood with reference to FIGS. 1 and 2. As described further in following descriptions of the present invention, Emulator Executive Level (EEXL) 68 resides in a UNIX Memory Space of Second System Hardware Platform Level (SHPL) 56's physical Memory 58b and is accessible to the mechanisms of Second System Kernel Level (SKernel) 64.
Programs (APPs) 22 and System Administrative Programs 2. Memory Queue Interface and Queues (SADs) 24, Upper Communications Facilities Layers As represented in FIG. 3, the bridge mechanisms and (UFCLs) 52a with Pseudo Network Layer (PNL) 74a, and structures between First System Process (FSP) 80 and INTERPRETER 72, and Second System Processes (SSPs) 45 Emulator Executive Level (EEXL) 68 include a Memory 82 all execute within User Level 62 of Second System 54, Queue Interface (MQI) 84 residing in Emulator Executive so that First System Process (FSP) 80 and the Second Level (EEXL) 68 and executing in each First System System Processes (SSPs) 82 appear to Second System 54 to Process (FSP) 80, and a plurality of Pseudo Device Queues be Second System 54 user level processes. The interface (PSDQs) 86 and a single Active Queue (SAO) 88, between the First System 10 operations and functions that 50 which together comprise the Pseudo Device Drivers are being emulated on Second System 54 and the native (PSDDs) 74 shown in FIG. 2. Each Pseudo Device Driver operations and functions of Second System 54 which are (PSDD) 74 includes a corresponding Pseudo Device Queue used by the emulated elements of First System 10 thereby (PSDQ) 86 and the Pseudo Device Drivers (PSDDs) 74 occurs at the boundary between Second System 54's User together share the single Software Active Queue (SAO) 88 Level 62 and Second System 54's Kernel Level 64. 55 and Memory Queue Interface (MQI) 84. Although not In summary, therefore, the present invention implements represented explicitly in FIG. 3, the linked communication the emulated operations and functions of First System 10 in layer path will, as described, also include a queue mecha such a manner that the emulated operations and functions of nism comprised of a Pseudo Device Driver (PSDD) 74 in First System 10 may interoperate among themselves in the Pseudo Network Driver (PND) 76b wherein that Pseudo same manner as in First System 10 and, therefore, effectively 60 Device Driver (PSDD) 74 will also include a Pseudo Device within the First System 10 native environment. At the same Queue (PSDQ) 86 and a shared portion of Software Active time, the processes in which the emulated First System 10 Queue (SAO) 88 and Memory Queue Interface (MQI) 84. operations and functions are executing and the processes The following will therefore discuss the structure and opera emulating First System 10 input/output operations are native tions of Pseudo Device Drivers (PSDDs) 74 generically, Second System 54 processes, and thus may interoperate with 65 with the understanding that the following discussion applies one another and with other processes native to Second to all of the input/output paths emulated in Second System System 54 in a manner which is native to Second System 54. 54, including the layered communications facilities. 5,619,682 17 18 As previously described, each Pseudo Device Driver predetermined location. The general structure of the Queue (PSDD) 74 in the path of linked communications layers Headers (QHS) 84 is the same for Software Active Queue represents and corresponds to a device or driver or commu (SAO) 88 and for each of the Pseudo Device Queues nication link used by First System 10, that is, that existed in (PSDQs) 86, but the information contained in the queue will the First System Operating System Levels (FOSL) 14 and depend upon the type of the particular queue, as will be Hardware Platform Level (HPL) 20 of First System 10, and described below. 
3. Implementation of Device Drivers and Link Layers

As described briefly above, each Pseudo Device Driver (PSDD) 74 utilizes a Pseudo Device Queue (PSDQ) 86 and shares the common Software Active Queue (SAQ) 88 with other Pseudo Device Drivers (PSDDs) 74 by executing the functions provided in Memory Queue Interface (MQI) 84, wherein Memory Queue Interface (MQI) 84 is a set of routines for accessing and managing the Pseudo Device Queues (PSDQs) 86 and the Software Active Queue (SAQ) 88.

The Pseudo Device Queue (PSDQ) 86 of each Pseudo Device Driver (PSDD) 74 forms the path by which requests for operations are passed to the appropriate Second System Kernel Processes (SKPs) 66 and Lower Communications Facilities Layer Processes (LCFLPs) 78 of Second System Kernel Level (SKernel) 64, wherein each Pseudo Device Queue (PSDQ) 86 is a path to a corresponding Second System Kernel Process (SKP) 66 or Lower Communications Facilities Layer Process (LCFLP) 78 and thus to a corresponding emulated device, driver or link layer. Software Active Queue (SAQ) 88, in turn, which is shared by each of the Pseudo Device Drivers (PSDDs) 74 and Lower Communications Facilities Layer Processes (LCFLPs) 78 and their corresponding Second System Kernel Processes (SKPs) 66, forms the path by which the results of Second System Kernel Process (SKP) 66 operations are passed back to the requesting tasks executing in First System Executive Level (FEXL) 16.

4. Internal Structure of Pseudo Device Queues (PSDQs) 86 and Software Active Queue (SAQ) 88

The Pseudo Device Queues (PSDQs) 86 are each comprised of a Header structure and a queue structure, wherein the Header structure is embedded in a Resource Control Table (RCT) 40, as described above with reference to FIG. 1. Software Active Queue (SAQ) 88 is similarly comprised of a Header structure and a queue structure, wherein the Header structure resides in system memory space at a predetermined location. The general structure of the Queue Headers (QHs) 90 is the same for Software Active Queue (SAQ) 88 and for each of the Pseudo Device Queues (PSDQs) 86, but the information contained in the queue will depend upon the type of the particular queue, as will be described below.

As shown in FIG. 4, the queue structure associated with each Queue Header (QH) 90 is represented as a Queue 92, wherein each Queue 92 is a linked queue of Queue Frames (QFs) 94 wherein, as will be described in further detail in a following discussion and figure, each Queue Frame (QF) 94 may contain a Task Control Block (TCB) 32 or an Indirect Request Block Pointer (IRBP) 36p, wherein each Task Control Block (TCB) 32 or Indirect Request Block Pointer (IRBP) 36p represents a request for an operation by a Task 30, as described above with reference to FIG. 1. The number of Queue Frames (QFs) 94 in any Queue 92 will depend upon the number of outstanding requests to the corresponding emulated device or, in the case of Software Active Queue (SAQ) 88, the number of completed requests, as described below.

The queue of each of Software Active Queue (SAQ) 88 and the Pseudo Device Queues (PSDQs) 86 comprises a structure referred to as a "linked queue with head node", wherein the Queue Header (QH) 90 comprises the head node and wherein the Queue Header (QH) 90 and the Indirect Request Blocks (IRBs) 36 in a Queue 92 are each linked to the following element in the queue.
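The "linked queue with head node" organization just described may be pictured with a short C sketch. This sketch is illustrative only; the structure and field names are assumptions made for exposition and are not the literal structures of Appendix A.

    /* Illustrative sketch of the linked queue with head node of FIG. 4.
     * The QH 90 serves as the head node; the Link 106 of the last QF 94
     * points back to the QH 90. */
    struct qlink { struct qlink *fwd; };      /* the common link word (Link 106) */

    struct queue_frame {                      /* Queue Frame (QF) 94 */
        struct qlink   link;                  /* next QF 94, or back to the QH 90 */
        unsigned short priority;              /* Priority field 108 */
        void          *request;               /* TCB 32 or IRBP 36p */
    };

    struct queue_header {                     /* Queue Header (QH) 90, the head node */
        struct qlink   link;                  /* first QF 94, or itself when empty */
        int            isem_sid;              /* Semaphore 102 (UNIX semaphore id) */
    };

    /* The queue is empty when the head node's link points back to itself. */
    int queue_empty(struct queue_header *qh)
    {
        return qh->link.fwd == &qh->link;
    }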
5. Addresses and Address Translation

It will be noted, as described previously, that Software Active Queue (SAQ) 88, the Pseudo Device Queues (PSDQs) 86, and INTERPRETER 72 are provided to emulate the corresponding mechanisms of First System 10, that is, First System 10's input/output devices and central processing unit, as seen by Executive Program Tasks (EXP Tasks) 28 and Tasks 30. As such, Executive Program Tasks (EXP Tasks) 28 and Tasks 30 will provide memory addresses to the Pseudo Device Queues (PSDQs) 86 and INTERPRETER 72 according to the requirements of the native memory access and management mechanisms of First System 10 and will expect to receive memory addresses from Software Active Queue (SAQ) 88 and INTERPRETER 72 in the same form. Second System Kernel Processes (SKPs) 66, Lower Communications Facilities Layer Processes (LCFLPs) 78, the hardware elements of Second System 54 and other processes executing as native processes in Second System 54, however, operate according to the memory addressing mechanisms native to Second System 54. As such, address translation is required when passing requests and returning results between Emulator Executive Level (EEXL) 68 and Second System Kernel Level (SKernel) 64.

As described, INTERPRETER 72 is provided to interpret First System 10 instructions into functionally equivalent Second System 54 instructions, or sequences of instructions, including instructions pertaining to memory operations. As such, the address translation mechanism is also associated with INTERPRETER 72, or is implemented as a part of INTERPRETER 72, and is indicated in FIG. 3 as Address Translation (ADDRXLT) 98 and will be described in detail in a following discussion.
6. Operation of Memory Queue Interface (MQI) 84, Pseudo Device Queues (PSDQs) 86, and Software Active Queue (SAQ) 88

A task executing in First System Executive Level (FEXL) 16, that is, a Task 30 or one of Executive Program Tasks (EXP Tasks) 28 executing in First System Process (FSP) 80, may request the execution of an operation by a device emulated through Emulator Executive Level (EEXL) 68, Second System Kernel Level (SKernel) 64, and Second System Hardware Platform Level (SHPL) 56 by generating, or causing an Executive Program Task (EXP Task) 28 to generate, an Indirect Request Block (IRB) 36 as in the normal, native operation of First System 10. The Task 30 or EXP Task 28 generating the Indirect Request Block (IRB) 36 will then, however, write the Indirect Request Block Pointer (IRBP) 36p into the Pseudo Device Queue (PSDQ) 86 corresponding to the appropriate device, driver or link layer by "escaping" to Emulator Executive Level (EEXL) 68 and issuing a call to Memory Queue Interface (MQI) 84. As shown in FIG. 3, this operation is performed through Escape/Call Mechanism (EscapeC) 100, which detects and traps input/output instructions and, in response to an input/output instruction, invokes Memory Queue Interface (MQI) 84 rather than, as in First System 10, passing the Indirect Request Block (IRB) 36 through one of the mechanisms described with reference to FIG. 1. Memory Queue Interface (MQI) 84 then writes the corresponding Indirect Request Block Pointer (IRBP) 36p into the corresponding Pseudo Device Queue (PSDQ) 86, which resides in the Emulator Executive Level (EEXL) 68 operating environment. Thereafter, and as described further below, communication and interoperation between the Pseudo Device Queues (PSDQs) 86, Software Active Queue (SAQ) 88, and the Second System Kernel Processes (SKPs) 66, all of which are Second System 54 structures and processes, will be by conventional process calls and returns.

Referring briefly to the discussion of First System 10 in FIG. 1 and, in particular, the mechanisms by which Tasks 30 pass Indirect Request Block (IRB) 36 requests to I/O Level 18, it will be apparent that, except for the request call accordingly being to Memory Queue Interface (MQI) 84 rather than to the corresponding First System 10 mechanisms and the escape to native Second System 54 code, the operations within First System Process (FSP) 80 to invoke the emulation of an input/output operation are very similar to the native operations of First System 10. The emulation call mechanism of Escape/Call Mechanism (EscapeC) 100 and Memory Queue Interface (MQI) 84 therefore closely emulates the operation of First System 10 in this regard, and the modifications to First System Executive Level (FEXL) 16 are relatively slight, primarily being the addition of Escape/Call Mechanism (EscapeC) 100 and Memory Queue Interface (MQI) 84.

Further in this regard, it should be noted that Memory Queue Interface (MQI) 84 must be implemented in the Second System 54 operating environment, that is, in Emulator Executive Level (EEXL) 68, as a routine available to a plurality of Second System 54 processes.

It should be further noted that Pseudo Device Queues (PSDQs) 86 and Software Active Queue (SAQ) 88 are data structures of a form that is similar to the data structures already in use by First System Executive Level (FEXL) 16, so that the implementation of Memory Queue Interface (MQI) 84 and Escape/Call Mechanism (EscapeC) 100 as Second System 54 programs is, as regards the interface between Escape/Call Mechanism (EscapeC) 100 and Memory Queue Interface (MQI) 84, a well understood process.
Returning to the discussion of the emulation of a requested input/output operation, upon being called by a First System Process (FSP) 80 task issuing a request for an operation by an emulated device, driver or link layer, Memory Queue Interface (MQI) 84 will enqueue the Indirect Request Block Pointer (IRBP) 36p of the request into the Queue 92 of the Pseudo Device Queue (PSDQ) 86 corresponding to the emulated device, driver or link layer and, in doing so, will set a Semaphore 102 in the Queue Header (QH) 90 of the Pseudo Device Queue (PSDQ) 86. As has been described, the Second System 54 upon which First System 10 is emulated is, in the present example, a UNIX based system and the Semaphore 102 is correspondingly a UNIX semaphore which, as indicated in FIG. 3, operates to wake up the Second System Kernel Process (SKP) 66 or Lower Communications Facilities Layer Process (LCFLP) 78 which emulates the requested device, driver or link layer, in the manner well known to those of skill in the art and familiar with UNIX based systems. It should be noted that the Semaphores 102 also operate to lock a queue that an entry is being written into, so that another process will not attempt to write into or read from the queue while the queue is being modified by a first process, such as Memory Queue Interface (MQI) 84 or a Second System Kernel Process (SKP) 66.

The writing of an Indirect Request Block Pointer (IRBP) 36p into the Queue 92 of a Pseudo Device Queue (PSDQ) 86 will thereby cause a conventional UNIX call and return in which the Second System Kernel Process (SKP) 66 or Lower Communications Facilities Layer Process (LCFLP) 78 performs the requested operation. That is, and as indicated in FIG. 3, the setting of the Semaphore 102 in a Pseudo Device Queue (PSDQ) 86 results in a process call to the Second System Kernel Process (SKP) 66 or Lower Communications Facilities Layer Process (LCFLP) 78 which is emulating the corresponding device, driver or link layer to which the request was directed by the requesting task. The Second System Kernel Process (SKP) 66 or Lower Communications Facilities Layer Process (LCFLP) 78 will then access and read the Indirect Request Block Pointer (IRBP) 36p of the request and, operating through the Indirect Request Block (IRB) 36, will obtain the information necessary to execute the requested operation. The Second System Kernel Process (SKP) 66 or Lower Communications Facilities Layer Process (LCFLP) 78 will execute the requested operation through the corresponding hardware elements of Second System Hardware Platform Level (SHPL) 56 and, upon completing the operation, will return the results of the operation to Software Active Queue (SAQ) 88 and, when doing so, will set the Semaphore 102 in the Queue Header (QH) 90 of Software Active Queue (SAQ) 88.

It will therefore be apparent from the above that the design of such Second System Kernel Processes (SKPs) 66 and of Lower Communications Facilities Layer Processes (LCFLPs) 78 will be well familiar to those of skill in the art, so that a detailed description of the design of such Second System Kernel Processes (SKPs) 66 and Lower Communications Facilities Layer Processes (LCFLPs) 78 is not necessary for those of skill in the art to implement the present invention and, since the lower level details of such designs would differ for each First System 10 and Second System 54, would be superfluous to understanding the present invention.
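As a concrete illustration of the enqueue-and-wake sequence just described, the following sketch uses System V semaphore operations (semop), which are the kind of UNIX semaphore facility the description refers to. All identifiers are hypothetical, and the split of the semaphore set into a lock member (0) and a sleep/wake member (1) is an assumption made for this sketch.

    #include <sys/types.h>
    #include <sys/ipc.h>
    #include <sys/sem.h>

    struct qlink { struct qlink *fwd; };
    struct queue_frame  { struct qlink link; void *irbp; };
    struct queue_header { struct qlink link; int isem_sid; };

    static void sem_adjust(int semid, unsigned short which, short delta)
    {
        struct sembuf op;
        op.sem_num = which;
        op.sem_op  = delta;
        op.sem_flg = 0;
        semop(semid, &op, 1);             /* error handling omitted for brevity */
    }

    /* Sketch of the MQI 84 enqueue path: lock the queue, link the IRBP 36p
     * into the Queue 92, unlock, then wake the sleeping server process. */
    void mqi_enqueue(struct queue_header *qh, struct queue_frame *qf)
    {
        sem_adjust(qh->isem_sid, 0, -1);  /* lock the queue structure          */
        qf->link.fwd = qh->link.fwd;      /* link the frame into Queue 92      */
        qh->link.fwd = &qf->link;
        sem_adjust(qh->isem_sid, 0, +1);  /* release the lock                  */
        sem_adjust(qh->isem_sid, 1, +1);  /* wake the sleeping SKP 66 or       */
                                          /* LCFLP 78 server process           */
    }

The completion path is symmetric: the server links the result into Software Active Queue (SAQ) 88 under the same discipline and raises the Semaphore 102 in the SAQ 88 header to notify the requesting side.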
7. Further Description of Queue Headers (QHs) 90 and Queues 92 (FIG. 4, Tables 1, 2, 3 and 4 and Appendix A)

Referring to FIG. 4, therein is represented the Queue Header (QH) 90 and Queue 92 of Software Active Queue (SAQ) 88 or a Pseudo Device Driver Queue (PSDQ) 86 in further detail. As indicated therein, and as described previously, each Queue Header (QH) 90 includes, in addition to a Semaphore 102, a Link 106 indicating the location of the first Queue Frame (QF) 94 in the associated Queue 92. Each Queue Frame (QF) 94, in turn, includes a Link 106 to the next Queue Frame (QF) 94 of the Queue 92, with the Link 106 of the last Queue Frame (QF) 94 containing a pointer back to the location of the Queue Header (QH) 90.

The Queue Frames (QFs) 94 of Software Active Queue (SAQ) 88 and Pseudo Device Driver Queues (PSDQs) 86 differ in detail and the following will describe the Queue Frames (QFs) 94 of both, noting where the frames differ. Each Queue Frame (QF) 94 further includes a Task Control Block Pointer (TCBP) or Input/Output Request Block Pointer (IORBP) 38p, as previously described, and a Priority Field (Priority) 108 containing a value indicating the relative priority of the interrupt or request. The Queue Frames (QFs) 94 of Software Active Queue (SAQ) 88 include a Flag Field (Flag) 108 containing a flag which distinguishes whether the Queue Frame (QF) 94 contains a Task Control Block (TCB) 32 or an Indirect Request Block (IRB) 36. Input/Output Request Blocks (IORBs), through their IRBs, are generally given a higher priority than Task Control Blocks (TCBs). Exceptions may be made, however, for example, for clock and task inhibit Task Control Blocks (TCBs) 32, which must be given the highest priority.

The structure and operation of Memory Queue Interface (MQI) 84, Software Active Queue (SAQ) 88, Pseudo Device Queues (PSDQs) 86, and Second System Kernel Processes (SKPs) 66 and Lower Communications Facilities Layer Processes (LCFLPs) 78 may be understood further by an examination of the further data stored in Queue Headers (QHs) 90, which comprises information used in the operations of Tasks 30, Executive Program Tasks (EXP Tasks) 28, Memory Queue Interface (MQI) 84, and Second System Kernel Processes (SKPs) 66 and Lower Communications Facilities Layer Processes (LCFLPs) 78, either directly or as pointers and addresses to other data structures which contain the necessary information.

The Queue Headers (QHs) 90 of the Pseudo Device Queues (PSDQs) 86 have a standardized format and structure and the Queue Headers (QHs) 90 of the various queues of Emulator Executive Level (EEXL) 68 essentially differ only with respect to the specific information stored in this standardized format and structure and the manner in which this information is used. As such, the following will first describe the basic structure and format of a Queue Header (QH) 90 and will then illustrate a specific example of the Queue Header (QH) 90 for the Pseudo Device Queue (PSDQ) 86 of an exemplary emulated device, such as a disk drive, and for an XTD/TTY device which does not use the Semaphore 102 for sleep/waken control.

As illustrated in Tables 1, 2, 3 and 4, a basic Queue Header (QH) 90 contains the following fields and information and the information in the fields is used as described in the following. It should be noted that not all of the fields are necessarily used in a given Queue Header (QH) 90 and that certain fields, not shown below, are reserved for future use.

TABLE 1
Basic Queue Header 90

(MQI)->rqh.priority    Contains relative priority of request; appears in Indirect Request Block (IRB) but listed here for convenience.
(MQI)->rqh.fwd         Pointer to next queue element, or to header if queue is empty.
(MQI)->mcl.ctr         Frequency of monitor calls in session.
(MQI)->cxt.ctr         Frequency of context swaps in session; that is, frequency of switching between Tasks 30.
(MQI)->isem.sid        Semaphore to lock queue structure while referencing queue structure to access (IRB) or to write or delete (IRB); used to sleep/wake SKPs 66 or to generate signal to call certain SKPs 66 such as XTD devices.
(MQI)->isem.pid        Server process identification.
(MQI)->fdes            File descriptor.
(MQI)->active_servers  TRUE if corresponding server SKP 66 is active.
(MQI)->status          Current state of terminal.
(MQI)->usr.sid         User terminal semaphore identification.
(MQI)->req.cnt         Number of requests currently enqueued.
(MQI)->enq.cnt         Total enqueue operations to current time.
(MQI)->deq.cnt         Total dequeue operations to current time.
(MQI)->slp.cnt         Total sleep operations to current time.
(MQI)->wak.cnt         Total waken operations to current time.
(MQI)->func            Pointer to function SKP 66.
(MQI)->block           Shared memory address of structure (Task, (TCB), (IORB)).
(MQI)->pid             Process identification; depends upon specific queue.
(MQI)->cur.pri         Priority of queue frame (IRB) most recently dequeued.
(MQI)->lrn             Logical resource number (resource identifier) of emulated device.
(MQI)->brk.add         Location of temporary storage of SKP 66 during break processing.
(MQI)->trmname         Name of user terminal.
(MQI)->logname         Log-in name of user.
(MQI)->display         Display variable of user.
(MQI)->filename        File name of emulated device to be mounted.
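The standardized Queue Header (QH) 90 format of Table 1 can equivalently be read as a C structure. The following rendering is hypothetical: the field names follow Table 1, but the types and array sizes are assumptions made for illustration and are not the literal layout used in Appendix A.

    #include <sys/types.h>

    /* Hypothetical C rendering of the basic Queue Header 90 of Table 1. */
    struct mqi_queue_header {
        unsigned short rqh_priority;     /* relative priority of request      */
        void          *rqh_fwd;          /* next queue element, or header     */
        unsigned long  mcl_ctr;          /* monitor calls in session          */
        unsigned long  cxt_ctr;          /* context swaps between Tasks 30    */
        int            isem_sid;         /* lock / sleep-wake semaphore id    */
        pid_t          isem_pid;         /* server process identification     */
        int            fdes;             /* file descriptor                   */
        int            active_servers;   /* TRUE if server SKP 66 is active   */
        int            status;           /* current state of terminal         */
        int            usr_sid;          /* user terminal semaphore id        */
        unsigned long  req_cnt;          /* requests currently enqueued       */
        unsigned long  enq_cnt, deq_cnt; /* total enqueues / dequeues         */
        unsigned long  slp_cnt, wak_cnt; /* total sleeps / wakes              */
        void         (*func)(void);      /* pointer to function SKP 66        */
        void          *block;            /* shared memory addr of TCB/IORB    */
        pid_t          pid;              /* queue-specific process id         */
        unsigned short cur_pri;          /* priority of frame last dequeued   */
        int            lrn;              /* logical resource number           */
        void          *brk_add;          /* SKP 66 break-processing storage   */
        char           trmname[16];      /* user terminal name (size assumed) */
        char           logname[16];      /* user log-in name (size assumed)   */
        char           display[32];      /* display variable (size assumed)   */
        char           filename[64];     /* emulated device file (size assumed) */
    };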
TABLE 2
Queue Header 90 for Software Active Queue (SAQ) 88
Note: SAQ 88 Header is not an RCT 40 Header

(SAQ)->rqh.priority    N/A (Not Applicable).
(SAQ)->rqh.fwd         Pointer to next queue element, or to header if queue is empty.
(SAQ)->mcl.ctr         Frequency of monitor calls in session.
(SAQ)->cxt.ctr         Frequency of context swaps in session; that is, frequency of switching between Tasks 30.
(SAQ)->isem.sid        Semaphore to lock queue structure while referencing queue structure to access (IRB) or to write or delete (IRB); used to sleep/wake on when element added to queue.
(SAQ)->isem.pid        Server process identification (MQI).
(SAQ)->fdes            N/A
(SAQ)->active_servers  N/A
(SAQ)->status          N/A
(SAQ)->usr.sid         N/A
(SAQ)->req.cnt         Number of requests currently enqueued.
(SAQ)->enq.cnt         Total enqueue operations to current time.
(SAQ)->deq.cnt         Total dequeue operations to current time.
(SAQ)->slp.cnt         Total sleep operations to current time.
(SAQ)->wak.cnt         Total waken operations to current time.
(SAQ)->func            N/A
(SAQ)->block           N/A
(SAQ)->pid             Process identification; clock server process of FEXL 16.
(SAQ)->cur.pri         Priority of queue frame (TCB) most recently dequeued.
(SAQ)->lrn             N/A
(SAQ)->brk.add         N/A
(SAQ)->trmname         N/A
(SAQ)->logname         N/A
(SAQ)->display         N/A
(SAQ)->filename        N/A

TABLE 3
Queue Header 90 for Disk/Diskette

(RCT)->qaddr.rqh.priority    N/A
(RCT)->qaddr.rqh.fwd         Pointer to next queue element, or to header if queue is empty.
(RCT)->qaddr.mcl.ctr         N/A
(RCT)->qaddr.cxt.ctr         N/A
(RCT)->qaddr.isem.sid        Semaphore to lock queue structure while referencing queue structure to access (IRB) or to write or delete (IRB); used to sleep/wake on when element added to queue.
(RCT)->qaddr.isem.pid        Server process identification SKP 66 of disk/diskette.
(RCT)->qaddr.fdes            File descriptor.
(RCT)->qaddr.active_servers  TRUE if corresponding server SKP 66 is active.
(RCT)->qaddr.status          N/A
(RCT)->qaddr.usr.sid         N/A
(RCT)->qaddr.req.cnt         Number of requests currently enqueued.
(RCT)->qaddr.enq.cnt         Total enqueue operations to current time.
(RCT)->qaddr.deq.cnt         Total dequeue operations to current time.
(RCT)->qaddr.slp.cnt         Total sleep operations to current time.
(RCT)->qaddr.wak.cnt         Total waken operations to current time.
(RCT)->qaddr.func            Pointer to function SKP 66.
(RCT)->qaddr.block           Shared memory address of structure (Task, (TCB), (IORB)).
(RCT)->qaddr.pid             N/A
(RCT)->qaddr.cur.pri         Priority of queue frame (IRB) most recently dequeued.
(RCT)->qaddr.lrn             Logical resource number (resource identifier) of emulated device.
(RCT)->qaddr.brk.add         N/A
(RCT)->qaddr.trmname         N/A
(RCT)->qaddr.logname         N/A
(RCT)->qaddr.display         N/A
(RCT)->qaddr.filename        File name of emulated device to be mounted.

TABLE 4
Queue Header 90 for XTD/TTY Device

xtd->rqh.priority    N/A
xtd->rqh.fwd         Pointer to next queue element, or to header if queue is empty.
xtd->mcl.ctr         N/A
xtd->cxt.ctr         N/A
xtd->isem.sid        Semaphore to lock queue structure while referencing queue structure.
xtd->isem.pid        N/A
xtd->fdes            File descriptor for xtd socket.
xtd->active_servers  TRUE if corresponding server SKP 66 is active.
xtd->status          N/A
xtd->usr.sid         N/A
xtd->req.cnt         N/A
xtd->enq.cnt         Total enqueue operations to current time.
xtd->deq.cnt         Total dequeue operations to current time.
xtd->slp.cnt         N/A
xtd->wak.cnt         N/A
xtd->func            Pointer to function (xtdio).
xtd->block           N/A
xtd->pid             Process identification of the xtdio process.
xtd->cur.pri         Priority of queue frame (IRB) most recently dequeued.
xtd->lrn             126
xtd->brk.add         N/A
xtd->trmname         N/A
xtd->logname         N/A
xtd->display         N/A
xtd->filename        N/A
D. Shared Memory, Memory Management and Memory Protection (FIGS. 5, 6, 7 and 8)

As described above with reference to FIGS. 2 and 3, the First System 10 tasks and programs executing on Second System 54, Second System 54's native processes and mechanisms, and the Second System 54 mechanisms emulating First System 10 mechanisms share and cooperatively use Second System 54's memory space in Second System Memory 58b. As a consequence, it is necessary for Second System 54, the First System 10 tasks and programs executing on Second System 54, and the emulation mechanisms to share memory use, management, and protection functions in a manner that is compatible with both Second System 54's normal memory operations and with First System 10's emulated memory operations. The emulation of First System 10 memory operations in Second System 54 in turn requires emulation of First System 10's memory management unit, that is, First System 10's hardware and software elements involved in memory space allocation, virtual to physical address translation, and memory protection, in Second System 54. As described below, this emulation is implemented through use of Second System 54's native memory management unit to avoid the performance penalties incurred through a complete software emulation of First System 10's memory management unit.

As is well known, most systems operate upon the basis of virtual addresses and perform virtual to physical address translations relative to a predetermined base address, that is, by adding a virtual address as an offset address to the base address to determine the corresponding address in physical address space of the system. While First System 10 and Second System 54 may both use such addressing schemes, the actual addressing mechanisms of the two systems may differ substantially, as may the memory protection schemes.

1. First System 10 Native Memory Mechanisms (FIGS. 5 and 6)

The native memory mechanisms of First System 10 implement a ring type protection system wherein Executive Program Tasks (EXP Tasks) 28 and Tasks 30 normally operate with two types of memory area respectively designated as a system memory area and user memory areas. The system areas are used for system level operations, such as the execution of executive level programs and the storage of the related data structures, while each user task executes operations and stores data associated with the execution of the task in a user memory area.

Each task is assigned to a given ring and the access permissions of a given task to information contained in a given memory space are determined by the respective assigned rings of the task and the ownership of the memory space, that is, whether the memory space is in the system memory area or in the user task memory area or areas. For example, system executive level tasks and operations, such as operating system functions executed by an EXP Task 28, are executed in ring 0 while Tasks 30 executing user operations are executed in higher order rings, such as rings 1, 2 and 3. As such, an EXP Task 28 executing in ring 0 will have read and write access privileges to data residing in the system memory area and read and write access privileges to user task data residing in the user task areas. User Tasks 30 will have read and write access privileges to user task data residing in selected user task areas but will have only read access privilege, at most, to data residing in the system area.
The upper left square represents the combination residing in selected user task areas but will have only read 15 of executive tasks with System Memory (SYSMEM) 110 access privilege, at most, to data residing in the system area. area, the upper right square represents the combination of 2. Mapping of First System 10 System Memory Area user tasks with System Memory (SYSMEM) 110 area, the (SYSMEM) 110 and Independent-Memory Pool (IPOOL) lower left square represents the combination of executive 112 Areas into Second System 54 Memory Space (FIG. 5) tasks with Independent-Memory Pools (IPOOLs) 112 and As will be described in further detail below and as 20 the lower right square represents the combination of user illustrated in FIG. 5, First System 10 memory space as tasks with Independent-Memory Pools (IPOOLs) 112. implemented in Second System 54 is organized as two types The entries within each square of the two by two array of regions, respectively indicated in FIG. 5 as the System represent, first, the number of the Second System segment to Memory (SYSMEM) 110 area and the Independent which the corresponding combination of First System Memory Pool (IPOOL) 112 areas, which are accessed by memory area and class of task is mapped and, second, the two classes of tasks, that is, the executive level or operating access privileges of each combination of a class of First system tasks and the user tasks. The access privileges of System 10 task and the corresponding First System 10 each class of task, as determined through the task ring memory area. Thus it may be seen that the upper left square numbers and memory area ownership, depends upon the represents Second System 54 memory segment 3 and that class of the task and the ownership of the memory area being 30 First System 10 executive tasks have read and write privi accessed, with executive tasks having read and write privi leges to segment 3 while the upper right square represents leges to both the Independent-Memory Pool (IPOOL) 112 Second System 54 memory segment 4 and that First System areas and the System Memory (SYSMEM) 110 area and the 10 user tasks have read only privileges to segment 4. Second user tasks having read and write privileges to Independent System 54 memory segments 3 and 4 thereby correspond to Memory Pool (IPOOL) 112 areas and read only privileges to 35 First System 10's System Memory (SYSMEM) 110 area but the System Memory (SYSMEM) 110 area. The mapping of organized as two segments distinguished by the respective task access privileges onto First System 10's memory space access privileges of First System 10's executive tasks and as implemented in Second System 54's memory space is user tasks, wherein executive tasks have both read and write therefore a two dimensional process wherein one dimension privileges to segment 3 while user tasks have only read is represented by the type of memory area, that is, whether 40 privileges to segment 4. a given memory area is the System Memory (SYSMEM) In a like manner, Second System 54's memory segments 110 area or an Independent-Memory Pool (IPOOL) 112, and 5 and 6 correspond to Independent-Memory Pools (IPOOLs) the other dimension is represented by the class of the task, 112 and the First System 10 executive tasks and user tasks that is, whether a given task is an executive task or a user 45 both have read and write access to these segments, just as task. 
The entries within each square of the two by two array represent, first, the number of the Second System segment to which the corresponding combination of First System memory area and class of task is mapped and, second, the access privileges of each combination of a class of First System 10 task and the corresponding First System 10 memory area. Thus it may be seen that the upper left square represents Second System 54 memory segment 3 and that First System 10 executive tasks have read and write privileges to segment 3, while the upper right square represents Second System 54 memory segment 4 and that First System 10 user tasks have read only privileges to segment 4. Second System 54 memory segments 3 and 4 thereby correspond to First System 10's System Memory (SYSMEM) 110 area, but organized as two segments distinguished by the respective access privileges of First System 10's executive tasks and user tasks, wherein executive tasks have both read and write privileges to segment 3 while user tasks have only read privileges to segment 4.

In a like manner, Second System 54's memory segments 5 and 6 correspond to Independent-Memory Pools (IPOOLs) 112 and the First System 10 executive tasks and user tasks both have read and write access to these segments, just as First System 10 executive tasks and user tasks both have read and write access to Independent-Memory Pools (IPOOLs) 112. It should be noted that while segments 3 and 4 are distinguished by the respective access privileges of First System 10 executive and user tasks, segments 5 and 6 are not so distinguished because both the executive tasks and the user tasks have both read and write privileges to both segments, just as to Independent-Memory Pools (IPOOLs) 112. The mapping of Independent-Memory Pools (IPOOLs) 112 into two segments, that is, segments 5 and 6, is performed, however, to preserve symmetry with the mapping of System Memory (SYSMEM) 110 into segments 3 and 4, thereby simplifying the mapping of First System 10's memory access and management functions into Second System 54 as described below.
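The two by two mapping just described reduces to a small lookup table. The following C sketch records it; the enumeration and table names are illustrative only, and the values follow the text above.

    /* The two-by-two mapping of FIG. 7 as a lookup table:
     * (memory area, task class) -> (Second System segment, access). */
    enum task_class  { EXEC_TASK = 0, USER_TASK = 1 };
    enum area_type   { SYSMEM_AREA = 0, IPOOL_AREA = 1 };
    enum access_mode { READ_ONLY, READ_WRITE };

    struct seg_map { int segment; enum access_mode access; };

    static const struct seg_map seg_table[2][2] = {
        /*                EXEC_TASK            USER_TASK          */
        /* SYSMEM */  { { 3, READ_WRITE },  { 4, READ_ONLY  } },
        /* IPOOL  */  { { 5, READ_WRITE },  { 6, READ_WRITE } },
    };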
As represented in FIG. 5, System Memory (SYSMEM) 110 area and Independent-Memory Pools (IPOOLs) 112, indicated by the dashed line enclosures, are implemented in Second System 54's Hardware Element-Memory (HE-MEM) 58b in Segments 3, 4, 5 and 6 of Hardware Element-Memory (HE-MEM) 58b, wherein there is, for each instance of an FSP 80 in Second System 54, a single instance of System Memory (SYSMEM) 110 area implemented as a matching pair of memory areas in Segments 3 and 4 and a plurality of Independent-Memory Pools (IPOOLs) 112, each implemented as a matching pair of memory areas in Segments 5 and 6, wherein each Independent-Memory Pool (IPOOL) 112 corresponds to a task actively executing in the instance of First System Process (FSP) 80.

As indicated in FIG. 5, the pair of memory areas comprising System Memory (SYSMEM) 110 area in Segments 3 and 4 is comprised of a System Memory Area Segment 3 (SMAS3) 132 "attached" from a System Memory Area Base Address 3 (SYSMEMBA3) 134 and a System Memory Area Segment 4 (SMAS4) 136 "attached" from a System Memory Area Base Address 4 (SYSMEMBA4) 138. In a like manner, the pair of memory areas comprising each Independent-Memory Pool (IPOOL) 112 is comprised of an Independent-Memory Pool Area Segment 5 (IPOOLS5) 140 area "attached" from an Independent-Memory Pool Base Address 5 (IPOOLBA5) 142 and an Independent-Memory Pool Area Segment 6 (IPOOLS6) 144 area "attached" from an Independent-Memory Pool Base Address 6 (IPOOLBA6) 146. While System Memory Area Base Address 3 (SYSMEMBA3) 134 and System Memory Area Base Address 4 (SYSMEMBA4) 138 are the same for all tasks executing within an FSP 80, Independent-Memory Pool Base Address 5 (IPOOLBA5) 142 and Independent-Memory Pool Base Address 6 (IPOOLBA6) 146 are different for each task actively executing in the FSP 80.

In correspondence with the memory protection scheme of First System 10, System Memory Area Segment 4 (SMAS4) 136 is attached from System Memory Area Base Address 4 (SYSMEMBA4) 138 with read only privilege while System Memory Area Segment 3 (SMAS3) 132 is attached from System Memory Area Base Address 3 (SYSMEMBA3) 134 with read and write privileges. In a like manner, each Independent-Memory Pool Area Segment 5 (IPOOLS5) 140 is attached from Independent-Memory Pool Base Address 5 (IPOOLBA5) 142 with read and write privileges and each Independent-Memory Pool Area Segment 6 (IPOOLS6) 144 is attached from Independent-Memory Pool Base Address 6 (IPOOLBA6) 146 with read and write privileges.
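The "attach" operations described above map naturally onto a System V style shared memory attach, which UNIX and AIX systems of this era provided; whether the actual implementation used shmat or another AIX facility is not stated in the text, so the following is a sketch under that assumption, with hypothetical parameter names.

    #include <sys/types.h>
    #include <sys/ipc.h>
    #include <sys/shm.h>

    /* Sketch: attach the SYSMEM pair of an FSP 80.  Segment 3 (SMAS3 132)
     * is attached read/write at SYSMEMBA3 134; segment 4 (SMAS4 136) is
     * the same memory attached read-only at SYSMEMBA4 138. */
    int attach_sysmem_pair(int shmid, void *sysmemba3, void *sysmemba4)
    {
        if (shmat(shmid, sysmemba3, 0) == (void *)-1)           /* read/write */
            return -1;
        if (shmat(shmid, sysmemba4, SHM_RDONLY) == (void *)-1)  /* read only  */
            return -1;
        return 0;
    }

Attaching the same pool twice, once writable and once read-only, is what lets the native memory management unit of Second System 54 enforce the First System 10 ring distinction without any per-access software check.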
In a like manner, each 35 Independent-Memory Pool Area Segment 5 (IPOOLS5) 140 Having performed the translation of a First System Virtual is attached from Independent-Memory Pool Base Address 5 Address (FSWA) 126 into a per byte address, Address (IPOOLBA5) 142 with read and write privileges and each Translation (ADDRXLT)98's Ring Adder (RNGA) 150 will Independent-Memory Pool Area Segment 6 (IPOOLS6) 144 read a System Status Register (SSR) 152 which, among 40 other information, contains a Ring Number (RNG) 154 is attached from Independent-Memory Pool Base Address 6 which contains a value indicating the First System 10 ring in (IPOOLBA6) 146 with read and write privileges. which the task is executing, that is, a value of 0, 1, 2 or 3. It must be noted that Second System 54 memory space, as As described, Ring 0 is reverved for system operations while organized under the AIX operating system, is actually Rings 1, 2 and 3 are used for user tasks. If the task is structured into 16 segments, of which certain segments are 45 executing in Ring 0, that is, in system space, Ring Adder reserved, for example, to contain the AIX* operating system (RNGA) 150 will add 3 to the value (0 or 2) contained in and system functions. More than four segments, that is, more Most Significant Bits field (MSB) 128 of the shifted First segments than segments 3, 4, 5 and 6, are available for use System Virtual Address (FSWA) 126. If the task is not by user processes executing Second System 54, however, executing in Ring 0, that is, is executing in Rings 1, 2, or 3 and the mapping of First System 10 memory areas onto 50 and thus in user task space, Ring Adder (RNGA) 150 will Second System 54 memory space may make use of these add 4 to the value (0 or 2) contained in Most Significant Bits additional, available segments by a second mapping process field (MSB) 128 of the shifted First System Virtual Address performed by Pseudo Device Drivers (PSDDs) 74. (FSVA) 126. The final result will be a byte oriented First 3. Emulation of First System 10 Memory Operations System Virtual Address (FSVA) 126 having a Most Signifi (FIG. 8) 55 cant Bits field (MSB) 128 which contains a value of 3, 4, 5 Referring to FIG. 8, and to FIGS. 2, 3, 5 and 6, therein is or 6, thereby indicating the Second System 54 memory illustrated the mechanisms implemented on Second System space segment in which the address lies and an Address 54 to emulate the memory access, protection, and manage (ADDR) field 130 identifying a location within the segment. ment mechanisms of First System 10. It must be recognized Next considering the process of INTERPRETER 72 map in the following that the emulation of First System 10 60 ping of First System 10 system and user task memory areas memory operations on Second System 54 involves two into Second System 54 memory segments, it has been different address conversion operations, one being the con described that First System 10 operating system tasks and version of First System Virtual Addresses (FSVAs) 126 done functions execute in a region referred to herein as System by INTERPRETER 72 and the second being the conversion Memory (SYSMEM) 110 area while user tasks execute in of First System Virtual Addresses (FSVAs) 126 done by 65 regions referred to herein as Independent-Memory Pools Pseudo Device Drivers (PSDDs) 74. Each of these conver. (IPOOLs) 112 area and that these memory regions are sions is accomplished through translation and through map mapped into Second System 54 memory segments. 
Next considering the process of INTERPRETER 72 mapping of First System 10 system and user task memory areas into Second System 54 memory segments, it has been described that First System 10 operating system tasks and functions execute in a region referred to herein as System Memory (SYSMEM) 110 area while user tasks execute in regions referred to herein as Independent-Memory Pools (IPOOLs) 112 and that these memory regions are mapped into Second System 54 memory segments. INTERPRETER 72 segment mapping is performed when there is a change of the Task Control Blocks (TCBs) 32 whose code is being interpreted. A Task Control Block (TCB) 32 contains a Segment Descriptor Pointer (SDP) 154 to a Segment Descriptor Table (SDT) 156 associated with the task. Each Segment Descriptor Table (SDT) 156 in turn contains a Memory Pool Array Pointer (MPAP) 158 which in turn points to an Independent Memory Pool Identifier (MPID) 160 in a Memory Pool Array (MPA) 162. When the Independent Memory Pool Identifier (MPID) 160 of a new Task Control Block (TCB) 32 differs from the Independent Memory Pool Identifier (MPID) 160 of the previous Task Control Block (TCB) 32, the segments 5 and 6 are detached from INTERPRETER 72 and the new Independent Memory Pool Area is attached as segments 5 and 6.

The INTERPRETER 72 translation process always generates addresses in segments 5 and 6 for user task addresses but, because of the dynamic detaching and attaching of Independent Memory Pools (IPOOLs) 112, the same addresses will refer to different Independent Memory Pools (IPOOLs) 112. The mapping of system memory areas remains the same, however, when switching from Task Control Block (TCB) 32 to Task Control Block (TCB) 32, so that the INTERPRETER 72 generated addresses in segments 3 and 4 always refer to the same locations.

The address conversion done by Pseudo Device Drivers (PSDDs) 74 differs from the address conversion done by INTERPRETER 72 in that it maps all the system memory addresses into segment 3, whereas user task addresses, depending on the Independent Memory Pool (IPOOL) 112 involved, could be mapped into any of segments 4 onwards.
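The detach-and-reattach performed on a task switch can be sketched as follows, under the same System V shared memory assumption as above; all identifiers are hypothetical.

    #include <sys/shm.h>

    static int current_mpid = -1;   /* pool currently attached as segments 5/6 */

    /* Sketch: on a TCB 32 switch, if the new task's pool identifier
     * (MPID 160) differs from the previous one, detach the old IPOOL 112
     * pair from segments 5 and 6 and attach the new pool in its place. */
    void interpreter_switch_pool(int new_mpid, int new_shmid,
                                 void *seg5_base, void *seg6_base)
    {
        if (new_mpid == current_mpid)
            return;                          /* same pool: nothing to remap */
        if (current_mpid != -1) {
            shmdt(seg5_base);                /* detach old pool from seg 5  */
            shmdt(seg6_base);                /* ... and from seg 6          */
        }
        shmat(new_shmid, seg5_base, 0);      /* attach new pool as seg 5    */
        shmat(new_shmid, seg6_base, 0);      /* and again as segment 6      */
        current_mpid = new_mpid;
    }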
Referring again to FIG. 8, therein is represented a Pseudo Device Driver Queue (PSDQ) 86, wherein each Pseudo Device Driver Queue (PSDQ) 86 is a part of a Pseudo Device Driver (PSDD) 74 and is associated with a corresponding Second System Kernel Process (SKP) 66 as described with reference to FIGS. 3 and 4. One of the Pseudo Device Driver Queues (PSDQs) 86 and its associated addressing structures and mechanisms is shown in partial detail for purposes of the following discussions. Further details of the structure and operations of Pseudo Device Drivers (PSDDs) 74 and Pseudo Device Driver Queues (PSDQs) 86 may be found in reference to the discussions regarding FIGS. 3 and 4.

As has been described, each Pseudo Device Driver Queue (PSDQ) 86 is associated with a corresponding Second System Kernel Process (SKP) 66 which executes the requests in the Pseudo Device Driver Queue (PSDQ) 86 and any Pseudo Device Driver Queue (PSDQ) 86 may contain requests from a plurality of tasks, each task in turn being associated with and executed in an Independent-Memory Pool (IPOOL) 112 area which is mapped into a Second System 54 memory segment by address translator (ADDRXLP) 96, which includes a Server Pool Descriptor Linked Set (SPDLS) 166 associated with the Pseudo Device Driver Queue (PSDQ) 86, a Task Control Block (TCB) 32, a Segment Descriptor Table (SDT) 156, and a Memory Pool Array (MPA) 162.

As described previously, each Pseudo Device Driver Queue (PSDQ) 86 contains Queue Frames (QFs) 94 which in turn contain the Indirect Request Blocks (IRBs) 36 passed from the First System tasks. Each Indirect Request Block (IRB) 36 in turn contains a Task Control Block Pointer (TCBP) 164 which points to the Task Control Block (TCB) 32 associated with the task that generated the Indirect Request Block (IRB) 36. As described, the Task Control Block (TCB) 32 contains a Segment Descriptor Pointer (SDP) 154 to a Segment Descriptor Table (SDT) 156 associated with the task. Each Segment Descriptor Table (SDT) 156 in turn contains a Memory Pool Array Pointer (MPAP) 158 which in turn points to an Independent-Memory Pool Identification entry (IPOOLID) 160 stored in the Memory Pool Array (MPA) 162. Each Pseudo Device Driver (PSDD) 74 maintains a Server Pool Descriptor Linked Set (SPDLS) 166 where the Independent Memory Pool Identification (IPOOLID) 160 is stored if currently attached by the Pseudo Device Driver (PSDD) 74.

In addition to the Independent Memory Pool Identification (IPOOLID) 160, the Server Pool Descriptor Linked Set (SPDLS) 166 also contains the Second System 54 Segment Address (SA) 168 where the Independent Memory Pool (IPOOL) 112 is attached. Unlike the instance of INTERPRETER 72, this Segment Address (SA) 168 may be anywhere from segment 4 onwards.

4. Management of Memory Space

As described above, in the present implementation of the emulation in Second System 54, each Second System Kernel Process (SKP) 66 of a Pseudo Device Driver (PSDD) 74 may have associated with it a plurality of Independent-Memory Pools (IPOOLs) 112, wherein the number of Independent-Memory Pools (IPOOLs) 112 associated with a Second System Kernel Process (SKP) 66 will be determined by the number of tasks for which the Second System Kernel Process (SKP) 66 has a request in its associated Pseudo Device Queue (PSDQ) 86.

As such, it is necessary to manage the Server Pool Descriptor Linked Set (SPDLS) 166 associated with each Second System Kernel Process (SKP) 66 to dynamically assign or reassign segments as required by the tasks having requests in the Pseudo Device Drivers (PSDDs) 74. For example, a Second System Kernel Process (SKP) 66 may be passed a request from a task whose Independent-Memory Pool (IPOOL) 112 is not among the set of Independent-Memory Pools (IPOOLs) 112 contained in the Server Pool Descriptor Linked Set (SPDLS) 166 associated with the Second System Kernel Process (SKP) 66, so that it is necessary to add the unattached Independent-Memory Pool (IPOOL) 112, corresponding to the task, to the Independent-Memory Pools (IPOOLs) 112 corresponding to the Pseudo Device Driver (PSDD) 74. In addition, it may be necessary to delete, or detach, one or more least recently used Independent-Memory Pools (IPOOLs) 112 from the Independent-Memory Pools (IPOOLs) 112 of the Server Pool Descriptor Linked Set (SPDLS) 166 in order to be able to attach a new Independent-Memory Pool (IPOOL) 112.
As indicated in FIG. 8, each Server Pool Descriptor Linked Set (SPDLS) 166 is managed by a Linked Set Manager (LSM) 168. A Pseudo Device Driver Queue (PSDQ) 86 receiving a request for a memory access will pass the identifier of its task to Linked Set Manager (LSM) 168. Linked Set Manager (LSM) 168 will determine whether an Independent-Memory Pool Identifier entry (IPOOLID) 160 corresponding to the task is in the Server Pool Descriptor Linked Set (SPDLS) 166 and, if it is, will reorder the linked set so that the Independent-Memory Pool Identifier entry (IPOOLID) 160 is at the head of the linked set by reordering the links connecting the Independent-Memory Pool Identifier entries (IPOOLIDs) 160, in the manner well known in the art.

If the Server Pool Descriptor Linked Set (SPDLS) 166 does not contain an Independent-Memory Pool Identifier entry (IPOOLID) 160 corresponding to the task, Linked Set Manager (LSM) 168 will determine whether the Server Pool Descriptor Linked Set (SPDLS) 166 contains the maximum allowable number of Independent-Memory Pool Identifier entries (IPOOLIDs) 160 and, if the Server Pool Descriptor Linked Set (SPDLS) 166 does contain the maximum number of Independent-Memory Pool Identifier entries (IPOOLIDs) 160, will delete one or more least recently used Independent-Memory Pool Identifier entries (IPOOLIDs) 160 from the Server Pool Descriptor Linked Set (SPDLS) 166. Linked Set Manager (LSM) 168 will then construct a new Independent-Memory Pool Identifier entry (IPOOLID) 160 corresponding to the task and will enter the new Independent-Memory Pool Identifier entry (IPOOLID) 160 at the head of the linked set.
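The management just described is a least-recently-used discipline over a singly linked list; a minimal sketch of the hit path follows. The list layout and names are illustrative, not the SPDLS 166 layout of FIG. 8; eviction on a full miss, as described above, would remove entries from the tail before a new entry is linked at the head.

    #include <stddef.h>

    struct spdls_entry {
        int   ipoolid;                  /* IPOOLID 160              */
        void *seg_addr;                 /* Segment Address (SA) 168 */
        struct spdls_entry *next;
    };

    struct spdls {
        struct spdls_entry *head;
        int count, max_entries;
    };

    /* Sketch of the LSM 168 hit path: on a match, unlink the entry and
     * move it to the head so the set stays in most-recently-used order. */
    struct spdls_entry *lsm_lookup(struct spdls *set, int ipoolid)
    {
        struct spdls_entry **pp, *e;
        for (pp = &set->head; (e = *pp) != NULL; pp = &e->next) {
            if (e->ipoolid == ipoolid) {
                *pp = e->next;          /* unlink ...                  */
                e->next = set->head;    /* ... and relink at the head  */
                set->head = e;
                return e;
            }
        }
        return NULL;                    /* miss: caller attaches a new pool */
    }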
5. Summary of Memory Operations (FIG. 8)

It may be seen from the above descriptions, therefore, that, for any first system virtual address generated by a First System 10 task executing on Second System 54, INTERPRETER 72 will translate the First System 10 virtual address into a byte oriented virtual address containing a virtual address location within a segment and identifying a Segment 3, 4, 5 or 6 containing the location. The INTERPRETER 72 mapping of segments via ADDRXLT 98 will in turn map each segment identified by an address translation into an Independent Memory Pool Identification (IPOOLID) 160 for the current task. The Segment/Independent Memory Pool mapping mechanism (that is, ADDRXLP 96) of the Pseudo Device Driver (PSDD) 74 executing the task request associated with the First System 10 virtual address will map the segment identified by the address translation mechanism to a current Independent Memory Pool (IPOOL) 112 location in Second System 54's memory by providing the base address corresponding to the Independent Memory Pool Identification (IPOOLID) 160.

E. Emulation of Disk Drives

As described, one of the types of First System 10 input/output operations emulated by the Pseudo Device Drivers (PSDDs) 74 of the present invention is the emulation of First System 10 disk input/output operations. It has been described that First System 10 performs disk input/output operations in response to a request from a task by creating an Indirect Request Block (IRB) 36 and a lower level task to execute the input/output operation, wherein the lower level task controls a disk Driver 44 to execute the operation, using information read from a resource control table describing the disk drive to control the operation.

The information contained in the resource control table, and the specific operations executed by the Driver 44 in executing the request, are determined by the type of disk drive involved in the operation. In the instance of an intelligent disk drive, for example a SCSI type drive, the resource control table essentially contains only information identifying the type of drive. The capacity of the drive is read from the drive itself and no further information is required because the drive itself contains the "intelligence" to perform the majority of operations necessary to read from or write to the drive. In the instance of an older or less "intelligent" drive, however, the resource control table must identify not only the type and capacity of the drive, but must provide information sufficient for the Driver 44 to perform detailed control of the drive.
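The distinction just drawn can be made concrete with a short C sketch of a resource control table entry for an emulated disk; the structure and field names are hypothetical renderings for exposition, not the Resource Control Table (RCT) 40 layout of First System 10.

    /* Hypothetical sketch of a resource control table entry for an
     * emulated disk.  For a SCSI-type (intelligent) drive only the type
     * need be configured; capacity is supplied at run time by the
     * emulating second system process. */
    enum drive_type { DRIVE_SCSI, DRIVE_OTHER };

    struct rct_disk_entry {
        enum drive_type type;   /* identification of drive type           */
        long capacity;          /* obtained from the emulating process,   */
                                /* not configured in the table            */
        char filename[64];      /* second system file holding the         */
                                /* emulated "volume" (see below)          */
    };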
The emulation mechanisms of the present invention thereby allow First System 10 to use virtually any type of input/output device so long as it is of a type suitable for the requested input/output operation, and in particular any type of disk drive. That is, a task need only issue a request for a disk input/output operation, wherein the request identifies the disk unit to be read from or written to and the information to be read or written. Thereafter, the corresponding Driver 44 will read the information describing the characteristics of the disk drive that are necessary to execute the operation from the corresponding resource control table, will read the "capacity" of the "drive" from the second system process emulating the drive, and will execute the requested operation. The requesting task need not be aware of, or constrained by, the specific type of disk drive to which the operation was performed.

It is apparent from the above descriptions of the present invention for emulating a First System 10 on a Second System 54 that, because of the level at which the boundary between First System 10 operations and Second System 54 operations is drawn, the tasks executing "in" First System 10 are not aware of the detailed operation of the Second System 54 processes executed in performing disk input/output requests. As such, the present invention provides essentially complete freedom in the manner in which Second System 54 actually performs all input/output operations, including disk input/output operations.

According to the present invention, therefore, and because the emulation mechanisms of the present invention allow First System 10 to use virtually any type of disk drive, all disk drives for First System 10 tasks executing on Second System 54 in emulation of First System 10 are defined in the resource control tables of Emulator Executive Level (EEXL) 68 to be intelligent drives, such as SCSI drives. As such, the only information required from the resource control tables to perform an input/output operation is the identification of drive type, as a SCSI drive, and the "drive capacity" provided by the second system process emulating the disk drive. The Second System Kernel Processes (SKPs) 66 actually performing the emulated input/output operations are free to perform any operation that will result in a transfer of the requested data to or from the requesting First System 10 task executing in First System Process (FSP) 80.

In addition, and because the emulated drive is transparent to the requesting task, that is, the First System 10 tasks are not aware of the actual characteristics of the disk drive emulated by the corresponding Pseudo Device Driver (PSDD) 74, the emulated disk drive defined by the corresponding resource control table may be of any capacity and is not constrained either by the characteristics of the actual Second System 54 hardware device used to perform the operation or by the characteristics of the "native" First System disk drives.

Referring now to the Second System 54 processes emulating disk input/output operations, the Second System Kernel Processes (SKPs) 66 performing disk input/output operations are implemented as standard UNIX type file input/output processes, as are well known in the art, and the "capacity" of the "drive" as provided by the file input/output processes emulating a disk drive is, in fact, the capacity of the file to which the file input/output operation is performed. As a result, the actual Second System 54 operations performed in emulating First System 10 disk input/output operations are completely under the control of Second System 54. As a consequence, Second System 54 may use any of its native hardware devices to actually perform the emulated disk input/output operations without constraint from the tasks of First System 10. For example, Second System 54 may use any of its native disk drives for the operations, and need not use a disk drive at all but may use any other device capable of providing the desired result, such as a non-SCSI drive.

It should be noted with regard to the above that, in the "native" First System 10 environment, the information contained in a disk drive is contained in a "volume", wherein a "volume" can contain one or a plurality of files. In the emulation of disk drives on Second System 54, however, a First System 10 "volume" is treated as and is a Second System 54 file, in accordance with Second System 54's emulation of disk operations as file input/output operations. In addition, it is known that SCSI type disk drives are conventionally fixed devices, that is, cannot be "mounted" to or "dismounted" from a system, and a conventional SCSI drive is therefore essentially a fixed system resource. According to the present invention, however, the disk drives emulated by Second System 54 are presented to the tasks of First System 10 as SCSI drives but in fact are actually Second System 54 files, although the First System 10 tasks "see" the emulated disk input/output only as SCSI drives. As files are "mountable" units, the Second System 54 files and file input/output operations used to emulate First System 10 disk drives thereby appear to First System 10 to be "mountable" disk drives, effectively providing mountable "SCSI" disk drives.
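Because a First System 10 "volume" is a Second System 54 file, an emulated disk read reduces to ordinary UNIX file input/output; a minimal sketch follows. The function and parameter names, and the 256-byte sector size, are assumptions of this sketch rather than details stated in the text.

    #include <unistd.h>
    #include <fcntl.h>
    #include <sys/types.h>

    /* Sketch of an SKP 66 emulating a disk read as UNIX file I/O: the
     * "drive" is a file, the "capacity" is the file size, and a sector
     * address becomes a byte offset into the file. */
    #define SECTOR_BYTES 256

    ssize_t emulated_disk_read(int volume_fd, long sector,
                               void *buf, size_t nbytes)
    {
        off_t pos = (off_t)sector * SECTOR_BYTES;
        if (lseek(volume_fd, pos, SEEK_SET) == (off_t)-1)
            return -1;
        return read(volume_fd, buf, nbytes);  /* data lands in the requesting */
                                              /* task's attached IPOOL 112    */
    }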
file input/output operations used to emulate First System 10 20 While the invention has been particularly shown and disk drives thereby appear to First System 10 to be "mount described with reference to preferred embodiments of the able' disk drives, effectively providing mountable "SCSI" apparatus and methods thereof, it will be also understood by disk drives. those of ordinary skill in the art that various changes, F. Appendices variations and modifications in form, details and implemen 25 tation may be made therein without departing from the spirit The structure and operation of the present invention are and scope of the invention as defined by the appended further described by reference to the following Appendices claims. Therefore, it is the object of the appended claims to which contain program listings for Memory Queue Interface cover all such variation and modifications of the invention as (MQI) 84 and Escape/Call Mechanism (EscapeC) 100, come within the true spirit and scope of the invention. 5,619,682 35 36

APPENDICES

APPENDIX A - MQI                                     Pages A1-A20
APPENDIX B - PNET                                    Pages B1-B46
APPENDIX C - ENEX                                    Pages C1-C241
APPENDIX D - IPOOL MACROS AND ASSOCIATED ROUTINES    Pages D1-D7

APPENDIX A - MQI

A1

Name:    emuloop.s
Purpose: Interface between C code (emu_main) and RISC (interpreter)
Input:
Output:

# include "aix-regs.h" #include "ticlhead.h" #define monitoricalli (code, function) v ... extern function; cmpli 2, wo, code; bne 2, S+15; t 3, rim; bl ... function; Mclexit monitor-callix (code) ... function; crapli 2, w0, code; one 2, S+12; n iw, w0; b Callix; #define resource call (code, function) \ ... extern ... function; cmpli 2, 2, code; bne 2, S-16: r 3, in: b ... function; o tracer #define CIP call (code, function) extern ... function; cmpli 2, iw, code; be 2, S+16; r 3, rim; b ... function; b cipout #define idr (tgt, Src} lha tgt, Src (wO) #define idb (tgt, src) sli tgttgt, 1 ; s tgt, togt, bs #define REBASE ?tgt) i dt, tgt; a. dt, will, dt; St. dt, tdt file "emuloop.s" .giobl emu-loop (ds) .csect enau loop Ids) long ... emuloop (PR} ... globi ... elluloop (PR) .csect - emuloop (PR) eIT loop: ai sp, sp, -16 get a little Stack space Tiflir Z. get return link st 2, 4 (sp) i stack return link rith, 3 get argrimp tr) l 2 rint 4 (rim) read rim value from rim array crimp 2, 2, rich verify consistency of rimpt s 4, Cregs ... panic; save C context for eventual return 2, S +8 skip if Ok ... panic in case of fire, yell FIRE??? 4, aregs get as context 2, -Soi. get virtual soi c3, c34 (rim) ! get xe table ptr 0, 2, w2 test if trojan modified soi 2, 0 0x10, 2 clear CR3 field 0,-no-triji Croc 14, 14, 14 set trojan active flag co 12, 13, 14 signal rupt no trii : 5,619,682 39 40

        l       w0, sdtptr      # get segment descriptor table ptr
        l       w1, opt         # get option flags
        cmpi    2, w0, 0        # test for valid segment desc pointer
        andil.  2, w1, VM       # test virtual pool option flag
        crnor   5, 2, 10        # no v pool if ((opt.VM == 0) || (SDTP == 0))
        l       w0, fp_cns      # load ms 16 bits of highest FP reg
        lfd     FPZRO, 0(w0)    # load fp reg with constant zero
        lfd     FPMAX, 8(w0)    # load fp reg with constant DPS6 FP max
        lfd     FPMIN, 16(w0)   # load fp reg with constant DPS6 FP min
        mtfsfi  7, 0x0          # set rounding mode to -> 0
        l       w0, iv          # get IV (-> ISA)
        andil.  w1, w1, Boot    # test boot flag
        beq     0, go           # skip if standalone
        # R1-R6, B1-B7 and M1 from ISA passed to us
        ldr(r1, R1)
        ldr(r2, R2)
        ldr(r3, R3)
        ldr(r4, R4)
        ldr(r5, R5)
        ldr(r6, R6)
        ldr(ci, M1)
        andil.  ci, ci, 0x00ff
        sli     ci, ci, 8
        ldb(b1, B1)
        ldb(b2, B2)
        ldb(b3, B3)
        ldb(b4, B4)
        ldb(b5, B5)
        ldb(b6, B6)
        ldb(b7, B7)
go:
        lha     iw, 0(p)        # fetch first instruction word
        b       pre_soi         # enter interpreter
        .extern pre_soi
        .globl  Quit
Quit:
        lm      4, aregs        # get asm context
        cmpi    2, iw, 1        # MCL 1?
        bne     2, Halt         # skip unless MCL
        mr      3, r2           # standalone TRMRQ
        b       Back            # join returning flow
Halt:
        l       3, ph           # retrieve PH
        sf      3, bs, 3        # debase
        sri     3, 3, 1         # descale for error report
Back:
        lm      4, cregs        # restore C context
        l       z, 4(sp)        # retrieve return link
        ai      sp, sp, 16      # relinquish stack frame
        mtlr    z               # put link where it will help
        br                      # return to main

# return (temporary or permanent) from interpreter
        .globl  Crtn
Crtn:
        l       w1, opt         # get option flags
        st      rim, 0(sp)      # save rim in stack
        andil.  w1, w1, Boot    # test boot flag
        l       w2, soi         # get virtual soi
        stm     4, aregs        # save asm context
        stfd    1, cip1op0      # save fp context
        stfd    2, cip1op2      # save fp context
        stfd    3, cip2op0      # save fp context
        l       w2, bsptr       # get pointer to base
        st      bs, 0(w2)       # send current base to C
        liu     dt, 0xdead      # prepare to change
        oril    dt, dt, 0xdead  #   preempt flag
        l       w1, baser0      # get pointer to hdm
        beq     0, Rqst         # detour if standalone
        st      dt, RRLIVE(w1)  # store preempt flag
Rqst:
        cmpi    2, iw, MCL      # MCL?
        beq     2, Mcl1         # skip if MCL
        # macro calls to identify request:
        CIP_call(MAT, L6X_mat)
        CIP_call(AME, L6X_ame)

A3

        CIP_call(DME, L6X_dme)
        .extern sip_save
Callx:                          # start for resource-like MCL's
        l       ea, iv
        rlinm   ea, ea, 0, 4, 31        # debase iv
        st      ea, pre_iv
        st      w2, cip3op0     # save work registers
        l       w2, baser0
        a       ea, w2, ea
        l       z, 2(ea)        # get ISM1 & ISM2
        rlinm.  z, z, 0, 22, 22
        cal     ea, SI(ea)
        beq     0, trm_no_sip
        bl      .sip_save
trm_no_sip:
        lm      w2, cip3op0     # restore work registers
        rlinm   z, iw, 0, 16, 31
        resource_call(REQ, iw_req)
        resource_call(WAIT, iw_wait)
        resource_call(TSK, iw_tsk)
        resource_call(0, iw_lev)
        resource_call(RTDC, iw_rtdc)
        resource_call(UCUMUL, iw_ucumul)
        resource_call(RTCN, iw_rtcn)
        resource_call(RTCF, iw_rtcf)
        resource_call(WDTN, iw_wdtn)
        resource_call(WDTF, iw_wdtf)
        resource_call(MCLACCPT, mcl_accept)
        resource_call(MCLRCVFM, mcl_recv_from)
        resource_call(MCLRCVMS, mcl_recv_msg)
        resource_call(MCLRECV, mcl_recv)
        resource_call(MCLSEND, mcl_send)
        resource_call(MCLSNDTO, mcl_send_to)
        resource_call(MCLSNDMS, mcl_send_msg)
        cmpi    2, iw, 0x7fff   # test for iw > 16 bits
        ble     2, $+16         # skip if not
        mr      3, rim          # SWFLT, save regs
        bl      .iw_lev         # and call the dumper
        b       trace1          # resume (if you can)
        andil.  z, z, OPMSK     # eliminate A's from iw for LEV
        resource_call(LEV, iw_lev)
        .extern .resunk
        st      w0, r1*4(rim)   # resunk will print r1 from memory
        mr      3, rim
        bl      .resunk
Siprestore:
        bl      Back2I          # prepare to return to Interpreter
        l       ea, iv
        rlinm   ea, ea, 0, 4, 31        # debase new iv
        l       z, pre_iv
        cmp     2, z, ea
        beq     2, trace2
        st      w2, cip3op0     # save work registers
        l       w2, baser0
        a       ea, ea, w2
        l       z, 2(ea)        # get ISM1 & ISM2
        rlinm.  z, z, 0, 22, 22
        cal     ea, SI(ea)
        beq     0, $+8
        bl      .sip_restore
        lm      w2, cip3op0     # restore work registers
        b       trace2
Mcl1:                           # test for resource-like MCL's first
        monitor_callx(MCLACCPT)
        monitor_callx(MCLRCVFM)
        monitor_callx(MCLRCVMS)
        monitor_callx(MCLRECV)
        monitor_callx(MCLSEND)
        monitor_callx(MCLSNDTO)
        monitor_callx(MCLSNDMS)
        l       w1, baser0      # access HDM
        l       z, 0x00fc(w1)   # get TV C1 thp
        andil.  z, z, 0x0001    # test oddness of FHP01

        l       z, baser0       # get ring 0 base
        cmp     2, bs, z        # already ring 0?
        st      bs, workbs      # save working base
        st      z, 0(w2)        # save ring 0 base
        beq     0, Mclrdy       # skip if not inward call
        beq     2, Mclrdy       # skip if no change needed
        REBASE(rim_p)
        REBASE(rim_b1)
        REBASE(rim_b2)
        REBASE(rim_b3)
        REBASE(rim_b4)
        REBASE(rim_b5)
        REBASE(rim_b6)
        REBASE(rim_b7)
        REBASE(iv)
        REBASE(t)
        REBASE(rdbr)
Mclrdy:
        monitor_call(MCLDQSA,    mcl_dqsa)
        monitor_call(MCLCTIME,   mcl_ctime)
        monitor_call(MCLITIME,   mcl_itime)
        monitor_call(MCLTROJAN,  mcl_trojan)
        monitor_call(MCLPOSTCLMVM, mcl_pclm)
        monitor_call(MCLPOOLADDR, mcl_paddr)
        monitor_call(MCLDEVID,   mcl_devid)
        monitor_call(MCLDELCH,   mcl_delch)
        monitor_call(MCLSTAT,    mcl_stat)
        monitor_call(MCLINCH,    mcl_inch)
        monitor_call(MCLWKNAM,   ZQNY08_return_workstation_name)
        monitor_call(MCLWKCMP,   ZQNY09_return_workstation_components)
        monitor_call(MCLWKRLA,   ZQNY18_return_lcl_addr)
        monitor_call(MCLDDVST,   ZQNY20_device_driver_status)
        monitor_call(MCLRDTOT,   ZQNY21_return_device_timeout)
        monitor_call(MCLUDTOT,   ZQNY22_update_device_timeout)
        monitor_call(MCLWKPRR,   ZQNY41_replace_profile_with_named_profile)
        monitor_call(MCLWKUPA,   ZQNY42_update_workstation_parms)
        monitor_call(MCLWKRPA,   ZQNY44_return_workstation_parms)
        monitor_call(MCLDVRPA,   ZQNY45_return_device_parms)
        monitor_call(MCLMLXUP,   ZQNY50_update_mlx_conn_parms)
        monitor_call(MCLLANUP,   ZQNY60_update_lan_conn_parms)
        monitor_call(MCLLANGP,   ZQNY61_return_lan_conn_parms)
        monitor_call(MCLGDTM,    mcl_gdtm)
        monitor_call(MCLSUSPN,   mcl_suspn)
        monitor_call(MCLEXTDT,   mcl_extdt)
        monitor_call(MCLEXTET,   mcl_extet)
        monitor_call(MCLUSOUT,   mcl_usout)
        monitor_call(MCLUSIN,    mcl_usin)
        monitor_call(MCLUSERS,   mcl_users)
        monitor_call(MCLVERBOSE, mcl_verbose)
        monitor_call(MCLSetDate, mcl_setdate)
        monitor_call(MCLSTOPHVX, mcl_stop_hvx)
        monitor_call(MCLCPXLRN,  mcl_cpx_lrn)
        monitor_call(MCLLINKLRN, mcl_link_lrn)
        monitor_call(MCLDUMP,    mcl_dump)
        monitor_call(MCLNOP,     mcl_nop)
        monitor_call(MCLTRMRQ,   mcl_trmrq)
        monitor_call(MCLSDTSZE,  mcl_sdtsze)
        monitor_call(MCLINTSDT,  mcl_initsdt)
        monitor_call(MCLGETSDT,  mcl_getsdt)
        monitor_call(MCLNVTADR,  mcl_nvt_addr)
        monitor_call(MCLGETSUBSDT, mcl_getsubsdt)
        monitor_call(MCLACTSEG,  mcl_act_seg)
        monitor_call(MCLDEACTSEG, mcl_deact_seg)
        monitor_call(MCLINVSDTE1, mcl_inv_sdte_1)
        monitor_call(MCLINVSDTE2, mcl_inv_sdte_2)
        monitor_call(MCLGETVPO,  mcl_get_vpo)
        monitor_call(MCLBUILDRMTS, mcl_build_rmts)
        monitor_call(MCLINVRMTS, mcl_inv_rmts)
        monitor_call(MCLENTR2,   mcl_entr2)
        monitor_call(MCLXITR2,   mcl_xitr2)
        monitor_call(MCLSETSWV,  mcl_setswv)
        monitor_call(MCLGETSWVA, mcl_getswva)
        monitor_call(MCLSOCKT,   mcl_socket)
        monitor_call(MCLBIND,    mcl_bind)
        monitor_call(MCLCNNCT,   mcl_connect)
        monitor_call(MCLGTPNM,   mcl_get_peer_name)
        monitor_call(MCLGTSNM,   mcl_get_sockname)

        monitor_call(MCLGTSOP, mcl_getsockopt)
        monitor_call(MCLLISTN, mcl_listen)
        monitor_call(MCLSHUTD, mcl_shutdown)
        monitor_call(MCLSSKO,  mcl_setsockopt)
        monitor_call(MCLCLOSE, mcl_close)
        monitor_call(MCLSCKPR, mcl_socketpair)
        monitor_call(MCLIOCTL, mcl_socket_ioctl)
        .extern .mclunk
        st      w0, r1*4(rim)   # mclunk will print r1 from memory
        mr      3, rim
        bl      .mclunk
Mclexit:
        l       rim, 0(sp)      # unstack rim ptr
        l       w2, bsptr       # retrieve pointer to base
        l       w1, workbs      # retrieve temporary base
        l       bs, 0(w2)       # retrieve caller's base
        cmp     2, bs, w1
        beq     2, trace1       # was there a ring change? skip if not
        REBASE(rim_p)
        REBASE(rim_b1)
        REBASE(rim_b2)
        REBASE(rim_b3)
        REBASE(rim_b4)
        REBASE(rim_b5)
        REBASE(rim_b6)
        REBASE(rim_b7)
        REBASE(iv)
        REBASE(t)
        REBASE(rdbr)
        b       trace1
        .extern trap
        .globl  Cipout
Cipout:
        bl      Back2I          # prepare to return to Interpreter
        l       w1, g11         # fetch C return bits for cip traps
        rlinm.  w1, w1, 9, 0, 4 # left justify & check trap bits
        beq     0, trace2       # no trap bits set - return to interpreter
        ai      p, p, -2        # reset p before entering common trap handler
        cntlz   w1, w1          # count leading zeros to trap bit
        lil     w0, 0x40-29     # set trap handler to #29
        a       w0, w0, w1      # adjust to correct trap handler
        b       trap            # common trap exit

# prepare to re-enter interpreter:
trace1:
        bl      Back2I
trace2:
        mtctr   w2              # prepare to return to Interpreter
        lha     iw, 0(p)        # fetch instruction word
        bctr                    # re-enter interpreter @ virtual soi
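For readers tracing the listing above, the monitor_call chain amounts to a linear dispatch on the monitor-call code in the instruction word: each macro compares the code and branch-links to the matching C handler, falling through to an "unknown" handler. The following C sketch of that dispatch is illustrative only; the codes, handler names, and table form are assumptions, not the emulator's actual symbols.

/* Illustrative sketch only - not the emulator's actual symbols. */
#include <stddef.h>

typedef int (*mcl_handler)(void *rim); /* handler receives the register image */

struct mcl_entry {
    unsigned    code;       /* monitor-call code from the instruction word */
    mcl_handler handler;    /* C routine emulating that call */
};

static int mcl_nop(void *rim) { (void)rim; return 0; }
static int mcl_unk(void *rim) { (void)rim; return -1; }  /* no match */

static const struct mcl_entry mcl_table[] = {
    { 0x0001, mcl_nop },
    /* ... one entry per emulated monitor call ... */
};

int dispatch_mcl(unsigned iw, void *rim)
{
    size_t i;
    /* the assembly's chain of compare-and-branch macros is this search */
    for (i = 0; i < sizeof mcl_table / sizeof mcl_table[0]; i++)
        if (mcl_table[i].code == iw)
            return mcl_table[i].handler(rim);
    return mcl_unk(rim);    /* corresponds to the fall-through to mclunk */
}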

# Routine to prepare to re-enter interpreter:
Back2I:
        l       rim, 0(sp)      # unstack rim ptr
        l       w1, sdtptr      # get segment descriptor table ptr
        l       w0, C_soi       # get ptr -> C_soi
        l       w2, soi         # get ptr -> virtual soi
        cmpi    2, w1, 0        # test for valid segment desc pointer
        l       w1, opt         # get option flags
        lfd     1, cip1op0      # restore fp context (SA1)
        andil.  z, w1, VM       # test virtual pool option flag
        lfd     2, cip1op2      # restore fp context (SA2)
        lfd     3, cip2op0      # restore fp context (SA3)
        crnor   15, 2, 10       # no v pool if ((opt.VM == 0) || (SDTP == 0))
        cmp     2, w2, w0       # is trojan (still) active?
        andil.  z, w1, Boot     # test option flag
        l       c3, c3*4(rim)   # get xe table ptr
        lm      4, aregs        # get asm context, r4-r31
        crnor   14, 8, 9        # trojan inactive iff w2 == &soi
        cror    12, 13, 14      # if trojan active, set cr12 also
        l       w0, fp_cns      # get ptr to fp constants
        lfd     FPZRO, 0(w0)
        lfd     FPMAX, 8(w0)
        lfd     FPMIN, 16(w0)   # restore fp constants

        beqr    0               # return if standalone execution
        l       iw, baser0      # get pointer to hdm
        liu     dt, 0xab1e      # prepare to change
        oril    dt, dt, 0xab1e  #   preempt flag
        st      dt, RRLIVE(iw)  # change preempt flag
        br                      # return

# trojan options entry point
        .globl  troj
troj:
        l       w1, opt         # get option flags
        st      rim, 0(sp)      # save rim in stack
        andil.  w1, w1, Boot    # test boot flag
        l       w2, soi         # get virtual soi
        stm     4, aregs        # save asm context
        liu     dt, 0xdead      # prepare to change
        oril    dt, dt, 0xdead  #   preempt flag
        beq     0, standalone   # detour if standalone
        l       w2, baser0      # get pointer to hdm
        st      dt, RRLIVE(w2)  # store preempt flag
        l       w2, bsptr       # get pointer to base
standalone:
        st      bs, 0(w2)       # send current base to C
        mr      3, rim
        .extern .troj_bp
        bl      .troj_bp        # enter "C" trojan code
        l       rim, 0(sp)      # unstack rim ptr
        l       w0, C_soi       # get ptr -> C_soi
        l       w1, opt         # get option flags
        l       w2, w2*4(rim)   # get new w2 value
        l       bs, bs*4(rim)   # reestablish memory addressability
        cmp     2, w2, w0       # is trojan still active?
        andil.  w1, w1, Boot    # test option flag
        liu     dt, 0xab1e      # prepare to change
        oril    dt, dt, 0xab1e  #   preempt flag
        crnor   14, 8, 9        # trojan inactive iff w2 == &soi
        cror    12, 13, 14      # if trojan active, set cr12 also
        beq     0, $+12         # skip if standalone
        l       iw, baser0      # get pointer to hdm
        st      dt, RRLIVE(iw)  # change preempt flag
        lm      4, aregs        # get asm context
        mtctr   w2              # set TD
        l       c3, c3*4(rim)   # get ptr to XE table
        .extern Phlst
        b       Phlst           # re-enter interpreter


A13

/*
 * Name:    emulsked.c
 * Purpose: HVX scheduling functions
 * Functions in this module:
 *      void wake_tch ();
 *      void dpsq_oh ();
 *      void sav_regs ();
 *      long sign_ext ();
 *      void st_regs ();
 *      void preempt_hvx ();
 */

#include "sys_head.h"
#include "emu_head.h"
#include "macro.h"
#include "id_head.h"
#include "err_head.h"
#include "mqi_head.h"
#include "z3sco.h"
#include "z3rct.h"
#include "z3tch.h"
#include "z3gcb.h"
#include "z3rpx.h"
#include "z3irb.h"
#include "z3hdm.h"
#include "zrb.h"
#include "win.h"
#define CPROG
#include "aix_regs.h"

#define IDLEPRI 64

extern char *base;              /* ptr to HVX hardware dedicated memory */
extern char *sysr0, *sysr3;     /* VM base offsets for ring #0, #3 */
extern int virt_view;           /* specifies single or multiple virt. view */
extern int dps6uvah;            /* high order half of minimum DPS6 user va */
char *srvr_base();              /* returns base of i-pool in server's address space */
extern int process;             /* indicates server or not, used by VARRISC_ADDR */
extern char *usr_r();
extern char *vpool_invbase;
extern ulong *ssdt_ptr;         /* pointer to subsdt */
extern WORD *rmt_ptr;           /* pointer to reverse map table */
extern struct EMU_OPTS opt;     /* run-time options */
extern struct MQI *sqd, *xtd;
extern struct SCB *scbp;        /* ptr to SCB */

int count = 0;                  /* temporary meter */
static union MATH math;

/* structure for DPS queue element header */
typedef struct DPSQ {           /* structure for DPS queue head */
        WORD lock;              /* lock word */
        ADDR headp[2];          /* -> head of queue (first element) */
        ADDR tailp[2];          /* -> tail of queue (last element) */

/*
 * iw_req - request UNIX io server execution (IO 8052)
 * on entry: b1 = irb, b2 = rct, r5 = priority for queueing
 */
struct RISC_REGS *rr;           /* risc regs containing DPS6 regs */


/*
 * ZNVDC1_dir_cleanup          PNet Directive Cleanup - Part 1
 *
 * This routine performs the cleanup functions for the HVX Pseudo
 * Network Layer (PNet).
 */
ZNVDC1_dir_cleanup (envptr)
ENVIRN *envptr;
{ /* dir_cleanup */
    int ZNVDC2_dir_cleanup ();

    /* Setup cleanup routine argument for second cleanup. */
    su_set_cleanup (ZNVDC2_dir_cleanup);
    return (0);
} /* dir_cleanup */

/*
 * ZNVDC2_dir_cleanup          PNet Directive Cleanup - Part 2
 *
 * This routine performs initialization functions for the HVX Pseudo
 * Network Layer (PNet).  This routine issues an empty information
 * gater to the HVX Pseudo Network Layer.  Upon receipt of this gater
 * PNet will issue a Boot Event i/o request to the Unix Network User
 * Layer (PNetX) to complete initialization.  This routine is called
 * after all bound units (overlays) have been loaded and executed and
 * their associated initializations have been performed.
 */
ZNVDC2_dir_cleanup (envptr)
ENVIRN *envptr;
{ /* dir_cleanup */
    struct sgater ZNVGAT;

    /* Setup fields in fixed part of gater. */

    /* Issue gate request. */
    return (ZNGTCL_gate_call (&ZNVGAT));
} /* dir_cleanup */
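The field setup is elided in the listing above. Patterned on ZNVIDX_issue_dis_ind later in this appendix, the following is a plausible sketch of initializing the fixed part of an empty "information" gater; the reduced struct and the gtfinf code value are assumptions, not the original source.

/* Sketch only - a reduced stand-in for the real struct sgater. */
#include <stddef.h>
#include <string.h>

enum { gtfinf = 5 };            /* assumed "information" function code */

struct sgater_min {
    int   gt_pri;               /* priority */
    void *gt_nxt;               /* next gater in chain */
    int   gt_fnc;               /* function code */
    void *gt_dta;               /* data (PHB chain) pointer */
    int   gt_vln;               /* length of variable part */
};

void setup_info_gater(struct sgater_min *g)
{
    memset(g, 0, sizeof *g);    /* clear the whole fixed part */
    g->gt_fnc = gtfinf;         /* mark it an information gater */
    g->gt_nxt = NULL;           /* unchained */
    g->gt_dta = NULL;           /* empty: no data ... */
    g->gt_vln = 0;              /* ... and no variable part */
}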

B10

f k & f f* ZNWCRA check rerote addr Check Reinote Network Address This routine scans the chain of HVX SNSAP tables for a match between: the network address of a remote sap and the network address passed in the argument list. * * * * * * * * * * * : * : * * * * * * * * * * * * * * * * * r * * * * * * * * * r s r * : * * * * * * * * * r * * * * * ...... / ZNWCRA check remote addr net adr) unsigned char *netadr; f* check remote addr */ register struct hvXSnsap * sniptr; unsigned int slen; sniptr = SXNP PTR (YNVLSN); slen c Cnet adr O + 1} f2; for ( ; Siptr = NULL; sniptr = snaptr->snnixt { if (Snp tr-> Snrem &&. s: ptr->sn addr0 = 0 && sIptr->sn addr) == net adr 0 ) { if (ne:CITp (&netad(1), &Sniptr->Snaddr1, slen) == 0) { return O);

} /*endloop * return (1); } check remote addr */ 5,619,682 97 98
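The length arithmetic here, (netadr[0] + 1) / 2, is consistent with network addresses stored as packed digits, two per byte, with the digit count in byte 0. The following self-contained illustration of that comparison is ours; the packed-digit layout is an assumption inferred from the expression, not stated in the listing.

/* Sketch: comparing two packed-digit addresses laid out [count][d1d2]... */
#include <stdio.h>
#include <string.h>

static int addr_match(const unsigned char *a, const unsigned char *b)
{
    unsigned slen = (a[0] + 1) / 2;     /* bytes holding a[0] digits */
    return a[0] == b[0] && memcmp(&a[1], &b[1], slen) == 0;
}

int main(void)
{
    /* 5 digits "31107" packed two per byte, high nibble first */
    unsigned char x[] = { 5, 0x31, 0x10, 0x70 };
    unsigned char y[] = { 5, 0x31, 0x10, 0x70 };
    printf("%s\n", addr_match(x, y) ? "match" : "no match");
    return 0;
}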

B11

/*
 * ZNVFLR_find_lrn             Find LRN for Pseudo Device
 *
 * This routine finds and validates the LRN for a specified pseudo
 * device.
 */
ZNVFLR_find_lrn (devname, lrnptr)
unsigned char *devname;
unsigned int *lrnptr;
{ /* find_lrn */
    unsigned char filename[58];
    struct gipsb filinfo;
    struct mclpsb mclregs;
    int status;

    /* Get pseudo device LRN via Get File Info ($GIFIL). */
    memset (filename, ' ', (int) 58);
    filename[0] = '>';
    memcpy (&filename[1], devname, 12);
    filinfo.gipsb_lfn = 0x2020;
    filinfo.gipsb_pthp = (int *) filename;
    filinfo.gipsb_fabp = NULL;
    filinfo.gipsb_kap = NULL;
    mclregs.reg_b4 = &filinfo;
    status = mcl (MCLSGIFIL, &mclregs);
    if (status == 0) {
        *lrnptr = filinfo.gipsb_lrn;
        return (0);
    }
    else {
        *lrnptr = status;
        return (-1);
    }
} /* find_lrn */

B12

/*
 * ZNVARB_alloc_req_blocks     Allocate Request Blocks
 *
 * This routine allocates a specified number of blocks of permanent
 * memory used for communication with the Unix Network User Layer
 * (PNetX).  Each block is allocated with sufficient memory to contain
 * a task request block (trb), i/o request block (iorb) and a gater.
 */
int *ZNVARB_alloc_req_blocks (envptr, count)
ENVIRN *envptr;
unsigned int count;
{ /* alloc_req_blocks */
    struct hvx_rqb *rqbptr;
    struct hvx_rqb **rqbnxt;
    unsigned int i;

    *rqbnxt = su_get_perm (sizeof (struct hvx_rqb));
    rqbnxt = &((*rqbnxt)->rq_nxt);

    return (rqbptr);
} /* alloc_req_blocks */

/*
 * ZNVSNI_snsap_info           HVX25 Information
 *
 * This routine provides information about a local X.25 network
 * subscription (HVX25 or SNSAP structure).  This routine must be
 * called by a cleanup routine from another directive handler.
 * (Called by XCM)
 */
ZNVSNI_snsap_info (argc, arglist)
int argc;                       /* number of entries in arg list */
ARGVAL **arglist;               /* array of pointers to ARGVALs */
{ /* snsap_info */
    ARGVAL *avalptr;
    register struct hvx_snsap *snsapptr;
    unsigned int slen;
    INT *ZNVFSN_find_snsap ();

    avalptr = *arglist++;       /* routine name */

    /* Find snsap table. */
    avalptr = *arglist++;
    if ((snsapptr = ZNVFSN_find_snsap (avalptr->av_value)) != NULL) {

        /* Return network subscription address. */
        avalptr = *arglist++;
        avalptr->av_count = snsapptr->sn_addr[0];
        slen = (snsapptr->sn_addr[0] + 1) / 2;
        memcpy (&avalptr->av_value[0], &snsapptr->sn_addr[1], slen);

        /* Return maximum number of virtual circuits. */
        avalptr = *arglist++;
        *(int *) avalptr->av_value = snsapptr->sn_maxvc;

B14

/*
 * ZNVFSN_find_snsap           Find SNSAP Table
 *
 * This routine scans the chain of snsap tables for a match on the
 * snsap name.  If a match is found, the address of the snsap table
 * is returned to the caller.
 */
int *ZNVFSN_find_snsap (snam)
unsigned char *snam;
{ /* find_snsap */
    register struct hvx_snsap *snptr;

    snptr = SXNP_PTR (YNVLSN);
    for ( ; snptr != NULL; snptr = snptr->sn_nxt) {
        if ((memcmp (snam, &snptr->sn_name[1], 8) == 0) &&
            (snptr->sn_rem)) {
            return (snptr);
        } /* ENDIF */
    } /* ENDLOOP */
    return (NULL);
} /* END find_snsap */

B15

/*
 * ZNVGTR                      HVX X.25 GATER INTERFACE
 *
 * Description:
 * This module processes gaters received by the HVX Pseudo Network
 * Layer (PNet).
 *
 * This module contains the following routines:
 *      ZNVGTR_process_gater
 *      ZNVCRQ_process_con_req
 *      ZNVCRP_process_con_rsp
 *      ZNVDTR_process_data_req
 *      ZNVDRQ_process_dis_req
 *      ZNVGTP_process_user_gater
 *      ZNVIBE_issue_boot_event
 *      ZNVIGE_issue_gater_event
 *      ZNVIDX_issue_dis_ind
 *      ZNVIRL_issue_rel_sdu
 *      ZNVERR_report_error
 *      ZNVAVX_alloc_vc_index
 *      ZNVDVX_dealloc_vc_index
 */

#include
#include
#include "hic.h"
#include "hvx.h"
#include "hvxerr.h"
#include "sgater.h"
#include "sxnp.h"
#include "sphd.h"
#include "gate_mgr.h"
#include "site.h"
$MODULEID (ZNVGTR,

B16

const char zl_date[] = "mm/dd/hhmm";
struct sgater ZNVGAT;

/*
 * ZNVGTR_process_gater        Process Gater
 *
 * This module processes gaters received by the HVX X.25 Pseudo Network
 * Layer (PNet).
 */
void ZNVGTR_process_gater (gaterptr)
register struct sgater *gaterptr;
{ /* process_gater */
    switch (gaterptr->gt_fnc) {
    case gtfcrq:                /* connect request */
        ZNVCRQ_process_con_req (gaterptr);
        break;
    case gtfcrp:                /* connect response */
        ZNVCRP_process_con_rsp (gaterptr);
        break;
    case gtfdrq:                /* disconnect request */
        ZNVDRQ_process_dis_req (gaterptr);
        break;
    case gtfdtr:                /* data request */
    case gtfexr:                /* expedited data request */
        ZNVDTR_process_data_req (gaterptr);
        break;
    case gtfinf:                /* information */
        ZNVIBE_issue_boot_event (gaterptr);
        break;
    default:
        ZNVGTP_process_user_gater (gaterptr);
        break;
    } /* endswitch */
    return;
} /* process_gater */

B17

/*
 * ZNVCRQ_process_con_req      Process Connect Request
 *
 * This routine processes the receipt of a connect request gater.
 */
void ZNVCRQ_process_con_req (gaterptr)
struct sgater *gaterptr;
{ /* process_con_req */
    struct hvx *hvxptr;
    unsigned int index_id;

    hvxptr = SXNP_PTR (YNVHVX);

    /* Allocate virtual circuit index id. */
    index_id = ZNVAVX_alloc_vc_index ();

    /* Issue Gater Event to Unix Network Layer (PNetX). */
    if (ZNVIGE_issue_gater_event (gaterptr, index_id) != 0) {
        ZNVIDX_issue_dis_ind (gaterptr);
        if (gaterptr->gt_dta != NULL) {
            ZNGRTB_return_buffer (gaterptr->gt_dta);
        }
        ZNVDVX_dealloc_vc_index (index_id);
        ZNGTRB_gate_return_block (gaterptr);
    }
    return;
} /* process_con_req */

B18

/*
 * ZNVCRP_process_con_rsp      Process Connect Response
 *
 * This routine processes the receipt of a connect response gater.
 */
void ZNVCRP_process_con_rsp (gaterptr)
struct sgater *gaterptr;
{ /* process_con_rsp */
    struct hvx *hvxptr;
    unsigned int index_id;

    hvxptr = SXNP_PTR (YNVHVX);

    /* Allocate virtual circuit index id. */
    index_id = ZNVAVX_alloc_vc_index ();

    /* Issue Gater Event to Unix Network Layer (PNetX). */
    if (ZNVIGE_issue_gater_event (gaterptr, index_id) != 0) {
        if (gaterptr->gt_dta != NULL) {
            ZNGRTB_return_buffer (gaterptr->gt_dta);
        }
        ZNVDVX_dealloc_vc_index (index_id);
        ZNGTRB_gate_return_block (gaterptr);
    }
    return;
} /* process_con_rsp */

B19

/*
 * ZNVDTR_process_data_req     Process Data Request
 *
 * This routine processes a data request or expedited data request
 * gater.
 */
void ZNVDTR_process_data_req (gaterptr)
struct sgater *gaterptr;
{ /* process_data_req */
    struct hvx *hvxptr;

    hvxptr = SXNP_PTR (YNVHVX);
    if (gaterptr->gt_dta == NULL) {
        ZNVERR_report_error (hv25_nph, 0);
        ZNGTRB_gate_return_block (gaterptr);
        return;
    }
    ZNGOPT_old_phd_pointers (gaterptr->gt_dta);

    /* Issue Gater Event to Unix Network Layer (PNetX). */
    if (ZNVIGE_issue_gater_event (gaterptr, 0) != 0) {
        ZNVIRL_issue_rel_sdu (gaterptr);
        ZNGTRB_gate_return_block (gaterptr);
    }
    return;
} /* process_data_req */

B20

/*
 * ZNVDRQ_process_dis_req      Process Disconnect Request
 *
 * This routine processes the receipt of a disconnect request gater.
 */
void ZNVDRQ_process_dis_req (gaterptr)
struct sgater *gaterptr;
{ /* process_dis_req */
    struct hvx *hvxptr;
    unsigned int index_id;

    hvxptr = SXNP_PTR (YNVHVX);
    index_id = 0;

    /* Allocate virtual circuit index id (refused connect). */
    if (gaterptr->gt_sce == NULL) {
        index_id = ZNVAVX_alloc_vc_index ();
    }

    /* Issue Gater Event to Unix Network Layer (PNetX). */
    if (ZNVIGE_issue_gater_event (gaterptr, index_id) != 0) {
        if (gaterptr->gt_dta != NULL) {
            ZNGRTB_return_buffer (gaterptr->gt_dta);
        }
        if (gaterptr->gt_sce == NULL) {
            ZNVDVX_dealloc_vc_index (index_id);
        }
        ZNGTRB_gate_return_block (gaterptr);
    }
    return;
} /* process_dis_req */

B21

/*
 * ZNVGTP_process_user_gater   Process User Gater
 *
 * This routine processes the receipt of a gater from the user layer.
 */
void ZNVGTP_process_user_gater (gaterptr)
struct sgater *gaterptr;
{ /* process_user_gater */
    struct hvx *hvxptr;

    hvxptr = SXNP_PTR (YNVHVX);

    /* Issue Gater Event to Unix Network User Layer (PNetX). */
    if (ZNVIGE_issue_gater_event (gaterptr, 0) != 0) {
        ZNGTRB_gate_return_block (gaterptr);
    }
    return;
} /* process_user_gater */

B22

/*
 * ZNVIBE_issue_boot_event     Issue Boot Event
 *
 * This routine issues a boot event request to the Unix Network User
 * Layer (PNetX).
 */
void ZNVIBE_issue_boot_event (gaterptr)
struct sgater *gaterptr;
{ /* issue_boot_event */
    struct hvx *hvxptr;
    unsigned int status;

    hvxptr = SXNP_PTR (YNVHVX);

    /* Issue Boot Event to Unix Network User Layer (PNetX). */
    if ((status = ZNVBEV_boot_event ()) != 0) {
        ZNVERR_report_error (hv25_bev, status);
    }
    else {
        ZNGTRB_gate_return_block (gaterptr);
    }
    return;
} /* issue_boot_event */

B23

/*
 * ZNVIGE_issue_gater_event    Issue Gater Event
 *
 * This routine allocates memory for a trb/iorb and issues a gater
 * event request to the Unix Network Layer (PNetX).
 */
ZNVIGE_issue_gater_event (gaterptr, dvsword)
struct sgater *gaterptr;
unsigned int dvsword;
{ /* issue_gater_event */
    struct hvx *hvxptr;
    ETRB *etrbptr;
    unsigned int status;

    hvxptr = SXNP_PTR (YNVHVX);

    /* Allocate memory for trb and iorb. */
    if (ZNGGTB_get_buffer (((sizeof (ETRB) + 1) / 2) +
                           ((sizeof (struct elrn_iorb) + 1) / 2),
                           &etrbptr) != 0) {
        ZNVERR_report_error (hv25_mem, 0);
        return (1);
    }

    /* Issue Gater Event to Unix Network User Layer (PNetX). */
    if ((status = ZNVGEV_gater_event (etrbptr, gaterptr, dvsword)) != 0) {
        ZNVERR_report_error (hv25_gev, status);
        ZNGRTB_return_buffer (etrbptr);
        return (1);
    }

    hvxptr->hv_gevc += 1;
    return (0);
} /* issue_gater_event */
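The buffer request above sizes the trb and iorb in 16-bit words rather than bytes; (sizeof x + 1) / 2 is the usual round-up division. A quick worked check of that reading (ours, not from the listing):

/* Rounding a byte count up to 16-bit words: (bytes + 1) / 2. */
#include <assert.h>

static unsigned bytes_to_words(unsigned bytes)
{
    return (bytes + 1) / 2;     /* 7 bytes -> 4 words, 8 bytes -> 4 words */
}

int main(void)
{
    assert(bytes_to_words(7) == 4);
    assert(bytes_to_words(8) == 4);
    return 0;
}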

B24

/ r/ f* ZNVIDX issue dislind Issue Disconnect indication Thisrequest routine to the builds connection a disconnect layer. indication gater and issues a gate This routine is used during the processing of a connect request to indicate an unsuccessful connect. The input argument to this routine is a pointer to the connect request gater which does not result in a successful connection. The source connection id Trust always be null when reporting an unsuccessful connect. is ...... w w w w y + r * * r * r * r + x * r + r w w w w w w x t w is ...... / ZNVIDX issue dislind (gaterptr. register struct Sgater "gaterptr; issue dislind */ extern struct Sgater ZNVGAT; ZNVGAT.gtpri O; ZNVGAT.gtnixt = NULL; * (long *) 2NVGAT.gt. Jakr = * (long * } "GTO"; ZNVGAT. gtslr gaterptr->g Cldlr; ZNVGAT.gt. Sin = gaterptr->gtdin; ZNVGAT. gt dilr gthwic; 2NVGAT.gt din gtXCET; 2NVGAT. gtsce NULL; ZNVGAT. git dist gaterptr->gtsce; ZNVGAT.gt dta = NULL; ZNVGAT.gt dis O; ZNVGAT. gttcry ZNVGAT.gtcre is C; (unsigned ) &ZNVGAT.gtflg = 0; ZNVGAT.gtvlin = 0; /* Issue gate request *f ZNGTCLlgate call (&2NWGAT); return (0); } A issue dislind */

5,619,682 127 128

B26

/*
 * ZNVERR_report_error         Report Error
 *
 * This routine is the error reporter for the Pseudo Network Layer.
 * PNet generated errors are reported as pseudo network subscription
 * errors with a fixed network subscription name of HVX.
 */
void ZNVERR_report_error (rsncode, othercode)
unsigned int rsncode;
unsigned int othercode;
{ /* report_error */
    register unsigned char *varptr;
    register unsigned char *aptr;
    unsigned int funct, class;
    extern struct sgater ZNVGAT;

    /* Setup fixed part of unsolicited message gater. */
    ZNVGAT.gt_pri = 0;
    ZNVGAT.gt_nxt = NULL;
    *(long *) ZNVGAT.gt_mkr = *(long *) "GT01";
    /* (several fixed-part assignments garbled in the source; the
       visible values are gtnwc, gtapl, gtnad, gtfun, NULL, NULL,
       NULL, 0, 0, 0) */
    *(unsigned *) &ZNVGAT.gt_flg = 0;
    ZNVGAT.gt_vln = 0;

    /* Setup variable part of network subscription error gater. */
    varptr = &ZNVGAT.gt_var[0];
    *varptr++ = gtfunc;         /* function */
    *varptr++ = 2;
    funct = gterrr;
    aptr = (unsigned char *) &funct;

    *varptr++ = gtobjc;         /* object class */
    *varptr++ = 2;
    class = gtclns;
    aptr = (unsigned char *) &class;

    *varptr++ = gtrsp1;         /* NS name */
    *varptr++ = 8;
    memcpy (varptr, (unsigned char *) "HVX     ", 8);
    varptr += 8;
    ZNVGAT.gt_vln += 10;

    *varptr++ = gtrsp3;         /* reason code */
    *varptr++ = 2;
    aptr = (unsigned char *) &rsncode;

    if (othercode != 0) {
        *varptr++ = gtrsp4;     /* WAN reason code */


B28

/*
 * ZNVAVX_alloc_vc_index       Allocate VC Index
 *
 * This routine allocates a virtual circuit index id.
 */
ZNVAVX_alloc_vc_index ()
{ /* alloc_vc_index */
    struct hvx *hvxptr;
    unsigned int status;
    SREGS;

    /* Allocate virtual circuit index id. */
    $regs.$r2 = gtdlvc;
    CALLXNP (YNBAID);
    return (status);
} /* alloc_vc_index */

B29

/*
 * ZNVDVX_dealloc_vc_index     Deallocate VC Index
 *
 * This routine deallocates a virtual circuit index id.
 */
void ZNVDVX_dealloc_vc_index (index_id)
unsigned int index_id;
{ /* dealloc_vc_index */
    struct hvx *hvxptr;
    unsigned int status;
    SREGS;

    hvxptr = SXNP_PTR (YNVHVX);

    /* Deallocate virtual circuit index id. */
    $regs.$r1 = index_id;
    $regs.$r2 = gtdlvc;
    CALLXNP (YNBDID);
    return;
} /* dealloc_vc_index */

B30

/*
 * ZNVIOR                      HVX X.25 I/O REQUESTOR
 *
 * Description:
 * This module builds iorbs and issues i/o requests to the Unix Network
 * User Layer (PNetX) for the following functions:
 *      Boot Event      (write)
 *      Gater Event     (write)
 *      VC Receive      (read)
 *      PHB Allocation  (write)
 *
 * This module contains the following routines:
 *      ZNVBEV_boot_event
 *      ZNVBEP_boot_event_posted
 *      ZNVGEV_gater_event
 *      ZNVGEP_gater_event_posted
 *      ZNVRCV_vc_receive
 *      ZNVRCP_vc_receive_posted
 *      ZNVPHB_phb_alloc
 *      ZNVPHP_phb_alloc_posted
 *      ZNVIRC_issue_vc_receive
 *      ZNVIPH_issue_phb_alloc
 *      ZNVPHR_replenish_phb
 *      ZNVPHE_phb_alloc_error
 *      ZNVGEE_gater_event_error
 */

#include
#include
#include "hic.h"
#include "sgater.h"
#include "sxnp.h"
#include "sphd.h"
#include "gate_mgr.h"
#include "sne.h"
#include "vix.h"
#include "hvx_rqb.h"
#include "hvxerr.h"
$MODULEID (ZNVIOR,


B39

/*
 * ZNVIRC_issue_vc_receive     Issue VC Receive
 *
 * This routine initializes an empty gater buffer and issues a VC
 * Receive to the Unix Network User Layer (PNetX).
 */
ZNVIRC_issue_vc_receive (etrbptr)
ETRB *etrbptr;
{ /* issue_vc_receive */
    struct hvx *hvxptr;
    struct elrn_iorb *eiorbptr;
    struct sgater *gaterptr;
    unsigned int status;

    hvxptr = SXNP_PTR (YNVHVX);
    eiorbptr = etrbptr + 1;
    gaterptr = eiorbptr + 1;

    /* Issue VC Receive to PNetX. */
    memset (gaterptr, (int) 0, sizeof (struct sgater));
    if ((status = ZNVRCV_vc_receive (etrbptr)) != 0) {
        ZNVERR_report_error (hv25_rcv, status);
        return (1);
    }

    hvxptr->hv_vcrcv += 1;
    return (0);
} /* issue_vc_receive */

B40

/*
 * ZNVIPH_issue_phb_alloc      Issue PHB Allocation Event
 *
 * This routine allocates memory for packet header blocks and issues
 * a PHB Allocation Event to the Unix Network User Layer (PNetX).
 */
ZNVIPH_issue_phb_alloc ()
{ /* issue_phb_alloc */
    struct hvx *hvxptr;
    struct sphd *phbptr;
    struct sphd **phbnxt;
    ETRB *etrbptr;
    unsigned int status;

    hvxptr = SXNP_PTR (YNVHVX);
    if (hvxptr->hv_phbx > PHB_LOW) {
        return (0);
    }

    phbptr = NULL;
    phbnxt = &phbptr;

    /* Allocate memory for trb and iorb. */
    if (ZNGGTB_get_buffer (((sizeof (ETRB) + 1) / 2) +
                           ((sizeof (struct elrn_iorb) + 1) / 2),
                           &etrbptr) != 0) {
        ZNVERR_report_error (hv25_mem, 0);
        return (1);
    }

    /* Allocate memory for Packet Header Blocks. */
    while (hvxptr->hv_phbx < PHB_HIGH) {
        if ((*phbnxt = ZNGGPH_get_packet_header ()) == NULL) {
            ZNVERR_report_error (hv25_mem, 0);
            break;
        }
        phbnxt = &((*phbnxt)->ph_lnk);
        hvxptr->hv_phbx += 1;
    } /* endloop */
    if (phbptr == NULL) {
        ZNGRTB_return_buffer (etrbptr);
        return (1);
    }

    /* Issue PHB Allocation Event to PNetX. */
    if ((status = ZNVPHB_phb_alloc (etrbptr, phbptr)) != 0) {
        ZNVPHE_phb_alloc_error (etrbptr, status);
        ZNGRTB_return_buffer (etrbptr);
        return (1);
    }

    hvxptr->hv_phbc += 1;
    return (0);
} /* issue_phb_alloc */

/*
 * ZNVPHR_replenish_phb        Replenish PHBs
 *
 * This routine computes the number of Packet Header Blocks used by
 * PNetX and allocates replacement Packet Header Blocks.
 */
void ZNVPHR_replenish_phb (gaterptr)
struct sgater *gaterptr;
{ /* replenish_phb */
    struct hvx *hvxptr;
    struct sphd *phbptr;

    hvxptr = SXNP_PTR (YNVHVX);

    switch (gaterptr->gt_fnc) {
    case gtfccf:                /* connect confirm */
    case gtfdin:                /* disconnect indication */
    case gtfcin:                /* connect indication */
        if (gaterptr->gt_dta != NULL) {
            hvxptr->hv_phbx -= 1;
            ZNVIPH_issue_phb_alloc ();
        }
        break;
    case gtfdti:                /* data indication */
    case gtfexi:                /* expedited data ind */
        for (phbptr = gaterptr->gt_dta; phbptr != NULL;
             phbptr = phbptr->ph_lnk) {
            hvxptr->hv_phbx -= 1;
        }
        ZNVIPH_issue_phb_alloc ();
        break;
    } /* endswitch */
    return;
} /* replenish_phb */
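Taken together, ZNVIPH_issue_phb_alloc and ZNVPHR_replenish_phb implement a low/high watermark scheme for the packet header block pool: when the count of blocks handed to PNetX drops to the low mark, allocate and chain blocks until the high mark is reached. The following compressed, self-contained sketch of that scheme is ours; the watermark values and the malloc-based pool are stand-ins for the listing's buffer routines.

/* Sketch of the low/high watermark replenishment; stand-in names. */
#include <stdlib.h>

#define PHB_LOW   4
#define PHB_HIGH 16

struct phb { struct phb *ph_lnk; };    /* chained like the listing's ph_lnk */

static int phb_count;                  /* blocks currently owned by PNetX */

struct phb *replenish_phbs(void)
{
    struct phb *head = NULL;
    if (phb_count > PHB_LOW)           /* still enough outstanding */
        return NULL;
    while (phb_count < PHB_HIGH) {     /* top the pool back up */
        struct phb *p = malloc(sizeof *p);
        if (p == NULL)
            break;                     /* allocation failure ends the loop */
        p->ph_lnk = head;              /* chain new block at the head */
        head = p;
        phb_count++;
    }
    return head;                       /* chain to hand to PNetX */
}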

B42

/*
 * ZNVPHE_phb_alloc_error      PHB Allocation Error
 *
 * This routine performs the error processing for a PHB Allocation
 * Event issued to the Unix Network User Layer (PNetX).  This routine
 * may be invoked from the requesting routine or from the post back
 * routine for PHB Allocation Events.
 */
void ZNVPHE_phb_alloc_error (etrbptr, status)
ETRB *etrbptr;
unsigned int status;
{ /* phb_alloc_error */
    struct hvx *hvxptr;
    struct elrn_iorb *eiorbptr;
    struct iorb *iorbptr;
    struct sphd *phbptr;

    hvxptr = SXNP_PTR (YNVHVX);
    eiorbptr = etrbptr + 1;
    iorbptr = ((struct iorb *) (eiorbptr + 1)) - 1;
    ZNVERR_report_error (hv25_phb, status);
    while (iorbptr->iorb_adr != NULL) {
        phbptr = iorbptr->iorb_adr;
        iorbptr->iorb_adr = phbptr->ph_lnk;
        ZNGRPH_return_packet_header (phbptr);
        hvxptr->hv_phbx -= 1;
    }
    return;
} /* phb_alloc_error */

B43

/*
 * ZNVGEE_gater_event_error    Gater Event Error
 *
 * This routine performs the error processing for a Gater Event issued
 * to the Unix Network User Layer (PNetX).  This routine is called
 * from the post back routine for Gater Events.
 */
void ZNVGEE_gater_event_error (etrbptr, status)
ETRB *etrbptr;
unsigned int status;
{ /* gater_event_error */
    struct hvx *hvxptr;
    struct iorb *iorbptr;
    struct elrn_iorb *eiorbptr;
    struct sgater *gaterptr;

    eiorbptr = etrbptr + 1;
    iorbptr = ((struct iorb *) (eiorbptr + 1)) - 1;
    gaterptr = iorbptr->iorb_adr;
    ZNVERR_report_error (hv25_gev, status);
    switch (gaterptr->gt_fnc) {
    case gtfcrq:                /* connect request */
        ZNVIDX_issue_dis_ind (gaterptr);
        ZNVDVX_dealloc_vc_index (iorbptr->iorb_dvs);
        break;
    case gtfcrp:                /* connect response */
        ZNVDVX_dealloc_vc_index (iorbptr->iorb_dvs);
        break;
    case gtfdtr:                /* data request */
    case gtfexr:                /* expedited data request */
        ZNVIRL_issue_rel_sdu (gaterptr);
        break;
    } /* endswitch */
    return;
} /* gater_event_error */

B44

        TITLE   ZNVENT, HVX PNET ENTRY POINT ROUTINES
        libm    disa_lib
        libm    os_lib
* Bull Confidential and Proprietary
        TEXT    'ZNVENT'
        dc      z'4100'                 Release 4.1, Revision 0
*
* Description:
*   This module contains assembly language routines for gate manager
*   wakeup and for PNetX event completion.  These routines create the
*   stack and work area for the C language environment.
*
* Xequ's.
        nlist
        libm    z3tcb
        libm    z3rctlist
        libm    $gmgr
        libm    $gate
*
* Xdefs.
        xdef    znvgdh                  gate descriptor
        xdef    znvpch                  patch area
        xdef    znviop                  PNetX event completion
* Xlocs.
        xloc    znvgtr                  gate request entry
*
* HVX Pseudo Network Layer gate descriptor.
*
        $gate   nwc, x25, X25, znvgtr, drm, rqt, cc, 0, znvent
*
* HVX Pseudo Network Layer Gate Manager Task Entry.
*   Allocate C work area and stack within MC2 stack.
*   Dispatch to gate manager task prologue.
*
znvent  equ     $
        ldb     $b6, =null-x'18'        tcb
        ldb     $b6, tcb.xnp            osif disa root
        ldb     $b7, xnp.ynbmc2         mc2 stack
        ldr     $r1, $b1                mc2 stack size
        ldb     $b1, =null              setup C work area
        stb     $b1, -$b7               slcow
        stb     $b1, -$b7               cwa
        stb     $b1, -$b7               prev stack area
        stb     $b1, -$b7               next stack area
        adv     $r1, -11                setup C stack
        str     $r1, -$b7               max stack size
        cl      -$b7                    current stack size
        clt     $b7                     load t register
        ldv     $r7, 2                  frame size (2 words)
        acq     $b7, $r7                acquire stack (frame 1)
        stb     $b5, $b7                return address
*
* Setup argument list for zngtmg.
* Dispatch to gate manager task prologue.
*
        ldv     $r7, 3                  frame size (3 words)
        acq     $b7, $r7                acquire stack (frame 2)
        lab     $b4, $b7.3
        cl      -$b4

B45

        lab     $b5, znvgdh             gate descriptor
        stb     $b5, -$b4
        ldv     $r5, 1                  argument count
        lnj     $b5, zngtmg             gate manager task prologue
        rld     $b7                     relinquish stack (frame 2)
        lab     $b1, =null
        ldt     $b1                     clear t register
        ldr     $r1, $sr1               return status
        ldb     $b5, $b7                return to caller
        jmp     $b5
*
* PNetX Event Completion.
*   Allocate C work area and stack within MC2 stack.
*   Dispatch to specified PNet post back routine.
*
znviop  equ     $
        ldb     ...                     request block
        ldb     ...                     tcb
        ldb     ...                     osif disa root
        ldb     ...                     mc2 stack
        ldr     ...                     mc2 stack size
        stb     ...                     return address
        str     ...                     mc2 stack size
        lnj     ...                     dequeue irb
        ldr     ...                     return status
        lnj     ...                     post request
        ldr     ...                     mc2 stack size
        ldb     $b1, =null              setup C work area
        stb     $b1, -$b7               slcow
        stb     $b1, -$b7               cwa
        stb     $b1, -$b7               prev stack area
        stb     $b1, -$b7               next stack area
        adv     ...                     setup C stack
        str     ...                     max stack size
        cl      ...                     current stack size
        lbt     $sv.tsm2, $ez           0303 t register bit in save mask
        ldt     ...                     load t register
        ldv     $r1, 0                  frame size (0 length)
        acq     $b7, $r1                acquire stack
        ldv     ...                     frame size (3 words)
        acq     ...                     acquire stack
        stb     ...                     trb pointer
        ldb     ...                     post back routine address
        lab     ...                     argument list
        ldv     ...                     argument count
        lnj     ...                     call post back routine
        rld     ...                     relinquish stack frame
        lab     ...                     stack header + C work area
        ldt     ...                     clear t register
        ldb     ...
        jmp     ...                     return to caller
*
* Patch area.
*
znvpch  resv    100, 0
        end     znvent

B46

        TITLE   ZNVINI, HVX PNET INITIALIZATION
        libm    disa_lib
* Bull Confidential and Proprietary
        TEXT    'ZNVINI'
        dc      z'4100'                 Release 4.1, Revision 0
*
* Description:
*   This module performs initialization functions for the HVX Pseudo
*   Network Layer.
*
* Xequ's.
        libm    $gate
*
* Xdefs.
        xdef    znvini
* Xlocs.
        xloc    znvgdh                  gate descriptor
* Xvals.
        xval    yngtat                  attach gate
        xval    ygx2c                   gate manager assembly to C
        xval    ynmlrn                  mc2 lrn
*
* HVX Pseudo Network Layer Initialization.
*   Setup wakeup task lrn in gate descriptor.
*   Attach gate descriptor.
*
        lab     $b4,
*
* Patch area.
*
        resv    50, 0
        end     znvini, znvini

APPENDIX C - PNETX

C1

/*
 * Name:    emulx25.c
 * Purpose: io server for X.25
 * Functions in this module: (see pnx_func.h for prototype declarations)
 *      void pnx_io ()       - main driver
 *      void pnx_handler ()  - signal handler
 */

/* general header files: */
#include "sys_head.h"
#include "emu_head.h"
#include "mqi_head.h"
#include "z3rct.h"
#include "zrb.h"
#include "z3irb.h"
#include
/* PNetX specifics: */
#include
#include "pnx_head.h"
#include "pnx_vccb.h"

extern char *base;
extern void sig_sak ();
extern struct EMU_OPTS opt;
extern int argc_sv;
extern char **argv_sv;
extern char **envp_sv;
extern unsigned int verbose;

/* Global fields specific to PNetX */
struct pnx_global *PnetxGlobal;
int TraceShmid;
sigset_t pnx_sigs;              /* to catch (or block) SIGCHLD, SIGUSR1,
                                 * SIGUSR2, SIGALRM */
struct sigaction pnx_actions;
int pnx_internal_sig_recvd;
int pnx_clock_running;

/********** pnx_io **********/
/* This module is the initial entry point for the PNetX (X.25) driver.  It
 * is called once, by emumnt, when the 'x25' directive is encountered in
 * the CLM_HVX file.  After initialization, it simply remains in a forever
 * loop, waiting for a SIGUSR1 (indicating an incoming io request from
 * PNet), a SIGCHLD (indicating the termination of the vc clear process),
 * or an incoming X.25 event.
 */
void pnx_io ()
{
    int status;
    int signo;
    int counter_id;             /* current x25 counter ID */
    int i;
    struct vccb *vccbp;
    sigset_t zeromask;
    struct passwd *pwent;
    char *username;

#ifdef PNX_DEBUG
    syslog (LOG_DEBUG, "Entered pnx_io");
#endif
    close_fd ();


            /* error case */
#ifdef PNX_DEBUG
            pnx_ERR_report_error (JUST_HVX, LOG_DEBUG, (EMU_X25 + 3),
                                  NULL, PNX_SYS_ERR);
#endif /* PNX_DEBUG */

            continue;
        } /* END OF sigsuspend () PROCESSING */

        /* START x25_ctrl_wait () PROCESSING */
#ifdef PNX_DEBUG
        syslog (LOG_DEBUG, "pnx_io - x25_ctrl_wait (), %d ctrs",
                PnetxGlobal->num_ctrs);
#endif
        status = x25_ctrl_wait (PnetxGlobal->num_ctrs,
                                PnetxGlobal->counters->x25ctrs);

        /* block signals while processing */
        sigprocmask (SIG_BLOCK, &pnx_sigs, NULL);
#ifdef PNX_DEBUG
        syslog (LOG_DEBUG, "pnx_io - after wait - sts %d", status);
#endif
        if (status < 0) {
            if (pnx_internal_sig_recvd) {
                /* This flag only means that SIGCHLD or SIGUSR1 was received
                 * sometime since last pass through loop.  It doesn't mean
                 * that the negative return status is because of SIGCHLD or
                 * SIGUSR1 - what do we do? */
                pnx_internal_sig_recvd = 0;
                continue;       /* we already handled the signal */
            }
            else {
                /* error case - will require more handling */
#ifdef PNX_DEBUG
                pnx_ERR_report_error (JUST_HVX, LOG_DEBUG, (EMU_X25 + 1),
                                      NULL, PNX_API);
#endif /* PNX_DEBUG */
            }
            continue;           /* forever loop */
        }

        /* By now, we know by non-negative status that the x25_ctrl_wait
         * returned with an X.25 event */
        if (pnx_x25_wakeup (status) != 0) {
            pnx_ERR_report_error (JUST_HVX, LOG_ERR, (EMU_X25 + 2),
                                  NULL, PNX_ERROR);
        }
    } /* forever loop */
} /* pnx_io */
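The loop above depends on a classic signal discipline: keep SIGUSR1/SIGCHLD blocked while processing and unblock them only inside an atomic wait, so a signal arriving between the check and the wait cannot be lost. A self-contained sketch of that pattern follows; it is reduced to stubs and is not the driver's actual code.

/* Sketch of the block-then-sigsuspend pattern; stub handler and loop. */
#include <signal.h>
#include <unistd.h>

static volatile sig_atomic_t got_signal;

static void on_signal(int sig) { (void)sig; got_signal = 1; }

int main(void)
{
    sigset_t blocked, waitmask;
    struct sigaction sa;

    sa.sa_handler = on_signal;
    sigemptyset(&sa.sa_mask);
    sa.sa_flags = 0;
    sigaction(SIGUSR1, &sa, NULL);
    sigaction(SIGCHLD, &sa, NULL);

    sigemptyset(&blocked);
    sigaddset(&blocked, SIGUSR1);
    sigaddset(&blocked, SIGCHLD);
    sigprocmask(SIG_BLOCK, &blocked, NULL);  /* normally blocked */

    sigemptyset(&waitmask);                  /* unblocked only in the wait */
    for (;;) {
        if (!got_signal)
            sigsuspend(&waitmask);           /* atomic unblock-and-wait */
        got_signal = 0;
        /* ... dequeue and post IORBs, reap VC-clear children ... */
    }
}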

/* This function handles those signals that PNetX is specifically interested
 * in - SIGUSR1, indicating an inbound IORB from PNet, and SIGCHLD,
 * indicating that a VC Clear process has completed.
 */
void pnx_handler (int sig_type)
{
    pnx_internal_sig_recvd = 1;
#ifdef PNX_DEBUG
    syslog (LOG_DEBUG, "PNetX sig handler - %d", sig_type);
#endif
    switch (sig_type) {
    case SIGCHLD:               /* death of child */
        if (!PnetxGlobal->x25_initing) {
            pnx_vc_clear_complete ();
        }
        break;
    case SIGUSR1:
        /* PNet event.  NOTE - no events are posted here.  They will be
         * posted by mqi_input (), since we may handle more than one IORB */
        mqi_input ();
        break;
    case SIGUSR2:
        /* SIGUSR2 received from emu_main.  Clean up all the resources and
         * exit */
        pnx_cleanup ();
        break;
    case SIGALRM:
        /* This is our clock tick.  Used to tell us when to retry an X.25
         * link (since the link layer isn't letting us know when it is
         * available, as in HVS) */
        pnx_clock_tick ();
        if (pnx_clock_running) {
            alarm (5);
        }
        break;
    } /* end switch */
#ifdef PNX_DEBUG
    syslog (LOG_DEBUG, "exit PNetX sig handler");
#endif
} /* end pnx_handler */

/*
 * Name:    pnx_gin.c
 * Purpose: PNet to PNetX interface - GATERs sent to PNetX
 *          NOTE - AIX ONLY
 * Functions in this module: (see pnx_func.h for prototype declarations)
 *      int pnx_UCR_process_con_req ()
 *      int pnx_CRQ_process_con_req ()
 *      int pnx_SVC_svc_con_req ()
 *      int pnx_PVC_pvc_con_req ()
 *      int pnx_UCP_process_con_resp ()
 *      int pnx_UDR_process_dis_req ()
 *      int pnx_UDT_process_data_req ()
 *      int pnx_URS_process_reset_req ()
 *      int pnx_ACM_admin_command ()
 */

#include "sys_head.h"
#include "emu_head.h"
#include "macro.h"
#include "mqi_head.h"
#include
#include
#include
#include
#include "hvx_gater.h"
#include "hvx_line.h"
#include "x25ach.h"
#include "x25dia.h"
#include "x25vcc.h"
#include "x25sta.h"
#include "x25nse.h"

extern char *base;
extern struct pnx_global *PnetxGlobal;

/********** pnx_UCR_process_con_req **********/
/* Process Connect Request
 *
 * This routine processes a connect request gater.  The parameters passed
 * in the connect request gater are used to create a virtual circuit
 * control block (VCCB) and to initialize relevant X.25 API data
 * structures.
 *
 * The user gate id, user connection id, and user receive credit are
 * passed in the fixed part of the gater and are used to update the VCCB.
 *
 * A local SAP name or local X.121 address is passed in the variable part
 * of the gater and is used to locate the local SNSAP table.  A remote
 * SAP name may also be passed in the variable part of the gater and will
 * be used to locate the remote SNSAP table.  In those cases where the
 * remote node is not configured, the remote X.121 address may be passed
 * in the variable part of the gater.  Optional user facilities which may
 * be sent in the cb_call struct are also passed in the variable part of
 * the gater.
 *
 * A pointer to the call user data to be sent in the call request is
 * passed in the gt_dta field of the gater.
 *
 * To connect to a permanent virtual circuit the logical channel number
 * assigned to the permanent virtual circuit is passed in the variable
 * part of the gater in addition to the local SAP name.
 */

pnx_UCR_process_con_req (struct GATER *gaterptr,


        switch (gaterptr->gt_var[i]) {
        case x25cga:            /* Calling X.121 address */
            cgaptr = &gaterptr->gt_var[i + 2];
            break;
        case x25lcn:            /* Logical channel number */
            lcgn = gaterptr->gt_var[i + 2];
            lcn = gaterptr->gt_var[i + 3];
            lcn_flg = 1;
            break;
        } /* end switch */
    } /* end loop */

#ifdef PNX_DEBUG
    syslog (LOG_DEBUG, "exit pnx_CRQ");
#endif
    if (lcn_flg == 0) {
        if (cgaptr == NULL) {
            cgaptr = &snsapptr->sn_addr[0];
        } /* end if */
        return (pnx_SVC_svc_con_req (snsapptr, gaterptr, cgaptr,
                                     nad_index));
    }
    else {
        return (pnx_PVC_pvc_con_req (snsapptr, gaterptr, lcgn, lcn,
                                     nad_index));
    } /* end if */
} /* pnx_CRQ_process_con_req */

/********** pnx_SVC_svc_con_req **********/
/* Process SVC Connect Request
 *
 * This routine processes a connect request gater which specifies creation
 * of a switched virtual circuit.
 *
 * The parameters passed in the connect request gater are used to create a
 * virtual circuit control block (VCCB) and initialize the X.25 API
 * data structures related to SVC Call.
 *
 * The user gate id, user connection id, and user receive credit are
 * passed in the fixed part of the gater and are used to update the VCCB.
 *
 * Optional user facilities which may be sent in the cb_call struct are
 * passed in the variable part of the gater.
 */
int pnx_SVC_svc_con_req (struct snsap *snsapptr, struct GATER *gaterptr,
                         unsigned char *cgaptr, ushort nad_index)
{

    struct vccb *vccbptr;

#ifdef PNX_DEBUG
    syslog (LOG_DEBUG, "pnx_SVC - NAD index is %d", nad_index);
#endif
    /* Create VCCB for switched virtual circuit. */
    if (pnx_MVC_make_vccb (&vccbptr, snsapptr) != 0) {
        return (-1);
    } /* end if */

    vccbptr->vc_snsap = snsapptr;
    vccbptr->vc_ulr = gaterptr->gt_slr;
    vccbptr->vc_uln = gaterptr->gt_sln;
    GET4 (vccbptr->vc_cxid, gaterptr->gt_sce);
    vccbptr->vc_urcvc = gaterptr->gt_crn;
    vccbptr->vc_xrcvc = gaterptr->gt_cre;
    vccbptr->vc_xsndc = vccbptr->vc_xrcvc;
    vccbptr->vc_ndxid = nad_index;

    /* Set initor/acceptor field to initiator (IN). */
    vccbptr->vc_init = 1;
    vccbptr->vc_inac = *(short *) "IN";

    /* Calling X.121 address */
    vccbptr->vc_cgna[0] = *cgaptr++;
    if (vccbptr->vc_cgna[0] != 0) {
        pnx_MNA_move_net_addr (cgaptr, 0, &vccbptr->vc_cgna[1], 0,
                               vccbptr->vc_cgna[0]);
    } /* end if */

    /* Increment vc count and set disac state to used for local network
     * subscription. */
    snsapptr->sn_numvc += 1;
    snsapptr->sn_dsac = gtused;

    /* initialize API cb_call structure and send call request */
    if (pnx_SCR_send_call_req (vccbptr, gaterptr) != 0) {
        pnx_CVC_close_vc (vccbptr);
        pnx_RVC_release_vccb (vccbptr);
        return (-1);
    } /* end if */
#ifdef PNX_DEBUG
    syslog (LOG_DEBUG, "exit pnx_SVC");
#endif
    return (0);
} /* pnx_SVC_svc_con_req */

/********** pnx_UCP_process_con_resp **********/
/* Process Connect Response
 *
 * This routine processes a connect response gater.  The parameters passed
 * in the connect response gater are used to update the VCCB and to load
 * the X.25 API data structures used to send the Call Accepted packet to
 * the remote station.
 *
 * The user gate id, user connection id, and user receive credit are
 * passed in the fixed part of the gater and are used to update the VCCB.
 *
 * Optional user facilities which may be sent in the Call Accepted packet
 * are passed in the variable part of the gater.
 *
 * A pointer to the call user data to be sent in the Call Accepted packet
 * is passed in the data field of the gater (NOTE - it is NOT a Packet
 * Header Block).
 */
int pnx_UCP_process_con_resp (struct vccb *vccbptr, struct GATER *gaterptr)
{
    unsigned int i;
    struct snsap *snsapptr;
    unsigned char *udfbuf;
    short rc;

#ifdef PNX_DEBUG
    syslog (LOG_DEBUG, "pnx_UCP, VC=%lx", vccbptr);