4-04-40 Reassessing Client/Server Tools and Technologies

Lawrence K. Cooper
John S. Whetstone

Payoff

Client/server computing has not yet realized its promise of faster applications development, reduced maintenance costs, and enterprise scalability. This article reviews client/server technologies from the perspective of whether they cause developers to focus more on the tool than on the business problem, helping IS and business managers appreciate what each technology can and cannot do. Understanding that client/server computing is more a concept than a technology is key to the proper evaluation of the tools and strategies flooding the marketplace.

Introduction

During the past 10 years, client/server architecture and technologies have been hailed as spelling the demise of the mainframe-centric view of computing. Corporations purchased PCs by the truckload and bought myriad development tools, languages, and frameworks for developers and users alike. Products with descriptions such as GUI builders, front ends, gateways, back ends, and glue flooded the client/server marketplace. Applications were supposed to be developed faster, and maintenance costs were supposed to decrease.

Clearly this has not been the case: Client/server computing has proven to be both a boon and a bust. Although it provides more flexibility, it requires more computing resources, faster networks, and higher maintenance. According to a recent Gartner study, client/server computing costs often exceed mainframe-centric computing costs by up to 70%. Client/server computing also poses an abundance of data security problems. Current technologies are at a middle stage between the rigorous structures of mainframes and the total openness implicit in many of the new architectures.

These problems notwithstanding, client/server technology is a core building block for most corporate IT strategies. Even so, its true potential remains largely untapped. The Gartner Group estimates that 90% of all client/server applications deployed today are two-tier; the Standish Group puts that figure at 95%. Either way, nine out of every 10 client/server applications deployed use a model developed in the mid-1980s, and only one in 10 is based on the more recent three-tier, multitier, or tierless models.

The problem is that developers are spending far too much time on the technology and not nearly enough time on the business problems they are supposed to be solving. Computer Technology Research (CTR) Corp. reported that because client/server projects often do not scale from the workgroup to the enterprise, applications either lack the flexibility to meet the needs of the corporation as a whole or “fail to meet the demands of the software life cycle and make programmers work more than they should.”

Although corporations should not turn back the clock to the mainframe in the glass house, IS managers and their staffs require a more careful understanding of the client/server marketplace and of organizational needs before proceeding further into the murky world of client/server computing. After providing a brief overview of the current state of client/server models, this article discusses client/server technologies from two perspectives: technologies that force developers to focus more on the tool than on the business problem, such as the Distributed Computing Environment (DCE) and the Common Object Request Broker Architecture (CORBA), and technologies that developers can more readily use to build systems. Although many people immediately think of products like PowerBuilder and Visual Basic when client/server computing is mentioned, these products are not considered here because they are only capable of supporting the front end of a two-tier client/server solution.

Overview of Client/Server Models

The Two-Tier Model

The now-famous two-tier client/server model, first presented by the Gartner Group in the 1980s, still dominates the market. Within this computing model, the client and server are both hardware. The two-tier model provides several options varying from distributed presentation to distributed data management. In all cases, however, the only real choice is on which machine the various components of the presentation, logic, and data access layers are to be located. This hardware-oriented model places limits on the transfer of data and is the culprit behind the repetitive network bottlenecks that are the bane of many organizations. Yet the complexity of applications in today's business climate necessitates the transfer of massive amounts of information collected from disparate and distributed data sources.

The way in which an application is partitioned under this model is driven by hardware and locational decisions rather than by business-function or application-logic decisions. Many of the so-called client/server tools of today, such as PowerBuilder and Visual Basic, are merely presentation layers for back-end data bases. Although such tools are also used to place some of the application logic in the client, they do not allow application logic to be easily moved from a client application to a server application or from one hardware platform to another.
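To make the partitioning concrete, the following is a minimal sketch (in Java, using the JDBC API; the connection URL, credentials, and orders table are all hypothetical) of the two-tier pattern: a desktop client that holds both the presentation logic and a direct connection to the back-end data base.

```java
import java.sql.*;

// Two-tier sketch: the client program holds both the user-facing logic
// and a direct connection to the back-end data base. The JDBC URL,
// credentials, and ORDERS table are hypothetical.
public class TwoTierClient {
    public static void main(String[] args) throws SQLException {
        Connection con = DriverManager.getConnection(
                "jdbc:oracle:thin:@dbhost:1521:prod", "scott", "tiger");
        try {
            Statement stmt = con.createStatement();
            // The business rule (the WHERE clause) lives in the client,
            // so changing it means redeploying every desktop.
            ResultSet rs = stmt.executeQuery(
                    "SELECT order_id, total FROM orders WHERE total > 1000");
            while (rs.next()) {
                System.out.println(rs.getInt(1) + "\t" + rs.getDouble(2));
            }
        } finally {
            con.close();
        }
    }
}
```

Because the business rule is compiled into the client, changing it means redistributing the application to every desktop, which is precisely the maintenance burden described above.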

The Three-Tier Model

The three-tier architecture attempts to overcome the application partitioning and performance limitations of the two-tier model by providing a clear separation between the presentation, functionality, and data layers. The presentation layer uses a graphical user interface (GUI) to present information; the functionality layer performs the business logic and controls the flow of related transactions; and the data layer consists of the data sources that the functionality layer accesses. The data sources include data bases, legacy systems, data feeds, and file structures. Thus the application logic can access multiple data sources. Because the functionality layer can also be modified to meet changing requirements without changing all client-side applications, myriad configurations are possible to meet specific business problems.

Initial three-tier applications were often developed using the stored-procedure capabilities of the data base, which allowed business logic to be moved back and forth from the client application into the server application with relative ease. The skills IT staff need for developing three-tier applications using stored procedures do not differ substantially from the skills already acquired in developing and maintaining two-tier applications. Although stored procedures permit code to be reused across applications that are deployed on the same RDBMS, they do not allow code reuse across different vendors' RDBMSs.

The three-tier and multitier architectures require both IT management and staff to think in new ways. The new technologies that served as the catalyst for change are being enhanced and modified almost daily. No longer can an IS department expect to learn a single language or development methodology and be able to meet either short-run tactical or long-term strategic goals. To complicate matters further, many organizations are demanding that applications be conceived, specified, developed, and deployed and show a return on investment in six to 12 months.
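As an illustration of the stored-procedure style described above, here is a minimal sketch, again in Java with JDBC; the approve_order procedure and its parameters are hypothetical. The business rule lives in the RDBMS, so the client merely invokes it, but the procedure body itself is written in a vendor-specific language and cannot move to another vendor's RDBMS.

```java
import java.sql.*;

// Three-tier-via-stored-procedure sketch: the business rule lives in the
// RDBMS and the client only invokes it. The procedure name and parameters
// are hypothetical; the {call ...} escape is portable JDBC, but the
// procedure body (PL/SQL, Transact-SQL, ...) ties the rule to one vendor.
public class StoredProcCaller {
    public static void main(String[] args) throws SQLException {
        Connection con = DriverManager.getConnection(
                "jdbc:oracle:thin:@dbhost:1521:prod", "scott", "tiger");
        try {
            CallableStatement cs = con.prepareCall("{call approve_order(?, ?)}");
            cs.setInt(1, 4711);                        // IN: order id
            cs.registerOutParameter(2, Types.VARCHAR); // OUT: approval status
            cs.execute();
            System.out.println("status = " + cs.getString(2));
        } finally {
            con.close();
        }
    }
}
```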

Technology-Focused Architectures: DCE and CORBA

The challenge for most IT departments is not only choosing the right client/server architecture, but also making the right selections from an ever-growing array of software tools that promise simple solutions to complex problems. The problem is that many of the products and approaches are technology- rather than business-focused; in other words, IT staff must spend considerable time and expense to understand the technology before they can use it to solve the business problems of their organization.

The OSF's DCE

The OSF DCE supports three distributed computing models:

· The client/server model.

· The remote procedure call (RPC) model.

· The data-sharing model.

The client/server model permits applications to be split across multiple disparate platforms running multiple disparate operating systems. A common matching protocol between two applications or utilities is defined, allowing applications to pair up into unique client/server relationships. The RPC model permits programmers to write client applications that call server services without any specific knowledge of where the called server is located or how it is implemented. It involves the customized definition of client/server relationships between unique application modules. The data-sharing model facilitates seamless data distribution among the participating machines on a network.

DCE also provides services for distributing applications in heterogeneous hardware and software environments: Basic distributed services enable developers to build applications, and data-sharing services provide a distributed file system for seamless data distribution, diskless system support, and desktop computer integration. DCE enables applications distributed across disparate hardware and operating system platforms to appear as a single system to the user.

Most DCE implementations of client and server application relationships are accomplished using the DCE RPC interface. Although this method is highly effective, especially when coupled with DCE's threading capabilities, Kerberos security, and the DCE cell directory service (which provides named location independence for applications), it takes more time to develop DCE-based applications than to develop similar functionality using traditional client/server frameworks. It is the raw nature of applications development using the DCE RPC interface that has probably been the greatest impediment to DCE's widespread adoption. The DCE Light (DE-Light) product contains the same functionality as DCE; however, it requires fewer computing resources and is easier for developers to learn and use.
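DCE RPC applications are written in C against stubs generated from an IDL file, so a faithful example is beyond a short sketch. As an illustration of the same RPC idea in this article's other featured language, the following uses Java RMI (a different, Java-only RPC mechanism, not DCE): the client obtains a service by name (compare DCE's cell directory service) and calls it without knowing where it runs. The TimeService interface and the host name are hypothetical.

```java
import java.rmi.Naming;
import java.rmi.Remote;
import java.rmi.RemoteException;

// Hypothetical remote interface: the role played by the stub that an
// IDL compiler would generate in a DCE-style environment.
interface TimeService extends Remote {
    long currentTime() throws RemoteException;
}

// RPC-model sketch: the client looks the service up by name and then
// calls it as if it were a local object; location is transparent.
public class RpcClient {
    public static void main(String[] args) throws Exception {
        TimeService svc = (TimeService) Naming.lookup("rmi://apphost/TimeService");
        System.out.println("server time: " + svc.currentTime());
    }
}
```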

CORBA

The Object Management Group's (OMG) Object Management Architecture (OMA) combines distributed computing with object-oriented computing to help build cooperative-processing applications within heterogeneous, distributed, networked environments. CORBA, which is based on the OMA, provides mechanisms by which objects transparently make requests and receive responses. The object request broker (ORB) provides interoperability between different applications developed in multiple languages, on multiple operating systems and multiple platforms, in a networked environment, and using multiple object systems. CORBA contains the following four main components (a brief client-side sketch follows the list):

· Object Request Broker. The ORB is responsible for finding object implementations for a particular request, preparing object implementations to receive requests, and communicating the data in the request. Moreover, during a request, the ORB locates the appropriate implementation code, transmits parameters and transfers control to the object implementation, and then transfers control back to the client.

· Object services. Object services handle the housekeeping chores such as creating, deleting, or copying objects and handling their persistence attributes.

· Common facilities. Common facilities provide the functions that can be shared by many applications, such as error handling, printing, online help, and reusable user interfaces.

· Application objects. These are the actual applications, developed using custom code and the common facilities.
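The promised sketch follows: a minimal CORBA client in Java, assuming an ORB that supplies the standard org.omg.* classes (as older JDKs did). It shows only the ORB bootstrap and a naming-service lookup; a real client would then narrow the resolved reference to a stub type generated from the service's IDL definition.

```java
import org.omg.CORBA.ORB;
import org.omg.CosNaming.NameComponent;
import org.omg.CosNaming.NamingContext;
import org.omg.CosNaming.NamingContextHelper;

// Minimal CORBA client sketch, assuming an ORB providing the standard
// org.omg.* classes. "Hello" is a hypothetical service name registered
// with the naming service by some server.
public class CorbaClient {
    public static void main(String[] args) throws Exception {
        ORB orb = ORB.init(args, null);          // bootstrap the ORB
        NamingContext naming = NamingContextHelper.narrow(
                orb.resolve_initial_references("NameService"));
        NameComponent[] path = { new NameComponent("Hello", "") };
        org.omg.CORBA.Object obj = naming.resolve(path);
        // A real client would narrow obj to an IDL-generated stub type
        // and invoke operations on it; here we only show the lookup.
        System.out.println("resolved object reference: " + obj);
    }
}
```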

Management Considerations

Implementing DCE. IS managers planning to implement DCE in the enterprise should consider the following:

· DCE is a relatively complex, all-or-nothing proposition. Defining, setting up, and administering a DCE cell or set of cells requires a great deal of overhead. Trained DCE system administrators are rare and expensive.

· All applications need to be redeveloped and redeployed into the DCE environment, which is best suited to a large number of applications and users.

· DCE's threading capabilities involve process threads rather than operating system threads and thus yield no real performance gains.

· DCE is not fully supported on all platforms.

· DCE's complexity aside, products like Entera are available that facilitate three-tier client/server relationships by allowing applications deployment across multiple systems. Such products facilitate better resource utilization and enhance the scalability of applications.

Regardless of whether development environments such as Entera are employed, the scarcity and expense of trained systems administrators to manage the DCE environment must be considered. IS managers planning to implement the raw OSF DCE environment should hire experienced DCE software engineers as mentors; these individuals are hard to find as well. Although products such as Entera greatly simplify the IS staff's work in using and implementing DCE-based client/server technology, staff training in both the tools and the technology itself, and the use of experienced mentors, greatly improve the chances for a successful deployment. The real difference between typical systems development and deployment under DCE is the ability to segment and distribute applications, which should in turn provide greater systems flexibility and efficiency.

Implementing CORBA. As with DCE, tools and technology training coupled with experienced mentors are required for a successful distributed object implementation in CORBA. The key difference with a CORBA implementation, however, is the introduction of object orientation (OO). Staff not only need to learn a new technology, they may also have to learn a new philosophy of software specification, analysis, design, and implementation. If staff are not already trained in and currently developing applications using OO methodologies, IS managers should not move directly from traditional development practices and deployment into object orientation and CORBA. Instead, they should consider targeting small development projects where OO methodologies, tools, and programming languages can be employed, and use mentors extensively. When staff are experienced OO developers, the move to CORBA-based architectures is fairly easy. Even so, IS managers should first ensure that staff have been truly designing and developing OO applications and not merely using C++ as a so-called strongly typed C.

DCE facilitates applications development in a variety of procedural and object-oriented languages and allows applications to interoperate with one another. It also allows applications to be distributed across heterogeneous networked platforms. CORBA, with its ORB and infrastructure of services, does the same for purely object-oriented applications and services, providing for the distributed objects identified in the new Gartner Group client/server model.

Software Agent Technologies

The concept of software agents has been around for some time, but Oracle Corp. was the first to turn it into reality with its Mobile Agents technology. Although Oracle developed the technology to handle the bottlenecks of mobile computing, it is its potential for use in n-tier client/server applications that presents important opportunities.

Oracle Mobile Agents

According to Oracle, Mobile Agents is an applications development environment that extends the client/server architecture into mobile systems. It combines two key technologies:

· An applications messaging infrastructure for a variety of low-speed networks commonly available to mobile workers.

· An applications development interface allowing many of today's most popular Windows tools to easily access mission-critical corporate data services.

Oracle claims that Mobile Agents technology facilitates fast development and deployment of applications that automate mobile workers who use wireless networks, phone lines, and corporate LANs. The Mobile Agents architecture introduces the concept of client-agent-server, its own version of a three-tier client/server architecture. The primary difference between the client-agent-server model and the two-tier client/server model is that the client and agent combine to perform the work done by the client in the two-tier environment. In this scenario of client/server computing, the client applications focus on user interface and navigation, and the agents perform the data transactions and business logic. The clients communicate with an agent through the message manager and the message gateway, which passes the client message to the appropriate agent to perform actions on behalf of the client. The results of the agent's work are then sent back as a single message to the client using the same message-handling mechanism.

The Mobile Agents architecture reduces overall network traffic because the data transactions between the agent and the server are conducted on the server platform. In traditional client/server computing, this interaction occurs across the network link. The client-agent-server concept thus maximizes transaction performance while minimizing use of the client's communications links.
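The following is a minimal, self-contained sketch of the client-agent-server flow; it illustrates the message pattern only and is not Oracle's actual Mobile Agents API. All class names are hypothetical. The point is that the client exchanges one request and one reply with the agent, while the chatty, multi-row interaction happens on the agent's side of the slow link.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical client-agent-server sketch (not Oracle's Mobile Agents API).
// The client sends a single request message over a slow link; the agent,
// co-located with the data, performs the chatty work and replies once.
class Message {
    final String request;
    final List<String> results = new ArrayList<>();
    Message(String request) { this.request = request; }
}

class Agent {
    // Stands in for many round trips to the data base server, all of
    // which happen on the server side of the slow client link.
    Message handle(Message m) {
        for (int i = 1; i <= 3; i++) {
            m.results.add("row " + i + " for " + m.request);
        }
        return m;
    }
}

public class ClientAgentServer {
    public static void main(String[] args) {
        Agent agent = new Agent();                  // reached via the message gateway
        Message reply = agent.handle(new Message("open orders"));
        reply.results.forEach(System.out::println); // one reply carries all rows
    }
}
```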

Management Considerations

Although the Mobile Agents technology is likely to have a substantial impact, the release of the message manager for UNIX and other platforms will unleash its real potential. Coupled with Oracle's support for DCE through its Oracle7 data base engine, the software agent technology is another method of facilitating distributed object-like computing. Software agents, like the objects in object-oriented technology, are usually defined to perform a unique task. The primary difference between agents and objects is that client and software agent applications are developed using development tools currently in use, such as client GUI developer tools. Procedural languages such as C are used to develop the background agents. Portions of applications are also developed using Oracle's stored procedures and Oracle Power Objects. Agents are also developed that interact with other agents and with the World Wide Web through Oracle's Web interface.

The open question regarding Mobile Agents is just how widespread the technology will become. Many technologies that preceded it have fallen prey to too little market acceptance and slowly disappeared.

New Trends in Client/Server Technologies

This section examines two relatively new client/server technologies and the use of an existing technology in new ways. The first new technology, Sun Microsystems' Java programming language, is an architecture-neutral language designed specifically for applications intended to run on networks. It is a major shift from portable languages such as C++ and from frameworks such as DCE that rely on RPC-type technologies for network enabling of applications. The second new technology is Antares Alliance's ObjectStar, an enterprisewide client/server tool. Finally, intranets, which are self-contained corporate Internet systems, utilize the technologies that enabled the Web in new ways.

Java

Sun Microsystems' Java programming language, started as part of a larger project to develop advanced software for consumer electronics, is similar to C++. Unlike C++ and other programming languages, however, Java was designed to be architecture-neutral and network-ready. Java applets are developed on any platform and run on any other platform without modification; there are no implementation-dependent aspects of the language specification. To enable a Java application to operate anywhere on the network, the compiler generates an architecture-neutral object file format: the compiled code is executable on many processors, given the presence of the Java runtime system. All that is needed to run Java applications are Java-enabled Web browsers such as Netscape.

The explosion of Web use over the past three years makes a development language like Java all the more important. Sun Microsystems has been busily licensing Java to industry vendors (i.e., software, hardware, and browser vendors), including Microsoft, IBM, and Netscape Communications. Key ingredients in Java's meteoric rise in usage include its similarity to C++, its network-ready features, and its ability to integrate with corporate applications and frameworks. Sun has integrated Java with its CORBA product (i.e., Neo), enabling distributed applications over any network while providing access to corporate applications and data bases. Although Java is most commonly associated with the Internet and the Web, it is equally adaptable to an organization's internal IT infrastructure. Its ability to function seamlessly with disparate platforms and back-end application frameworks (through Neo) facilitates the evolution of client/server applications to include the Internet and intranets.
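A minimal applet, using the classic java.applet API of the era, shows the write-once model: the same compiled .class file runs in any Java-enabled browser regardless of processor or operating system.

```java
import java.applet.Applet;
import java.awt.Graphics;

// Minimal applet sketch: compiled once, the same .class file runs in any
// Java-enabled browser; nothing here depends on the client's hardware or OS.
public class HelloApplet extends Applet {
    public void paint(Graphics g) {
        g.drawString("Hello from a platform-neutral applet", 20, 20);
    }
}
```

A Web page would load it with a tag such as <applet code="HelloApplet.class" width="300" height="50"></applet>; the browser's Java runtime does the rest.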

ObjectStar

ObjectStar from Antares Alliance, Inc., is rated the number-one enterprisewide client/server product by IDC and other industry research organizations. IDC defines enterprisewide tools as those that facilitate the development of applications at the highest end of the size and complexity spectrum. Features needed to gain this ranking include the ability to interface with mainframe applications and data, performance monitoring and optimization, and the maintainability and flexibility of applications design. Some key competitive advantages of ObjectStar include:

· A rules-oriented approach that most closely resembles the business rules that most organizations are used to defining.

· The ability for data base, screen, and report tables to share the same verbs (16 in total) and rules.

· A layered approach that allows each layer (i.e., presentation, logic, and data) to be changed independently of the other layers. There are certain constraints, such as the inability to change the definition of a data base table without changing the rules that access that table.

· Close ties to the relational data model, which make the transition easier for developers of traditional applications than a migration to technologies such as DCE or CORBA.

· A close relationship to the data-driven approach prevalent in object-based and object-oriented technologies that enables a relatively painless transition for those who have developed applications using the newer technologies.

· A fully integrated development and execution environment that facilitates rapid prototyping and moveability (as opposed to portability) of code from one platform (i.e., hardware and operating system) to another.

· The ability to develop mainframe 3270-based applications on inexpensive UNIX or Windows NT platforms and then to deploy them on a mainframe when they are put into production, which keeps mainframe cycle costs down during development.

Although ObjectStar has not yet gained widespread brand or industry recognition, it is perhaps one of the most important technologies available for many large corporations. Its ability to interface with legacy applications and data bases such as IDMS with the same ease that it interfaces with newer data base technologies such as Oracle or Sybase has caused some corporations to use ObjectStar as a conduit between the two data base worlds. Recent additions to ObjectStar, such as a Windows application builder, true distributed capabilities, and widespread platform availability enabling development on one platform and deployment on another (excluding platform-specific code such as Windows), make ObjectStar a contender worth considering. Release 3.0 of ObjectStar enables a GUI running on a Windows 3.1 or Windows 95 platform to be served by a Windows NT machine. This machine connects in turn to larger data and applications servers on a combination of UNIX servers from different vendors, which themselves are connected to a back-end mainframe. In all cases rules code can be moved between any of the servers and run unchanged.

Intranets

Stephen L. Telleen of Amdahl Corp. (which coined the term intranet) defines the distinction between the Internet and an intranet as simply which side of the firewall one is on: outside the corporate firewall is the Internet, inside is the company's intranet. Intranets use the same technologies as the Web (HTTP servers and browsers).

On the Internet, all data and information is accessed through hypertext links. The Internet and its browsers are becoming ubiquitous (there are more than 30 million Internet users, and the number is growing every day) and have completely revolutionized the way information is organized, stored, and presented. The functionality of the platform-neutral Web browsers is extended by add-ins to accommodate electronic mail, multimedia, real-time interactive chat (IRC), and a host of other functions. Employing HTML through a Common Gateway Interface (CGI) enables connection to back-end legacy data bases that are accessed in real time (most Web sites still contain static rather than dynamic information on their pages). Many companies are now using these technologies to capture customer orders or populate their Web sites with up-to-the-minute information.

Although the Internet facilitated external communications and advertising, intranets facilitate internal communications and empower people. They also permit real-time videoconferencing and multimedia without concern for physical location. Whereas CGI facilitated dynamic loading of information into HTML, the Java add-in to Web browsers enables applications to be dynamically distributed. Java also enables rapid prototyping of scalable applications, as well as links to data bases and other types of corporate information holdings. HTML lets developers build these links without concern for the platform or type of application being used.

Arguably, the most important advantage of an intranet is the ease of navigating (i.e., finding information on) the corporate network. The user's home page comes preloaded with all the necessary information links through a browser to applications, data bases, E-mail, and an internal company bulletin board. The system administrator can modify the user's account at any time to either enhance or restrict access within the network.

One of the problems with the Internet and intranets is the expense of reequipping a corporation's PCs with sufficient hardware and memory resources to run the operating systems needed to support the Web browsers. The availability of inexpensive devices for accessing Web-enabled applications will increase acceptance of the intranet. Oracle's Network Computer (NC) Reference Profile 1 describes general hardware specifications, Internet protocols, Web standards (e.g., HTML, HTTP, and Java), E-mail protocols, multimedia formats, and security features. The NC, retailing for around $500, will enable corporations to provide staff with easy access to the corporate information base. New applications can then be developed, or existing applications updated and made available, without expensive hardware upgrades or the overhead that traditionally accompanies the release of new software upgrades throughout the corporation.
Despite the network computer's potential, the success of the device will ultimately be determined by network and data security issues, the ability of Oracle and other vendors to keep it open, and the popular view that the NC is the antithesis of what a PC is all about (some have likened it to a 3270 terminal).
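Returning to the CGI mechanism mentioned in the intranet discussion above: a CGI program is simply an executable that the HTTP server runs once per request, reading the query string from its environment and writing an HTTP response to standard output. CGI programs of the era were usually written in C or Perl; the sketch below uses Java for consistency with the rest of this article, and the order-status example is hypothetical.

```java
// Minimal CGI sketch (hypothetical deployment): the HTTP server runs this
// program for each request; it reads the query string from the environment
// and writes an HTML page, which could just as easily be produced from a
// live data base lookup as from static text.
public class OrderStatusCgi {
    public static void main(String[] args) {
        String query = System.getenv("QUERY_STRING"); // e.g. "order=4711"
        System.out.println("Content-type: text/html");
        System.out.println();                         // blank line ends headers
        System.out.println("<html><body>");
        System.out.println("<p>Status for " + (query == null ? "(none)" : query)
                + ": shipped</p>");
        System.out.println("</body></html>");
    }
}
```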

Management Considerations

Implementing Java. Although the Java language and the tools available to assist developers are in their infancy, the importance of Java in developing corporate client/server applications and its widespread adoption by industry vendors cannot be overstated. In corporations where Internet access is readily available, it is no longer a question of whether developers will use Java but of when and how. The key for many organizations is to proactively manage their developers' use of the language.

Implementing ObjectStar. ObjectStar facilitates joint application development and rapid application development (JAD/RAD) perhaps as no other technology preceding it. Yet in some organizations using ObjectStar, that is also its greatest impediment to successful projects. It is important to remember that JAD/RAD is a technique and not a methodology, and ObjectStar is a tool and not a technique. Understanding the processes and steps in joint requirements planning, JAD, and finally RAD, and what they can and cannot provide for an organization's applications development exercise, is perhaps the most important realization that IS and business executives, developers, and users can achieve. There is also no substitute for sound development practices (regarding data and process modeling) and documentation. If a facilitator or trainer being considered for hire indicates that data or process modeling is no longer needed because ObjectStar is being used, an IS manager should seek the services of someone else.

Client/Server Computing: Conceptual or Physical?

This article has examined the origins and evolution of client/server architectures, related technologies, and management considerations. Now, through the use of a sample client/server application architecture, the conceptual nature of client/server computing is explored. This exercise holds the key to a true understanding of the tools and technologies available today.

The Client/Server Relationship

A client application is defined simply as any application that performs its duties by requesting the services of another application. A server application is defined as any application that provides services to other applications. Next, three concepts are borrowed from the object-oriented world and introduced to the client/server world. The first is that any single application can be a client to one application and a server to another. Thus a single application can exhibit both client and server characteristics depending on what it is doing. The second concept, closely related to the first, is that the relationship between a client and server is a transactional one. As such, once the transaction is completed, the client/server relationship between the two applications has terminated. The third concept is that, contrary to the belief held by some people, it is not necessarily true that the client in the client/server relationship has a presentation layer and runs on a PC.

Object-Oriented Client/Server Computing

In object-oriented systems, an object that makes a request is a client, and an object that receives the request and acts on it is a server. It has been said that a contract describes the ways in which objects interact with one another. This contract is a list of requests that a client can make of a server. Both the client and server must fulfill the contract: “The client by making only those requests the contract specifies, and the server by responding appropriately to those requests.”[97]

[97] R. Wirfs-Brock, B. Wilkerson, and L. Wiener, Designing Object-Oriented Software (Englewood Cliffs, NJ: Prentice-Hall, 1990).

This basic precept of object-oriented systems allows clients and servers to have interchangeable roles. In this view, each client or server can have many different contracts that each must fulfill, and each is responsible for fulfilling all contracts for which it is a server. It is useful to look at these roles in a sample system.

The Client as Server and Server as Client. Exhibit 1 depicts two applications: Application A is a traditional client-type application where there is a presentation layer that interacts with a user. The user requests the application to print a report. A sends the request to Print Server for handling. This creates a traditional client/server relationship: A, which has a presentation layer, is the client application that established a relationship or contract with Print Server. Next, Print Server asks Data Base Server to provide a list of available printers that print in the format the client application has requested. This is another client/server relationship, where Print Server is a client of Data Base Server, the server. The user then selects an incorrect printer type for the format of the print request, and Print Server reports this to the Error Server, which in turn passes the resulting message back to the user's screen. Another client/server relationship is thus established. Likewise, Data Base Server or applications A or B could use Error Server directly for reporting errors.

Exhibit 1. Sample Client/Server Application Architecture

In application B, the first client in the chain is a background application that automatically generates summary reports to a printer at midnight every day. B simply sends a flag to the Print Server application indicating that it is running in background mode. It also passes the name of the printer to which the report must be sent, because it does not have the ability to interactively choose the appropriate printer from the list provided by Data Base Server.

This simple illustration demonstrates the three concepts borrowed from object orientation. Both examples show that a client can be a server and a server can be a client: Print Server and Data Base Server act as both clients and servers. Which one they are at a given moment depends simply on their relationship to their cooperating applications. The sample also shows that the relationship between the client and the server is indeed a transactional one, and once terminated, that relationship obviously no longer exists. Application B and the server applications also show that a presentation layer is not required for an application to be a valid client.
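A small sketch makes the interchangeable roles concrete. The Java interfaces below play the part of the contracts, and the class names mirror Exhibit 1; everything here is hypothetical and illustrative. PrintServer is a server to the calling application yet, inside its print method, a client of DataBaseServer.

```java
import java.util.List;

// Contract offered by DataBaseServer to its clients.
interface PrinterDirectory {
    List<String> printersFor(String format);
}

// Contract offered by PrintServer to its clients.
interface PrintService {
    void print(String document, String format);
}

class DataBaseServer implements PrinterDirectory {
    public List<String> printersFor(String format) {
        return List.of("ps-laser-1", "ps-laser-2");   // canned result for the sketch
    }
}

class PrintServer implements PrintService {
    private final PrinterDirectory directory;         // PrintServer's own server
    PrintServer(PrinterDirectory directory) { this.directory = directory; }
    public void print(String document, String format) {
        // Acting as a client here: ask DataBaseServer for suitable printers.
        List<String> printers = directory.printersFor(format);
        System.out.println("printing '" + document + "' on " + printers.get(0));
    }
}

public class ExhibitOne {
    public static void main(String[] args) {
        PrintService ps = new PrintServer(new DataBaseServer());
        ps.print("summary report", "PostScript");
    }
}
```

Note that the main method stands in for application B: it is a perfectly valid client even though nothing in it presents a user interface.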

Conclusion

Just as no two business problems have exactly the same characteristics, neither is there a single client/server architecture or tool that solves all problems. The two-tier architecture that worked well for pilot projects in small workgroup settings has failed to scale upward to resolve enterprisewide problems. Unless IT departments break out of this mold, corporations will continue to be mired in ill-fitting solutions. Business management itself must also become aware of the options available and gain an appreciation of what each technology can and, just as important, cannot do. Successful selection and implementation of technological solutions to business problems requires that both technology and business people work in concert.

The client/server technologies in which organizations invest involve ongoing decisions with long-term ramifications. This article's broad technical and managerial analysis of client/server technologies and its sample client/server application architecture aim to show that the terms client and server are conceptual ideas rather than physical objects. Understanding that client/server computing is more a concept than a technology is the key to the proper evaluation of the tools and strategies that are flooding the marketplace.

Author Biographies

Lawrence K. Cooper is a principal at DMR Group, Inc., in Ottawa, Canada.

John S. Whetstone is an IT evaluator with the federal government of Canada in Quebec.