Reassessing Client/Server Tools and Technologies
Lawrence K. Cooper and John S. Whetstone
Payoff

Client/server computing has not yet realized its promise of faster applications development, reduced maintenance costs, and enterprise scalability. This article reviews client/server technologies from the perspective of whether they cause developers to focus more on the tool than on the business problem, helping IS and business managers appreciate what each technology can and cannot do. Understanding that client/server computing is more a concept than a technology is key to the proper evaluation of the tools and strategies flooding the marketplace.

Introduction

During the past 10 years, client/server architecture and technologies have been hailed as spelling the demise of the mainframe-centric view of computing. Corporations purchased PCs by the truckload and bought myriad development tools, languages, and frameworks for developers and users alike. Products described as GUI builders, front ends, gateways, back ends, middleware, and glue flooded the client/server marketplace. Applications were supposed to be developed faster, and maintenance costs were supposed to decline.

Clearly this has not been the case: client/server computing has proven to be both a boon and a bust. Although it provides more flexibility, it requires more computing resources, faster networks, and higher maintenance. According to a recent Gartner study, client/server computing costs often exceed those of mainframe-centric computing by up to 70%. Client/server computing also poses an abundance of data security problems. Current technologies are at a middle stage between the rigorous structures of mainframes and the total openness implicit in many of the new architectures.

These problems notwithstanding, client/server technology is a core building block for most corporate IT strategies. Even so, its true potential remains largely untapped. The Gartner Group estimates that 90% of all client/server applications deployed today are two-tier; the Standish Group puts that figure at 95%. Either way, nine out of every 10 client/server applications deployed use a model developed in the mid-1980s, and only one in 10 is based on the more recent three-tier, multitier, tierless, or distributed computing models.

The problem is that developers are spending far too much time on the technology and not nearly enough time on the business problems they are supposed to be solving. Computer Technology Research (CTR) Corp. reported that because client/server projects often do not scale from the workgroup to the enterprise, applications either lack the flexibility to meet the needs of the corporation as a whole or "fail to meet the demands of the software life cycle and make programmers work more than they should."

Although corporations should not turn back the clock to the mainframe in the glass house, IS managers and their staffs require a more careful understanding of the client/server marketplace and of organizational needs before proceeding further into the murky world of client/server computing. After providing a brief overview of the current state of client/server models, this article discusses client/server technologies from two perspectives: technologies that force developers to focus more on the tool than on the business problem, such as DCE and CORBA, and technologies that developers can more readily use to build systems.
Although many people immediately think of products such as PowerBuilder and Visual Basic when client/server computing is mentioned, these products are not considered here because they are capable of supporting only the front end of a two-tier client/server solution.

Overview of Client/Server Models

The Two-Tier Model

The now-famous two-tier client/server model, first presented by the Gartner Group in the 1980s, still dominates the market. Within this computing model, the client and the server are both pieces of hardware. The two-tier model provides several options, varying from distributed presentation to distributed data management. In all cases, however, the only real choice is the machine on which the various components of the presentation, logic, and data access layers are to be located.

This hardware-oriented model places limits on the transfer of data and is the culprit behind the recurring network bottlenecks that are the bane of many organizations. Yet the complexity of applications in today's business climate necessitates the transfer of massive amounts of information collected from disparate and distributed data sources. The way in which an application is partitioned under this model is driven by hardware and location decisions rather than by business-function or application-logic decisions.

Many of today's so-called client/server tools, such as PowerBuilder and Visual Basic, are merely presentation layers for back-end data bases. Although such tools are also used to place some of the application logic in the client, they do not allow application logic to be easily moved from a client application to a server application or from one hardware platform to another.

The Three-Tier Model

The three-tier architecture attempts to overcome the application partitioning and performance limitations of the two-tier model by providing a clear separation between the presentation, functionality, and data layers. The presentation layer uses a graphical user interface (GUI) to present information; the functionality layer performs the business logic and manages the flow of related transactions; and the data layer consists of the data sources that the functionality layer accesses. These data sources include data bases, legacy systems, data feeds, and file structures. The application logic can thus access multiple data sources, and because it can also be modified to meet changing requirements without changing all client-side applications, myriad configurations are possible to meet specific business problems.

Initial three-tier applications were often developed using the stored-procedure capabilities of the data base, which allowed business logic to be moved between the client application and the server application with relative ease (a sketch of this approach appears at the end of this section). The skills IT staff need to develop three-tier applications using stored procedures do not differ substantially from the skills already acquired in developing and maintaining two-tier applications. Although stored procedures permit code to be reused across applications deployed on the same RDBMS, they do not allow code reuse across different vendors' RDBMSs.

The three-tier and multitier architectures require both IT management and staff to think in new ways. The new technologies that served as the catalyst for change are being enhanced and modified almost daily. No longer can an IS department expect to learn a single language or development methodology and be able to meet either short-run tactical or long-term strategic goals.
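For readers who want to see the stored-procedure approach in concrete terms, the following is a minimal sketch of a thin client whose business logic lives entirely in a server-side procedure, as early three-tier applications often did. It is illustrative only: ODBC is assumed as the call-level interface, the data source name OrdersDSN, the credentials, and the procedure approve_order are hypothetical, and error checking is omitted for brevity.

/* Illustrative thin client: the business rule lives in a server-side stored
   procedure; the client only connects and invokes it through ODBC. */
#include <sql.h>
#include <sqlext.h>

int main(void)
{
    SQLHENV  env;
    SQLHDBC  dbc;
    SQLHSTMT stmt;

    SQLAllocHandle(SQL_HANDLE_ENV, SQL_NULL_HANDLE, &env);
    SQLSetEnvAttr(env, SQL_ATTR_ODBC_VERSION, (SQLPOINTER)SQL_OV_ODBC3, 0);
    SQLAllocHandle(SQL_HANDLE_DBC, env, &dbc);

    /* "OrdersDSN" and the credentials are placeholders. */
    SQLConnect(dbc, (SQLCHAR *)"OrdersDSN", SQL_NTS,
               (SQLCHAR *)"appuser", SQL_NTS, (SQLCHAR *)"secret", SQL_NTS);

    SQLAllocHandle(SQL_HANDLE_STMT, dbc, &stmt);

    /* The business rule is encapsulated in approve_order on the server;
       changing that rule later means changing the procedure, not every
       deployed client. */
    SQLExecDirect(stmt, (SQLCHAR *)"{call approve_order(1042)}", SQL_NTS);

    SQLFreeHandle(SQL_HANDLE_STMT, stmt);
    SQLDisconnect(dbc);
    SQLFreeHandle(SQL_HANDLE_DBC, dbc);
    SQLFreeHandle(SQL_HANDLE_ENV, env);
    return 0;
}

As the article notes, this reuse stops at the boundary of the vendor's RDBMS: the procedure can be shared by every application on that data base, but it cannot simply be redeployed on a different vendor's product.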
To complicate matters further, many organizations are demanding that applications be conceived, specified, developed, and deployed, and show a return on investment, within six to 12 months.

Technology-Focused Architectures: DCE and CORBA

The challenge for most IT departments is not only choosing the right client/server architecture, but also making the right selections from an ever-growing array of software tools that promise simple solutions to complex problems. The problem is that many of the products and approaches are technology- rather than business-focused; in other words, IT staff must spend considerable time and expense to understand the technology before they can use it to solve the business problems of their organization.

The OSF's DCE

The OSF DCE supports three distributed computing models:

· The client/server model.
· The RPC model.
· The data-sharing model.

The client/server model permits applications to be split across multiple disparate platforms running multiple disparate operating systems. A common matching protocol between two applications or utilities is defined, allowing applications to pair up into unique client/server relationships. The RPC model permits programmers to write client applications that call server services without any specific knowledge of where the called server is located or how it is implemented; it involves the customized definition of client/server relationships between unique application modules. The data-sharing model facilitates seamless data distribution among the participating machines on a network.

DCE also provides services for distributing applications in heterogeneous hardware and software environments: basic distributed services enable developers to build applications, and data-sharing services provide a distributed file system for seamless data distribution, diskless system support, and desktop computer integration. DCE enables applications distributed across disparate hardware and operating system platforms to appear as a single system to the user.

Most DCE implementations of client and server application relationships are accomplished using the DCE RPC interface. Although this method is highly effective, especially when coupled with DCE's threading capabilities, Kerberos security, and the DCE cell directory service (which provides location independence for named applications), it takes more time to develop DCE-based applications than to develop similar functionality using traditional client/server frameworks. It is the raw nature of applications development using the DCE RPC interface that has probably been the greatest
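To give a sense of the raw RPC-level development referred to above, the following is a minimal, hypothetical C client sketch. It assumes an interface already defined in DCE IDL and compiled into a client stub; the header inv_if.h, the operation get_inventory_count, the host name, and the endpoint are all invented for illustration, and the binding is built from an explicit string binding rather than through the cell directory service to keep the sketch short.

/* Hypothetical DCE RPC client: a binding handle must be constructed
   explicitly before the IDL-generated stub routine can be called. */
#include <stdio.h>
#include <dce/rpc.h>
#include "inv_if.h"   /* client stub header assumed to come from the IDL compiler */

int main(void)
{
    rpc_binding_handle_t handle;
    unsigned32 status;
    long count;

    /* Build a binding handle from protocol sequence, host, and endpoint. */
    rpc_binding_from_string_binding(
        (unsigned_char_t *)"ncacn_ip_tcp:inventory-host[5001]",
        &handle, &status);
    if (status != rpc_s_ok) {
        fprintf(stderr, "binding failed, status %lu\n", (unsigned long)status);
        return 1;
    }

    /* Call the remote operation through the client stub; the server may run
       on any platform that exports this interface. */
    count = get_inventory_count(handle, 42);
    printf("count for part 42: %ld\n", count);

    rpc_binding_free(&handle, &status);
    return 0;
}

Even this stripped-down sketch shows the explicit binding construction and status checking the developer must manage; layering in the directory service, threads, and Kerberos security that make DCE attractive adds still more code, which is the development overhead being described here.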