Distributed data processing and client–server computing

Introduction
Client–server computing is a distributed computing model in which client applications request services from server processes. Clients and servers typically run on different computers interconnected by a computer network. Any use of the Internet (q.v.), such as information retrieval (q.v.) from the World Wide Web (q.v.), is an example of client–server computing. However, the term is generally applied to systems in which an organization runs programs with multiple components distributed among computers in a network. The concept is frequently associated with enterprise computing, which makes the computing resources of an organization available to every part of its operation.

A client application is a process or program that sends messages to a server via the network. Those messages request the server to perform a specific task, such as looking up a customer record in a database or returning a portion of a file on the server’s hard disk. The client manages local resources such as the display, keyboard, local disks, and other peripherals. The server process or program listens for client requests that are transmitted via the network. Servers receive those requests and perform actions such as database queries and reading files. Server processes typically run on powerful PCs, workstations (q.v.), or mainframe (q.v.) computers.

An example of a client–server system is a banking application that allows a clerk to access account information on a central database server. All access is done via a PC client that provides a graphical user interface (GUI). An account number can be entered into the GUI along with the amount of money to be withdrawn or deposited. The PC client validates the data provided by the clerk, transmits the data to the database server, and displays the results that are returned by the server.

The client–server model is an extension of the object-based (or modular) programming model, in which large pieces of software are structured into smaller components that have well-defined interfaces. This decentralized approach helps to make complex programs maintainable and extensible. Components interact by exchanging messages or by remote procedure call (RPC; see DISTRIBUTED SYSTEMS). The calling component becomes the client and the called component the server.
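The request/reply exchange described above can be made concrete with a minimal sketch. The one-shot lookup protocol, the port number, and the record format below are illustrative assumptions, not part of any particular system:

```python
import socket

RECORDS = {"1001": "Alice, balance 250.00"}  # stand-in for a server-side database

def serve_once(host="localhost", port=9000):
    """Server: listen for one request, perform the lookup, reply."""
    with socket.create_server((host, port)) as srv:
        conn, _addr = srv.accept()              # wait for a client to connect
        with conn:
            account = conn.recv(1024).decode()  # the client's request message
            reply = RECORDS.get(account, "not found")
            conn.sendall(reply.encode())        # send the result back

def lookup(account, host="localhost", port=9000):
    """Client: send a request over the network and return the server's reply."""
    with socket.create_connection((host, port)) as sock:
        sock.sendall(account.encode())
        return sock.recv(1024).decode()
```

Running serve_once() in one process and lookup("1001") in another shows the division of labor: the client packages and transmits the request, while the server performs the actual lookup.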
A client–server environment may use a variety of operating systems and hardware from multiple vendors; standard network protocols like TCP/IP provide compatibility. Vendor independence and freedom of choice are further advantages of the model: inexpensive PC equipment can be interconnected with mainframe servers, for example. Client–server systems can also be scaled up in size more readily than centralized solutions, since server functions can be distributed across more and more server computers as the number of clients increases. Server processes can thus run in parallel, each process serving its own set of clients. However, when there are multiple servers that update information, there must be some coordination mechanism to avoid inconsistencies.

The drawbacks of the client–server model are that security is more difficult to ensure in a distributed environment than in a centralized one, that the administration of distributed equipment can be much more expensive than the maintenance of a centralized system, that data distributed across servers needs to be kept consistent, and that the failure of one server can render a large client–server system unavailable. If a server fails, none of its clients can make further progress unless the system is designed to be fault-tolerant. The computer network can also become a performance or reliability bottleneck: if the network fails, all servers become unreachable, and if one client produces high network traffic, all clients may suffer from long response times.
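As a small illustration of parallel service, the sketch below handles each client connection in its own thread; the threads stand in for the separate server processes discussed above, and the echo protocol is purely illustrative (Python standard library only):

```python
import socketserver

class EchoHandler(socketserver.StreamRequestHandler):
    def handle(self):
        # Each connected client is served by its own thread, so one
        # slow client does not block the others.
        for line in self.rfile:
            self.wfile.write(line)   # echo the request back as the reply

if __name__ == "__main__":
    with socketserver.ThreadingTCPServer(("localhost", 9001), EchoHandler) as srv:
        srv.serve_forever()
```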



Design Considerations
An important design consideration for large client–server systems is whether a client talks directly to the server, or whether an intermediary process is introduced between the client and the server. The former is a two-tier architecture, the latter a three-tier architecture. The two-tier architecture is easier to implement and is typically used in small environments (one or two servers with one or two dozen clients). However, a two-tier architecture is less scalable than a three-tier architecture. In the three-tier architecture, the intermediate process is used for decoupling clients and servers. The intermediary can cache frequently used server data to ensure better performance and scalability (see CACHE MEMORY). Performance can be further increased by having the intermediate process distribute client requests to several servers so that requests execute in parallel.
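A minimal sketch of such a caching intermediary follows; it reuses the toy lookup protocol from the Introduction, and the function names are hypothetical:

```python
import socket

_cache = {}  # frequently used server data, held in the middle tier

def query_backend(key, host="localhost", port=9000):
    """Forward a request to the back-end server."""
    with socket.create_connection((host, port)) as sock:
        sock.sendall(key.encode())
        return sock.recv(1024).decode()

def middle_tier_lookup(key):
    """Answer from the cache when possible; ask the real server on a miss."""
    if key not in _cache:
        _cache[key] = query_backend(key)   # miss: one network round trip
    return _cache[key]                     # hit: the server is not involved
```

Because clients talk only to the middle tier, the same function could also be extended to spread cache misses across several back-end servers.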
Other important design considerations are:
Fat vs. thin client: A client may implement anything from a simple data entry form to a complex business application. An important design consideration is how to partition application logic into client and server components. This has an impact on the scalability and maintainability of a client–server system. A "thin" client receives information in its final form from the server and does little or no data processing. A "fat" client does more processing, thereby lightening the load on the server.
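The difference can be sketched as two hypothetical submit functions; `send` stands for whatever transport delivers the request to the server:

```python
# Thin client: ship the raw input to the server, which validates
# and processes it; the client merely displays the result.
def thin_client_submit(send, account, amount):
    return send({"account": account, "amount": amount})

# Fat client: validate and pre-process locally, lightening the load
# on the server at the cost of more logic on every client machine.
def fat_client_submit(send, account, amount):
    if not account.isdigit():
        raise ValueError("account number must be numeric")
    if amount <= 0:
        raise ValueError("amount must be positive")
    return send({"account": account, "amount": round(amount, 2)})
```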

Stateful vs. stateless: Another design consideration is whether a server should be stateful or stateless. A stateless server retains no information about the data that clients are using. Client requests are fully self-contained and do not depend on the internal state of the server. The advantage of the stateless model is that it is easier to implement and that the failure of a server or client is easier to handle, as no state information about active clients is maintained. However, applications where clients need to acquire and release locks on the records stored at a database server usually require a stateful model, because locking information is maintained by the server for each individual client.
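The contrast can be sketched as follows; the request fields and the locking table are illustrative assumptions:

```python
# Stateless: each request is fully self-contained, so any server
# (including a freshly restarted one) can handle it.
stateless_request = {
    "op": "read",
    "file": "accounts.db",
    "offset": 4096,    # the client, not the server, tracks its position
    "length": 512,
}

# Stateful: the server keeps per-client locking information, which must
# be recovered or cleaned up if either side fails.
held_locks = {}  # client_id -> set of locked record ids (server-side state)

def acquire_lock(client_id, record_id):
    for owner, records in held_locks.items():
        if record_id in records and owner != client_id:
            return False                       # another client holds the lock
    held_locks.setdefault(client_id, set()).add(record_id)
    return True
```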
Authentication: For security purposes servers must also address the problem of authentication (q.v.). In a networked environment, an unauthorized client may attempt to access sensitive data stored on a server. Authentication of clients is handled by using cryptographic techniques such as public key encryption or special authentication servers such as in the OSF DCE system described below. In public key encryption, the client application "signs" requests with its private cryptographic key and encrypts the data in the request with a secret session key known only to the server and to the client. On receipt of the request, the server validates the signature of the client and decrypts the request only if the client is authorized to access the server.
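The signing-and-encryption flow can be sketched with the third-party Python package `cryptography` (an assumption; the article names no specific library), glossing over how the session key is distributed:

```python
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Client side: sign the request with the private key, then encrypt it
# with a session key known only to client and server.
client_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
session_key = Fernet.generate_key()

request = b"withdraw account=1001 amount=50.00"
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)
signature = client_key.sign(request, pss, hashes.SHA256())
ciphertext = Fernet(session_key).encrypt(request)

# Server side: decrypt with the shared session key, then validate the
# signature against the client's public key before acting on the request.
plaintext = Fernet(session_key).decrypt(ciphertext)
client_key.public_key().verify(signature, plaintext, pss, hashes.SHA256())
# verify() raises InvalidSignature if the request is not from this client
```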

Distributed Object Computing
Distributed object computing (DOC) is a generalization of the client–server model. Object-oriented modeling and programming are applied to the development of client–server systems. Objects are pieces of software that encapsulate an internal state and make it accessible through a well-defined interface. In DOC, the interface consists of object operations and attributes that are remotely accessible. Client applications may connect to a remote instance of the interface with the help of a naming service. Finally, the clients invoke the operations on the remote object. The remote object thus acts as a server. This use of objects naturally accommodates heterogeneity and autonomy. It supports heterogeneity since requests sent to server objects depend only on their interfaces and not on their internals. It permits autonomy because object implementations can change transparently, provided they maintain their interfaces. If complex client–server systems are to be assembled out of objects, then objects must be compatible. Client–server objects have to interact with each other even if they are written in different programming languages and run on different hardware and operating system platforms.
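The standard library's XML-RPC support is enough to sketch the idea: the server publishes an object's interface and clients invoke its operations remotely. XML-RPC here stands in for CORBA-style middleware, and the class is hypothetical:

```python
from xmlrpc.server import SimpleXMLRPCServer

class Account:
    """The remotely accessible interface: operations, not internals."""
    def __init__(self):
        self._balance = 0.0   # encapsulated state, never exposed directly

    def deposit(self, amount):
        self._balance += amount
        return self._balance  # clients see only the operation's result

def run_server():
    with SimpleXMLRPCServer(("localhost", 9002)) as srv:
        srv.register_instance(Account())   # publish the object's interface
        srv.serve_forever()

# Client side: bind to the remote object and invoke an operation on it.
#   import xmlrpc.client
#   proxy = xmlrpc.client.ServerProxy("http://localhost:9002")
#   proxy.deposit(100.0)     # the remote object acts as the server
```

Because the client depends only on the interface (deposit), the server can change the implementation of Account transparently, which is exactly the autonomy property described above.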

Client–Server Toolkits
A wide range of software toolkits for building client–server software is available on the market today. Client–server toolkits are also referred to as middleware. CORBA implementations are an example of well-known client–server middleware. Other examples are OSF DCE, DCOM, message-oriented middleware, and transaction processing monitors.
