
The Place of Control Systems In Attachment Theory

One of Bowlby’s key insights was that many of Freud’s best ideas about close relationships and the importance of early experience were logically independent of the drive reduction motivation theory that Freud used to explain them. In order to preserve these insights, Bowlby looked for a scientifically defensible alternative to Freud’s drive reduction motivation theory.

Freud viewed infants as clingy and dependent, interested in drive reduction rather than in the environment. Ethological observations present a very different view. The notion that human infants are competent, inquisitive, and actively engaged in mastering their environments was also familiar to Bowlby from Piaget’s detailed observations of his own three children, presented in The Origin of Intelligence in Infants.

Bowlby also recognized that the newly emerging field of control systems theory offered a way of explaining infants’ exploration, their monitoring of access to attachment figures, and their awareness of the environment. Here was a scientifically defensible alternative to Freud’s drive reduction theory of motivation: it placed the emphasis on adaptation to the real world rather than to drive states, and it emphasized actual experience rather than intra-psychic events as influences on development and individual differences.

Note that the first step toward this alternative motivation model was reformulating the infant-mother (and implicitly adult-adult) bond in terms of the secure base phenomenon. Without the secure base concept, we have no control systems alternative to Freud’s drive theory. Thus, it is logically necessary, at every turn, to keep the secure base formulation at the center of attachment theory. See Waters & Cummings, Child Development, June 2000, for elaboration on this point.

The following material was compiled from articles on control theory and optimization available on-line at Britannica.com. E.W.

Control Theory

General Background

As long as human culture has existed, control has meant some kind of power over man's environment. Cuneiform fragments suggest that the control of irrigation systems in Mesopotamia was a well-developed art at least by the 20th century BC. There were some ingenious control devices in the Greco-Roman culture, the details of which have been preserved. Methods for the automatic operation of windmills go back at least to the Middle Ages. Large-scale implementation of the idea of control, however, was impossible without a high level of technological sophistication, and it is probably no accident that the principles of modern control started evolving only in the 19th century, concurrently with the Industrial Revolution. A serious scientific study of this field began only after World War II and is now a major aspect of what has come to be called the second industrial revolution.

Although control is sometimes equated with the notion of feedback control (which involves the transmission and return of information)--an isolated engineering invention, not a scientific discipline--modern usage tends to favour a rather wide meaning for the term: for instance, control and regulation of machines, of muscular coordination and metabolism in biological organisms, and of prosthetic devices; also, broad aspects of coordinated activity in the social sphere such as optimization of business operations, control of economic activity by government policies, and even control of political decisions by democratic processes. Scientifically speaking, modern control should be viewed as that branch of system theory concerned with changing the behaviour of a given complex system by external actions. If physics is the science of understanding the physical environment, then control should be viewed as the science of modifying that environment, in the physical, biological, or even social sense.

Much more than even physics, control is a mathematically oriented science. Control principles are always expressed in mathematical form and are potentially applicable to any concrete situation. At the same time, it must be emphasized that success in the use of the abstract principles of control depends in roughly equal measure on the status of basic scientific knowledge in the specific field of application, be it engineering, physics, astronomy, biology, medicine, econometrics, or any of the social sciences. This fact should be kept in mind to avoid confusion between the basic ideas of control (for instance, controllability) and certain spectacular applications of the moment in a narrow area (for instance, manned lunar travel).

Examples of modern control systems

To clarify the critical distinction between control principles and their embodiment in a real machine or system, the following common examples of control may be helpful. There are several broad classes of control systems, some of which are mentioned below.

Machines that cannot function without (feedback) control

Many of the basic devices of contemporary technology must be manufactured in such a way that they cannot be used for the intended task without modification by means of control external to the device. In other words, control is introduced after the device has been built; the same effect cannot be brought about (in practice and sometimes even in theory) by an intrinsic modification of the characteristics of the device. The best known examples are the vacuum-tube or transistor amplifiers for high-fidelity sound systems. Vacuum tubes or transistors, when used alone, introduce intolerable distortion, but when they are placed inside a feedback control system any desired degree of fidelity can be achieved. A famous classical case is that of powered flight. Early pioneers failed, not because of their ignorance of the laws of aerodynamics, but because they did not realize the need for control and were unaware of the basic principles of stabilizing an inherently unstable device by means of control. Jet aircraft cannot be operated without automatic control to aid the pilot, and control is equally critical for helicopters. The accuracy of inertial navigation equipment (the modern space compass) cannot be improved indefinitely because of basic mechanical limitations, but these limitations can be reduced by several orders of magnitude by computer-directed statistical filtering, which is a variant of feedback control.

Robots

On the most advanced level, the task of control science is the creation of robots. This is a collective term for devices exhibiting animal-like purposeful behaviour under the general command of (but without direct help from) man. Industrial manufacturing robots are already fairly common, but real breakthroughs in this field cannot be anticipated until there are fundamental scientific advances with regard to problems related to pattern recognition and the mathematical structuring of brain processes.

Control Systems

A control system is a means by which a variable quantity or set of variable quantities is made to conform to a prescribed norm. It either holds the values of the controlled quantities constant or causes them to vary in a prescribed way. A control system may be operated by electricity, by mechanical means, by fluid pressure (liquid or gas), or by a combination of means. When a computer is involved in the control circuit, it is usually more convenient to operate all of the control systems electrically, although intermixtures are fairly common.

Development of control systems.

Control systems are intimately related to the concept of automation, but the two fundamental types of control systems, feed-forward and feedback, have classic ancestry. The loom invented by Joseph Jacquard of France in 1801 is an early example of feed-forward control: a set of punched cards programmed the patterns woven by the loom; no information from the process was used to correct the machine's operation. Similar feed-forward control was incorporated in a number of machine tools invented in the 19th century, in which a cutting tool followed the shape of a model.

Feedback control, in which information from the process is used to correct a machine's operation, has an even older history. Roman engineers maintained water levels for their aqueduct system by means of floating valves that opened and closed at appropriate levels. The Dutch windmill of the 17th century was kept facing the wind by the action of an auxiliary vane that moved the entire upper part of the mill. The most famous example from the Industrial Revolution is James Watt's flyball governor of 1769, a device that regulated steam flow to a steam engine to maintain constant engine speed despite a changing load.

The first theoretical analysis of a control system, which presented a differential-equation model of the Watt governor, was published by James Clerk Maxwell, the Scottish physicist, in 1868. Maxwell's work was soon generalized, and control theory was developed further by a number of contributions, including a notable study of the automatic steering system of the U.S. battleship "New Mexico," published in 1922. The 1930s saw the development of electrical feedback in long-distance telephone amplifiers and of the general theory of the servomechanism, by which a small amount of power controls a very large amount and makes automatic corrections. The pneumatic controller, basic to the development of early automated systems in the chemical and petroleum industries, and the analogue computer followed. All of these developments formed the basis for elaboration of control-system theory and applications during World War II, such as anti-aircraft batteries and fire-control systems.

Most of the theoretical studies as well as the practical systems up to World War II were single-loop--i.e., they involved merely feedback from a single point and correction from a single point. In the 1950s the potential of multiple-loop systems came under investigation. In these systems feedback could be initiated at more than one point in a process and corrections made from more than one point. The introduction of analogue- and digital-computing equipment opened the way for much greater complexity in automatic-control theory, an advance since labelled "modern control" to distinguish it from the older, simpler, "classical control."

Basic principles.

With few and relatively unimportant exceptions, all modern control systems have two fundamental characteristics in common. These can be described as follows: (1) The value of the controlled quantity is varied by a motor (this word being used in a generalized sense), which draws its power from a local source rather than from an incoming signal. Thus there is available a large amount of power to effect necessary variations of the controlled quantity and to ensure that the operations of varying the controlled quantity do not load and distort the signals on which the accuracy of the control depends. (2) The rate at which energy is fed to the motor to effect variations in the value of the controlled quantity is determined more or less directly by some function of the difference between the actual and desired values of the controlled quantity. Thus, for example, in the case of a thermostatic heating system, the supply of fuel to the furnace is determined by whether the actual temperature is higher or lower than the desired temperature. A control system possessing these fundamental characteristics is called a closed-loop control system, or a servomechanism. Open-loop control systems are feed-forward systems.
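
To make these two characteristics concrete, the following sketch (in Python; every constant is an invented, illustrative value) simulates the thermostatic heating example. The furnace plays the role of the motor, drawing on its own fuel supply rather than on the error signal, and the fuel is switched according to the difference between the actual and desired temperature.

    # Minimal closed-loop (feedback) thermostat sketch; all constants are
    # illustrative assumptions, not data from any real heating system.

    SETPOINT = 20.0      # desired room temperature, degrees C
    OUTSIDE = 5.0        # ambient temperature, degrees C
    LOSS_RATE = 0.1      # fraction of the indoor/outdoor gap lost per step
    FURNACE_GAIN = 1.5   # degrees added per step while the furnace runs

    temperature = 12.0
    for step in range(30):
        error = SETPOINT - temperature   # difference between desired and actual
        furnace_on = error > 0           # characteristic (2): energy input is a
                                         # function of that difference
        heat = FURNACE_GAIN if furnace_on else 0.0   # characteristic (1): power
                                                     # comes from a local source
        temperature += heat - LOSS_RATE * (temperature - OUTSIDE)
        print(f"step {step:2d}  temp {temperature:5.2f}  "
              f"furnace {'on' if furnace_on else 'off'}")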

The stability of a control system is determined to a large extent by its response to a suddenly applied signal, or transient. If such a signal causes the system to overcorrect itself, a phenomenon called hunting may occur in which the system first overcorrects itself in one direction and then overcorrects itself in the opposite direction. Because hunting is undesirable, measures are usually taken to correct it. The most common corrective measure is the addition of damping somewhere in the system. Damping slows down system response and avoids excessive overshoots or over-corrections. Damping can be in the form of electrical resistance in an electronic circuit, the application of a brake in a mechanical circuit, or forcing oil through a small orifice as in shock-absorber damping.
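
A small simulation (again with invented constants) shows both the hunting phenomenon and its cure. The controlled quantity is pulled toward a setpoint by a force proportional to the error, while the damping term resists velocity. With damping set to zero the system overcorrects in alternating directions; with damping added, the overshoot is suppressed at the cost of a slower response.

    # Hunting and its correction by damping (illustrative constants only).

    def simulate(damping, steps=100, dt=0.1, gain=4.0, setpoint=1.0):
        x, v = 0.0, 0.0                          # controlled quantity, its rate
        peak = 0.0
        for _ in range(steps):
            error = setpoint - x
            accel = gain * error - damping * v   # correction minus damping
            v += accel * dt
            x += v * dt
            peak = max(peak, x)
        return x, peak

    print(simulate(damping=0.0))   # hunts: repeated overshoot past the setpoint
    print(simulate(damping=3.0))   # damped: overshoot largely eliminated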

Another method of ascertaining the stability of a control system is to determine its frequency response--i.e., its response to a continuously varying input signal at various frequencies. The output of the control system is then compared to the input with respect to amplitude and to phase--i.e., the degree to which the input and output signals are out of step. Frequency response can be determined either experimentally--especially in electrical systems--or mathematically, if the constants of the system are known. Mathematical calculations are particularly useful for systems that can be described by ordinary linear differential equations. Graphic shortcuts also help greatly in the study of system responses.
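
For illustration, the frequency response of a simple first-order lag, whose complex gain at angular frequency w is 1/(1 + jw*tau), can be computed directly; the time constant tau below is an arbitrary assumption.

    # Amplitude and phase of a first-order lag at several input frequencies.
    import cmath
    import math

    TAU = 0.5                                  # assumed time constant, seconds
    for w in (0.1, 1.0, 10.0, 100.0):          # input frequencies, rad/s
        g = 1.0 / complex(1.0, w * TAU)        # complex gain at frequency w
        amplitude = abs(g)                     # output/input amplitude ratio
        phase = math.degrees(cmath.phase(g))   # phase lag in degrees
        print(f"w={w:6.1f}  amplitude={amplitude:.3f}  phase={phase:7.2f} deg")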

Several other techniques enter into the design of advanced control systems. Adaptive control is the capability of the system to modify its own operation to achieve the best possible mode of operation. A general definition of adaptive control implies that an adaptive system must be capable of performing the following functions: providing continuous information about the present state of the system, or identifying the process; comparing present system performance with the desired or optimum performance; and making decisions to modify the system so as to drive it toward the optimum performance.

In direct-digital control a single digital computer replaces a group of single-loop analogue controllers. Its greater computational ability makes the substitution possible and also permits the application of more complex advanced-control techniques.

Hierarchy control attempts to apply these techniques to all of a plant's control situations simultaneously. As such, it requires the most advanced computers and most sophisticated automatic-control devices to integrate the plant operation at every level, from top-management decisions to the movement of a valve.

The advantage offered by the digital computer over the conventional control system described earlier, costs being equal, is that the computer can be programmed readily to carry out a wide variety of separate tasks. In addition, it is fairly easy to change the program so as to carry out a new or revised set of tasks should the nature of the process change or the previously proposed system prove to be inadequate for the proposed task. With digital computers, this can usually be done with no change to the physical equipment of the control system. For the conventional control case, some of the physical hardware apparatus of the control system must be replaced in order to achieve new functions or new implementations of them.

Control systems have become a major component of the automation of production lines in modern factories. Automation began in the late 1940s with the development of the transfer machine, a mechanical device for moving and positioning large objects on a production line (e.g., partly finished automobile engine blocks). These early machines had no feedback control as described above. Instead, manual intervention was required for any final adjustment of position or other corrective action necessary. Because of their large size and cost, long production runs were necessary to justify the use of transfer machines.

The need to reduce the high labour content of manufactured goods, the requirement to handle much smaller production runs, the desire to gain increased accuracy of manufacture, combined with the need for sophisticated tests of the product during manufacture, have resulted in the recent development of computerized production monitors, testing devices, and feedback-controlled production robots. The programmability of the digital computer to handle a wide range of tasks along with the capability of rapid change to a new program has made it invaluable for these purposes. Similarly, the need to compensate for the effect of tool wear and other variations in automatic machining operations has required the institution of a feedback control of tool positioning and cutting rate in place of the formerly used direct mechanical motion. Again, the result is a more accurately finished final product with less chance for tool or manufacturing machine damage.

Principles of Control

The scientific formulation of a control problem must be based on two kinds of information: (A) the behaviour of the system (e.g., industrial plant) must be described in a mathematically precise way; (B) the purpose of control (criterion) and the environment (disturbances) must be specified, again in a mathematically precise way. Information of type A means that the effect of any potential control action applied to the system is precisely known under all possible environmental circumstances. The choice of one or a few appropriate control actions, among the many possibilities that may be available, is then based on information of type B; and this choice, as stated before, is called optimization.

The task of control theory is to study the mathematical quantification of these two basic problems and then to deduce applied-mathematical methods whereby a concrete answer to optimization can be obtained. Control theory does not deal with physical reality but only with its mathematical description (mathematical models). The knowledge embodied in control theory is always expressed with respect to certain classes of models, for instance, linear systems with constant coefficients. Thus control theory is applicable to any concrete situation (e.g., physics, biology, economics) whenever that situation can be described, with high precision, by a model that belongs to a class for which the theory has already been developed. The limitations of the theory are not logical but depend only on the agreement between available models and the actual behaviour of the system to be controlled. Similar comments can be made about the mathematical representation of the criteria and disturbances.

Once the appropriate control action has been deduced by mathematical methods from the information mentioned above, the implementation of control becomes a technological task, which is best treated under the various specialized fields of engineering. The detailed manner in which a chemical plant is controlled may be quite different from that of an automobile factory, but the essential principles will be the same. Hence further discussion of the solution of the control problem will be limited here to the mathematical level.

To obtain a solution in this sense, it is convenient (but not absolutely necessary) to describe the system to be controlled, which is called the plant, in terms of its internal dynamical state. By this is meant a list of numbers (called the state vector) that expresses in quantitative form the effect of all external influences on the plant before the present moment, so that the future evolution of the plant can be exactly given from knowledge of the present state and the future inputs. This situation implies, in an intuitively obvious way, that the control action at a given time can be specified as some function of the state at that time. Such a function of the state, which determines the control action that is to be taken at any instant, is called a control law. This is a more general concept than the earlier idea of feedback; in fact, a control law can incorporate both the feedback and feed-forward methods of control.
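
A minimal sketch of a control law in this sense follows (Python with NumPy; the plant and the gain matrix are invented examples, not the product of any particular design method). The control action at each instant is a fixed linear function u = -Kx of the present state.

    # Linear state feedback: the control law maps the state vector to the
    # control action. Plant and gains are arbitrary illustrative choices.
    import numpy as np

    A = np.array([[0.0, 1.0],
                  [0.0, 0.0]])     # plant dynamics x' = A x + B u
    B = np.array([[0.0],
                  [1.0]])
    K = np.array([[2.0, 3.0]])     # assumed feedback gains

    def control_law(x):
        """Map the present state to the present control action."""
        return -K @ x

    x = np.array([[1.0], [0.0]])   # initial state
    dt = 0.01
    for _ in range(1000):
        u = control_law(x)             # control depends only on the state
        x = x + dt * (A @ x + B @ u)   # Euler step of the plant dynamics
    print(x.ravel())                   # state has been driven toward zero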

In developing models to represent the control problem, it is unrealistic to assume that every component of the state vector can be measured exactly and instantaneously. Consequently, in most cases the control problem has to be broadened to include the further problem of state determination, which may be viewed as the central task in statistical prediction and filtering theory. Thus, in principle, any control problem can be solved in two steps: (1) building an optimal filter (the so-called Kalman filter) to determine the best estimate of the present state vector; (2) determining an optimal control law and mechanizing it by substituting into it the estimate of the state vector obtained in step 1.

In practice, the two steps are implemented by a single unit of hardware, called the controller, which may be viewed as a special-purpose computer. The theoretical formulation given here can be shown to include all other previous methods as a special case; the only difference is in the engineering details of the controller.
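
The two-step scheme can be sketched for a scalar plant (all noise levels and gains below are illustrative assumptions): a Kalman filter maintains a best estimate of the state from noisy measurements, and the control law acts on that estimate rather than on the unmeasurable true state.

    # Step 1: scalar Kalman filter; step 2: control law applied to the estimate.
    import random

    A, B = 1.0, 0.1       # scalar plant: x[t+1] = A*x[t] + B*u[t] + noise
    Q, R = 0.01, 0.25     # assumed process and measurement noise variances
    GAIN = 5.0            # assumed control gain, drives the state toward zero

    x_true = 2.0          # actual state, hidden from the controller
    x_hat, p = 0.0, 1.0   # filter's estimate and its error variance

    for t in range(50):
        u = -GAIN * x_hat                        # step 2: act on the estimate
        x_true = A * x_true + B * u + random.gauss(0, Q ** 0.5)
        z = x_true + random.gauss(0, R ** 0.5)   # noisy measurement
        x_hat = A * x_hat + B * u                # step 1: predict . . .
        p = A * p * A + Q
        k = p / (p + R)                          # . . . then correct
        x_hat += k * (z - x_hat)
        p = (1 - k) * p
    print(f"true state {x_true:.3f}, estimate {x_hat:.3f}")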

The mathematical solution of a control problem may not always exist. The determination of rigorous existence conditions, beginning in the late 1950s, has had an important effect on the evolution of modern control, equally from the theoretical and the applied point of view. Most important is controllability; it expresses the fact that some kind of control is possible. If this condition is satisfied, methods of optimization can pick out the right kind of control using information of type B.

Optimization

Optimization is a field of applied mathematics consisting of a collection of principles and methods used for the solution of quantitative problems in many disciplines: physics, biology, engineering, economics, business, and others. This mathematical area grew from the recognition that problems under consideration in manifestly different fields could be posed theoretically in such a way that a central store of ideas and methods could be used in obtaining solutions for all of them.

A typical optimization problem may be described in the following way. There is a system, such as a physical machine, a set of biological organisms, or a business organization, whose behaviour is determined by several specified factors. The operator of the system has as a goal the optimization of the performance of this system. The latter is determined at least in part by the levels of the factors over which the operator has control; the performance may also be affected, however, by other factors over which there is no control. The operator seeks the right levels for the controllable factors that will optimize, as far as possible, the performance of the system. For example, in the case of a banking system, the operator is the governing body of the central bank; the inputs over which there is control are interest rates and money supply; and the performance of the system is described by economic indicators of the economic and political unit in which the banking system operates.

The first step in the application of optimization theory to a practical problem is the identification of relevant theoretical components. This is often the most difficult part of the analysis, requiring a thorough understanding of the operation of the system and the ability to describe that operation in precise mathematical terms. The main theoretical components are the system, its inputs and outputs, and its rules of operation. The system has a set of possible states; at each moment in its life, it is in one of these states, and it changes from state to state according to certain rules determined by inputs and outputs. There is a numerical quantity called the performance measure, which the operator seeks to maximize or minimize. It is a mathematical function whose value is determined by the history of the system. The operator is able to influence the value of the performance measure through a schedule of inputs. Finally, the constraints of the system must be identified; these are the restrictions on the inputs that are beyond the control of the operator.

The simplest type of optimization problem may be analyzed using elementary differential calculus. Suppose that the system has a single input variable, represented by a numerical variable x, and suppose that the performance measure can be expressed as a function y = f (x). The constraints are expressed as restrictions on the values assumed by the input x; for example, the nature of the problem under consideration may require that x be positive. The optimization problem takes the following precise mathematical form: For which value of x, satisfying the indicated constraints, is the function y = f (x) at its maximum (or minimum) value? From calculus, the extreme values of a function y = f (x) with a sufficiently smooth graph can be located only at points of two kinds: (1) points where the tangent to the curve is horizontal (critical points) or (2) endpoints of an interval, if x is restricted by the constraints to such an interval. Thus the problem of finding the largest or smallest value of a function over the indicated interval is reduced to the simpler problem of finding the largest and smallest value among a finite set of candidates, and this can be done by direct computation of the value of f (x) at those points x.
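
The procedure just described can be rendered directly in code; the function and interval are arbitrary examples, and the critical points are found by hand from f'(x) = 0.

    # Extremes of y = f(x) on a constrained interval: evaluate f at the
    # critical points and at the endpoints, then compare.

    def f(x):
        return x ** 3 - 3 * x          # example performance measure

    critical_points = [-1.0, 1.0]      # solutions of f'(x) = 3x^2 - 3 = 0
    endpoints = [-2.0, 3.0]            # constraint: x must lie in [-2, 3]

    candidates = [x for x in critical_points + endpoints if -2.0 <= x <= 3.0]
    values = {x: f(x) for x in candidates}
    print("maximum at x =", max(values, key=values.get))   # x = 3, f = 18
    print("minimum at x =", min(values, key=values.get))   # x = 1, f = -2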

The theory of linear programming was developed for the purpose of solving optimization problems involving two or more input variables. This theory uses only the elementary ideas of linear algebra; it can be applied only to problems for which the performance measure is a linear function of the inputs. Nevertheless, this is sufficiently general to include applications to important problems in economics and engineering.
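
As an illustration, a small linear program can be solved with SciPy's linprog routine (assuming SciPy is available; the problem itself is invented): maximize 3x + 2y subject to x + y <= 4, x + 3y <= 6, and x, y >= 0. Because linprog minimizes, the objective is negated.

    # A two-variable linear program solved with scipy.optimize.linprog.
    from scipy.optimize import linprog

    c = [-3, -2]                  # negated objective (linprog minimizes)
    A_ub = [[1, 1],               # left-hand sides of the <= constraints
            [1, 3]]
    b_ub = [4, 6]                 # right-hand sides
    result = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
    print(result.x, -result.fun)  # optimal inputs and maximized performance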

Control of machines

In many cases, the operation of a machine to perform a task can be directed by a human (manual control), but it may be much more convenient to connect the machine directly to the measuring instrument (automatic control); e.g., a thermostat (temperature-operated switch) may be used to turn a refrigerator, oven, air-conditioning unit, or heating system on or off. The dimming of automobile headlights, the setting of the diaphragm of a camera, and the correct exposure for colour prints may be accomplished automatically by connecting a photocell or other light-responsive device directly to the machine in question. Related examples are the remote control of position (servomechanisms) and the speed control of motors (governors). It is emphasized that in such cases the machine could function by itself; a more useful system is obtained, however, by letting the measuring device communicate with the machine in either a feed-forward or feedback fashion.

Control of large systems

More advanced and more critical applications of control concern large and complex systems the very existence of which depends on coordinated operation using numerous individual control devices (usually directed by a computer). The launch of a spaceship, the 24-hour operation of a power plant, oil refinery, or chemical factory, the control of air traffic near a large airport, are well-known manifestations of this technological trend. An essential aspect of these systems is the fact that human participation in the control task, although theoretically possible, would be wholly impractical; it is the feasibility of applying automatic control that has given birth to these systems.

Biocontrol

The advancement of technology (artificial biology) and the deeper understanding of the processes of biology (natural technology) have given reason to hope that the two can be combined: man-made devices may be substituted for some natural functions. Examples are the artificial heart or kidney, nerve-controlled prosthetics, and control of brain functions by external electrical stimuli. Although definitely no longer in the science-fiction stage, progress in solving such problems has been slow, not only because of the need for highly advanced technology but also because of the lack of fundamental knowledge about the details of control principles employed in the biological world.

Mathematical Formulation of Control Theory

Control Theory is a field of applied mathematics that is relevant to the control of certain physical processes and systems. Although control theory has deep connections with classical areas of mathematics, such as the calculus of variations and the theory of differential equations, it did not become a field in its own right until the late 1950s and early 1960s. After World War II, problems arising in engineering and economics were recognized as variants of problems in differential equations and in the calculus of variations, though they were not covered by existing theories. At first, special modifications of classical techniques and theories were devised to solve individual problems. It was then recognized that these seemingly diverse problems all had the same mathematical structure, and control theory emerged.

The systems, or processes, to which control theory is applied have the following structure. The state of the system at each instant of time t can be described by n quantities, which are labeled x1(t), x2(t), . . . , xn(t). For example, the system may be a mixture of n chemical substances undergoing a reaction. The quantities x1(t), . . . , xn(t) would represent the concentrations of the n substances at time t.

At each instant of time t, the rates of change of the quantities x1(t), . . . , xn(t) depend upon the quantities x1(t), . . . , xn(t) themselves and upon the values of k so-called control variables, u1(t), . . . , uk(t), according to a known law. The values of the control variables are chosen to achieve some objective. The nature of the physical system usually imposes limitations on the allowable values of the control variables. In the chemical-reaction example, the kinetic equations furnish the law governing the rate of change of the concentrations, and the control variables could be pressure and temperature, which must lie between fixed maximum and minimum values at each time t.
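
This structure can be sketched for an invented two-substance reaction in which a single control variable u (say, temperature) must stay between fixed limits at every instant; the kinetic law and all constants are assumptions for illustration only.

    # A controlled system: rates of change depend on the state and on a
    # bounded control variable, integrated by a simple Euler scheme.

    U_MIN, U_MAX = 0.0, 2.0            # physical limits on the control

    def rates(x1, x2, u):
        """Assumed kinetic law giving the rates of change of the state."""
        dx1 = -u * x1                  # substance 1 consumed faster at higher u
        dx2 = u * x1 - 0.5 * x2        # substance 2 produced from 1, then decays
        return dx1, dx2

    x1, x2 = 1.0, 0.0                  # initial concentrations
    dt = 0.01
    for step in range(500):
        requested = 1.0 if step < 250 else 3.0   # second value exceeds the limit
        u = min(max(requested, U_MIN), U_MAX)    # so it is clipped to U_MAX
        dx1, dx2 = rates(x1, x2, u)
        x1 += dx1 * dt
        x2 += dx2 * dt
    print(f"final concentrations: x1={x1:.4f}, x2={x2:.4f}")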

Systems such as those just described are called control systems. The principal problems associated with control systems are those of controllability, observability, stabilizability, and optimal control.

The problem of controllability is the following. Given that the system is initially in state a1, a2, . . . , an, can the controls u1(t), . . . , uk(t) be chosen so that the system will reach a preassigned state b1, . . . , bn in finite time? The observability problem is to obtain information about the state of the system at some time t when one cannot measure the state itself, but only a function of the state. The stabilizability problem is to choose control variables u1(t), . . . , uk(t) at each instant of time t so that the state x1(t), . . . , xn(t) of the system gets closer and closer to a preassigned state as the time of operation of the system gets larger and larger. Probably the most prominent problem of control theory is that of optimal control. Here, the problem is to choose the control variables so that the system attains a desired state and does so in a way that is optimal in the following sense. A numerical measure of performance is assigned to the operation of the system, and the control variables u1(t), . . . , uk(t) are to be chosen so that the desired state is attained and the value of the performance measure is made as small as possible.

To illustrate what is meant, consider the chemical-reaction example as representing an industrial process that must produce specified concentrations c1 and c2 of the first two substances. Assume that this occurs at some time T, at which time the reaction is stopped. At time T, the other substances, which are by-products of the reaction, have concentrations x3(T), x4(T), . . . , xn(T). Some of these substances can be sold to produce revenue, while others must be disposed of at some cost. Thus the concentrations x3(T), . . . , xn(T) of the remaining substances contribute a "cost" to the system, which is the cost of disposal minus the revenue. This cost can be taken to be the measure of performance. The control problem in this special case is to choose the temperature and pressure at each instant of time so that the final concentrations c1 and c2 of the first two substances are attained at minimal cost.
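
Although controllability is posed here for general systems, it admits a simple computable test in the special case of linear systems with constant coefficients, x' = Ax + Bu: the classical rank condition, which the sketch below applies to an invented example. The system is controllable exactly when the matrix formed from the blocks B, AB, . . . , A^(n-1)B has rank n.

    # Kalman rank test for controllability of a linear system x' = A x + B u.
    import numpy as np

    A = np.array([[0.0, 1.0],
                  [0.0, 0.0]])    # arbitrary example system
    B = np.array([[0.0],
                  [1.0]])

    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])   # B, AB, ..., A^(n-1) B
    ctrb = np.hstack(blocks)            # controllability matrix
    print("controllable:", np.linalg.matrix_rank(ctrb) == n)   # True here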

The control problem discussed here is often called deterministic, in contrast to stochastic control problems, in which the state of the system is influenced by random disturbances. The system, however, is to be controlled with objectives similar to those of deterministic systems.
