
About the Author

Ben Vosloo was born in the Empangeni district, Natal, on 4 November 1934. After completing his schooling in Vryheid, he went to the University of Pretoria, where he majored in political science and economics, taking the BA and MA degrees with distinction. After serving as a teaching and research assistant, he obtained a Ph.D. degree in 1965 at Cornell University, Ithaca, New York. On his return to South Africa, Dr Vosloo began his long association with the reform process in the fields of constitutional change and educational reform. He served as Professor of Political Science at the University of Stellenbosch for 15 years. He was inter alia a member of two direction-setting commissions: the Erika Theron Commission concerning constitutional reform and the De Lange Commission on educational reform. He published widely in academic and professional publications in the fields of management science, political science and development issues. He held office as a founding member of a number of academic and professional associations, such as the S A Political Science Association, the S A Institute for Public Administration and the S A Institute of International Affairs. During his academic career, Prof. Vosloo received several meritorious scholarships and academic awards. Ben Vosloo started his “second” career in 1981 when he was appointed as the founding Managing Director of the newly formed Small Business Development Corporation. He steered the SBDC to its successful track record and its unique position of prominence as a private sector-led development institution (1981 to 1995).
In recognition of his work, Dr Vosloo was made Marketing Man of the Year (1986) and Man of the Year by the Institute of Management Consultants of Southern Africa (1989), given the Emeritus Citation for Business Leaders by the Argus Newspaper Group (1990) and the Personnel Man of the Year award by the Institute of Personnel Managers (1990), named one of the Business Times Top Five Businessmen (1993), and named by “Beeld” as one of South Africa’s Top 21 Business Leaders of the past 21 years (1995). He acted as co-author and editor of a trend-setting publication on entrepreneurship (HSRC Publishers, Pretoria 1994) and was awarded an Honorary Doctorate by the University of Pretoria in December 1995. In 1996 Ben Vosloo started his “third” career. He initially served as a business consultant on strategic policy matters and later became involved in export marketing in the USA, Europe and Asia. He obtained permanent resident status in Australia in the category “Distinguished Talents” and eventually became an Australian citizen in 2002. He is now retired and resides in North Wollongong, NSW.


Political-Economic Trends Around the World
by W B (Ben) Vosloo

INDEX

Preface
Introduction
    Nature’s Endowments
    Human Action
    The Landes Paradigm
    Divergent Patterns of Growth and Development
    The Focus of this Survey
    References
1   The Footprint of the British Empire
    The British Imperial Legacy
    Impact of Migration Patterns
    The Imperial Reach
    Britain’s Domestic Political Life
    The British Political Economy in the 20th Century
    Conclusions
    References
2   The USA – Mankind’s Best Hope
    Early Building Blocks
    Political Credo
    Individualism and Self-Reliance
    The Enduring American Dream
    Demographic Patterns
    Political Party Rivalry
    Free Enterprise vs Government Intervention
    Smaller Government vs Big Government
    Failures of the Bush Era
    Sources of the Global Financial Crisis of 2008
    The Obama Era
    Alarm Bells
    References
3   The Lure of Social-Democratic Market Economies
    The Emergence of Social-Democratic Economies
    Basic Characteristics
    Problem Areas
    Case Study 1: The Scandinavian Model
    Case Study 2: – From Warfare to
    Case Study 3: The French Dirigiste Model
    Conclusions
    References
4   Russia – Totalitarian Communism to Bureaucratic Autocracy
    Profile of the Russian Federation
    Marxist-Leninist Communism
    The Collapse of the Romanov Empire
    The Rise of Stalin
    The USSR’s Totalitarian Dictatorship
    The Aftermath of World War II
    De-Stalinization

    The Rise of the Soviet Bloc
    The Transformation of the Soviet Bloc
    The Collapse of the USSR
    Challenges of Transition and Transformation
    Yeltsin’s Uphill Struggle
    Privatisation
    The End of the Yeltsin Era
    Putin’s Russia
    References
5   The Promise of Latin America
    The Amerindians
    The Colonial Powers
    The Slaves
    Indigenismo
    Cultural Integration
    Regionalism
    Ideological Trends
    Democrats versus Authoritarian Populists
    Economic Trends
    Case Study 1:
    Case Study 2:
    Case Study 3: Colombia
    Case Study 4: Cuba
    Conclusions
    References
6   The Plight of Sub-Saharan Africa
    Prehistoric Origins
    Settlement Patterns
    European Spice Traders
    The Slave Trade
    The Colonial Scramble for Africa
    The Legacy of Colonialism
    Bad Government
    Self-serving Bureaucracy, Corruption and Nepotism
    Misconceived Development Strategies
    Absence of an Indigenous Modern Sector
    Official Neglect of Indigenous Entrepreneurship
    Socio-Cultural Constraints
    Economic Challenges
    South Africa’s Exceptionalism
    Confronting the African Dilemma
    References
7   The Constraints of the Islamic World
    The Islamic Religion
    Muhammad and the Koran
    Pillars of the Faith
    Trends in Islamic Doctrine
    Twentieth Century Developments


    The Arab World in the Twenty-first Century
    Non-Arab Muslim States
    Islamic Statehood
    Islamic
    Islamic Politics
    Islam’s Global Networks
    Islam and the West
    Prospects
    References
8   The Indian Enigma
    Ecological Setting
    Constitution and Government
    Population
    Early History
    Buddhism
    Hinduism
    Islam’s Penetration
    The British Conquest of India
    The Legacy of British India
    Independence and Partitioning
    India’s Cultural Diversity
    Indian Politics
    India’s Economy
    The Handicap
    Burdens on Business
    ’s Drawbacks
    Intergroup Conflict and Violence
    The Impact of the World Recession
    International Perspective
    Prospects
    References
9   The of
    ’s Post-War Recovery Template
    Industrial Development and Export Promotion
    Encouraging Savings and Investments
    Balancing Market Forces and
    Equal Opportunity, Upward Mobility and Political Stability
    Education, Training and Technology
    An Effective Entrepreneurship Culture
    Effective Business Networking
    Work Ethic and Non-Disruptive Labour
    Integration of Tradition and Modern Management Styles
    Low Dependency Ratios
    Reconstruction of Singapore under Lee Kuan Yew
    The Malaysian Experience
    Japan’s Regression After 1995
    Impact of the Financial Crisis of 1997-98
    The New Millennium Fluctuations
    Conclusions
    References

10  China – the Emerging Giant
    Historical Background
    ’s Reforms
    with Chinese Characteristics
    The Tiananmen Square Clampdown
    Deng’s Nanxun Campaign
    Deng’s Legacy
    Hong Kong’s Crucial Role
    The Impetus of
    Cutting the State-Owned Sector
    Rapid Growth
    The Tangled Web of Business Relationships
    Banking
    Business
    Trade Patterns
    Foreign Acquisitions
    Demographic Patterns
    Centralised Government
    Civil Rights
    , and Pollution
    Strategic Issues
    China and the Global Financial Crisis of 2008
    Conclusions
    References
11  Australia – the Lucky Country
    Patterns of Migration
    , and
    Manufacturing
    Service Industries
    Economic Performance
    The “Fair Go” Model
    Industrial and Labour Relations
    Regulation of Finance
    The Role of the Public Service
    The Rudd Deficit
    The Aftermath of the 2008/09 Downturn
    Appraisal
    References
12  Future Political-Economic Challenges
    Curbing and Pollution
    Curtailing Big Government
    Downsizing the
    Safeguarding Democracy
    Rebalancing Global Economic Growth Patterns
    Managing the Risks of Contagion
    The Way
    References

Preface

The impetus to write about political-economic trends around the world emerged towards the completion of my earlier manuscript on “Understanding Economic Trends”. It became clear that economic trends are essentially associated with other trends occurring in the world of politics. The main arena of politics in today’s world is the nation-state. Each nation-state, in turn, is characterised by its own peculiar geography and history: the interaction of its natural endowments and its human inputs. Hence it could be argued that the political and economic life of a country is intertwined like the genetic strands of a DNA sequence. These strands interact within the framework of each country’s natural resources (such as climate, soil and water) and its human inputs (such as institutions, traditions, culture and demography). These factors then became the obvious template for analysing political-economic trends around the world with a specific focus on selected countries. Practical constraints led to the exclusion of several other interesting case-studies.

The comparative analysis of societies dates back to the pioneering work of Aristotle, based on his observations of ancient Greek city states. He produced a ground-breaking taxonomy of governmental systems based on the scope of participation in government, combined with the objectives sought by those who held the power of government. He considered a mixture of monarchy, aristocracy and democracy to be more in tune with the common good than their degenerate forms, which he labelled as dictatorships, oligarchies and anarchies. Aristotle also maintained that forms of government tend to evolve over time. Monarchies tend to involve aristocratic participation, which then tends to degenerate into oligarchies which, in turn, are replaced by democracies. Eventually democracies degenerate into mob-ruled anarchies until they are replaced by dictators who, in time, set themselves up as monarchies.
Although Aristotle’s matrix cannot be considered as a template for prediction, his scheme of analysis continues to provoke thought. For many centuries afterwards, scholars focused mainly on abstract ethical questions arising out of the exercise of governmental power and authority. It is only since the 17th century that scholars turned their focus of inquiry to the empirical world around them – exploring the ways in which contemporary societies operate in terms of the rules, customs, practices, structures and relationships involved in the exercise of power and authority.

The origins of the term “Political Economy” reach back to the French and British founding fathers of economic analysis in the 17th to 19th centuries: Montchrétien (1615), Quesnay (1760), Adam Smith (1776), Ricardo (1800), Malthus (1800), Bentham (1845), Cairnes (1850) and others. These pioneers laid the groundwork for “Political Economy” as an attempt to explain, within the existing framework and assumptions of society, how contemporary society is operating in terms of the production and distribution of goods and services: creating wealth and allocating rents, wages and profits. A better understanding of these relationships could lead – in the words of John Maynard Keynes – to “... the emancipation of the mind”.

In preparing a manuscript of this nature and scope, one inevitably has to rely on a variety of sources, as listed in the “References” at the end of each chapter. To simplify and expedite the completion of the manuscript, I dispensed with the common practice of providing footnotes. Instead, references to listed sources were incorporated in the text. In chapters where I relied extensively on a particular source, additional reference was made within the relevant sub-sections – not only to recognise my indebtedness to specific authors, but also to encourage readers to explore the particular trend of analysis in greater detail at source.
A special word of gratitude goes to my wife, Madalein, for typing and proof-reading my long-hand written manuscript. She not only endured long periods of time-consuming writing, but also served as a sounding board to weed out verbosity and to enhance clarity. The author must take sole responsibility for the remaining shortcomings in the final product.

Introduction

Within the analytical framework of social science, the determinants of human achievements are interpreted as a combination of nature-nurture factors. “Nature” normally refers to the particular combination of inherent qualities belonging to a person by birth: talents, abilities, instincts, characteristics, disposition and tendencies. “Nurture” normally refers to the non-genetic external influences that modify, nourish, educate, train or condition individuals after their birth. People’s accomplishments in the many spheres of life are determined by the interaction of their innate potential with the opportunities coming their way, whether structured, spontaneous or by chance. Some people of great potential have limited opportunities; others may not have the talents to exploit their opportunities or may simply squander their chances. Some are very fortunate. In the case of nations, countries or regions, similar forces are at work. A country’s economic fortunes are determined by a combination of natural endowments and human action, manifested by the interaction of its geography and its history.

Nature’s Endowments

The world is strewn with examples of nature’s inequality of “given” factors: latitude, climate, rivers and lakes, topography, mean temperatures, humidity, seafronts, resources, or soil quality. Nature’s unequal distribution of its favours is not easily remedied by human action, but humans can make a difference. On a map of the world in terms of product or income per head, the rich countries lie in the temperate zones, particularly in the northern hemisphere; the poor countries in the tropics and semi-tropics. With a few notable exceptions, equatorial countries are largely stifled by problems associated with a low standard of living and a short life expectancy.

The world shows a wide range of temperature patterns reflecting location, altitude and the declination of the sun. These differences directly affect the rhythm of activity of all species. Animals have adapted and evolved in their own way. Mankind generally avoids the extremes – unless driven by greed to exploit minerals, or assisted by modern heating or cooling technology. In general the discomfort of heat exceeds that of cold. Year-round heat tends to encourage the proliferation of insects and parasites. Water distribution is also of critical importance for human habitation. Regular and predictable rainfall promotes the cultivation of food crops. Recurrent floods and droughts are serious constraints on agricultural development. It is no accident that settlement and civilisation followed the main rivers of the world: the Nile, the Volta, the Indus, the Tigris and Euphrates, the Ganges, the Rhine, the Volga and the Mississippi.

Western Europe is a good example of the favourable conditions existing in the temperate zone. The privileged European climate was largely a gift of the Gulf Stream, rising in the tropical waters and then working its way in clockwise rotation, bearing heat and rich marine life.
This geographical good fortune gives western Europe warm winds, gentle rain, water in all seasons and low evaporation. Though not idyllic, these factors favoured good crops and big, dense hardwood forests. Europe’s climate is more equable along the Atlantic and becomes more “continental” as one moves east toward the Polish and Russian steppes, with wider extremes of both moisture and temperature. Along the Mediterranean coast, the temperatures are kind, but rain is sparser and the soil yields less. Olive trees and grapes do better than cereals, and pasture pays more than agriculture. Throughout its history, Europe knew famine and disease, long waves of cooling and warming, pandemics and bad crops. Yet Europeans kept a rich diet in dairy products, meat and animal proteins. They grew taller and stronger while staying relatively free of worm infestations. Healthier Europeans lived longer and worked closer to their potential than communities living in tougher environments. By comparison with many other communities, Europeans were very lucky. (See David Landes, The Wealth and Poverty of Nations, London: Little Brown & Co., 1998, pp.17-22)

China ranks as one of the most successful human settlements in the world. With some 7 percent of the earth’s land area, it supports some 21 percent of the world’s population. For more than 2000 years, the peoples at the eastern end of the Asian steppes exchanged nomadic pastoralism for the higher yields of sedentary agriculture. Their leaders evidently saw the link between numbers, food and power. The Han people, as they called themselves, settled along the Yellow River and its branches, where they cultivated a succession of grain crops. As they moved south into the Yangtze basin and beyond, they found that the wetter, warmer climate, mild winters and long summers permitted double cropping: winter wheat and summer rice in submerged paddies. They kept animals for ploughing, hauling and as mounts for the army – and pigs as their primary source of meat. Sheep and dairy products were largely unknown. In the 17th and 18th centuries they added new plants from distant lands: maize, potatoes, sweet potatoes and yams. A labour-intensive, water-intensive model became an important feature of Chinese development.

The spread of substances obtained by mining has played a crucial role in economic growth around the world: bringing regional development, trade and export growth. These substances occur in nature. Sometimes they comprise inorganic material, such as minerals, of definite chemical composition, or aggregations of inorganic materials such as metal-bearing ores or rocks. Other natural products may be of fossiliferous organic origin, such as asphalt, hydrocarbons or coal. Because deposits of metal in rock are rare and difficult to extract, it took centuries before anyone worked out how to remove the material and then to work it into something useful. In time, someone found a place where there was enough metal-bearing ore or rock to be worth carrying to places where it could be heated in kilns to melt the metals contained in the ore.
Once they found a way to pour and collect the metal, the process called smelting and casting was discovered, which made it possible to extract larger amounts of metal from the ore. All sorts of items, such as tools, ornamental objects and jewellery, could then be made of metal. Metals such as copper and gold were easy to work into jewellery, but they made poor tools. The solution was to combine metals to make an alloy that was hard-wearing. Mixing copper and tin produced bronze, which was tough, easy to work and could be sharpened. Liquid metal can also be cast in a mould, and casting became popular because it made it easy to produce all sorts of complex shapes. Since hammering hardened the metal, that method was used to make objects like tools and weapons.

Archaeologists have established that the use of copper was developed in Asia, the Balkans and Iberia, where the metal was available in abundance, around 9000 BC. By 6000 BC, smelting and casting had developed in these areas. With the development of better trade routes, knowledge of metal-working gradually spread to other surrounding areas. By 2000 BC, bronze was widely used in Asia for everyday tools and weapons. The importance of bronze working led historians to call this period the Bronze Age. But bronze did not reach Australia, South America or many parts of Africa. In such places people may have used gold or copper occasionally; they mostly made do with stone technology. Bronze was a useful metal, but not as hard as stone. Then around 1300 BC, some metalworkers in the Middle East discovered iron. Iron-working gradually spread throughout the Middle East and into Southern Europe. Iron weapons were used by empire builders such as the Hittites of Anatolia to conquer new territory. The Greeks used iron weapons to build colonies around the Mediterranean, and in India the use of iron made metal technology widely available.
It enabled the Celtic people of Europe to protect their hill fortresses with iron swords during the Hallstatt period. Archaeologists have found metalwork and coins made in the La Tène period (450-100 BC) in Europe.

Human Action

The history of the world records the amazing progress of humankind, from the Stone Age to the Space Age. Looking into humankind’s development reveals the ideas, abilities and processes that created the modern world within the framework of available natural resources.

Civilisation today represents how far humankind has developed since the appearance of the first humans, or hominids, in prehistoric times. By trial and error people acquired the knowledge and skills that would allow them to survive: which plants and fruits to eat, how to make weapons to hunt animals and protect themselves, how to live safely in family groups, how to develop special skills in a co-operative lifestyle, how to plant seeds and herd animals, and how to establish permanent settlements. The process of civilisation gradually emerged as villages developed into towns and then into cities. Rulers with strong support conquered nearby regions and brought them under their control. Civilisation started at different times and blossomed at different tempos in various parts of the world. Some areas, such as the great plains of North America and some regions of the Middle East, Far East and Africa, did not develop civilisations because they could not be easily farmed. Soil types, distance from water resources and climate all affected the nature of the civilisations that emerged in any particular area.

Warfare, exploration and the constant search for raw materials developed as trade increased between chieftaincies or principalities. New forms of warfare and weapons continued to develop as peoples such as the Greeks, Romans and Vikings journeyed through and around Europe as well as west toward North America. The Chinese explored eastern Asia and the Polynesians roamed the vast Pacific Ocean. The Mongols dominated Central Asia and from there penetrated South Asia and East Asia, spreading the Muslim religion. From the 1500s, exploration and conquest became major factors in increasing the wealth of several European countries: Portugal, Spain, the Netherlands and Britain. These countries created trading networks that reached across the globe. Explorers from these countries created maps of most of the world and probed into the unknown territories of North and South America, Africa and Asia.
Traders, soldiers and priests followed in their footsteps – and empires were built and eventually lost.

The Landes Paradigm

David Landes, in his remarkable historical survey called The Wealth and Poverty of Nations, examines the various factors that could possibly explain the divergent economic outcomes of the process of development in different societies. Some were much more successful than others. Landes acknowledges the importance of material factors such as climate, latitude, location and resources, but attaches much more importance to “nonmaterial” factors such as values (culture) and institutions. He further points out that such concepts as “values” and “culture” are not popular with economists, who prefer to deal with quantifiable (or more precisely, definable) factors, but says “... life being what it is, one must talk about these things...”.

On the basis of his survey of the experience gained in many countries in the course of history, Landes outlined what he called the “ideal case” – the society theoretically best suited to pursue material progress and general enrichment. He cautioned that this does not necessarily mean “better” or “superior”: it simply means “... one fitter to produce goods and services”. (See Landes, op.cit., pp.215-219) Landes drew up a list of “ideal-typical” characteristics or standards a “growth-and-development” society would have to comply with. Such a society would be one that:

“1. Knew how to operate, manage, and build the instruments of production and to create, adapt, and master new techniques on the technological frontier.
2. Was able to impart this knowledge and know-how to the young, whether by formal education or apprenticeship training.
3. Chose people for jobs by competence and relative merit; promoted and demoted on the basis of performance.
4. Afforded opportunity to individual or collective enterprise; encouraged initiative, competition and emulation.
5. Allowed people to enjoy and employ the fruits of their labour and enterprise.” (Landes, op.cit., p.217)

Landes then argues that these standards imply certain corollaries: gender equality (in order to double the pool of talent); no discrimination on the basis of irrelevant criteria (race, sex, religion, etc.); also a preference for scientific (means-end) rationality over magic and superstition (irrationality). He remarks that the tenacity of superstition in an age of science and rationalism is surprisingly common: it even beats fatalism. It is a resort of the hapless and incapable in the pursuit of good fortune and the avoidance of bad. It is also a psychological support for the insecure. Hence the persistent recourse to horoscopic readings and fortune telling.

David Landes also compiled a list of measures which “the ideal growth-and-development” government would adopt. Such a government, he suggests, would for example do the following:

“1. Secure rights of private property, the better to encourage saving and investment.
2. Secure rights of personal liberty – secure them against both the abuses of tyranny and private disorder (crime and corruption).
3. Enforce rights of contract, explicit and implicit.
4. Provide stable government, not necessarily democratic, but itself governed by publicly known rules (a government of laws, rather than of men). If democratic, that is, based on periodic elections, the majority wins but does not violate the rights of the losers; while the losers accept their loss and look forward to another turn at the polls.
5. Provide responsive government, one that will hear complaint and make redress.
6. Provide honest government, such that economic actors are not moved to seek advantage and privilege inside or outside the marketplace. In economic jargon, there should be no rents to favour and position.
7. Provide moderate, efficient, ungreedy government. The effect should be to hold taxes down, reduce the government’s claim on the social surplus, and avoid privilege.” (See Landes, op.cit., pp.217-218)

Landes adds further corollaries to embellish the “ideal society”. The ideal society would be honest: honesty would not only be enforced by law, but rest on a generally held belief that honesty is right (also that it pays), and people would live and act accordingly. The society would also be marked by geographical and social mobility. People would move about as they sought opportunity, and would rise and fall as they made something or nothing of themselves. This society would value new as against old, youth as against experience, change and risk as against safety. It would not be a society of equal shares, because talents are not equal; but it would tend to a more even distribution of income than is found with privilege and favour. It would have a relatively large middle class. This greater equality would show in more homogenous dress and easier manners across class lines.

Conceding that no society on earth has ever matched this ideal paradigm, Landes admits that “... it is designed without regard to the vagaries of history and fate and the passions of human nature.” Landes again: “... the most efficient, development-oriented societies of today, say those of East Asia and the industrial nations of the West, are marred by all manner of corruption, failures of government, private rent-seeking.” Landes claims that this paradigm nevertheless highlights the direction of history – that it outlines the virtues that have promoted economic and material progress. It remains to be seen to what extent development patterns around the world today show a resemblance to the historical trends implied by the Landes paradigm.

Divergent Patterns of Growth and Development

Britain was the first industrial nation to come close to the model of a “growth-and-development” society. It had the ability to transform itself and adapt to new things and ways of doing things. In particular, England had the precocity to increase the freedom and security of its people and to open its doors to migrants with knowledge and skills, such as Dutch, Jewish and Huguenot refugees. Many newcomers were merchants, craftsmen and old hands of trade and finance, and they brought with them their networks of religious and family connections.

The Industrial Revolution started in Britain, then changed the world and the relations of states to one another. The goals and tasks of political economy were transformed. The world was now divided between “... a front-runner and a highly diverse array of pursuers.” Britain became a commercial power of considerable potential and the principal target of emulation from the beginning of the 18th century. While Germany was still a collection of squabbling principalities and France was recovering from the turmoil of the Revolution, the British Empire was streaking ahead. It took the quickest of the European “follower countries” more than a century to catch up – and to surpass it. (See Table 1)

Table 1 Estimates of Real GNP per Capita (Selected Countries in 1960 US Dollars)

              1830   1860   1913   1929   1950   1970
…              240    400    815   1020   1245   2385
Canada         280    405   1110   1220   1785   3005
…              225    320    885    955   1320   2555
France         275    380    670    890   1055   2535
Germany        240    345    775    900    995   2750
…              240    280    455    525    600   1670
Japan          180    175    310    425    405   2130
Netherlands    270    410    740    980   1115   2385
…              225    325    615    845   1225   2405
Portugal       250    290    335    380    440    985
Russia         180    200    345    350    600   1640
Spain            -    325    400    520    430   1400
…              235    300    705    875   1640   2965
…              240    415    895   1150   1590   2785
UK             370    600   1070   1160   1400   2225
USA            240    550   1350   1775   2415   3605

(Based on figures provided by David Landes, op.cit., p.232)

The Focus of this Survey

In this survey the concept “political” refers to the ideas, institutions and processes involved in taking authoritative decisions and actions applicable to the governance of society as a whole. Binding governmental decisions and actions are taken in relation to the allocation of rights, privileges and obligations in a society. It encompasses the ethical, institutional and dynamic aspects involved in the governance of society. Political systems around the world range along a wide spectrum from brutal authoritarian dictatorships to representative constitutional democracies. The word “politics” comes from the Greek polis, meaning city-community, which in those days served as the most sovereign and inclusive association of people. The polis was considered the ideal setting for the good life in a well-organised community. Since the close of the Middle Ages, the “nation-state” became the principal mode of social organisation on the world scene. The word “state” comes from the Latin word status, meaning condition or way of existence. The modern meaning of the word “state” was fixed and popularised by Machiavelli in his famous treatise called The Prince. Since then the word “state” is commonly used in connection with a social organisation endowed with the capacity of exerting and controlling the use of force (power) over certain people within a given territory.

The concept “government” is related to the art or skill of steering and control. The Greek word for the steersman of a ship was kybernetes, which is the root word for “governor” or “government”. The government of the day carries the authority and wields the power of the state. It consists of all the persons, institutions and agencies through which the policies of the state are expressed and implemented.

The “economic” component of societal life refers to the allocation of scarce resources to satisfy needs in the production, distribution and consumption of goods and services that people use to achieve a certain standard of living. The “economy” of a country comprises the productive activity of its agriculture, mining, manufacturing and services sectors. The productive output of these sectors mainly depends upon the amount and quality of the labour and capital employed, the availability of raw materials, the utilisation of technology and the mobilisation of entrepreneurial and management skills.

The “socio-cultural” parameters refer to the complex network of interactions between individuals and between groups within societies: their customs, beliefs, morals, habits, store of knowledge and ways of doing things. These characteristics are acquired simply by being members of society: by living together. The impact of these socio-cultural characteristics or phenomena cannot be easily quantified or validated by rigorous empirical research methods. But it is simply a matter of observation that socio-cultural factors have real consequences for all aspects of societal life. Socio-cultural determinants must be taken into account to explain the success or failure of economic strategies, policies, programmes and systems.
The many countries and regions covered by this survey are intended to provide a broad canvas to trace the factors and trends affecting political economies around the world towards the end of the first decade of the 21st century. It attempts to explain – within the relevant physical conditions, socio-cultural parameters, institutional frameworks and belief systems – how these societies are conducting the “ordinary business of life”. Inspired by the Landes paradigm tracing the causes of the wealth and poverty of nations, it proceeds from the premise that a country’s wellbeing is determined by a combination of material and human factors. The impetus for the exploration of the prevailing trends around the world was provided by the severe impact of the economic downturn of 2008/09. Of particular significance is the interaction of natural endowments and human action as reflected in the interplay of a country’s geography and history. What are the key determinants of sustainable economic growth? Do successful countries have specific features in common? What are the nature, causes and consequences of financial crises? How can crisis situations be managed and avoided in the future? What are the critical problem areas? What are the future political-economic challenges facing the modern world?

References

Landes, D. (1998) The Wealth and Poverty of Nations, London: Little, Brown & Co.
McIntosh, J. & Twist, C. (2001) Civilizations – Ten Thousand Years of Ancient History, London: BBC Worldwide Ltd.


1 The Footprint of the British Empire

At the beginning of the 17th century, the British Isles were unremarkable in many ways: economically, culturally, politically and strategically. Yet three hundred years later, Great Britain had acquired the largest empire the world had ever seen. It encompassed forty-three colonies in five continents. It held sway over around one quarter of the world’s land surface and roughly the same proportion of the world’s population – some 444 million people in all lived under some form of British rule. In the course of empire building the British had robbed the Spaniards, copied the Dutch, beaten the French and plundered the Indians. Britain led the “Scramble for Africa” and had also been in the forefront of another “Scramble” in the Far East (Malaya and chunks of Borneo and New Guinea) and a string of islands in the Pacific: Fiji, the Cook Islands, the New Hebrides, the Phoenix Islands, the Gilbert and Ellice Islands and the Solomons. For many years this vast British Empire featured on maps of the world hung in schools all over the world, showing its territory coloured an eye-catching red. Not only were millions of people all over the world conditioned by the red-covered state of affairs, but even the British themselves began to assume that they had the God-given right to rule the world – as J.L. Garvin put it in 1905, “… an extent and magnificence of dominion beyond the natural”. The extent of Britain’s Empire could be seen not only in the world’s atlases and censuses – Britain was also the world’s banker, investing immense sums around the world. By 1914 the gross nominal value of Britain’s stock of capital invested abroad was £3.8 billion, between two-fifths and a half of all foreign-owned assets.
(See Niall Ferguson, 2004, Empire – How Britain Made the Modern World, London: Penguin Books, pp.240-244) For many generations, the British Empire relied heavily on the export of its people, capital and culture – particularly its language, which has become the lingua franca of today’s world. Its influence is still carried along by the predominant socio-economic-political lifestyle of the English-speaking countries. It is characterised by its reliance on free enterprise, private ownership, competitive markets, comparatively limited government intervention, a legal system heavily laden with Common Law, representative parliamentary government, constitutional democracy and a pragmatic philosophical orientation. The English-speaking countries can be described as a very broad church encompassing the UK and most of its former colonies: the USA, Canada, Australia, New Zealand and Ireland. The original template also rubbed off on South Africa, India, Singapore, Nigeria, Zambia, Lesotho, Botswana and Swaziland, among others.

The British Imperial Legacy

Wherever the British extended their sphere of influence, there were certain distinctive features of their own society that they tended to disseminate: the English language, English forms of land tenure, Scottish and English banking, the Common Law, Protestantism, team sports, the limited or “night-watchman” state, representative assemblies and the idea of liberty. (See Ferguson, op.cit., pp.xi-xxviii)

The Westminster System

The British system of government dates from the Middle Ages and was gradually transformed from monarchical absolutism, first to limited democracy and eventually to a fully-fledged constitutional democracy based on popular participation. This system became widely known as the “Westminster System”, which comprises the following:
- a hereditary monarch with ceremonial powers as head of state;
- a parliamentary system of executive power in which a prime minister and his cabinet are responsible to a popularly elected parliament within a competitive party system;
- an independent judiciary, appointed by the head of state as advised by the cabinet;
- an electoral system based on popular franchise in single-member constituencies;
- public responsibility and accountability by way of free elections at constitutionally based regular intervals;
- freedom of speech and of political association and activity by individual citizens; and
- decision-making by majority vote.

With varying degrees of success the Westminster system has been exported to several other countries. It pre-eminently took root in former British colonies – with the exception of the USA, which adopted a republican form with an elected head of state. In many cases the system was adapted in certain important respects to meet the requirements of local conditions. The parliamentary system of executive power has been retained by all countries that still maintain a constitutional monarchy. But several countries have introduced electoral systems based on some form of proportional representation (e.g. Australia). The unitary system of government has not shown itself to be a suitable answer to the problem of diversity. In countries with extensive land areas, such as Australia and Canada, the idea of a centralized unitary state was replaced with a decentralized federal system on the model of the USA. In the case of India, the problem of diversity was dealt with by partitioning the pre-independence Indian colony into the independent states of India and Pakistan. Ireland was also allowed, after a period of violent conflict, to become an independent country, with Northern Ireland remaining part of the United Kingdom. The Westminster system as such is based on many constitutional conventions which evolved over several centuries. Even in its classic form it is currently undergoing a gradual transformation to accommodate regional sentiments in Wales and growing national sentiments in Scotland.

“Anglobalization”

This process is well described in Niall Ferguson’s Empire – How Britain Made the Modern World. It began with the competitive scramble for global markets as British pirates scavenged from the earlier empires of Portugal, Spain, Holland and France. The British were imperial imitators – following in the footsteps of the Portuguese, the Spanish and particularly the Dutch. British colonization was a vast movement of peoples, unlike anything before or since. Some left the British Isles in pursuit of religious freedom, some in pursuit of political liberty, some in pursuit of profit. Others had no choice, but went as “indentured labourers” or as convicted criminals. Between the early 1600s and the 1950s, more than 20 million people left the British Isles to begin new lives across the seas. No other country came close to exporting so many of its inhabitants. An important role was played by voluntary, non-governmental organizations such as evangelical religious sects and missionary societies. All contributed to paving the way for the expansion of British influence. The British came close to establishing the first “effective world government”. This was achieved with a relatively small bureaucracy roping in indigenous elites. The use of military force was a key element of British imperial expansion. The central role of the British navy was evident around the world: first in its pirate role and later also as transporter of soldiers to the far ends of the world. Niall Ferguson argues that the British imperial legacy is not just “racism, racial discrimination, xenophobia and related intolerance”, as is sometimes claimed, but that there is also a strong credit side:
- the triumph of capitalism as the optimal system of economic organization;
- the Anglicization of North America and Australasia;
- the internationalisation of the English language;
- the enduring influence of the Protestant version of Christianity; and, above all,
- the survival of parliamentary institutions.
Winston Churchill gave a more lyrical expression to these sentiments: “What enterprise that an enlightened community may attempt is more noble and more profitable than the reclamation from barbarism of fertile regions and large populations? To give peace to warring tribes, to administer justice where all was violence, to strike the chains off the slave, to draw the richness from the soil, to plant the earliest seeds of commerce and learning, to increase in whole peoples their capacities for pleasure and diminish their chances of pain – what more beautiful ideal or more valuable reward can inspire human effort?” (Ferguson, op.cit., p.xxvii)

Impact of Migration Patterns

Auguste Comte, a 19th-century French philosopher and one of the pioneers of modern sociology, said that “demography is destiny”. It is certainly, at the very least, highly important as a motor of socio-cultural change. To understand the impact of the migration of millions of people from Europe to the New World or to Australia, New Zealand and also South Africa, it is instructive to imagine what the outcomes would have been without the arrival of the migrants. In the wake of the waves of immigrants came a host of influences: values, belief systems, traditions, ideals, knowledge, skills, practices, institutions and other ways of doing things. The migrants moved for a variety of reasons. Some were driven by religious considerations (either persecution or aspiration), some were forcibly transported as slaves or (equally perniciously) as exiled “convicts”, but the largest proportion were attracted by the expectation of a better life. The scale of 17th- and 18th-century migration from the British Isles was unmatched by any other European country. From England alone, total net emigration between 1601 and 1701 exceeded 700,000. These large movements of population transformed the cultures and complexions of whole continents. The fingerprints of this flow of culture and institutions cannot be easily expunged. The scale of British migration can be brought into perspective by considering that the total world population in 1700 stood at less than 1 billion. (See Niall Ferguson, Empire – How Britain Made the Modern World, Penguin Books, 2003, Chapter 2). In Elizabethan England, a Vagrancy Act passed in 1597 stated that “Rogues, Vagabonds and Sturdy Beggars” were liable “to be conveyed to parts beyond the seas”. Those parts were the British colonies in North America. Prisoners condemned to death by English courts could have their sentences commuted to deportation.
Some of the deportees were “common criminals”, but political dissidents were also disposed of in this fashion – a punishment frequently used against Irish dissidents. Prisoners were sent to Virginia or Maryland to work on plantations until the growing number of slaves exported from Africa replaced them. In 1718, Britain passed a Transportation Act, which established a seven-year banishment to North America as a possible punishment for lesser crimes, and also stated that capital punishment could be commuted to banishment. Thus systematic exile became part of England’s justice system. It was thought to be advantageous for everybody involved: it was considered more humane than execution or flogging; it “offered” the possibility of “moral rehabilitation” and freedom afterwards; it rid the population of dangerous individuals; it deterred others tempted to commit crimes; and it provided workers where there was a great want of servants. Transportation to North America continued for nearly 60 years, and only ceased when the American colonies revolted in 1776. By that time over 40,000 criminals had been shipped to the New World. This English practice “of emptying their jails into our settlements” was roundly rejected by the colonial elite and fed into the grievances behind the Declaration of Independence in 1776 and the eventual formation of the United States of America in 1787. When America refused to accept further shipments of convicts, England’s prisons began to overflow. For several decades prisoners were accommodated in decommissioned warships, called “hulks”, but after James Cook’s discoveries the problem was solved by sending convicts to New South Wales, Australia. The last convict ship left England for Australia almost a century later, in 1868. By then 161,021 men and 24,900 women had been sent as convicts to Australia. Over the next two centuries millions of free settlers emigrated to Australia from the British Isles.
(See Russell King ed., Origins – An Atlas of Human Migration, Marshall Editions, 2007, pp.95-105)

Apart from convicts, between half and two-thirds of British migrants going across the Atlantic did so under contracts of indentured servitude. In the 17th century, around 70 percent went to the West Indies, where most of the sugar trade was based – despite the fearful mortality of these tropical islands. After 1700, people opted for the more temperate climate and more plentiful land of North America. The Scots and the Irish accounted for nearly 75 percent of all British settlers in the 18th century – men from the impoverished fringes who had least to lose and most to gain from selling themselves into servitude. According to Niall Ferguson, this flight from the periphery gave the British Empire its enduring Celtic tinge. (Ferguson, 2004, op.cit., p.71) A cursory glance at the pattern of immigration to the major immigrant-receiving countries that were former British colonies illustrates the scope of the British cultural influence. South American countries, in contrast, illustrate the Latin influence of Spain, Portugal and Italy in particular.

Ethnic Composition of Immigration Up to 1940: Selected Countries

USA %
British 11.1
Irish 11.6
Canadian 8.0
German 15.6
Scandinavian 6.2
Austro-Hungarian 10.5
Russian 8.5
Italian 12.0
Other European 8.3
Asiatic 3.5
Other 3.2

Canada %
British 37.0
USA 37.0
Other 26.0

Argentina %
Spanish 32.2
Italian 47.4
French 4.0
German 2.0
British 1.0
Other 13.0

Australia %
British 80.5
New Zealand 4.5
Other 16.5

Brazil %
Portuguese 29.0
Spanish 14.0
Italian 34.0
German 4.5
Other 18.5

(Figures quoted by Austin Ranney, The Governing of Men, Holt, Rinehart and Winston, N.Y., 1966, p.145)

The important point is that the people who settle a country first leave the biggest imprint. The language of the new nation, its laws, its institutions, its political ideas, its literature, its customs, its precepts – all are primarily derived from the mother country. Hence the British Anglo-Protestant culture defined these new nations more than any other influence. The Anglo core predisposed them to a greater emphasis on property rights and individualism; the Protestant core predisposed them to hard work. The melting pot may subsequently have had more ingredients poured into it. But the pot itself is of a recognisable Anglo-Protestant design.

The Imperial Reach

Apart from the important role of migration patterns, several additional factors played a key role in the expansion of the British Empire. A review of the key determinants of the nature and scope of Britain’s imperial reach is, in many ways, a survey of its colonial history. It involves the early take-off impetus given by the Dutch deal, the founding of a British foothold on the Indian sub-continent, the British victory in its contest with France, the founding of a British-friendly offshoot in North America, the impetus of the slave trade, the role of the Royal Navy and Gunboat Diplomacy, the contribution of communications technology, the groundwork of the missionary societies, the Anglicisation of Southern Africa as a gateway to Africa, the capturing of South Africa’s diamonds and gold, and the underpinning role of Britain’s financial and fire power.

The Dutch Deal

The Dutch East India Company, a joint stock company, was founded in 1602 to continue the successful trade relationships with the Far East conducted by various smaller commercial enterprises. This trade made Amsterdam the most sophisticated and dynamic of European cities. Holland was the first country to set up a central bank empowered to sell government bonds to private investors. The Dutch had trading posts in Sumatra, the Moluccan Islands, on India’s east coast, at Surat in north-west India, at Jaffna in Ceylon and at Chinsura in Bengal. The trade was based on private enterprises taking their risks and paying taxes on their profits. In 1600, Elizabeth I gave a charter to the Company of Merchants of London, including a 15-year monopoly over East Indian trade. Several other companies were given charters, e.g. the Hudson’s Bay Company (1670) to trade in Canadian furs and the South Sea Company (1711) to trade with Spanish America. Trade in those times was considered a zero-sum game: each party could only gain at the expense of the other. This was the essence of the mercantilist age. Competition led to violent conflict. Between 1652 and 1674 the English fought three wars against the Dutch in order to gain control over the main sea routes out of Western Europe, not only to the East Indies, but also to the Baltic, the Mediterranean, North America and West Africa. The English increased the size of their merchant navy and insisted that goods from English colonies be carried in English ships. Until 1667 the Dutch came out on top; their fleet even sailed up the Thames, destroying docks and ships. But the English population was almost three times the size of the Dutch and its economy was considerably larger too. In 1688 a powerful oligarchy of English aristocrats decided to get rid of James II by inviting the Dutch Stadholder, William of Orange, to invade England (unopposed) and depose him in what was subsequently called the “Glorious Revolution”.
In effect, it was an Anglo-Dutch business merger. Dutch businessmen became major shareholders in the English East India Company, while Prince William of Orange became, in effect, England’s new Chief Executive. The Anglo-Dutch merger did not change religion or politics fundamentally, since both countries were Protestant and had parliamentary government. What the English could learn from the Dutch was modern finance. In particular, the Anglo-Dutch merger of 1688 introduced the British to a number of crucial financial institutions that the Dutch had pioneered. In 1694 the Bank of England was founded to manage the government’s borrowings as well as the national currency – similar to the successful Amsterdam Wisselbank founded 85 years before. London also imported the Dutch system of a national public debt, funded through a stock exchange on which long-term bonds could be bought and sold. This allowed the government to borrow at significantly reduced interest rates, which made large-scale projects – like waging wars – easier to afford. As Daniel Defoe observed: “Credit makes war, and makes peace; raises armies, fits out navies, fights battles, besieges towns; and, in a word, it is more justly called the sinews of war than the money itself … Credit makes the soldier fight without pay, the armies march without provisions … it is an impregnable fortification … it makes paper pass for money … and it fills the Exchequer and the Bank with as many millions as it pleases, upon demand.” (Quoted by Ferguson, op.cit., p.23) Sophisticated financial institutions had made it possible for Holland not only to fund its worldwide trade, but also to protect it with first-class naval power. Henceforth, these institutions – including double-entry bookkeeping – were to be put to use in England on a much larger scale. Both the English and the Dutch could now operate more freely in the East. A deal was done which effectively gave the East Indies and the spice trade to the Dutch, leaving the English to develop the newer Indian trade.
The market for Indian textiles swiftly outgrew the market for spices. By the 1720s the English company was overtaking its Dutch rival in terms of sales. By 1745 the Dutch company’s profits were in decline.

The Founding of a British Indian Sub-continent

Following the Dutch deal, the English East India Company’s trade focus shifted towards India’s most populous cities. On the Coromandel coast, Fort St. George was built, around which the city of Madras would arise. In 1661, England acquired Bombay from Portugal as part of the dowry Catherine of Braganza brought when she married Charles II. In 1690, the Company established a fort at Sutanuti on the river Hugli, which later became Calcutta. Because of prevailing wind directions, a round-trip to India by sailing ship took around 12 months. This meant communications with Company employees in distant trading posts were slow, which allowed employees a good deal of latitude. This included trading for their own accounts, which gave rise to the emergence of the “interloper”. These interlopers gradually expanded their trade volumes in partnerships with Indian businessmen. Political power initially remained centred on the Red Fort in Delhi, seat of the Mughal Emperor, whose ancestors had swept into India from the north in the 16th century and ruled over much of the sub-continent ever since. By 1700 India had a population twenty times larger than that of the UK and produced a quarter of the world’s output. The East India Company constantly had to rely on wheeling and dealing in order to appease the Emperor and to pay bribes to Mughal officials. Raids by Afghan-Turkic armies from the north eroded the Delhi Mughal Emperor’s power. In time his deputies in the provinces started carving out kingdoms for themselves and a period of internecine warfare followed. To protect its own assets and fortifications, the East India Company began to raise its own regiments from the local warrior castes: Telugus, Kunbis, Rajputs and Brahmins. Equipped with European weapons and disciplined by English officers, these private armies underpinned the emergence of a new dominant force under the auspices of the East India Company.
The Act of Union of 1707 united England and Scotland to produce the United Kingdom of Great Britain. But at the time, France had an economy twice the size of Britain’s and a population three times its size. France had also reached across the seas to the world beyond Europe. There were French colonies in North America – Louisiana and Quebec – and the rich sugar islands of Martinique and Guadeloupe in the Caribbean. In 1664 France set up its own East India Company with its base at Pondicherry, just south of the British settlement at Madras. The French adversary, in the struggle for global mastery, remained a real challenge for Britain throughout the 18th century. In 1756 the Seven Years War broke out between Britain and France but soon engulfed most European countries as well. At stake was the question: would the world be French or British? British superiority rested on its fleet and shipyards. Britain’s Prussian ally contained the French armies in Europe, while the Royal Navy dominated the high seas and conquered the French colonial forces. Under Prime Minister William Pitt, the British recruited 55,000 seamen and increased the fleet to 105 ships, compared with 70 on the French side. The British economic lead lay in shipbuilding and gun founding. During the war France lost its foothold in India and French rule in Quebec came to an end. France’s Spanish ally was driven out of Cuba and the Philippines. Britain also drew great advantage over France by virtue of its Dutch-taught financing systems: much of its war effort was financed by selling low-interest bonds to the investing public. But the most important advantage was that India would be British, not French. That gave Britain what for nearly 200 years would be both a huge market for British trade and an inexhaustible reservoir of military manpower. It was much more than the “jewel in the crown” – it was a whole mine of jewels. The Indians allowed themselves to be divided – and, ultimately, ruled by the British.

The Founding of a British North America

Spanish and Portuguese colonisation of the Americas preceded the first British settlers by almost a century. Most of their efforts centred on the quest for gold and silver in Central and South America. The first British colonial settlement started in the 1580s, first around Chesapeake Bay (later called “Virginia”) and then in the 1620s and 1630s by the “Pilgrim Fathers” in the “New England” area, south of Hudson Bay. Both Iberian and British settlers brought European diseases such as smallpox, measles, influenza and typhus, and later African diseases such as yellow fever, which killed a large proportion of the indigenous populations. Colonisation took the form of a “public-private partnership”. The crown set out the rules with royal charters, but it was up to private individuals to take the risk and put up the money. Where the profit motive was not enough to guarantee success, religious fervour provided the motivation to persevere. The Massachusetts Bay Company, founded in 1629, was booming thanks to timber, fur and farming produce. As only about 25 percent of Spanish and Portuguese migrants in the early days were female, the male encomenderos took sexual partners from the indigenous or the slave population. The result within a few generations was a substantial mixed-race population of mestizos and mulattos. The British settlers, in contrast, brought their wives and children, thus preserving their culture and identity. British colonisation was a family affair. The fact that the British colonists were, in effect, taking away land from indigenous groups by their settlement was rationalised by the concept of terra nullius. In the words of the great political philosopher John Locke, a man only owned land when he had “mixed his labour with [it] and joined it to something that is his own”. In other words, land that is not fenced or farmed does not belong to anyone. It was clear that Native Americans would only be tolerated if they could fit into the emerging British economic order. If they resisted expropriation, they would and should, in the words of John Locke, “… be destroyed as a Lyon or a Tyger, one of those wild Savage Beasts, with whom Men can have no Society or Security”. (See Niall Ferguson, 2004, pp.64-65) It is claimed that the estimated 560,000 American Indians on the Eastern Seaboard in 1500 had more than halved in number by 1700.
The estimated 2 million indigenous people in the whole of the present-day USA in 1500 declined to 750,000 by 1700, and by 1820 only 325,000 were left. In terms of British constitutional law, the disappearance of the “traditional owners” did not mean colonial land belonged to nobody: it belonged to the Crown, and the Crown could then grant these lands to meritorious subjects. During the Stuarts’ reign, colonisation and cronyism went hand in hand. Charles I granted Maryland to the heirs of Lord Baltimore; Charles II gave Carolina to eight of his close associates, and gave New York to his brother James, the Duke of York, following its capture from the Dutch in 1664. In settlement of a debt of £16,000 owed to Admiral Sir William Penn, who had captured Jamaica, Charles II granted the admiral’s son William Penn ownership of what became Pennsylvania. Penn thereby became the largest private landowner in British history, with an estate well over the size of Ireland. To finance the building of his capital, Philadelphia, Penn sold blocks of land of 5,000 acres each for £100 and designed the now familiar American grid system of streets. He promoted emigration, also from continental Europe. Between 1689 and 1815, over a million Europeans moved to mainland North America and the British West Indies. The American War of Independence was triggered by the question of taxes – the right of the British parliament to levy taxes on American colonists without their consent: “no taxation without representation” was their slogan. Rebellion fever turned into outright revolution. On 4 July 1776, the Declaration of Independence, authored by Thomas Jefferson, was adopted by representatives of the thirteen secessionist colonies. The trans-Atlantic British subjects became American “Patriots”. The declaration was a challenge not only to royal authority, but to the traditional values of a hierarchical society – anti-monarchism with a strong tilt towards republicanism.
It was couched in the language of the Enlightenment: in terms of natural rights, including the right of the individual to judge for himself what will secure or endanger his freedom. The British army eventually surrendered at Yorktown in 1781. The American Constitution was drawn up in 1787 and subsequently ratified by all thirteen former colonies.

The Slave Trade

Between 1662 and 1807 nearly 3.5 million Africans came to the New World as slaves transported in British ships. That was over three times the number of white migrants in the same period. By 1700, Liverpool was sending 33 shipments a year on the triangular trip from England to West Africa to the Caribbean. John Newton, who wrote the hymn “Amazing Grace”, was the captain of a slave ship. In 1740, James Thomson wrote the words of the famous song “Rule Britannia”, with its stirring line “Britons never, never shall be slaves”. By 1770, Britain’s Atlantic empire seemed to have found a natural equilibrium. The triangular trade between Britain, West Africa and the Caribbean kept the plantations supplied with slave labour. The American colonies kept them supplied with victuals. Sugar and tobacco flowed back to Britain, a substantial proportion for re-export to the Continent. The profits from these New World commodities oiled the wheels of the Empire’s move to a new frontier – the Asian commerce. Anti-slavery sentiments amongst colonists in the USA were first openly expressed as early as the 1680s. The Quakers of Pennsylvania were speaking out against slavery, arguing that it violated the biblical injunction of Matthew 7:12: “… do unto others as you would have others do unto you”. But it was only in the 1740s and 1750s that the “Great Awakening” in America spread such scruples into wider Protestant circles. By the 1780s the campaign against slavery had gained enough momentum to sway legislators. Slavery was abolished in Pennsylvania in 1780 – an example followed by a number of other northern states. In Britain the slave trade was abolished in 1807. Henceforth, convicted slave-traders faced transportation to Britain’s penal colony, Australia. Once the slave trade was abolished, slavery itself could only wither until, in 1833, it was made illegal in British territory. The slave owners of the Caribbean were compensated with the proceeds of a special government loan. Abolition did not put an end to the trans-Atlantic slave trade or to slavery in the Americas: slavery continued on a smaller scale in the southern United States, and on a far larger scale in Brazil.
All told, around 2 million more Africans crossed the Atlantic after the British ban, most of them to Latin America. However, the British put in a lot of effort to disrupt this continuing traffic. A British West African Squadron of 30 warships was sent to patrol the African coast from Freetown, with bounties offered to naval officers for every slave they intercepted and liberated. In 1840 the Royal Navy intercepted no fewer than 425 slave ships off the West African coast. It is an irony of history that the same navy that was deployed to abolish the slave trade was also instrumental in expanding the narcotics trade.

The Royal Navy and Gunboat Diplomacy Since the early 19th century, Britain had been pulling ahead of her rivals as a pioneer of new technology. The Industrial Revolution was well underway, harnessing the power of steam and the strength of iron to transform the world economy and the international balance of power. By 1860, steam-driven, armour-plated warships fitted with breech-loading, shell-firing guns were rolling out by the score from British shipyards. These were crewed by 40,000 sailors, making the Royal Navy the biggest in the world. Simultaneously, thanks to the productivity of her shipyards, Britain owned close to a third of the world’s merchant tonnage. The Royal Navy was available for a wide variety of geopolitical objectives. If the British wished to force the Chinese to open their ports to British trade – even to exports of Indian opium – they could simply send the navy’s gunboats. Although the objective was ostensibly presented as a crusade to introduce the benefits of free trade to the Chinese, it is doubtful whether the “Opium Wars” of 1839-42 and 1856-60 would have been fought if the export of opium, prohibited by the Chinese authorities, had not been so crucial to the finances of British rule in India. The income the East India Company earned from its monopoly on the export of opium was roughly equal to the amount it had to remit to London to pay interest on its huge debt. The opium trade was also crucial to the Indian balance of payments. As a result of the First Opium War, Britain acquired Hong Kong as a point of entry to the vast Chinese market.


Communications Technology Steamships reduced the sailing time across the Atlantic from 4-6 weeks to only 2 weeks in the mid-1830s and just 10 days in the 1880s. Between the 1850s and the 1890s, the journey from England to Cape Town was cut from 42 to 19 days. Steamships got faster as well as bigger: in the same period average gross tonnage roughly doubled. The telegraphic link also revolutionised overland and undersea communications. By 1880 there were altogether 97,568 miles of cable across the world’s oceans, linking Britain to India, Canada, Australia and Africa. This technology shrank the world and made control of it easier. Railway technology also played a crucial role. The British built railways throughout the Empire, constructed by private sector companies. The first track of 21 miles opened in India in 1853. Within less than 50 years, track covering more than 24,000 miles had been laid. The Victorian revolution in communications achieved the annihilation of distance. It had important military implications because it was in India that the British kept the bulk of their offensive military capability. In 1881 the British Indian Army numbered 69,647 British troops and 125,000 native troops. As a proportion of all British garrisons in the Empire, the Indian army accounted for 62 percent. In effect, India was an English barrack in the Oriental Seas. Until 1914, Indian troops served in more than a dozen imperial campaigns, from China to Uganda. Gurkhas, Sikhs and Mussulmen were fighting for Britain.

The Missionary Societies The missionary societies were, in effect, aid agencies bringing both spiritual and material assistance to the less developed world. The Society for Promoting Christian Knowledge (1698) and the Society for the Propagation of the Gospel (1701) were initially exclusively concerned with the spiritual welfare of British colonists and servicemen posted overseas. By the late 18th century the movement changed its focus to the conversion of indigenous peoples to Christianity. The London Missionary Society was formed in 1795. In 1799 followed the Anglican Church Missionary Society, founded to “propagate the knowledge of the Gospel among the Heathen”. In the same period Scottish missionary societies were established in Glasgow and Edinburgh – all infused with a strong sense of philanthropy. Much of the initial effort was focused on Africa, first around Freetown, and later on the Maoris of New Zealand. Subsequently missionary stations were developed by the London and Wesleyan Societies in the Eastern Cape at Bethelsdorp near Port Elizabeth and at Kuruman in Bechuanaland. Dr. John Philip, the head of the London Missionary Society in South Africa, and his son-in-law, John Fairbairn, editor of the “South African Commercial Advertiser”, exerted strong influence on the formulation of colonial policy and on public opinion in England. David Livingstone was sent to Kuruman, which became the base of his exploration of the interior of Southern Africa below the Great Lakes. Throughout Africa, the missionary societies played a significant role in paving the way for British enculturation. In Southern Africa today, close to 60 percent of black Africans claim to associate themselves with the Christian faith.

The Anglicisation of Southern Africa Britain first invaded the Dutch settlement at the Cape in 1795. At this time the distant outpost at the Cape of Good Hope became strategically important for Britain in terms of protecting her sea route to India – particularly in relation to the rise of Napoleon during the turmoil caused by the French Revolution. The Cape was subsequently returned to the Netherlands in 1803 and then re-captured in 1806. The second British occupation turned the remote, often neglected, refreshment station of the Dutch East India Company into a fully fledged British colony. At the time the advance of Britain as a world power bred a conviction amongst Englishmen that what was good for Britain was good for the world. W.W. Bird, Colonial Secretary, wrote in 1822 that “… nothing can be right or proper that is not English and to which he is unaccustomed”. In the Cape the British invasion was resented by the local European-descendant Dutch population. Descendants of an amalgam of primarily Dutch, German and French settlers over a period of five generations since 1652, these white South Africans (later known as Boers) did not speak English and bore no allegiance to the British Crown or the aspirations of the British people. They were highly independent-minded republicans with a strong partiality towards continental Europe. The bulk were established farmers; many were cattle graziers in outlying areas. The Dutch-speaking colonists were soon confronted with a comprehensive Anglicisation policy which made English the official language of the courts, government offices, official communications and schools. The English authorities even imported Calvinist Scottish clergymen to inculcate pro-British sentiments. By 1820 around 4,000 British settlers had been imported to strengthen the British component in the population. Apart from localised conflicts with nomadic KhoiKhoi tribes, a major confrontation was building up on the eastern frontier with a large southward migration of Bantu-speaking tribes. On the eastern seaboard were the Xhosa tribes, forming the southern tip of the Nguni group, being pushed further south by the Nguni mainstream, the Zulu tribes occupying areas further north-east along the eastern seaboard. The central highveld regions along the spine of the country were occupied by Sotho-speaking tribes who had been driven westwards by the militarily powerful Zulu tribes. Dissatisfied with the unrepresentative colonial government and with a growing sense of insecurity in the border areas, about 5,000 Dutch-speaking frontiersmen decided in 1836-1838 to trek on horseback and in wagons beyond the borders of the Cape Colony into the interior. After military confrontations with the Zulu and a series of negotiated settlements and treaties, the Boer pioneers established their own self-governing republics in Natal (1839), the Transvaal (1852) and the Orange Free State (1854). 
But the British Colonial Office pretended that the northern border of the Empire did not exist, and kept on annexing areas occupied by the Boer pioneers and claiming their allegiance. The subordination of the many black tribes to British colonial rule took many battles between the “Red Coats” and the black “impis”. The Xhosa and Zulu tribesmen, in particular, were not easy pushovers. On the eastern border of the Cape Colony the Colonial Office tried in vain to implement a segregation policy (based on the drawing of boundary lines documented in treaties) between the colonists and the black tribesmen. Cattle raids and border conflicts continued. From 1812 to 1852 no fewer than eight fully fledged border wars were waged between British soldiers and the Xhosa. Eventually, the Xhosa territory called the Transkei was annexed and the Xhosas became British subjects. Sir George Grey, transferred from New Zealand, became Governor of the Cape in 1854 to implement a “civilising policy”, which meant westernisation and an end to the traditional ruling power of the chieftains. They surrendered their power in exchange for a monthly salary. Other scattered tribes such as the Basutos and the Griquas were brought under treaty and placed in demarcated “reserves”. The British annexation of Natal in 1843 caused the scattered Boer frontiersmen to trek once more, across the Drakensberg mountains into the Orange Free State and the Transvaal. It was now left to the British forces to subordinate the Zulu empire. Under the influence of Theophilus Shepstone the Colonial Office set aside eight “locations” (reserves) to settle thousands of refugees who re-entered Natal after being driven away during the Mfecane era (the destabilisation of African communities in the period from the 1780s to the 1830s as a result of, inter alia, internecine strife and the conquests of Shaka Zulu). In 1864 a “Natal Native Trust” was established to guarantee black occupants’ rights of ownership inside reserve territory. 
Additional “mission reserves” were created, controlled by missionary societies. By the middle 1870s the British Colonial Office in London developed a strategy to incorporate the Boer republics together with the Cape and Natal colonies into a federal structure. Shepstone was sent to annex the Transvaal in 1877, but the Boer commandos were called up and in 1881 defeated the British army under General Colley at Majuba, regaining the ZAR’s independence. It was also decided that the Zulu military power constituted a danger to the federal strategy. Hence, in early 1879 a military campaign was launched against the Zulu. At Isandhlwana the British army under Lord Chelmsford was defeated by the Zulus (800 soldiers killed) and, despite a heroic defence of a missionary station at Rorke’s Drift by a handful of soldiers, the campaign failed. Only after reinforcements were mobilised were the Zulus defeated at Ulundi and British pride restored. The Zulu king, Cetshwayo, was captured and exiled to Robben Island at the Cape. The Zulu kingdom was divided into thirteen territories under appointed salaried chiefs. One of them was John Dunn, an Englishman who was said to have had 100 Zulu wives. In 1887 Zululand was annexed to the Crown and in 1897 incorporated into the Colony of Natal.

Capturing South Africa’s Diamonds and Gold The discovery of diamonds near Kimberley in 1866 and gold near Johannesburg in 1886 introduced a new phase in South Africa’s history and provided a major impetus for economic growth. The diamond and gold fields were situated in the undeveloped interior and required heavy capital investment: infrastructure and heavy equipment for deep-level mining. They also required large manpower resources as well as technical and management skills to extract, process and market the mineral finds. These factors were responsible for the rapid establishment of a rail system, the opening of coal fields for the generation of electricity, and the establishment of urban concentrations, commercial farming and manufacturing interests in the interior. All of this happened within a time-span of around two decades. (See Martin Meredith, 2007, Diamonds, Gold and War – The Making of South Africa, Johannesburg: Jonathan Ball, pp.247-470) The Boer republics now found themselves in the middle of several convergent forces. The diamond and gold fields contained some of the world’s largest deposits of these minerals. From the interior came large flows of migrant black job-seekers. From around the world came a multitude of fortune-seekers – some were pick-and-shovel diggers, others well-connected financial tycoons. From its commanding heights, the British Imperial government held the trump cards: financial resources and military power. The interaction of forces that played out over the next three decades coincided with what is generally known as the “scramble for Africa” by the major European powers. In Southern Africa, the United Kingdom gobbled up the lion’s share. The impresario of the unfolding drama was Cecil John Rhodes. 
Son of a clergyman in Bishop’s Stortford, Rhodes came to South Africa at the age of seventeen and proceeded to the Kimberley diamond fields where he soon associated with well-connected financiers and eventually established De Beers Consolidated Mines, the largest diamond miner in the world. He also established a foothold in Johannesburg’s gold fields and with his “Randlord” friends masterminded the onset of the Anglo-Boer War (1899-1902). This war destroyed the two Boer Republics: it burnt down 30,000 farmsteads, confiscated hundreds of thousands of livestock, herded women and children into concentration camps where, it is estimated, 27,000 died of malnutrition and disease, held around 31,000 Boer “combatants” as prisoners of war in camps as far afield as Bermuda and Ceylon. With the assistance of the British High Commissioner in Cape Town, Lord Milner, he broke the back of the Boer people and Anglicised southern Africa. Rhodes’ justification was unambiguous: “We are the first race in the world, and the more of the world we inhabit, the better it is for the human race.”

Financial Power and Firepower During the “Scramble for Africa” in the second half of the 19th century, the entire continent was brought under some form of European control. Roughly a third of it was British. According to Niall Ferguson, the key to the Empire’s phenomenal expansion in the late Victorian period was the combination of financial power and firepower. His research shows that most of the huge flows of money from Britain’s vast stock of overseas investments went to a tiny elite of, at most, a few hundred thousand people. At the apex of this elite was the Rothschild Bank, whose combined capital in its London, Paris and Vienna houses amounted to a staggering £41 million, making it by far the biggest financial institution in the world. The greater part of the firm’s assets was invested in government bonds, a high proportion of which were in colonial economies like Egypt and South Africa. Nor is there any question that the extension of British power into those economies generated a wealth of new business for Rothschild’s. Moreover, there were close relationships between the Rothschilds and the leading politicians of the day. Disraeli, Randolph Churchill and the Earl of Rosebery were all in various ways connected to the family, both socially and financially. Rosebery, who served as Foreign Secretary under Gladstone and succeeded him as Prime Minister in 1894, had been married since 1878 to Lord Rothschild’s cousin Hannah. Throughout his political career, Rosebery was in regular contact with male members of the Rothschild family. Their correspondence reveals the passing of investment tip-offs from the banker to the politician and suggestions for military manoeuvres from the banker to the political decision-maker; news of military actions or plans was exchanged before its disclosure in the public domain. In the late 1870s Gladstone himself, as well as Disraeli, invested heavily in Suez Canal shares and in the Ottoman Egyptian Tribute loan – all at discount prices – to great benefit when the military occupation of Egypt took place in 1882. (See Ferguson, op.cit., pp.285-286) The Maxim gun was an American invention, but Hiram Maxim always had his eye on the British market. He set up a workshop in London and invited the great and the good to demonstrations of his new weapon. Lord Rothschild joined the board of the Maxim Gun Company established in 1884 and his bank financed the merger of the Maxim Company with the Nordenfelt Guns and Ammunition Company. The Maxim guns were operated by a crew of four and could fire 500 rounds per minute. A force equipped with just five of these lethal weapons could literally sweep a battlefield clear with its hail of .45 inch bullets. 
The Maxim guns were used in battle against the Matabele, in the Sudan by Kitchener’s expeditionary force at Omdurman on the banks of the Nile, against the Boers (1899-1902) and in the two World Wars of the 20th century. As war correspondent, Churchill described the Maxim gun as “… that mechanical scattering of death which the polite nations of the earth have brought to such monstrous perfection”. In the South African diamond and gold fields, Cecil Rhodes, in retrospect, appears to have been a front man for the Rothschilds. By 1899 the Rothschilds’ stake in De Beers was twice that held by Rhodes. When Rhodes marched into Matabeleland in 1893 with the British South Africa Company’s private invasion force of 700 men, well supplied with Maxim guns, Rothschilds provided the financial backing. In 1895 the Jameson Raid on the Transvaal Republic was masterminded by Rhodes (at the time Prime Minister of the Cape Colony) with the full backing of his partners in London, as well as the full support of Chamberlain, the Secretary of State for the Colonies in Lord Salisbury’s cabinet. The only condition set by Chamberlain was reassurance that they were “working for the British Flag”. The conspirators would be armed with rifles and Maxim guns, purchased in Britain ostensibly “for Rhodesia”, shipped to the Cape, transferred to De Beers premises in Kimberley and then to the conspirators. Once in control of Johannesburg, they would declare a provisional government and dispatch a force to seize the government’s arsenal in Pretoria. The British High Commissioner would then intervene. A new era would begin. (See Martin Meredith, op.cit, pp.311-353) Jameson’s 100 “volunteers” were recruited in Cape Town from the “Duke of Edinburgh’s Volunteer Rifles”, a Cape Town regiment. Jameson’s invasion force of 800 men, with its Maxim guns, was rounded up by Boer forces near Johannesburg. 
Jameson was arrested and magnanimously released by the Boers for trial in Britain – where he served 3 months in prison and was later knighted. Both Rhodes and Chamberlain kept their interaction discreetly secret. The British Parliament set up a committee of inquiry into the Raid that was little more than a sham; Chamberlain himself sat on the committee. As summarised by Martin Meredith, “… the Rhodes conspiracy ended as it began: in collusion, lies and deceit”. Speaking in the House of Commons in May 1896, Chamberlain warned against the possibility of war: “A war in South Africa would be one of the most serious wars that could possibly be waged … It would be a long war, a bitter war and a costly war … it would leave behind it the embers of a strife which I believe generations would hardly be long enough to extinguish … [it] would have been a course of action as immoral as it would have been unwise”. Yet, Chamberlain himself was to preside over just such a war within three years, when he mobilised the British Empire’s force of 500,000 against 40,000 Boer farmers. On the outskirts of Bloemfontein stands a sombre monument to commemorate the Boer women and children who died in the concentration camps. Buried there are also the remains of a Cornish clergyman’s daughter, Emily Hobhouse, who led a campaign against the atrocities of Kitchener’s war effort. Speaking in the House of Commons, David Lloyd George declared: “A war of annexation … against a proud people must be a war of extermination, and that is unfortunately what it seems we are now committing ourselves to – burning homesteads and turning women and children out of their homes … the savagery which must necessarily follow will stain the name of this country”. (Quoted by Ferguson, op.cit., p.283) The critics argued that not only was imperialism immoral, it was a rip-off: paid for by British taxpayers, fought for by British soldiers, but benefitting only a tiny elite of fat-cat millionaires. Rhodes was depicted as an “Empire jerry-builder who has always been a mere vulgar promoter masquerading as a patriot, and the figure-head of a gang of astute Hebrew financiers with whom he divides the profits”. (Ferguson, op.cit., p.284)

Britain’s Domestic Political Life

Britain’s domestic political life is expressed through its constitutional arrangements, its political party structure and its ideological rivalries.

Constitutional Arrangements Britain’s constitutional arrangements are based on a unitary structure which results in a high degree of centralisation. There is a vibrant tradition of media freedom, and as a consequence public office-holders are held to account in the court of public opinion and through the activities of numerous organised interest groups. Elections are held at constitutionally determined regular intervals. The electoral system is based on single-member constituencies and simple majorities – colloquially called “the first-past-the-post system”. It is subject to clear-cut rules to ensure transparency, fairness and the legitimacy of election outcomes.

Political Party Rivalry In the UK, parties date back to the 18th century with the emergence of the “Whigs” and the “Tories”. The “Whigs”, initially under Walpole, consisted of groups of parliamentarians in favour of the extension of the parliamentary sphere of influence, and they became the forerunners of what is today called the “Liberal Democrats” in the UK. The “Tories”, on the other hand, advocated the maintenance of the prerogatives of the monarch, and they became the forerunners of the “Conservative Party”. The “Labour Party” emerged as the political arm of the trade union movement in 1900, and to this day the British Labour Party has shared headquarters with the Trades Union Congress. It is customary for Labour Party parliamentary representatives to be members of trade unions and for the trade unions to dominate the formation of the policies and the financing of the Labour Party.

Ideological Rivalries An ideology can generally be described as a mode of thought or a set of ideas in terms of which a programme of collective action or a set of collective arrangements is justified. Sometimes it is something people would like to see continued or, otherwise, would like to see come into being. Mostly, it is a rationalisation of something visualised or perceived, however distorted or idealised.

The most active ideological debates in the English-speaking world are the “liberal” versus “conservative”, the “capitalist” versus “socialist” and the “left-right” dichotomies. These dichotomies have deep historical and philosophical roots which require careful consideration.

Liberalism versus Conservatism The word “liberalism” became known in England in the early 19th century as a description for policies with a strong attachment to liberty or freedom and to the curbing of power. Because freedom is an equivocal word in the sense that there are different kinds of freedom, there are competing strains in liberal thought in different parts of the world. The English concept of liberalism expounded by John Locke and John Stuart Mill is a kind of “Whig” liberalism. It rests on an idea of liberty understood in terms of freedom from state interference in the actions of an individual. Thus Lockean liberalism leans to laissez faire, toleration, natural rights and limited sovereignty. The continental concept of liberalism as inspired by Jean Jacques Rousseau leans more to ancient Roman notions of libertas as participation in government – not as being left alone by government. As against Lockean liberalism, this kind of liberalism is more interested in the people as a community or nation (the German “Volk”). It is not individualist but organically collectivist (“étatiste”). These two competing strains of liberalism were often in conflict in political movements in England as well as on the Continent. The “Whig” type of liberalism took its inspiration from John Locke and the “étatiste” liberalism took its inspiration from Rousseau. Eventually, the Lockean “Whig” liberalism was challenged by a revisionist school called “social liberalism”. Under the influence of the continental “étatiste” brand of liberalism, its adherents wanted to enlarge the role of the state as an instrument of social improvement. This meant overthrowing, or at least revising, the traditional Lockean “Whig” definition of freedom as freedom from state interference. T.H. Green, the Oxford philosopher, argued that freedom should be understood “positively” – hence state measures to promote social welfare and education could be seen as measures to enlarge, not diminish, freedom. 
(See Cranston, M., 1966, A Glossary of Political Terms, The Bodley Head, London, pp.65-67) On the English parliamentary scene the Liberal Party was unable to survive the contradictions between new and old theories of freedom. The movement of “social liberalism” carried many of its adherents into the fuller socialism of the Labour Party, while champions of the old Lockean philosophy of freedom found that they had more in common with the Conservative Party. Conservatism as a political word came into currency in London during the 1830s, but what it describes is a much older conscious manner of thinking. It is perhaps the most common of human attitudes and has been turned into a political doctrine to challenge radical changes to the status quo brought about by deliberate and conscious manipulation. Edmund Burke’s Reflections on the Revolution in France, published in 1790, is generally accepted as the first articulate expression of conservative philosophy. Burke saw the present as a continuation from the long and decisive heritage of the past. Men bore a heavy responsibility to conserve much of that heritage as part of a divine order for generations to come. He insisted that social and political life is the outcome of a complex set of powerful forces that cannot be wished away with grandiose schemes. (See Cranston, M., op.cit., pp.20-24) The conservative, in the Burke tradition, is opposed to any root-and-branch meddling with society, whether from “left” or from “right”. The Conservative Party in Britain, the Republican Party in the USA and the Liberal Party in Australia tend to take a generally conservative attitude conveying a proud attachment to tradition, frequently containing a religious component. Modern political conservatives support the original kind of economic liberalism based on a limited role for the state, the maximisation of individual liberty, economic freedom, reliance on the market and decentralised decision-making. 
It emphasises the importance of property rights and sees government’s role as the facilitation and adjudication of civil society. It means less government, not more.

Capitalism versus Socialism This ideological dichotomy has been debated in the English-speaking world over the past three centuries. These debates permeated intellectual life and form the underpinnings of political affiliations – left of centre and right of centre. Adam Smith (1723-1790) is generally recognised as the first economist to articulate the central “principles” on which a liberal economic society is to be based, in his famous publication “An Inquiry into the Nature and Causes of the Wealth of Nations”. He offered certain ground rules for economic progress: regulation by competition and the market, not by the state; and an economic society in which each man, thrown on his own resources, laboured effectively for the enrichment of society. His ideas were later further developed and refined by David Ricardo (1772-1823) and Thomas Robert Malthus (1766-1834). With Adam Smith they formed the founding pioneers of “Economics” as a field of study. They became known as the “Respectable Professors of the Dismal Science”. Ricardo focused on the factors determining prices, rents, wages and profits. Malthus is particularly famous for his Essay on Population, which essentially maintained that the number of people who can live in the world is limited by the number that can be fed – a fact of life that is not properly appreciated even in the 21st century. Capitalism developed historically as part of the “individualist” movement in 18th century Britain and was transplanted later to North-Western Europe and North America. It is characterised by a number of basic traits: individual ownership, free enterprise, competition and profit. It is also historically connected to individualist democracy – a “one man one vote” system of majority rule seeking what Jeremy Bentham called “the greatest happiness of the greatest number”. (See Ebenstein, W. 
and Fogelman, E., 1980, Today’s Isms, Prentice-Hall, New Jersey, pp.147-152) Socialism as a mode of thought stands for a form of society in which economic activities are deliberately planned and controlled, on behalf of the community as a whole, by the state. It opposes free enterprise, in terms of which individual firms compete on their own initiative to supply goods and services. Instead, it aims at (1) public ownership of the means of production; (2) a welfare state to care for the needy; and (3) creating a society of abundance and equality through collective action based on the “general will” – also rooted in the majority decision. Triggered by the appalling conditions created by the Industrial Revolution, the origins of socialist thought can be traced to the humanitarianism of Christian ethics – compassion for the downtrodden, the poor, the suffering exploited. In addition, the ideas of Jean Jacques Rousseau and Francois Babeuf in pre-revolutionary France paved the way for the emergence of the idea that the “organic society” binds together the common good of all individuals. Karl Marx synthesised the ideas of Rousseau, Babeuf and others with his own ideas about historical materialism, the class struggle and the revolution of the proletariat. Marx, a German Jew born in Trier, spent most of his later life in London where he expanded his theories of “scientific socialism” and wrote Das Kapital and The Communist Manifesto. He unsuccessfully tried to organise the socialists into an international movement with the support of his friend and financier Engels. After his death his ideas inspired Lenin and Trotsky, who ultimately succeeded in 1917 in establishing a totalitarian form of Soviet Communism that lasted until 1989. (See Ebenstein and Fogelman, op.cit., pp.1-22) In England the main exponent of socialism was Robert Owen (1771-1858). He was a wealthy industrialist, supportive of Britain’s social, political and economic system, but he was deeply moved by the human suffering brought about by the industrial revolution. 
He was deeply concerned about the wretched condition of his employees and became associated with Jeremy Bentham, a renowned social reformer of the day. Bentham in turn had close contact with other famous names of the day, James Mill and his brilliant son, John Stuart Mill. Owen was opposed to “dole” programmes – the destitute simply being given money by government or charities. At his New Lanark mill, Owen introduced new management techniques, raised wages, encouraged trade unions, opposed the exploitation of women and child labour,

encouraged education and set up a company store selling goods at reduced prices. Within five years New Lanark’s profits improved, illustrating that the welfare of his workers could be reconciled with the profitability of the business. Owen must be considered the true founder of British socialism. He retired at 58 and travelled widely, including to the USA, to propagate his ideas. He later expanded his reforms to include communal living by investing his entire fortune in a project at New Harmony (Indiana, USA). It ultimately ended in failure. The socialist movement in Britain gained particular momentum during the last decades of the 19th century in response to the poverty and slums spawned by industrialisation and economic crises. Much of this reaction was articulated by the Fabians – Beatrice and Sidney Webb and George Bernard Shaw. These influential intellectuals sought to replace the “scramble for private gain” with the incremental instalment of “collectivism”. (See Gray, Alexander, 1951, The Development of Economic Doctrine, Longmans Green, London, pp.119-189, 293-330) In the 1930s British socialists found much common ground with the interventionist reforms of Franklin Roosevelt’s “New Deal”. Some members of the socialist intelligentsia in Britain found much rapport with Soviet communism, and fierce battles were fought to avoid a communist takeover of the British union movement. The Second World War itself vastly enlarged the economic realm of government: the government essentially took control of the economy and ran it for the duration of the war. The population rallied together and shared the experience of the “stress of total war”, turning the national economy into a common cause rather than an arena of class conflict. Even the royal family had ration books. These historical currents led to a rejection of Adam Smith, laissez-faire, and traditional 19th century liberalism as an economic philosophy. The concepts of “self-interest” and “profit” became morally distasteful. 
Clement Attlee maintained that private profit as a motive for economic progress was “a pathetic faith resting on no foundation of experience”. The Labour Party leaders promised to turn government in the post-war era into the protector and partner of the people and to take on responsibility for the well-being of the citizens to a far greater extent than had been the case before the war. The blueprint for the Labour Party strategy was the Beveridge Report, prepared by a government-appointed commission during World War II. William Beveridge, a former head of the London School of Economics, set out to slay the “five giants”: want, disease, ignorance, squalor and idleness. The report’s influence was global and far-reaching, changing the way not only Britain but the entire industrialised world came to view the obligations of the state vis-à-vis social welfare. (See Daniel Yergin and Joseph Stanislaw, The Commanding Heights – The Battle Between Government and the Market Place That Is Remaking the Modern World, New York: Simon & Schuster, 1999, pp.11-24)

The “Left-Right” Spectrum

The “left-right” ideological spectrum in political parties dates back to the seating arrangements in the semi-circular French legislative chamber after the French Revolution. The most radical, egalitarian elements sat on the left side, while the more conservative, aristocratic elements sat on the right. The more moderate middle-of-the-road groups occupied the centre seats. In the English-speaking world, the left-right spectrum is associated with the degree of state intervention advocated by political parties. To the left are parties that support expansive state intervention in the economy: more regulation and control, increased welfare services, a progressive tax system, higher taxes on the “top end”, deficit spending and pro-union labour relations. To the right are parties in favour of deregulation, fewer welfare entitlements, lower taxes, less bureaucracy, more individual choice and support for free enterprise. Left-wing parties, such as the Democratic Party in the USA and the Labour Parties in the UK, Australia and New Zealand, normally push the system towards the left of centre. Right-wing parties, such as the Conservative Party in the UK, the Liberal Party in Australia, the Conservative Party in Canada and the Republican Party in the USA, normally push towards right-of-centre positions. In the UK, Australia and New Zealand, the Labour and “conservative” parties have somewhat more clearly defined ideologies, although they also try to project “broker” party images. Labour Parties tend to derive support (in terms of voting and money) from working-class and trade union members, lower-status and lower-income persons, teachers and academics. The “conservative”-leaning parties normally draw most of their support from higher income groups, business circles, rural constituencies, persons with a Protestant religious affiliation and professionals.

The British Political Economy in the 20th Century

Britain had been the first Western country to industrialise, a process that started on a large scale around the 1760s. In the two centuries that followed, it was the only major industrial power that had not suffered a convulsive revolution, foreign conquest or civil war, unlike France and Germany. Its society rested on a Common Law tradition, arbitrated by judges, which upheld the rights of liberty and property and provided the legal framework within which the British created the first modern industrial society. This tradition continued throughout the nineteenth century as an effective legal setting for vibrant economic development.

The Legacy of the Colonial Era

For more than two hundred years, Britain played a central role in the financing and banking activities of the English-speaking world. By the beginning of the 20th century, its overseas investments exceeded those of France and Germany combined – most of them in the USA, Canada, New Zealand and Australia, but substantial proportions also in Latin America, Asia and Africa. No major economy before or since has held such a large proportion of its assets overseas. In 1914 only around 6 percent of British overseas investments were in Western Europe. Around 45 percent were in the United States and the major English-speaking colonies. As much as 20 percent were in Latin America, 16 percent in Asia and 13 percent in Africa. The bulk of its African investments were in South Africa; the bulk of the Latin American investments were concentrated in Argentina and Brazil. Britain’s overseas investments proved to be highly lucrative, giving higher returns than those from domestic manufacturing. The earnings from existing overseas assets consistently exceeded the value of new capital outflows. Between 1870 and 1913, total overseas earnings amounted to 5.3 percent of GDP a year. For many decades Britain was the major promoter of the principle of free trade. By the late 19th century, around 60 percent of British trade was with extra-European partners. This meant that on top of her huge earnings from overseas investment, other foreign earnings handsomely enhanced her balance of payments accounts: “invisible” items like insurance fees, shipping charges, commissions and agency fees. These capital flows enabled Britain to import much more than she exported. Between 1870 and 1914, the terms of trade moved by around 10 percent in Britain’s favour. Britain also set the standard for the international monetary system by coupling the value of its paper money to a fixed gold standard. By 1908 only a handful of countries were outside the gold standard.
The gold standard had become, in effect, the global monetary system – in all but name it was a sterling standard.

The Two World Wars and the Depression

The first major crisis facing the British economy was brought about by the First World War. A total of 702,410 British servicemen were killed, of whom 37,452 were officers. After the war there was a brief post-war recovery, but the evidence of industrial decay was omnipresent too. The fundamental weakness of Britain’s traditional export industries – coal, cotton and textiles, shipbuilding and engineering – was obvious. They were saddled with old equipment, old animosities and work practices, low productivity and chronically high unemployment. Around 1921 the economy experienced a painful downturn and contracted by around a fifth in a single year. Britain resumed her role as the world’s banker, but paying for the war led to a ten-fold increase in the national debt. Then came the American-led boom period of the “roaring twenties”. The American President, Calvin Coolidge, and the ascent of “Americanism” influenced much of the English-speaking world to accept limited government intervention and to create expanded scope for private enterprise. The prosperity was widespread, but it excluded certain older industrial communities, such as the textile industry. The expansion expressed itself in spending and credit. Millions acquired insurance, investments in shares and the benefits of a building boom. The heart of the consumer boom was in personal transport: the automobile. It brought freedom of medium- and long-distance movement for millions. The middle class was also moving into air travel. The electricity industry further fuelled the Twenties prosperity. With relative speed, industrial success transformed luxuries into necessities and spread them down the class pyramid. The affluence of the twenties also played a large role in the decline of radical politics and their union base. The speculative boom of the 1920s ended in October 1929. Due to the paucity of relevant data, it was not clear initially how bad things were, or how bad they were going to get. In January 1932, around 3 million people, close to 25 percent of the workforce, were out of work. The impact of the Depression on Britain was nevertheless milder than in the USA and Germany. Britain went off the gold standard in 1931, and sterling became the anchor of the world’s largest system of fixed exchange rates, the sterling area. Chamberlain adopted a system of “imperial preference” (preferential tariffs for colonial products) in 1932, which boosted trade within the Empire. During World War II Britain spent $30 billion, a quarter of its net wealth, on the war effort.
Foreign assets worth $5 billion were sold and foreign debt obligations of $12 billion were accumulated. In March 1941, the US Congress enacted the Lend-Lease Act which permitted the President to “sell, transfer, exchange, lease, lend or otherwise dispose of material to any country whose defence was deemed vital to the defence of America”. This enabled Roosevelt to send to Britain unlimited war supplies without charge. In practice Britain continued to “pay” for most of her arms by surrendering the remains of her export trade receipts to the USA. By 1945 exports were less than a third of the 1938 figure. In 1946 Britain spent 19 percent of her GNP on defence and large amounts on international relief programmes. It had to rely on a large post-war loan from the USA.

The Rise of Trade Union Power

In 1900 the trade unions created the Labour Party to promote legislation in the direct interest of labour and to oppose measures having an opposite tendency. It was not primarily Marxist, or even Socialist, but a form of what Paul Johnson called “parliamentary syndicalism”. The unions owned the Labour Party. They directly sponsored a hard core of Labour MPs and contributed about three-quarters of the party’s national funds and 95 percent of its election expenses. The party constitution, through a system of union membership affiliations expressed in block votes, made the unions the overwhelmingly dominant element in the formation of party policy. In 1906, Parliament passed the Trade Disputes Act, which gave unions complete immunity from civil actions for damages arising from acts committed by or on behalf of a trade union. In effect, it made unions impervious to actions for breach of contract, though the other parties to a contract, the employers, might be sued by the unions. It made a trade union a privileged body exempted from the ordinary law of the land. The Trade Union Act of 1913 legalised the spending of trade union funds on political objectives (i.e. the Labour Party) and laid down that union members with other party affiliations had to “contract out” of their political dues. In the 1970s, growing union power was exerted in various ways. The unions introduced new forms of “direct action”, including “mass picketing”, “flying pickets” and “secondary picketing”. They used these devices to destroy a Conservative Party government in 1974. The ensuing Labour Party government pushed through Parliament a mass of legislation extending union privileges: the Trade Union and Labour Relations Acts of 1974 and 1976 and the Employment Protection Acts of 1975 and 1979.
These Acts extended union immunity from tort actions to cases where unions induced other parties to break contracts, obliged employers to recognise unions and uphold “closed shops” (to the point where an employee could be dismissed without legal remedy for declining to join a union) and to provide facilities for union organisation. The effect of this legislation was to increase the number of “closed shop” industries and to push unionisation above 50 percent of the work force. It also removed virtually all inhibitions on union bargaining power. In the early months of 1979, under chaotic leadership, the uninhibited unions effectively destroyed their beneficiary, the Labour government. Its Conservative Party successor under Margaret Thatcher introduced minor abridgements of union privileges in the Employment Acts of 1980 and 1982. Excessive union legal privilege and political power contributed to Britain’s slow growth in three main ways:
- First, it promoted restrictive practices, inhibited the growth of productivity and so discouraged investment.
- Second, it greatly increased the pressure of wage inflation, especially from the 1960s onwards.
- Third, trade union demands on government had a cumulative tendency to increase the size of the public sector and the government share of GDP.
Britain had traditionally been a minimum-government state. The census of 1851 registered fewer than 75,000 civil employees, mostly customs, excise and postal workers, with only 1,628 manning the central departments of civil government – at a time when the corresponding figure for France (1846) was 932,000. In the century that followed, the proportion of the working population employed in the public sector rose from 2.4 percent to 24.3 percent in 1950. In the periods when the Labour Party was in power, the proportion of GNP accounted for by public expenditure rose to 45 percent in 1965, 50 percent in 1967, 55 percent in 1974 and 59 percent in 1975. After the Conservative Party won the election in 1979, public borrowing and spending were restrained.
This financial discipline, combined with the impact of the North Sea offshore oilfields (which made Britain self-sufficient in oil by 1980 and an exporter by 1981), stabilised the economy and raised productivity to competitive levels. By 1983 the British economy was slowly starting to recover. (See Paul Johnson, 1983, A History of the Modern World – from 1917 to the 1980s, London: Weidenfeld and Nicolson, pp.600-604)

Labour’s Leftist Big Government

After World War II the UK took a sharp turn to the left. The Conservative Party under Winston Churchill was voted out in 1945 and replaced by the Labour Party under Clement Attlee, which immediately started a comprehensive interventionist strategy. In its election campaign the Labour Party undertook to nationalise certain listed industries and services. In each case, it tried to explain why nationalisation was necessary. For power and light, water, telephone and telegraph, and other utilities, the criterion for nationalisation was the existence of a natural monopoly. The coal, iron and steel industries were considered to be so sick and inefficient that they could not be put on their feet except through nationalisation. The nationalisation of all inland transport by rail, road and air was proposed on the ground that wasteful competition would best be avoided by a co-ordinated scheme owned and managed by public authorities. The Bank of England was also proposed for nationalisation on the ground that its purpose was so obviously public. After its electoral triumph in 1945 the Labour Party methodically carried out its programme. It also introduced a National Health Service so that health and medical facilities could be made available to every person without regard to their ability to pay. Although no one was compelled to join the NHS, more than 90 percent of the population and of the doctors were brought under the scheme. The Labour Government also set up a comprehensive cradle-to-grave scheme of social security. The system provided protection against sickness, unemployment and old age – supplemented by maternity grants, widows’ pensions and family allowances. These programmes were subsequently augmented with the extension of educational opportunities at school and university levels.

Because Britain was much less affected by war damage, it emerged from the war as the biggest European economy. In the 1960s and 1970s it became apparent that Britain was suffering from some kind of wasting disease. The British share of world trade showed a continual decline. In 1955 Britain still generated 20 percent of world exports; by 1977 its share had declined to only 10 percent. In 1939 Britain’s standard of living was second in the world only to the USA’s. By 1980 it was trailing all the European Community countries, as was the productivity of British workers. By any conceivable continental standard of measurement, the British economy had performed badly and the gap widened. The major problem proved to be its strike-prone labour force. Trade unions could not be sued for breach of contract and wildcat strikers even continued to receive social security benefits. Industrial plants became obsolete as a result of chronic under-investment. Britain struggled chronically with the weakness of its currency and a deficit in its balance of payments. In a sense the British were fighting over the eggs while neglecting to feed the hen. To make the outlook worse, it transpired that Britain, of all the advanced Western states, had the smallest proportion of its young people in secondary and higher education, and the smallest percentages in the fields of science and technology. Britain also experienced serious problems with its nationalised industries. In contrast to private business, where the system of profit and loss accounts as well as the bankruptcy law exercises a crude but effective discipline, public enterprises had no comparable control system. If there are losses, no one goes bankrupt. If the enterprise or industry is in the red, management can either increase prices, because it has a monopoly, or receive cheap credits or subsidies from government.

Thatcherism

By 1979 the unproductive inefficiency of Britain’s nationalised industries and its disruptive labour relations had reached such a level of unacceptability that the Conservative Party under Margaret Thatcher came into power. By 1976 the entire UK had been virtually on the dole, forced to borrow money from the International Monetary Fund to protect the Pound and to stay afloat. As a condition for the loan, the IMF required a sizeable cut in public expenditures. Labour Prime Minister Callaghan told the annual Labour Party conference: “We used to think that you could just spend your way out of a recession to increase employment by cutting taxes and boosting government spending … that option no longer exists … and insofar as it ever did, it worked by injecting inflation into the economy”. (Yergin & Stanislaw, ibid, p.104) By the winter of 1978-79 the country was in crisis. Public-sector employees struck, hospital workers stopped working, medical care had to be severely rationed, garbage piled up in the streets, striking grave-diggers refused to bury the dead, lorry-drivers went on strike, trains ran irregularly and coal miners went on strike. Finally, the Labour Government fell on a vote of no confidence. What essentially needed to be changed was the political culture of the country. Thatcher, assisted by Keith Joseph, campaigned across the country against all the manifestations of statism and the Keynesian-collectivist mould. The Institute of Economic Affairs (IEA), led by economists Ralph Harris and Arthur Seldon, provided much of the research data. The IEA also provided a platform for two ground-breaking economists on the international scene: Friedrich von Hayek of the free-market “Austrian School” and Milton Friedman of the “Chicago Monetarist School”. Hayek called for a shift back from Keynesian macro-economics and the world of the multiplier to micro-economics and the world of the firm, where wealth was actually created.
In their election campaigns Margaret Thatcher and Keith Joseph sought “conviction politics”, not “consensus politics”. They set out to challenge the entire consensus upon which the post-war mixed economy rested. They maintained the economy was “over-governed, over-spent, over-taxed, over-borrowed and over-manned”. Joseph maintained: “The private sector is the indispensable base on which all else is built … we need the wealth-creating, job-creating entrepreneur and the wealth-creating, job-creating manager and that … capitalism was the least bad way yet invented”. Even the mainstream press, at that point very much part of the predominant mixed-economy consensus, lambasted Joseph as a “Mad Monk”! When Margaret Thatcher became Prime Minister in 1979, the daughter of a grocery store owner, Alfred Roberts, told the world: “I owe almost everything to my father – particularly integrity. He taught me that you first sort out what you believe in. You then apply it. You don’t compromise on things that matter”. She said her father imparted to her homilies and examples about hard work, self-reliance, thrift, duty and standing by your convictions even when in a minority. She went to Oxford University and graduated in chemistry; she later also studied for the bar and became a lawyer, specializing in tax. As Leader of the Opposition from 1975 onward, she became known as a strong supporter of free-market ideas. She carried a copy of Hayek’s The Constitution of Liberty in her briefcase. During the 1980s much of Margaret Thatcher’s programme became known as Thatcherism – a combination of privatisation, patriotism, hostility to trade unions and, above all, a belief in people taking responsibility for themselves instead of expecting the state to take responsibility for them. She laid the foundations for the recovery of the British economic system and supervised momentous political and social changes. She maintained that government was doing too much. She set out to replace the “nanny” state, with its “cradle to grave coddling”, with the rewards of an “enterprise culture”. She had to contend with an inflation rate of 20 percent and interest rates of 16 percent, with enormous pay increases promised to public-sector workers, state-owned companies draining money out of the Treasury, monopolistic nationalised industries and monopoly trade unions. Huge and controversial cuts were made in government spending and the programmed public-sector pay hikes had to be rolled back. As Prime Minister she was initially as unpopular as any prime minister since the start of opinion polling.
In the general election of 1983, she won a huge landslide: a 144-seat majority. She was now in a position to pursue a full-blown programme of Thatcherism: a rejection of Keynesianism, a constraining of the welfare state and government spending, a commitment to the reduction of direct government intervention, a sell-off of government-owned businesses, a drive to reduce high tax rates and a commitment to reduce the government’s deficit. A big hurdle still remained: the overwhelming power of the unions. In 1984 she took on the unions. The striking miners’ union was, at the time, receiving funds from Libya’s Gaddafi and from the Soviet Union. After a year the miners’ union capitulated and decades of labour protectionism came to an end. The next step was the privatisation of the nationalised industries: British Telecom, British Gas, Heathrow Airport, North Sea Oil, British Petroleum, British Steel. In the process the Exchequer Account was filled with billions to reduce government debt. Thatcher believed in privatisation to achieve her ambition of a capital-owning democracy – “… a state in which people own houses, shares, and have a stake in society and in which they have wealth to pass on to future generations”. Thatcherites disbelieved in government knowledge, as “… governments enjoy no unique hotline to the future”. State-owned companies proved in practice to be highly inefficient, inflexible, politically pressured to maintain and expand employment far beyond what was needed, unable to resist the wage pressure from public-sector unions and thus major generators of inflation. They piled up huge losses, which they solved by “recourse to the bottomless public purse”. What was missing was the discipline of the market’s bottom line. Margaret Thatcher’s third electoral victory in 1987 was also the beginning of the end of an era. Thatcher saw a new bureaucratic monster rising up in the European Community’s Brussels offices and opposed joining a single European currency.
She was accused of domineering leadership and of over-nationalistic opposition to the European Community. Her leadership of the Conservative Party was contested from within and, after she withdrew her candidature, John Major succeeded her as Prime Minister. Margaret Thatcher’s left-wing critics described her as self-righteous, rigid and uncaring. But her legacy proved powerful and lasting. Her policies found a strong echo in “Reaganomics”, the pro-business policies of President Ronald Reagan. Most of her policies were emulated throughout the world – particularly by left-wing governments at the time in Australia, New Zealand and Canada, and subsequently also in the “Third Way” policies of Clinton, Blair and Brown. (See Yergin and Stanislaw, op.cit., pp.110-123)

Blair’s Third Way

The last decade of the 20th century and the first decade of the 21st were characterised by the continued momentum of the free enterprise model. Some countries, such as Russia, Indonesia and Japan, experienced steep downturns. But the world economy powered on – largely in the wake of the continued growth of the American and European markets. The biggest beneficiaries were the emerging enterprise economies. The United Kingdom was governed by “Third Way” policies from 1997, which, in essence, were a continuation of the path already marked out by Thatcherism. Even The Economist pronounced Mr. Blair, after a year in office, “as proving to be a pretty good Tory after all”. New Labour in power proved to be Tory government by other means. The only difference appeared to be one of style. Margaret Thatcher was forced to be confrontational and divisive in order to change the political culture. Mr. Blair could afford to be smooth, consensus-seeking and inclusive – a style that served as a good marketing device for Thatcherism. Old Labourites started accusing Mr. Blair of being merely a Tory in disguise. The essential point is that policy-making in the United Kingdom became less ideologically driven. It became more pragmatic in the sense of seeking evidence-based policy options that work. Whatever works is good. The left seemed to have liberated itself from outdated preconceptions. Tony Blair and Gordon Brown were instrumental in modernising the political approach of the British Labour Party. They realised that economic growth and wealth creation are the key to national prosperity – without which Labour’s traditional social concerns could not be addressed. They realised that competition is the key and supported public-private partnerships and competition in public service delivery. Blair also modernised the Labour Party’s approach to electoral campaigning by reliance on consultant-inspired initiatives – to some critical observers, an over-reliance on gimmicky political spin.
It is perfectly legitimate for political leaders to market their political messages. But a problem arises when a successful PR strategy outpaces the implementation of well-managed, quality service delivery. This leads to a gaping mismatch between promises and achievements. The essence of responsible statecraft is to achieve the best outcomes with the means available.

Compassionate Conservatism

In the UK the Conservative Party under the leadership of Mr. Cameron adopted a variant of “compassionate conservatism” in order to provide a philosophical underpinning to Cameron’s policies. It was also meant to tune into the popular frame of mind: to be “open minded” and to show compassion to society’s unfortunates. But Mr. Cameron’s camp has been keen to show that Conservatives are sceptical about the role of the state. Conservatives do not believe that every problem can or should be solved by state action through a series of “top-down” initiatives. Rather, problems should be dealt with by the “connected society” based on culture, identity and belonging, linked together by decentralised intermediate institutions called the “third sector” – an umbrella term for voluntary, not-for-profit and charitable organisations. These institutions should be called upon to take over the delivery of many state services, especially in extending help to the victims of “state failure”. The Conservatives were particularly keen on harnessing the creativity of the “third sector” in delivering programmes to get people off long-term incapacity benefits. In this way they were trying to nibble at the edges of state activity and to confront the immense power of the public-sector unions.

The 2008/2009 Global Financial Crisis

When Tony Blair handed the keys of No.10 Downing Street to his former Chancellor, Gordon Brown, in 2007, the scene was set for the Labour Party’s return to its leftist tilt. The door to this policy retreat from the centre-field was opened by the onset of the Global Financial Crisis in the first quarter of 2008. Mr. Brown called his large stimulus plan “Building Britain’s Future”, but his widespread handouts essentially resulted in more “entitlements”. This fiscal laxity inevitably placed a heavy burden on future efforts to repair the gaping hole in public finances. As a former Chancellor, Mr. Brown should have known that the cost of servicing rising debt would rise; that spending on the unemployed would keep going up, as jobs do not begin to recover until the economy as a whole does; that bank bail-outs and off-balance-sheet spending must be accounted for at some point; and that, likewise, the increasing pension cost of an ageing population needed to be financed. Given that the recession was the worst in post-war history, with output down by around 5 percent since the start of 2008, the need for drastic government action was undisputed. It was generally agreed that urgent monetary and fiscal measures had to be taken. The Bank of England reduced the base rate to a record low of 0.5 percent and started creating money by buying £125 billion (equal to 9 percent of GDP) of securities, mainly gilt-edged government bonds. The extraordinarily loose fiscal stance was predicted to push public debt to the unprecedented peace-time level of 100 percent of GDP. The high debt levels raised the need for a credible plan to fix the structural budget deficit and to reduce the massive public debt – an arduous and perilous task. A change of government became essential.

The Liberal-Democratic Conservative Coalition

The general election of 6 May 2010 delivered a hung parliament: Conservative Party 307 seats, the Labour Party 258 seats, the Liberal-Democratic Party 57 seats and the smaller parties 29 seats. Although it failed to obtain an absolute majority, the Conservative Party made spectacular gains. The Conservative Party retained its power base in England, whereas Wales and Scotland remained Labour territory – indicating the persistence of old “tribal” anti-English and pro-Gaelic loyalties. Generally speaking, the Labour Party’s power base remained the lower-income, urban voters. The Lib-Dems relied heavily on the support of young voters and adults with flexible party preferences. The election outcome meant the defeat of the Labour Party under Gordon Brown and the ascendancy of the Conservative Party under David Cameron, with the Liberal-Democratic Party under Nick Clegg as “king-maker”. Ironically, the Liberal-Democratic Party also lost ground by winning fewer seats, despite Nick Clegg’s clever and charming campaign. The dramatic realignment from centre-left to centre-right redrew the country’s political geometry. It heralded a groundswell move towards liberalism: greater political and economic freedom, a smaller state, reforming and shrinking the public sector, and reducing the unsustainable deficit and debt levels. The coalition government inherited a lamentable legacy: unemployment approaching 8 percent of the workforce, a currency under pressure, slumping business investment, heavily indebted households and a pallid recovery beset by uncertainties.

Conclusions

What took around three centuries to build was dismantled by decolonisation in around three decades. What had been based on Britain’s commercial and financial supremacy in the seventeenth and eighteenth centuries and her industrial strength in the nineteenth was bound to crumble under the burdens of two world wars. The great creditor became a debtor. Since the 1950s, the great movements of population have changed direction. Emigration from Britain gave way to immigration to Britain. So controversial has this “reverse colonisation” been that successive governments have imposed severe restrictions. But despite the decline of its colonial empire, the Anglophone economies and political liberalism remain among the most alluring of the world’s cultures. Britain still serves as a trend-setting force in today’s world. It has a sound legal system providing a secure foundation for economic interaction based on generally accepted business practices – although Jeremy Bentham’s criticism that “justice is bought very dearly” is still valid. The judiciary is independent, guaranteeing the rule of law. The levels of educational attainment are high enough to provide for most of society’s needs for technical and managerial skills. During the past two decades industrial and labour relations were reasonably stable. Respect for law and order, a transparent system of rules, effective business networks and support systems provide an enabling environment for value-adding business enterprise. Niall Ferguson expressed his assessment of the legacy of the British Empire as follows: “Without the spread of British rule around the world, it is hard to believe that the structures of liberal capitalism would have been so successfully established around the world. Those empires that adopted alternative models – the Russian and the Chinese – imposed incalculable misery on their subject peoples. Without the influence of British imperial rule, it is hard to believe that the institutions of parliamentary democracy would have been adopted by the majority of states in the world, as they are today … Finally, there is the English language itself, perhaps the most important single export of the last 300 years. Today 350 million people speak English as their first language and around 450 million have it as their second language. That is roughly one in every seven people on the planet.” (Ferguson, op.cit., pp.365-366) London has been at the heart of a great overseas trading and investment network for centuries. Today, as a financial centre, it is second only to New York. It scores very high on the key criteria that global financial firms are looking for: lots of skilled people, ready access to capital resources, good infrastructure, attractive regulatory and tax environments, perceived low levels of corruption, an accessible location and, naturally, proficiency in the language of global finance in the home of English. The financial services of London account directly for around 20 percent of the British GDP and over 30 percent of its tax revenue.
Britain's economic pride has been severely bruised by the large accomplice role played by its financial institutions in precipitating the Global Financial Crisis of 2008/2009. Its large and influential financial services sector stands accused as co-respondent in the court of prudential international finance. The financial sector has stumbled, and loud calls have been made from many sides for bold re-regulation. The Economist of October 18th, 2008, gave expression to the dilemma of strong re-regulation: "Heavy regulation would not inoculate the world against future crises. Two of the worst in recent times, in Japan and South Korea, occurred in highly rule-bound systems. What's needed is not more government but better government. In some areas, that means more rules. Capital requirements need to be revamped so that banks accumulate more reserves during the good times. More often it simply means different rules: central banks need to take asset prices more into account in their decisions. But there are plenty of examples where regulation could be counter-productive …" "Indeed, history suggests that a prejudice against more rules is a good idea. Too often they have unintended consequences, helping to create the next disaster. … If regulators learn from this crisis, they could manage finance better in the future. Capitalism is at bay, but … for all its flaws, it is the best economic system man has invented yet." By the end of the first decade of the new millennium, the UK found itself once more in the grip of the "Attlee consensus welfare state": overextended banks, an over-indebted private sector, an overweight public sector and too many citizens dependent on government support. It was a country living above its means. The Economist of 27th March, 2010, reported the following macroeconomic figures: as a percentage of GDP, the budget deficit was forecast at 14 percent, public debt at 80 percent and total debt at around 400 percent.
Sterling had lost around 25 percent of its trade-weighted value since mid-2007. Growth prospects were limited. The road to recovery clearly required shrinking bloated benefit rolls, reducing spending on government employees, curtailing the regulatory burden on business and a return to sobriety and austerity all round. The Lib-Con coalition government that came to power in May 2010 faced an enormous set of challenges: closing the fiscal gap while protecting a still delicate economy; imposing severe clampdowns while securing public support through a distribution of pain perceived to be fair; and striking a balance between current and capital spending. Its first step was to set up a new Office for Budget Responsibility (OBR) in charge of fiscal forecasting. The OBR promptly estimated the deficit for the financial year to March 2011 at £155 billion, or 10.5 percent of GDP. On June 22nd, 2010, George Osborne, the Conservative Chancellor of the Exchequer, working with Danny Alexander, the Liberal-Democrat Chief Secretary to the Treasury in charge of controlling spending, introduced the first Lib-Con budget. It contained a raft of far-reaching and courageous measures, including spending cuts and tax rises intended to turn the country's deficit into a surplus within five years. The spending cuts included reducing welfare payments, freezing public sector salaries and cutting spending in all government departments. To strengthen the revenue side, VAT was increased from 17.5 percent to 20 percent and capital gains tax was raised from 18 percent to 28 percent. To encourage economic growth through business enterprise, company tax was lowered to 24 percent. In view of the UK's heavy reliance on the financial services sector, the growth prospects of its economy depend in large measure on spill-overs from its most important export market, the European Union. Whether and when a sustained recovery would be under way remained to be seen.
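The fiscal figures above allow a few simple cross-checks, sketched below in Python. The implied GDP figure is derived from the deficit numbers, not stated in the text.

```python
# Back-of-the-envelope checks on the 2010 UK budget figures cited above.
deficit_gbp = 155e9          # OBR deficit estimate: £155 billion
deficit_share = 0.105        # stated as 10.5 percent of GDP

# Implied nominal GDP (a derived figure, not given in the text)
implied_gdp = deficit_gbp / deficit_share
print(f"Implied GDP: £{implied_gdp / 1e12:.2f} trillion")

# Effect of the VAT rise on a £100 pre-tax purchase
old_vat, new_vat = 0.175, 0.20
price = 100.0
print(f"VAT on £100: £{price * old_vat:.2f} -> £{price * new_vat:.2f}")
```

The first check implies a nominal GDP of roughly £1.5 trillion, which is consistent with the deficit being described both as £155 billion and as 10.5 percent of GDP.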

References

Cranston, M. (1966) A Glossary of Political Terms, London: The Bodley Head
Ebenstein, W. and Fogelman, E. (1980) Today's Isms, New Jersey: Prentice-Hall
Ferguson, Niall (2003) Empire – How Britain Made the Modern World, London: Penguin Books
Galbraith, J.K. (1958) The Affluent Society, Boston: Houghton Mifflin Company
Gray, Alexander (1951) The Development of Economic Doctrine, London: Longmans Green
Henderson, David (1999) The Changing Fortunes of Economic Liberalism, London: Institute of Economic Affairs
Johnson, P. (1985) A History of the Modern World, London: Weidenfeld and Nicolson
King, R. (ed.) (2007) Origins – An Atlas of Human Migration, London: Marshall Editions
Wolfe, Alan (2008) The Future of Liberalism, New York: Knopf
Yergin, D. & Stanislaw, J. (1999) The Commanding Heights – The Battle Between Government and the Market Place That Is Remaking the Modern World, New York: Simon & Schuster
The Economist (2010) "Briefing on the British Economy", March 27th, 2010, pp.23-24
The Economist (2010) "Briefing on Britain's emergency budget", June 19th, 2010, pp.54-56

2 The USA – Mankind’s Best Hope

The USA can boast the world's oldest continuous democracy. For close to a century it has provided a template for the free world and, in many ways, for others too. Over the past century it rescued Europe from total military destruction during the First and Second World Wars, and from socio-economic implosion during the Cold War period lasting from 1945 to 1989. Constant opposition from the USA on all important fronts around the world led to the ultimate collapse of the Soviet communist empire. Abraham Lincoln once called America "… the last, best hope of earth". Today, many Americans still support the ideal of America as the best hope of all mankind. Millions of people elsewhere in the world hold similar high hopes.

Early Building Blocks

After Columbus made landfall in the Bahamas in 1492, South and Central America became the possessions of Spain and, to a lesser extent, of Portugal. The northern reaches of the New World became an area of activity for the British, Dutch and French. John Cabot made landfall on the North American continent in 1497 and claimed it for the English Crown. But it was only in the 1580s that England began to make plans to colonise North America's eastern seaboard and its "fair and fruitful regions". After an exploratory expedition by Walter Raleigh in 1584, the first colonial settlement took place in 1585 at "Virginia", so named in honour of Elizabeth I, the "Virgin Queen". The first colonists almost starved and had to be ferried home. The second settlement in 1587 (also on the island of Roanoke) mysteriously disappeared. The third settlement attempt was made in 1607 in Chesapeake Bay under the auspices of the Virginia Company, a pair of English joint-stock companies chartered by the new king, James I. In the course of the next 15 years over 10,000 men and women made the voyage to Jamestown, but by 1622 only 2,000 people were still living in the community – largely as a result of disease and hunger, but also of their own ineptitude and inexperience. The saving grace for the early settlers in Virginia seems to have been the introduction of a lucrative cash crop – tobacco. The Virginia Company freely gave land to the colonists to grow tobacco. The dried weed was shipped back to England and the profits started flowing – to the colony and to the company. So Virginia became the land of prosperous plantation owners, providing a growing market for the slave trade.

The Pilgrim Fathers

At the time when Virginia began to flourish, a different kind of settlement took root in what was later called "New England". In 1620 a group of religious dissenters, known to history as the "Pilgrim Fathers", sailed from Plymouth in southwest England to Cape Cod (Massachusetts) and founded a religious colony called "Plymouth" – bound together by their disdain for the hierarchical and ritualistic nature of the Church of England. Their relish for hard work and friendly interaction with indigenous tribes enabled them to survive. In the 1630s they were joined by a great migration of Puritans – a distinct sect within the Protestant fold. Within a decade over 20,000 Puritans travelled from England to New England, where they founded the city of Boston. In time, several other religious splinter groups such as the Quakers and Baptists also migrated to new colonies such as Rhode Island and Maryland. In 1614 the Dutch founded a trading post on the Hudson River which they called New Amsterdam. But under Charles II, Britain took control of the Dutch possession in 1664 and named it New York, thus uniting under one flag and one language the area stretching from New England in the north to Virginia in the south, which became an important part of the British sphere of influence in the New World. The millions of immigrants who would travel to the New World during the next three centuries might well have been attracted by the ethical values of the early religious migrants: an instinctive respect for fellow human beings, an idealistic belief in advancement through honest toil, and a refusal to be denied the liberty of conscience.

The Transportation of Convicts

In Elizabethan England, a Vagrancy Act passed in 1597 stated that "Rogues, Vagabonds and Sturdy Beggars" were liable "to be conveyed to parts beyond the seas". Those parts referred to the British colonies in North America. Prisoners condemned to death by English courts could have their sentences commuted to deportation. Some of the deportees were "common criminals", but political dissidents were also disposed of in this fashion – it was frequently used as punishment for Irish dissidents. Prisoners were sent to Virginia or Maryland to work on tobacco plantations until the growing number of slaves exported from Africa replaced them. In 1718 Britain passed the Transportation Act, which established a seven-year banishment to North America as a possible punishment for lesser crimes, and also stated that capital punishment could be commuted to banishment. So systematic exile became a part of England's justice system. It was thought to be advantageous for everybody involved: it was considered more humane than executing or flogging, it "offered" the possibility of "moral rehabilitation" and freedom afterwards, it rid the population of dangerous individuals, it deterred others tempted to commit crimes, and it provided workers where there was a great want of servants. Transportation to North America continued for nearly 60 years, and only ceased when the American colonies revolted in 1776. By that time over 40,000 criminals had been shipped to the New World. This English practice "of emptying their jails into our settlements" was roundly rejected by the colonial elite and ultimately contributed to the Declaration of Independence in 1776 and the formation of the United States of America in 1787.
(See Russell King ed., Origins – An Atlas of Human Migration, Marshall Editions, 2007, pp.95-105)

Ethnic Composition of Immigration to the USA up to 1940

Origin               %
British            11.1
Irish              11.6
Canadian            8.0
German             15.6
Scandinavian        6.2
Austro-Hungarian   10.5
Russian             8.5
Italian            12.0
Other              16.5

(Figures quoted by Austin Ranney, The Governing of Men, Holt, Rinehart and Winston, N.Y., 1966, p.145)

Anglo-Saxon Melting Pot

The USA ranks as the major destination by far, not only for migrants from the British Isles, but also from other countries on the European continent – particularly in the period up to World War II. With few exceptions, all Americans are descendants of immigrants. Hence the USA is often referred to as the most obvious example of a "melting pot" nation. But it is of great importance to bear in mind that the receptacle in which all the various immigrant groups were "melted" was an "Anglo-Saxon" receptacle. The "pot" or container into which the particles were thrown or blended was labelled "Anglo-Saxon", not "Latin", "African", "German" or "Asian". Samuel Huntington, in a well-documented publication, Who Are We – The Challenges to America's National Identity (2004), makes the point that America would not have been the country it has been (and in some measure still is today) if it had been settled in the seventeenth and eighteenth centuries not by British Protestants but by French, Spanish or Portuguese Catholics. It would have been closer to Quebec, Mexico or Brazil.

Among the key elements of the Anglo-Protestant founding culture are “the English language; Christianity; religious commitment; English concepts of the rule of law, the responsibility of rulers, and the rights of individuals; and dissenting Protestant values of individualism, the work ethic, and the belief that humans have the ability and duty to try to create a heaven on earth, a ‘city on a hill’”. Huntington acknowledges that that culture has evolved and been amended by the contributions of subsequent immigrants and generations, but its essentials remain. This culture is also the primary source of the political principles of the American creed, which Jefferson set forth in the Declaration of Independence and which has been articulated by American leaders from the Founders to the present day. The people who settled the country left a predominant imprint: its language, its laws, its institutions, its political ideas, its literature, its customs – all are primarily derived from the British Isles. Hence the British Anglo-Protestant culture defined the new nation more than any other. Its Anglo core predisposed the country to a greater emphasis on property rights and individualism. Its Protestant core predisposed it to hard work. The melting pot subsequently had more ingredients poured into it, but the pot itself remained of a recognisable Anglo-Protestant design.

Political Credo

The American political credo is essentially based on the political philosophy of the English philosopher John Locke. Its essence is embodied in the American Declaration of Independence, written in 1776: "We hold these truths to be self-evident … that all men are created equal … that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the Pursuit of Happiness … that to secure these rights, Governments are instituted among men … deriving their just Powers from the consent of the governed … That, whenever any form of Government becomes destructive of these ends, it is the Right of the People to alter or to abolish it, and to institute new Government …". This declaration ultimately found concrete expression in the American Constitution, drawn up in 1787 and approved by all the federating units. The essential characteristics of this constitution are:
- the protection of civil rights;
- popular sovereignty and republicanism;
- structural constraints on government power;
- federalism; and
- constitutionalism.
Thomas Jefferson was the chief drafter of the Declaration of Independence. Founding Fathers such as Washington, Madison, Hamilton and Jay drafted the Constitution and campaigned for its ratification by all thirteen original constituents of the United States of America. The American Constitution became the template for written democratic constitutions around the world: post-World War II European constitutions, post-Communist East European constitutions, African constitutions and East Asian constitutions.

Individualism and Self-reliance

Alexis de Tocqueville, a well-known French scholar who visited the USA in the 1830s, wrote that "… everything about the Americans is extraordinary …". He was also impressed by the soil that supports them. America has natural harbours on two great oceans, an abundance of every possible raw material, a huge range of farmed crops, and is the fourth largest country in the world, of which two-thirds is habitable. De Tocqueville also found great distinctiveness in America's "laws and mores".

America has one of the highest crime rates in the world, but also the highest rate of imprisonment in the world. It has more elective offices than any other country and also one of the lowest voter turnouts. America has never had a major socialist party, nor a significant fascist movement. It has one of the lowest tax rates among rich countries, the least generous public services, the highest military spending, the most lawyers per capita, the highest proportion of young people at university, the most persistent work ethic and a strong belief in personal responsibility and self-reliance.

The Enduring American Dream

During the past two decades the USA’s performance on essential economic yardsticks has continued to be strong: growth in output, productivity and job creation. The Economist calculated in 2004 that the average person in the Euro area is still about 30 percent poorer (in terms of GDP per person measured at purchasing-power parity) than the average American. Europeans are still stuck with lower living standards than Americans. But Olivier Blanchard of the IMF claimed that Europeans have used some of their increase in productivity to expand their leisure rather than their income. Americans, by contrast, continue to toil longer hours for more income. So who is really better off? The average American worker clocks up 40 percent more hours during his lifetime than the average person in Germany, France or Italy.

Values and Belief Systems

A large proportion of Americans, a remarkable 80 percent, claim to be "traditionalists", holding "old-fashioned values" relating to family, marriage, patriotism and religion. At the other end of the spectrum are the "secular rationalists", who are less concerned about religion and patriotism and who are predominantly single, tolerant, hedonistic, secular and multicultural. According to Gertrude Himmelfarb in One Nation, Two Cultures, these value judgments are better predictors of political affiliation than wealth or income. In the 2000 election 63 percent of those who went to church once a week voted for George Bush, and 61 percent of those who never went voted for Al Gore. Traditionalists live largely in the "red" states encompassing the mid-west and the south. Secularists live on the densely populated Pacific coast, the eastern seaboard and the north-eastern and upper mid-western "blue" states. Traditionalists are heavily concentrated in smaller towns and rural areas; secularists dominate big cities. Multiculturalism is deeply entrenched in "blue" states. The states with the highest levels of immigration of Latinos and Asians include New York, New Jersey, New Mexico and California. These are considered to be the new "melting pots" and they tend to vote predominantly for the Democratic Party. White immigrants tend to be more conservative. Roughly 40 percent of Americans describe themselves as politically moderate, and Americans are becoming less supportive of the death sentence. There is also growing acceptance of gays and of inter-racial dating. But America is still the most religious rich country in the world. Over 80 percent of Americans say they believe in God and 39 percent describe themselves as born-again Christians. Over 58 percent think that unless you believe in God you cannot be a moral person. About 30 percent of Americans are said to be evangelical Protestants, but an equally large proportion of the Christians are Catholics.

Civic Culture

De Tocqueville noted in the 1830s that "… Americans of all ages are forever forming associations". He argued that these civic associations connected people to each other, made them better informed, safer, better able to govern themselves, and better able to create a just and stable society. Fraternities and sororities on campuses, the League of Women Voters, scouts and girl guides, farmers' associations, industrial unions, volunteers and a variety of clubs (social, recreational, professional) all reinforced civic participation, the ideal of good citizenship and engagement with public life.

Churches are also recognised as archetypical civic clubs because congregations meet regularly face to face. All available information points towards rising church attendance. There are also several examples of the trend towards "mega-churches", i.e. weekly congregations of 10,000 and more. Old hymns, pulpits and even church-like buildings are being replaced by churches that look like schools, with information booths, food courts, PowerPoint presentations, small-group discussions and social activism. Peter Drucker, the veteran management writer, observed that the most significant organisational phenomenon in the first half of the 20th century was the company; in the second half, it was the large pastoral church. Today it is the mega-church that stands out as the provider of community bonding.

Growth and Dynamism

Few countries can remotely equal the growth patterns and dynamism of the USA. The quality and quantity of its intellectual life have been the highest in the world for many decades. A quarter of American adults have a university education. The country produces one-third of the world's scientific papers, employs two-thirds of the world's Nobel-prize winners and has 17 of the top 20 universities (as ranked by Shanghai's Jiao Tong University). The country's size and wealth, combined with its meritocratic traditions and technological prowess, have made it easier for Americans to explore new opportunities – to move voluntarily to where the jobs are. In a typical year 40 million Americans will move house – particularly footloose young graduates showing a frontier spirit. As a dynamic society, America is remarkably open to trade. Trade barriers are low, and the USA has for decades served as the largest market in the world for major exporters like Japan, South Korea, China and the European countries. Without the American market, these countries would not have been able to grow so spectacularly since World War II.

Demographic Patterns

The USA is remarkably open to immigration. Since 1990 about 16 million people have entered the USA legally. Its ability to absorb so many newcomers is astounding. The country is home to between 30 and 40 million people who were born abroad – about one in ten. It is not clear how many of these are legal migrants. This trend arouses anxieties – particularly about the Hispanic wave rolling in from Mexico. On the one hand is the "Anglo-Protestant-African-Catholic-Indian-German-Irish-Jewish-Italian-Slavic-Asian part of society". On the other hand is the Spanish-speaking sub-group, which is growing unusually fast and is continually fed by the country next door through a porous border. Moreover, the majority of Hispanics are under 30, which implies high birth rates.

Multiculturalism

The Census Bureau forecasts that by 2050 the Hispanic population will have increased by 200 percent, the population as a whole by 50 percent and the white population by 30 percent. Hispanics will then constitute 25 percent of the total population and be concentrated in New York, Los Angeles, San Francisco, Chicago and Miami.

Ethnic Composition of the USA Population (%)

                      2004 estimate    2030 projection
                      (300 million)    (364 million)
Hispanic                   14               20
Black                      12               14
Asian                       4                6
Other                       3                2
White non-Hispanic         67               58


Several studies have found, however, that Hispanics, like earlier waves of immigrants, are gradually assimilating American culture and language and moving into the mainstream. There is also evidence of growing inter-marriage between Hispanics and other settled communities. American mixed marriages, which numbered one in twenty-three in 1990, have subsequently increased to one in fifteen. Nearly half of the 3.7 million inter-racial marriages in the USA today have one Hispanic partner. The danger of America becoming a "cleft" society seems to be receding. In view of the large inflow of non-Protestant and non-British immigrants, it is to be expected that America's founding Anglo-Protestant culture will undergo profound changes in coming years. The definition of "Americanism" has already been broadened. Many Catholics dislike being told that Protestantism is the source of the country's dynamism. Likewise, the rapidly expanding assortment of non-Anglos dislikes any hint that they are less than fully American. But the "melting pot" itself is still of a recognisable Anglo-Protestant design.

New Stratification Patterns

The idea that anything is possible if you work hard enough is an enduringly popular part of the American Dream. But can the ladder of success be climbed by all? During the past three decades the rich have been doing better than the less well off. Since 1979 median family incomes have risen by 18 percent, but the incomes of the top 1 percent (Wall Street financiers?) have gone up by 200 percent. In terms of total national income, the bottom fifth's share has declined in comparison with the top fifth's. The top 0.1 percent of Americans also earn two to three times as much as their peers in Britain and France. However, analysis of the position of the poorer echelons in relation to the upper echelons, not in relative terms but in absolute terms, indicates that a smaller share of the total population is living in poverty than before. As John Parker wrote in The Economist of July 16, 2005, "… The rising tide has lifted dinghies as well as yachts." Americans seem to mind more about equality of opportunity than equality of results. Most Americans feel their chances of moving up a notch have improved over the past 30 years, and say that their standard of living is higher than that of their parents. But social mobility is eroded by fundamental changes in the economy, brought about by increased rewards for intellectual skills. As a result, the income gap between college graduates and those without degrees has constantly increased over the past 30 years. Lifetime employment is out and job-hopping is in. Today almost all chief executives have a higher degree such as an MBA. Persons with a university degree are more likely to move up the income bracket than those without. These changes indicate a growing trend towards stratification based on education: an education-based meritocracy. But the rise in the cost of education has put "Ivy League" universities out of reach of most middle-class and poor families.
The median income of families with children at Harvard is $150,000 p.a. The trend is that students from the richest quarter of the population are increasing their share of places at America's elite universities. But even outside the elite schools, students from poor backgrounds are losing ground. The underlying causes are not easily neutralised because they concern family behaviour. Isabel Sawhill of the Brookings Institution argues that a person's chances of a good education, a good job and good prospects – i.e. of moving upwards – are partly determined by family behaviour. On this view, the rich really are different, and not just because they have more money; much also depends on the structure of the family itself. In other words, class stratification is more than a matter of income or inherited wealth. Children from stable, dedicated families, on average, have a better chance of succeeding than children who are neglected by parents who are unmarried and without jobs. If the key to upward mobility is finishing your education, having a job and getting and staying married, then the rich in stable families start with advantages that go beyond money.


Population Trends

While the populations of many countries in Europe and Japan are ageing and on the verge of shrinking, the USA population is growing by almost 1 percent annually – a rate that adds the population of Chicago every year. Immigration over the past 20 years has risen constantly, to just over 1 million per annum. The bulk arrive from Latin America and Asia, and they tend to have children at a far more prodigious rate than either the white or the black groups. The median age of the USA, sustained by this young and fertile immigrant population, will remain at around 35 for the next 50 years, in contrast to the European median age, which will rise from 41 to 53. In the second half of the 21st century, whites of European origin will be in a minority in the USA. America's fertility rate is 60 percent higher than Japan's and 40 percent higher than the European average. It is taking in immigrants at a faster rate than Europe and doing better at assimilating them. America will be the only big developed country where children outnumber pensioners, and one of the few developed countries where the working-age population is still growing. This trend should enable America to remain relatively young and dynamic.
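The "population of Chicago every year" comparison can be checked with quick arithmetic. In this sketch, Chicago's roughly 2.9 million residents is an assumed figure supplied for illustration, not stated in the text.

```python
# Quick check: what annual growth rate corresponds to adding roughly
# the population of Chicago each year to a base of about 300 million?
us_population = 300e6        # approximate USA population, mid-2000s
chicago = 2.9e6              # assumed city population of Chicago
implied_rate = chicago / us_population
print(f"Implied annual growth rate: {implied_rate:.1%}")
```

The implied rate works out to about 1 percent per annum, i.e. a yearly gain of roughly 3 million people.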

Political Party Rivalry

Around the world, political parties are associations formed to influence the content and conduct of public policy in favour of some set of ideological principles and/or interests, either through the direct exercise of power or by participation in elections. At one end of the spectrum are parties built on pure ideology (e.g. communism, socialism, liberalism or nationalism); at the other end, parties are collections of people with specific interests such as power, influence, income, deference, etc. Most parties are "hybrids" combining aspects of both categories, and range from cadre parties with strong missions to catch-all parties acting as brokers for a variety of interest groups. There are also various degrees of affiliation and motivation amongst party supporters. In the USA, the major political parties endeavour to draw at least some electoral support from every major ethnic, occupational, religious, economic and educational group in an effort to act as "brokers" for the widest possible support base. Nevertheless, the Republican and Democratic Parties have both had a fairly rigid core of supporters over many decades. The Republicans can traditionally count on the support of the following groups: WASPs (White Anglo-Saxon Protestants), farmers, small-town dwellers, professionals and corporate business leaders. The Democratic Party is traditionally supported by trade union members, the working class generally, big-city dwellers, minority groups, African Americans, Jews and leftist intellectuals. Historically, the Southern states used to be an essentially Democratic power base, but in recent decades the region has shifted towards the political divisions predominant elsewhere in the USA. The social composition of the leadership and support patterns of political parties in the USA does not imply a rigid, confrontational, inflexible contest between the major parties.
The major political parties generally tend to move to centre positions in order to attract the broadest possible support base. Much depends on the political arithmetic applicable at any particular time and place. Political parties have to take into account changing patterns in voting behaviour, in party identification, in issue orientation as well as in candidate orientation. The correlates of voting preference in most democratic countries, such as social status, region, religion, gender and age, do change over time. In no democratic country are all the classes, occupations, religious affiliations or other socio-economic characteristics evenly distributed over the whole country. Each party has its enclaves of solid support and its political deserts. The unique cross-sectional demands on political parties are reflected in the composition of their leadership. Their party committees and slates of candidates include Catholics and Protestants, Jews and Gentiles, lawyers, businessmen, farmers, trade unionists as well as teachers.

In the USA there has never been a substantial Liberal Party in Congress, and the word “liberal” has been used mainly as what Maurice Cranston calls “… one vague alternative to such vague appellations as radical, progressive, or even ‘left-wing’”. (See Maurice Cranston, A Glossary of Political Terms, London: The Bodley Head, 1966, p.68) The terminological confusion in the USA stems from the expropriation of the time-honoured word by various political movements. Supporters of Theodore Roosevelt’s “progressive” movement began to use the word “liberalism” as a substitute for “progressivism” after Roosevelt lost on a Progressive third-party ticket. Then in the early 1930s, Herbert Hoover (Republican Party) and Franklin Roosevelt (Democratic Party) fought it out as to who was the “true liberal”! Roosevelt won the election and adopted the term to ward off accusations of being left-wing. Hence he declared that liberalism was “plain English for a changed concept of the duty and responsibility of government toward economic life”. Since the New Deal, liberalism in the United States has been identified with expansion of government’s role in the economy. (See Daniel Yergin and Joseph Stanislaw, The Commanding Heights, New York: Simon & Schuster, 1999, p.15)

Free Enterprise versus Government Intervention

The United States is one of the few modern nations in which the general ideology of favouring the free market commands strong support. But no political party argues that government should do nothing to prevent recessions, cushion business failures, put a floor under agricultural prices or support the unemployed. The actual functions American governments perform go far beyond the limits within which, according to the classical doctrine of “laissez faire”, they should stay. Ideological purity is always trumped by pragmatic practicality. Most Americans are born in government-regulated hospitals and delivered by government-licensed medical assistants. They spend most of their childhood and youth in government-supported schools. Their marriages (and divorces) are established within the legal framework. They conduct business, engage in professions, buy and sell property and retire – all under a considerable body of governmental regulations. They may be conscripted to serve in the armed forces and even kill and die under orders from government officials known as “military officers”, and when they finally come to the end of their lives, have paid their last tax bill and filled out their last form, they are buried in a government-licensed cemetery. Their estate – minus a portion retained by the government for the inheritance tax – is handed to their heirs by probate courts and government-licensed lawyers. Government plays a major role at just about every main juncture of the lives of Americans. As a consequence the debate is no longer between “laissez faire” and “socialism”, or between free enterprise and government intervention: it has inevitably become a question of smaller government versus bigger government, or more market versus more government.

Smaller Government versus Bigger Government

Americans trace their mistrust of government to their roots. The Pilgrim Fathers relocated to America as refugees from state-sanctioned persecution, and the colonists later revolted against English-imposed taxes. The famous maxim that the “… government is best that governs least” – widely attributed to Jefferson and popularised by Thoreau – found much resonance amongst Americans during the seedtime of their republic. For much of the first century of the country’s existence, the federal government’s role was small. Thereafter it expanded steadily. Industrialisation and unionisation led to an era between the 1890s and early 1920s that brought antitrust laws, regulation of interstate commerce, income tax and regulation of food and drug quality. The Great Depression years in the 1930s and the two world wars pushed the ideological bias against the state into retreat. By the 1950s opinion leaders, political parties and the general public easily turned to government when they saw a problem they thought should be solved. The reality is that crises bring about clamour for government action. Government sometimes shrinks partially afterwards, but never to its original size. New institutions, such as boards, commissions, agencies and departments, are created – each with its own bureaucracy and constituency of camp-followers – all contributing to a gradual increase in the size of the public sector. Jimmy Carter tried to roll back the public sector with the deregulation of airlines, transport and banking. Ronald Reagan converted this trend into a philosophy. When Reagan became President in 1981, he declared: “Government is not the solution to our problems: government is the problem.” Reagan’s legacy was to entrench lower taxes as part of a small-government philosophy. Reagan, however, did not succeed in shrinking the size of the government sector; he merely stopped it growing. For two decades after Reagan’s presidency, small government remained the stated preference of presidents. In 1996 Bill Clinton declared that “the era of big government is over”. The same sentiments were expressed in his presidential campaigns by George W. Bush – until the crises returned with a vengeance. A new set of crises during the Bush era precipitated more activist government after the terrorist attacks of 2001. It led to the creation of the Department of Homeland Security, the Patriot Act, the federalisation of airport security, an expansion of money-laundering rules and federally subsidised terrorism insurance. Then the collapse of Enron and WorldCom led to the Sarbanes-Oxley Act overseeing corporate governance and accounting standards. On top of these bureaucracy-swelling interventions came the Iraq War, which lasted longer and became more costly than anticipated. This burden on the public accounts was then exacerbated by stepping up the war in Afghanistan. The final straw that broke the camel’s back was the collapse of the financial system in New York. This crisis unleashed a wave of government intervention of immeasurable proportions.

Failures of the George W. Bush Era

The George W. Bush era in the USA is characterised by the globalisation of American party politics. Since the start of the 21st century the Democrat versus Republican contest has been screened on the world monitor, particularly by CNN and the BBC, with relentless intensity. The contest has been displayed like a soap opera in every “international” hotel room from East to West and throughout Europe, Africa and the Arab world. The script of the serial has been dutifully written by The New York Times and its associated publications. The skirmish began with George W. Bush’s controversially narrow electoral victory in 2000, was then highlighted by the 9/11 disaster in New York and then serialised during the “Neo-conservative” attack on Iraq. Film makers like Michael Moore provided the short-clip trailers on the underlying oil contract and Halliburton conspiracies. George W. Bush was repeatedly portrayed as a misspeaking, stupid fumbler and turned into the laughing stock of the world. In retrospect it is clear that George W. Bush was not equipped with the mass media potential of a Barack Obama. He was destined to fall victim to the demands of the changing times. The mantra of “compassionate conservatism” never had the potential to capture the imagination of the new “affluenza”-infested, cyber-connected, new-technology-powered generation. The critical mass has shifted away from anything that appears “old hat” to anything that sounds “new-fashioned”. The financial crunch that was triggered by the collapse of two Bear Stearns hedge funds in 2007 is also attributed to the failure of the Bush era. In reality the groundwork of the credit crunch was laid by policy choices to increase home-ownership during the Clinton era, when Fannie Mae and Freddie Mac were pressured by the Democrats in Congress to grant home loans to non-creditworthy borrowers. The monetary policy that failed to adjust interest rates appropriately to control asset price bubbles was in the hands of the Federal Reserve. The financial regime itself was abused by the financial fraternity on Wall Street.

The Global Financial Crisis of 2008

Much has been said in recent times about possible causes of the 2008/9 economic meltdown. Some blame it on the unbridled greed of the financing network in New York and London; others on the lax regulatory regimes in the USA and the UK. Leftist ideologues claim that the root of the problem lies in the free enterprise system associated with “Anglo-Saxon capitalism”. It is clear that glib single-cause explanations are totally inadequate.

Dodgy Financial Instruments by the Investment Banks

A major triggering role was played by the world-wide marketing of toxic securitised instruments by Wall Street investment banks. These institutions bundled sub-prime housing mortgages into a hierarchy of collateralised debt obligations (CDOs), which they then offloaded on unsuspecting (trusting?) investors around the world. The investors at the end of the chain evidently were not able to monitor the quality of the securities because they were too far removed from the mortgage originators to be able to evaluate the quality of the underlying lending decisions. Moreover, they were snowed under interminable pages of documentation which defied proper scrutiny. The Economist’s “Special Report on the World Economy” of October 11th 2008, p.8, states: “... the bright new finance is the highly leveraged, lightly regulated, market-based system of allocating capital dominated by Wall Street ... The new system evolved over the past three decades and saw explosive growth in the past few years thanks to three simultaneous but distinct developments: deregulation, technological innovation and the growing international mobility of capital.” As explained by The Economist, the hallmark of the new finance is securitisation. Banks that once made loans and held them on their books now pool and sell the repackaged assets, from mortgages to loans. In 2001 the value of pooled securities in America overtook the value of outstanding bank loans. Thereafter the scale and complexity of this repackaging (particularly of mortgage-backed assets) hugely increased as investment banks created an alphabet soup of new debt products. They pooled asset-backed securities, divided the pools into risk tranches, added a dose of leverage, and then repeated the process several times over. Increasing wizardry made it possible to create a dizzying array of derivative instruments, allowing borrowers and savers to unpack and trade all manner of financial risks.
The derivative markets have grown at a stunning pace. According to the Bank for International Settlements, the notional value of all outstanding global contracts at the end of 2007 reached $600 trillion, some 11 times world output. A decade earlier it had been only $75 trillion, a mere 2.5 times global GDP. In the past couple of years the fastest-growing corner of these markets was credit-default swaps, which allowed people to insure against the failure of the new-fangled credit products. Although the heart of the new finance is on Wall Street and in London, the growth of cross-border capital flows vastly extended its reach. As a result, financial markets, particularly in the rich world, have become increasingly integrated.
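The tranching described above can be illustrated with a minimal sketch. The tranche names, sizes and loss figures below are hypothetical round numbers, and real CDO waterfalls are far more complex; the point is only the ordering of loss absorption, from the junior “equity” slice up to the senior one:

```python
# Minimal sketch of a securitisation "waterfall": losses on a mortgage
# pool hit the equity tranche first, then mezzanine, then senior.
# All figures and tranche names are hypothetical illustrations.

def tranche_losses(pool_loss, tranches):
    """tranches: list of (name, size) ordered junior-most first.
    Returns a dict mapping name -> loss absorbed by that tranche."""
    losses = {}
    remaining = pool_loss
    for name, size in tranches:
        hit = min(remaining, size)  # each tranche absorbs up to its size
        losses[name] = hit
        remaining -= hit
    return losses

# A $100m pool carved into three tranches (junior first):
tranches = [("equity", 5_000_000),
            ("mezzanine", 15_000_000),
            ("senior", 80_000_000)]

# A 3% pool loss is absorbed entirely by the equity tranche...
print(tranche_losses(3_000_000, tranches))
# ...but a 12% loss wipes out equity and eats into the mezzanine.
print(tranche_losses(12_000_000, tranches))
```

This ordering is why the senior tranches could be sold as “safe” while the same pool of sub-prime loans backed them all: the safety held only as long as pool losses stayed small.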

Excessive Lending Creating a Credit Boom

Innovative financing enabled much broader access to credit, mainly by a loosening of lending standards. New technology enabled lenders to use standardised credit scores, and the risk-spreading from securitisation made it appear safer to lend to less creditworthy borrowers – particularly at the low end of the American housing market. Easy credit spread like an infectious disease around large parts of the English-speaking world from the 1980s and subsequently around the world. On the individual level it was manifested by the granting of housing loans to people who could not afford the repayment obligations. In addition, loosely controlled credit card facilities were given to minors and under-employed persons. Hire-purchase agreements were made with over-borrowed persons. Household debt in the USA rose from under 80% of disposable income in 1986 to 100% in 2000 and then soared to 140% by 2007. On the level of financing institutions, the accumulated credit derivatives were passed on like a pyramid scheme in ever-expanding concentric circles from centres like New York and London to other financial capitals like Zurich, Paris, Amsterdam, Vienna, Brussels, Frankfurt, Tokyo and Singapore.


Excessive Optimism Created an Asset Price Bubble

Widespread optimism, coupled with easy credit, played a major role in driving up asset prices – particularly houses and stock market levels – often in reciprocal interaction. People tend to take rising prices as a cause to buy. Hence, in boom times over-confident investors drive up prices of houses and stock market shares. The Economist of 20th January 2009, in a special report on the future of finance entitled “Greed – and fear”, p.8, describes the effect of a virtuous circle of optimism: “Asset prices pull themselves up by their bootstraps. As houses become more valuable, house owners feel richer. If they then spend more, companies make more money, which in turn increases the value of shares and bonds. Profitable companies invest and create jobs. As the economy thrives, there are fewer defaults. Lenders are therefore willing to lend more on easier terms. This extra credit makes asset markets liquid: if ever you need to sell something, there always seems to be a ready buyer. Ample credit also seems to feed into spending and asset prices. That makes people feel richer. And so it goes on ... for as long as people are optimistic...” Residential and commercial real estate prices rose at unprecedented rates. Rampant speculation drove the US housing market in the period 2005-2006 to an annual increase rate of 11 percent. By mid-2007 the value of the multi-trillion dollar pool of lower-value mortgages that had been created over the 2003-6 period started to fall in a self-reinforcing downward cycle which has caused markets to plunge across the globe. An additional category of mortgage loans called “Alt-A”, dressed up as “mid-prime” and granted to supposedly better-heeled Americans, was also soaring at a disconcerting rate. The Economist claims that this market was trumpeted as a means of extending home ownership to those, such as the self-employed, with a reasonable credit standing but unsteady income.
This market specialised in loans with scant documentation and exotica such as negative-amortisation mortgages, which allowed borrowers to pay less than the accrued interest, with the difference added to the loan balance! (See The Economist, February 7th, 2009, p.64) Reportedly half of all “Alt-A” borrowers were in negative equity by the beginning of 2009 as a result of falling house prices. Their woes spilled over to holders of these securities. Banks were selling these holdings to hedge funds and other asset-management firms, often at large discounts. Several European and American banks were holding on to billions worth of noxious assets, exposing themselves to the danger that, as unemployment in America increases, even strongly underwritten loans may go bad. Huge amounts of investment flowed into stock markets and a speculative frenzy drove price indexes up to unprecedented levels across the world in the period 2003-2007. But by August-September 2008 markets tumbled. By early November 2008, the S&P 500 in the USA was down 45 percent from its 2007 high. Stock market investors financed by bank credit were forced by margin calls to sell their assets, which created a downward spiral. Institutional investors responsible for investing insurance premiums and pension contributions were faced with a massive decline in the value of their investments. Roger Altman, in his article entitled “The Great Crash, 2008 – A Geopolitical Setback for the West” (Foreign Affairs, January/February 2009, Vol.88, No.1, p.5), claims that in the USA the “... damage is most visible at the household level. Americans have lost one-quarter of their net worth in just a year and a half, since June 30, 2007, and the trend continues. Americans’ largest single asset is equity in their houses. Total home equity in the United States, which was valued at $13 trillion at its peak in 2006, had dropped to $8.8 trillion by mid-2008.
Total retirement assets, Americans’ second-largest household asset, dropped by 22 percent, from $10.3 trillion in 2006 to $8 trillion in mid-2008. During the same period, savings and investment assets (apart from retirement savings) lost $1.2 trillion and pension assets lost $1.3 trillion. Taken together, these losses total a staggering $8.3 trillion.”
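The negative-amortisation mechanism mentioned above – paying less than the accrued interest, with the shortfall added to the loan balance – can be sketched in a few lines. The loan size, rate and payment below are hypothetical round numbers chosen for illustration:

```python
# Sketch of a negative-amortisation loan: each month interest accrues,
# and any shortfall between interest and payment is added to the
# balance, so the debt grows even though the borrower keeps paying.
# All figures are hypothetical illustrations.

def roll_forward(balance, annual_rate, monthly_payment, months):
    """Return the loan balance after `months` of payments."""
    for _ in range(months):
        interest = balance * annual_rate / 12
        balance += interest - monthly_payment  # shortfall is capitalised
    return balance

# $300,000 at 7%: interest alone is $1,750 a month. A borrower paying
# only $1,200 owes *more* after two years than at the start.
final = roll_forward(300_000, 0.07, 1_200, 24)
print(round(final))  # roughly $314,000: the balance grew, not shrank
```

When house prices were rising, the growing balance could be refinanced away; once prices fell, such borrowers slid straight into negative equity.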

Global Financial Imbalances

In its January 24th 2009 issue, The Economist, p.65, argued that the “... damage done to the financial system by lax controls, rotten incentives, and passive regulation is plain. Yet underlying the whole mess was the deeper problem of imbalances. A growing number of policymakers and academics believe these lay at the root of the financial crisis.” The underlying argument is that over the past decade unprecedented levels of liquidity were built up in New York and London as a result of a “global savings glut”. Enormous financial surpluses were realised by the major exporting countries, particularly China, Singapore, South Korea and the oil-producing states of the Persian Gulf. Their balance-of-payments surpluses were consistently recycled back to the West in the form of portfolio investments at investment banks in Wall Street and the City of London. This mountain of liquidity faced declining yields. Huge amounts of capital started flowing into the higher yields of weak assets such as sub-prime mortgages. The tidal wave of surpluses kept flowing to Wall Street and the City of London because they were considered to be the most developed financial markets offering the best and safest returns. In most emerging countries, the local financial markets are relatively “immature” in the sense that they do not offer enough trustworthy savings vehicles to absorb the “savings glut”. The USA, and to a lesser extent Britain, was considered the favourite destination for global capital flows on account of its broad and liquid markets for securities. The tidal wave of capital inflows apparently encouraged financial market operators to create a range of dodgy instruments. Marginal loans were packaged into supposedly safe securities. This supply of credit spurred residential and commercial real estate prices to rise at unprecedented rates. It spilled over into a boom in stock market price levels. As the USA was sucking in vast amounts of savings from abroad, its own current account (the balance between national savings and investment) plunged into the red to the level of 6% of GDP in 2006. The USA needed to borrow from abroad to pay for its deficits.
A by-product of the exporters’ vast trade surpluses was the piling up of reserves of US dollars, which were then placed mostly in US government securities as well as in quasi-state institutions such as Fannie Mae and Freddie Mac. Eventually the upward spiral reversed itself with a vengeance: a catastrophic downward spiral.

Contagious International Capital Markets

In their treatise on the battle between government and the market place that remade the modern world, entitled The Commanding Heights (New York: Simon & Schuster, 1999), the authors Daniel Yergin and Joseph Stanislaw made several observations which are remarkably relevant to the ongoing danger of contagion. Responding to the question whether confidence in market systems would be affirmed or eroded by the continued move toward markets, they observed that one of the most dramatic signs of confidence was the degree to which people around the world were entrusting their savings and their retirement provisions to the stock market. In mid-1997, mutual fund assets in the USA already exceeded assets in bank accounts by 25 percent – despite the fact that results could not be guaranteed, because volatility and risk are inherent to the market system. “Of all the dangers,” they wrote on p.394, “... perhaps the greatest threat ... would arise from massive disruption of the international financial system. Capital markets are growing far faster than the capacity to regulate them – or even to understand them. The very scope and reach of the integrated global markets create financial risks on an unprecedented scale. These dangers result from the inter-connection of currency markets, interest rates and stock markets, along with the extraordinary growth in the various ancillary markets that hinge on them. In the past, financial panics took weeks or even months to unfold. Now contagion can sweep through the world’s markets in hours, endangering the entire edifice.” At the end of the 1990s an unlucky conjunction did occur. A financial crisis that began in Asia in 1997 spread around the world as a result of the hyper-connection of global capital markets.
But many countries did not have the regulatory and surveillance machinery to cope with the ebb and flow of funds – particularly to comprehend the scale of short-term debt that lacked sufficient collateral. At the time, the IMF led the rescue efforts to reform and repair financial systems in many countries: inter alia South Korea, Indonesia, Russia, Mexico and Brazil. The strength of the USA economy provided an expanded demand for goods and services. But the first years of the new century also saw the extraordinary ballooning of the US current account deficit, threatening the stability of both mature and emerging markets. Towards the end of 2008 the unexpected did happen again. This time the convergence of several shocks reverberated simultaneously throughout the global economy – and the contagion started in Wall Street with no apparent stabilising rescue team in sight.

Collapse of Confidence

When pessimism sets in, a self-reinforcing downward spiral is set in motion. As asset prices fall, people spend less, businesses postpone investments and cut back on hiring. Liquidity and credit facilities decline and a value-destroying uncertainty takes hold. Banks become more cautious in granting credit and demand stringent security standards. In a downturn, banks themselves are subject to stringent capital cover requirements. Forced asset sales drive down price levels and markets go into a regressive tail spin. The failure of the investment bank Lehman Brothers, and the losses that spilled over to money-market operators that held its debt, prompted a global run on wholesale credit markets. This made it harder for banks to find finance. Even healthy banks and companies were cut off from all but the shortest-term financing. This credit freeze raised concerns about the prospects of the real economy, which in itself added to concerns about the solvency of banks.

Corrective Action Taken

These concerns required prompt action to unblock clogged credit markets, for example by buying commercial paper from companies or by guaranteeing debts issued by banks. It was clear that the repair of dysfunctional credit markets would be a prerequisite for opening the road to recovery. A further plank in the crisis-management policy platform was to boost banks’ capital. By increasing the capital cover banks hold in relation to the amount held in their loan books, government injections would catalyse the rebuilding of banks’ balance sheets. Government guarantees of the security of bank deposits were required to stave off the much-dreaded “runs on banks”. The restoration of the health of the real economy might prove to be much more complex and demanding. It requires various efforts to cushion the negative effects of the economic fallout: growing unemployment, residential property foreclosures, tumbling stock markets, poverty. The traditional tools involve demand management by way of anti-cyclical fiscal and monetary policies. The fiscal side involves stimulatory packages aimed at distributing large amounts of government hand-outs to boost demand, including “pump-priming” through government deficit spending. The monetary side involves the control of the money supply by influencing the volume of borrowing and lending by commercial banks, mainly by lowering interest rates. The inherent danger of over-zealous efforts to fix American capitalism is governmental over-reaching. It is important to realise that America experienced a failure of its financial system, not of the free enterprise system. Restoring confidence requires fixing the financial system and addressing market failures. This cannot be done by demonising business leaders or by limiting performance-related remuneration. The financial system requires better oversight, not more oversight.
In the aftermath of the financial crisis, the worst since the 1930s, the economy shrank by an estimated 2.4 percent in 2009. There was much damage to be repaired: high unemployment, millions of foreclosed homes and a huge hole in the public finances. The world’s biggest economy was faced with the challenge to begin its long overdue rebalancing. American consumption and borrowing could no longer be the engine of its own and the world’s economy.

The Obama Era

In 2008 the American people elected Barack Obama as President. He is the son of a black African student and a white American mother and was raised by his maternal grandparents in the state of Hawaii. He worked as a community organiser in Chicago, was trained as a lawyer at Harvard University and served in the Illinois state senate before being elected as a US senator for Illinois. He conducted his campaign on the platform of “the need for change” – a theme he propagated with considerable oratorical skill. His campaign found strong resonance amongst minority groups (e.g. Jews and Hispanics), Black voters, trade unionists and young voters. His opponent was Senator John McCain, a war veteran with limited campaigning appeal. Obama chose as his Vice-President Senator Joseph Biden of Delaware, a veteran legislator with long experience of foreign affairs. Barack Obama was propelled into the White House largely by the financial crisis. The crisis created many exploitable opportunities. The most obvious rescue areas were the banks and the car manufacturers, General Motors and Chrysler. Obama explained his position on May 22nd, 2009, as follows: “We want to get out of the business of helping auto companies as quickly as we can … In the same way I want to get out of the business of helping banks. But we have to make some strategic decisions about strategic industries.” Much of the $787 billion federal stimulus package passed in February 2009 went to the states, but only if they complied with federal guidelines on how to spend the money. Federal control was extended by means of specifications and regulations. In the aftermath of the financial crisis, the American government was faced with a serious fiscal constraint: the fall in tax revenue. Recent polls showed high anxiety among voters about record deficits and rising debt. Americans were facing the reality that private risk-taking in the world of finance had plunged the country into its worst recession in decades. In response Obama and the Democrat-controlled Congress explored new ways to expand government’s responsibilities. Obama was elusive on where he believed the boundary between government and the market should be.
In his inaugural address he said: “The question we ask today is not whether our government is too big or too small, but whether it works.” According to The Economist’s assessment, Obama’s actions belie the agnosticism expressed in his inaugural speech. “Most of the big domestic initiatives taken since he became president involve expanded federal government activity, either temporary or permanent. They include more oversight over the financial system and executive pay, extending health insurance to the 15 percent of Americans who lack it, shifting energy consumption from fossil fuels to renewable sources, and redistributing income from the wealthy to the middle-class.” (See The Economist, “Government versus market in America”, May 30th, 2009, p.23) The results of opinion polls taken in May 2009 indicate that Americans still had a favourable image of business activity. A total of 76 percent of respondents agreed that their country’s strength is “mostly based on the success of American business”. A further 90 percent admired people who “… get rich by working hard”. People were, however, jaded by business’s excesses: only 37 percent of respondents believed that business strikes the right balance between profits and public interest. (Results of the Pew Research Centre, quoted by The Economist, op.cit., p.23) The polls also showed that Americans retain a healthy scepticism about excessive statism. They do not support increasing government control of their lives. The polls also showed that the increasing trust in government is concentrated amongst Democrats. Republicans are more distrustful, maintaining that support for Obama is actually support for a secular-socialist agenda that is alien to America’s history and traditions.
Arthur Brooks, the president of the American Enterprise Institute, was quoted by The Economist’s “Lexington” column as asserting that entrepreneurship is central to American culture – literally part of its DNA, thanks to all those immigrants importing the gene that makes you get up and go. Brooks argues that Americans voted for Obama because they were hoodwinked by a relatively small clique of influential rich intellectuals, Hollywood types, media folk and university teachers. Also in the mix that elected Obama were the idealistic young, as well as Blacks and Hispanics, who trust government more than other Americans do. Brooks argued that this minority coalition was helped by the economic calamity of 2008 to fool the majority into thinking that the crisis was caused by the private sector and that the state knew how to solve it. In Mr. Brooks’s opinion it was the other way round: the state’s social engineering (in particular government support for dodgy mortgages) caused the crisis, and its “remedies” will only make matters worse. Mr. Brooks maintained that neither state-engineered equality of income nor money itself makes people as happy as their own “earned” success in life. (See The Economist, June 19th, 2010, p.40) History seems to suggest that all moves towards big government, however controversial they may be, will in the end turn out to have strong staying power. Big government and free enterprise have co-existed in America since the Great Depression. This “mixed” pattern can be expected to continue, with minor adjustments in the nature of its tilt from time to time.

Alarm Bells

The USA is still the world leader in terms of the size of its GDP, its military power, and its scientific and technological prowess. It is still the preferred destination for millions of migrants across the world hoping to find a better future. But there are many reasons to be concerned. In a perceptive article, “Complexity and Collapse – Empires on the Edge of Chaos”, Niall Ferguson explores the life cycles of great powers and concludes that some great powers fall quickly and without warning. A combination of fiscal deficit and military overstretch suggests that the USA may be the next empire in danger. (See Foreign Affairs, March/April, 2010, pp.18-22) Ferguson argues that large-scale political systems are complex systems that sooner or later succumb to sudden and catastrophic malfunctions – particularly precipitous and unexpected falls associated with fiscal crises. Alarm bells should therefore be ringing very loudly about the sharp imbalances between revenues and expenditures as well as the difficulties in financing public debt: a deficit of about 12 percent of GDP and a public debt the servicing of which could absorb up to 17 percent of federal revenues. Ferguson maintains that perceptions and expectations at home and abroad about the USA’s ability to overcome its problems are as important as the material underpinnings of its power. A shift in expectations about monetary and fiscal policy could force a reassessment of future US foreign policy options. A defective brake or a sleeping driver can be all it takes for a runaway train to go over the edge of chaos.

References

Ferguson, N. (2010) “Complexity and Collapse – Empires on the Edge of Chaos”, Foreign Affairs, March/April, 2010, pp.18-22
Galbraith, J.K. (1958) The Affluent Society, Boston: Houghton Mifflin Company
Cranston, M. (1966) A Glossary of Political Terms, London: The Bodley Head
King, R. (2007) Origins – An Atlas of Human Migration, London: Marshall Editions
Lowi, T. (1976) American Government – Incomplete Conquest, Hinsdale: Dryden Press
Ranney, A. (1975) The Governing of Men, Hinsdale: Dryden Press
Rossiter, C. (1953) Seedtime of the Republic – The Origin of the American Tradition of Liberty, New York: Harcourt, Brace & World
Wolfe, A. (2008) The Future of Liberalism, New York: Knopf
Yergin, D. & Stanislaw, J. (1999) The Commanding Heights – The Battle Between Government and the Market Place That Is Remaking the Modern World, New York: Simon & Schuster
The Economist, A Survey of America, 2003
The Economist, A Survey of America, 2005
The Economist, Special Report on America and the World, 2008
The Economist, Special Report on the Future of Finance, January 2009
The Economist, Briefing on Government versus Market in America, May 30th, 2009


3 The Lure of Social-Democratic Market Economies

The history of Europe is intertwined with the evolution of Western Civilisation: its accomplishments, its contributions and its failures. It chronicles the combined efforts of the people who inhabited this part of the planet to give expression to their desire to guide and organise their survival and fulfilment as a society. The ideas, ideals and objectives that have come to orient the actions of European nations are rooted in twenty-five centuries of Western history. They have been based on certain fundamental notions such as the originally Greco-Roman belief in the sovereignty of man’s rational intellect, and the faith in man’s spiritual relationship with an omnipotent Providential Power that first arose among the ancient Hebrews and early Christians, with an admixture of pagan obsessions with Nature or other cult forces.

Since the Treaty of Westphalia (1648), the Europe of nation-states with which we are familiar today has gradually taken shape: France, Germany, Italy, Spain, the Scandinavian countries and all the others. Despite recent efforts in Europe to establish a new supra-national governmental layer in the form of the European Union, the nation-state is still the predominant political unit. But the role of the state has undergone a major change. The dominant preoccupation of Europeans in the era of the feudal system in medieval Europe was with man’s spirituality as inspired by the scriptures and administered by the Catholic Church. Though the Greco-Roman valuation of man’s rational intellect was not repudiated, it was assigned subordinate significance. As commanded by the pervasive Church, man was to seek his ultimate happiness in salvation in the world beyond and to exercise his rationality not for its own sake but as an instrument for the glorification of God. Since the end of the Middle Ages this earlier subordination of rationality to other-worldly spirituality has been progressively reversed.
The dominant Western view today urges man’s fulfilment here and now. While it does not deny man’s spirituality, it sees it more as an ethical consciousness directing the use of his rationality in the service of his fellow man. The guiding inspiration of this modern Western world view is the ideal of maximised welfare here on earth. Its most insistent moral principle is the imperative that nature, human institutions, and even man himself be made to serve this ideal as efficiently and as fully as possible.

The Emergence of Social-Democratic Economies

The period since 1950 can be described as the golden era of social democracy in the Western World. Emerging from the devastation of the Second World War, most Western nations rebuilt their economies to unsurpassed levels of prosperity and wellbeing. The per capita income levels of Germany, France, Italy, Sweden, Norway, Denmark, the Netherlands and Belgium can only be challenged by the USA and Japan.

The Welfare State

Much of this progress took place under the auspices of the interventionist “welfare state”, which entered the Western political arena during the years of the Great Depression and the Second World War. It combined elements of “socialist” thought with the dynamic market forces of “capitalism”. It held that public influence on (and sometimes control of) key economic decisions was to the common good and necessary to maintain the basic socio-economic fabric of society to the extent that market forces do not provide for it. It generally entailed a situation in which governments provide all their citizens with certain guaranteed minimum services such as physical infrastructure, formal education, medical care, old age care, housing and protection against loss of employment. Governments, in response to public demand, have in the course of time steadily gained more and more control over the production, jobs and wealth of society. In many ways the enlarged state role was undertaken in an attempt to create a recession-proof economy by such Keynesian-style measures as temporary tax cuts and spending increases during downturns. In the decades following the Second World War, such measures achieved significant success in fostering economic growth and job creation. Although advocates of the welfare state promoted the idea that every citizen ought to have the minimum conditions of the good life as a matter of right, they do not agree on the exact minima that ought to be guaranteed or to what degree they ought to be guaranteed. They do agree, however, that it is the proper function of government to provide every citizen with some degree of formal education, medical care, social security and employment protection, and that the rich be taxed to provide for the poor. In most Western nations the idea of “laissez faire” has lost most of its support, although many people still speak strongly in favour of “free enterprise” and against government “regimentation”. At the same time parties with “socialist” leanings have espoused programmes entailing public ownership and operation of railroads, electricity and gas services, postal services, airlines and coal mines without thereby advocating communist totalitarianism, the most repressive variant of socialism. Politically as well as economically, both extremes have been shown to have grave imperfections. Most Western countries have become accustomed to the intervention of government in its ascribed role as the main gap-bridger, as a redistribution agency as well as a provider of public goods, collective goods and social insurance. This form of mixed economy is usually described as the “Social Democratic Market Model” or the “Social Market Model”.

Doctrinal Foundations

Social democracy is the most liberal form of socialism. Although there is no universal model of social democracy, it emphasizes equality rather than individual freedom, collective responsibility rather than free-market liberalism, and government intervention rather than market forces. However, one great objective of socialism, the welfare state – the responsibility of the community for a minimum standard of social and economic security for every person – is no longer a monopoly of socialist parties. All major parties in democratic nations accept the need for a certain minimum of welfare state services. Social-democratic party leaders (alone or in coalitions) have been in power during much of the post-World War II years in Western Europe: Germany, Belgium, Denmark, Sweden, Norway, Finland, the Netherlands, Portugal and Spain. During this period the “welfare state” became the economic, social, cultural and organisational expression of the desire to promote equality. Some parties are stronger supporters of it than others, and some parties favour more benefits than others, but the principle that every citizen is entitled to a minimum standard of living is no longer a matter of partisan controversy. Social democracy accepts a multi-party political system and believes in gradual, peaceful means of reaching its socialist goals. In practical terms this has meant that social democrats have concentrated more on alleviating what they regard as hardships created by capitalist economies (unemployment, salary and wage inequities) than on directly restructuring societies according to a collectivist blueprint.

Mixed Economies

States ruled by social democrats are generally mixed economies, combining elements of free enterprise competition with state ownership or direction of key industries. Germany is more free enterprise orientated than its Scandinavian neighbours or France. But in most social democracies, the nature of the mix of their economies depends on the party in power. Some parties, like the Gaullist Party in France and the Christian Democratic Party in Germany, occupy a right-of-centre political posture advocating classically liberal ideas based on a free-market approach. Others, like the Socialist Party in France and the Social Democratic Parties in Germany and Sweden, are left-of-centre parties in favour of government intervention to achieve socialist objectives. In most Western countries, however, socialist regimes remained within the context of moderation. Only key industries or utilities were placed under direct government ownership and control, and the rest were left in private hands. Even in those nations which have had socialist regimes for generations (e.g. Scandinavia), a large sphere of production is left to free enterprise. Thus market forces can still prevail in a capitalist-socialist “mix” in countries such as Sweden and France. Some ventures – power generation, railroads, communications, certain “basic” industries like motor manufacturing, iron and steel – have undergone partial nationalisation, e.g. in France. But socialists had to moderate their demands on account of the failure of nationalised industries to perform according to expectations. Social democratic planning never took the form of total, rigid, blueprint-style central planning. Ostensibly it sought to have community interest replace “selfish” production decisions, to set an “ethical” rather than a monetary standard regarding production priorities. But these claims are more rhetorical than real. Democratic socialists by and large refrained from eliminating the market function. They tried to impose a “standard of social value” on the market rather than to replace it. State plans are used as much to guide and supplement the private market as they are to lay out government goals. Representatives of labour and management as well as bureaucratic officials are consulted in the planning process. Great pains are taken to avoid direct physical controls and to work with the more subtle techniques of monetary incentives, subsidies and penalties. Numerous techniques are applied to break up concentrations of economic power and to redistribute incomes. Business regulation, welfare programmes and taxation have been directed toward achieving these ends. Efforts have been made to equalize educational opportunities and to widen the chances of children from lower-class families to broaden their education. Much attention has also been given to improving housing for the poor so that current disparities might not breed feelings of class conflict and hostility. Hence, the economies of social democracies are justifiably called “mixed”.

Basic Characteristics

The most distinctive achievement of social-democratic rule is its provision of an extensive network of social services. These are provided through public sector delivery systems – either free of charge or at a modest cost. A variety of schemes are in place in the different countries to organise such functions as health care, old age care, assistance to the handicapped, child care, education, research, social work, family and individual counselling, etc.

Comprehensive Social Security Schemes

As various social security schemes spread across Europe over the years prior to 1940, three dominant systems emerged: the Bismarckian, the Beveridgean and the Scandinavian. Most countries adopted the principles of Bismarck’s “workmen’s insurance” as their basic model for benefits, including sickness and unemployment allowances and pensions, with public assistance as a subsidiary element. Thus the main instrument in the organisation of compensation for loss of income has been the system of “social insurance”, which is largely an imitation of private, voluntary insurance systems. There is a clear connection between contributions (by insured persons or employers) and eligibility for benefits. The Beveridge system is a modification of this scheme, also using contributions by beneficiaries as a condition for eligibility for social benefits but involving the state in a tripartite arrangement. The Scandinavian scheme, however, is characterised by being run by public agencies, financed primarily through the revenue of general taxation, and providing benefits conditioned on citizenship, not on previous occupation, income or contributions. It is often called “people’s insurance” in contrast to “workmen’s insurance”. Down the years these divergent schemes and approaches tended to converge, although the main features of the various systems are clearly visible in social legislation in terms of the efforts made to maintain a balance between public and private responsibilities. Scandinavians, in particular, strongly believe in the justice of equality and have introduced cradle-to-grave welfare services. Their systems provide free hospitalisation, surgery, medicine and dental care. The “people’s security” laws grant everyone disability, old age and unemployment benefits. Every worker is guaranteed at least four weeks’ paid vacation. Workers receive allowances for children, tuition-free education through university, sick pay amounting to 90 percent of normal wages and a retirement pension equal to 60 percent of the average income of a worker’s fifteen highest-paid years. Home-nurses and day centres for children are free. There is, generally, a relaxed attitude towards government and the public sector. Socialism has always based its appeal on two main issues: social equality and the abolition of poverty. Most social democracies in Western countries have gone a long way towards reaching these objectives. The limits of the welfare state are set only by the ability of these countries to pay for their benefits rather than by differences of ideology amongst their major political parties.
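The pension rule just described – 60 percent of the average income of a worker’s fifteen highest-paid years – can be stated as a simple calculation. The sketch below is purely illustrative; the function name and sample figures are assumptions, not part of any actual Scandinavian statute.

```python
def pension(annual_incomes, top_n=15, replacement=0.60):
    """Pension equal to `replacement` (60 percent) of the average
    income of the `top_n` (fifteen) highest-paid years."""
    top_years = sorted(annual_incomes, reverse=True)[:top_n]
    return replacement * sum(top_years) / len(top_years)

# A worker earning a flat 100 (in any currency unit) for 20 years
# would receive a pension of 60 per year.
print(pension([100.0] * 20))
```

Because only the best-paid years count, late-career wage growth raises the pension disproportionately, which is one reason such formulas become expensive as wages rise.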

Taxation Rather Than Nationalisation

Because social democrats have mostly come into power in industrially advanced and politically democratic nations, they have been cautious in their efforts to change existing systems. They have largely refrained from nationalising their industries. During the early post-World War II days, the most comprehensive nationalisation strategy was followed by the British Labour Party, which came to power in 1945. In its election campaign the Labour Party undertook to nationalise certain listed industries and services. In each case, it tried to explain why nationalisation was necessary. For gas and light, water, telephone and telegraph, and other utilities, the criterion for nationalisation was the existence of a natural monopoly. The coal, iron and steel industries were considered to be so sick and inefficient that they could not be put on their feet except through nationalisation. The nationalisation of all inland transport by rail, road and air was proposed on the ground that wasteful competition would best be avoided by a co-ordinated scheme owned and managed by public authorities. The Bank of England was also proposed for nationalisation on the ground that its purpose was so obviously public. After its electoral triumph in 1945 the Labour Party methodically carried out its programme. The Scandinavian countries (Sweden, Norway, Denmark and Finland) have had the most impressive record of social reforms, both in the inter-war years and after World War II. From the early 1930s onward, they have been governed by socialist governments based on parliamentary majorities. As a result communism has been kept down to minor proportions as a political force in these countries. The Scandinavian socialist movements have emphasized economic development and social security rather than nationalisation, and their economic policies have been centred on monetary measures (such as low interest rates) and taxation rather than public ownership.
In all social democracies taxation has acted as the greatest leveller. As a result of high income and estate taxes there has been a significant shift in the distribution of wealth. While the share of high incomes in the national income has declined, there has been a sharp increase in the middle-income groups, particularly skilled workers. The trend toward more social equality can also be seen in the fact that the proportion of the national income paid in wages and salaries increased significantly. These policies went a long way toward eliminating extremes of inequality.

Industrial Democracy

A basic belief among socialists of whatever stripe is that if the means of production remain under the complete control of private owners, the worker will be exploited. All socialists owe some debt to Karl Marx, who framed the classic socialist indictment of capitalism, accusing it of turning labour into a commodity and thus exploiting and dehumanising workers while it enriches bourgeois owners. Social-democratic governments all over Western Europe have given workers a major voice in management. Today employee representatives sit on boards of directors and management committees in Denmark, Germany, Sweden and other Western European states. In practice employees have a voice in setting wage levels, dividing profits, planning investments and firing executives. Karl Marx claimed that he had discovered certain “scientific” laws of history which hold that capitalism, by creating an increasingly numerous and impoverished working class, produces the forces that would eventually destroy it by way of a violent revolution. This prediction inspired many socialists for more than a century with the certainty of inevitable triumph. But rather than pushing workers deeper into poverty, as Marx predicted, capitalism has lifted the vast majority of labourers in Western Europe into the middle class. Modern unions have given employees a counterforce to management’s power – and often more than a counterforce. An economic downturn now hits harder at corporate profits than at wages, which are usually fixed by contract or legislation. Today there is grave concern that trade unions have become too powerful and that the general public interest is not adequately protected against the abuse of trade union power – particularly by public sector unions.

Employment Participation

The West European social democracies have undergone some striking demographic changes in the post-World War II period which are likely to have an important impact on future trends. The most striking changes are in the lives of women. At the beginning of the 1980s the proportion of women aged between 25 and 34 in the labour force was 42 percent in France and 49 percent in Germany. By 1988 these figures had risen to 67 percent, 75 percent and 87 percent. As women went out to work, the fertility rate fell, which, in turn, affects the age distribution of the population. By the end of the century, youngsters aged 14 or under made up a fifth of Europe’s population; in 1950 they accounted for a quarter. The first Europeans who started to have fewer babies were the rich Northerners – in Protestant Scandinavia, Germany and also Britain. Then followed France, and then the Catholic Italians and Spaniards. Italian women now have the lowest fertility in Europe.

Ageing Populations

The next important change concerns ageing patterns. In the world as a whole, half the people are under 24 years old; in Europe, by contrast, half the citizens are older than 34. Within a generation, in most European countries a quarter of the population will be over 60. As Europe becomes a continent of the old – and especially of old women – its politics and economics are likely to alter. Lobbies for lower inflation, larger pensions, crime control, better health services and more reliable public transport will grow. The growth of women’s employment is predicted to coincide with a sharp drop in the proportion of Europe’s workers who belong to unions. Across Europe, the rise in the ratio of pensioners to those of working age is striking. In 1950 this “dependency ratio” in European countries was commonly under 20 percent. By 2040, on present trends, this ratio is expected to rise to 30 percent. With a lower birth rate, lower immigration and an ageing population, Europe’s labour force will start to shrink as a share of the population. Without faster growth, Europe will be unable to afford its welfare system. The general growth in life expectancy will put more pressure on health and welfare services, state-run pension schemes as well as pay-as-you-go retirement schemes. The latter pay pensions to retirees out of the contributions from current workers. As the proportion of old people rises across Europe, such schemes are running out of money. Recently the German government tackled its pensions problem by increasing workers’ contributions, raising the retirement age and reducing the value of pensions. Governments in most European countries are finding it difficult to push through such unpopular reforms. Many workers now prefer to take out life-insurance policies to supplement state pensions. A growing number are turning to pension funds.

It is felt that such funds will take the pressure off state pension systems and thereby make it easier for governments to trim their bloated budget deficits. These pension funds could, in addition, become a major source of equity capital to be invested in growth-promoting productive assets. Europe seems to be facing a choice: it can accept slower economic growth with an ageing population while trying to keep newcomers from Eastern Europe and the Mediterranean coastlines out, or it can face up to the problems of absorbing a growing number of foreign immigrants.
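The strain that a rising dependency ratio places on pay-as-you-go schemes follows from simple arithmetic: current pensions are financed entirely by current contributions, so the required contribution rate is the replacement rate multiplied by the ratio of pensioners to workers. A minimal sketch, in which the 60 percent replacement rate is borrowed from the Scandinavian pension example purely for illustration:

```python
def payg_contribution_rate(dependency_ratio, replacement_rate):
    """Pay-as-you-go balance: contributions must equal pensions, i.e.
    rate * wage * workers = replacement * wage * pensioners,
    so rate = replacement * (pensioners / workers)."""
    return replacement_rate * dependency_ratio

# Dependency ratios of 20% (1950) and 30% (projected 2040) are from
# the text; the 60% replacement rate is an illustrative assumption.
for ratio in (0.20, 0.30):
    rate = payg_contribution_rate(ratio, 0.60)
    print(f"dependency ratio {ratio:.0%} -> contribution rate {rate:.0%}")
```

On these assumed figures, the move from a 20 percent to a 30 percent dependency ratio raises the required contribution by half, which is why governments resort to higher contributions, later retirement or lower pensions.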


Problem Areas

Declining Growth Trends

The average economic growth rate of every major industrial country in the Western World has been on the decline for several decades. The largest economy in Western Europe, Germany, is a good example of declining growth trends. In the 1950s West Germany’s output doubled. It then grew by 70 percent in the 1960s, 35 percent in the 1970s, 20 percent in the 1980s, 15 percent in the 1990s and 5 percent in the period 2000-2010. The economic sclerosis afflicting the rich Western economies manifests itself in many symptoms: unemployment rates above 10 percent; chronic budget deficits, with levels of government expenditure in some cases approaching, and in several cases exceeding, 50 percent of national output; social welfare systems placing a heavy burden on society; and public debt levels creating debt traps where governments cannot service the public debt.
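The decade figures above can be restated as average annual growth rates using the compound-growth identity (1 + g)^(1/10) − 1. The short sketch below performs the conversion; the decade totals are taken from the text, while the function itself is merely illustrative:

```python
def annual_rate(decade_growth):
    """Convert cumulative growth over a decade into the implied
    average annual growth rate: (1 + g) ** (1/10) - 1."""
    return (1 + decade_growth) ** 0.1 - 1

# (West) German decade growth figures from the text.
decades = {"1950s": 1.00, "1960s": 0.70, "1970s": 0.35,
           "1980s": 0.20, "1990s": 0.15, "2000s": 0.05}
for name, g in decades.items():
    # Roughly 7.2%, 5.4%, 3.0%, 1.8%, 1.4% and 0.5% per year.
    print(f"{name}: {annual_rate(g):.1%} per year")
```

Seen this way, the slide is from about 7 percent annual growth in the 1950s to well under 1 percent in the 2000s, a starker picture than the decade totals alone suggest.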

High Labour Costs

A major problem in Europe is the high cost of labour. In Europe workers enjoy five to six weeks off every year; companies are obliged to make large fringe-benefit and social-security payments for every worker on their payrolls; and unions hold sway over major corporate decisions. As a result, European manufacturing wages are significantly higher than those of the United States or Japan – even before “social costs” like pensions, unemployment insurance and severance packages are added. The fact that European companies are saddled with much higher costs than their competitors means that they are at a disadvantage. In the past the economies of the West were less exposed to the international competition that is today provided not only by China, but by a vast array of newly industrialised countries in the Pacific Rim. The competitive strength of these countries is enhanced by a well-trained workforce, an environment where work is a virtue and being idle is a vice, and where labour relations are non-disruptive and trade union power less exploitative. Most governments in Europe are not only sympathetic to labour, but are beholden to it for electoral support. As a result they are reluctant to challenge the status quo. It would take a brave set of politicians to challenge the operating systems of wage indexation or to call for a rollback in worker benefits. European economies are permeated by laws, regulations, agreements and customs that many workers and their political camp followers hold dear. The benefits are regarded as rights rather than privileges, and it is virtually impossible to get rid of employees without payment of exorbitant severance costs. These “turnover costs” not only raise labour costs for ongoing businesses; they also discourage new investment because of exorbitant start-up costs. Would-be entrepreneurs are scared off by the high price of failure, and even large companies hesitate to hire people they may not be able to use later. In this way the existing system actually contributes to high unemployment and the dearth of investment in business creation and expansion.

Open-ended Expansive Government

Most European governments seem to have lost the battle of curbing expenditure and bringing their budgets into balance. Large and growing deficits are the order of the day. A negative correlation has emerged between the size of government and the rate of economic growth. Politicians and bureaucrats do not seem able to abandon policies that favour healthcare spending, shorter work weeks, higher pensions for the elderly and other forms of unproductive consumption spending. The political courage seems to be lacking to clear the way for activities that would promote economic growth: tax cuts to stimulate private initiative and investment; research and development to promote technological innovation; on-the-job training to raise productivity; the inculcation of self-reliance to reduce the current dependency on government provision. The underlying problem faced by all social democracies concerns the role of government. What should it do? What can it do? What are government’s limits? The expansion of government can only be checked by a clear understanding of the limitations of government’s capabilities. Too often government has been regarded as the ultimate answer to most social problems. This expansive philosophy lies at the root of huge budget deficits. Deficits won’t be controlled unless societies become more discriminating in their use of government. But the much larger truth is that once established, programmes achieve virtual immortality. They are protected by affected constituencies and lobbies. Programmes survive long after their public justification has vanished. Thus the momentum of government activity becomes open-ended. Pressure groups portray their own interests as pressing public concerns that demand national action. In this way anything with public support becomes an integral part of the government agenda.

The Welfare Mentality

An inherent danger of a “welfare state mentality” is the emergence of a population of welfare clients who are dependent on income transfers and social security measures – a “hand-out society”. A mentality of dependency is not conducive to the inculcation of positive cultural or human values such as a strong work ethic, self-reliance and creativity. It creates a society with a frame of mind of people who live on “hand-outs” as “free riders”. There is also the additional danger that the taking over by the public sector of the functions formerly performed by social networks such as the family, the community and the church will redefine and restructure such functions in such a way that important features get lost. Public institutions such as schools, hospitals, old-age centres, etc., can augment and support family care – but should not replace it or become substitutes for community-based social structures. There is an important psychological difference between receiving economic, practical and emotional support directly from a spouse, a relative, a friend or a neighbour, and receiving such assistance from an anonymous public institution. In the first instance the tie between resource input and the provision of help is obvious. There are clear constraints on incentives to abuse assistance or to manipulate the maximisation of individuals’ share of the common cake. Similarly, there is a vast difference between the provision of help to somebody whom you know and care for, and paying taxes to finance a bottomless well of social services. The broader and more anonymous the system, the bigger the incentives to exploit the system. (Anderson, 1983, pp.24-31) Whenever a group attempts to secure collective goods for its members, there seems to be a tendency for some group members to sit back and let others bear the effort. After all, collective goods are enjoyed by all members of the group if they are provided for any member of the group, regardless of whether that member helped secure the collective good or was a free rider. Not surprisingly, if all group members pursue their rational self-interest (hoping that other members will do the work to secure the group’s collective goals while they themselves refuse to help), no one will provide the necessary effort to attain the group’s goals – and, no doubt, eventually the group will fail to obtain the collective good desired by all. In the case of collective goods, pursuit of individual rationality may thus come at the expense of collective rationality – particularly in the case of large groups where the role of individual members is largely unknown. In smaller groups the participation or non-participation of all members can be commonly observed. Thus indirect coercion or social pressure can be applied to encourage participation. It is a source of great vulnerability in the Scandinavian welfare systems that, in their search for universal coverage, flexibility and internal rationality, they have removed some of the characteristics which would remind the consumer of the fact that the provisions are not paid out of a bottomless well, but paid by citizens as taxpayers. They have enlarged the problems of the “free rider”. The connection between costs and benefits – factors which economically belong together and which should be psychologically connected – is systematically kept apart by the welfare system itself. (Anderson, 1983, pp.32-33)

The “Parasite Economy” A major factor that is easily overlooked is the burden imposed on “post-industrial” societies by “rent seekers”. Jonathan Rauch of the National Journal described this phenomenon as the “parasite economy”. This sector includes the activities of a wide array of professionals, lobbyists, unions, associations, consultants and bureaucrats who operate as “rent collectors” in advanced societies, generating what is euphemistically called “transaction costs”. In effect they constantly push up the costs of production without increasing productive output. They are overwhelmingly orientated towards gaining a share of existing income and wealth rather than towards the production of additional output. The great majority of people involved in the parasite economy redistribute income rather than create it, and in ways that reduce economic efficiency and output. Mancur Olson in The Rise and Decline of Nations described these special-interest organisations as “distributional coalitions” (rent seekers) which increase regulation, bureaucracy and political intervention in markets and, by the barriers they impose, reduce an economy’s dynamism and capacity to grow. (See Olson, 1982, pp.41-47)

The Weight of Public Debt A major headache in all Western social democracies is that the growth of public debt is constantly enlarging the share of future taxes which interest payments on that debt will require. The inherent danger in this trend is that a position might be reached where the annual interest payable on public debt exceeds total annual spending on welfare benefits such as pensions and unemployment allowances. Carrying a huge public debt is not sustainable.

The Financing Crisis The crucial issue of the welfare state is the question of financing. The simple truth is that in most instances its services and benefits have become too generous to be affordable. This has led to confiscatory tax levels, budget deficits, excessive public debt levels, inflation, overgrown public sectors and persistently declining growth rates coupled with rising unemployment. An important answer lies in the restoration of a clear connection between rights and duties and between contributions and eligibility. Contributors and recipients of benefits should not be obscured by a cloak of anonymity. Governments should be forced to cut their coats according to their cloth. Financial responsibility for the expenditure side depends on corresponding financial responsibility for the revenue side.

Creeping Bureaucracy Like all governments, social democracies have to function through their civil service. The civil service bureaucracy may well be the most permanent part of any political system: it endures while political leaders come and go like birds of passage. As permanent appointees, bureaucrats may provide stability when the political apparatus seems to flounder from time to time. But there is an ominous downside. Like any other organisation that provides its members with a livelihood and status, the bureaucracy is self-protective. It is to a considerable extent governed from within, and it uses its resources to strengthen itself. It gladly undertakes programmes that enhance its role, but it resists the liquidation of old functions. Officials remain in place whether or not they have real work to do. Civil servants want security of tenure for themselves above all, and they have generally obtained it in all social democracies. The possibility of dismissal for economic reasons is virtually unknown in Western democracies, and dismissal for incompetence requires such cumbersome procedures that it is seldom tried. Most bureaucracies lack an adequate criterion of efficiency such as profitability or some other objective standard to measure quality of performance. A business has to retreat from areas of unprofitability in order to survive, but a bureaucracy can even expand to cover its failures. Bureaucracies in social democracies normally favour socialist measures because such measures involve governmental controls and greater bureaucratic power. Hence the drive for socialism and welfare services amounts to an idealisation of the bureaucratic order. There is constant pressure for more responsibilities, bigger budgets and larger staff.

Much of the blame for the unmanageable bureaucracy falls on the public, which clamours for more protection, regulation, services, support, etc. As a result, demands on government outpace its capacity and expenditure chronically outpaces income. One of the gravest problems of modern social democracy is how to contain the public bureaucracy in order to keep it responsible and efficient – to ensure that it serves the public interest and not its own. In most Western countries, the public sector is now the power base of the trade union movement. Increasing government intervention and the resultant high costs of government spending “crowd out” private sector incentives to produce and lead to a declining rate of economic growth. Only value-adding productive enterprise creates taxable income – everything else consumes tax.

De-nationalisation One of the important lessons of West European experience during the past forty years of social and economic reform is that the substitution of state ownership and management for private ownership increases the tendency toward government centralisation, the dangers of statism and productive inefficiency. As a result, many social democracies adopted reform strategies such as socialisation (e.g. co-operatives in Scandinavia) and commercialisation and privatisation in several European countries. The Scandinavian recipe is the use of the co-operative movement, rather than the state, as the agent of social and economic reform. In Britain, as in most other countries, the co-operative movement has been largely confined to retail and wholesale trading in selected groups of articles. The Scandinavians have set up co-operatives for slum clearance, health insurance and industrial production. The concept of socialisation implies the diffusion of publicly owned property. Property is owned and managed not by the state, but by producer or consumer co-operatives, labour unions, churches, educational institutions, hospitals, development corporations, and other organisations. These organisations derive their powers from voluntary association rather than from the sovereign authority of the state. This approach has been tried with reasonable success in Scandinavia, as well as in Israel. In Scandinavia, most public housing has been built not by the state, but by corporations that combine individual ownership and management with financial assistance from housing co-operatives and municipal agencies. Such co-operatives are also common in manufacturing. In Israel, the Federation of Labour, the main organisation of labour unions, is the largest employer in the nation. It has a considerable share in the ownership and control of such basic industries as highway transportation, navigation, aviation, banking, building, heavy machinery, glass and rubber.
A sizeable proportion of Israel’s agricultural production takes place in co-operative communities. In all, the “workers’ sector” of the economy accounts for almost one-quarter of Israel’s national product. None of these solutions is perfect and mistakes are constantly made. But these forms of socialisation do seem to avoid some of the worst evils of nationalisation: monopoly and the resulting concentration of economic and political power. The economic power of private monopolies can at least be opposed by the political power of the state. But who will protect the individual against the state when the monopolist is the state itself?

Case Study 1: The Scandinavian Model

For decades since World War II outsiders looked with envy to the Scandinavian countries – particularly to Sweden – as the model of a modern welfare state. Sweden together with Norway, Denmark, Finland and Iceland make up a pan-Nordic group of nations with a common identity based on a shared culture, geography, history, ethnicity, love of nature and (except in Finland) a common Scandinavian language. As early as the 9th century, communities along the Rhine and along the eastern coast of the British Isles were sporadically raided by fierce Norsemen travelling in their Viking ships. Traces of their pillages were found as far south as Paris and even as far out as the North American coast in the west. Later the Kalmar Union of 1397 pulled the kingdoms of Norway, Sweden and Denmark together for more than a century. Each country was once ruled, at least in part, by a Nordic neighbour. The intertwined Nordic national identity is reflected in the flags of the five countries: each has its own distinguishing colours, but they all share the design of a cross on a plain background. Sweden, Norway and Denmark are constitutional monarchies, whereas Finland and Iceland are republics. Denmark, Finland and Sweden are members of the EU, but only Finland adopted the Euro. Denmark, Iceland and Norway are members of NATO. Population figures are as follows: Sweden 9 million, Denmark 5.5 million, Finland 5.2 million and Iceland only 300,000.

Demographic Patterns For generations up to the 1980s the Scandinavian countries were characterised by their homogeneous Nordic, staunchly Protestant populations. Since the 1980s they have absorbed large waves of immigrants from Asia, Latin America, Africa, Eastern Europe and the Middle East. Most immigrants settled in southern Sweden. By 2003 it was estimated that one in four persons in Sweden was foreign born or of foreign parentage. The foreign-born percentages were as follows: Sweden 12 percent, Norway 7 percent, Denmark 6 percent, Finland 3 percent. It is clear that the famous Protestant work ethic, prosperity and political calm of the Scandinavian countries will depend on how well they succeed in integrating the new arrivals. Birth rates among the Nordic populations are very low, so the populations are ageing. As a growing proportion of post-war baby boomers retire, the pool of working-age taxpayers is shrinking, and resistance to high taxes is rising amongst newcomers who are not steeped in the traditional welfare-state political culture. Despite these demographic transformations, the Nordic region still maintains an extraordinary standard of living. In terms of international comparative indexes, a citizen of a Nordic country today is more assured of wealth, political stability, generous welfare, low crime and a good life than the citizens of most other countries.

The Crisis of 1991 Up to the 1990s, Sweden had been living a fairytale. During World War II both Norway and Denmark were occupied by Nazi Germany and, like other Western economies, had to be rebuilt from devastated levels after the war. Sweden, on the other hand, had remained neutral during the war and had been left essentially undisturbed by its cataclysm. Emerging with its physical and industrial infrastructure unscathed, Sweden had a head start during the post-war reconstruction period. By the early 1990s the profligacy of the Swedes had caught up with them: debt soared and output faltered. By 1991, total government spending had soared to 72 percent of GDP compared to a European OECD average of 50 percent. Welfare spending had risen to a level that was beyond the capacity of the country to finance. Sweden’s GDP plunged by 6 percent between 1991 and 1993. Its budget deficit was running at 11 percent of GDP and its spiralling public-sector debt was equivalent to almost 100 percent of GDP. The scale of the problem was enormous. A right-of-centre Prime Minister came to power in 1991 with a four-party coalition committed to radical economic reform. At the time, tax revenues stood at 50 percent of GDP, social benefits accounted for 40 percent of disposable income and more than 30 percent of the workforce was employed by the public sector. To make matters worse, the Swedish economy had been in deep recession since 1990. Unemployment increased to 14 percent, adding expenditure pressures equivalent to half of the budget deficit. The other half of the deficit had to be pledged as part of a rescue package: direct aid and guarantees to rescue banks from the loan-loss crisis that engulfed the banking system in 1992.


Excessive Welfare Spending For many years the Scandinavian model of social democracy served as a showcase to the rest of the world that egalitarianism could be combined with prosperity. The Scandinavian countries led the rest of the world in per capita income levels – despite their womb-to-tomb welfare systems. Combining vigour with egalitarianism, the model seemed to offer a “third way” between capitalism and socialism. The state provided a comprehensive welfare system and guaranteed full employment while allowing the market to function in the productive side of the economy. Scandinavians, like many other social democrats, eventually came to the realisation that they could not afford these expensive luxuries. They had been living beyond their means. They had to work more productively and spend less on welfare services. Swedish industrial production decreased by 15 percent during the period 1988-1991. After thriving for several decades on its unique economic “mix”, with a jobless rate which never exceeded 3.5 percent, by 1992 Sweden was experiencing its deepest recession since the 1930s. To pay for its welfare state, Sweden’s tax burden was the heaviest in the world in 1991 (57 percent of GDP against the EC average of 41 percent). High taxes and excessive benefits destroyed the incentive to work. Sweden’s growth rate suffered and its income per capita declined to below the OECD average. After taking power in September 1991, the minority centre-right government of Mr. Carl Bildt tackled the task of introducing free-market policies and cutting the size of the public sector.

Structural Flaws of the Swedish Model In December 1993 a penetrating analysis was made by Assar Lindbeck of what he called “The Dilapidated Swedish Model” (Newsweek, December 20, 1993, p.17). At the time Lindbeck was Professor of Economics at the Institute for International Economic Studies, Stockholm University, and chairman of the prize committee for the Nobel Prize in Economics. He argued that Sweden had been looked upon as the very model of a modern welfare state and that both the achievements and problems of the welfare state were well illustrated by the Swedish experience. He maintained that this system enhanced the economic security of Swedes and increased the quality and quantity of social services available to them, from education to health, child and old-age care. But he saw the basic problem with welfare states as being “that they tend to disconnect the relationship between effort and reward by creating disincentives to work, savings and entrepreneurship”. The structural flaws he identified were financing, benefit abuse and lower saving. In relation to financing, Lindbeck argued that raising the revenue to finance a welfare state means imposing high marginal tax rates on incomes. These taxes insert “wedges” between costs to firms and the rewards that are obtained by workers. When all such taxes are taken into consideration, these wedges were, in 1993, as high as 50 percent for most Swedes and 70 percent for high-income earners. So pervasive were the disincentives caused by such taxation that the high marginal tax rates not only discouraged individuals from doing extra work themselves, but also discouraged the employment of others. Swedes started doing repair jobs themselves rather than hiring workmen, or they bartered for repair services. High tax rates also weakened the willingness of people to take new jobs or to invest in further education and training. High taxes encouraged tax avoidance and evasion and generally made tax honesty very expensive.
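The compounding of a tax “wedge” can be made concrete with a back-of-the-envelope calculation. The formula and the rates below are illustrative assumptions for this sketch, not figures from the source; they merely show how payroll, income and consumption taxes of roughly Swedish early-1990s magnitude compound into a combined wedge near the 70 percent quoted for high earners.

```python
def marginal_tax_wedge(payroll_rate, income_rate, vat_rate):
    """Illustrative tax wedge: the share of one extra unit of labour
    cost to the firm that never reaches the worker as consumable income."""
    cost_to_firm = 1.0 + payroll_rate          # wage plus employer payroll tax
    net_wage = 1.0 - income_rate               # wage kept after marginal income tax
    consumable = net_wage / (1.0 + vat_rate)   # purchasing power after sales tax
    return 1.0 - consumable / cost_to_firm

# Hypothetical rates: 33% payroll tax, 50% marginal income tax, 25% VAT.
wedge = marginal_tax_wedge(0.33, 0.50, 0.25)
print(round(wedge, 2))  # about 0.70, i.e. a combined wedge near 70 percent
```

The point of the sketch is that no single rate needs to be extreme: three moderate-looking taxes, applied in sequence, leave the worker with barely 30 percent of what the extra hour cost the employer.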
Benefit abuse could be seen in the high level of absenteeism from work because of generous sick-leave benefits. Sweden had among the healthiest citizens in Europe, but in the early 1990s employees skipped work about 25 days a year on average. Adding people who retire early for health reasons and those who take time off to care for sick relatives, absence for health reasons accounted for no fewer than 50 days per annum on average – almost one fifth of the working year. The problem was that people adjust their behaviour to take advantage of the system of welfare benefits. Moreover, political competition tends to bid up benefits over time. When governments take the main responsibility for providing for the economic security of their citizens, individuals lose an important motive for saving. Swedish household savings dwindled in the 70s and 80s to minus 5 percent of disposable household income. Reduced private savings slow down the accumulation of national wealth. Entrepreneurship requires private wealth and hence private saving. In spite of high taxes, Sweden’s public-sector borrowing accounted for a heavy 13 percent of GNP, which shifted the welfare burden to future generations. Lindbeck recommended several measures to improve the situation. Firstly, to cap benefits, e.g. by reducing unemployment pay in order to encourage people to take greater responsibility for their own lives. Secondly, to establish more compulsory social-insurance schemes – e.g. old-age pensions, sickness and work-injury compensation as well as unemployment benefits – outside the government’s annual budget to avoid political tampering, and to relate benefits more closely to the fees paid for them, as in private insurance systems. Thirdly, to allow private agents to offer voucher-based social services such as child-care, education and old-age care to mitigate the government’s inevitable inefficiency. Fourthly, to replace the existing system of welfare benefits with lifetime “drawing rights” which would allow people to “borrow” public funds to pay for services such as education or unemployment, to be repaid once they were actively employed again.

Sweden’s Reform Measures Urgent reform measures were taken when a Carl Bildt-led four-party coalition came into power at the height of the 1991 crisis. Apart from emergency measures in the form of direct aid to the unemployed and guarantees to rescue the banks, various modest changes were introduced in an effort to remodel the welfare state:
- housing subsidies were to be cut;
- old-age pensions would start at a higher age and pension levels would, in the interim, be pegged;
- workers’ sickness benefits were to be cut in such a way that no payment would be made for the first day off and the amount payable for the subsequent 6 days would be reduced;
- health benefits and sickness insurance would be paid for by employers and employees instead of the state;
- the high level of foreign aid would be cut;
- the Krona was to be floated and allowed to depreciate by about 20 percent;
- through budget cuts, unemployment benefits were to be reduced. (See The Economist, 26 September, 1992)
By these measures the Bildt government hoped to address the dilemma it inherited in 1991. But it was not possible to achieve a quick turnaround. By 1993 total government spending, including social-security costs, reached no less than 72 percent of GDP and the budget deficit peaked at 13.8 percent of GDP. But Bildt could claim some notable achievements in his short-lived bid to redress the structural imbalance in the economy between the public and private sectors. A privatisation programme raised a substantial sum to repay public debt, and deregulation measures increased competition in telecommunications, broadcasting and the energy sector. Capital gains taxes were reduced and taxes on dividends scrapped. An element of choice was introduced in the welfare regime by way of public-private partnerships in the delivery of education and health services. Spurred by devaluation and productivity gains, the industrial sector enjoyed a boom in exports.
The modest reform measures also created a more positive business climate, and they rubbed off on the Social Democrats when they returned to power. The economic crisis woke that party from its fairytale dreams. They realised that to sustain the benefits of the welfare state, it had to be pruned. They realised the need to cut spending levels in order to narrow the budget gap, and to reduce unemployment by stimulating growth and by introducing flexibility into the labour market, e.g. by removing the rigidities in hire-and-fire rules. They also agreed to continue with public-private partnerships in education and in health services, and to accept freedom of choice as a basic ingredient of the welfare state’s delivery system. But the underlying structural imbalances remained intact. Up to two-thirds of the electorate depended on the public sector for the bulk of their personal income – either through direct employment or through welfare benefits. This reality remained a formidable constituency-based obstacle to the radical reform required. Once back in power, and faced with the choice between effective cuts in welfare and printing money to finance its preservation, the Social Democrats continued to choose the inflationary route, as is customary for left-wing parties.

The Role of the Protestant Work Ethic The much-vaunted sense of a “people’s community” is known by the term “folkhemmet”. Much of it grew out of the Protestant-Lutheran culture in which care for the weak is the duty of society as a whole. For many generations this sense of community was closely linked to adherence to the welfare state: acceptance of high taxes coupled with generous welfare benefits and a bloated public sector. The duty of hard work within a co-operative framework is deeply ingrained in the “folkhemmet” tradition. To do your duty and to stand together have over generations become part of the Protestant-Lutheran work ethic. Hard work is a virtue and idleness is a vice. Over generations this frame of mind became ingrained in the Nordic culture and remained an important building block of Nordic societies – despite contemporary secularist tendencies. Church-going habits may have faded, but ingrained culture-based values are not diluted over the span of only one generation. But generous, compassionate welfare ultimately depends on two details of demography: enough people of working age, productively employed, to fund the claims of those in need; and enough working people who are willing and able to pay the necessary taxes. Since the early 1990s the Nordic region, particularly its Swedish part, has seemed to buckle under the weight of its bloated welfare system. It remained an open question whether the large influx of immigrants could be sufficiently integrated into the fold of the Protestant-Lutheran work ethic and its concomitant welfare-state generosity. A more ethnically mixed society could make the idea of the “folkhemmet” more difficult to sustain and lessen the general willingness to bear a heavy tax burden. The countries where most of the immigrants originated are not famous for having advanced tax collection cultures.

Post 1997 Trends In all Nordic countries, the underlying electoral majority lies on the side of social-democratic parties. Hence, through shifting coalitions of parties, social democrats remained in power throughout the region. But to the extent that the Nordics succeeded in turning around the downward economic trends, they did so “un-Nordically”: by curbing the state and by systematic surgery on the excesses of welfare-state benefits. Norway owed its success less to deft policies than to the geological bonanza of its off-shore oil and gas rigs. But Finland, Denmark and even Sweden experienced significant economic growth after 1997 – although unemployment remained high at 10 to 15 percent and government spending remained high at around 60 percent of GDP. Social democrats, the architects of the welfare state, are used to thinking of the government as the fixer of problems – not the problem itself. Hence taxes and government spending continue to swallow up a bigger share of national output than in most other rich countries. Nordics have a deep-seated conviction that sharing wealth is as important as acquiring it. These convictions shape the way working conditions are set, benefits are distributed and markets are regulated. From time to time, isolated instances occur where individual governments try to introduce reform measures such as reducing taxes and widening the narrow gap between what their citizens can earn from work and what they can get from handouts. In both Norway and Denmark efforts were made to make unemployment benefits less generous by expecting unemployed youths to join vocational-training programmes or else see their dole cheques reduced. That policy helped to cut unemployment rates among 16- to 25-year-olds to 4.8 percent. Norway has also reduced the duration of unemployment benefits to a still generous three years. Finland has loosened restrictions on working hours and short-term contracts.

All Nordic governments have been compelled to face up to their most pressing challenge – a prospective rise in pension costs that threatens to undo all their efforts to reduce budget deficits. Norway has placed a big chunk of its oil income in a State Petroleum Fund to help pay for the pensions of its ageing population when oil and gas production tails off. Finland, without an oil windfall, raised the retirement age. But certain employment practices that tend to keep unemployment high and threaten budgetary stability have remained intact. Throughout the Nordic region wages and working conditions are largely set by centralised agreements among powerful trade unions, employers and the state. Although such agreements may occasionally restrain wage rises that employers cannot afford, they keep minimum wages high and, in the name of equality, narrow the variation between low- and high-wage jobs and industries. Thus unskilled workers cost too much, and the talented have too little incentive to invest in their education or to switch from laggard industries to booming ones. The Nordic social-democrat parties are loath to offend the unions that provide the core of their electoral support and funding. Hence their job-creation programmes are often used to hide or obfuscate the hard realities of government-financed training schemes for the unemployed. Employment-assistance programmes are often only a cover-up to reduce unemployment statistics – the “dole” by another name. The Swedes know how to massage employment statistics. The trick is to exclude those in government make-work programmes and early retirees from unemployment figures. Workers on long-term sick leave are counted as employed. Sickness benefits account for 16 percent of public spending. Few or no net private-sector jobs have been created since 1950. Youth unemployment is amongst the highest in Europe.
The Economist of September 9th, 2006, estimated Sweden’s real unemployment rate at around 15-17 percent. The job shortage was felt most acutely by Sweden’s fast-growing immigrant population. Despite obstacles to job creation, Sweden’s big companies (like Volvo, Ericsson, SKF and Telia) have thrived. But the regulatory and tax climate is chilly for newer and smaller companies. Only one of Sweden’s 50 biggest companies was founded after 1970. Moreover, Sweden has the lowest rate of self-employment in the OECD. The Economist (September 8th, 2006, p.26) maintained that the “…much vaunted trilateral partnership between government, employers and unions works if the employer is an established large company; for a new or smaller one, it simply adds to costs. High personal taxes and generous welfare benefits – which pay people who lose their jobs as much as 80 percent of previous incomes for years – discourage work. The ‘tax wedge’ (i.e. the non-wage cost of employment) is too thick, especially for low earners”. Although there is no formal minimum wage, Sweden’s powerful unions enforce one in practice. The terms of labour contracts are largely set by unions, which dislike temporary or part-time work. Moreover, by making it expensive to sack anybody, the system also discourages the hiring of new workers. Another big failing of the Swedish system is that too many people work in the public sector, which accounts for 30 percent of total employment – twice the share in Germany. Although public-sector productivity figures are unreliable, a recent assessment quoted by The Economist puts Sweden and Denmark at the bottom of all OECD countries (Ibid, p.27). In Sweden the unions oppose competition and the private provision of services.

Prospects The 24 million people of the Nordic countries are certainly well off. They generate more income than Spain’s 40 million, and their national wealth is shared out more evenly than elsewhere. Norway is the wealthiest and will continue to pump North Sea oil and gas for decades to come. After Norway, Denmark is the richest Nordic country, relying mainly on trade, services and manufacturing and a range of medium-sized companies. Finland has built its economy on hi-tech exports. Sweden has gradually been slipping behind its Nordic neighbours. Despite concerns about the sustainability of their high-welfare, high-tax socio-political culture, Nordic societies seem to be powering ahead with considerable determination. They have delivered consistent growth and low unemployment and are amongst the world’s most competitive economies. Nordic companies are strong in technology and research. Their health and education systems are much admired. Most Nordic states ran healthy budget and current account surpluses in recent years. Given Sweden’s high tax rates and poor employment record, the question arises: why do so many voters stick with the Social Democrats? One answer is that so many people are dependent on the state – some 30 percent work for it, and a bit over 30 percent receive transfer payments. Another answer is that all Swedes are to some extent Social Democrats: an attack on social democracy is interpreted as an attack on Sweden itself. The linkage between the welfare state and the sense of community cannot be ignored. As a welfare state presupposes a strong sense of community based on a high degree of cultural identity, support for the welfare state may diminish in today’s multi-ethnic Sweden. Multi-ethnic societies appear to be less supportive of high tax rates. In addition, there is a real danger that tax competition between countries might erode the revenue base of the Nordic region. People as well as capital are becoming increasingly mobile in today’s world. Sales tax levels of 25 percent are driving Nordic shoppers to seek bargains in other countries (e.g. Baltic neighbours). The momentum of the fully-blown welfare state in Sweden appeals to those who have full-time jobs in the public sector or make a living on welfare handouts. It is an open question whether the poorly integrated immigrants and the unemployed youth are equally attracted to the system. The fast-growing immigrant population is affected by unemployment, as are young Swedes. The new immigrants are largely concentrated in areas such as Malmo, where as much as 95 percent of local inhabitants do not have a job.
About one million Swedes were born outside the country and a large proportion do not speak Swedish. Many find the Swedes reluctant to accept the newcomers as true Swedes. Every Nordic country is predicted to face a top-heavy population structure in future. By 2050 pensioners are estimated to equal 45 percent of the working-age population in Norway, 49 percent in Finland and 54 percent in Sweden. Across the region the number of workers per pensioner is set to halve by mid-century. Privately funded pensions are being introduced in all Nordic countries and efforts are being made to raise the retirement age. On current trends, by 2050 public pensions are expected to swallow up 7 percent of GDP in Denmark, 10 percent in Sweden and 18 percent in Norway. Denmark has prospered since 2003 with a centre-right party in power. Unemployment has been brought down to 4-5 percent – its lowest in over 30 years. In 2006 inflation was below the euro-area average and growth was faster. In 2005 the budget surplus stood at 4 percent of GDP. Denmark’s excellent employment performance has attracted much attention in recent years. During the period 2003 to 2006 the Danes shaved the public payroll by almost 1 percent while boosting private-sector employment by 3.7 percent. Much credit is given to “flexicurity”, a particularly Danish blend of a flexible labour market and generous social security. Hiring and firing are deregulated to give Danish companies a competitive edge, but where workers lose their jobs, they are given generous unemployment benefits. The system is based on a tradition of employer-union dialogue. Despite these similarities, the Nordic countries differ from each other in certain important areas. Sweden is in the EU but not the euro, while Finland is in both and Norway in neither. Different parts of the Nordic community have different strengths.
Finland’s education standards are much admired, as are Denmark’s labour market and Iceland’s entrepreneurship – until the devastation of the Global Financial Crisis. Sweden’s management of big companies is considered a comparative advantage, and so is Norway’s oil and gas industry. Some analysts point out that the Nordic countries have done well economically during periods when they followed more liberal Anglo-American policies by lowering tax levels and reducing the dead weight of the public sector. They have gone off the rails when higher taxes and more regulation were imposed.

Case Study 2: Germany – From Warfare to Welfare

Although the history of Germanic tribes can be traced back to the Middle Ages, the emergence of the European concept of a “nation-state” goes back to 1648. But at that time the area that is today regarded as Germany was populated by a range of Germanic communities fragmented into numerous small despotically governed principalities – barely emerging from the age of feudalism. These principalities were bound together by a loose-jointed sense of cultural solidarity within an ill-defined geographical territory. German culture extended into contiguous areas in France, the Alps, Poland and the Baltic states. But the political boundaries were not clearly drawn. The principalities’ solidarity lay only in their German language and German Kultur. Even in 1803, when Napoleon imposed a measure of political consolidation on the Germans, miniature states remained the predominant pattern. Rivalry amongst these units, and perennial invasions from outside, remained a permanent fact of life in the Germanic territories. The fractured existence of the Germans was further aggravated by a lack of religious solidarity. Western and south-eastern parts were mainly Catholic, while central, northern and eastern areas became strongholds of Lutheranism. Such differences stood in the way of a national consensus. After decades of haggling, Bismarck, the Chancellor of the North German Federation (Prussia), declared Wilhelm I to be the Emperor of a united Germany in 1871. Austrian Germans were excluded from the Prussian-led Reich. But it was an imposed federation of authoritarian principalities. It was not brought together by representatives of the people. Consequently, nationhood and liberal democracy parted ways from the start. 
The German nation in the form of the Bismarckian Reich began its life with several basic handicaps: it was not based on a free political consensus with its roots in popular sovereignty; nationhood was built by the strong hand of a political leader and not by freely consenting citizens; the political elite holding the levers of political power were not elected civilian leaders, but military-minded aristocrats. The rest is history. German participation in World War I resulted from an accidental turn of events in Sarajevo, while Germany was controlled by an inexperienced and arrogant grandson of Queen Victoria – Kaiser Wilhelm II. The Weimar Republic was not the result of a popular political will, but an artificial consequence of military and political collapse. World War II was the tragic outcome of a diabolically manipulative band of evil men who succeeded in getting their hands on the levers of control over a state machine with hitherto incomparable destructive power. It took the lives of tens of millions. Hitler’s Nazis created a “warfare” state which preserved private property but controlled it and subjected it to their diabolical purposes. During World War II the German economic and physical infrastructure was virtually wiped out. Brought to its knees, the country was divided along the lines drawn by the occupation forces. The Soviet Union held the Eastern part and the Western Allies held the Western part of the collapsed Third Reich.

Emergence of the Bundesrepublik Deutschland

The Federal Republic of Germany (West Germany) emerged after the war with the lucky option to align itself with Western democracies and a market economy. East Germany ended up as the “German Democratic Republic” under the auspices of the Stalinist Soviet Union, behind the Iron Curtain. West and East Germany became separate members of the United Nations in 1973. Eventually, in 1990, Helmut Kohl managed to secure unification of East and West Germany, without abandoning the Western orientation of the Federal Republic of Germany, with Berlin as its capital. In 2009 the German population stood at 82.3 million and enjoyed a high standard of living as a trendsetting member of the European Union after the Deutsche Mark was converted to the euro, the common currency of the European Union. After the war, Germany was a devastated, hungry country. The Social Democratic Party led by Schumacher propagated a socialist future based on nationalisation and central planning, much in line with the policies of the British Labour Party. Even the centre-right Christian Democrats

believed that the capitalist economic system had failed and supported public ownership and central planning. Soviet expansionism fuelled a confrontation between East and West, leading in turn to rising suspicions against the Left. The Marshall Plan aid provided by the USA gradually assisted the revival of European economies. Initially, during the period 1946 to 1948, the German population went through terrible hardship: food shortages, unemployment and poverty were the order of the day. The bulk of the population had to rely on American food aid, which often meant corn, not wheat – or what was called “Hühnerfutter” (chicken feed). In 1948, General Clay, the head of the US Military Occupation, appointed an economist, Ludwig Erhard, as head of “Bizonia”, as the combined American and British occupation zones were called. Erhard was a member of the Freiburg School of “Ordoliberals” who believed in free markets and opposed the monopolies and cartels that had flourished in the Nazi Third Reich. They believed that competition was the best way to prevent private or public concentration of power and the best guarantee of political liberty, as well as a superior economic mechanism. They were committed to a free market system coupled with a supporting social safety net. They called it the “soziale Marktwirtschaft” – the social market economy – which came to describe the German economic model in the post-war years. Germany’s economic recovery took off with the arrival of the D-mark in 1948. In the miracle years of 1950-73, GDP grew by an annual average of 6 percent. With the bounty, politicians could extend the benefits of the welfare state. Konrad Adenauer served as Chancellor for the next fourteen years – ably assisted by his economics minister, Ludwig Erhard. The result was the “Wirtschaftswunder” – the German economic miracle. (See Yergin and Stanislaw, op.cit., pp.33-38)

The Wirtschaftswunder

The “social market economy” that emerged in Germany looked in many ways like a “mixed economy”. In 1969, the federal government owned 25 percent or more of the shares of some 650 companies. At the Länder (state) and local levels public ownership was broad in scope, including transportation systems, telephone, telegraph and postal services, radio and television networks, and utilities. Partial public ownership also extended to coal, iron, steel, shipbuilding and other manufacturing activities. But in Germany the state did not take control (as in France); it created a network of organisations to enable the market to function more effectively. The economy operated under the tripartite management of government, business and labour (“Mitbestimmung”). This corporatist system was embodied in the supervisory boards – Betriebsräte – consisting of representatives of all three sectors. It propelled Germany to the centre of the European economic order within a decade and firmly established it as the locomotive of European economic growth. In the post-World War II years, Germany provided job opportunities to millions of migrants (guest workers) from Turkey, Italy and Spain.

The Mittelstand

With capital provided by the Marshall Plan, Mittelstand companies sprang up and have been the backbone of the German economy since the start of post-war reconstruction. They were the first enterprises to start producing goods and services for the local market and to start employing the vast reservoir of jobless people on a decentralised basis. Today they still employ 70 percent of the workforce, account for 46 percent of investment and create 70 percent of all new jobs.

Export-Led Growth

German export companies compete at the top end of the market by devising excellent technical solutions, supplying reliable goods and building long-lasting relationships with their customers. These German manufacturers are famous for their obsession with detail and strong emphasis on safety and durability. Their products are expensive but of top quality. Their brand

names are well known around the world: Mercedes Benz, BMW, Volkswagen, BASF, Bayer, Bechstein, Siemens, SAP. Most are dedicated to improving their products rather than switching to fast-moving, mass-consumer markets ruled by the simplification of products and pushy marketing. Despite price competition from other parts of the world, e.g. Japan and China, top-class German manufacturers seem to be holding their own – at least up to 2005. (See table)

Exports by Country (Monthly Average, $ billion)

                 2003   2005
Germany            65     80
United States      60     75
China              40     65
Japan              40     50
France             30     40
Netherlands        25     38
Britain            22     30
Italy              22     30

(Source: OECD & IMF)

Between 2004 and 2007 net exports accounted for 60 percent of growth. By 2008 Germany’s current account surplus reached 6 percent of GDP. Germany’s industrial exports rely heavily on the automotive industry (3.7 percent of GDP), chemicals (2.2 percent), machinery (3.3 percent) and electrical engineering (3.2 percent). Thus Germany’s economic future remained heavily dependent on the health of the auto industry and the related electronics and computer manufacturers. But these industries also remained vulnerable to sudden declines in export markets. The best safeguard appeared to be to actively promote service industries as well as shifting the focus to the sizeable domestic market. Willingness to acknowledge the failure of a specific business model is a prerequisite for the development of a more successful alternative. The contraction of Germany’s export market as a result of the 2008 Global Financial Crisis brought painful effects in its wake. In 2008 vehicles, machinery and chemicals accounted for close to 66 percent of Germany’s exports – a narrow base for the world’s fourth largest economy. Vehicles, in particular, are a highly vulnerable export item – despite the high quality of German engineering. Several analysts maintain that Germany has a competitive edge in climate and environmental technology, but expect that the growth industries will be pharmaceuticals, measurement technology and business services. The fiscal-stimulus packages announced by the German federal government in early 2009 were expected to push the budget from balance in 2008 to a deficit of nearly 6 percent of GDP in 2010.

Labour Law Rigidities

The most serious structural problems are all associated with an over-burdened welfare state. These are manifested in various ways: unemployment and sickness benefits, maternity grants, pension benefits. These benefits are sometimes referred to as “non-wage labour costs”, reaching levels up to 42 percent of gross wages. They are embedded in legislation and contracts covering job protection, collective bargaining, closed-shop practices and welfare benefits. In contention are the levels of unemployment benefits (e.g. 60 percent, 70 percent or 80 percent of previous net wages); the duration of unemployment benefits (6 months, 12 months, 36 months or indefinitely); the allocation of the cost of health-insurance contributions (shared equally by employers and employees or some other formula); the scope of health services (e.g. dentistry, sporting accidents, number of family members, nature of family relationships to qualify); the level, duration and scope of long-term pension benefits (pay-as-you-go state pensions or insurance-based, defined-benefit or contributory systems); etc.

Germany’s federal employment agency, the “Bundesagentur für Arbeit” (BA), is headquartered in Nuremberg. In 2006 its total staff numbered 90,000 employees. But it looks after existing employees, not the long-term unemployed. Wages are mostly set by “national peak-level bargaining”. Pay rises in one industry or area are applied, under union pressure, by employers in other areas or sectors. The country finances its welfare state benefits largely through a system of “payroll taxes” based on matching contributions by individuals and employers. Unemployment and early retirement benefits pushed up these contributions, and inevitably the cost of labour increased as well. Contributions rose in 2006 to 40 percent of gross income, compared with 27.6 percent in 1970. Unification in 1990 aggravated this vicious circle of rising labour costs. Much of the cost of integrating former East Germany was piled onto social-security systems because it was politically easier than raising taxes. In this way de facto minimum wages are set by welfare benefits. Along with strong protection against dismissal, a circular path is created to protect all insiders. In recent years it was found that a lot of money was spent on job promotion programmes called “active labour policies”, consisting of training schemes and job creation programmes, but most of it was wasted. The unemployed were not assisted, but the training firms operated by unions and employers’ associations raked in the money. Job-placement figures were found to have been faked. In the absence of reliable data, there was no way to evaluate whether the billions spent on retraining had any lasting effect. Input was measured, not output. The world’s biggest exporter is not creating jobs for the unskilled or lower skilled. Hence Germany’s unemployed include a high proportion of long-term job seekers. 
Almost 30 percent of job seekers are reported to be people with no qualifications and around 25 percent are 50 years old or more. Government job creation efforts seem to transfer the unemployed only to subsidised jobs, which provide no long-term solution to unemployment. It is likely that Germany’s unskilled labour pool would be more competitive if their wage levels were set by the market instead of by highly regulated minimum wages. It is of crucial importance to realise that work and pay conditions should be tailored to the varying needs of different areas and industries – not bureaucratically ordained by the centralised vested interests of the trade unions. Their one-size-fits-all approach tends to price people out of employment. Ideology needs to be subordinated to reality.

Mitbestimmung

The system of “Mitbestimmung” that was introduced during Germany’s post-1945 reconstruction created harmony between management and employees in its heyday and kept Germany strike-free. In later years it vested too much power in the Betriebsräte. Any company with 5 employees or more can be landed with a Betriebsrat if the workers decide they want one. This leads to slow and cumbersome decision-making, time-consuming meetings, and resistance to change, adjustment and reform.

Obstacles to German Economic Adjustment

- European economic and monetary union in January 1999 has robbed Germany of the means to adjust interest rates and exchange rates to optimise its own economic well-being.
- Reunification with East Germany has proved to be much more expensive than anticipated, costing €1.25 trillion since 1990 and still consuming 4 percent of GDP in transfers, with the added burden of 20 percent unemployment in the region.
- The federal structure as entrenched in the German Basic Law imposes serious restrictions on the federal government’s ability to impose radical but necessary reform measures.
- The country’s cumbersome and expensive labour laws are a crippling anachronism in a globalised, service-driven and high-tech world economy.
- The consensus model of interaction tends to stifle essential adjustment to the demands of a rapidly changing world.


Case Study 3: The French Dirigiste Model

The word “France” conjures up many images: Voltaire and Rousseau, the French Revolution, Napoleon, the splendours of Paris, massive street demonstrations, Versailles, the rural bourgeoisie, the French Riviera, cultural pride, le petit commerce, fast trains and fashionable women, bureaucracy, modern art, and a deep-rooted wine culture. France is represented by all of these and much more. Today, the French also like to distinguish themselves from what they call “Anglo-Saxon laissez-faire capitalism” by their own “dirigiste model”. The central feature of this model is the interventionist state: particularly its role as provider, cushioning citizens, redistributing wealth and propping up demand. Besides its role as provider, the state also functions as prominent planner and regulator. Apart from Britain, France has had a longer continuous development as a state than any other European country of similar population, resources and world-involvement. No other country so epitomises the civilisation of continental Europe. It has excelled in many fields of human endeavour: “France, mère des arts, des armes et des lois”.

Early History

The history of France is deeply rooted in European history. It can be traced back to ancient times when Caesar was Proconsul of Gaul. It covers the early Middle Ages when Charlemagne extended the Kingdom of the Franks from the Atlantic to the Danube and from the Netherlands to Provence, encompassing Saxony, Bavaria, Brittany and Iberia. Charlemagne built palaces at Nijmegen, Engelheim and Aachen and ruled his Empire through a network of around 300 comitates (counties), each headed by an imperial lieutenant. Law and order was established in the name of the King. It was in the court of Charles the Great that the ancient name of Europe was revived. He was a great patron of learning, and he separated church and state. After his death in 814 AD, his kingdom disintegrated as a result of the infighting of the Carolingians. The Kingdom of France was revived towards the end of the 12th century under Philippe-Auguste. He consolidated its territory and established a national army and a centralised administration in Paris. A long list of monarchs, from Louis IX (1226-70) to Louis XIV, whose rule began in 1661, saw the consolidation and growth of France as a nation-state. The century-and-a-half from 1661 to the fall of Napoleon in 1815 was a period of French supremacy in Europe. It was based in part on France’s large territory and population and the systematic nurture of its economic and military resources – and in part on the disarray amongst its major rivals: Germany, Italy and Spain. Centralised and controlled by its succession of absolute monarchs, France served as a model of Western civilisation – La Grande Nation. But French kings did not permit their sovereignty to be gradually transferred to a parliament: they refused to make the necessary concessions. The result was that those who defended the old order and those who attacked it were driven to extremes, leaving the French political consensus deeply fractured. 
Modern France’s political problems stemmed in large measure from the greatest event in the country’s history – the French Revolution of 1789. In place of the old myths of “One King, One God, One Law” came a multitude of new slogans: “Liberty, Equality, Fraternity”, “Social Justice”, “Property is Theft” and “Workers of the World Unite”. Since the Revolution, Frenchmen have alternated between too much and too little authority: the country has lived through two empires and five republics.

The Fifth Republic

During World War II, France was rapidly overrun by the German army. The Vichy government and millions of French men and women collaborated with the Nazis. Many others joined the “Resistance” movement, and when the Allied Powers liberated France in 1945, preparations started (with Allied assistance) to establish a new republic. The constitution of the Fifth Republic came into effect in 1958. With former General Charles de Gaulle as President, France regained its status as a leading world power and a permanent member of the UN Security Council. In 1981, Francois Mitterrand was elected the first Socialist

president, and he stayed in power for two seven-year terms. In May 1995 Jacques Chirac was elected President, and he was later re-elected for a second term, during which time France remained a recalcitrant member of the Western Alliance. In April 2007, Nicolas Sarkozy was elected President. Today France’s population is around 61 million, of which 74 percent live in urban areas. Its ethnic population is mostly Celtic and Latin with Teutonic, Slavic, North African, Indochinese and Basque minorities. About 25 percent of its population lives in the Paris region, with additional concentrations in Lyon, Marseille, Toulouse, Nice and Strasbourg. Around 83 percent of the population identify themselves as Catholic, 2 percent as Protestant, 10 percent as Muslim, 1 percent as Jewish and 4 percent as unaffiliated. Some of its former colonies in the Caribbean still participate in its government as overseas “departments” (French Guiana, Guadeloupe, Martinique and Réunion) in the same way as the departments of metropolitan France. There are also a few overseas territories in the Southern Oceans (Mayotte, St. Pierre, Miquelon, French Polynesia, Futuna and St. Martin). The French legal system is still based on the principles of Roman Law as codified under Napoleon. In criminal cases, the judicial process is inquisitorial rather than adversarial. The public sector has its own law and court structure, headed by the Conseil d’Etat. It serves as a final court of appeal in the administrative system and as a consultative body to the government in public and constitutional law. France imports most of its hydrocarbons, but produces 78 percent of its electricity from nuclear plants. It is largely self-sufficient in food production. Its industries account for 24 percent of the workforce and 21 percent of GDP. Around 72 percent of its workforce is employed in the service sector (largely the public sector). France receives an estimated 76 million tourists per annum, spending around US$43 billion.

The French Economy

Since the 1970s, the French economy, in terms of GDP per capita, has dropped from 7th place in the world to around 17th place in 2006. This slippage raised questions about the French dirigiste economic model. France relies heavily on a strong centralised state in the pre-revolutionary tradition established by Jean-Baptiste Colbert, Louis XIV’s finance minister. The French dirigiste model served the country reasonably well. It speeded up reconstruction after the Second World War. It delivered the trente glorieuses, or 30 years of post-war prosperity. It laid the foundation for the rapid transformation of the economy into an industrial powerhouse. It helped other forms of modernisation: the high-speed TGV trains and the early decision to invest in nuclear energy, which accounts for 78 percent of electricity production. A planned society relies crucially on an intelligent and efficient state. Over the years the French version has become rather cumbersome: too many bureaucrats, supported by too many taxes, imposing too many rules in too many overlapping organisations. The French public sector is inefficient, and public spending accounts for 54 percent of GDP, compared to the OECD average of 41 percent. One in four French workers is employed by the public sector. Public debt amounted to 66 percent of GDP in 2006. Over the past ten years, prior to the stimulus packages to counter the Global Financial Crisis, public debt grew faster in France than in any other EU country. In the French hierarchical system people too often expect solutions to be provided from the top. The mal français, essentially a bureaucratic mentality, chokes creativity and innovation and entrenches resistance to change. The heavy and inert state machinery blocks the evolution of society. The French political culture is not attuned to penetrating self-analysis. Politicians prefer to deflect blame to other sources: Europe, America or globalisation. 
Consequently there is no consensus for reform, which would require courageous political leadership. The French are inclined to believe that economic efficiency and social justice are incompatible. Yet many other social democracies have revived their flagging economies without destroying their welfare systems or way of life.

In 2006 France had 3.7 million people living in poverty, 2.5 million living on the minimum wage and 2.5 million persons unemployed. The IMF has found that more competition in markets for both goods and services, combined with labour-market reform, could boost the French GDP in the long run by more than 10 percent.

Structural Unemployment

France’s structural unemployment is evidenced by the persistence of high unemployment even in times of economic expansion. The Socialist government shortened the standard working week from 39 to 35 hours without loss of pay. In the service sector this caused headaches. In order to spread the shorter hours over a year, many office workers got about three weeks of extra holiday in addition to the normal five weeks of paid vacation. National collective bargaining agreements, which apply to all employees in an industry whether unionised or not, entrench union power. All the protective measures deter employers from creating permanent jobs. They resort to temporary staff, interns or short-term contracts instead. This practice produced a two-tier labour market: good protected jobs for some, and insecure jobs or unemployment for the rest. The side-effects are felt not just by the outsiders – the young, the poorly skilled, the long-term jobless – but by the economy as a whole. Whereas France’s big companies generally get by, its small companies struggle. The government tried to lighten the burden on small employers by reducing social-security contributions on minimum-wage jobs, as well as introducing new job contracts for small firms that make it easy to sack people within the first two years.

Overweight Public Sector

France’s overweight public sector is a formidable problem. For 25 years, every time a new problem has emerged, France has responded by increasing public spending. Over that period no government has presented a balanced budget. Public debt has grown to €1.1 trillion ($1.4 trillion), or 66 percent of GDP – five times its level in 1980. This figure does not include France’s off-balance-sheet civil-service pension liabilities. If current trends continue, public debt is forecast to reach 100 percent of GDP by 2014. To support this spending level the state subjects its citizens to heavy taxes and charges. Income tax rates are relatively modest for families, but other imposts are heavy. An employer who pays a worker twice the minimum wage, or €2,400 a month, has to shell out nearly half as much again to the state in social-security contributions. The employee, for his part, has to hand over 22 percent of his pay in social-security contributions, on top of income tax. A French pay slip typically runs over 40 itemised lines. Deductions from the employee’s gross pay include 6.65 percent to the pension fund; 2.4 percent to the unemployment fund; and 5.1 percent to the social-security fund. The employer gets stung for contributions of 1.25 percent to the work-accident fund; 0.4 percent to the work medicine fund; and 2.6 percent to the transport fund. Michel Pébereau, chairman of BNP Paribas Bank, reported in an investigation commissioned by the government in 2006 that most government borrowing pays for current operations (not long-term investments, not research and development, not higher education and not infrastructure). The result is an unsustainable administrative burden. Over the past 20 years the state has hired nearly 1 million extra civil servants, bringing France’s total to 5 million. In the Ministry of Agriculture, despite a shrinking number of farmers, the number of officials has grown by 8 percent. 
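The pay-slip arithmetic above can be checked with a short sketch. Note that the itemised percentages are only the line items named in the text (a real pay slip runs over 40 lines), and the 45 percent employer figure is an assumed approximation of "nearly half as much again":

```python
# Worked example: social-security charges on a gross monthly salary of
# EUR 2,400 (twice the minimum wage cited in the text).
# Only the line items named in the text are included; the remaining
# pay-slip lines are not itemised in the source.

GROSS = 2400.0

employee_lines = {            # deducted from the employee's gross pay
    "pension fund": 0.0665,
    "unemployment fund": 0.024,
    "social-security fund": 0.051,
}
employer_lines = {            # paid by the employer on top of the wage
    "work-accident fund": 0.0125,
    "work medicine fund": 0.004,
    "transport fund": 0.026,
}

employee_deduction = sum(rate * GROSS for rate in employee_lines.values())
employer_charge = sum(rate * GROSS for rate in employer_lines.values())

print(f"Listed employee deductions: EUR {employee_deduction:.2f}")  # EUR 339.60
print(f"Listed employer charges:    EUR {employer_charge:.2f}")     # EUR 102.00

# The text puts the employee's total at 22 percent of gross and the
# employer's total at roughly 45 percent ("nearly half as much again").
print(f"Employee total at 22%:      EUR {0.22 * GROSS:.2f}")  # EUR 528.00
print(f"Employer total at ~45%:     EUR {0.45 * GROSS:.2f}")  # EUR 1080.00
```

The gap between the itemised lines and the stated totals illustrates how many further deductions a 40-line French pay slip carries.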
The country has one post office for every 3,530 inhabitants – twice as many as Germany – yet fewer letters arrive the next day. The Bank of France has 14,000 employees compared to 1,836 at the Bank of England. The problem with the French bureaucracy is too many layers of administration interfering too much in everybody’s lives. The state sets the dates that shops can hold sales; forbids hypermarkets from selling below cost; limits the number of Paris taxis; and prevents pharmacists from owning more than one pharmacy. Lack of competition hampers growth. The one market participant losing out is the consumer. In 1856 Alexis de Tocqueville described the administration of his own country as a “… regulating, restrictive administration which seeks to anticipate everything, take charge of

everything, always knowing better than those it administers what is in their own interests”. Nothing much seems to have changed.

Impact of the Global Recession

Although the French economy has been battered by the global recession like many others, it has been less hard hit. The reason is that France is less dependent on exports than Germany, and consumer spending was up on the same period the previous year. The French are avid savers and most have not taken out unaffordable mortgages or spent heavily on credit. Household debt was less than half that in Britain or America. French banks survived without government intervention. At the onset of the global downturn Le Monde wrote: “In the crisis, the French model, formerly knocked, finds favour once more”. It advised the Americans to save more, consume less, make real things rather than short-term paper profits, redistribute more wealth, provide better health care for all and build more high-speed trains. A growing number of Europeans talked about the “French lessons on the state’s role” and the benefits of long-term strategic planning in such areas as energy and transport.

Balancing Market and State?

Have the French got the balance right, or are they sacrificing the dynamic growth needed to sustain the model in the long run? It is clear that the strong egalitarian ethos of the French and the cushioning effect of the social system have enabled them to keep their heads above water during the downturn. The public sector and the welfare system have propped up the economy even in bad times. Many Frenchmen live in rent-subsidised housing and receive all manner of direct help, including vouchers for children’s holidays and after-school activities, special schemes for the “working poor” and a wide range of benefits under the national welfare safety nets. Across France 5.2 million workers, or 21 percent of those with jobs, are employed by the public sector. If those whose incomes or jobs are not exposed to the economic cycle are included, 49 percent of those either in work or retired are only moderately vulnerable to recessionary pressures. If the layers of social protection are added – including unemployment benefits that can reach up to 75 percent of previous salary and a range of direct payments for families (e.g. for newborn babies) – the French are well sheltered from market downturns. France’s health system, a mix of public and private provision, manages both to guarantee universal coverage and to produce a relatively healthy population for half the cost per person of America’s system, with shorter waiting lists than Britain’s somewhat cheaper version. The French have a slightly higher life expectancy than both the British and the Americans. Through means-testing, the state covers those without the top-up private insurance needed to complement the public scheme. All of these “automatic stabilisers” help to support demand and should be counted as part of the fiscal stimulus package. The French model provided shock absorbers that were already in place when the downturn set in. The French had no need to reinvent their employment, health and welfare systems. 
Although the official "Planning Commission" has been abolished, long-term strategic planning of public infrastructure is still carried out. Paris boasts five cross-city underground lines and recently embarked on a ten-year project to build an automated metro loop around the outskirts of Paris, linking the main airports with France's fast-train TGV network. It provides a viable alternative to air and road travel. France is a net electricity exporter, thanks to its nuclear-generated power grid. France's 82 public universities leave much to be desired, but it does have a world-class layer of engineering, business and public administration schools known as grandes écoles which produce a technically skilled elite to fill strategic positions in the public and private sectors and keep the supply side going. Although the impulse to regulate is often too strong, France's regulatory urge served it well in the financial crisis of 2008/09. France's big banks have lost a lot of money, but they have performed better than their British and American peers. One reason could be tighter regulation. Mortgage debt represented only 35 percent of GDP, compared to Germany's 48 percent, Britain's 86 percent, Ireland's 75 percent and Spain's 62 percent. Apart from stricter rules, the tradition of cautious borrowing also played a role. France applies stricter rules on bank capitalisation than international standards dictate. The regulator recommends that banks should not make loans on which interest payments represent more than a third of the borrower's income. Banks are under a legal obligation not to push borrowers into more debt than they can manage.

The Downside of Dirigisme

If the French model has broadly sheltered its people from credit-fuelled excess, kept demand buoyant and inequalities manageable, does it mean the model works well? According to The Economist's briefing on the French model, the answer lies in a generally disappointing macroeconomic performance with low growth and high unemployment. It can be explained by the flipside of each of the three roles the French model assigns to the state. First, as provider, the government taxes employers and employees with such heavy social-security levies to pay for all the health and welfare benefits that it ends up deterring firms from creating jobs in the first place. Many firms make use of rotating interns and temps. France's jobless rate never falls below 8 percent, even in good times. The upshot is a split employment market: a protected market of decent permanent jobs on the one hand, and a large unprotected market of short-term work, or none at all, on the other. Joblessness for the under-25s is 21 percent, and in the Muslim banlieues the rate is double that. The state as planner has its flaws too. Generous new "research tax credits" have been devised to encourage innovation in the carbon-light and high-tech sectors. In addition, a new "auto-entrepreneur" system to encourage self-employment has lured a massive 145,000 people since it was launched in January 2009. But small firms have difficulty growing and the practice of picking winners has had a mixed record. The jury is still out on the latest planned interventions. The egalitarian impulse does not guarantee quality, as the state of the universities illustrates. Its second-rate, tuition-free universities produce many recruits for the public service, an oversupply of philosophy and sociology students and high drop-out rates. The world-class grandes écoles cater only to a tiny elite. As for the state as regulator, it may have protected the French economy from extreme volatility, but it has also curbed the expansion of the upside.
A more stable economy in the recession also means a less dynamic, less innovative economy in good times. (See The Economist's briefing "The French Model", May 9th, 2009, pp.24-26.) By the middle of 2010, it was clear that the impact of the Global Financial Crisis had also caught up with the French. France's budget deficit crept up to 8 percent of GDP, closer to Greece's 9.3 percent than to Germany's 5 percent. It seemed inevitable that the profligate French politicians were about to learn the meaning of austerity.

Conclusions

The post-war period was decidedly successful for the social democracies of the West. Sustained boom periods raised Western countries to unparalleled affluence. Historically rooted class antagonisms were softened as welfare state benefits reached down to the masses. The issue of the position of the working class, which had troubled the late nineteenth and early twentieth centuries, has largely been resolved and the bulk of the "proletariat" has become financially comfortable. Political power oscillated between left-of-centre and right-of-centre political parties. Fortunately, in most Western countries, regimes with socialist leanings remained within the context of democracy and moderation. They have largely abandoned abstract ideology in favour of pragmatic and eclectic programmes. They accepted the need for a predominant role for private enterprise, differential incentive rewards for persons and only a light-handed regulation of the market system. They have modified and largely relinquished their earlier demands for central planning and control where these demands require structures that could endanger human freedom and creativity. They have, in short, maintained a pluralistic society with all that it implies in terms of personal freedom, individuality, the right to dissent, and the right to carry on private entrepreneurship. In many respects, the economies in social democratic systems are today essentially "mixed" economies. In all social democracies taxation rather than nationalisation was used as the prime instrument in the process of social levelling. Social democrats, metaphorically speaking, did not seem to mind who owns the cow, so long as the government gets most of the milk! They managed to do this by steep progressive taxes on income, capital gains, profits and inheritance. But they have also been steadily eroding the prerogatives of ownership and the management of business enterprise. Increasing fears have taken hold that social democracy's flirtation with near-confiscatory tax policies, by reducing rewards, has begun to discourage incentive and innovation. As a result it has jeopardised the proper functioning of the essential source of all productive effort and economic growth: business entrepreneurship. The basic "welfare state" objective – that the organised society should take responsibility for a minimum standard of social and economic security for every person – is today widely supported amongst most political parties and forms part and parcel of the responsibilities of most social democracies in the West. However, developments during recent decades brought in their wake the urgent need for fundamental changes and adjustments in the agenda of the modern welfare state: reducing welfare spending and downsizing the public sector.
The survival and acceptability of a sustainable welfare system depend on a sound balance of responsible action on both sides of the system: the consumers of welfare (welfare recipients) and the contributors of the financing resources (largely taxpayers). The consumers of welfare services (education, health or social security benefits) must be made to realise that these benefits are privileges, not rights, and that close surveillance must be kept on the efficiency, costs and financial burdens involved. The contributors (taxpayers) who have to carry the financial burden have to be persuaded that it is in the common interest to carry on taking care of the basic needs of the sick, the elderly, the unemployed and the system of education within reasonable bounds. But incursions into personal liberty in the name of the "common good" cannot remain open-ended. The average economic growth rate of every major industrial country in the Western world has been on the decline for several decades. Serious questions must be asked about the causes of this economic stagnation. Is the problem "cyclical" or "structural" in nature? Can this trend be reversed? Is the welfare state a state that cannot stop growing? Will the countries of the West succeed in significantly rolling back the state and carrying out deep structural reform, particularly of labour laws and over-sized public sectors? Today it is recognised in all social democracies that the most important interests in most people's lives are economic and that the most substantial non-government power is privately owned enterprise. If the state dominates the economy, it is unlikely that enough non-governmental power can be mustered to push back the state. History shows no example of true democracies with state-managed economies. Democratic government and civil liberties have down the ages been closely correlated with economic freedom.
It is unclear how far countries are democratic because they have strong independent economic interests and how far they have strong non-state economic interests because they are democratic. The answer probably lies in the reciprocity of the relationship: democracy thrives on economic prosperity and vice versa. This is an important lesson of the modern social democratic market economy. It also illustrates that irresponsible concentrations of power are dangerous and wrong. Monopolised industry or trade unions that are not counterbalanced or restrained undermine the public interest. Many "socialists" in the social democratic movement find themselves facing a bewildering and uncertain future. The total collapse of the communist countries, as well as the economic stagnation of Western welfare states, required a fundamental rethinking of the interaction between government, the community, the individual and private enterprise. The onset of the Global Financial Crisis that started on the USA's "Wall Street" illustrates that the economic life of a country is, after all, determined by the mental and behavioural patterns of humans. Social psychology underpins the cultural mindset that regulates the give-and-take interaction between responsibility and gratification: between contributions and claims, between efforts and results, between obligations and demands, and between progress and decline. The road forward for Europeans requires drive and effort to engage in creative and productive work, pushing through reforms to enhance competition, holding down labour costs, reducing welfare dependency, downsizing the public sector and boosting economic growth.

References

Andersen, B.R. (1983/1984) Rationality and Irrationality of the Nordic Welfare State, Daedalus
Bell, D. (1976) The Cultural Contradictions of Capitalism, London: Heinemann Educational Books
Burger, W. (1993) Europe's nemesis: high labour costs, Newsweek, March 22nd, pp.16-17
Ebenstein, W., et al (1980) Today's Isms, Englewood Cliffs, N.J.: Prentice Hall
Erikson, R., et al (1987) The Scandinavian Model, London: M.E. Sharpe
Galbraith, J.K. (1992) The Culture of Contentment, London: Sinclair-Stevenson
Grossman, G. (1974) Economic Systems, Englewood Cliffs, N.J.: Prentice Hall
Hadenius, A. (1986) A Crisis of the Welfare State, Stockholm: Almqvist & Wiksell
Hallowell, J.H. (1950) Main Currents in Modern Political Thought, New York: Holt, Rinehart & Winston
Hayek, F.A. (1971) The Road to Serfdom, London: Routledge & Kegan Paul
Heckscher, G. (1984) The Welfare State and Beyond, Minneapolis: University of Minnesota Press
Johnson, P. (1983) A History of the Modern World, London: George Weidenfeld and Nicolson Ltd
Kosonen, P. (1987) From collectivity to individualism in the welfare state, Acta Sociologica, 30 (3/4): 281-93
Lindbeck, A. (1993) The Dilapidated Swedish Model, Newsweek, December 20th, 1993, p.17
Logue, J. (1979) The welfare state – victim of its own success, Daedalus, 108: 69-88
Olson, M. (1982) The Rise and Decline of Nations, New Haven: Yale University Press
Polanyi, K. (1944) The Great Transformation, New York: Holt, Rinehart & Winston
Wilhelm, D. (1977) Creative Alternatives to Communism, London: The Macmillan Press
Yergin, D. and Stanislaw, J. (1999) The Commanding Heights: The Battle between Government and the Marketplace That Is Remaking the Modern World, New York: Simon & Schuster/Touchstone
The Economist (1992) Europe's trade unions, October 31st, 1992, pp.74-75
The Economist (1993) What the taxman takes, March 13th, 1993, pp.81-82
The Economist (1997) Remodelling Scandinavia, August 27th, 1997, pp.37-39
The Economist (2002) A Survey of Germany, December 7th, 2002, pp.1-18
The Economist (2003) A Survey of the Nordic Region, June 14th, 2003
The Economist (2006) A Survey of Germany, February 11th, 2006, pp.3-18
The Economist (2006) Special Report on the Swedish Model, September 9th, 2006, pp.25-27
The Economist (2006) A Survey of France, October 28th, 2006, pp.3-16
The Economist (2009) Briefing on the French Model, May 9th, 2009, pp.24-26

4 Russia – Totalitarian Communism to Bureaucratic Autocracy

The land on the eastern edge of Europe acquired its name in the era of the Vikings: the "Land of the Rus". The power centre of Viking rule was the principality of Kiev under Vladimir, who introduced Orthodox Christianity into his domain. After three centuries of Viking rule, the Mongol Khans invaded Russia and their domination lasted for two and a half centuries. By the end of the 15th century the Mongols were gradually driven out by Moscow-based principalities led by a succession of Ivans. Ivan III took the title of "Czar" (after Caesar) around 1480 AD. Ivan IV, his grandson, became the personification of cruelty and earned the name "Ivan the Terrible" – even killing his own son and only heir, and thereby shattering a royal line of several centuries. In the ensuing turmoil, a Romanov relative of Ivan the Terrible's wife, Anastasia, was crowned as Czar. Then followed three centuries of Romanov family rule. The pinnacle of the Romanovs was the rule of Peter the Great. He built St. Petersburg and laid the foundations of wedding Russia to Europe. His initiatives were continued by Catherine, the German princess who became Czarina after a coup d'état. The Romanovs stayed on the throne for the next century and a half. They succeeded in driving back Napoleon's 1812 assault on Moscow, and in 1814 the Russian army of Alexander I swept into Paris and restored the European monarchies conquered by Napoleon. Thereafter the Romanovs expanded their realm into the largest contiguous empire the world had ever seen, encompassing Slavs, Turks, Mongols and Finns – a veritable nation of many nations. The Romanovs failed to embrace the modernisation that followed the French Revolution and the industrial revolution. Despite their superior numbers, the outdated rule of the Romanovs was illustrated by their decisive defeat in the Crimean War of 1853-56. The courageous reform efforts of Alexander II were too late to stem the revolutionary surge in Russia before he was assassinated in 1881.
Thereafter Alexander III and Nicholas II reacted too slowly to the demands for a peaceful transfer to constitutional monarchy. In the wake of the devastation of World War I, the era of Romanov rule came to an end when Nicholas was forced to abdicate in 1917. He and all the members of his family were executed in 1918. The Soviet Union lasted from 1917 to 1991, when the totalitarian rule of the Communist Party came to an end with the collapse of the USSR. In February 1992 the Russian Federation took over the nuclear arsenal of the USSR. Thereafter a new constitution was adopted at a national referendum on 12 December 1993. Yeltsin was elected as the first President of the Russian Federation. In December 1999 the ailing Yeltsin appointed his Prime Minister, Vladimir Putin, as acting President. On 26 March 2000, Vladimir Putin was elected as President with 52 percent of the vote.

Profile of the Russian Federation

The Russian Federation is made up of 21 republics and is the largest country in the world, covering 17 million square kilometres. It extends from the Baltic Sea in the west to the Pacific Ocean in the east, a distance of 10,000 km. It shares borders with 12 other countries. The Federation also has coastlines on the Black Sea, the Sea of Japan, the Arctic Ocean and the Caspian Sea. The country is divided by the Ural Mountains, which traverse it from the Arctic Ocean to Kazakhstan. To the west of the Urals are the fertile East European Plains and to the east the low-lying Siberian steppe lands, the Siberian Plateau and the Siberian lowlands. Climatic conditions vary dramatically, ranging from northerly polar conditions (down to -68°C in north-east Siberia), through sub-arctic and humid continental, to sub-tropical and semi-arid conditions in the south. Permafrost covers most of Siberia. The areas of greatest precipitation border the Baltic, Black and Caspian seas. At the southern end of the federation's Pacific coast summer monsoon conditions prevail. Moscow's average January temperature is -9°C and its average July temperature is 18.3°C, with a rainfall of 630mm.

In 2008 the total population was estimated at 141 million – approximately 51 percent of the former USSR population – of which 73 percent live in urban areas. The ethnic composition included 80 percent Russian, 3.8 percent Tartar, 2 percent Ukrainian and 14 percent other. Moscow has 10.5 million citizens and St. Petersburg 4.6 million. In terms of religion, 15-20 percent claim to be Russian Orthodox and 10-15 percent Muslim. There are also Roman Catholics, Buddhists and Jews. Russian is the official language, but 100 other languages are spoken. In terms of the 1993 constitution, the bicameral Federal Assembly is the supreme legislative body. The upper chamber is the Federation Council with 178 deputies representing the federal units. The lower chamber is the Duma with 450 members elected by popular vote. The President is elected by popular vote. Currently it is Dmitry Medvedev, with former president Vladimir Putin as Prime Minister. Regionally there are 21 republics, 46 oblasts, 9 krays, 2 federal cities and 4 autonomous okrugs. A 13-member Constitutional Court determines the validity of presidential decrees and legislative enactments. A Declaration of Human and Civil Rights and Freedoms was enacted in 1991, providing for freedom of travel, speech, religion and peaceful assembly, and the right to own property. Judges are appointed for life by the President on recommendation of the Federation Council. The GDP in 2007 was estimated at US$1.29 trillion (slightly more than India) or US$9,000 per capita. Industry contributed 35 percent of GDP (22 percent of the workforce), agriculture 5 percent of GDP (10 percent of the workforce) and services 60 percent of GDP (68 percent of the workforce). In 2006 Russia produced 9.77 million barrels per day of crude oil and 612.1 billion cubic metres of natural gas. There are vast reserves of both resources in Siberia. Russia is also a major producer of coal, iron ore, nickel, copper, gold and diamonds. Mineral deposits are mostly in Siberia.
About 50 percent of the country is forested and more than 3.5 million tonnes of fish are caught each year. In 2007 total exports (mainly oil and gas) amounted to US$365 billion. The main export destinations for oil and gas products are the Netherlands, Germany and Italy.

Marxist-Leninist Communism

Normally the profile of a society's history and geography reflects the institutions, interests, beliefs and ideologies of its peoples. The USSR, however, was the creation of the Communist Party of the Soviet Union (CPSU). In Part VII of the Third Programme of the CPSU, 1961, one finds the following explanation: "Unlike all the preceding socio-economic formations, Communist society does not develop spontaneously, but as a result of the conscious and purposeful efforts of the masses, led by the Marxist-Leninist Party. The Communist Party which unites the foremost representatives of the working class of all the working people, and is closely connected with the masses, which enjoys unbounded prestige among the people and understands the laws of social development, provides proper leadership in Communist instruction as a whole, giving it an organised plan and a scientifically based character." To understand how the CPSU came about, it is necessary to explain the doctrine that provided the image and objectives for the transformation of the Russian society by the CPSU. Marxism-Leninism, with a later admixture of Stalinism, was the official doctrine that provided the mould into which Russian society was consciously poured after the Bolsheviks took power in 1917. The doctrine known as "Marxism-Leninism" derives from three primary sources: Karl Marx (1818-83); Friedrich Engels, his financier and collaborator (1820-95); and Vladimir Ilyich Ulianov, who later called himself Lenin (1870-1924). Lenin added to the work of Marx and Engels his own theories of imperialism as the last stage of capitalism, of the dictatorship of the proletariat, and of socialism in one country. Stalin, who was the unchallenged dictator of the Soviet Union from 1928 to his death in 1953, always claimed that his political theory was strictly that of Lenin. What can be said of Stalin is that he added to Marxism-Leninism the practical implications of "socialism in one country". To elaborate the totalitarian control of the CPSU he introduced a great deal more violence, fraud and forced collectivisation. Karl Marx was born in Trier, Germany, of Jewish parents. He attended the University of Berlin for several years, where he studied jurisprudence, philosophy and history. He joined the staff of the Rheinische Zeitung in Cologne, but when the Prussian Government suppressed the paper, he moved to Paris, then the European headquarters of radical movements. In Paris, Marx met Proudhon, the leading French socialist thinker; Bakunin, the Russian anarchist; and Friedrich Engels, a Rhinelander like Marx himself, who became his lifelong companion, financier and collaborator. Engels was the son of a German textile manufacturer with business interests in Germany and England. Engels was sent by his father to manage their business in Manchester in 1842, where he wrote a penetrating analysis of squalor and poverty in the midst of luxurious wealth under the title The Condition of the Working Class in England (1845). In 1845 Marx was expelled from France and he then went to Brussels, another centre of political refugees. There Marx composed, with the aid of Engels, a pamphlet called The Communist Manifesto (1848), the most influential of his writings. In 1848 Marx participated in revolutionary activities in France and Germany and was forbidden to return to Germany. In 1849 he moved to England, financially supported by Engels, where he stayed until his death in 1883. He spent most of his time in the British Museum. The first volume of Das Kapital was published in 1867; the second and third volumes appeared posthumously in 1885-1895, edited by Engels. Both Marx and Engels were deeply sensitised by the social distress brought about by chaotic industrialisation in the 19th century. Marx emerged as the most compelling prophet of the dispossessed.
He posed logical arguments which, to his admirers, became holy writ for action in the destruction of capitalist society. Marx combed historical sources in the public libraries in search of the fundamental causes of social change. By adapting Hegel's views on the dialectic march of ideas through history, Marx claimed that the subject of the whole historical process was nothing mental or spiritual at all, but material. The diagnosis Marx offered took the form of historical materialism. Marx proclaimed that the underlying force in historical development was change in the relations of production. Relationships within the productive process in society were marked by conflict and contradiction: between patrician and plebeian, freeman and slave, and in the perpetual class struggle between bourgeoisie and proletariat. In maturity, capitalism mobilises a class of exploited workers who are "alienated" through their inability to attain self-realisation in their work. Because they are paid much less than the value of their work, their deprivation (hunger and poverty) and recurrent economic crises bring their class enemy into focus. The proletariat then challenges the political supremacy of the bourgeoisie by social revolution. Marx also purported to anticipate the means of fulfilment. Aim and direction are given by an intellectually motivated vanguard, the Communists, who provide the spearhead. The period of revolutionary transformation will consist of a "dictatorship of the proletariat" until the bourgeoisie are eliminated. Finally, with the advent of a classless society, the interference of the state power in social relations becomes superfluous and then ceases of itself. The state is not abolished ... it withers away. In Marx's view labour is both the source and measure of value. Price is merely the money name of the labour realised in a commodity.
As the surplus value of labour accumulates in the pockets of the capitalist class, the historical accumulation of capital leads to an inbuilt cyclical expansion and contraction, which produces an industrial "reserve army" of unemployed as permanent raw material for revolution. Marx promoted his interpretation of historical materialism with skilful arguments. It appealed powerfully to the intellectual representatives of the underprivileged. It provided a bible of infallible dogma to the Russian Bolsheviks, the Maoist Communists and the European social-democrats. It is important to note that virtually all the major books attributed to Marx were co-authored and edited by Engels. It is also important to note that the ideas attributed to Marx are neither original nor unique.

In the Communist Manifesto, Marx and Engels appropriated the name Robert Owen used for his own very different kind of socialism: "village communism". The word "communism", in turn, is a simple translation of the French communisme, derived from the French name for the old village unit, la commune. According to Engels, Marx chose the word "communism" in order to distinguish his version of "scientific" socialism from the "utopian" socialism of earlier French and English theorists such as Blanqui, Fourier, Saint-Simon and Owen. Hegel originated the idea of "dialectic" historical determinism, into which Marx injected the idea of class conflict as the moving force determining relations between society and the individual. In 1832 the French revolutionary Auguste Blanqui had already characterised political struggle as a historical conflict between economic classes distinguished by their relationships to the means of production. Blanqui also argued that a cadre of professional revolutionaries alone could guarantee the success of a revolution. It must also be said that Marx's prediction of an inevitable revolution in the highly industrialised Western states came to naught. Only in large, pre-industrial, primary-producing countries (Russia and China) did it succeed. From these power bases, Marxism turned its attention to the under-industrialised Afro-Asian nations. In the last three decades of the 19th century Marxism began to be studied in the Russian Empire and was subjected to various interpretations. Lenin struck out a distinctive line. He was the son of a school teacher and studied law at the University of Kazan. His eldest brother was involved in an abortive plot to kill Czar Alexander III; he was arrested and executed. Lenin thereafter found himself under police observation, but managed to maintain political contact with illegal groups. In December 1895 Lenin was arrested in St. Petersburg and spent fourteen months in prison.
In prison he continued to write revolutionary pamphlets and was sentenced to three years' exile in Siberia. In 1898 Lenin married Nadezhda Krupskaya, a fellow revolutionary in Siberian exile, and their home became a centre for revolutionary organisation and planning. He also found enough time for hunting, recreation and ice skating. He changed his name to "Lenin", derived from the Siberian river Lena. After his release in 1900 he went abroad, where he spent the next 17 years organising the revolutionary movement in Russia. From an early stage Lenin latched on to the revolutionary elements in Marxism. He opposed "revisionists" who wished to make social democracy the vehicle to reform capitalism along constitutional lines rather than overthrowing it. He believed the true role of a Marxist party was to make a revolution as the vanguard of the proletariat. In his pamphlet What is to be Done?, written in 1902, Lenin expounded his doctrine of the "professional revolutionary". He argued that an organisation of workers must essentially be like a trade union, large and public in character. In contrast, the organisation of revolutionaries must be a party of professionals, hierarchical and centralised and as secret as possible. He believed class consciousness could be brought to the workers only from without. Lenin's concept of a small, central, revolutionary vanguard brought him into conflict with Trotsky and his followers, who believed in democratic structures. In 1903 a scattered group of 43 delegates from various Marxist organisations of the Czarist Empire met in London to found a social-democratic party and to decide upon its rules. Lenin put forward his own thesis and managed to carry the majority with him. Since the Russian word for majority is bolsheviki, Lenin's victorious faction was dubbed the Bolsheviks and retained that name thereafter. In his most influential work, titled State and Revolution, written in 1917, Lenin sets out the task of the proletarian revolution.
Once the proletariat has seized power, its task is to break up the apparatus used by the former possessors of capital and land, and to establish its own machinery. It must carry out a programme of repression of the former possessing classes. The proletariat, or those who act for it, must establish an inequality of rights, and deprive the former possessing class of its right to vote and to participate in government. This regime of destruction is the "dictatorship of the proletariat". Ultimately the former possessing classes must be eradicated, so that only one class, the proletariat, will remain. Eventually, there will be no need for the state apparatus; it will simply "wither away". The political direction of man will give way to the administration of things and material processes. As this may take some time, the proletariat must first repress the previously propertied classes and change the traditions of those who have hitherto supported them. It will also have to raise the standard of living to such a height as to make possible the norm of socialist distribution: "from each according to his ability, to each according to his needs". Lenin's practical contribution to the Communist Revolution of 1917 in Russia was mainly threefold:
- First was his clarification of the principles of revolutionary conspiratorial organisation.
- Secondly, he evolved a system of leadership of the popular mass by a small minority party.
- Thirdly, he developed a system to penetrate all institutions of society by party members.

Collapse of the Romanov Empire

Over its long convoluted history, many Russian autocrats committed vile outrages and condemned their subjects to immeasurable suffering. Ivan the Terrible struck down and strangled his own son; Peter the Great's son died of punishment ordered by his father. Several times the Russian people rose in large-scale revolt against the autocracy and exploitation of the czarist order. On the first occasion, after the Russian defeat in the Crimean War, they gained the generously conceived Emancipation Edict of 1861 to end serfdom, but it was inadequately implemented. On the second occasion, a major revolt was mounted in 1905, after Russia's defeat in the Russo-Japanese War; they were granted three successive Dumas (parliaments), but these had restricted powers and were dissolved whenever they seemed about to embark on major reforms. With the third and last large-scale resistance to czarism in March 1917, the imperial regime collapsed and the Czar abdicated. For a brief interval power passed to two centres: the parliamentary liberal democrats of the provisional Kerensky government and the radical councils of workingmen and soldiers, called soviets. But before the two loci of revolutionary power could join forces to establish Russia as a constitutional democracy, the Bolsheviks – only 23,000 strong in March 1917, but brilliantly and ruthlessly led by Lenin – gained control of the soviets, through the soviets gained control of the revolution, and therewith made themselves the Russian people's new masters. The Petrograd Soviet's key Military-Revolutionary Committee under Bolshevik control was briefed to supply the necessary soldiers, sailors and armed workers. Trotsky took command and on the night of October 25th, 1917, the take-over plan was activated. Bolshevik pickets surrounded all government buildings. There was no reaction. On the morning of the 26th, at 10.00am, Lenin issued a press release: "To the Citizens of Russia: The Provisional Government has been deposed.
Government authority has passed into the hands of the organ of the ... the Military-Revolutionary Committee, which stands at the head of the Petrograd proletariat and garrison. The task for which the people have been struggling has been assured – the immediate offer of a democratic peace, the abolition of the landed property of the landlords, worker control over production, and the creation of a Soviet Government. Long live the Revolution of the Workers, Soldiers and Peasants.” Practically every word in the declaration was false or misleading, but it made no difference. Lenin and Trotsky had correctly calculated that there was no one in the capital to oppose them. The provisional government ministers were still huddled in the Winter Palace, waiting for a rescue that would never come. The imperial army was nowhere in sight. At 9.00pm the Bolshevik sailors on the cruiser Aurora fired one blank salvo at the Winter Palace. The mob moved in when they saw there was no resistance. The “storming of the Winter Palace” was completed when the Ministers surrendered at 2.30am. That was the moment when the Bolsheviks seized power in Petrograd. In retrospect it must be understood that the “Russian Revolution” of 1917 consisted of several interwoven chains of collapse. The two political eruptions – the February (March) Revolution which overturned the czarist monarchy and the second, the October (November) Revolution or coup, which installed the Bolshevik dictatorship – were accompanied by widespread upheavals throughout the Empire’s social, economic and cultural foundations. They were also accompanied by an avalanche of national risings in each of the non-Russian countries
which had been incorporated into the Empire, and which now took the chance to seize their independence. The effects of the upheavals were dramatic. In mid-February, the last of the Romanovs still stood at the head of Europe’s largest army. Within twelve months the Romanovs had been extinguished, their Empire had disintegrated into a score of self-ruling states and the central rump territory was under control of Bolshevik rulers. Following an armistice agreed at Brest-Litovsk, all effective Russian participation in World War I ceased from 6th December, 1917. The humiliating terms of the Brest-Litovsk Treaty forced the Bolsheviks to surrender the Baltic provinces and the Russian part of Poland to Germany and to recognise the independence of Ukraine, Georgia and Finland. In view of their volatile grip on power at the time, the Bolsheviks decided to execute the Romanov royal family on 18th July, 1918. They renamed Russia the Russian Soviet Federated Socialist Republic (RSFSR) with Moscow reinstated as the capital. The Bolsheviks installed themselves in power as the Council of People’s Commissars with Lenin as Chairman (PM) and Trotsky (Lev Bronstein) as foreign minister. Elections for a Constituent Assembly were held which gave the Bolsheviks only a quarter of the seats and an absolute majority to the peasant-based Socialist Revolutionaries. Their deputies rejected Bolshevik demands that they should be subordinate to the All-Russia Congress of Soviets. When the Assembly convened on 6th January, 1918, it was promptly disbanded by Lenin. He convened a Second Constituent Assembly consisting of Bolshevik deputies which sanctioned his Council of People’s Commissars. When Russia then slid into civil war, Lenin banned all political parties except his own Bolsheviks, which he renamed the All-Russian Communist Party. The Bolshevik seizure of power did not go uncontested. They were challenged by the “White” armies led by former czarist officers and actively supported by Britain, France, the US and Japan. 
The “War of Intervention” dragged on until 1922. The Bolsheviks, with their enemies defeated, were now supreme. Decrees of the years 1917 and 1918 had nationalised the land, private industry, the entire credit system and the means of transportation. By 1921 the economy was almost at a standstill: consumer goods had largely disappeared, grain was available only by forced requisitions from the peasantry. Lenin shifted course and permitted a brief revival of private enterprise under the New Economic Policy (NEP). This policy combined state ownership of the “commanding heights” of the economy (heavy industry, public utilities and the financial system) with free markets and private ownership of small-scale industry and agriculture. This allowed the economy to make some small progress towards recovery. When Lenin died in 1924, the economic options were still open, but so too was the problem of succession. The Party held Lenin in such reverence that they decided to embalm his corpse and display it to the multitude in a mausoleum in the Red Square of Moscow. The actions and success of the ambitious Lenin and his followers had little to do with the ideas of Karl Marx and Friedrich Engels. Lenin’s pre-revolutionary programme, his improvisations during the revolution, and his own and his successors’ policies were expressions of a political dynamic much older than Marxism-Leninism: a desire for power and a determination to exploit every opportunity to take control of the Russian people. (For a comprehensive review of the events surrounding the Russian Revolution of 1917, see Norman Davies, Europe – A History, Oxford University Press, 1996, pp.901-938) The Marxist-Leninist ideology played a vital role in the Communist conquest of Russia: it provided self-confidence to the revolutionary leaders, maintained a common bond amongst them and gave an intellectual and moral appeal to the vested interests of a self-appointed revolutionary elite.

Rise of Stalin

When Lenin died in 1924, the problem of economic policy and the problem of succession became inextricably intermingled. In terms of Marxist-Leninist revolutionary theory, the establishment of the “dictatorship of the proletariat” in Russia should have been followed by revolutions of the industrial proletarians of the West: their triumph would guarantee the survival of the revolution in Russia. But the revolutions did not materialise and left the national bourgeoisies in
control as before. Only in China, where a nationalist revolution was in progress in alliance with communism, was there hope of further spread of the revolution. In Russia itself, the proletariat of industrial workers was fractionally small, sitting atop a mass of primitive and superstitious peasants anxious only for one thing – their free and undisturbed possession of their land. Without them, the revolution would starve to death. When Lenin died at the age of 53 after a series of mentally debilitating strokes, he left the Party with an internal struggle between several factions. The major camps were represented by Leon Trotsky (Lev Bronstein) and Yosef Stalin (Yosef Dzhugashvili). Other prominent members were Grigory Zinoviev (Grigory Apfelbaum), Lev Kamenev (Lev Rozenfeld), Nikolai Bukharin, Alexei Rykov and Mikhail Tomsky. Economic policy was a key issue in the factional struggle. Trotsky advocated accelerated industrialisation, financed at the expense of the peasants. Bukharin favoured conciliation with the peasants. The others were more focused on personal ascendancy, forging alliances to outmanoeuvre other contenders. Trotsky had been neither Bolshevik nor Menshevik in the formative years of the Party. He joined Lenin’s following only in 1917. In 1924 his chief rival appeared to be either Kamenev or Zinoviev, but in the course of time they joined forces out of fear of Stalin’s growing power. The history of Russia from 1924 to 1936 is the story of Stalin’s rise to personal dictatorship. Stalin was no intellectual. His mutterings in the field of Marxist-Leninist doctrine were considered clumsy, mechanical and banal. He was no orator. He was not middle class like Lenin or Trotsky. Proletarian in origin, he had joined the Bolsheviks early and, while the Party’s pundits were in exile in the West, Stalin carried out revolutionary work in the oil fields of his native Caucasus along with fellow Georgian, Lavrenti Beria, later to become a faithful Stalin stalwart and dreaded head of the secret police. 
Stalin was a crafty, devious, suspicious and utterly ruthless Georgian. He gravitated to where his talents could be put to excellent use: the secretaryship of the Party. At that time the Central Committee was still the main policy-making body of the Party in which the Congresses still counted. Under Stalin, the Secretariat began to lay a stealthy hold upon the Party: for the Secretariat controlled admission to and expulsion from the Party. The famous “Lenin enrolment” of 1924, ostensibly a demonstration of affection for the departed leader, allowed Stalin to induct a large contingent of his pledged supporters into the Party, just as the struggle for succession intensified. For different reasons, Kamenev and Zinoviev on the one hand and Stalin on the other were determined to prevent the accession of Trotsky. They contracted an alliance (the Troika) and, against it, Trotsky and his supporters found themselves increasingly isolated in the party Congresses. By the time Zinoviev and Kamenev woke to the fact that Stalin represented a greater danger to them than Trotsky did, it was too late. The efforts of the two groups to unite against Stalin were dismissed by him as a “mutual amnesty” and the Party Congress rejected them with scorn. In 1927 Trotsky was expelled from the Party for his persistent “anti-party” activity. In 1928 Stalin came out with his own solution to the Russian economic problem and the future of her revolution. The world revolution had failed. Therefore all energies of the Soviet peoples must be harnessed to building up an industrial structure as fast as possible – for two reasons: first to protect the USSR against the powerful bourgeois nations of the West – the “capitalist encirclement” – and secondly, to create, by sheer power, the industrial proletariat whose numbers and strength could alone save the revolution. So was initiated the First Five-Year Plan: socialism was to be built in one country. 
Regimenting the Soviet peoples in a ruthless drive to construct heavy industry became Stalin’s overall objective. In 1930 the pressure was switched to the peasantry. Peasants who owned property (kulaks) were stripped of their possessions. The small-holdings were then merged into collective farms. In this way, it was thought, mechanised farming could be introduced – and the peasantry subjected to state control. Zinoviev, Kamenev, Rykov and Bukharin moved into opposition to Stalin as a result of the ruthlessness with which he implemented his industrial and agricultural policies. From his vantage point in the Secretariat, Stalin controlled the Party Congresses and had no difficulty in
demoting and humiliating his opponents. From 1930 Stalin held the high ground in the struggle for succession. As the First Five-Year Plan was succeeded in 1932 by the Second, the new regime moved towards totalitarian control. Owing to his hold on the Party machinery, Stalin dominated the Central Committee and its Politburo. Decisions were taken by him and his adherents and executed under his authority as General Secretary – the head and apex of the apparat. The coercive organs of the state expanded rapidly and the security police (GPU or OGPU) began their rise to prominence. After 1934 a reign of terror started to emerge. Stalin’s heir apparent, Kirov, was assassinated in Leningrad. By decree the GPU was empowered to arrest on charges of counter-revolutionary crimes and to try people by secret court martial. Then in 1936, the year in which Stalin’s New Constitution was proclaimed, the great purges started: of the armed forces, of the bureaucracy and of the Party itself. In the next two years, half the officers in the armed forces were purged, and no less than one out of every four party members. Charged with mostly fabricated counter-revolutionary crimes, people were ejected from the Party, exiled, imprisoned or executed. After 1936, Stalin was the personal despot in charge of the Soviet Union. Trotsky was in exile in Mexico. Zinoviev, Kamenev, Bukharin, Rykov – all members of the original revolutionary leadership circle – were vilely executed for alleged conspiracy against the USSR. Self-incrimination was accepted practice in court proceedings. From that point on, the Soviet Union was a fully fledged totalitarian state.

The USSR’s Totalitarian Dictatorship

Since antiquity scholars have classified forms of government according to their essential characteristics. Aristotle’s famous typology distinguished monarchies from tyrannies, aristocracies from oligarchies, and constitutional democracies from mob rule, according to the good or bad intentions of the rulers. Down the centuries democracies were rare and dictatorships plentiful. During the 1930s a new kind of dictatorship was in the ascendancy: Mussolini’s fascist dictatorship in Italy, Hitler’s Nazi dictatorship in Germany and Stalin’s Communist dictatorship in the USSR. Two Harvard University professors subsequently developed an analytical typology to describe the essential characteristics of this new kind of dictatorship, which has become known as the “Friedrich-Brzezinski Syndrome”. (See Carl J. Friedrich and Zbigniew K. Brzezinski, Totalitarian Dictatorship and Autocracy, New York: Praeger Publishers, 1956) The Friedrich-Brzezinski Totalitarian Syndrome contains the following elements:

1. An official ideology: this is characterised by its attempts to cover nearly all aspects of human existence (totalism) and by the projection of a utopian goal that compels total rejection of the status quo.
2. A single, elite-directed mass party: the party is headed by a dictator, is restrictive, and does not exceed 10 percent of the population; it contains a hard core of militants dedicated to realising the ideology; it is superior to the government bureaucracy or “comingled” with it.
3. A system of terroristic police control: this assists the party elite, but can be turned against individual members; terror is applied not only against open oppositionists, but against arbitrarily singled-out groups; it uses the latest available techniques and technology.
4. A near-complete monopoly of control over the media and mass communication: this monopoly affects press, radio and motion pictures and is carried out by the party or its servants or agents.
5. A party-dominated monopolistic control over all means of effective armed combat: in short, a weapons monopoly.
6. A centrally controlled and directed economy: this involves a “command economy” in which central authorities control basic economic decisions and priorities.

This analytical scheme does not imply that all the factors mentioned are necessary conditions which must be present in all instances as a syndrome of necessary traits to qualify as a
“totalitarian dictatorship”. Rather, these elements are the sum of features that characterise totalitarian dictatorships, but are not all found in full measure in each specific case. All totalitarian regimes draw from the same bank of traits. Stalin’s totalitarianism rested on four organisational pillars: the Communist Party (CPSU), the government structures, the army and the secret police network. Stalin would favour one or more of these pillars while he was purging one or more of the others. He moulded each institutional pillar according to his liking by constantly replacing the persons filling key positions. He kept a constant watchful eye that nobody acquired a power base that could challenge his own pre-eminence. The Communist Party served as Stalin’s original power base. From the early stages he promoted his own recruits to key positions and then, in the early 1930s, purged all echelons of the party of people who did not blindly support his policies. Terror, from incarceration and torture to murder, was the fate of people who failed to show the required degree of enthusiasm. Especially hard hit were the “Old Bolsheviks”, comrades of Lenin, who were bizarrely charged with being spies, traitors or foreign agents. Forced confessions and “show trials” were supposed to “prove” that the colossal difficulties of collectivisation and industrialisation were their doing. He blamed the failures of his Five-Year Plans on the “spiders, bloodsuckers and vampires” who worked behind the scenes to “wreck”, “sabotage” and “undermine” his policies and plans. The main organs of the CPSU in the 1930s had a hierarchical structure with the Secretary General (Stalin) at the top and three buros on the next level below him (the Secretariat, the Orgburo and the Politburo). Below the Secretariat served the Central Committee and, at the bottom of the pile, the Party Congress which was supposed to be the “legislature” of the CPSU. 
In Lenin’s time the party congresses were lively arenas of discussion, but Stalin turned them into organised cheering sections. Party congresses became occasions to praise the great leader and to ratify decisions already taken and policies already implemented. The main function of the party congress was to elect the Central Committee which meant confirming the names nominated by Stalin. Under the 1936 constitution congresses were supposed to meet at least once every four years, but in reality the lag was between seven and fourteen years. The Central Committee, with its 100 members, was supposed to be a forum for the top party elite to decide key issues. Under Stalin many members were arrested and killed. It met less frequently and had a limited agenda. The other organs (the Politburo, Secretariat and Orgburo) consisted under Stalin of blindly devoted or thoroughly browbeaten followers. Of the fifteen members of the Politburo chosen in 1934, seven were killed before the end of the decade. The Secretariat was Stalin’s special domain. Its members and staff were his most loyal subordinates. Stalin used several techniques to preserve his hegemonic position. One technique was the “local clique” with “vertical” links reaching upwards to Stalin himself. Another technique was the “Mafia system” based on ad hoc small committees to take decisions and the use of “agents of personal rule” in addition to his “all-powerful personal secretaries” to operate outside the framework of formal institutions. A crucial technique was the secret police that could purge suspected party members by jailings, executions or banishment to vast labour camps. In 1936 a new constitution, known as the “Stalin Constitution” was proclaimed in the USSR. It declared the country a “socialist” society and set up a formal government structure that lasted until 1991. It also declared the CPSU the only legal party. 
In theory the basic organ of the Soviet Government was the bicameral Supreme Soviet: the Soviet of the Union (one deputy per 300,000 people) and the Soviet of Nationalities (chosen by the 15 Union-Republics, autonomous republics and national areas). This meant that the whole Supreme Soviet counted more than 1,000 deputies. On paper this was the supreme lawmaker and appointer of government officials. In reality it was a rubber stamp and an organised cheering section. The Supreme Soviet chose a small board called the Praesidium of the Supreme Soviet to carry on the parent body’s functions when not in session. It also had the functions of a collective chief executive and its chairman was considered the head of state (“president”) of the USSR. This position was usually reserved for “elder statesmen”. Also emanating from the Supreme Soviet was the “cabinet”, later known as the “Council of Ministers”. It selected a chairman, who was the Soviet “prime minister”, and several vice-chairmen who were “ministers”. Stalin preferred to delegate these positions to his
most loyal subordinates, such as Molotov. As the Council of Ministers grew to include many ministries and agencies, another Praesidium emerged (the Praesidium of the Council of Ministers), which was the true “inner” cabinet of the system. Ministries in the Council were of two sorts: Union-Republic Ministries with separate organisations at both national (Moscow) and republican levels; and All-Union Ministries at the national level only, which operated directly everywhere in the country. The Soviet Army served as Stalin’s military arm. In the mid-1930s, the army was severely purged. In 1937, Marshal Tukhachevsky, Deputy Defence Minister Gamarnik and other top generals were executed for treason. This inaugurated a large-scale purge of the Soviet High Command and the top Party apparatus in the Army. This had near calamitous results for the Soviet Army in the beginning of World War II. From the time of the Civil War (1918-21) party leaders had dispatched “political commissars” to the army to watch over military commanders of non-Communist inclinations. After the war, when there were signs of Marshal Zhukov becoming too prominent a figure, he was promptly removed from the limelight and ideological indoctrination was intensified. The secret police was indispensable to Stalin’s power structure. The Soviet secret police first emerged during the Civil War of 1918-21 as the Cheka. In 1922 the Cheka was replaced by the GPU which a few years later became the OGPU. In the late 1930s the OGPU was absorbed into a ministry, the NKVD, or People’s Commissariat of Internal Affairs. After World War II, it became known as the MVD. After Stalin’s death it became the KGB. During Stalin’s regime a relatively backward and illiterate society had been hammered into an industrialising urban society. Economically the country advanced at a very rapid pace and a formidable heavy industry was created. Its Achilles heel remained agriculture. 
Stalin’s purges wiped out the backbone of the country’s peasant communities. The collectivisation programmes failed in their purpose and food production lagged behind the production of iron, steel and cement. Consumer goods and their distribution were neglected. The development progress that was achieved came at a terrible cost in life, in freedom and in happiness. The totalitarian dictatorship that was put in place by Stalin is well described by Norman Davies: “It was supported by the largest “secret police” in the world, by the Gulag, by an aggressive brand of pre-emptive censorship, by a vast arsenal of tanks and security forces. But these were not the primary instruments of oppression: the dictatorship relied above all on the dual structures of the party-state, that is, on the civilian organs of the Communist Party and their control of the parallel institutions of the state ... There was no branch of human activity which was not subordinated to the relevant department of state. There was no branch of the state which was not governed by orders from the relevant committee of the Party. Whatever was going on, be it in the most august of ministries or in the lowliest of local farms, factories, or football clubs, it could only be legal if organised by the state; and it could only be organised if approved by the Party ... The nominal heads of all state institutions – ministers, generals, ambassadors, leaders of delegations, all directors of factories, schools, or institutes – were formally obliged to accept instructions from a parallel Party committee. They were servants of more powerful Party secretaries operating behind the scenes. ... Everything depended on the efficient transmission of the Party’s orders. Party members were sworn to obedience and to secrecy. ... In a very real sense, the Soviet Union never really existed, except as a fascia for Party power. ... 
That is why, when the CPSU eventually collapsed, the USSR could not possibly exist without it.” (See Norman Davies, op.cit., pp.1093-1094)

The Aftermath of World War II

The Second World War had an immense impact on the international role of the USSR. Its geopolitical ramifications dominated the international scene for the next five decades and affected the lives of millions of people around the world.

The Soviet Union switched from initially being an ideological ally of Nazi Germany to becoming its wartime nemesis. During the latter part of the 1930s, Russia’s “soviet-socialism” and Germany’s “national-socialism” operated in close alliance and mutual support, devastating local communities in Poland, the Baltic States, Czechoslovakia and the Ukraine. Untold millions were uprooted or brutally killed with impunity while the West chose to look the other way. Stalin made non-aggression pacts with Nazi Germany on August 23rd and September 28th, 1939. The pacts contained secret protocols dividing Eastern Europe into German and Soviet spheres of influence. They enabled Germany and the USSR to destroy Poland and the USSR to occupy Eastern Poland and the Baltic states. They allowed Germany to attack the West European countries with Stalin’s support and encouragement. Trainloads carrying food and raw materials for Germany’s war machine were dispatched from Russia into Germany. By the summer of 1941, the Stalin terror operated in full force in Poland. Between 1 and 2 million individuals had been transported either to Arctic camps or to forced exile in Central Asia. The terror was not only directed at Poles, but also at Byelorussians, Ukrainians and Jews. Some 26,000 Polish prisoners of war – mainly reserve officers, officials, politicians and clergy – were taken from their camps and shot in a series of massacres known under the collective name of Katyn. Stalin personally authorised the NKVD to carry out the killings. On June 22nd, 1941, the Nazi Operation Barbarossa against the USSR was unleashed. It opened up the front which was to account for 75 percent of German war casualties and which must be judged as the main reason for Hitler’s ultimate defeat. The campaign carried the Wehrmacht to the gates of Moscow and led it to the Volga and the Caucasus. The “Crusade for Civilisation” attracted support from Romania, Hungary, Italy and Spain. 
In the end the Nazi Wehrmacht suffered the same fate as Napoleon’s army. Eventually Marshal Zhukov led his victorious army into Berlin after about 27 million Soviet citizens had been killed in the war. In the East the Soviet army, deployed against the Japanese, captured north-east China (Manchuria), North Korea, Sakhalin and the Kuril Islands. The Soviet Union suffered appalling devastation during the war and needed an interval of respite. It had annexed 272,500 square miles of foreign territory, with an extra population of 25 million people. It needed time to purge and prepare them for the Soviet way of life. The Soviet Union did not yet possess the atomic bomb but soon acquired technical details from its spies and informants in the USA and the UK. Many East European countries were now in the USSR’s sphere of influence: Poland, Ukraine, Byelorussia, Latvia, Estonia, Lithuania, Romania, Bulgaria, Hungary, Czechoslovakia, East Germany, Yugoslavia and Albania. The Communist Parties of Western Europe were greatly strengthened by the demise of fascism. They were particularly active in France, Belgium and Italy. After the fiasco of a failed communist coup in Belgium in 1944, the communists’ strategy was to participate in parliamentary and governmental coalitions. Relations between Western and Soviet administrations in Germany deteriorated and Berlin remained split into mutually hostile sectors. The three Western zones went their own way, joined forces and prepared to introduce the D-Mark. On March 5th, 1946, Winston Churchill, now replaced as Prime Minister by Clement Attlee, delivered his famous “iron curtain” speech at Westminster College in Fulton, Missouri: “From Stettin on the Baltic to Trieste on the Adriatic, an iron curtain has descended across the continent. Behind that line lie all the capitals of the ancient states of central and eastern Europe – Warsaw, Berlin, Prague, Vienna, Budapest, Belgrade, Bucharest, and Sofia ... 
This is certainly not the liberated Europe which we fought to build up.” Following the creation of the Cominform in late 1947, the February 1948 communist coup in Prague and the Berlin Blockade, the Iron Curtain became a reality. Cominform’s purpose was to co-ordinate the strategies of fraternal communist parties. To test the USA’s determination to retain its foothold in Europe, Stalin decided to block traffic to West Berlin. West Berlin remained under blockade for 15 months and was kept going by American and British airlifts. These events marked the beginning of the Cold War.

1949 saw the establishment of the new state of West Germany. Stalin responded with the DDR, the “German Democratic Republic”, with its capital in East Berlin. West Berlin remained a small enclave of disputed status and a loophole for thousands of refugees. 1949 also saw the communist take-over of Beijing. The People’s Republic of China was formally established on October 1st, 1949. At this point in history, half of humanity was part of the “Socialist Camp”. Soviet influence was strong among ex-colonial peoples. Moscow saw itself as the natural patron of all national liberation movements. It forged strong links with the Arab world and Cuba. On the home front all available resources were thrown into the military applications of nuclear science. Teams of scientists produced the Soviet bomb. An atomic device was successfully tested in 1949 and a hydrogen device in 1953. To all outward appearances the USSR was a strong, impregnable fortress armed with the world’s largest arsenal of weapons. As the greatest military power in Europe, the Soviet Union turned itself into one of the two global super-powers. But at the same time its own internal processes were decomposing: it was riddled with its own kind of political cancer. But no one noticed its distress – neither the external Sovietologists nor its own political leaders themselves. Stalin’s last years brought no relief to the long suffering of the Russian people. He was still surrounded by the same pre-war cronies. The same mixture of terror, propaganda and collective routine kept the Soviet peoples down. The Gulag kept up the same motions of mass arrests and slave labour. Stalin’s regime continued with no mitigation until he died in 1953.

De-Stalinisation

Historian Norman Davies vividly describes the circumstances surrounding Stalin’s death on March 5th, 1953, after suffering a stroke at his dacha in Kuntsevo: “In his death-throes he was left lying on the floor for 24 hours. No Kremlin doctor who valued his own life was going to save Stalin’s. The Politburo members kept vigil at his bedside in turns.” Davies then quotes from the Khrushchev Memoirs: “... As soon as Stalin showed signs of consciousness, Beria threw himself on his knees and started kissing Stalin’s hand. When Stalin lost consciousness again, Beria stood up and spat ... spewing hatred.” (See Norman Davies, op.cit., p.1091) De-Stalinisation meant the removal of those features of the Soviet regime which were directly connected with Stalin himself – the cult of personality, the one-man rule and the practice of mass terror. Although the post-Stalin USSR still suffered from “totalitarian momentum”, the Stalin template started to wane. The first important change involved the decline of the secret police (then called the MVD, or Ministry of Internal Affairs) during the succession crisis following Stalin’s death. L.P. Beria, who had headed the secret police for many years and who acted as if he was planning to make himself Stalin’s heir, was accused of being a British spy and was subsequently executed. With him were also executed a number of his henchmen. The secret police was stripped of its ministry status and placed under control of the Politburo. A widespread amnesty broke open the prison camps. In place of a massive lawless police apparatus, some rudimentary “due process” safeguards were developed, called “socialist legality”. At least critical voices stopped vanishing without a trace. The dictatorial machine was kept in place. Initially a form of collective leadership was exercised by Molotov, Malenkov and Khrushchev, but it soon gave way to the ascendancy of Nikita Khrushchev. 
He was described as a “proletarian opportunist” who made his way up the party ladder in the Ukraine. He had a “rough peasant charm” and became famous for his sensational revelations of Stalin’s atrocities in his Secret Speech to the XXth Party Congress in February 1956. He made selective revelations and denounced Stalin for his purges of 1936-9 and later. In the place of Stalinism and its cult of personality he promised collective leadership and “revolutionary legality”. In 1957 Malenkov, Molotov and Kaganovich attempted to depose Khrushchev, but he managed to turn the tables and expelled them from the Central Committee.

But Khrushchev’s erratic policies and conduct of international relations aroused opposition from within the Party. In October 1964, he was removed from his high offices as Party Secretary and Chairman of the Council of Ministers, and a split leadership emerged under Brezhnev as Secretary and Kosygin as Chairman of the Council of Ministers. Apart from the decline of the secret police, the emergence of “socialist legality”, the resurgence of the CPSU and its institutions such as the Central Committee, and the rise of collective leadership at the expense of the cult of personality, de-Stalinisation also involved a more flexible economic system. The Soviet economy under Stalin was extremely rigid and centralised. It was a command economy in every way. The State Planning Commission (Gosplan) in Moscow set rigid production targets for the whole economy. The plan was so rigid that a given factory might receive production targets well above its reach. Targets were not adjusted to economic realities. Failure to meet the often arbitrary targets could result in severe punishments. Because of the rigidities and penalties, various ways of avoiding difficulties were concocted. An expedient way was the use of the “fixer” – a person or agency to solve problems related to shortfalls of input resources, equipment breakdowns or transportation delays. In the Khrushchev period efforts were made to decentralise the economy by setting up regional economic councils and, at the factory level, to give managers looser guidelines as to how to achieve quotas. Even “profitability” – the hated capitalist yardstick – was rehabilitated as an indicator of efficiency. These changes meant greater scope for managers, greater mobility for workers and greater product choice for consumers.

The Rise of the Soviet Bloc

Eight East European countries were incorporated into the “Soviet Bloc” (not the Soviet Union): Poland, Hungary, Czechoslovakia, East Germany, Romania, Bulgaria, Yugoslavia and Albania. All passed through a phase of Stalinisation after 1948 and of de-Stalinisation at various points after 1953. Most of them belonged to the Soviet Union’s military alliance, the Warsaw Pact, and to the Soviet Union’s parallel economic organisation, CMEA or Comecon. All of them were ruled by communist dictatorships which had learned their trade under Soviet tutelage. These regimes justified their existence by reference to the same Leninist ideology and continued to owe allegiance to Moscow. But there were important variations. Some, like Hungary, passed through de-Stalinisation sooner than others, like Czechoslovakia. Where exposure to Soviet methods was shorter, the degree of “sovietisation” was lower. They were all subsumed in the category of “People’s Democracies”, which were by no means either popular or democratic. In the Stalinist phase (1945-53) all the countries of Eastern Europe were forced to accept the type of system prevalent in the USSR. Stalin held close control over East Germany, Poland and Romania. Elsewhere, he had not initially insisted on rigid conformity. After 1948, however, discipline was tightened. All chinks in the Iron Curtain were to be sealed and all features of late Stalinism were to be ruthlessly enforced. Cohorts of Soviet “advisors” and specialists were integrated into the local apparatus to ensure standardisation and compliance. Moscow-trained Stalinist clones were put in command: Bierut, Gottwald, Rákosi, Ulbricht, Gheorghiu-Dej, Zhivkov, Tito and Enver Hoxha. Yugoslavia was the only country where total obedience to Moscow was rejected at an early stage. Josip Broz, or Tito (1892-1980), a Croat, had set up his regime without Soviet assistance. He had his own nasty record of repressions.
His multinational federation, dominated by Serbia, was closely modelled on the Soviet Union dominated by Russia, with all nationality (ethnic) problems effectively suppressed. The Federated People’s Republic of Yugoslavia came into being in 1945 and had been functioning since January 1946. Tito was not inclined to take orders, did not favour collectivised agriculture and favoured workers’ self-management. In June 1948 he and his party were expelled from the Cominform, but he remained both communist and independent. Belgrade made its peace with Moscow during a Khrushchev visit in 1955, but it never joined the CMEA or the Warsaw Pact.

Political affairs in the Soviet zone of Germany had been conducted on the hopeful assumption that foundations were being laid for a united communist Germany. The failure of the Berlin blockade and the declaration of the Federal Republic in 1949 showed that such expectations were unrealistic. Hence the German Democratic Republic (DDR) was formally constituted on October 7th, 1949. The Soviet occupation forces reserved important powers for themselves. The principal problem lay in the constant haemorrhage of escapees. For a dozen years one could reach West Berlin by taking the U-bahn train from Friedrichstrasse to the Tiergarten! Over the period 1949-61, millions of people escaped. The Council for Mutual Economic Assistance (CMEA), better known as Comecon, was founded on January 8th, 1949 in Moscow, where its Secretariat remained. The founding members were joined by Albania (1949), the DDR (1950), Mongolia (1962) and Cuba (1972). Its main function was to assist in the theory and practice of “building socialism” by Soviet methods. The mechanisms of inter-party controls played an important role in keeping members of the Soviet Bloc in line. The CPSU controlled the affairs of the fraternal parties which, in turn, controlled the republics for which they were responsible. The International Department of the CPSU’s central Secretariat was specially entrusted with this vital task and each of its “bureaux” was charged with overseeing the internal affairs of a particular country. Through its channels, all the leading posts in fraternal parties could be subordinated to the nomenklatura system of “higher organs” in Moscow. Soviet agents could be placed at will into key positions throughout the Soviet Bloc. In effect, the Soviet Politburo could appoint all the other politburos. The KGB could run all the other communist security services and Glavpolit all the General Staffs of the emerging People’s Armies.
For several years after 1945, Stalin did not wish his satellites to have large military forces of their own. The most obvious signs of Stalinism taking hold were seen in the series of purges and show trials that smote the leadership of the fraternal parties after June 1948. Stalin put the East European comrades through the same “meat grinder” that he had used on the CPSU. Poland’s Gomulka was charged with “national deviation” and imprisoned; Bulgaria’s Kostov was charged and executed. In Budapest, Foreign Minister Rajk was tried and executed. In Prague, Slansky, the General Secretary of the Party, and 14 others were charged and executed. Charges in several countries included Titoism, anti-Sovietism, foreign espionage and even Zionism. In the second (post-Stalinist) phase (1953-68) the Soviet satellites worked their way towards a stage that has been labelled “national communism” or “polycentrism”. Each of the fraternal parties was to claim the right to fix its own separate “road to socialism”. The CPSU reserved the right to intervene by force if the gains of socialism were in danger. “Gains of socialism” was a code word for communist monopoly power and for loyalty to the Kremlin. Sporadic upheavals were mercilessly suppressed. On June 17th, 1953, a demonstration of workers in East Berlin was crushed by Soviet tanks. Khrushchev’s Secret Speech of 1956 propelled a shock-wave across Eastern Europe. Popular unrest welled up as the old guard of ruling parties was rocked by the demands of would-be reformers. In June, workers were killed in Poznan, Poland, when the Polish Army fired on demonstrators carrying banners demanding “Bread and Freedom” and “Russians Go Home”. In October, first in Warsaw and then in Budapest, two fraternal parties took the unprecedented step of changing the composition of their politburos without first clearing their choices with Moscow. While the Western powers were distracted by their differences over the Middle East, the USSR was left with a free hand in Warsaw and Budapest.
Khrushchev flew into Warsaw in October and decided to accept Gomulka’s election as General Secretary of Poland’s communist party. In Budapest events took a fatal turn. Khrushchev wanted to demonstrate that his generosity to the Poles and the Yugoslavs should not be construed as a sign of weakness. Imre Nagy, the leader of the party’s reformist faction, emerged as Prime Minister. He admitted several non-communists into his government, breaking the communist monopoly. The release of Cardinal Mindszenty sparked demonstrations of enthusiasm, followed by attacks on the hated security police. The government appealed for assistance from the United Nations and announced Hungary’s withdrawal from the Warsaw Pact. At dawn on November 4th, 1956, the Soviet armoured divisions poured back into Budapest. The Soviet military intervention battered the recalcitrants into submission within a month. After 10 days of heroic fighting by largely unarmed locals, Nagy took refuge in the Yugoslav Embassy, only to be handed over to the Soviets. In due course he and some 2,000 followers were shot. Hundreds of thousands of refugees flooded into Austria; the final toll of casualties ran into the thousands. Hungary was left in the hands of Janos Kadar – a loyal communist appointed General Secretary and security chief – and his “revolutionary government of workers and peasants”. Hungary’s national rising left an indelible stain on the Soviet record. It destroyed lingering sympathies for leftists, ruined the future of communist parties in the West and greatly increased the tensions of the Cold War.

In the 1960s, a new Soviet economic strategy was adopted, partly in imitation of the EEC and partly in recognition of the shortcomings of existing Stalinist methods. The CMEA acted as co-ordinator of joint planning and great store was put on the dissemination of modern science and technology. It was felt that “goulash communism” would cure well-fed citizens of their dreams of political liberty. The main idea was to introduce limited market mechanisms into a system still controlled by the state and to encourage enterprise by relaxing controls on compulsory deliveries and land ownership. Results came swiftly, so that by the mid-1960s Hungary’s prosperity was helping people to forget its political misery. Budapest became a city of thriving restaurants, well-stocked shelves and no politics. “Kadarisation” seemed to offer an attractive compromise between communism and capitalism. But some countries failed to react to the newly developing trends. The German Democratic Republic (DDR) stuck to its rigid ideological conformism and excessive pro-Sovietism. These sentiments were particularly fostered by the Stasi, a security apparatus of fearful reputation.
Nearly 40 divisions of Soviet occupation troops remained in the DDR (Putin’s training ground was near Dresden). The steady exodus of citizens was an embarrassment. On August 13th, 1961, all crossings between East and West Berlin were sealed. For the next 28 years the Berlin Wall turned the DDR into a cage: the most visible symbol of communist oppression in Europe. Great efforts were made to promote heavy industrialisation and to win international recognition through massive state sponsorship of Olympic sport. Nicolae Ceausescu (1918-89) became General Secretary of the Romanian Communist Party in 1965 and as Conducator created a neo-Stalinist cult of personality and a brand of nepotistic despotism that was described as “socialism in one family”, while keeping his people in fear and beggary. His infamous Securitate exceeded the atrocities of the KGB. He somehow succeeded in being knighted by the Queen of England. Romania has been aptly called the North Korea of Eastern Europe. According to historian Norman Davies, Bulgaria competed with East Germany for the “laurels of grim immobility”. Party leader Todor Zhivkov held it on its slavishly pro-Soviet course from 1954 to 1989. Czechoslovakia resisted de-Stalinisation until January 1968. General Secretary Antonin Novotny paid no attention to economic reforms. He was overturned by a Politburo coalition of Slovaks disgruntled with Czech dominance and Czechs eager for systemic reform. The new leader, Alexander Dubcek (1921-92), was a mild-mannered Slovak communist who tried to give socialism a human face. Dubcek and his team tried to introduce reform by suspending censorship and initiating an active debate amongst the people. The “Prague Spring” had struggled against the odds for barely seven months when his Soviet comrades expressed concern over alleged excesses such as freedom of the media.
At dawn on August 21st, 1968, half a million soldiers drawn from Warsaw Pact countries poured into Czechoslovakia without warning: Poles, East Germans, Bulgarians, and Soviet divisions from Poland and the Ukraine. Dubcek was flown to Russia in chains and the reforms were halted. Czechoslovakia’s frontiers were henceforth to be guarded by the Warsaw Pact, and Dubcek was replaced by Husak. When it was all over, Brezhnev spelled out the Soviet position at a summit meeting of Soviet Bloc leaders in Warsaw in November 1968. The Brezhnev Doctrine stated in the clearest terms that Moscow was obliged by its socialist duty to intervene by force to defend the “socialist gains” of its allies.

The invasion of Czechoslovakia was far less brutal than the suppression of the Hungarian Rising. But it unfolded on the world’s television screens. The East European communist ice age was yet to last another two decades. (See Norman Davies, op.cit., pp.1089-1106)

The Transformation of the Soviet Bloc

During the Brezhnev Era, 1968-85, the norms laid down by the Brezhnev Doctrine were progressively challenged by a growing tide of intellectual, social and eventually political protest. The principal challenger was Poland, while Prague was the seat of Soviet-powered “normalisation”. Compartmentalisation was a central feature of the Soviet strategy. The vehicle of control was “national communism”. Each country, whilst closely connected to Moscow, was effectively insulated from the others. The cordon separating the countries from one another was as severe as the Iron Curtain itself. Poland displayed a number of unusual characteristics. It was the largest of the Soviet satellites and the least sovietised. The Roman Catholic Church under its formidable Primate, Stefan Cardinal Wyszynski, never submitted to political control. The Church retained the loyalty of the peasantry as well as the new proletariat, which, in turn, undermined the Communist Party. Initially Gomulka succeeded in suppressing protest: the students, the intellectuals and the dockworkers. The Solidarity trade union grew from a group of determined strikers in the Lenin shipyards at Gdansk in August 1980. It was led by an unemployed electrician, Lech Walesa. It swelled into a nation-wide social protest with a following of 10 million. Dedicated to non-violence, it did not fight the communists; it simply organised itself without them. It won the right to strike and to recruit members. It became the only independent organisation in the Soviet Bloc. The ailing Brezhnev first put the Soviet army into motion, then hesitated, and left the task to the Polish army. On the night of December 13th, 1981, aided by deep snow, General Jaruzelski executed the most perfect military coup in modern European history. In a few hours up to 50,000 Solidarity activists were arrested and military commissars took over all major institutions. Martial law was declared.
When material conditions in Poland deteriorated even further, renewed strikes loomed. The desperate ministers turned to the leader of the banned Solidarity union, Lech Walesa. Early in 1989 round-table talks started to discuss power-sharing. Solidarity, now re-legalised, was allowed to contest a limited number of parliamentary seats. Walesa’s people swept the board in every constituency they contested. Prominent communists were not re-elected. As so often in the past, the Polish regime turned to Moscow for directions. But the reforming trend that had surfaced in Poland was about to break surface in Moscow itself – under the name of “perestroika” (restructuring). When Mikhail Gorbachev talked with the head of the Polish Communist Party on the phone, the answer was different from before: the Soviet Union would accept the outcome of a free election – in this case a government with a Communist minority and a non-Communist Prime Minister. That phone call, in many ways, ended the Cold War by taking the first step to transforming the Soviet Bloc. Tadeusz Mazowiecki became Poland’s first non-Communist Prime Minister in August 1989 and he knew he needed rapid action. He was looking for a Polish “Ludwig Erhard” – whom he found in the person of a Polish economist by the name of Leszek Balcerowicz. Balcerowicz was the author of an economic programme that would end up carrying not only Poland, but also much of Eastern Europe and even the Soviet Union, into the market economy. This was the year that communism collapsed, like dominoes, throughout Eastern Europe. But Poland led the way in terms of economic reform, and that was the work of Balcerowicz. He had spent two years studying business at St. John’s University in New York, subsequently investigated the dynamics of Korean and Taiwanese growth, studied Ludwig Erhard’s reforms in Germany and familiarised himself with various Latin American stabilisation programmes.

In Warsaw, since 1978, he had directed the Balcerowicz study-group that focused on the “problems” of socialism and the question of how to reform the Polish economy. It examined such basic questions as property rights, the proper role of the state in the economy, inflation and what was increasingly becoming the true hallmark of socialism – shortages. All of this convinced Balcerowicz that piecemeal reform was doomed to failure: that changes should be combined and applied rapidly in order to reach a “critical mass”. He came to the conclusion that people are more likely to change their attitudes and their behaviour if they are faced with radical changes in their environment which they consider irreversible, than if those changes are only gradual. Balcerowicz became finance minister and deputy prime minister in the new Solidarity government on condition that a rapid and massive transition would be implemented – a shock therapy to implement a “market revolution”. Before his plan was implemented, Poland was already running an annual inflation rate of 17,000 percent and was in default on a debt of $41 billion. Many enterprise managers were engaged in what was called “spontaneous privatisation” – stealing as fast as they could the assets of the enterprises they managed. January 1st, 1990, was the day of the “big bang” for the shock therapy to mark the decisive break with the communist past. Prices were freed, the currency was devalued and made convertible, wages were pegged, taxes were reformed, the government deficit was reduced from 7 to 1 percent of GDP and a restrictive monetary policy was put in place. Initially prices jumped almost 80 percent in a matter of days; food reserves were low and shortages continued; food riots and street demonstrations broke out. Within four weeks, things started to normalise. Farmers started to bring their produce to sell on the sidewalks. Industrial wares showed up. New merchants appeared across the country.
As shortages disappeared and supplies increased, prices started to come down. Markets were starting to work. Afterwards many people argued that gradualism would have worked better and that standards of living had declined. Balcerowicz had to counter attacks from many quarters. He endured all sorts of abuse. The privatisation of the large state enterprises proceeded very slowly, but other parts of the reform programme exceeded expectations. The birth of the new economy was particularly dramatised by the explosion of small business. Within 30 months over 700,000 new companies were registered. By mid-1997 the number was over 2 million. The “shortage economy” disappeared as a consumer-orientated system emerged. By 1992, the new private sector was generating over half of the entire GDP. The predicted mass unemployment did not occur because the new private firms created two million new jobs within two years. Poland was importing more, but now it could be paid for: hard currency exports doubled between 1989 and 1993. The most extraordinary outcome of all was Poland’s overall performance – economic growth averaging 6 percent a year from 1994. In the 1997 parliamentary election, Solidarity emerged as leader of a coalition government. The new finance minister was Leszek Balcerowicz, who had written the script for Poland’s market revolution. (See Yergin, D. and Stanislaw, J., op.cit., pp.267-273) In Prague in 1989, four decades after the communists seized control, dissidents succeeded in implementing a smooth transition to democracy. It was carried out under the tutelage of the writer Vaclav Havel, imprisoned under communism, who provided the moral authority and vision for what became known as the Velvet Revolution. After experimenting for 70 years with a dual household between Czechs and Slovaks, they followed their Velvet Revolution with a Velvet Divorce – an amicable separation to create two separate countries: the Czech Republic and Slovakia.
If Havel was the embodiment of principle and democratic values as President, his Prime Minister, Vaclav Klaus, was the man responsible for economic reform. He led the Czech Republic into an economic success story. Klaus, who as a student of economics had familiarised himself with the writings of Hayek and Friedman, became a free market convert. Liberal ideas governed his policies when he launched the Czech version of shock therapy in January 1991, exactly one year after Poland’s. He dismissed the debate between shock therapy and gradualism as irrelevant and unrealistic when it came to the realities of transition. He argued that the more they put the brakes on the transition, the more costly and painful it would be.

The Czech programme followed along the lines of Poland’s: immediate freeing of most prices, currency convertibility and devaluation (combined with import surcharges to provide some protection) and tight monetary policies. The effects were much the same as in Poland – a great burst of inflation at the start and then a quick settling down, followed by strong economic growth. One important difference was that the Czechs went for quick and massive privatisation on the premise that it was better to get property into private hands than to wait for restructuring or a fully developed legal and institutional framework. As early as 1990 some property was returned to the people from whom it had been confiscated when the Communists came to power in the late 1940s. The government experimented with a variety of privatisation measures. The best known was the “voucher system”. Books of vouchers were sold to all citizens over the age of 18 who wanted them. These vouchers, in turn, could be used either for direct purchase of shares in companies, or for indirect purchases through voucher funds. Elsewhere, the transformation of Communist Bloc countries proceeded with varying degrees of success. Some countries had neither strong mercantile traditions nor advanced technologies to build on. In some instances both the cultural sub-structure and the physical infrastructure were absent. In Hungary the transition to democracy in 1989 was relatively smooth. The Hungarian People’s Republic was abolished without bloodshed and the Communist Party voluntarily transformed itself into a social democratic party. In Romania, after a bloody uprising in Bucharest, the hated Securitate defended itself to the death. The liberation culminated in the execution of the Ceausescus. Gorbachev, though not the architect of East Europe’s liberation, did play a significant role. Historian Norman Davies describes Gorbachev as the lock-keeper who “...
seeing the dam about to burst, decided to open the floodgates and to let the water flow ... the dam burst in any case, but it did so without the threat of a violent catastrophe.” (Norman Davies, op.cit., p.1123) The Soviet Bloc gradually dissolved. The Brezhnev Doctrine had died unnoticed. The CMEA and the Warsaw Pact ceased to function. One after another the ruling communist parties bowed out. Every new government declared its support for democratic politics and a free market economy. In Berlin, on November 9th, 1989, East German border guards stood idly by as crowds on both sides of the Berlin Wall started to demolish it with gusto. The DDR government under Honecker had been told by Gorbachev that it could not count on the support of Soviet troops. By 1990 the Soviet troops were being systematically withdrawn. The drive for re-unification in Germany accelerated as the organs of the DDR evaporated. Helmut Kohl won the general election and the Federal Republic formally absorbed the citizens, territory and assets of East Germany. In time, the unification proved to be more cumbersome than expected in terms of the political and financial costs for Germany and for Germany’s neighbours. The fires of freedom spread far and wide. They soon spread to the constituent republics of the major federations: the Soviet Union and also Yugoslavia. Though not yet recognised, independent status was declared by the Ukraine, Estonia, Latvia, Lithuania, Georgia, Chechnya and Moldavia. The pulverisation of the Yugoslavian federation proved especially vicious: Serbian nationalists stoked the passions for a “Greater Serbia” encompassing Serbs in Bosnia, Croatia and Serbia. Panic and intercommunal violence rapidly gripped several parts of the disintegrating Yugoslavian state. Transforming communism proved to be a thorny problem in all post-communist countries. The prevailing laws could not be abandoned overnight. The communist nomenklatura, now declaring undying devotion to democracy, could not be dismissed en masse.
The ex-secret policemen could not be easily unmasked. Germany was rocked by the exposure of hundreds of thousands of Stasi informers. Poland re-opened investigations into political murders. Czechoslovakia was alone in passing its Lustracni zakon (Verification Law), which sought to exclude corrupt or criminal officials. The legacy of Soviet-type economies was dire. Despite initial successes, such as the currency reform and the conquest of hyperinflation under Poland’s Balcerowicz Plan (1990-91), it became painfully clear that no overnight remedy was available. All former members of the Soviet Bloc faced decades of agonising re-organisation on the way to a viable market economy. Everywhere the social attitudes engendered by communism persisted. Embryonic civil societies could not rush to fill the void. Political apathy was high and petty quarrels were ubiquitous. Residual sympathy for communism as a buffer against unemployment and uncertainty was greater than many supposed. The masses were conditioned to disbelieve all promises and to expect the worst. The cynical idea that someone must lose if someone else is gaining was deeply ingrained. The dimensions of the devastation were much larger than anticipated. The fact that communism died without a fight did not ease the pain which it left behind. (See Norman Davies, op.cit., pp.1117-1125)

The Collapse of the USSR

Brezhnev died in November 1982, whereupon Yuri Andropov, former head of the KGB, succeeded him both as General Secretary and as President of the Praesidium of the Supreme Soviet. After less than a year in office, Andropov died in February 1984. He was succeeded in both posts by Konstantin Chernenko, who died on March 10th, 1985. These events surrounding the geriatric Soviet leaders symbolised the approaching end of the Soviet Union. The political and economic system was in advanced decline. In March 1985, Mikhail Gorbachev emerged as the fourth General Secretary of the CPSU in three years. He was chosen by the Party apparatus and had no electoral credentials. He was the first Soviet leader to be untainted by a Stalinist record. He was affable, quick-witted and spoke without notes. Mrs. Thatcher was quick to pronounce him a man “with whom we can do business”. Gorbachev’s early months in office were taken up by reshufflings of the Politburo, but he soon moved on East-West relations. He proposed a 50 percent reduction in all nuclear weapons. It was rejected, but Gorbachev continued, pressing for an end to the Cold War. A young amateur pilot from Hamburg, Mathias Rust, provided a catalytic event. Rust piloted a tiny private monoplane up the Baltic from Hamburg, crossed the Soviet frontier in Latvia, flew at tree-top level under the most sophisticated air defences and landed on the cobblestones near Moscow’s Red Square. Single-handedly, he made the Cold War look ridiculous. Mikhail Gorbachev embarked on a remarkable era of change under the twin banners of glasnost (openness) and perestroika (restructuring). He immediately shook the Soviet Union and its subject regimes in the Soviet Bloc to their roots. He unleashed social, political and nationalistic forces which quickly assumed a momentum of their own. It became, in effect, the “Second Russian Revolution”. After new elections in March 1989, Gorbachev launched a series of new initiatives.
The vast majority of political prisoners were released as part of his glasnost initiatives. As part of perestroika he undertook to inject more market principles into the economic management of society and to reduce central planning. He set out to stimulate debate about solutions. It opened the floodgates of unprecedented arguments. But the average citizen remained beset by many problems: shortages, rising prices, labour unrest and unemployment. Things in general appeared to be getting worse. Soviet troops were withdrawn from Afghanistan. In December 1989, Gorbachev and President Bush made a joint statement that the Cold War had ended. Gorbachev also announced that the Soviet Union would not use force to impede the thrust to democratisation in the members of the Soviet Bloc. Gorbachev found himself in an anomalous position. In spite of his liberal reputation in the West, he was still a convinced communist who wanted to humanise and revitalise the system, not to destroy it. Like Brezhnev before him, he arranged to be given the office of state “President” – as if he were the equivalent of the American President. Yet he never faced the electorate and never sought to relinquish his main, unelected office of CPSU leader. He never proceeded beyond tinkering with half measures and marginal issues. He rejected more radical plans: e.g. to decollectivise agriculture, to legalise private property, to deregulate prices. As a result, the planned economy started to collapse in conditions where the market economy could not start to function. He invited the republics to state their demands, but refused to make any concessions. Gorbachev was a skilful tactical politician, coaxing the conservatives and restraining the radicals; but he did not win any substantial public confidence.

Gorbachev and his Western admirers overlooked the essential features of the Soviet system and underestimated the consequences of unleashing pent-up expectations and aspirations. They ignored the implications of removing coercion from a political machine that had known no other driving force. The Party provided the physiological dynamics of the political and governmental organs. Without these dynamics the brain and the heart and the nervous system could not function in a coherent way. They underestimated the effects of decades of Party indoctrination which rendered hierarchical echelons of administrators ineffectual without top-down directives. They continued to think of the Soviet Union as an integrated national entity (moya strana, “my country”, as Gorbachev was still calling it in 1991). In reality, without the binding connections of a dictatorial party machine, it was an artificial political unit that was bound to tear apart along its many fault lines. They misjudged the effect of glasnost on the suppressed nationalities, for whom freedom of expression could only mean demands for independence. The rivalry between central and republican structures and between party and state bureaucrats ruined the centralised distribution network. The USSR was forced to rely on massive food-aid shipments from the West to feed its population. This occurred in a year with a record grain harvest. In February 1990, Gorbachev proposed that the CPSU surrender its constitutionally guaranteed “leading role” in Soviet society. This opened the Soviet Union to a multi-party system.
He reorganised the Politburo and shifted towards a Western-style presidential system in which the president would be popularly elected from 1995. The Congress confirmed Gorbachev in that post in March 1990. He was also re-elected as party leader. At this point Yeltsin announced that he was quitting the CPSU. Loosening the fetters of the Party in Soviet political life also unleashed pent-up ethnic tensions and nationalistic aspirations in Uzbekistan, Georgia, Azerbaijan and Armenia that eventually threatened the very Union itself. Gorbachev, committed to preserving the Union, soon faced serious separatist challenges in the Baltic republics of Estonia, Latvia and Lithuania. An event of crucial importance was the election of Boris Yeltsin as the first popularly elected democratic president of the Russian Soviet Federative Socialist Republic (RSFSR) on June 12th, 1991. The RSFSR was the largest, most populous and most important of the 15 Soviet republics of the USSR. Its capital was Moscow, also the capital of the Soviet Union (USSR). With its population of around 150 million, it occupied a predominant position in the political and economic life of the USSR. After his election, Yeltsin began to distance his Russian constituency from the inclusive USSR represented by Gorbachev. Gorbachev had been awarded the 1990 Nobel Peace Prize; in his Nobel lecture, delivered in June 1991, he made a plea for a massive unconditional international foreign aid programme and warned that if perestroika failed, the prospect of entering a new peaceful era in history would vanish for the foreseeable future. He was invited to a London summit meeting of the G-7 and promised expert aid and technical assistance through the World Bank and the IMF – also promises of development aid if the Soviet Union continued its transition to a free-market economy. Reading the signs of the times, Gorbachev started an initiative to convert the USSR into a much looser union of sovereign republics. A new union treaty was set to be signed in August 1991. 
It omitted any reference to socialism and endorsed all forms of property ownership. It vested the republics with ownership of and control over their economic resources. The Ukraine indicated that it required more debate on the details. The Baltic States, Georgia and Moldavia flatly rejected it, but eight other republics, including the Russian Federation, indicated their willingness to sign. On August 18th, 1991, while vacationing at the Crimean retreat of Foros, Gorbachev was visited by a delegation of a body calling itself the “State Committee for the State of Emergency” (SCSE), which demanded that Gorbachev sign a decree handing over all his powers to Vice-President Yanayev. Gorbachev refused and was placed under house arrest. The SCSE banned all demonstrations and restrictions were placed on the media. Gorbachev’s powers were suspended. Large numbers of tanks and armoured vehicles rumbled into Moscow and took up key positions. Yeltsin clambered onto a tank, denounced the coup as illegal and declared that he was assuming command. Thousands of supporters assembled at the Russian parliament building. On August 21st, 1991, the SCSE’s stratagem to seize power collapsed as the troops were ordered to return to their barracks and several coup leaders took flight. When Gorbachev returned to Moscow, Yeltsin, the hero of the hour, was now the senior partner in a new relationship. The coup leaders were arrested to face treason charges. Yeltsin then signed a decree ordering the suspension of the Communist Party in the Russian Federation. The other republics in the USSR followed Yeltsin’s example, cracking down on the Communist Party. Gorbachev had now reached the end of his political credit line. He resigned as General Secretary just before the Party dissolved itself. On September 5th, 1991, the Soviet Congress of Deputies passed its last law, surrendering its powers to the sovereign republics of the former Soviet Union. On October 24th, 1991, Gorbachev issued a last decree, splitting the Soviet KGB into its component parts. Gorbachev was left standing as the figurehead president of a phantom state. In December 1991 Gorbachev made a last vain attempt to summon the heads of the Soviet Republics to Moscow. He was unaware that the leaders of the RSFSR, Byelorussia and Ukraine were already negotiating their future association as a “Commonwealth of Independent States” (CIS) as a core locus of control over the USSR’s strategic weapons arsenal. On December 8th, 1991, they signed a declaration that the USSR had ceased to exist as a subject of international law. 
On December 12th, 1991, the Supreme Soviet of the RSFSR ratified the declaration ending the USSR as a geopolitical reality by an overwhelming majority. On December 21st a further eight former USSR republics met in Kazakhstan and signed a declaration joining them as member states of the CIS, bringing its membership to eleven. On December 25th, 1991, by special law, the RSFSR was renamed the “Russian Federation”. The President of the Russian Federation informed the Secretary-General of the United Nations that the Russian Federation, as the successor state to the USSR, would continue the membership of the Soviet Union in all organs of the UN, including the Security Council. On the same date, Gorbachev signed a declaration proclaiming the formal end of the USSR. The peaceful end of Europe’s last empire was complete.

Challenges of Transition and Transformation

The collapse of the Soviet Union confronted its remnants with the formidable task of forging viable nation-states. The former members of the Soviet Bloc (Poland, East Germany, Czechoslovakia, Hungary, Bulgaria, Romania, Albania and Yugoslavia) at least had the rudiments of self-government to use as foundations. The former constituents of the USSR (the fifteen republics) had even larger developmental gaps to bridge. In many areas the new leaders faced a multitude of challenges: the legacies of the old systems; the agonising problem of winning the full commitment of their citizenry; levels of expectation far exceeding the delivery capacities of the fledgling political and economic systems. Disappointment or failure was inevitable – in most cases both reactions were the order of the day. Some small steps were taken by Western Europe to bridge the East-West divide. NATO established a Joint Co-operation Council to which former Warsaw Pact members were invited. The European Community (EC) signed treaties of association with Poland, Hungary and Czechoslovakia. The European Bank for Reconstruction and Development was opened in London. Food and financial aid was sent to the ex-Soviet Union and peace-keeping missions to ex-Yugoslavia, where the ethnic distribution of people did not match the newly drawn national boundaries. Yet the supportive steps taken were exceedingly small. The EC was still blocking agricultural imports from the East, throttling trade. Except for German investment in East Germany, Western investment in East Europe was minimal. No co-ordinated foreign policy was forthcoming; no effective action was taken to contain the looming conflicts in Croatia and Bosnia; no dynamic leadership emerged. The habits of a generation led people to assume that West was West and East was East. For 40 years the Iron Curtain had provided the framework for political and economic life. It disguised the close connection of their interests. In the command economies of the Soviet era, supply and demand were irrelevant economic data. Resources were allocated by bureaucratic decision rather than by the millions of individual choices that add up to supply and demand. What mattered were the preferences and goals of the political leaders, which were implemented through the mechanisms of central planning. Government agencies at the centre made the whole system work. Their names all began with gos – an abbreviation of the Russian word for “state”. Gosplan determined the plan, Goskomtsen set prices, Gossnab allocated supplies, Gostrud determined labour and wage policies. The Communist Party provided the decision-makers. The economic tests of profitability and efficiency were not part of the equation. What mattered was following the directives. From the 1930s to the 1970s, the Soviet system enjoyed immense prestige around the world. It was seen as delivering the goods: rapid industrialisation and high growth rates. The military-industrial complex stood at the centre of the system: weaponry and armaments. Agriculture, services and consumer goods were neglected. The whole economy was subordinated to the needs of the military-industrial complex. When the Cold War approached its final stage, the need for extra tanks, fighter planes, troop carriers and military equipment disappeared. Thousands of factories and millions of factory workers became superfluous. 
Their production lines came to a standstill; their skills were inappropriate, their mindsets brainwashed and their expectations wrong-headed. Gorbachev tried to reform the old system by dismantling the machinery of central planning and by phasing out the dominating position of the Communist Party. But he did not replace it with anything that could serve as the driving force. There was nothing to keep the parts working together. The most visible deficiencies were high inflation and shortages. The shelves in the shops became emptier and emptier. The queuing lines grew longer. The industrial sector continued to be enormously irrational, inefficient, wasteful and polluting. There were no templates or recipes to guide the transformation of an entire economic system towards a market system. The only recent experiences to go by, Poland and the Czech Republic, were still very much works in progress – countries of 40 million and 10 million people respectively, in contrast to the Soviet Union’s roughly 300 million. No country had ever faced the scale and urgency of the Soviet situation. Not one of the fundamental prerequisites for a market economy existed in the Soviet Union or its immediate successor, the Russian Federation. There was no price mechanism to convey information about supply and demand. Nor were the rules of the game – the norms and laws that guide behaviour in the marketplace – understood. There was no system of established law to deal with contracts and property rights. Most individual properties had never been properly surveyed or recorded in a deeds registry. All this had to be built up from scratch. There was no laboratory in which to practise. Finding people with relevant expert knowledge was no easy task either. Russian economists were trapped in a no-man’s-land between a dismal command economy and a superficial picture of a market economy. 
There were small pockets of economists in Moscow and Leningrad who had obtained special permission to gain access to Western economic literature (held in the Spetskhran, a classified, secluded area in libraries) – if they had sufficient reading knowledge of foreign languages. A few young Russian economists, such as Yegor Gaidar and Grigori Yavlinsky, had access to information about the forms of market socialism tried in Hungary and Yugoslavia, as described by Janos Kornai, a Hungarian economist. They were also well informed about the Balcerowicz reforms in Poland and their variants implemented by Vaclav Klaus in Czechoslovakia. They were also influenced by the Japanese experience after World War II.

As the USSR approached its dissolution towards the end of 1991, Boris Yeltsin had been preparing for Russia’s sovereignty and his assumption of authority. He invited several groups of competing economists to come up with a suitable economic strategy. Yegor Gaidar was the leader of a group calling for radical reform. He and his group were convinced that shock therapy was the only way to go. Yeltsin chose Gaidar and his team. In November 1991, Gaidar was made deputy prime minister and minister for finance in the RSFSR (later called the Russian Federation). Towards the end of 1991, as the USSR was about to be dissolved, the country was on the brink of collapse. In reality it meant that a nuclear superpower was close to anarchy. The army was not accountable to anyone. The economy was in chaos, with fifteen central banks in fifteen independent republics. The country had no money, no grain to last through the next winter and no way to generate a solution. Public finances were falling apart. The old economy was plunging into deep depression, with output plummeting as the orders for tanks and other military equipment disappeared. Coal supplies were disrupted and there was a good chance that Moscow and St. Petersburg would have no heat in the winter. In the words of Gaidar: “It was like travelling in a jet and you discover that there’s no one at the controls in the cockpit”.

Yeltsin’s Uphill Struggle

In February 1992, the Russian Federation, by agreement with the other republics of the CIS, was recognised as the legal successor to the USSR. It also assumed control over the former USSR’s nuclear arsenal, with the commitment from the other republics where nuclear weapons were stationed to transfer those to Russia. This was accomplished under the leadership of Boris Yeltsin. Despite the resistance of hard-line conservatives, Yeltsin, with a slim majority, persuaded the Russian parliament to accept radical reforms of the Russian economy. The most revolutionary economic reform came into play on October 1st, 1992, when every Russian was issued with coupons to the value of 10,000 roubles (then US$40) with which they could buy shares in state-owned enterprises. Russia and Ukraine reached a broad agreement on an 18-point plan to settle their differences over such matters as the possession of the Crimea, the disposition of the Black Sea fleet and economic liabilities. The Yeltsin government moved quickly to free prices and to remove the huge distortions. The most immediate problem was grain: the cities were running out of bread. Gaidar and his colleagues knew how important grain shortages had been in Russian history – helping to provoke revolution in 1917 and helping to create the Stalinist economy in the late 1920s. The dangers were food shortages, riots and hyperinflation. The state procurement agencies created by Stalin, which had requisitioned grain from the peasants since the early 1930s, were no longer there. As in Poland, the government had to rely on the incentive of newly freed prices to solve the problem, and wait. In June 1992, the first harvest began to reach the cities. Other controversial reforms were initiated and partly implemented. Many prices were decontrolled and Russia started restoring public finances and reducing inflation. Foreign trade was liberalised and economic activity was freed. Military procurement was cut by 70 percent. 
Subsidies were slashed and cheap credits to factories systematically reduced. Opposition to the reforms intensified, delaying implementation and sometimes almost derailing the entire process. Enterprise managers and bureaucrats had much to fear from the test of the market. The military saw its resources disappear. The elderly held the reformers responsible for the high inflation that was devouring their pensions – not realising that it was the cheap-credits policy of the central bank (whose head was opposed to reform) that was fuelling inflation. Local politicians saw the enterprises that supported whole towns collapse and blamed the reforms. The social safety net was frayed. The military-industrial enterprises that shrank or collapsed had formerly provided their workers with the bulk of their social services – housing, child care, medical care, recreation. Who would now provide these services? Those who lived on government salaries – teachers, doctors, researchers in state institutions – saw the value of their salaries and wages drastically reduced. 

For managers of public enterprises, bureaucrats and pensioners, the “market” was a source of great stress. It was something that invaded their lives: attacking the body of society, disrupting their belief systems, devaluing their experience and questioning the very rationales that had governed their lives and justified their sufferings. To all of these people the market meant anarchy. What was happening before their eyes was immoral, against their deepest instincts. Making money was suspect. “Speculation” was the all-purpose term of opprobrium and insult. Trade was equated with the mafia. The black Zil and Chaika limousines of the regime figures were customary and proper – not the new Mercedes filled with arrogant young business tycoons holding cellular phones and heavily made-up young women. The radical reformers were described as “ruthless populists” who showed no deference to the old system that had resisted Hitler and put the first Sputnik into space. Alexander Rutskoi, Yeltsin’s vice-president and later opponent, attacked Gaidar and his team as “small boys in pink shorts and yellow boots”. In an effort to stabilise the political situation, Yeltsin made Viktor Chernomyrdin prime minister in December 1992. Chernomyrdin had been the most successful industrialist in the country as head of Gazprom, the state gas monopoly, which had grown into the largest energy company in the world. He was widely respected and had not emerged through the military-industrial complex. The actual reform process slowly moved ahead – sometimes pausing uncertainly, sometimes reversing. Yeltsin was constantly pressured to back away from the reforms. But reform and necessity had their own inescapable logic and momentum. Every time it slowed, inflation would rise dramatically or the rouble would collapse. Such events pushed Yeltsin back onto the path of reform. Yeltsin himself was not thought to have deep economic views. Within the Russian Federation the struggle between the President and the reactionary parliament sharpened. 
The main conflict involved budgetary allocations. On September 21st, 1993, Yeltsin issued a decree dissolving parliament and calling for fresh elections. The old Congress and Supreme Soviet system was to be abolished and elections for a new parliament (a Duma) were to be held on December 12th, 1993. Presidential elections were to be held in June 1994. Parliament refused to accept the Yeltsin decree. It appealed to the Constitutional Court, which promptly found Yeltsin’s decree illegal. Parliament then swore in Vice-President Rutskoi as head of state. The crisis reached its peak in early October 1993 when a demonstration of anti-Yeltsin protesters burst through police cordons and entered the parliament building in Moscow. They also tried to capture the television centre, where a number of protesters were killed by special squads of the Interior Ministry. On October 4th, tanks opened fire on the parliamentary building and by the end of the day the government’s control had been restored. Rutskoi and some of his supporters were arrested. Russia’s constitution was adopted at a national referendum on December 12th, 1993, the same day as the elections for the new parliament, the Federal Assembly. A majority of voters supported opposition parties: capitalising on the social distress, Yeltsin’s opponents scored big gains in the parliamentary elections. One month later, in January 1994, a shaken Yeltsin accepted Gaidar’s resignation. Viktor Chernomyrdin then took direct responsibility for the economy. On February 23rd, 1994, the Duma declared an amnesty for the leaders of the August 1991 coup and the October 1993 parliamentary resistance. The prisoners were released despite Yeltsin’s opposition. The government retreated from financial austerity and opened the floodgates of credit. The result was a collapse in the value of the rouble. Chernomyrdin had to return to the reform path of sound money and low inflation. 
By then Russia was a country of two economies: the old state-controlled Soviet military-industrial system on the one hand, and a new, market-based economy, responsive to consumers’ needs and demands, on the other. The lead in the latter was taken by the post-communist generation. (See Yergin & Stanislaw, op.cit., pp.275-289)


Privatisation

The Russian reformers decided to follow the Czech model, which handed out vouchers on a mass basis. It had the potential to reduce corruption by eliminating back-room deals and increasing transparency. The Russian government issued vouchers worth 10,000 roubles each to every citizen, including children. Eventually 144 million out of 147 million Russians received vouchers. They could be exchanged for shares in companies through auctions. Yeltsin declared it to be a “ticket to a free economy ... creating millions of owners rather than a handful of millionaires”. Vouchers became the first liquid security in Russia. People could hold on to them, acquire shares in specific companies, exchange them for shares in mutual funds, or sell them. Markets sprang up for buying and selling vouchers. In Siberia, women sold vouchers from stalls “... like carrots and cabbages”. The critical question was what proportion should go to current managers and employees and how much the general public and outside investors could acquire. The first major privatisation was the Bolshevik Biscuit Factory in 1992. The workers won control and ended up selling a controlling interest to France’s Danone. The programme moved ahead despite constant opposition. The momentum was maintained and some nine hundred thousand workers a month moved from the state sector to the private sector via voucher privatisation. The privatisation programme ran for almost two years. It began in October 1992 and was over by July 1994. During that time, the greater part of Russian industry was privatised. A property-owning stratum in the population had been created. As a result of the programme, some 40 million people became shareholders, either directly in companies or as members of mutual funds. Both insiders and outsiders had a stake in their privatised firms. This provided an incentive for companies to do better: to improve their products, to find markets and adapt to them, and to manage costs. 
Privatising medium- and larger-sized firms was only part of the process. The state also owned housing which, in most cases, meant apartments. Those who lived in apartments had a form of quasi-ownership, with units passed down as inheritances. Occupants were allowed to buy their units at very low cost and by October 1994, some 105 million apartments were in private hands. Shops and small enterprises were left to their localities. There were important limitations on the privatisation effort. “Strategic” and certain defence companies from the military-industrial complex were spared the privatisation process on account of their importance to the national interest. Opposition to privatising these politically sensitive and well-connected companies was much stronger than for the others. At the late stages, banks were able to acquire a substantial number of shares still in government hands by granting loans to the revenue-starved government. This was seen as an ill-concealed method of strengthening the hands of insiders. Yuri Luzhkov, the popular mayor of Moscow and an ally of Yeltsin, managed to keep much of the state assets in Moscow excluded from the national programme. The city sold or leased them out on its own terms to the benefit of the city’s coffers. The proceeds were subsequently used to finance a major face-lift for the city. In the face of enormous challenges, the privatisation programme launched by Yeltsin was surprisingly successful. By 1996, some 18,000 industrial enterprises had been privatised – including more than 75 percent of all large and mid-sized industrial firms and something close to 90 percent of industrial production – bringing the proportion of industrial workers employed in the private sector to 80 percent. Over 80 percent of small shops and retail stores had also been privatised, including 900,000 new ventures established by Russian entrepreneurs. It was claimed that 70 percent of GDP was generated in the private sector. 
Privatisation was not well received in the arena of public opinion. In the public mind it was associated with job losses, high inflation, social distress and particularly with corruption and insider dealings. Some people, by cornering assets and aggregating coupons, amassed substantial holdings. In large sections of the public mind, privatisation was theft of the labours of the Soviet people by the nomenklatura, the mafia, shady speculators, or the banks and financial institutions of the new Russia. Some well-considered criticism has also been levelled at the outcome of privatisation in Russia. Grigori Yavlinsky argues that the way privatisation was done did not result in creating private property, but merely in “cartelisation”. Others contend that the privatised firms required radical restructuring and that the managers did not have the capacity, experience, competence or desire to restructure. Great fortunes were made during the transition years by a handful of “insiders”: by collecting vouchers from unsuspecting people, taking advantage of subsidised credit, and buying commodities at low domestic prices and selling them at world prices. The outcome appears to be the concentration of economic and political power in what critics call “cartels” and “oligarchs”. A properly functioning system of private enterprise required the development of financial markets which could efficiently provide the capital needed by the various economic sectors – and, at the same time, support the development of the skills and competencies required by a market economy. (See Yergin & Stanislaw, op.cit., pp.287-293)

The End of the Yeltsin Era

In October 1994, Dudayev, the President of the Caucasian republic of Chechnya, declared its independence from Russia. The Russian army, supported by anti-Dudayev forces, made an assault on the Chechen capital, Grozny. When the assault was defeated by the Chechens, Yeltsin sent in the Russian Air Force, which eventually resulted in an armistice and the appointment of a new Chechen President. The Russian Duma passed a motion of no confidence in Yeltsin’s government and four of his ministers resigned. New elections for the Russian Duma were held on December 17th, 1995. The Communist Party won the most seats (157 of 450) and the position of Yeltsin’s government became virtually untenable. In late June 1996 Yeltsin suffered a heart attack, his third in 15 months. Despite his indisposition, Yeltsin won the next presidential election, obtaining 53 percent of the votes. In March 1996 the IMF formally approved a three-year credit line of US$10.1 billion, the second largest in the Fund’s history. A month later the Paris Club agreed to a comprehensive rescheduling of over US$40 billion of Russian debt. During the following months, several constituent republics of the former USSR found it necessary to rekindle closer ties with the Russian Federation. In April 1996, an economic co-operation agreement was signed with Belarus, Kazakhstan and Kyrgyzstan. Renewed efforts were made to strengthen the peace treaty with the newly elected Chechen President, Aslan Maskhadov, by signing agreements to regulate the transportation of Russian oil through Chechen territory. In January 1998, the Yeltsin government announced a major currency reform: one new rouble became equivalent to 1,000 old roubles. Throughout 1999 the insurrection in Chechnya continued. Russian forces continued air strikes and artillery bombardments. By October 1999, Russian troops had reached the outskirts of Grozny, the Chechen capital. Much has been said and written about Yeltsin’s role. 
Some commentators focus on his drinking habits, his eccentric behaviour, his boisterous temperament and many other human frailties. From 1995 he was particularly hampered by his physical fragility. But Yeltsin’s momentous role and contributions cannot be evaluated without proper regard to the seismic changes that engulfed Russia in the 1990s. In the light of the challenges he faced, Yeltsin’s contributions should not be underestimated. During the last convulsive days of the Soviet Union, Yeltsin prevented the “old guard” from regaining control and arguably saved Gorbachev’s life in the process. When Boris Yeltsin stood on the tank in front of the Russian parliament on August 21st, 1991, he effectively brought the Soviet Union’s regime to an end. Gorbachev handed over power – including the nuclear codes – to Boris Yeltsin as President of the Russian Federation. 

During the period December 1991 to December 1999, Boris Yeltsin miraculously prevented the total disintegration of the Russian Federation as a geopolitical unit. On the economic front, Yeltsin’s presidency brought the communist system to an end, introduced the countervailing forces of private property and market institutions, set in motion forces that would carry Russia forward into the world of market economies, and did an outstanding job of bringing down inflation and stabilising the rouble. On the political front, he managed to fend off serial reactionary efforts to restore the Communist regime, suppressed several attempts at insurrection with limited bloodshed, and designed and implemented a new constitutional dispensation providing for representative legislatures, regular popular elections, multi-party participation, freedom of movement, freedom of association, freedom of speech and freedom of worship. There were also many entries on the negative side. In terms of demographic indicators such as infant mortality, life expectancy, health care and social welfare services, the deficiencies were huge. Yeltsin faced many obstacles. Many Russians demonstrated their opposition and even outright hostility to the market system. For generations Russians had been conditioned to expect the government to be the source of all good things. They viewed private enterprise with suspicion. The pervasive problems of corruption and crime constantly threatened the legitimacy of the new system and undercut the consensus necessary to its effective functioning. The Duma continued to be dominated by anti-reform forces – both Communist and Old Guard nationalist. The political system took on the character of a “hybrid regime”, simultaneously showing characteristics of democracy, authoritarianism, populism, oligarchy and nepotism, and it constantly verged on anarchy. A very visible problem area was the disproportionate power of the new business elite: the group known as the “oligarchy”. 
They were “oligarchs” because they had money, power and media control. They unabashedly wielded their power in the continuing struggle over the ownership of state assets. Much of their power was concentrated in the big banks and the resources companies. These unaccountable power centres threatened to destroy the credibility of the entire reform movement. Another major obstacle for the Yeltsin regime was the collapse of the old bureaucracy and the slow emergence of a responsible, accountable alternative based on the rule of law. The legal structure and the legal process were totally inadequate to provide the foundation required for the proper functioning of the market system. Without a sound legal foundation, property rights and business contracts could not be enforced. Organised crime rackets were more powerful than an underpaid, demoralised police force. Without proper tax collection, salaries and pensions could not be paid. By 1998 the entire imbroglio took on crisis proportions. The tax system was based on excessively high rates and absurdly low collections. Tax evasion and non-payment encouraged the government to seize bank accounts. This in turn gave enterprises an incentive to avoid dealing in cash through bank accounts. The economy became “deliquified” of money: up to 75 percent of economic activity was being conducted by barter and promissory notes. In order to fill the gap between revenues and expenditures, the government resorted to short-term borrowing, pushing up the burden of interest payments. By this time Yeltsin had lost much of the credibility and legitimacy he had gained as the man who had stood up against the Communist system and its tanks. He had become an erratic, unpredictable and isolated politician afflicted with ill health. In 1998 Russia was hit by two serious external shocks. The first was the collapse in oil and commodity prices, which reduced Russia’s export earnings and taxes. 
The second was the contagion from the Asian economic crisis, which led to a contraction of international portfolio investments. In view of Moscow’s huge short-term debt, money began to flood out of the country. The Russian stock market crashed. The IMF bail-out was not enough to stop the outflow of foreign capital. The Russian government defaulted on its debt and devalued the rouble. Panic swept across the country and the Communists saw the crisis as a chance to strike back at Yeltsin. Yevgeny Primakov stepped forward as a compromise figure. He had served as a member of the Politburo of the Communist Party in Gorbachev’s time. Yeltsin kept him on as head of the foreign intelligence service until 1996, when he made him Foreign Minister.

Primakov’s strength as Prime Minister was his acceptability to most main groupings, his ability to operate and make deals, and his lack of personal ambitions for the presidency in the 2000 elections. Primakov stayed in office from September 1998 until May 12th, 1999. Yeltsin cited Primakov’s inability to turn around the stagnant economy as the reason for his departure. After Sergei Stepashin had served as Prime Minister for a few months, Yeltsin appointed the former KGB officer and deputy mayor of St. Petersburg, Vladimir Putin, as Deputy Prime Minister. On August 16th, 1999, the Duma approved Putin’s appointment as Prime Minister. On December 31st, 1999, Yeltsin resigned as President, thereby elevating Vladimir Putin to the post of acting President. Putin was quick to sign a decree granting Yeltsin and his family immunity from criminal prosecution, arrest, search or interrogation.

Putin’s Russia

In his “Special Report on Russia” for The Economist of November 29th, 2008, Arkady Ostrovsky says Russia is no longer the Soviet Union: it is now a “... corrupt oligopoly with a market economy of sorts, recovering as a world power”. He says Russia is ruled by former KGB men like Putin and Medvedev who are building a Soviet Union Mark II: crushing independent journalists; using the state media to pump out anti-American propaganda; once again holding military parades in Red Square; stage-managing elections; using state television to tell people what they want to hear; cleverly exploiting nostalgia for Soviet cultural symbols; restoring Stalin as an icon of power and respect; and sustaining the authoritarian and corrupt rule of the rent-seeking elite. Ostrovsky describes a country beset by chronic and dangerous weaknesses: an economy that depends on natural resources and cheap credit; private businesses that are constantly harassed by the state; corruption so pervasive that it has become the rule; a growing gap between rich and poor; a population that is shrinking by 700,000 a year; a society where the rich and powerful are steeped in luxury while the average Russian earns a meagre $700 a month; and a country that builds pipelines to Europe while many of its own people have no gas or even plumbing. Until 2008 the Russian economy had been flush with money. Oil, gas and metals made up around 80 percent of Russia’s exports. For the previous five years the economy had been growing at around 7 percent a year. Some of the money was stashed away in a stabilisation fund, but a large portion fuelled an unprecedented consumer boom. Since 1999, real incomes have more than doubled and growth in retailing has averaged 12 percent a year. But there has been a good measure of capital investment too, especially in the extractive industries. Fixed investment in 2007 rose to a record level, up 21 percent on the year before.
Since the 1998 crisis, private initiative, freed by the market reforms, became the main force behind Russia’s economic recovery. Private ownership transformed the Soviet-era giants. Like a powerful drug, the oil money has masked the pain the Kremlin has caused to the Russian economy, which has failed to convert the oil bonanza into domestic production. Imports are growing much faster than manufacturing. After years of fiscal discipline, the budget of 2007 saw spending rise by 20 percent in real terms. Since 2000, spending on the government bureaucracy and law-enforcement agencies has doubled as a percentage of GDP and the number of bureaucrats has risen from 522,000 to 828,000. The main beneficiaries were newly created state corporations and bureaucrats who are skilled at enriching themselves. Bribery and corruption have become endemic in Russia: corruption is the one long-standing institution that works well. Ostrovsky writes that “... it has penetrated the political, economic, judicial and social systems so thoroughly that it has ceased to be a deviation from the norm and become the norm itself”. Transparency International puts Russia on the same corruption index level as Kenya and Bangladesh. The size of the corruption market is estimated to be close to $300 billion, equivalent to 20 percent of GDP. Corruption is so endemic that it is perceived as normal. Amongst the young, it is not even considered a crime. Some call it “offering a reward” for making life easier. Small and medium-sized businesses suffer the most. Government offices send them to private firms to handle their paperwork – usually firms owned by relatives. After Yukos was broken up, its assets were passed on to Rosneft, chaired by Deputy Prime Minister Igor Sechin. The sale of Rosneft oil, in turn, was given to a Dutch-registered trading firm, Gunvor, with a complicated ownership structure. Within five years Gunvor has grown into the world’s third-largest oil trader, shipping 30 percent of Russia’s seaborne oil exports with estimated revenues of $70 billion a year. In the Yeltsin era corruption typically ended in privatisation; in the Putin era, it ends in the nationalisation of business. Privatisation now takes the form of a quiet transfer of state property into private hands. It is said that businessmen pay bribes as much to be left alone as to get something done. They call it the “bribe of survival”. Medvedev was previously Chairman of Gazprom. The company’s insurance and pension funds (multi-billion-dollar activities) are handled by a private bank called Rossiya, controlled by a close friend of Putin. Both Putin and Medvedev condemn corruption in public. Cynics claim that their current concern is to protect and legalise the largesse accumulated in the past five years. Medvedev introduced a draft law which requires bureaucrats to declare their own and their family’s income and assets. But there are a couple of loopholes. First, information about their income is confidential and available only to other bureaucrats. Second, the family is defined as spouse and under-age children – excluding siblings, parents and grown-up children. Corruption is said to have become a system of management. Central to this system is the notion of kompromat, or compromising material: it is easier to control someone if you hold information that could “expose” them.
Without effective political competition, independent courts, free media and a strong civil society, the fight against corruption cannot be won. Russia’s legal system is deeply flawed. The selection of judges leaves much scope for interference. They are appointed by the President, or on his recommendation by the upper chamber of parliament, but they are first screened by a Kremlin commission which includes the heads of the security services and the interior ministry. Large companies rarely trust a judge’s unprompted decision; they prefer to take their cases to London. Late in 2008, Russia’s oil-fuelled economy began to slump, later than those of many other countries. The Kremlin stifled public debate of the issue, dismissively blaming America for “infecting” the world. As oil prices slid, hundreds of thousands of Russians lost their jobs. For a country that had boomed for the past eight years, with annual GDP growth reaching 8 percent, an expected contraction of 5 percent or more could have severe consequences. The rouble lost more than 30 percent of its value against the dollar in 2008. The Kremlin spent over $200 billion of its reserves to cushion the devaluation of the rouble and to avoid panic. It still had more than $300 billion left to use if necessary. To keep unemployment figures down, many Russians were put on indefinite unpaid leave. The government earmarked $200 billion for various rescue measures. But the problem is that there is little scrutiny of what the money is spent on. To unblock the banking system, the government deposited $50 billion in three banks, two of which are state-linked. Another $50 billion was earmarked to rescue “strategic” companies. At the front of the queue were Russia’s largest oil and gas companies, including the state-controlled Gazprom and Rosneft. A major consequence of the bail-out action would be increased state ownership of the privatised entities.
The most likely outcome would be the emergence of quasi-state firms, run by people closely associated with the Kremlin. The most likely model would be a “state corporation”, i.e. either an independently controlled state-owned company, or a privately controlled subsidiary of a state-controlled company. The problem with such quasi-institutions is maintaining clear lines of responsibility and accountability: they are neither state nor private. With former KGB officers in charge, a closed circuit of inter-connected power conglomerates is created: a spider’s web of hidden connections.

There is a clear danger of a new bureaucratic “aristocracy” arising. Knowledgeable Russians refer to them as siloviki, loosely translated as “power people”. They are the people with the epaulettes on their shoulders: the men from whose ranks Mr. Putin came. People with a military, intelligence or law-enforcement background make up around three-quarters of Mr. Putin’s top officials (as against 5 percent of Gorbachev’s Politburo). They occupy more than a third of the posts in the top three levels of government and make up 70 percent of the staff of Mr. Putin’s seven federal envoys or “super-governors”. The siloviki are people who were privileged in Soviet times. They were above the law and steeped in the tradition of a strong state that guarantees them their privileges. They have their own business ties – particularly to state-owned arms and oil firms. They were the most sophisticated members of the Soviet nomenklatura, more disciplined and professional than other bureaucrats. In their strongholds they are as powerful and as unaccountable as they were in Soviet days. The KGB’s successor, now called the FSB, is the nerve-centre of their power – staffed by ex-colleagues of Mr. Putin. (See Gideon Lichfield, “Having it Both Ways”, The Economist, May 22nd, 2004, pp.6-8) The projection of KGB power into Russia’s politics and economy has been a guiding principle of Putin’s period in office. The siloviki used to rely on tax inspectors or the Federal Security Service (FSB) to extend their control. Now the financial crisis is creating new opportunities in the form of state-private partnerships in which profits are privatised by Kremlin friends and debts are nationalised. This will not take Russia back to a state-run economy, but it could push it further towards a corporatist state. According to demographic predictions, Russia’s population of 142 million is currently shrinking by 700,000 persons per year. At this rate it could be down to 100 million by 2050.
Russia’s demographic crisis is considered one of the main constraints on the country’s economic future. Its population of working age will decline by 1 million a year, increasing the social burden on those who remain. Over the next seven years, Russia’s labour force is expected to shrink by 8 million, and by 2025 it may be down by 18 to 19 million. On top of these demographic problems, there appears to be a self-destructive streak in the national character. Drinking yourself to death is one of the most common methods of suicide. The average Russian gets through 15.2 litres of pure alcohol a year, twice as much as is thought to be compatible with good health. Two-thirds of hard liquor is produced illegally and sold untaxed. Aids is a relatively new problem for Russia. The first case was recorded in 1987; by 2007 the figure was 430,000, the highest in Europe. On September 10th, 2009, Mr. Medvedev published a manifesto on a Russian website highlighting Russia’s failings. He mentioned a primitive, oil-dependent economy, weak democracy, a shrinking population, an explosive north Caucasus and all-pervasive corruption. A day later, Mr. Putin told foreign journalists and academics that he and Mr. Medvedev would decide between themselves who was going to be President when Mr. Medvedev’s first term expired in 2012. In his article, Medvedev predicted optimistically that Russia, in time, might lead the way into an open and flexible political system that fits the requirements of a free, prosperous and confident people. He lamented the fact that in Russia “... influential groups of corrupt officials and do-nothing entrepreneurs are well ensconced”, but he predicted that “... the future does not belong to them”. Unfortunately Mr. Medvedev’s lamentations are not reflected in his actions. During his presidency the Russian media has not become any freer. Political opponents are still denied access to television.
The number of murders of and attacks on human rights activists has gone up. Mr. Medvedev has spent most of his career as Mr. Putin’s subordinate; it was his loyalty that qualified him for the top job. Mr. Putin is leader of the dominant political party, the United Russia Party. It is predicted that Mr. Putin will return to take up the presidency after 2012, when Medvedev’s term expires. Under Putin’s inspiration, Russia recently started to rewrite its history of the Stalin era, reinstating the Stalinist version in which Russia bears no guilt for the enslavement of Eastern Europe. He has described the collapse of the Soviet Union as the greatest geopolitical catastrophe of the 20th century, a view that jars with those who see the end of communism as a blessed liberation. Apart from rewriting the past, Mr. Putin has closed Russia’s archives again and criminalised attempts to rebut his version of history. Under a new law, anyone who “falsifies” the Kremlin’s version of history, for example by equating Hitler and Stalin, two of the 20th century’s worst mass murderers, may be prosecuted. Suggesting that 1945 brought not liberation but new occupation for Eastern Europe is also banned. Perhaps Mr. Putin could shed some light on his own activities as a KGB officer for several years at its office in Dresden, East Germany. Catherine the Great is quoted as saying that a Russian ruler must be autocratic “... for no other form of government but that which concentrates power is compatible with the dimensions of a state as great ...”. Today Putin evidently shares that belief, and he is not likely to be challenged soon by a better informed and more assertive electorate. (See Arkady Ostrovsky, “Special Report on Russia”, The Economist, November 29th, 2008, pp.3-18)

References

Arendt, H. (1966) The Origins of Totalitarianism, Harcourt, Brace & World
Aslund, A. (1995) How Russia Became a Market Economy, Washington D.C.: Brookings Institution
Davies, N. (1996) Europe – A History, Oxford University Press
Finer, S.E. (1970) Comparative Government, Penguin Books
Friedrich, C.J. and Brzezinski, Z.K. (1956) Totalitarian Dictatorship and Autocracy, Praeger Publishers
Hagopian, M.N. (1978) Regimes, Movements and Ideologies, New York: Longman
Laruelle, M. (2008) Russian Eurasianism: An Ideology of Empire, Woodrow Wilson Center Press
Ostrovsky, A. (2008) “Special Report on Russia”, The Economist, November 29th, 2008
Schapiro, L. (1964) The Communist Party of the Soviet Union, New York: Vintage Books

5 The Promise of Latin America

Latin America encompasses about 30 countries with a combined population of around 600 million. Amongst these are three main constituent anthropological components: the original Amerindian inhabitants, the descendants of the colonial conquerors (mainly Spanish, Portuguese, British and French) and the descendants of millions of slaves transported by the colonial powers. There is also a major fourth component: the mestizos and mulattos descended from the racial intermixture of the original components. The characteristic pattern in Latin America is that states originated as lineal descendants of colonial administrative divisions. In the former Spanish realms, pre-existing Indian structures were significant to the extent that the two principal Vice-Royalties (Lima and Mexico City) were seated at or near the former capitals of the two major Amerindian empires, the Aztec (Mexico) and the Inca (Peru). Spanish control was at first assured simply by substituting Spaniards for Aztec or Inca rulers and maintaining the lower ranks of the pre-existing hierarchy for an interim period. Over the three centuries of Spanish rule, with local variation, the relatively small number of Spanish settlers succeeded in imposing themselves as a quasi-feudal caste, abetted by patterns of inter-marriage into Indian lineages. The settler culture served as the unquestioned basis for the newly independent communities that gradually enlarged themselves by absorbing outsiders into the settler-elite culture. Looking at the ten most populous states in Latin America, the general population structure for the area as a whole is around 30 percent white, 50 percent mixed, 5 percent black and 15 percent Amerindian. The largest numbers of whites are in Brazil, Argentina, Colombia and Venezuela. The largest proportions of mixed inhabitants are in Venezuela, Mexico, Colombia and Brazil. The only areas where there are still significant proportions of Amerindians are in Mexico (30 percent), Peru (45 percent) and Bolivia (55 percent).

The Amerindians

The original inhabitants were several “Amerindian” peoples who left traces of their civilisations going back at least 5000 years. The Maya lived in the tropical areas of Central America, where they built stone houses, temples and paved streets. They turned out fine ornaments in pottery and crafted copper into implements. They guided their life by calendars based on advanced skills in mathematics and astronomy. Scholars and priests also practised a distinctive way of writing based on some 800 signs or hieroglyphs and wrote on paper manufactured from the bark of the wild fig tree. The great era of the Maya ended by AD 800 – possibly partly as a result of the scarcity of water and factional strife. On the highlands of what is today Mexico were the cities of the Aztec empire. When the Europeans first arrived in the 16th century they found Montezuma’s Aztec city, Tenochtitlan, on the site of today’s Mexico City, to be one of the largest cities in the world. The area of the Aztec empire was almost as large as modern Italy and its population is estimated at around six to eight million. They excelled in the crafts of building, were first-rate goldsmiths and jewellers, competent in mathematics and adept at agriculture. They had a calendar based on the solar year which was followed with strict attention. They also practised human sacrifice and ceremonial killings, justified by ideology. Far to the south, the slopes of the Andes Mountains adjacent to the Pacific coast had been occupied by the Inca. They domesticated the llama and alpaca and cultivated maize and potatoes as early as 2000 BC. They built agricultural terraces, aqueducts and tunnels for the purpose of irrigation. By 900 AD they were able to manufacture bronze ornaments, instruments and tools including axes, chisels and knives. They built a network of roads which enabled them to reach the outskirts of their empire, from what is Bolivia today in the north to central Chile in the south.
They used their 24,000km of roadways to establish a message-relay system enabling a message to travel up to 240km per day. Their beast of burden was the llama. They had remarkable success in domesticating plants: the potato, the sweet potato, the tomato, various beans, the cashew, coca, peppers, squash, cassava and the pineapple. Maize originated independently both in South America and Mexico. (See Blainey, G., A Short History of the World, Penguin Books Australia, 2000, pp.305-332)

The Colonial Powers

The Spanish Conquistador Hernan Cortes paved the way for the colonial powers into the New World in 1518 with a small fleet of ships carrying 600 soldiers armed with crossbows and firearms, in addition to several hundred Indian servants and African slaves. He also carried 16 horses, the first ever to be seen on American soil. Montezuma II invited Cortes and his men into his capital; they took the emperor into custody and subsequently destroyed the city, killing thousands of Aztecs. Cortes took over the Aztec empire. The Spaniards also brought diseases which quickly killed thousands among the native peoples of Mexico. Smallpox carried by the Spanish traders also spread into Inca territories, so that in 1532 Francisco Pizarro’s tiny force easily captured the Inca emperor Atahualpa, revered as the Sun God. The following year they captured Cuzco. After the smallpox epidemic, measles followed, as well as typhus, influenza, whooping cough, scarlet fever, chickenpox and even malaria – all new to the inhabitants and therefore all the more deadly. Of the estimated eight million Indians in Mexico and south of the Great Lakes when Cortes arrived, less than one-third survived fifty years later. In the empire of the Incas, far to the south, the death toll also numbered millions – as many as eight out of every ten people died. Even Indians taken back to Europe as objects of display were prone to catch the new diseases: when the Frenchman Jacques Cartier returned from Canada in 1534 with 10 American Indians, nine died of European diseases. The effect of the European diseases on the native Amerindians was disastrous. Cultural and economic life largely disintegrated. In the wake of the Spaniards came the Portuguese, the British and the French. The Pope issued a statement in 1493 allocating trade in the Americas to Spain and trade in Asia to Portugal, but the Portuguese also acquired the area that is now Brazil under the Treaty of Tordesillas.
What the British envied most was what the Spanish discovered in America: gold and silver. Englishmen dreamt of finding their own “El Dorado”. The next best thing was to exploit their skills as sailors to pirate gold from Spanish ships and settlements. The English Crown legalised the buccaneering in return for a share in the proceeds. The names of buccaneers Henry Morgan, Francis Drake and Walter Raleigh became famous as “Brethren of the Coast” in partnership with the British Crown. In the process the British acquired a string of islands in the Caribbean Sea such as Jamaica, Trinidad and Barbados. The French also acquired islands in the Caribbean, such as Martinique and Guadeloupe, which are still part of France today. In the course of the 16th century around 250,000 Spaniards, mostly men, settled in the New World. Many took wives from among the native populations and so gave rise to mixed-race offspring called mestizos. When African slaves began to be imported to South America in large numbers from the early 1500s, female slaves were also taken as concubines by the ruling Spaniards and Portuguese. Children of Afro-Hispanic parentage were known as mulattos. In the absence of an established aristocracy, colonial Spanish society came to be organised according to a careful and legally sanctioned grading of skin colour. “Pureblood” Spaniards were at the top of the social pyramid, native Amerindians and black-skinned Africans were at the bottom, and all the varieties and shades of mestizos and mulattos occupied the middle levels. (See Blainey, G., op.cit., pp.305-332)

The Slaves

Over the course of 400 years, from the beginning of the 16th century to the end of the 19th century, around 11 million Africans were forcibly removed from their homelands and shipped to North and South America and to the islands of the Caribbean to live out their lives as slaves. Known as the Atlantic slave trade, this transport of humans constituted the largest forced migration in history.

The traffic of Africans involved all the main European trading nations: Spain, Portugal, Britain, France and the Netherlands. It also relied heavily on African tribal leaders and kings, who brought a ready supply of slaves from the continent’s interior to the ports of West Africa from where the European traders operated. For the victims of the slave trade, the experience was traumatic and cruel. It is estimated that on average 15 percent of enslaved Africans died in transit, either from disease or maltreatment. Another one-third died within three or four years after arrival. An incalculable number died en route from the African interior to the coastal ports. The Spanish and the Portuguese were the pioneers of the slave trade from as early as 1518. The Spanish used the slaves (some were enslaved Amerindians) in their gold mines. The Portuguese used slave labour on their sugar plantations. By 1550 Brazil was the world’s largest exporter of sugar and a major importer of slaves. In Barbados and Jamaica, both British possessions, the sugar industry also depended on slave labour. Initially tobacco was the main cash crop, which largely used indentured immigrants from Britain itself. But sugarcane proved to be more profitable so that plantation owners switched to sugar and slave labour. Dutch merchants also brought in slave labour and transported the sugar harvested in the Americas to market in Europe. Trade with the Caribbean dwarfed trade with the rest of the Americas. Europeans found the mortality in the tropical islands and coasts fearful during the summer “sickly season” so that after 1700 European emigration to the Caribbean slumped as people opted for more temperate climes. This shift in focus of European emigrants only served to expand the demand for slave labour on the tropical plantations. The “Atlantic Triangle” described the flow of goods along the slave trade routes. 
From Europe to West Africa the traders carried guns, gunpowder, Indian cloth and copper or iron bars to be traded for slaves. The slaves were then carried to the Americas along the infamous “middle passage”, the second side of the triangle. To complete the triangle, American produce including sugar, rum, molasses, coffee and cotton was then ferried back to Europe. Between 1662 and 1807 nearly 3.5 million Africans came to the New World as slaves transported in British ships. That was over three times the number of white migrants in the same period. It was also more than a third of all Africans who crossed the Atlantic as slaves. At first the British had pretended to be above slavery, but in time, after 1662, the new Royal African Company supplied thousands of slaves to the West Indies from Nigeria and Benin. By 1740, Liverpool was sending 33 ships a year on the triangular trip. This was the same year that James Thomson’s song “Rule Britannia” became a popular hit. John Newton, a captain of one of Britain’s slave-transporting ships, was the composer of the famous hymn “Amazing Grace”. His ship also carried spiked shackles and neck irons. The African slave markets were spread all along the concave coast of West Africa, down to Luanda in the south. European traders picked up the slaves at coastal ports. The slaves were often prisoners-of-war enslaved by their tribe’s enemies. Trade flourished whenever tribal conflict broke out. Slaves were usually branded with a hot iron on the breast or shoulder and kept in forts until the slave ships arrived. Pairs of slaves were chained together at the ankle and herded below decks by sailors with whips. Life on the “middle passage” was an ordeal at the best of times. Spaces below decks were seldom more than 1.5 metres high. Beds were narrow shelves on which the slaves were made to lie “spoon ways” to maximise the number of people that could be squeezed in. Unsanitary conditions led to dysentery.
Smallpox and malaria were also prevalent. Harsh discipline was maintained during the voyage and floggings and beatings were common. Women were in constant danger of being abused or raped by the ship’s crew. In the 16th century as much as 40 percent of slaves died en route, but by the 19th century this figure was down to 10 percent. At least 133 shipboard rebellions are recorded, and 140 slave ships simply disappeared. The hardest labour and worst regimes for slaves were found on the sugar plantations of Brazil and Cuba. Because so many slaves died within a few weeks, these sugar states continued to import slaves – even long after the trade was banned by the British and Americans in 1807. Although the hardships endured by slaves in North America were also great, a slave’s life there was generally better than in the Caribbean or Brazil. Tobacco estates were smaller and slaves were allowed to develop settled communities with families and children. Slavery was finally abolished in the Americas in the late 19th century. (See King, R., Origins – An Atlas of Human Migration, ABC Books, 2007, pp.82-93 and Ferguson, N., Empire – How Britain Made the Modern World, Penguin Books, 2004, pp.72-84)

Indigenismo

“Indigenismo” is described as the mystique of the Indian heritage. In Latin America as a whole there were several primary Indian clusters: the Meso-American complex of what is today Mexico and areas to the north up to the Great Lakes; the Maya of Central America, particularly Guatemala; the Quechua of the Andean republics of Peru, Bolivia, Colombia and Chile; and the special case of the Guarani of Paraguay.

The Inca’s Demise

A peculiar aspect of the Spanish influence in Latin America is the scope and depth of the colonial experience. In 1492 a permanent Spanish settlement was established in what is now the Dominican Republic. When independence was achieved in the 19th century, Spanish rule was already three centuries old. Intensive Indian-Hispanic culture contact thus has far deeper historic roots than acculturation in most of Asia and Africa. There were instances of primary resistance movements by indigenous groups, but nothing comparable to resistance in the name of nationalism based on indigenous historic identity. The revolt of Tupac Amarú II in Peru (1780-81) was a major uprising. The Spanish under Pizarro had decapitated an Inca Emperor and substituted their own rule for that of the Inca Empire. The Inca Empire had been exceptional in the degree of sophistication of its organisation and also in its assimilation of features of the conqueror. Quechua, the language of the Inca, was diffused over a broad area of both highland and lowland Andes. Jose Gabriel Condorcanqui, better known by his Imperial Inca title of Tupac Amarú II, was fully assimilated to the Spanish culture. He had been educated at a Cuzco school, spoke fluent Spanish and wore Spanish clothes. He was also a descendant of the last crowned Inca Emperor, Tupac Amarú I, executed by the Spanish in 1571. Through peonage, conscription and other means, the entire Indian population in many areas had been reduced to forced labour in the mines and textile plants. Thus when Tupac Amarú II seized and executed a Spanish district official and proclaimed himself Inca, many flocked to his side. For a time he enjoyed success and Spanish authority crumbled in the highlands. But the tide soon turned and the rebellion was crushed before Tupac Amarú II could consolidate his position. The Spanish took severe repressive measures to stamp out resurgent Inca-hood. The Inca royal house was hunted down and killed.
The shock waves reverberated throughout the Andes, but the revolt was totally extinguished by the Spanish. Today the descendants of the Inca, the Quechua-speaking Amerindians, still form about 46 percent of the Peruvian population, while 37 percent are mestizo and 15 percent white. The majority of Amerindians in the Andes region (sierra) – which covers Ecuador, Peru and Bolivia – speak Quechua. Unlike Guarani, Quechua is not taught in the schools, although the majority of Quechua speakers know no Spanish. The mestizos are generally bilingual. Traditional customs and distinctive Indian costumes have also been disappearing, and it is predicted that the Amerindian culture will gradually be assimilated into the dominant Hispanic culture, with the Indian past relegated to the national museums. (See Anderson, C.W., et al., Issues of Political Development, Prentice-Hall, 1967, pp.45-55)

Mexico’s Indigenismo Of all the nations in Latin America, Mexico has been most self-consciously preoccupied with indigenismo. It was a central theme in the Mexican Revolution and is still today securely embedded in the Mexican national identity, although the country’s first Indian president, Benito Juarez, is remembered in Mexican history for his efforts to “liberate” the country from its traditional past and to move Mexico towards constitutionalism and economic liberalism. The advocates of indigenismo in 20th century Mexico were largely members of the dominant Hispanicized community – many of mestizo racial background. The national elites used indigenismo as a symbol of being Mexican – as the symbolic roots of national identity. It was not a case of romanticising Mexico’s Indian heritage: what was at stake was an effort to blend Indian with Western themes into a unique Mexicanness. (See Anderson, et al., op.cit., pp.35-56)

The Guarani and Paraguayan Nationalism Paraguay presents the most intriguing case of interpenetration of Indian and Hispanic cultural identity. Asuncion is the only Latin American capital where an Indian language is widely spoken – not only by the peasantry, but proudly, by all strata of society. Paraguayan national identity is partly founded upon the mystique of Guarani Indian heritage. The first to insist on their Guarani origins are Paraguayan intellectuals. The Guarani were a series of segmentary, semi-nomadic communities, occupying fertile forest and grassland which extended in a 120-mile arc radiating from Asuncion, where 95 percent of the present Paraguayan population lives. Spanish settlement began in 1534, when a military force arrived in search of precious metals. Silver ornaments worn by local Indians seemed to prove the presence of silver in the area. Later it was discovered that the silver came from areas in the Andes already controlled by Pizarro. The 350 Spaniards had to turn to agriculture to survive. On the La Plata, the hostility of local Indian groups was such that no small settlement could be secure. What is now Argentina was not settled until several decades later. But the Guarani were not initially opposed to the small Spanish force. It was necessary, though, to establish good relations with the more numerous Indians. The small band of Spanish soldier-agriculturalists took Indian wives – particularly marrying into senior lineages. By the middle of the 16th century, Spaniard and Guarani regarded each other as kinsmen. The Spanish added to the local community such elements of statehood as overall political, military, economic and religious institutions. In time, lower-class Hispanic culture represented by the soldier-settlers had all but supplanted the previous Indian patterns. At the same time the Spaniards learned Guarani. The mestizo offspring emulated the cultural norms of their fathers and learned to speak in the idiom of their mothers. 
Thus a peculiar cultural fusion took place, Hispanic in most elements, but Indian in language. This pattern survived over the years because of the isolation of Paraguay as a backwater of the Spanish empire. Once Madrid learned that Paraguay was no El Dorado, very few European immigrants followed the first pioneers. Later immigrants to Latin America also chose the more inviting prospects of Argentina, Chile, Brazil, Colombia, Mexico and Cuba. When independent Paraguay began its search for a national identity, the Guarani myth was ready at hand. Unlike other Latin American countries, Paraguay teaches its Indian tongue in the schools. The young learn that they are Guarani and that they are superior to other Indians. Guarani-hood is what makes Paraguay distinctive. Spanish is not questioned as the language of official administration, but Guarani is accorded loving nurture as a vehicle of cultural identity. Because the Guarani identity is coterminous with territorial Paraguay, state and ethnicity coincide. The Hispanic-Indian duality of the Paraguayan past is transcended in the unifying mystique of Guarani-hood. (See Anderson, et al., op.cit., pp.49-51)

Cultural Integration Modernisation seems to be producing increased cultural integration between the various ethnic-linguistic groups. Census data indicates that five states have 30 percent or more of the population classified as Indian (Mexico, Guatemala, Peru, Ecuador and Bolivia). A further six are predominantly of mixed population, with the Indian component prominent in the mixture (among them El Salvador, Honduras, Colombia, Chile and Paraguay). Although the indigenismo mystique of the Indian heritage is salient in the national ideology of Mexico, Guatemala, Peru, Bolivia and Paraguay, the paradox is that the apostles of indigenismo have been white or mestizo intellectuals and not the Indians themselves. Latin American states differ from most Afro-Asian examples of culturally diverse communities in that they are dominated by the Iberian culture of the Spanish (and Portuguese) immigrants or culturally assimilated mestizos. By the time of independence, Indian communities had already been overwhelmed, dispersed or destroyed. The assimilationist predisposition of Portuguese colonial philosophy led to strong inducements being offered for intermarriage and mestization. By the 19th century, few “pure” Indians were left along the main river and accessible tributaries. Today Portuguese has almost completely supplanted Tupi and other indigenous languages. (See Anderson, et al., op.cit., pp.56-59)

Regionalism Regionalism is a significant type of sub-territorial solidarity. In Latin America it became deeply embedded in the larger polities during the 19th century era of caudillos, poor communications and central administration whose orbit seldom extended far beyond the seat of government. This pattern has been especially pronounced in Venezuela, Peru, Mexico, Colombia, Chile, Brazil and Argentina. The trans-Andean Amazonia region of Peru could only be conveniently approached up the Amazon River through Brazil. Regional loyalty was strongly reinforced by the limited horizons of the hacienda system which characterised much of rural Latin America. Regionalism also manifests itself in a sense of cultural and economic distinctiveness. The Brazilian of the gaucho heritage of Rio Grande do Sul finds the life of the carioca of Rio de Janeiro distinctively foreign. The people of the highlands of Colombia, Ecuador and Venezuela have much more in common with each other than each has with his coastal compatriots. Regionalism played an important part in the political development of Venezuela. The military barracks of the Andean state of Tachira furnished all the presidents of Venezuela. For many years there was considerable cultural rivalry between the aristocratic families of Caracas and those of Cumana. A similar rivalry persists between Caracas and the Andean city of Merida, which considers itself intellectually and culturally superior, being the seat of a great university and home of many distinguished families. There are few common interests between the proud families of the Andes and the rough Llaneros of the great plains. (See Anderson, et al., op.cit., pp.24-25)

Ideological Trends

The ideological movements that originated in Europe in the 19th and 20th centuries also made their appearance in Latin America. Liberalism, Positivism, Syndicalism, Socialism, Communism, Nationalism, Fascism – each trend of thought has been absorbed into the stream of Latin American political thought and political contests. The first socialist party was established in Argentina in 1896. By 1940 several such parties had been established in eight Latin American nations – all remaining relatively minor factions. The closest affinity to the European tradition of socialist thought was the welfare state established in Uruguay in 1903, which was preserved and extended by subsequent governments. Several parties could be described as part of the “democratic left”: APRA in Peru, Costa Rica’s Liberacion Nacional, Venezuela’s Accion Democratica and at least half a dozen other movements. These parties could not be described as affiliates of Continental or Marxist socialism. Their ideologies and policies represented a complex amalgam of local aspirations and welfare state models. Many were closely aligned with the Democratic Party of the USA. Programmes tended to be flexible and pragmatic, preferring regulation and taxation rather than nationalisation as a means of controlling foreign enterprise and directing its activities toward the needs of national development. Emphasis was placed on education, public health, transport and social welfare services. Agrarian reform did not imply “communalization”, but usually the extension of private property by way of family-sized parcels. In several Latin American countries, Christian Democratic movements took a place of prominence: Chile, Peru, Brazil, Argentina and El Salvador. Some describe themselves as Christian Socialists and their policy orientations generally resemble those of the Democratic Left. The main difference lies in the way in which this policy orientation is ideologically justified. Doctrinal socialism played a more prominent role in the Mexican Revolution of 1910 and the Bolivian Revolution of 1952. The actual presence of widespread peasant revolt in both nations served to give greater support to agrarian reform. Nationalisation in both countries has been directed at specific enterprises, for quite specific reasons. It has not been a central focus of electoral campaign politics. The most militant, radical, revolutionary left-wing experiment has been the “Castroite Socialism” introduced by the Cuban Revolution. However, enthusiasm for the Cuban experiment is not all of the same kind. Some who supported Castro identified with the opposition to the tyranny and corruption of the previous regime. Others were attracted by the rapid and sweeping reforms of the first few years following the revolution. Still others were attracted by Castro’s affirmation of Marxism-Leninism. (See Anderson, et al., op.cit., pp.145-148)

Democrats versus Authoritarian Populists

The emergence of democracy in Latin America has been a slow and convoluted process. Currently the few fledgling democracies are fighting an uphill battle against a new wave of authoritarianism in the form of populist dictatorships. A battle is being waged within Latin America for its soul and its future. Latin America’s efforts to make democracy work and to use its instruments to make unequal societies fairer and more prosperous have implications across the developing world. During the period 1998-2002, the Latin American region suffered severe financial turmoil and economic stagnation. Voters blamed the slowdown on the free-market reforms known as the “Washington consensus”. As a consequence, they started to vote for leftist parties and supported leftist causes. But the differences between the leftist causes are as important as their similarities. One camp is made up of moderate social democrats, of the sort in power in Chile, Uruguay, Costa Rica and Brazil. Broadly speaking, they stand for prudent macro-economic policies and the retention of the liberalising reforms of the 1990s, but combined with better social policies to alleviate poverty. The other camp is the radical populists, led by Venezuela’s Hugo Chavez, who gained a disciple in Evo Morales, Bolivia’s president. The populists shout louder and claim that they are helping the poor through state control of oil and gas. Both have mestizo connections and market themselves as opponents of the “white” elites and as protectors and champions of the downtrodden. They have been actively supported by Cuba’s Castro brothers, by the Kirchners of Argentina, by Garcia of Peru and by Rafael Correa, Ecuador’s left-leaning president. Over several decades, democracy has steadily succeeded in improving the lot of the poor and unemployed. Democrats have mobilised the electoral support of the masses to introduce effective social policies to make searingly unequal societies fairer and more prosperous. 
Democrats – of both left and right – are now waging a battle against a new wave of authoritarian populists. Hugo Chavez, who first attempted to seize power in a failed coup in 1992 before winning election in 1998, has twice been re-elected. In 2009 he secured the abolition of all term limits and ended the independence of both the judiciary and the electoral commission. Having dismantled all checks, balances and independent institutions, his regime rests on his personal control of the state oil company, the armed forces and clandestine armed militias. Although “populism” is an elusive concept, it generally describes a politician who seeks popularity by appealing to the baser instincts of voters. In Latin America populism has had an enduring influence. It began as an attempt to ameliorate the social dislocations caused by capitalism. Its heyday was from the 1920s to the 1960s, during the era of industrialisation and the growth of cities. It was the means by which the urban masses – the middle and working classes – were brought into the political system. In Europe this role was performed by social-democratic parties, but in Latin America, where trade unions were weaker, it was accomplished by the classic populist leaders. They included Getulio Vargas in Brazil (1930-1945 and 1950-1954), Juan Peron in Argentina (and his second wife Eva Duarte) and Victor Paz Estenssoro, the leader of Bolivia’s national revolution of 1952. They differ from socialists or conservatives in forging multi-class alliances. Populist leaders are typically charismatic. They are great orators (or demagogues if given a platform in front of a mass audience). A prime example was Jose Maria Velasco Ibarra, Ecuador’s most prominent populist, who was five times elected president and four times overthrown by the army. In the past, populists had to rely on mass gatherings and parades. Today they use the modern media. Hugo Chavez exercises his skills as a communicator every Sunday in a four-hour television programme, glorifying his “Bolivarian revolution” with the exultation of a televangelist. Blurring the distinction between leader, party, government and state, populist leaders usually lead a personal movement rather than a formal, well-organised political party. Chavez likes to get his followers into the streets to demonstrate their support for his cause and to show indignation at the deeds of the “enemies”. Many populists started their careers as military officers. That goes for Vargas, Peron and Lazaro Cardenas of Mexico, who nationalised foreign oil companies and handed land over to peasants. Chavez was a lieutenant-colonel and a large part of his appeal is that of the military caudillo, or strongman, who promises to deliver justice for the “people” by firm measures against “exploiters”. Populists also like to appeal to the nationalist sentiments and hardships of their followers. 
They rage against a variety of rhetorical enemies: “capitalists”, “oligarchs”, “imperialists” and “Yanqui exploiters”. These are all part of the Hugo Chavez arsenal. Across the board, populists are supportive of a bigger role for the state in the economy and an increase of social benefits (more handouts) to the poor. They have often paid for this by printing money. Though populists were not alone in favouring inflationary finance, they have been identified with it in numerous instances. In their survey The Macroeconomics of Populism, Rudiger Dornbusch and Sebastian Edwards characterise “economic populism” as involving a dash for growth and income distribution while ignoring inflation, deficit finance and other risks. Such policies were pursued by Garcia, Peru’s president (1985-1990), Nestor Kirchner, Argentina’s president (2003-2007), Salvador Allende, Chile’s socialist president (1970-73), and Nicaragua’s Sandinistas. But there is nothing inherently “leftist” about populism. Juan Peron lived comfortably in Franco’s Spain for 18 years. Some populists favoured corporatism – the organisation of society by functional groups, rather than the individual rights and pluralism of democracy. It seems that populism is more a technique of leadership than a clear-cut ideology. Peru’s Alberto Fujimori and Argentina’s Carlos Menem were free-market conservatives who sidestepped interest groups by making direct appeals to the masses. Populism is full of contradictions. It is, above all, anti-elitist, but it creates new elites. It claims to favour ordinary people against oligarchs. But as Dornbusch and Edwards pointed out, “at the end of every populist experiment real wages are lower than they were at the beginning”. Populists often crusade against corruption, but often engender more of it. In the 1960s, populism seemed to fade away in Latin America, pushed out by Marxism, Christian democracy and military dictatorships. Its recent revival shows it is deeply rooted in the region’s political culture. 
One obvious reason for populism’s persistence is the extreme inequality in the region. That reduces the appeal of incremental reform and increases the lure of messianic leaders who promise a new world. Yet populism has done little to reduce income inequality. A further driver has been Latin America’s wealth of natural resources. Many Latin Americans believe that their countries are rich, whereas in reality they are poor. Populists blame poverty on greedy oligarchs or on multinational oil or mining companies. That often plays well at the ballot box. But it is a misleading diagnosis. Countries advance through a mixture of good policies, sound institutions and wise leadership. Populists usually lead people down blind alleys. (See The Economist, April 15th, 2006, pp.43-44)

Economic Trends

Since the middle of 2008, Latin American economies have slumped along with the rest of the world, showing double-digit falls in industrial output. Workers have been laid off in Mexican car factories, Brazilian aircraft plants and on Peruvian building sites. Although Latin Americans have seen downturns in income per head on five separate occasions since 1980, this time they have not fared worse than the rest of the world. But the downturn has been bad nevertheless. Latin American economies have been hit by four different recessionary forces: a collapse in manufacturing output, a plunge in trade volumes, a decline in the flow of capital as foreign banks trimmed credit lines, and a contraction in remittances from migrant workers and in tourist spending. Financial statistics show a contraction in GDP of 1.9 percent for 2009 and a modest growth of 1.6 percent in 2010. With population growth of 1.3 percent, this means a shrinking income per person. The downturn brought an end to five years of steady growth, which averaged 5.5 percent amid relatively low inflation. This period of prosperity saw a decline in poverty from 44 percent in 2002 to 33 percent in 2008 and enabled tens of millions of Latin Americans to join the emerging lower-middle class. Mexico and most of Central America are expected to be worse off than the regional average. Brazil, with more diversified exports spanning different markets and products, has been hit less badly. Peru exports much gold, which should enable its economy to buck the downward trend. The normal sources of weakness in Latin America – financial systems, currencies and public finances – have not been the usual drag. The banking system held up well and thus did not act as a magnifier of the recession. Most Latin American economies followed more responsible fiscal policies, accumulated surpluses and reduced public debt. Their currencies declined (by around 30 percent) as a result of the flight of foreign capital, but the de facto devaluation helped their exports. 
Several of the larger economies announced fiscal measures to stimulate demand (about 1 percent of GDP), and Chile and Peru decided to raise public spending by around 10 percent, most of it on infrastructure such as roads and housing. Central banks have assisted with interest rate cuts. Venezuela, Argentina and Ecuador have taken a radically different approach. They have pursued expansionary fiscal policies and have rigid exchange rates. The IMF predicted that these three would be amongst the worst-performing Latin American economies. In social policy too, the region is better placed than in the past. A dozen countries have cash-transfer schemes aimed at attacking extreme poverty in rural areas. In Mexico and El Salvador governments have increased payments under these schemes. Peru is looking at extending the provision of free school meals to cover family members. The general assessment of the region is that the pain has not erupted in turmoil, nor has economic and political stability been lost.

Case Study 1: Brazil

With a total area of 8.5 million square kilometres, Brazil is the fifth largest country in the world. Three physiographic regions dominate the landscape: the 5.7 million square kilometres of the Amazon Basin in the north, the largest river basin in the world; the north-eastern plateau, which is covered by infertile savannah woodlands; and the southern plateau, where the bulk of Brazil’s coffee plantations are situated. The population is estimated at around 192 million, of which 81 percent live in urban areas. Its two largest cities are Sao Paulo and Rio de Janeiro. Its ethnic composition is 53 percent white, 39 percent mixed white and black, 6 percent black and 2 percent other. Portuguese is the official language. 

The Republic of Brazil was declared in 1889 after 300 years of Portuguese rule. A new capital, Brasilia, was inaugurated in 1960. A cynical comment has it that “... Brazil is the country of the future and always will be.” But after a decade of stability and relatively more responsible government, Brazil has slowly started to put more stock in the future. In many ways Brazil’s founding cultures lived for the present. Indigenous Brazilians had no need to plan ahead, for whatever they needed was available in abundance. The slaves, who did not even own their own bodies, had no reason to invest in anything. The Portuguese and the many other Europeans who followed them (Spanish, German, Italian and Polish) were focused on quick enrichment. Today the Brazilians, with their abundance of natural resources – including the new bonanza of oil deposits – are beginning to take a longer perspective. Brazil may yet become the powerhouse of the Southern Hemisphere.

Modern History Brazil’s political history is a reflection of its uphill struggle to establish a constitutional democracy on its highly fractured socio-cultural foundations. The country has traditionally been dominated by its feudal oligarchs. In the middle of the Great Depression, in 1930, Getulio Vargas was brought to power in a military coup. Vargas initiated a programme of industrialisation and broader political participation, but he failed to reconcile the conflicting interests and reverted to authoritarian rule. In December 1937, Vargas established his fascist-modelled Estado Novo, with himself as dictator. His main aim was to install a process of modernisation through industrialisation. Military rule was established during the period 1945-1951, but Vargas returned to power in 1951 with the support of the Brazilian Labour Party (PTB), a broad coalition of workers, industrialists and the urban middle class. The right-wing landowners backed by the military demanded his resignation, but he committed suicide in August 1954. Then followed the PTB-backed presidency of Juscelino Kubitschek (1956-1960), which sought to attract foreign investment. The new capital, Brasilia, was built as a symbol of national integration and growth. But the high level of foreign borrowing constrained the economy, and the stalemate between a succession of presidents and Congress led to another military coup in March 1964. A succession of military rulers from 1964 to 1978 banned political party and trade union activity and eliminated urban guerrilla groups. A severe economic recession followed the oil crisis of 1973 and continued until major social unrest compelled General Figueiredo to allow elections in 1982. Eventually, in October 1988, a new constitution prepared the way for a return to full democracy in 1990. The conservative National Reconstruction Party (PRN) under Fernando Collor defeated the Workers’ Party (PT) under Luiz Inacio da Silva (Lula), which had been formed in 1980. 
Collor embarked on a remarkable reform programme, renegotiating the $110 billion foreign debt, rooting out corruption, privatising government enterprises and opening the domestic market to foreign competition. His administration declared a four-month hold on wage increases, an indefinite price freeze and the scrapping of indexation to combat the inflation rate, which was running at around 1800 percent in 1990. President Collor also scrapped subsidies for forest clearing and sought to involve the developed world in the conservation of the rainforest through a “debt-for-nature” programme under which foreign grants would help reduce Brazil’s massive foreign debt. In August 1992 the Lower House of Congress voted to impeach President Collor, thus ending his presidency. The Supreme Court indicted the former president on charges of “passive corruption and criminal association” during his term of office. Eventually every major political figure in the country was implicated in accusations of corruption. The moral foundation of the political system was brought into disrepute and the state was on the verge of collapse. Street violence, kidnappings and murder were the order of the day. In December 1993, the Finance Minister, Fernando Cardoso, introduced a stabilisation programme which included a military budget cut of 50 percent. An election was held in 1994: essentially a two-candidate race between the PT’s Lula da Silva and the PSDB’s Cardoso. Cardoso’s “Plan Real”, which combined a new currency, the real, with a stabilisation programme, proved a major triumph. Inflation dropped from 50 percent to under 3 percent per month. Cardoso won the election with a resounding 54 percent versus Lula’s 27 percent. In December 1994 Brazil signed a general agreement with Argentina, Paraguay and Uruguay initiating the Mercosur trading bloc, which came into effect in 1995. Amidst fierce opposition from leftist groups and trade unions, Cardoso pressed on with his liberal reforms. To combat poverty, Communidade Solidaria was formed, headed by Ruth Cardoso, the president’s wife. Cardoso also initiated a programme endeavouring to place 250,000 landless peasant families on the land. After Cardoso’s electoral victory in 1995, support for his reform programmes crumbled. Privatisation moved slowly, since the state still controlled 60 percent of the assets of the top 500 firms. But Cardoso pressed on with negotiating free trade agreements with Bolivia, Colombia, Peru and Venezuela. He brought the armed forces under civilian control for the first time since 1985. Lula da Silva (PT) was elected president in 2002 with the support of a coalition of left-wing parties. Lula secured a $500 million loan from the World Bank to pursue a social reform programme which included land reform. But his government faced several corruption charges, including using public money to fund his political party. Those who expected cleaner government from Lula were disappointed, but that only made the clamour for more accountability louder.

Economy Brazil’s estimated GDP of $1.3 trillion in 2007 gave its 192 million population a per capita income of $8,450, compared to Australia’s per capita income of $49,271 for its 21 million people. Both countries rely heavily on iron ore exports and agriculture, but Australia also mines vast quantities of coal for export and has a much more advanced services industry. Brazil is in many ways agriculture’s paradise, with more than its fair share of the world’s sun, soil and water. The growing season is long, often allowing two harvests a year on account of the shorter periods required for maturing. It produces the world’s cheapest sugar and orange juice. The endless savannahs are ideal for growing soya, by far Brazil’s biggest agricultural export commodity. Brazil is also the world’s largest exporter of beef, coffee, sugar and orange juice. It is closing in fast on the leaders in soya and poultry. Brazil is not running out of land. Agriculture occupies 60 million hectares. It could stretch out to another 90 million hectares without touching the Amazon rainforests. If the rich countries demolished trade barriers and scrapped subsidies, Brazilian farming could expand in a big way. Liberalisation would boost the real value of agricultural and food output by 34 percent and net farm income by 46 percent, according to calculations by the World Bank. It would raise Brazil’s income by $3.6 billion per annum. Agriculture was, for a long period, the Cinderella of Brazil’s economy. Brazil’s dictators thought industrial development was the mark of an advanced economy. Until the late 1980s, agriculture provided the resources for industry and cheap food for the urban masses. The export of agricultural products was subjected to quotas and taxes. Since the mid-1990s, international trading houses like Cargill, Bunge and Archer Daniels Midland and multinationals like Danone and Nestlé have supplied finance, international connections and distribution contracts. 
In 1997 the government eliminated export taxes on commodities, cutting costs by 10-20 percent. A sharp devaluation of the real in 1999 gave a strong push to exports. At 8.8 percent of GDP, agriculture’s share is no higher than in comparable economies, but trade in agriculture and agriculture-related industries accounts for 40 percent of Brazil’s exports. Since the 1970s, the Sao Paulo region has produced sugar-based alcohol (ethanol) to fuel cars. Bagasse, the crushed remnant of sugarcane, is burnt for energy and mixed with urea to feed cattle. At the top of the chain are flex-fuel cars, which burn alcohol and gasoline in any combination and account for two-thirds of new cars sold in Brazil. Global ethanol trade is expected to rise 25-fold by 2020. 

Much of Brazil’s edge in the cost of land and labour is blunted by its shaky infrastructure. Just 10 percent of the country’s roads are paved, compared to 29 percent in neighbouring Argentina. Brazil has neglected its railways. Its navigable rivers do not traverse the heart of the country, but veer off into the Amazon rainforest. Brazil’s agrarian unruliness has a long history. A growing trend is for foreign buyers to join local producers to discipline the supply lines. The intertwined requirements to export, to invest in technology and food safety and to raise external finance are encouraging the emergence of big companies that are transparent enough to withstand public scrutiny. Beef abattoirs have lifted their standards to meet the requirements of European importers. Corporate Brazil is gradually coming of age, as shown by Companhia Vale do Rio Doce (CVRD), privatised in 1997. When it acquired Inco, a Canadian producer, it became the world’s second largest mining company. Gerdau is the biggest producer of long steel products in the Americas, having bought operations in nine countries to overcome trade barriers. Embraer is the Boeing of the regional jet market. The merger of Submarino and Americanas created one of the world’s largest e-commerce operations. Azaleia, Brazil’s largest shoe manufacturer, shut its Brazilian operation and opened a factory in China. Brazil has more than 72,000 industrial enterprises with more than 10 employees. They account for 25 percent of turnover in the corporate sector but only 13 percent of jobs. The middle group of 15,000 firms accounts for 63 percent of turnover and 50 percent of employment. Agriculture, having conquered much of the savannah of the centre-west, is now opening new fronts in the north-east. The old mining centre of Minas Gerais now has a rival in Carajas in the Amazonian state of Para. Industry, having converged on the city of Sao Paulo for much of the 20th century, has been dispersing for decades. 
Brazilians have always associated open space with opportunity, which lures them to their frontiers. This impulse has devastated the Amazon, where 18 percent of the forest has disappeared. The cycle of destruction starts with illegal logging, which etches the first trails into the forest. Land grabbers follow, staking their claim to virgin forest by razing or burning the trees and turning the land into pasture. Then come the planters, who replace pasture with more profitable soya, driving the ranchers deeper into the forest. The pioneers outrace the state’s capacity to enforce the law or bribe the officials covering the frontier areas. New laws now proclaim that no public forests can be privatised and provide for concessions for “sustainable” logging. But fraudulent “logging licences” still abound. To help protect the Amazon rainforests, Brazil recently proposed an international fund to pay it for the forest’s environmental services to the planet. Under the scheme Brazil would be compensated for reducing deforestation below a certain baseline, according to the market value of the carbon sequestered in the intact forest. This would give the standing forest a value to compete with the profits to be gained from its destruction and finance the cost of proper policing. (See Brooke Unger, “Special Report on Brazil”, The Economist, April 14th, 2007, pp.8-11) The discovery in 2007 of the vast Tupi field, offshore oil deposits deep beneath the Atlantic seabed, will be a crucial test of Brazil’s economic prudence. Depending on how it is used, the new wealth could help the country to overcome poverty and under-development – or aggravate its spendthrift ways. Lula described the windfall as “a gift from God” and a “passport to the future”. In August 2009 he unveiled four new bills setting out how the new wealth should be gathered and spent. One bill declares the oil the property of the state, rather than of the companies that buy concessions and retrieve the oil.
In each block, half of any oil produced would go to the state. The remaining half would be subject to a production-sharing agreement between Petrobras and any companies that partnered it, in proportion to their costs. Another bill creates a new state oil company called Petrosal to represent the state’s interests in each block. The state will also inject the monetary equivalent of 5 billion barrels of oil into Petrobras. In addition, provision is made for a social fund to spend Petrosal’s billions. The danger is that the money will be spent rather than saved or wisely invested, further bloating a state whose revenue is already equivalent to 36 percent of GDP, compared to 20 percent in Mexico. In view of the track record of Brazil’s politicians, what looks like a winning lottery ticket can all too easily become a source of corruption.
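As a rough illustration of the split described above, the arithmetic can be sketched as follows. The function name, the partner names and all figures are hypothetical assumptions for illustration; the bills themselves set no such numbers.

```python
# Hypothetical sketch of the proposed production-sharing split:
# half of each block's output goes to the state, and the other half is
# divided among Petrobras and its partners in proportion to their costs.

def split_production(barrels, partner_costs):
    """Return (state share, per-partner shares) for one block's output."""
    state_share = barrels / 2
    total_costs = sum(partner_costs.values())
    partner_shares = {
        name: (barrels / 2) * cost / total_costs
        for name, cost in partner_costs.items()
    }
    return state_share, partner_shares

# A made-up block producing 1,000,000 barrels, with Petrobras bearing
# 60 percent of the costs and a foreign partner the remaining 40 percent.
state, partners = split_production(1_000_000, {"Petrobras": 600, "Partner": 400})
print(state)                   # 500000.0 barrels to the state
print(partners["Petrobras"])   # 300000.0 barrels to Petrobras
```

The cost-proportional division is the key feature: a partner that funds more of the exploration risk recoups a proportionally larger slice of the non-state half.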

Social Structure

Brazil is said to have been dominated for generations by the culture of cordialidade. This meant the interaction of personal bonds with formal rules; universal and egalitarian morality conflicting with unequal and hierarchical morality; and a vibrant private sector coexisting with a sclerotic state. President Lula, who previously presented himself as the scourge of old-style oligarchs, now governs in coalition with their support. During the past decade, efforts to reduce poverty and inequality have been bearing some fruit. Rural areas have benefited from infusions of federal cash. The value of pensions, which is linked to the official minimum wage, has doubled over the past 13 years. Well over half of families now receive the Bolsa Familia, a benefit of up to 95 reais a month that requires parents to keep their children in school and take them to clinics for health check-ups. The Bolsa Familia was first initiated by the Cardoso government, but Lula has expanded its coverage and increased the value of the benefit. It now reaches 46 million people, a quarter of Brazil’s population, making it the world’s largest “conditional cash transfer”. The benefits are linked to the recipient’s behaviour. Poverty and inequality are deeply rooted in Brazil’s history, which is marked by slavery and the obliteration of its indigenous peoples. Its bloodiest conflicts have been internal affairs, pitting classes, regions and races against each other. Today the challenge is to narrow the wide gap between the Brazil of luxurious gated condominiums and the other Brazil of the favelas and untreated sewage. Income distribution in Brazil is more skewed than in any other big country. Violence and pollution are spread even more unequally.
The road that separates Gavea, a wealthy neighbourhood of Rio de Janeiro, from Rocinha, a favela dominated by gangs of drug dealers, marks a nine-fold difference in unemployment, a seventeen-fold difference in income and a thirteen-year difference in life expectancy. Such indicators are correlated with race, but Brazilians argue over whether racial inequality is a cause or a consequence of economic inequality. Narrowing inequalities has become a major challenge for governments since the restoration of democracy. Lula likes to describe himself as “the father of the poor” and claims to be the initiator of many poverty-reducing schemes. The truth is that the recent progress builds on initiatives taken by his predecessor, Mr. Cardoso, himself a scholar of Brazil’s poverty and race relations. Between them, Cardoso and Lula have produced a significant reduction in poverty and inequality. The Real Plan, initiated by Cardoso, prompted a sharp drop in poverty by slashing the inflation that taxes the poor. The first three years of Lula’s government saw a continued decline in poverty levels. The Gini coefficient, which measures concentration of income, fell by 4.7 percent between 2001 and 2005. The share of national income going to the poorer half of society increased from 9.8 percent to 11.9 percent and the share going to the richest tenth fell from 49.5 percent to 47.1 percent. The combined effect of earlier reforms and increased social spending appears to have brought about the improvement. The labour market also played a role by way of an increase in new formal jobs. Greater economic stability gave employers the confidence to resume hiring. The export boom created rural job opportunities. Industries fled from urban obstacles such as crime and trade unions. Inequality of access to education also declined. Federal income transfers, such as the Bolsa Familia, probably had a major impact, particularly outside the metropolitan areas of the south-east.
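The Gini coefficient cited above is a standard summary of income concentration: 0 means everyone earns the same, 1 means all income goes to one person. A minimal sketch of how it is computed, using made-up income figures rather than Brazilian data:

```python
# Illustrative computation of a Gini coefficient from a list of incomes,
# via the mean-absolute-difference formula:
#   G = sum over all pairs |x_i - x_j| / (2 * n^2 * mean income)
# The income lists below are invented purely for demonstration.

def gini(incomes):
    n = len(incomes)
    mean = sum(incomes) / n
    diff_sum = sum(abs(x - y) for x in incomes for y in incomes)
    return diff_sum / (2 * n * n * mean)

print(gini([1, 1, 1, 1]))             # 0.0  -> perfect equality
print(round(gini([0, 0, 0, 10]), 3))  # 0.75 -> highly concentrated
```

A fall in the coefficient, as Brazil recorded between 2001 and 2005, means the income distribution moved a little closer to the equal-shares end of this scale.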
One downside to the income transfers is their political impact. They could easily become a way to buy votes and thus diminish the proper functioning of a competitive democracy. An additional negative result would be the breeding of a dependency culture. A further negative side-effect is the extra burden on the productive taxpaying middle class. In the October 2006 elections, Lula failed to win re-election in the first round of voting. In the second round he regained the initiative by accusing his opponent, Geraldo Alckmin, of plotting to privatise the jewels of state-owned industry. He then polled most votes in regions where government transfers were highest. An anti-tax campaigner, Guilherme Domingos, best summed up the situation as follows: “Brazil is divided between those who depend on the government and those who pay the bills.”

Machine Politics

The malfunctioning of Brazil’s representative political institutions stems in large measure from its peculiar proportional representation electoral system. Political parties are weak and disjointed. There is little connection between voters and their representatives, and ideas or policies barely enter into the discussion. Representatives are perennially accusing each other of corruption. Party-hopping has become part of the political culture. In the Congress that ended in December 2006, no fewer than 195 of the 513 deputies switched parties – some several times. The same Congress also voted itself a 91 percent pay rise, which the Supreme Court promptly overturned. The electoral system weakens the links between voters and deputies. Candidates for “proportional” offices in the lower house of Congress and the state assemblies compete in state-wide contests. Seats are distributed to each party (or electoral coalition) in proportion to the number of votes received by all its candidates. Within that quota, the winning candidates are those individuals with the most votes. It may look fair that both parties and candidates are ranked according to their popularity. But since every vote contributes to a party’s quota, parties try to field as many candidates as possible, even though voters barely know them. Candidates from the same party compete against each other as well as against the opposition. If one candidate gets more votes than he needs (according to a calculated quota) to get elected, his surplus votes go to lesser lights from the same party or coalition. These marginal figures often beat more substantial candidates from less favoured groupings. Most of the 21 parties elected to Congress under this system are little more than “nameplates”. Few of them practise democracy internally in the selection of candidates. By party-hopping after election, deputies sever their last tie with the voter.
The only lasting connection is the one with the network that secured their election. It consists of mayors, special interests and enterprises (which often finance campaigns off the record). Between elections, these networks continue to function, producing contracts and donations that benefit all concerned. The former president, Mr. Cardoso, remarked that Congress is composed of representatives, not of the people, but of vested interests. In 2007, the abortive pay rise caused a backlash, which in turn gave rise to a momentum for reform of the electoral system. A score of reform proposals were circulated in subsequent years, but vested interests stand in the way of changing the constitution. Correcting the worst imbalance would involve changing the rule that each state gets a maximum of 70 and a minimum of 8 seats in the lower house. This ensures that more than half the seats go to the north, north-east and centre-west, which between them have only 41 percent of the electorate.
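The quota mechanics described above can be sketched in simplified form. The largest-remainder allocation used here, and all party and candidate names and vote totals, are illustrative assumptions; Brazil’s actual electoral quotient and coalition rules differ in detail. The sketch does show the effect the text describes: a star candidate’s surplus votes pull obscure running mates into office.

```python
# Simplified open-list proportional representation: seats go to parties in
# proportion to their total votes (largest-remainder method, a stand-in for
# Brazil's quotient rules), then the most-voted individuals within each
# party fill its seats.
from math import floor

def allocate(party_votes, seats):
    total = sum(party_votes.values())
    quota = total / seats
    alloc = {p: floor(v / quota) for p, v in party_votes.items()}
    # hand any leftover seats to the parties with the largest remainders
    leftovers = seats - sum(alloc.values())
    by_remainder = sorted(party_votes,
                          key=lambda p: party_votes[p] - alloc[p] * quota,
                          reverse=True)
    for p in by_remainder[:leftovers]:
        alloc[p] += 1
    return alloc

def winners(candidate_votes, alloc):
    """Within each party, the top individual vote-getters take its seats,
    so a popular candidate's surplus elects lesser-known colleagues."""
    return {party: sorted(candidate_votes[party],
                          key=candidate_votes[party].get,
                          reverse=True)[:n]
            for party, n in alloc.items()}

# Hypothetical state-wide contest for 5 seats.
votes = {"A": 60_000, "B": 40_000}
cands = {"A": {"Star": 45_000, "Mid": 10_000, "Obscure": 5_000},
         "B": {"B1": 25_000, "B2": 15_000}}
alloc = allocate(votes, 5)
print(alloc)                  # {'A': 3, 'B': 2}
print(winners(cands, alloc))  # Star's surplus carries Mid and Obscure in
```

Note how "Obscure", with only 5,000 votes, wins a seat ahead of B2’s 15,000, because every vote for "Star" counted towards party A’s quota.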

The Overbearing State

Brooke Unger’s special report on Brazil, titled “Land of Promise”, states unambiguously that “the biggest enemy of Brazil’s promise is an overbearing state”. But the meddlesome, covetous, inefficient and corrupt state is neither a rarity nor a novelty in Brazil. The origin of the “rent-seeking state” can be traced to 1808, when Napoleon chased Portugal’s royal family to Brazil. Mussolini provided the ideological inspiration for the Estado Novo of President Getulio Vargas in 1937, whose system of labour and industrial syndicates is the basis of today’s labour relations. Since the return to democracy under the 1988 Constitution, excessive government spending and taxation have tended to crowd out the private sector or to drive it into the informal sector. The grey market in Brazil accounts for a much higher proportion of the economy than in Mexico, China or India, according to the International Finance Corporation (IFC) of the World Bank. Informal firms under-invest, weaken their formal-sector competitors and act as a drag on productivity.

Brazil is said to have the most elaborate state apparatus in the developing world. The plethora of institutions, regulations, procedural requirements, levies and fees tends to serve the state apparatus more reliably than it does ordinary Brazilians. Excessive state spending has built up a net public debt running at 45 percent of GDP. It continues to stoke demand while restraining supply. The solutions have been on the agenda for a long time without making much headway: the redesign of the tax system, pensions and labour laws; more flexible spending; more formal autonomy for the central bank. Apart from servicing its debt, the federal government spends money mainly on three things: pensions, transfers to lower levels of government and its own bureaucracy. In none of these areas is spending efficient or equitable. Brazil spends 11 percent of its GDP on publicly financed pensions. The cost of public servants’ pensions is about half that of the private sector’s, but it benefits a group of people only one-eighth the size. The federal government also transfers revenue raised in the rich south-eastern areas to states and municipalities in the poorer north-eastern areas. Experience has shown that revenue without responsibility is not a good idea: the more local governments depend on transfers, the more they spend on administration. Bureaucrats are well paid relative to private-sector workers: on average public servants earn more than twice as much as workers in the private sector and have an easier life. The constitution protects them from dismissal. Merit tends to be measured by the number of training courses they have attended rather than by their competence. Some 20,000 federal jobs are filled by political appointees. The average business firm takes 2,600 hours per annum to process its taxes. Opening a business requires 17 procedures and 152 days. Hiring people is expensive because taxes add 60 percent to salaries and workplace rules are an invitation to conflict.
In 2005 the banking sector was embroiled in 160,000 cases in the labour courts.

The Cardoso-Lula Tripod

President Lula resoundingly won re-election in October 2006, largely on the strength of support from the poor. Their living standards have steadily improved, thanks, in part, to handouts from the central government. Income inequality has slowly begun to shrink. Inflation also declined. Although Lula’s PT party had opposed Cardoso’s Real Plan, Lula defied his companheiros and entrenched stability. He faithfully stuck to the policy “tripod” put in place by his predecessor and political foe, Fernando Cardoso: a primary surplus (before interest payments) high enough to reduce debt as a share of GDP, a floating exchange rate and inflation targets. Assisted by global enthusiasm for Brazil’s goods and financial securities, the Cardoso-Lula tandem brought about a miracle recovery. In 2006 inflation fell to 3 percent, below the target of 4.5 percent set by the central bank. Real interest rates declined to their lowest level since 2001. Export and trade surpluses soared, pushing foreign-exchange reserves above $100 billion. Brazil’s government became an international creditor. Brazilians clamour for good government, less corruption, more accountability and better service delivery, but they persistently overlook the obvious mainspring of their malaise: the size of government. Brazilians only have to keep an eye on the large electronic “impostometro” in the centre of Sao Paulo that adds up the government’s tax take in real time. After the 2006 election, Lula’s finance minister indicated the government’s intention to implement a long-term fiscal programme to bring spending under control. He indicated that his programme relies on economic growth and a cap on public salaries to reduce state spending as a share of GDP. The wishful thinking is that in 2010, when Lula’s second and final term runs out, Brazil may elect as president its first former state governor since Fernando Collor, one who could lead Brazil into an era of stability and growth. Mr. Serra and Mr. 
Neves, both of the opposition Party of Brazilian Social Democracy (PSDB), will put efficiency on the national agenda. Lula’s PT and the PSDB, the likeliest source of an alternative presidential candidate, are political rivals but philosophical kin. The two “liberal” parties felt obliged to excise the word from their names and policy outlines, in effect disavowing the creed that challenges the size of the state. Hopefully democracy and economic change will pull Brazil forward, albeit at a halting pace. But it is important for Brazil to sort out the way it governs itself.

Case Study 2: Mexico

Mexico covers a total area of 1,958,201 square kilometres, situated immediately south of the southern border of the USA. The northern and western parts of the country are arid; the south is tropical and humid. Its population of 110 million includes 60 percent mestizo (Amerindian-Spanish), 30 percent Amerindian, 9 percent white and 1 percent other. Mexico City, its capital, is a sprawling city of 9 million people. Around 90 percent of the population is Roman Catholic. The post-colonial history of Mexico was characterised by the interaction of several major forces. The first was the pervasive Latin American “caudillo” leadership principle (a leader with personal military backing). The second major force was the “Mexican Revolution”, the use of revolutionary symbols by a dominant political party to legitimise its policies. The third force was the sporadic rebellious violence engendered by Amerindian tribal groups in the western redoubts of the country. The fourth was the periodic violence perpetrated by drug cartels using guerrilla-style tactics: kidnapping and intimidating leaders, sending heavily armed battalions to attack police stations and assassinating police officers, government officials and journalists. The most fragile force in the Mexican arena is its democratic system. The first two freely elected democratic governments, hampered by electoral competition and the decentralisation of political power, have struggled during the last decade to disrupt the established payoff systems between drug traffickers and government officials. What has changed in Mexico in recent years is not the escalating drug trade and commercial violence, but that a fledgling market-based democracy has arisen.

Historical Background

Democracy did not come easily to Mexico. For more than two centuries the country has seen long periods of authoritarian rule, punctuated by three civil wars. In the mid-19th century there was a brief period of la reforma under Benito Juarez, a Zapotec Indian, as elected president. Then in 1876 power was seized in caudillo fashion by Porfirio Diaz, who remained president for more than 30 years, until 1911. Mexico was one of the world’s leading petroleum exporters at the time. In 1911 Diaz was overthrown by Francisco Madero. This sparked guerrilla warfare in the north under Pancho Villa and a peasant revolt in the south under Emiliano Zapata. Madero was deposed and 250,000 people died in the ensuing civil war. The chaos, anarchy and violence scarred the country and paved the way for the rise of a powerful corporate state dominated by a single political party. A new constitution in 1917 introduced educational and social reform. Church-state relations deteriorated with the Cristero rebellion led by militant Catholic priests (1926-1929). The year 1929 saw the formation of the National Revolutionary Party (PNR, renamed the Mexican Revolutionary Party, PRM, in 1938 and the Institutional Revolutionary Party, PRI, in 1946), which held power for 71 years. Under the presidency of Lazaro Cardenas, organised labour was encouraged, land reform accelerated, co-operative farms were established and much of the railway system was nationalised. US and British oil companies were expelled and their property expropriated. During World War II, Mexico collaborated with the US war effort and the economy prospered. In the 1970s the economy suffered when the oil price slumped. In 1982 the Mexican government announced that it could no longer service its foreign debt ($80 billion) and was forced by agreements with the IMF to revise its economic policies.

In 1985 Mexico City was struck by a massive earthquake, causing billions of dollars’ worth of damage and killing 7,000 people. In 1989 the PRI suffered its first electoral defeat in 60 years when it conceded a state governorship to the centre-right National Action Party (PAN). Free-market reforms were introduced in the 1990s and Mexico joined NAFTA, a free-market arrangement with the USA and Canada. More than 1,000 state enterprises were privatised. An enormous trade deficit in 1994 led to a loss of investor confidence and pushed Mexico into a severe crisis. The USA provided financial support to strengthen the peso, but the economy remained in the doldrums with rising bankruptcies and unemployment. The crisis was exacerbated by continuing revelations of the PRI’s involvement in drug-trafficking and corruption. Vicente Fox (PAN) was elected president in 2000, ending 71 years of rule by the PRI. However, the PRI retained the majority in both houses of the National Congress. Some state election results were annulled by the Federal Electoral Tribunal on account of widespread irregularities. A major problem surrounded the position of indigenous Indian groups involved in the activities of the Zapatista rebels in the Chiapas area. Government armed forces were obliged to maintain military bases in several traditional areas. Another major problem area was the continued participation of PRI leaders and army generals in drug-trafficking and money laundering. A major crackdown in 2002 on drug cartels saw the arrest of many military, police and political officials involved in the drug trade. Notably, two former generals in the Mexican army were the first senior military officers found guilty of having assisted the operations of the Juarez drug cartel. The former ruling party, the PRI, was fined $90 million for illegally financing its election campaign in 2000 with $45 million in funds transferred from the state-owned oil company Pemex.
The PRI nevertheless won the July 2002 lower house elections with 45 percent of the vote against the 31 percent of President Fox’s support group, the PAN. To complicate matters even more, at a convention attended by over 10,000 supporters, the revolutionary EZLN declared political autonomy in 30 municipalities in the southern portion of the state of Chiapas. In 2006 Felipe Calderon of the PAN was narrowly declared the winner in presidential elections. The opposition candidate, Andres Manuel Lopez Obrador of the PRD, challenged the election results. The seven judges of the Electoral Court ruled that the results of the presidential election were fair. Lopez Obrador and his supporters continued to reject the decision and pledged to establish a “legitimate” parallel government. He held a symbolic inauguration attended by 100,000 supporters, and they claimed to be the true standard bearers of the “Mexican Revolution”.

The Impact of the Mexican Revolution

Historically, the “Mexican Revolution” refers to the civic insurrection of the period 1910-1917, during which the long dictatorship of Porfirio Diaz (1876-1910) was overthrown. A nationwide revolt led to demands for social reform, the restoration of land to Indian communities and an end to foreign domination of the country’s economic life. By 1917 stable government had been restored, but the leaders continued to describe their governments as a continuation of the “revolution”. The concept “revolution” was thus used not only to refer to an historic period of violence and rebellion, but to identify an established political order. In a sense it was seen as a consecration of the political system based on a single party which claimed lineal descent from the uprising of 1910: the National Revolutionary Party, later renamed the Institutional Revolutionary Party (PRI). The party presented itself as the realisation in public policy of the aspirations and objectives unleashed in seven years of civic turmoil. The concept of “institutionalised revolution” enabled the PRI to dominate political life. It regularly won 80 to 90 percent of the vote. The legacy of the Revolution was constantly invoked in political oratory and political symbolism: in literature (Azuela), in art (Rivera and Orozco), in music (Chavez). In science and social commentary, the process and objectives of the Revolution were the predominant point of reference. The Revolution was identified with agrarian reform, social and welfare legislation, the nationalisation of the oil industry and the railroads, and also the secularisation of the church. The “revolutionary regime” has also endorsed large agricultural enterprise, foreign investment and a modus vivendi with Roman Catholicism. In fact, the Mexican Revolution came to connote everything that had happened in the country since 1910. The continuity of the symbol of “revolution” was used to legitimise every policy alternative, depending on the needs of the moment. It became a propaganda tool, cynically manipulated by political elites. The PRI systematically extended its control over Mexico’s territory and people. It quelled political opposition by incorporating important social groups into its party structure: workers, peasants, business people, intellectuals and the military. Its reach went beyond politics: it created Mexico’s ruling economic and social classes. The government granted monopolies to private-sector supporters, paid off labour leaders and doled out thousands of public-sector jobs. It provided plum positions and national recognition for loyal intellectuals, artists and journalists. Famously called “the perfect dictatorship”, the PRI used its great patronage machine (backed by a strong repressive capacity) to subdue dissident voices and to control Mexico for decades. The legacy of the PRI is a country drowning in corruption. Under the PRI the purpose of government was to assert power rather than to govern by law. The vagueness of court proceedings, the graft of police officers and the menacing presence of special law enforcement agencies were essential elements of an overall system of political, economic and social control. The judiciary was not a check or balance on executive power; it was just another arm of the party, used to reward supporters and intimidate opponents. Law enforcement was used to control, rather than to protect, the population.

The Drug Cartels and Violence

Ties between the PRI and illegal traders began in the first half of the 20th century, during the Prohibition years. By the end of World War II, the relationship between drug traffickers and the ruling party had solidified. Through the Ministry of the Interior and the federal police, as well as governorships and other political offices, the government established patron-client relationships with drug traffickers, similar to its relationships with other sectors of the economy and society. This arrangement limited violence against public officials, top traffickers and civilians. Court investigations never reached the upper ranks of the cartels, and so the rules of the game for traffickers became a defined pattern. This compact held sway even as drug production and transit accelerated in the 1970s and 1980s. The long-standing dynamics were disrupted when the PRI’s political monopoly ended. So too did its control over the drug trade. Electoral competition nullified the unwritten understandings, requiring drug lords to negotiate with the new political establishment and encouraging rival traffickers to bid for new market opportunities. Accordingly, Mexico’s drug-related violence escalated first in opposition-led states. After the PRI lost its first governorship in Baja California in 1989, drug-related violence there surged. In Chihuahua, violence followed an opposition takeover in 1992. When the PRI won back the Chihuahua governorship in 1998, the violence moved to Ciudad Juarez, a city governed by the National Action Party (PAN). When Vicente Fox, the PAN candidate, was elected president in 2000, the old model, dependent on PRI dominance, was broken. Drug traffickers shifted their focus to buying off or intimidating local authorities to ensure safe transit of their goods. Democratic competition, ironically, also hampered the state’s capacity to react forcefully. Mexico’s powerful presidency, which used to rely on party cohesion, came to an end.
As Congress’s influence grew, legislative gridlock weakened President Fox’s hand, delaying judicial and police reforms. Conflicts also emerged between different levels of government, as federal, state and local officials frequently belonged to different parties and refused to co-ordinate policies or even to share information. This even led to armed standoffs between federal, state and local police forces, as occurred in Tijuana in 2005. In response to escalating violence between security forces and drug cartels, the Calderon government deployed federal police reinforcements to the state of Sonora and army units to the state of Veracruz in May 2007. But in September the Popular Revolutionary Army (EPR) carried out a series of simultaneous bomb attacks on state-owned gas pipelines. Government action to establish domestic law and order involved some 45,000 troops.

The year 2008 saw an escalation of drug-related violence, with the number of deaths surpassing those of any other year in Mexican history. Disputes between rival criminal gangs led to open gun battles with automatic rifles and rocket-propelled grenades in major city streets – often in broad daylight. Death threats forced dozens of law enforcement and government officials to resign. Extortion rings in many cities preyed on businesses, forcing owners to pay protection money for their operations and employees. Fear of kidnapping plagued the upper, middle and working classes alike. (See Shannon O’Neil, “The Real War in Mexico”, Foreign Affairs, Vol.88, No.4, 2009, pp.64-69)

The Rising Drug Trade

Mexico has a long history of supplying illegal substances to US consumers, beginning at the turn of the 20th century with heroin and marijuana. It continued through Prohibition, as drinkers moved south and Mexican rum-runners sent alcohol north. The marijuana trade picked up in the 1960s and 1970s with rising demand from the US counterculture. Then in the late 1970s and 1980s, US cocaine consumption boomed and Mexican traffickers teamed up with Colombian drug lords to meet the growing US demand. In the 1980s and 1990s the US cracked down on drug transit through the Caribbean and Miami. As a result more product came through Mexico: by 2004, some 90 percent of the cocaine and other drugs entering the USA. This gave more power to Mexican traffickers and more profits to pay for militarised enforcement arms. With an increasingly sophisticated operational structure, Mexico’s drug cartels moved into heroin and methamphetamine products and into the expanding European cocaine market. They extended their influence into source countries such as Bolivia, Colombia and Peru. They established beachheads in Central American and Caribbean nations, where they worked their way into the countries’ economic, social and political fabric. Mexican drug cartels were identified by the US Justice Department in 2008 as the “biggest organised crime threat to the United States”, with operations in some 230 US cities. The main magnet of the drug trade is the lucrative US market. Supply follows demand. Hence the US needs to take a hard look at its own role in escalating violence and instability in Mexico. The US is also the main source of the illegal weaponry used by the drug cartels. Washington needs to inspect freight traffic on the border going south as well as north. Even more crucial is the flow of money. Estimates of the illicit profits crossing the US border to Mexico’s drug cartels range from $15 billion to $25 billion each year.
This money buys guns, people and power: it is wired, carried and transported across the US-Mexican border. Mexican criminal gangs launder the funds by using seemingly legal business fronts such as used-car lots, import-export businesses or foreign-exchange houses. Laundered money not used to pay off officials or finance operations is generally sent back to the US and saved in US bank accounts. Reduced drug demand in the USA would lower the drug profits that bribe corrupt officials, buy guns from US gun smugglers and threaten Mexico’s fledgling democracy. (See Shannon O’Neil, op.cit., pp.69-71)

Economy The Mexican economy is closely linked to that of the USA. Some 80 percent of its exports (well over $200 billion) go to the USA. Mexico’s tourist industry, which brings in $11 billion, depends on 15 million American vacationers each year. The large Mexican and Mexican-American populations living in the United States, estimated at 12 million and 28 million respectively, transfer nearly $25 billion a year to family and friends in Mexico. After Canada, Mexico is the second most important destination of American exports. Around one million Americans now live in Mexico. Nearly one million people and $1 billion in trade cross the border every day. This massive movement of people and goods virtually overwhelms the existing infrastructure and border personnel, leading to long and unpredictable border delays. The US Department of Transportation reported in 2008 that $11 billion more would need to be spent on the US side of the border to catch up with the growing traffic. Oil accounts for more than a third of total government revenue. Yet total tax revenues amounted to only 11.4 percent of GDP in 2004, much less than the average for OECD countries (36 percent) and below even the average for Latin American countries (13.7 percent). One reason is that large chunks of the economy such as food, medicines, agriculture, fisheries and local transport are either exempted from value-added tax or zero-rated. After China joined the WTO, a powerful competitor to Mexico arrived in the US market. The result was three years of economic stagnation in Mexico and a loss of some 700,000 formal jobs – most of them in the maquiladora plants just south of the border producing for export to the USA. Some of those jobs moved to China. Thanks to high oil prices in the second half of Mr. Fox’s term as president, economic growth increased to over 4.5 percent and created almost 1 million formal jobs – almost keeping pace with the growth of the labour force. The car industry started to improve in 2005. As Detroit’s troubled carmakers closed factories in the USA, they were quietly expanding in Mexico – as were Nissan, Toyota and Volkswagen. The chief beneficiaries were Mexican suppliers of car parts. Mexico cannot compete with China’s cheap labour, but it can compete in higher-value goods and where transport costs matter. Competition is not Mexico’s strongest point. The most powerful man in Mexico is said to be Carlos Slim, arguably the world’s third-richest individual. According to Forbes magazine, his net worth of $30 billion puts him behind only Bill Gates and Warren Buffett. His business tentacles extend across large swathes of the economy: Telmex, the telecoms company privatised in 1990, of which Slim’s family holds 48 percent; America Movil, the largest mobile-phone operator; and a string of industrial and retailing businesses. He is the biggest tenant in the country’s shopping centres, with large investments in the oil industry and in Televisa, Mexico’s dominant media business. Telmex is just one example of Mexico’s widespread rule of oligopoly. The country still lacks a competition culture. 
All the existing monopolies are part of the PRI legacy. But the private monopolies pale in comparison with the state monopoly of energy. Since 1938 Pemex has held a stranglehold on the oil industry, from exploratory drilling to refining to deliveries to petrol stations. Pemex works for the government treasury and for its own trade union. The electricity industry is dominated by two state-owned monopolies. The trade unions used to be dominated by the PRI and are treated with suspicion by the PAN. Many of the over-powerful unions derive their power from the monopoly power of their employers. Mexican workers are only one-third as productive as those in the USA. Foreign direct investment has fallen from 3.5 percent of GDP in 1994 to 2 percent a decade later. Mexico has a few big world-class firms. Cemex has grown to become the world’s third-biggest cement company, with factories in 50 countries. Mexican beer has become an export industry, with Corona and Sol as two international brands. The vast majority of Mexican companies are small businesses, many of them operating in the informal economy. According to an estimate of the World Bank, more than half of Mexico’s total employment is in the informal sector. This is also one of the explanations why the tax take is so small. But the size of the informal sector is closely correlated with the tangle of red tape that makes it hard for informal businesses to go legal. The US recession has had a severe impact on Mexico’s economy. The peso skidded to a 16-year low against the dollar and Mexican GDP was expected to shrink by 3.5 percent in 2009. Since some ten million Mexicans still live on less than $2 a day, the downturn was expected to push more Mexicans into poverty, with rising unemployment and an increased danger of social unrest.

Social Security “Some parts of Mexico are more like the USA and some parts more like Central America”, said President Calderon recently. “... It is a clear challenge for me to make them more alike”. Official figures show that one Mexican in two still lives in poverty and in much of the south that figure rises to three in four. Since 1995 average growth in the northern areas has been running at 4-5 percent, while in most of the south and centre it has been more like 1-2 percent.

The nine states of the south and south-east account for almost a quarter of Mexico’s total area and population. They are more Indian and poorer than the rest. Around 45 percent live in small rural settlements, lacking electricity and piped water, and half as many can read and write. They are afflicted by poor schooling, poor communications, lack of investment and, often, reactionary political leadership. All statistical indicators show that poverty and inequality in Mexico have gradually declined over the period since 1950. The first reason for the decline is the millions of Mexicans who go north to the United States or to work in the tomato fields of Baja California; they return richer or send back money. The Inter-American Development Bank estimated that remittances climbed to $24 billion in 2006, about a third more than the flow of foreign direct investment. The second reason why people are better off is a means-tested anti-poverty programme. It was pioneered by President Zedillo’s government and greatly expanded and renamed Opportunidades under Mr. Fox as president. Opportunidades pays mothers a monthly allowance provided they keep their children in school and take them for regular health checks. In some areas around 70 percent of families receive help from Opportunidades; for the country as a whole around 25 percent of families (some 5 million) are supported, at a cost of $2 billion a year. While alleviating poverty, the programme’s main aim is to prevent it in the next generation: to expedite Mexico’s transition to a labour force that has finished secondary school. Reforms introduced by Mr. Fox make it obligatory for formal private-sector workers to contribute to the Mexican Institute for Social Security (IMSS), which provides pensions and health care. They are also obliged to contribute to a housing fund. Between them, employers’ and employees’ contributions add up to a hefty 35 percent of wages. 
The IMSS is one of Mexico’s union-driven monopolies. It administers the state pension scheme and is the largest single provider of health care in North America. Its trade union is the second biggest after the teachers’, with 380,000 members. The pension scheme has been switched from a pay-as-you-go system to individual capitalised accounts, with one embellishment: the government makes a contribution to each account (at a total cost of $1.5 billion a year) to augment the amount payable. The Afores, as the new pension funds are called, manage $61 billion in assets and have provided a natural market for the government’s long-term peso bonds. The IMSS’s own employees, as well as other public-sector employees, have a separate social-security institute funded by a percentage of the contributions private-sector workers are obliged to make to the IMSS! Mr. Fox made an effort to switch public-sector employees and those of the IMSS to a scheme similar to the one used by the private sector. The unions went on strike and forced Mr. Fox to cave in. Since the IMSS covers only formal-sector workers (13 million workers, or 30 percent of the workforce), Mr. Fox tried to expand a range of non-contributory social-protection schemes for workers outside the IMSS. The government added a pension scheme to Opportunidades in 2006, under which it contributes slightly more generously than it does to formal-sector pensions. More than 90 percent of those who get Opportunidades have never worked in the formal sector. Mr. Fox also launched a non-contributory health-care programme called Seguro Popular for those outside the social-security system. In 2006 public spending on health for workers in the informal sector totalled 131 billion pesos ($12.1 billion), against 107 billion pesos for those in the IMSS. Since 1998 public spending on social protection for informal workers has expanded by 110 percent, while general public investment in infrastructure has risen by only 0.8 percent.

Current Trends Calderon was elected by Mexico’s burgeoning middle class, now nearly one-third of the population. In a society long noted for the disparities between the extremely wealthy and the desperately poor, Mexico now has a rapidly expanding economic middle class. The boom in emigration to the USA has brought in billions sent back to families at home. Middle-class families work in small businesses, own cars and homes and strive to send their children to university. They are now also increasingly demanding public transparency, judicial reform and personal security. After the 2006 election Calderon’s PAN lost its majority to the PRI. As a result Calderon had to settle for only modest reforms in the fields of energy, education, pensions and public finances for the remainder of his term of office until 2012. Battered by the recession, tax revenues have plunged, with unemployment and under-employment soaring. At the same time, the recession has increased the demand for social spending. The President expressed his determination to press on with his campaign against organised crime and corruption and to raise spending on Opportunidades. He will probably receive grudging support from the PRI for unpopular reforms, to avoid exposing the party to the charge of putting partisan advantage ahead of the national interest at a difficult time for the country.

Case Study 3: Colombia

Colombia is situated in the extreme north-west of South America, with a Caribbean and a Pacific coastline and a number of island territories in both. Its total area is 1.1 million square kilometres, divided into 32 departments. The Colombian Andes run north-south, separating the densely forested and sparsely populated Llanos and Amazonian lowlands in the east from the Pacific and dry Caribbean coastal plains in the west and north-west. Three mountain ranges run roughly parallel, from west to east the Cordillera Occidental, the Cordillera Central and the Cordillera Oriental. The major river systems are the Putumayo, Meta, Orinoco and Amazon. The climate is predominantly tropical. The total population numbers 45 million, of whom 90 percent inhabit the temperate Andean valleys and the Eastern Cordillera plateau. The ethnic composition includes 58 percent mestizo, 20 percent white, 14 percent mulatto, 4 percent black, 3 percent black-Indian and 1 percent Indian. Several indigenous groups are involved in ongoing territorial disputes over the expropriation of traditional Indian “resguardos”. The capital city, Santa Fe de Bogotá, has a population of around 7 million people. Around 90 percent of the population are Roman Catholic.

Economy In 2007 the GDP stood at about $160 billion, or $3,600 per capita. Industry contributes 50 percent of GDP (19 percent of the workforce), agriculture 12 percent (23 percent of the workforce) and services 38 percent (58 percent of the workforce). More than 50 percent of the country is forested. Its exports, worth $29 billion, include petroleum, coffee, coal, apparel, bananas and cut flowers. The USA is its main export destination.

Political History Throughout the first half of the 20th century, Colombia was generally considered a politically mature country: a progressive nation where democracy was firmly established. It was thought to have overcome its 19th-century legacy of chronic civil strife and achieved stable, responsible government. The military had not intervened in politics since 1906, and high-calibre civilian statesmanship seemed to be the rule: Alfonso Lopez, Eduardo Santos and Alberto Lleras Camargo. Democratic institutions were generally respected. The peaceful alternation in office of two well-developed political parties, the Liberals and the Conservatives, was setting a new, unusual pattern for Latin America. The urbane, sophisticated middle class controlled the balance of power between the parties. From 1903 to 1930 the Conservative Party was dominant; then, until the late 1940s, the Liberal Party was in control. In 1946, Colombia’s democratic reputation was shattered. For more than a decade the country was immersed in chronic civil strife, causing over 200,000 casualties among a population of some 14 million. It was a period of civic breakdown, savagery, bloodshed and guerrilla warfare that became known as la violencia. The civic disorder only started to ameliorate in 1958 with the establishment of the civilian National Front government. Thereafter sporadic outbreaks of violence continued as an undercurrent of Colombian history throughout the twentieth century.

126

Causes of La Violencia Many factors contributed to the eruption of civic disorder:
1. Mass uprising against the rigidities of an oppressive and unjust social order in which social and political advantage was monopolised by a small “oligarchy”, while the great bulk of the population subsisted in dire poverty.
2. In the countryside, party identification tended to reinforce other social cleavages – of community, family and region – cleavages that carried a component of conflict as part of the legacy of the civil wars of the 19th century. Rural communities were almost completely composed of adherents to one party or the other. In many cases, politics was inherited with the honour of the family.
3. In the 20th century, the Colombian parties grew larger and more aggregative, becoming coalitions of quite diverse ideologies. The classic Liberalism of the Liberal Party, espoused by landowners and the newer industrial and commercial interests, came into conflict with the doctrines of social reform preached by a new generation of political leaders. These new leaders, influenced by aspirations as disparate as Marxism, socialism, Peruvian Aprismo and the New Deal, brought about party fragmentation.
4. The potential for violence increased when the heritage of partisan identification at the local level was no longer held in check by the “gentlemen’s agreement” at the national level. National leaders started to mobilise mass support for a contest no longer bound by the historic rules.
(See McDonald, A.F., Latin American Politics and Government, New York: Thomas Crowell, 1949, pp.377-390)

Drug Violence and Politics The National Front (1958-1974) divided power between the Liberals and the Conservatives, with the presidency alternating between the two parties. Constitutional reform in 1968 allowed new parties but maintained the detente between Liberals and Conservatives. In the late 1970s, left-wing guerrilla groups emerged, most prominently the National Liberation Army (ELN), the Revolutionary Armed Forces of Colombia (FARC) and the April 19 Movement (M19). Guerrilla actions caused major disruptions and left many dead. In response, para-military death squads emerged, targeting the enemies of the drug cartels. In 1989 new measures were put in place, including the extradition of drug traffickers and murderers to the USA. The drug cartels continued to kill hundreds of people in campaigns of assassination and terror. In 1990 the Liberal Party candidate, César Gaviria Trujillo, was elected president. He offered drug traffickers the chance of a “plea bargain” with reduced sentences. Pablo Escobar, head of the Medellín cartel, and several of his lieutenants surrendered. They escaped again in 1992 and drug-trafficking continued unabated. In 1993 peace agreements between the government and the guerrilla groups FARC and ELN broke down. Renewed attacks on the electricity grid and the petroleum infrastructure caused an energy crisis. After the death of Escobar, the government turned its attention to the Cali cartel, but violence and assassinations continued. In 1994 the USA initiated a new aid programme aimed at assisting the Colombian government in its efforts to stamp out the drug cartels. In 1995 many Cali cartel leaders were arrested, but a new Antioquia drug cartel arose from the old Escobar group. Many smaller organisations also emerged and the area under coca cultivation increased fourfold. The cartels were acting as major financial contributors to the political campaigns of the Liberal Party. 
The Conservative Party leader Álvaro Gómez Hurtado was assassinated and the Liberal Party leader, Ernesto Samper, became president. He was subsequently accused of illegal enrichment, electoral fraud, conspiracy and falsification of documents. In the October 1997 election the Liberal Party increased its majority and the accusations were quashed. The three-way conflict pattern continued: the government alternating in the hands of the Liberal and Conservative Parties, the two major guerrilla groups (FARC and ELN) each dominating in their own geographical areas, and the para-military forces mobilised by the drug cartels to target officials and politicians considered enemies of or obstacles to the narcotics trade. The USA poured billions into efforts to contain the narcotics trade: $1.6 billion in 2000 to strengthen the military and $7.5 billion under Plan Colombia to reduce the cultivation and trafficking of narcotics. Efforts to obtain the support of FARC and ELN were unsuccessful. Instead FARC and ELN imposed a “peace tax” on all companies with assets of more than $1 million, enforced through the kidnapping of managers or owners who failed to comply. The EU and the Inter-American Development Bank announced a $300 million aid package aimed at promoting coca-replacement crops. The USA classified the United Self-Defence Forces of Colombia (AUC), a para-military group, as a terrorist organisation, leaving the group’s US assets subject to seizure.

Recent Trends In May 2002 Alvaro Uribe Velez, standing for the right-wing Colombia First movement, was elected President. Following a series of attacks by FARC guerrillas on Bogota, he declared a state of emergency. After a new financial and military aid agreement with the USA, the government launched a major offensive against FARC and ELN as well as the narcotics cartels. US Special Forces troops arrived in Colombia to train local forces in counter-insurgency techniques and to assist in the defence of a pipeline owned by US-based Occidental Petroleum. President Uribe launched major social and infrastructure projects designed to improve living standards and social welfare. The para-military AUC was demobilised. In 2005 the ELN leftist guerrillas announced their agreement to a ceasefire. The Colombian armed forces continued their campaign against FARC. President Uribe was re-elected to a second term in 2006. It became clear that the FARC forces were supported by Chavez of Venezuela and by Ecuador. Colombian officials revealed in July 2009 that three anti-tank rocket launchers (sold by Sweden to Venezuela in 1988) had been found in a camp belonging to FARC guerrillas. The leaders of Chile and Brazil showed no outrage; instead they expressed unease over a pending deal that would give the United States use of several Colombian air and naval bases. Colombia had offered the Americans facilities at Palanquero, its main air-force base, to replace the American base at Manta in Ecuador, whose lease was not renewed by Rafael Correa, Ecuador’s left-wing president. Manta was used by American AWACS aircraft for surveillance of drug trafficking in the eastern Pacific. The agreement also formalised facilities used by American trainers and surveillance planes that help Colombian forces in anti-drug operations under Plan Colombia. 
Although violence continued in remote rural areas, President Uribe succeeded during his two terms of office in rescuing Colombia from decades of guerrilla and para-military violence. Even the economy held up surprisingly well. In view of his popularity, his supporters started a campaign to call a referendum to change the constitution to allow Mr. Uribe to run for a third consecutive term of office.

Case Study 4: Cuba

Lying 217 km south of Florida in the Caribbean Sea, Cuba comprises two main islands and dozens of small islets covering a total area of 115,704 square kilometres. Its climate is subtropical, with warm temperatures and high humidity, and Cuba is exposed to frequent hurricanes. Its total population was estimated at 11,423,000 in 2008. Its ethnic composition was 51 percent mulatto, 37 percent white, 11 percent black and 1 percent Chinese. Roman Catholics totalled 85 percent, 3.3 percent were Protestants and 1.6 percent Afro-American Spiritualist.

Economy In 2007 the GDP was estimated at $51 billion, with per capita income at $4,500. Industry accounted for 26 percent of GDP (14 percent of the workforce), agriculture for 6 percent (21 percent of the workforce) and services for 68 percent (65 percent of the workforce). Cuba is the world’s third largest producer of sugar, which represents, by value, about 50 percent of the country’s exports. A total of 28 percent of the country is arable, 6.5 percent is under permanent cultivation and a further 27 percent is used for meadows and pasture. With 25 percent of the country forested, Cuba has extensive and valuable forest resources and, with an annual catch of around 60,000 tonnes, fishing is a major export industry. The state provides around 75 percent of all employment.

Political History Cuba was subject to authoritarian Spanish colonial rule until the USA declared war on Spain in April 1898. After a brief campaign, Cuba was captured and placed under US military control until an independent government could take over. US forces left the island in 1902, after Cuba’s acceptance of an American military base on the island. The corrupt dictatorship of Gerardo Machado led to a revolution in 1933. Army sergeant Fulgencio Batista led a mutiny against senior officers and forced the government to resign, installing his own puppet president. Chronic corruption gave Batista the pretext to seize power again in a coup in 1952. Increased repression engendered growing support for the guerrilla campaign waged by Fidel Castro from 1954. At the end of 1958 the regime disintegrated, and Batista and his family fled the country. On 1 January 1959 Castro’s army captured Havana.

Castroism Castro became Prime Minister, using the Communist Party as his vehicle of government, and suspended the constitution. He instituted wide-ranging reform programmes, including the nationalisation of foreign-owned land and enterprises. Many of those who opposed his policies went into exile in the USA. The US severed diplomatic relations early in 1961, and a US-sponsored invasion by anti-Castro exiles was defeated at the Bay of Pigs in April 1961. The external threat allowed Castro to strengthen his position internally, and in December 1961 he declared Cuba a Communist state in close alignment with the USSR. Castro vowed to bring revolution to neighbouring countries in Latin America. His alignment with Moscow led to the discovery in 1962 of Soviet missile bases in Cuba. A US blockade of the island followed and produced the most serious super-power confrontation of the decade. The USSR withdrew its missiles and became Cuba’s principal trading partner. In 1965 Castro renamed his party the Communist Party of Cuba (PCC) and banned all other parties. In October 1967 Castro’s revolutionary associate, Che Guevara, was killed while involved in guerrilla operations in Bolivia. The PCC’s first congress was held in December 1975 and a new constitution was approved by referendum in 1976. The party was institutionalised along Soviet lines, with Castro as President of the Council of State. Cuban troops became involved in Angola in 1976 and in Ethiopia in 1977 in support of Castro’s internationalist aims. In 1980 a mass exodus took place when an estimated 125,000 Cuban refugees fled to the USA. The third congress of the PCC in 1986 strengthened the influence of Gen. Raúl Castro, brother and deputy of Fidel Castro. The decline of the influence of the USSR in the 1980s and Castro’s opposition to the glasnost and perestroika policies of Mikhail Gorbachev led to Cuba’s growing isolation and impoverishment. 
The USA gradually tightened trade restrictions on Cuba in an effort to weaken Castro’s hold on power. In 1992 the PCC approved reforms to the 1976 constitution which updated social and economic legislation but preserved the island’s one-party Communist system. A number of reforms were introduced in 1994, including the legalisation of self-employment, free markets for farm produce and autonomous agricultural co-operatives. Offshore petroleum discoveries by foreign companies also lifted the prospects of reducing the chronic fuel shortages and accelerating economic growth. Cuba also joined the Association of Caribbean States, a regional commercial grouping. Cuba also initiated the Juragua nuclear power project and renewed efforts to substitute bagasse (a sugarcane by-product) for petroleum as fuel. Pope John Paul II visited Cuba in January 1998 in an effort to strengthen the Catholic Church in its struggle to survive in a Communist state. In 2001 Castro signed an agreement with Communist China under which Cuba received $374 million in “government credits”.

In 2006 Castro ceded power to his brother Raúl Castro after undergoing serious stomach surgery. In 2008 Raúl was formally installed as President.

Current Trends In July 2009 Cuba celebrated the 50th anniversary of its revolution with a mass rally under a huge banner of Fidel and Raúl Castro thrusting their arms skyward above the words “Vigorous and Victorious Revolution Keeps Marching Forward”. The hubris of the celebrations could not obscure the many obvious signs of degeneration and hardship in Cuba: poor infrastructure, a huge trade deficit, food shortages, declining education and health spending and chronic power cuts. In 2008, Cuba imported 80 percent of its food, much of it from the USA. Inefficient state farms occupied three-quarters of the best land but left most of it idle. Raúl Castro offered land to private farmers, but the scheme has been slow to get started. On taking power, Raúl Castro spoke of a “change of structure and concept” in the economy, raising hopes that Cuba would imitate China and Vietnam in moving to a capitalist economy under communist political control. Those hopes have not yet been met. Raúl Castro told his party congress: “I was elected to defend, maintain and continue perfecting socialism, not to destroy it”.

Conclusions

Although Latin America’s economies have improved over the past two generations, they have lagged behind the pace of the developed world and have been comprehensively outperformed by most East Asian economies. They have been left behind in terms of all the key determinants of productivity growth: capital, labour, technology and entrepreneurship. A large proportion of Latin American businesses are small, inefficient enterprises operating in the informal sector. The region’s infrastructure services are clogged and inefficiently managed, its labour forces are not properly trained, its public services remain inefficient and corrupt, and its regulatory structures are inordinately complex and convoluted. Latin America will remain left behind unless corrupt governments, inefficient bureaucracies and organised crime are significantly brought under control. Without a civilised order, productivity growth, the driving force of development, will remain an elusive ideal.

References

Anderson, C.W. et al. (1967) Issues of Political Development, New Jersey: Prentice-Hall
Blainey, G. (2000) A Short History of the World, Australia: Viking Penguin Books
Ferguson, N. (2004) Empire – How Britain Made the Modern World, Penguin Books
Fukuyama, F. (ed.) (2008) Falling Behind: Explaining the Development Gap Between Latin America and the United States, London: Oxford University Press
Gunther, J. (1941) Inside Latin America, New York: Harper & Row
McDonald, A.F. (1949) Latin American Politics and Government, New York: Thomas Crowell
Needler, M.C. (ed.) (1964) Political Systems of Latin America, N.J.: D. Van Nostrand
O’Neil, S. (2009) “The Real War in Mexico”, Foreign Affairs, Vol.88, No.4, pp.63-77
Reid, M. (2006) “Time to Wake Up – A Survey on Mexico”, The Economist, November 18th, 2006, pp.3-13
Unger, B. (2007) “Dreaming of Glory – A Special Report on Brazil”, The Economist, April 14th, 2007, pp.3-16
The Economist (2005) “Argentina’s Debt Restructuring”, The Economist, March 5th, 2005, pp.65-67

The Economist (2005) “Poverty in Latin America”, The Economist, September 17th, 2005, pp.39-42
The Economist (2005) “Democracy’s Ten-Year Rut”, The Economist, October 29th, 2005, pp.45-46
The Economist (2005) “Special Report on Brazilian Agriculture”, The Economist, November 5th, 2005, pp.69-71
The Economist (2009) “Latin America’s Economies”, The Economist, May 2nd, 2009, pp.39-46
The Economist (2010) “Latin America’s Unproductive Economies”, The Economist, March 27th, 2010, pp.42-43


6 The Plight of Sub-Saharan Africa

In terms of independent countries with flags at the United Nations, Sub-Saharan Africa comprises 48 countries with a total population which exceeded 800 million in 2007. It excludes the Mediterranean countries at the northern tip of the African continent (Morocco, Algeria, Tunisia, Egypt and Libya), which are dealt with separately as part of the Islamic World. With the exception of South Africa, the populations of all the countries included are predominantly “Black African”. The total “White African” population in the area is less than 5 million, of which about 90 percent live in South Africa. Almost all the bottom places in the world league tables are, sadly enough, filled by Sub-Saharan African countries. Of the 160 countries on the United Nations annual development index, 32 of the lowest 40 are in Africa. The gap between them and the rest of the world is widening. At least 45 percent of Africans live in poverty, and they need growth rates of at least 7 percent to cut that figure in half in 15 years. With the exception of South Africa, growth rates in the region have been less than 3 percent over the past half century – calculated from very low bases. Poverty, ignorance, disease, brutality, despotism and corruption can be found in many parts of the world – but in Sub-Saharan Africa these problems exist in combination and have taken on endemic proportions. Many explanations are advanced by scholars: lack of the basic structures needed to develop, social and cultural constraints, lack of self-confidence, bad governance and leadership, predatory conflicts, tribalism, etc. Africans prefer to put the blame on their colonial past: the slave trade, exploitative trade relations, capitalism, white racism, etc.

Prehistoric Origins

Prehistoric archaeological research has found that the emergence of Homo habilis, a forerunner of Homo sapiens, from ape-like pre-human forms took place in East Africa over two million years ago. By one million years ago, Homo habilis had evolved into the bigger-brained hominid Homo erectus, who seems to have spread beyond the African continent, for his remains have also been found in Java and China. By about 200,000 years ago, Homo sapiens, the anatomically modern human species, had emerged, and in time spread over most of the globe. (Peter Bogucki, The Origins of Human Society, Massachusetts: Blackwell Publishers, 1999, pp.29-126) The human diaspora to different regions of the world brought in its wake a divergence in physical types, presumably due to environmental factors and isolation. Physical anthropologists distinguish four categories of Homo sapiens: the Caucasoid (or Indo-European), Negroid, Mongoloid (Asiatic, Amerindian and Polynesian) and Australoid (Australian Aborigines). (See Hammond-Tooke, D., The Roots of Black South Africa, Johannesburg: Jonathan Ball, 1993, p.23)

Settlement Patterns

David Hammond-Tooke’s research revealed that about six or seven thousand years ago, Africa was inhabited by the ancestors of the four main “racial” types regarded as indigenous to the continent: the Khoisan (“San” and “KhoiKhoi”), the Pygmies of the equatorial forests, the Caucasoid “Hamites” and the numerically preponderant Negroes. The Hamites appear to have entered Africa from the north-east in late Stone Age times (their languages are distantly related to the Semitic languages), but the other three evolved out of older African stock. Recent research indicates that the Khoisan and Pygmies are in fact derived from Negroid stock and are evolutionary adaptations to different environments – open grasslands, in the one case, and equatorial forests, in the other.

The Negro physical type arose in the narrow savannah belt north of the equatorial forest at a time when the southern Sahara enjoyed much more rain than it receives today. These communities followed a hunter-gatherer style of life, living on the forest margins, but later became associated with the domestication of plants and animals. They adopted the cultivation of crops (sorghum and millet) in the light woodland savannah from Senegal to the upper Nile. The forest food crops grown in Africa were mainly of South-East Asian origin – the banana, yam and coco-yam. Only in the 16th and 17th centuries AD were maize and cassava introduced from the Americas. The Negro communities north of the equator spoke dialects belonging to two main ancient language groups: the Eastern Sudanic languages, spoken to the north of the equatorial forest (from the Nile to Lake Chad), and the Western Sudanic, spoken west of the lake. In sharp contrast, the Negro communities south of the equator developed what are today called the “Bantu” languages. Associated with the Iron Age, the people living south of the equatorial forests, probably in the savannah highlands of Katanga (in the southern Congo) and the region of the Great Lakes to the east, spoke closely related Bantu languages. As a result of fast population increase, Bantu-speakers migrated east and south. These communities reached what is now northern Angola and East Africa by the early centuries of the first millennium AD. From there people continued to spread southwards into savannah environments with good grazing, arable soils and adequate rainfall for crops such as sorghum, millet and various legumes. In time the various Iron Age communities fragmented into numerous social entities speaking different dialects. Most were small kingdoms or chieftaincies geared to survival in Africa’s fickle climate.

The European Spice Traders

The discovery of the magnetic compass by the end of the 13th century enabled seafarers to undertake long voyages out of sight of land. The first Europeans to navigate their way along the West Coast of Africa were the Portuguese, searching for a sea route to the Far East. Bartholomew Diaz de Novaes set out from Portugal in August 1487. En route, he touched points along the African continent never before seen by Europeans and, after rounding Cabo Tormentoso (Cape of Storms), he landed at Mossel Bay, pressed on as far as the river he named Rio do Infante, and triumphantly returned home to report that it was possible to round the southern tip of Africa. The first business expedition to the East was led by Vasco da Gama, who left Lisbon in 1497 with four ships. After several months he reached the port of Mozambique, where he encountered Muslim traders. His small fleet progressed northwards, where he discovered a thriving Arab port at Mombasa. At Malindi he employed a Hindu pilot to take him across the Arabian Sea to Calicut in India. In 1581 Jan Huygen van Linschoten became the first Dutchman to complete a return voyage to India. Then in 1595, Cornelis and Frederik de Houtman also completed a successful return voyage and thus started the Dutch involvement. Soon the need arose for a halfway station where sailors, desperately affected by scurvy, could find fresh fruit and vegetables. By the 1650s, the Dutch were the world’s leading trading nation and the Vereenigde Oostindische Compagnie (VOC) the world’s largest trading enterprise, with a strong trading post in Jakarta (Batavia). In 1652, Jan van Riebeeck established a refreshment station at Cape Town. Apart from the area around Cape Town, Europeans initially had limited contact with the African interior. There were few natural harbours and much of the coast was either dry desert or wet jungle, making access hazardous. From the 1760s onward, Europeans made determined efforts to explore the interior of Africa.
The major rivers of the Niger and the Nile were mapped by the British, and the French explored the Sahara Desert and reached Timbuktu, a city famous as a centre of trade and learning since the 1300s. Later, in the 1800s, the Scotsman David Livingstone, a missionary and a doctor, explored the Zambezi River and travelled the continent from east to west. He was followed by the American adventurer Henry Stanley, who explored the Great Lakes region and then sailed down the last unknown river in Africa, the Congo. Both Livingstone and Stanley are thought to have paved the way for the European colonisers in their “scramble for Africa” in the late 19th century.

The Slave Trade

The first slaves were shipped out of Africa by the Arabs more than 1,000 years ago. Some local rulers grew rich by selling captured enemies into slavery. The first organised transport of slaves to the Americas was undertaken by Spanish traders in 1518 to work the gold mines of South America under the conquistadors. The Portuguese used African slaves on their sugar plantations in the Atlantic islands of Madeira, Cape Verde, Sao Tome and the Azores. But as the Portuguese set up sugarcane production in their new acquisition of Brazil, the market for slaves was established. By 1550 Brazil was the world’s largest exporter of sugar and the largest importer of slaves. Other European nations soon joined in: British, Swedes, Danes, Spanish, French and Dutch. These trading nations built over 30 slave forts in the Gold Coast (now Ghana) alone. The “triangular trade”, as it was known, involved slave-ships leaving European ports for west Africa with rum, guns, textiles and other goods to exchange for slaves, who were then transported across the Atlantic to be sold to plantation owners, the ships then returning with sugar and coffee. It is important to note, however, that the pernicious trade could not have existed without the support of African chiefs and traders. Slaves were brought to the coastal trading forts by African agents or collaborators. Many of the slaves sold to the European traders were men and women captured in battles between tribes – like the Asante and the Akan in the Gold Coast. By the mid-18th century, Britain was the biggest slaving nation. Ports like Bristol, Liverpool and London thrived as a result. In Britain, many important people were involved in the slave trade: the royal family, the Church of England and politicians like William Gladstone – himself the son of a plantation owner. Since there was no slavery in Europe itself, people were largely ignorant of its nature and scale – not unlike the Germans vis-à-vis the Holocaust.
It became the task of the abolitionists to expose the shameful reality of the trade to an ignorant public. During the period 1701 to 1810, millions of Africans were sent to the Americas: estimates of the numbers involved vary from 7 million up to 20 million. Britain abolished the slave trade in 1807 and slavery itself in 1833; other European nations followed over subsequent decades, Portugal only in the 1860s. The Arabs continued the trade until 1873, when the last remaining slave market in Zanzibar was closed. (See The Economist, “Breaking the Chains”, February 24th, 2007, pp.55-57)

The Colonial “Scramble for Africa”

In November 1884, the German Chancellor, Otto von Bismarck, convened a conference on Africa in Berlin, ostensibly intended to ensure free trade in Africa. The real purpose of the conference was to “define the conditions under which future territorial annexations in Africa might be recognised” – in effect, a charter for the partition of Africa into spheres of influence based on nothing more legitimate than their effective occupation. Historian Niall Ferguson describes the course of events in the following terms: “Across Africa the story repeated itself, chiefs hoodwinked, tribes dispossessed, inheritances signed away with a thumbprint or a shaky cross and any resistance mown down by the Maxim gun.” (Empire – How Britain Made the Modern World, Penguin Books, 2004, p.239) Ferguson goes on to describe how one by one the nations of Africa were subjugated – the Zulus, the Matabele, the Mashonas, the kingdoms of Niger, the Islamic principality of Kano, the Dinkas and the Masai, the Sudanese Muslims, Benin and Bechuana. By the beginning of the next century, the carve-up was complete. The British had all but realised Rhodes’ vision of unbroken possession from Cape to Cairo. Their African empire stretched northwards from the Cape Colony through Natal, Bechuanaland, Rhodesia and Nyasaland, and southwards from Egypt through the Sudan, Uganda and Kenya. Tanganyika, the German possession, was the only missing link. The Germans had South West Africa (Namibia), Cameroon and Togo. Britain had also acquired the Gambia, Sierra Leone, the Gold Coast and Nigeria in West Africa, as well as the north of Somaliland.

But West Africa was mostly in French possession. From Tunis and Algeria in the north, downwards through Mauritania, Senegal, French Sudan, Guinea, the Ivory Coast, Upper Volta, Dahomey, Niger, Chad, the French Congo and Gabon, the greater part of West Africa was in French hands. Their only eastern possession was the island of Madagascar. Besides Mozambique and Angola, Portugal retained an enclave in Guinea. Italy acquired Libya, Eritrea and most of Somaliland. The Belgian King owned the vast territory of the Congo. Spain had Rio de Oro. Africa was almost entirely in European hands, and the lion’s share belonged to Britain. Small wonder that the British began to assume that they had a God-given right to rule the world.

The Legacy of Colonialism

Africa’s nation-states are artificial creations of the colonial era. Colonial boundaries, which were to define the borders of the new states, had been set quite arbitrarily by the European powers. European negotiators of the nineteenth century lacked detailed knowledge of African physical and cultural geography and tended to draw boundaries in terms of parallels and meridians, straight lines, arcs of circles, or by reference to topographical features like rivers and valleys. These boundaries were artificial and arbitrary, cutting across ethnic, tribal and linguistic areas. Tribal groups, which even today are a major force to be reckoned with in almost every black African country, were divided among two or more countries (e.g. the Fulani, the Bakongo, the Luo, the Tswana, etc.). The peoples living within any given colony hardly had any sense of community and common identity – let alone considered themselves a nation. (Kotecha, K.C. and Adams, R.W., African Politics: The Corruption of Power, 1981, p.55) Colonial powers tended to look upon their colonies as suppliers of raw materials for their own advanced industrial sector and as markets for the finished goods. In the process the colonies benefited by receiving foreign capital investment in extractive industries such as mining, oil drilling and plantations to produce raw materials. This, in turn, resulted in infrastructure development such as communication systems, roads, railways and port facilities. But the industry introduced by colonialism remained dependent on the colonial power for its markets, its investment capital and its technology. What was drawn from the colony was raw material and labour, and the relatively cheap availability of these made colonialism worthwhile to colonial interests. Colonialism certainly acted as a strong modernising force. But it came to the colonies as a force from without, not from within.
In large measure the modernisation did not develop from within the colonies through the operation of forces native to those societies. In the process of modernisation the pre-colonial history and traditions were largely eliminated. The European colonists found willing accomplices among Africa’s European-orientated elites. These modernising African elites tended to support the notion that anything traditional was by definition primitive. They tried to transplant European models to Africa and neglected to build indigenous models. Rather than build on tradition, the new Africans often sought to purge what was authentic in their own cultures. The African colonial experience is significantly different from the East Asian colonial experience. A common feature of Western colonisation in the societies of East Asia was that it never managed to supplant historical tradition – be it the emphasis on education, the hierarchical social structure or the religious traditions of Confucianism, Buddhism and, in Indonesia and Malaysia, Islam. East Asian countries also emerged, as Africa did, from colonial tutelage. Some had no natural resources and were as divided along ethnic, religious and linguistic lines as many African countries are today. Yet, with the exception of those who chose the authoritarian communist path – such as Cambodia, Vietnam and Laos – the South East Asians have prospered. They have diversified away from reliance on single-commodity exports, adopted “outward” development policies and privatised land holdings. Africa, on the other hand, pursued “inward” economic policies, based on trade restrictions and overvalued currencies. East Asia has become a model of economic success, while Africa has seen increasing poverty, hunger and economies propped up by foreign aid.

After independence many African countries became ravaged by socialism, the ideology then in vogue throughout Europe. African varieties were developed, such as Kaunda’s “humanism” in Zambia and Nyerere’s “ujamaa” in Tanzania. The result was government ownership of most enterprises and a distrust of private sector initiative and of foreign investment.

Bad Government

Africa’s political liberation struggle against colonial rule was driven by nationalism – a drive for collective self-determination against alien rule. Almost no price was too high to achieve collective freedom. But collective self-government is not synonymous with democratic individual self-fulfilment or with good government. The problems started in large measure as a result of the gap between national aspirations and the practical problems involved in putting together a working government that is responsible, accountable and efficient. Constitutional democracy needs to be underpinned by a civic political culture. The mere fact of independence and self-governance did not propel governments to make sound decisions. Nor did it assure effective popular participation at all levels, or the rule of law to protect individual citizens from the government’s abuse of power and from arbitrary and capricious bureaucratic controls. The new regimes proved to be dysfunctional and unstable. During the period 1960-2003 a total of 107 African leaders were toppled by coups, war or invasion. Military juntas ruled in several countries, such as Benin, Ethiopia, Nigeria, Somalia, Sudan, Zaire and Burkina Faso. Other countries were characterised by one-party states and life-time presidents. These included Malawi, Zimbabwe, Kenya, Cameroon, Congo and, until recently, also Mozambique and Zambia. Up to 1990, only five – among them Botswana, Gambia, Senegal and Zambia – allowed free elections. Governments lack public accountability and in many cases those in power have appropriated the machinery of government to serve their own interests. The tragic case of Zimbabwe illustrates the immeasurable harm and destruction that can be caused by bad leadership. Mugabe has cowed the judiciary, smashed the independent media, stolen three elections, dispossessed Zimbabwe’s most productive citizens and printed money until inflation collapsed the Zimbabwean currency.
To disguise his contempt for the law, Mugabe brought in new legislation to authorise his land grab. He pretended to hold free elections but had many opponents killed and citizens intimidated into voting for his party. Other African leaders have done little to stop Mugabe from wrecking his country and oppressing his people. Many of his peers seem to think that Mugabe is a hero who knows how to sort whites out. (See The Economist, January 17th, 2004, pp.5-6) The institutionalisation of political power is not well developed in Africa. There is little direct representation of popular majorities in government decision-making. Leaders tend to build personal coalitions to seize and hold power. Clienteles, and the coalitions of which they form a part, are held together by payoffs. Common ways of increasing the supply of payoffs include expanding the public sector at the expense of the rest of the economy. Given the importance of staying in office, opposition parties are thwarted, parliaments are deprived of the means to oversee the executive, “Bills of Rights” are abolished and the judiciary is politicised or supplanted. (Kotecha, K.C. and Adams, R.W., 1981, op.cit., pp.74-76) The World Bank, in its 1989 study on Sub-Saharan Africa’s performance since independence, reported that weak public sector management had resulted in loss-making public enterprises, poor investment choices, costly and unreliable infrastructure, price distortions (especially overvalued exchange rates, administered prices and subsidised credit), and hence inefficient resource allocation. Even more fundamental in many countries is the deteriorating quality of government: bureaucratic obstruction, pervasive rent-seeking, weak judicial systems and arbitrary decision-making. As a result of pervasive one-party kleptocracy, the drainage by corruption often equals or exceeds the legitimate economic intake.

To the mismanaged public sector should be added the dead weight of “crony capitalism” and “ethnic nepotism”, where sectional political and business interests collude to protect their markets from competition from home and abroad. Entrepreneurial opportunities for other people are thereby denied and income inequalities aggravated. All of the abovementioned conditions not only added heavily to the cost of doing business, but also continued to discourage investment. (See The Economist, January 17th, 2004, pp.11-13) The good news is that since 1990 more African states have introduced multiparty politics than in the previous 25 years combined. In recent years a growing number of Africa’s states have held free elections or adopted significant democratic reforms. Many Africans are talking about a second liberation struggle. If the first liberation effort was for political independence, the second struggle is for democracy, for wider human rights and for individual self-fulfilment.

Self-Serving Bureaucracy, Corruption and Nepotism

The source of much of Africa’s economic woes is the domination of its economies by corrupt and nepotistic overgrown state sectors. These state sectors are characterised by overlapping bureaucracies, stifling red tape, waste, and inefficient state enterprises overstaffed with party functionaries and other camp followers. African independence leaders were often close to European left-wingers … “who implemented in Africa the biggest socialist fantasies that they weren’t able to implement in their own countries – mainly government ownership of everything and government engineering of the economy at every level.” In some countries the state came to own and manage 80 percent of the formal economy. Senior executives were appointed for political reasons, not for competence. Government jobs were distributed as patronage. The enterprises, incompetently run, lost money. Tribalism, provincialism and nepotism grew rank and led to unemployment – along with resentment and low morale. (Time Magazine, September 7th, 1992, p.30) By far the largest chunk of government expenditure in Africa – in some countries up to 70 percent – is absorbed by wages and salaries in the public sector, instead of providing for the public goods required for dynamic private sector development. The World Bank’s 1989 Report found that in several countries, such as Gambia, Ghana and the Central African Republic, civil services had expanded very fast and had served not only to reward clientele supporters, but also as welfare programmes to counter economic decline. Consequently the salary bill absorbed a very large part of government revenue. By 1986 Guinea’s civil servants’ wages accounted for 50 percent of current expenditures. In Gambia the civil service doubled between 1974 and 1984. In Ghana the civil service increased at a rate five times the growth of the labour market – 14 percent each year between 1975 and 1982. In the Central African Republic civil service salaries absorbed 63 percent of current revenues.
Generally, productivity was extremely low, discipline was largely lacking, and overall there was little accountability. To tackle this situation, civil service reform has been placed high on the agenda of many African governments in recent years. The first step has been to conduct a staff census to eliminate from the payroll departed staff, overage employees and unwarranted promotions and allowances, and to determine precisely the numbers and deployment of civil servants. In Gambia more than 20 percent of the civil service was identified as superfluous. The Central African Republic terminated automatic recruitment of graduates and introduced competitive entrance exams; a census allowed savings of 7 to 8 percent of the wage bill. Ghana carried out a systematic job inspection programme that removed some 24,000 civil servants from the payroll over two years. Corruption manifests itself by way of various malpractices: bribes, kickbacks, embezzlement, exploitation of privileged information, lavish spending of public funds, personal enrichment through public office, abuse of discretionary powers, preferential treatment of kinsmen or other favoured persons, and deprivation or harassment of out-groups. Periodic cleansing exercises obviously are of some value, but they cannot substitute for the institutional checks on leaders exercising public authority found in democratic systems. (Kotecha, K.C. and Adams, R.W., 1981, pp.85-180) Time Magazine described the emergence of a “parallel” economy alongside the “official” sector during the 1990s in the following terms: “Salaries in Africa become living wages only by unofficial dealing – by baksheesh, bribing, finagling, operating off the books, bartering, finding a thousand intricate routes of collateral circulation around the occlusions of law and bureaucracy. A telephone company repairman in Lagos earns $60 a month. Therefore, the only way one can get a phone repaired is to offer him ‘a little something’ on the side. No extra money, no repairs – which may be one reason why most phones in Lagos do not work … Ripping off the government has become a popular sport: it is thought of as stealing from thieves … So Africa improvises its own unofficial social contract, one deal at a time. Africans are brilliant at adapting to the impossible conditions created by their governments.” (Time Magazine, September 7th, 1992, pp.36-37) But the development of a market economy has not only been constrained or inhibited by the state per se – a number of socio-cultural factors have also played a role. There are several examples where a corner of the market – or even the whole market – has been captured or monopolised by an ethnic group. Nigeria underwent the trauma of a costly civil war because the Igbo had been perceived as monopolising certain areas of the economy. In 1972 Idi Amin tried to break Asian commercial monopolies in Uganda by expelling Asians completely from the country. But Idi Amin then replaced Asian nepotism with Nubi nepotism in the economy. There are numerous examples where ethnic groups such as the Igbo in Nigeria, the Baganda in Uganda and the Kikuyu in Kenya have distorted the market nepotistically – provoking counter-ethnic resentment.
The restoration of ethnic balance by way of some form of “affirmative action” is sometimes – as in the case of Nigeria – done in a haphazard way and often in conflict with other democratic values. (Mazrui, 1992, pp.10-11) Statism, bureaucratic obstruction, corruption and ethnic nepotism have acted as significant constraints on the development of true marketisation in Africa. These factors tended to clog up procedures and substantially paralysed the production and distribution of the goods and services required to improve the quality of life in many African countries.

Misconceived Development Strategies

In the World Bank’s report on Sub-Saharan Africa: From Crisis to Sustainable Growth, issues of governance were broached with unprecedented frankness. The World Bank’s team of authors concluded that Africa’s poor growth performance was to a large extent the result of misconceived development strategies. They summarised their findings as follows: “The post-independence development efforts failed because the strategy was misconceived. Governments made a dash for ‘modernisation’, copying, but not adapting, Western models. The result was poorly designed public investments in industry; too little attention to peasant agriculture; too much intervention in areas in which the state lacked managerial, technical and entrepreneurial skills; and too little effort to foster grassroots development.” (World Bank, 1989, p.3) A common misconception among the governments of many African countries has been that only through economic independence and industrialisation can they improve their relative economic condition. They echoed the ideas of prominent economists of the day. Industrialisation was believed to be the engine of economic growth and the key to transforming traditional economies – partly because the prospects for commodity exports were thought to be poor and partly because of a strong desire to reduce dependence on manufactured imports. Agriculture was relegated to a secondary role. To implement their strategies, African leaders believed that government had to play the dominant role. That view reflected their mistrust of foreign business, the perceived shortages of domestic private capital and entrepreneurship, and an underlying mistrust of market mechanisms. As a result governments drew up comprehensive five-year plans, invested in large state-run core industries and enacted pervasive regulations to control prices, restrict trade, and allocate credit and foreign exchange. (World Bank, 1989, p.16)

But misconceived strategies led to misdirected policies and measures. These relate to various forms of government intervention in the economy: nationalisation, forceful “Africanisation” of key positions in public and private enterprises, prohibitions against certain categories of the population engaging in retail trade, requiring the sale of operating businesses to Africans, displacing non-Africans from distributive trade and small-scale manufacturing, imposing indigenous ownership on private enterprises, expropriating land, confiscation of property, deportation of selected individuals or groups, etc. (Kotecha, K.C. and Adams, R.W., 1981, pp.187-289) Managing this regulatory apparatus embroiled governments in ever-expanding intervention across their economies. That meant that governments had to get bigger. New interventions were introduced to deal with difficulties caused by earlier interventions. Government became such a suffocating force that the private sector went almost completely underground to escape it, or else virtually expired. Ben Turok’s study of the collapse of the Zambian economy provides a comprehensive account of the consequences of massive state intervention. In 1968 the government bought 51 percent stakes in the copper giants Anglo American and Roan Selection Trust. In 1970 the granting of trading licences to expatriates was restricted, and businesses such as brick-making and transportation were reserved for Zambians. The effect of these moves was to put the Zambian economy into a devastating tailspin. Far from bringing about socialism, they led to the rise of “state capitalism” and a fast-growing, self-serving state bureaucracy.
Turok accuses this state bureaucracy of plotting an “indecisive and erratic course” and of striving to create a firmer base for itself by increasing the sheer size of the state apparatus. He paints a picture of a bureaucracy, frustrated by the decline of the industries it has annexed, biting more and more pieces off the already dying private sector. The result of several decades of this kind of state capitalism is a generally weak economy with a state-employed elite class which has lost the capacity for innovation and the capacity to organise the economy. (Ben Turok, 1989, pp.5, 36-59, 109) Ghana was another example of the devastating effect of ill-conceived government intervention. In 1957 it was the richest country in black (Sub-Saharan) Africa and had the best-educated population. It was the world’s leading exporter of cocoa, it produced 10 percent of the world’s gold, it had diamonds, and a flourishing trade in mahogany. Its income per capita was almost equal to South Korea’s: $490 against $491 (in 1980 dollars). By the early 1980s South Korea’s income per head was five times Ghana’s, whereas Ghana’s income per capita had actually fallen by nearly 20 percent to $400. Between 1970 and 1982 real wages fell 80 percent. Investment slumped from 20 percent of GDP in the 1950s to 4 percent by 1982, and exports from more than 30 percent of GDP to 3 percent. A potentially prosperous economy was devastated by nationalisation policies and bad government in the space of two decades. In contrast to this dismal picture, countries with stronger economic growth performance followed policies of providing greater credit to the private sector, limiting the size of government and public sector industries, and relying on market forces to allocate resources.

Absence of an Indigenous Modern Sector

In most of Sub-Saharan Africa’s 48 states, there is no modern private sector to speak of. A handful of multinationals dominate private business, often in joint ventures with governments, which in the past have insisted on calling the shots. Resources are by and large monopolised by the state, with the private sector relegated to a peripheral role. Coca-Cola is Africa’s largest private employer. Prof Tony Hawkins of the University of Zimbabwe’s Business School found that African governments went about fostering indigenous enterprise the wrong way. They provided blanket protection through import tariffs, subsidised capital, guaranteed markets and preferential access to government contracts. Invariably such potential enterprises became state-owned monopolies which not only bred incompetence and inefficiency by allowing politicians to use them as vehicles for political patronage, but also became closed to the outside world. This “national champion” strategy failed, partly because the enterprises were sheltered from competition and were denied the productivity gains which flow from access to new technology. In addition, they were often treated as part of a hidden agenda, tasked with creating jobs, generating exports or subsidising a sector of the economy.

In recent times some countries, such as Tanzania, Zambia and Zimbabwe, have started campaigns to sell off some of their large portfolios of parastatals. So far progress has been slow. Few foreign investors have shown interest and locals do not seem to have the money. Those who do also face difficulties, as these countries have not developed the financial instruments to enable such participation in the privatisation programme. In Tanzania, for instance, the country’s 13 financial institutions – the central bank, three commercial banks, three development finance institutions and six non-bank financial institutions – are all government owned. In the past they could not operate along business lines because of interference from the government and the ruling party. The government decided who the banks could lend to and at what interest rates. Most of their lending was to parastatals and agricultural co-operative unions. Interest rates were highly subsidised and the credit-worthiness of borrowers was not taken into account, with the result that the banks made huge losses. Most citizens have preferred to do their business outside the formal banking sector. As a result of the negative interest rates which have prevailed in the past, it has been difficult to convince people to save – especially in view of the poor and inefficient services the banks have been offering. The introduction of private banks is a slow process.
Foreign banks which were nationalised are still reluctant to come back. Besides, the lack of capital and money markets or any related instruments, limits the flexibility and profitability and the type of services existing financial institutions can offer. The development of an efficient banking system and of capital and money markets is essential in privatisation programmes. Although much of the modern sector has been in malaise throughout Africa, the informal sector, in contrast, has shown remarkable dynamism. This sector refers to indigenous, mainly unregistered enterprises in both urban and rural areas as well as locally based intermediary non-government grassroots organisations. This sector is home to small firms involved in a broad range of activities in agriculture, industry, trade, transportation, finance and social services. In the informal sector enterprises find a business environment that is competitive, free from unjustified regulatory constraints and well-adapted to local resources and demand. These enterprises are also supported by a system of grassroots institutions such as on-the-job apprenticeships and small associations that can represent group interests and improve access to credit and other resources.

Official Neglect and Discouragement of Indigenous Entrepreneurship

Entrepreneurship has a long history in Sub-Saharan Africa. As far back as the tenth century, before Africa was “discovered”, there were free markets at Timbuktu, Salaga, Kano and other terminals of the Trans-Saharan trade routes. Archaeological evidence indicates the presence of “Great Zimbabwe”, where mining activities were linked to Arab export markets on Africa’s south-eastern coast. According to a World Bank survey, the neglect of traditional indigenous entrepreneurship started during the colonial period and continued after independence, when policy-makers focused mainly on promoting large-scale industrial enterprises. These were deemed hallmarks of development and were generously supported by public policy. Large firms enjoyed preferential access to credit, foreign exchange concessions and protection from competition through subsidies, tariffs, quotas and exclusive licences. The state’s role as entrepreneur was justified by the argument that the indigenous private sector had neither the capital nor the expertise to drive rapid development and industrialisation. Africa was seen as a continent without indigenous entrepreneurial skills, its “progressive” modern sector at odds with a “backward” informal sector that could provide subsistence but nothing more. In viewing the activities in the informal sector as marginal to development, however, policy-makers have greatly underestimated the depth and potential of African entrepreneurship. Equally, they have ignored the extent to which their own policies have driven entrepreneurs into the informal sector. Even in instances where African governments tried to promote small- and medium-scale enterprises (SMEs), the policy and institutional background has been unfavourable. In Africa as a whole, the average cost of registering a business is nearly twice the annual income per head. The rules that make it so expensive are usually pointless; compliance with them serves only the financial interests of rent-seekers and special interests. Most of the SMEs that have grown in response to market demand have done so despite official obstacles. In Ghana and Tanzania, two of the most extreme cases, massive resources were directed to public enterprises, while local entrepreneurs who attempted to circumvent price controls saw their property confiscated. Uganda’s entire Asian community was expelled in 1972. These policies crippled economic growth by deterring long-term investment by both foreign and local entrepreneurs. Today Africa’s market traditions are reflected in towns and cities across the continent. Traders and artisans continue to organise activities according to long-established customs and rules administered through grassroots institutions. The village market has changed little and remains free and open to all. Prices are determined by the forces of supply and demand. In traditional Africa, three of the four economic factors of production – labour, capital and entrepreneurship – are privately owned. The exception is land, but even with communally owned land, peasants exercise usufructuary rights and what is produced on this land is privately owned. Moreover, peasants go about their business as they themselves see fit – not according to some bureaucratic or dictatorial decree.
The UN International Labour Organisation (ILO) has estimated that the informal sector employs 59 percent of Sub-Saharan Africa’s urban labour force, and an ILO survey of 17 African countries found that the informal sector contributed, on average, 20 percent of GDP to the economies studied. The World Bank’s 1989 study predicted that the informal part of the economy in Sub-Saharan Africa is likely to grow while the formal one stagnates. By the year 2020, it says, 95 percent of African workers will be in the informal sector, whose contribution to GDP will grow from under half to two-thirds. In 1992, Time Magazine described a state of affairs which still applies in many places: “The second economy is endlessly inventive. It embraces everything from street vendors selling cigarettes and candy in a Dar es Salaam market to the large and intricate border smuggling of Zambian gemstones. At least 10 million of 26 million Kenyans make a living from small-scale cash-crop farming, carpentry, masonry, metalworking, tailoring, shoemaking, retailing, smuggling, illicit brewing and running private taxis. Second-hand clothes are imported from Europe and America and sold by the roadside. Packing cases are fashioned into furniture. Oil drums are made into roofing sheets, frying pans, barbecues, stoves, knives and lamps. Cars that cannot be repaired are salvaged piecemeal and turned into carts to be pulled by bullocks and donkeys. Much of this unofficial labour is carried out in the open air and is therefore called jua kali (hot sun). As multinational companies are driven away by government policies and demands for kickbacks, as government-run enterprises fail and lay off workers, the jua kali economy is booming.” (Time International, September 7th, 1992, pp.36-37) But as a result of past strategies and policies, outside of the informal sector small- and medium-scale enterprises are few and far between.
This paucity of businesses that can link imported and local technologies – which the World Bank’s 1989 Report describes as the “missing middle” – is a major impediment to Africa’s development. Despite recent reforms, entrepreneurial initiative is hampered by regulation and limited consumer demand for local products and services. Small-scale entrepreneurs find it hard to raise capital, to mobilise skills or to gain access to efficient infrastructure services. It is therefore of critical importance that a middle ground of entrepreneurs between the largest and the smallest firms should be encouraged. SMEs create jobs at a lower cost and use local resources more intensively. These enterprises also contribute to equity by producing goods and services that are widely affordable. They foster entrepreneurship through learning-by-doing. By thus reconciling broad-based consumer demand, available resources and indigenous and imported technology, SMEs perform a vital role in development. Although the informal sector offers entrepreneurs a competitive environment and grassroots support, it cannot provide all the physical and social infrastructure services that investment and growth require. Governments can build on the demonstrated strengths of the informal sector and try to correct its weaknesses. But more is needed by way of a comprehensive SME development strategy that helps entrepreneurs overcome these barriers by improving the institutions and infrastructure that support enterprise.

Socio-Cultural Constraints

Inasmuch as “culture” refers to a complex of characteristics and phenomena (which includes knowledge, belief, art, morals, laws, customs, capabilities, habits and other ways of doing things) acquired by man as a member of society, it is simply a matter of observation that cultural factors do have some impact on economic behaviour. Socio-cultural determinants cannot be discounted or ignored simply because they cannot easily be quantified and validated by rigorous empirical research methods. Despite the scepticism in some parts of the development literature, many observers have come to believe that socio-cultural factors must be taken into account to explain the success or failure of economic development strategies, policies and programmes. Theoretical assumptions are often made which do not stand up to empirical actualities on the ground. It is argued that there are many misunderstandings about how market forces, privatisation and the profit motive actually operate (or fail to operate) in the African cultural context. (See Mazrui, 1992, pp.9-21) Proper regard must be had to traditional African cultural traits such as attitudes towards authority; attitudes towards time, leisure and labour; attitudes towards decision-making; traditional incentives and behaviour patterns; land-use patterns; ethnic and group loyalty; family obligations; inter-personal relations; the role of women; the accumulation of wealth; individual performance; contractual bonds; etc. Western values are not always congruent with the traditional incentives and behaviour patterns prevalent in most African countries. Self-reliance and self-interest tend to take a back seat to ethnicity and group loyalty. The main concern seems to be to keep social balance and equity within the group, rather than individual economic achievement. The frontiers separating collective preferences from individual ones are often very vague.
Typically a higher value is placed on inter-personal relations and the timely execution of certain social, religious or mystic activities than on individual achievements. The rituals surrounding economic transactions are often more important than the business principles governing these transactions. Economic success in itself does not lead to upward social mobility; if achieved outside of the group, it may even lead to social ostracism. State resources can be considered fair game for ethnic groups and extended families to build their own bases of support and legitimacy, through patronage or even graft. African society is generally very paternalistic and hierarchical. Little prone to individualism, it tends to be egalitarian within the same age group, but hierarchical in inter-group relations, with marked subordination of the younger members. These patterns run counter to Western values of individual freedom and individual responsibility. Africans tend to seek unanimity and are generally prepared to engage in seemingly interminable discussions. The tendency to value group solidarity and socialising has generally led Africans to attach high value to leisure and the attendant ability to engage in rituals, ceremonies and social activities. These activities serve as a means of reinforcing social bonds – even if their “marginal returns” are more social than economic. Ali Mazrui maintained that African economic behaviour is often more inspired by the pursuit of prestige than by the quest for profit. Precisely because African cultures are more collectivist, members of society are more sensitive to the approval and disapproval of the collectivity.

Mazrui also claimed that the prestige motive serves as a device of income distribution, because those who are financially successful often desire renown for generosity. Hence obligations towards wider and wider circles of kinsfolk are fulfilled, and those who succeed share their rewards with many others. According to Mazrui, the prestige motive in African economic behaviour also shows a negative side in that it encourages “ostentatious consumption and self-indulgent exhibitionism”. While the profit motive, identified in classical economic theory with the maximisation of returns, was supposed to lean towards production, the prestige motive in contemporary African economic behaviour leans towards greater conspicuous consumption. According to the Mazrui thesis, the prestige motive operates both privately and at the state level, eating away at the resources of the country. In this way structural adjustment is severely constrained by cultural forces. Hence the problem in most of Africa is not simply how to liberate and activate the profit motive, but also how to control and restrain the prestige motive in such a way that it serves the goals of production and not merely the appetite of consumption. It is a question of making creativity more prestigious than acquisition and production more prestigious than possession. (Mazrui, 1992, p.12) The essence of Mazrui’s argument is that the fate of both political and economic liberalisation hinges on cultural variables which have too often been underestimated. We may need to grasp the cultural dimension before we can fully gauge the scale and durability of change and development in Africa. (Mazrui, 1992, p.3)

Economic Challenges

Sub-Saharan Africa’s economies are characterised by low investment, low productivity, weak agricultural growth, declining industrial output, poor export performance, climbing debt, single-crop dependency, a disintegrating infrastructure and a small tax base. In short, their economies suffer from a combination of factors making the vicious circle of low growth and national poverty self-reinforcing. The African economic crisis has taken a heavy toll in human terms: poverty, unemployment, starvation, illiteracy, disease, infant mortality, overcrowded cities and social breakdown. It is further exacerbated by a dysfunctional institutional environment characterised by bad and corrupt government, inefficient and poorly managed public bureaucracies, inefficient resource allocation, disintegrating educational institutions and the collapse of the banking and judicial systems. Political instability, coups d’état and ethnic strife have exacted a heavy toll. This picture of Africa does not look attractive to the investor or trading partner. Aid dependence and a dearth of investment afflict Africa much more severely than other developing regions. The danger is that much of the external world’s interest in Africa threatens to become merely humanitarian. Other interested investors may see it as a vacuum to be exploited. The African experience continues to raise some searching questions. What are the factors behind Africa’s economic malaise? Does Africa face special cultural or structural problems? Should external factors be blamed? Will the next generation be more numerous, poorer, less educated and more desperate? Could the process of formulating and implementing reforms be improved? Is there a long-term vision in terms of which the inner dynamics, the resources of vitality and resilience as well as the natural resources of Africa can be energised? Why is South Africa an exception to the general trend, or is it doomed to go down the same road?

South Africa’s Exceptionalism

Foreign visitors who might have been exposed to other parts of Africa are often surprised at what they see on arrival in South Africa. They soon realise why it is claimed that South Africa, with only 6 percent of Africa’s population, accounts for no less than 45 percent of the GDP of the whole continent. Visitors are surprised by the modern airport facilities, the six-lane highways in and around the major cities, the many high-rise buildings, the large traffic volumes including a multitude of big and expensive German cars, huge township development projects, five-star hotels, abundant modern shopping malls, more white faces than expected, well-maintained farmlands and high-quality livestock herds by the roadside, modern deepwater port facilities at five port cities, a wide variety of eateries and an abundance of food, a large stock of luxurious brick-built homes, extensive hospital and health services – but also, around the cities, large stretches of squalid squatter townships and, on street corners in the outlying suburbs, large numbers of people just sitting around looking for employment. To explain the context of the “New South Africa” requires a more comprehensive analysis.

Positive Building Blocks

1. Sound Basic Infrastructure – Since its early colonial era, South Africa has been well served by its major port cities: Cape Town, Port Elizabeth, East London and Durban. During the 1960s two additional deep-sea harbours were constructed: one at Richards Bay to export coal and one at Saldanha Bay to export iron ore. Both export harbours are connected by good railway lines running hundreds of kilometres to the inland mining areas. Apart from these, South Africa has well-developed transport corridors between the port cities and the mining, industrial and commercial centres inland – some of which developed more than a century ago. South Africa has more than 40 percent of all the paved roads and railroads on the continent of Africa. South Africa produces more than 50 percent of the total electricity output on the continent at six major coal-fired power stations, eight hydro-electric and pumped-storage schemes, and one nuclear power station. Several giant new facilities are under construction to service the expanding distribution networks. South Africa is also well served by around fifteen large water-storage dams and a wide network of irrigation canals and tunnels. 2. Sound Business and Financial Infrastructure – For more than a century South Africa has been served by a network of financial institutions and business support systems. The major banks – Standard, First National, Nedbank and ABSA – have been in operation for generations, each with a countrywide network of branches. In addition, there are several merchant banks focusing on business expansions and mergers. All the major accountancy networks are represented across the country, as well as a plethora of attorneys, solicitors, conveyancers, tax consultants, management consultants, marketing agents and insurance agents. The mining, manufacturing and retail sectors are well organised to lobby their interests.
The trade union movement has a history of more than a century and is today predominantly represented by COSATU, one of the partners of the ANC government. Several major insurance companies – SANLAM, Old Mutual, Liberty Life and African Life – have been in operation for over 80 years and have played a major role in offering pension schemes and a wide array of insurance policies, and in mobilising the savings of millions of persons in their capacity as predominant institutional investors. South Africa has also been well served by development corporations, both public and privately sponsored, that stimulate and channel development support, such as the Industrial Development Corporation (IDC) and the Small Business Development Corporation (SBDC, now called Business Partners). Last, but not least, a pillar of the business and financial infrastructure is the Johannesburg Stock Exchange (JSE) – the 11th largest in the world. Organised stock exchanges enable investors to acquire or sell shares in public companies. Investors can diversify risks by owning shares in several firms engaged in different businesses without having to become directly involved in management. Because shares can be easily bought and sold, a change in ownership does not cause disruptions in operations as it does in either sole proprietorships or partnerships. The continuity of a public company makes long-range planning easier and also increases the ability of the incorporated firm to borrow money for expansion. The market capitalisation of the JSE is the largest in Africa and larger than that of Russia’s stock market. Strong financial institutions, sound financial systems and financial stability are crucial for sustained economic growth.

3. A Developed Education Network – Some of South Africa’s universities have been in operation for close to a century, e.g. Cape Town, Stellenbosch, Pretoria, Witwatersrand, Potchefstroom, Grahamstown, Fort Hare (counting Nelson Mandela as one of its famous students), Natal and the University of South Africa (UNISA), a correspondence university with more than 100,000 students from across Africa and elsewhere in the world. Even Robert Mugabe, during his years in prison, was a student of UNISA. Most of South Africa’s Prime Ministers, Cabinet Ministers, politicians, senior civil servants and particularly also business leaders and managers are graduates of South African universities. In addition, every major city is also served by technical and teacher training colleges offering vocational training in all relevant fields. The education sector consists of some 8 million primary, 4.5 million secondary and 1 million tertiary learners, 30,000 schools and 400,000 educators. Education is compulsory for the 7-15 age category, and 20 percent of the national budget (5.5 percent of GDP) is spent on public education. Literacy levels are estimated at 82 percent: 81 percent female and 83 percent male. At most primary schools, 50 percent of learners are female. At university level, the majority of students are female. Much still needs to be done to improve the quality of public education. For most of the past decade, less than 50 percent of candidates passed the final school-leaving examination. Private (independent) education is able to offer world-class schools that increasingly attract international pupils. Within the public sector there are also many ex-White suburban schools that are now racially integrated, where 100 percent pass rates are achieved. Of those who gained matric passes good enough to get them into university in 2003, only 5 percent were Black, compared to 7 percent Coloured, 41 percent Indian and 36 percent White.
Unfortunately there are also dysfunctional township schools, which achieve only 0 to 20 percent pass rates and where a culture of teaching and learning is absent. The hardest part is to improve the quality of teachers. During the anti-apartheid “struggle” years, much criticism was directed against the Bantu Education Act of 1953, which introduced mother-tongue education for the various African communities. It was claimed that it was designed to consign Black people to a destiny of manual labour and mental oppression. But the principle of mother-tongue education was consistently supported by Afrikaans-speakers for many decades. Afrikaners have always maintained that mother-tongue education is the simple de facto reality in mother-tongue societies like England, France, Germany and Italy: the mainstream language, the mother tongue and the official language are one and the same thing. In a multi-lingual country like South Africa, with its 11 official languages, English – or perhaps “broken English” – has become the lingua franca. A growing number of Afrikaner and African parents have selected English as the preferred medium of instruction for their children in recent years. The consequence has been a marked intensification of learning problems. A study by Webb and Kembo-Sure, a research team, reported in African Voices (2000, p.7): “… The decision of school authorities and parents to use English as the language of learning in schools (especially primary schools) has definitely contributed to the under-development of the South African people.” The same point was also made by Mamphela Ramphele, widow of Steve Biko and former Vice-Chancellor of the University of Cape Town, in a press interview on March 8th, 2009: “There is overwhelming evidence that learning through the first language or mother tongue helps to anchor learning in the child’s immediate environment: family, community and everyday interactions.
Children who are taught in the first few years in their mother tongue, while other languages are introduced as subjects, tend to become more proficient in all languages. It provides the anchor for better and deeper learning by linking it to everyday life and one’s own identity.” 4. Access to World-Class Science and Technology – In contrast to other parts of Africa, South Africa had the benefit of easy access to the predominant sources of modern science and technology: the major Western countries such as the USA, the United Kingdom and those of Western Europe. A very large proportion of White academics in a multitude of disciplines had the opportunity to pursue post-graduate studies at the trend-setting universities abroad: the Ivy League schools of America, the top universities in the UK, and the leading universities in Germany, France, the Netherlands and Scandinavia. This interaction was facilitated by the language proficiency of not only large numbers of White scholars but also a substantial number of Black, Coloured and Indian students. The transfer of scientific knowledge and state-of-the-art technology was evident in a wide range of disciplines: physics, engineering mechanics, IT, architecture, electro-magnetics, nuclear physics, economics, business management, public management and administration, finance and banking, medicine, education, agriculture, psychology, sociology and even theology. This interaction created a vast amount of cross-fertilisation that enabled South African universities, technical colleges and research institutions to keep abreast of new trends and best practices. There is virtually no field of knowledge, or area of technological advancement, that has not been accessible to South African scholarship over many generations. The presence of this relatively large “first world” component of highly educated and skilled persons has enabled South Africa to calibrate its industrial manufacturing strategy to international realities – a shift away from raw materials and cheap, labour-intensive heavy industry towards what is increasingly known as the “knowledge economy”. Ideas, information and technology are believed to have acquired greater importance than muscle and commodities. Extracting South Africa’s mineral wealth required a high level of mining engineering technology and geological science. 5. International Trade – After more than two decades of co-ordinated world-wide campaigns to isolate South Africa by imposing trade and investment boycotts, 1994 once again opened the world for South Africa. In the period 1994 to 2001, manufacturing’s share of total exports rose from 35 percent to more than 50 percent. The share of primary products in merchandise trade declined from 64 percent in 1970 to 37 percent in 2000.
Its emphasis on high technology caused the low-technology sectors to decline while the high-technology sectors have seen a steady increase. In the same period, exports as a percentage of manufactured output increased from 14 percent to 28 percent, with motor vehicles, basic iron and steel products and basic chemicals leading the way. Vehicle exports rose from 15,764 in 1995 to 130,000 in 2002, and are still rising. Exports of vehicles as a percentage of domestic production have risen from 4 percent to 26 percent, earning more than R50 billion. From BMW, Mercedes Benz and Volkswagen to Toyota, South African right-hand-drive vehicles are sold in many parts of the world. South Africa is also well placed to penetrate the huge potential of the emerging African market. 6. Accommodating Governmental Structures – Heterogeneous societies always carry the burden of inter-group rivalry – often based on coinciding and reinforcing cleavages, as in the case of South Africa. These cleavages provide a fertile seedbed for polarised, cumulative conflict with regard to the allocation of values such as natural resources, privileges, priorities, pre-eminence and the control of command positions. For several generations the South African political scene was characterised by White domination, coupled with various efforts to implement a separatist policy by creating separate self-governing territorial units where feasible. The underlying assumption was that socio-cultural incompatibilities and political tensions could not be resolved within a single polity. Given this irreconcilability of interests, separation of the groups into their own socio-economic and political units with a territorial foundation was viewed by White power centres as the solution. In the colonial era, separate protectorates that later became independent states were created, as in the cases of Lesotho, Swaziland and Botswana.
But the historical pattern of group interaction had created common areas where members of all groups permanently resided. A dismantling of the inter-racial South African economy proved to be unworkable without the total impoverishment of all concerned. Neither the economic resources nor the political capability was available to accomplish the “separation of the inseparable”. The South African reality encompassed both socio-cultural diversity and economic inter-dependence. Normally, separation by way of partitioning can at best only be achieved if it is geographically and economically feasible and based on mutual agreement. Though such a solution is not impossible, successful examples are hard to find.

The opposite option was to choose a Westminster-type non-racial “common society”, based on universal suffrage, which in South Africa would inevitably lead to Black domination. The Westminster system, although suitable for a fairly homogeneous society, was itself modified by the English to accommodate the Irish Catholics (partitioning off Ireland and again sub-partitioning Northern Ireland) and to grant a degree of self-rule to Scotland (devolution). The problem with simple majority rule in deeply divided societies is that it has a basic tendency to revert to majority domination. Some of the problems raised by an unmodified Westminster-type majoritarian approach can be summarised as follows: (i) Executive dominance with the acquiescence or support of a permanent parliamentary majority. (ii) An intensification of inter-group conflict as a result of the winner-takes-all principle and its concomitant inference that the losers may lose all. (iii) A concentration of political competition and conflict at the centre of the political system as a result of a lack of decentralisation to meet local problems and to respond to diverse needs and aspirations. (iv) A lack of restraint on parliamentary sovereignty and executive power in the absence of effective constitutional restraints. (v) The absence of power-sharing devices to reconcile conflicting group aspirations and to protect minority interests. An accommodationist response to the challenges of diversity and inter-group strife is to accept the plurality of the country as given and to design a political structure to fit and express this pluralism in a constructive way. Insofar as a democratic system rests on consent rather than compulsion, each coherent group must perceive that its basic needs can be fulfilled within the system. Examples of such needs are physical safety, preservation of language and culture, local self-government and effective participation in the decision-making of the overall political system.
Techniques of accommodation normally include decentralisation (devolution), federalism (regionalism), power-sharing (e.g. coalition, proportionality in policy-making bodies, a mutual veto or even segmental autonomy in tribal areas, recognition of communal common law) and sub-cultural autonomy (enclaves of ethnic self-rule). These techniques can be applied to replace unrestrained majority rule with co-operative consensual rule; to replace executive dictatorship with power-sharing based on a mutual veto in order to harmonise inter-group relations; to replace winner-takes-all majority decision-making with the principle of proportionality or parity, so as to scale down the disproportionately exaggerated power of majorities in setting priorities and allocating resources; and to delegate as much decision-making as possible to sub-units (e.g. federal regions or provinces) where regional or national minorities are protected from being swamped politically, socially and culturally. South Africa went through three phases of transition after a ground-breaking speech by De Klerk in the South African Parliament on 2nd February 1990 opened the door for the release of Nelson Mandela. The first phase, 1990-1994, was the negotiation phase. The second phase, 1994-1999, was the interim phase. The third phase, starting in 1999, was the phase during which the consolidation and expansion of Black power took shape. Each phase involved all three levels of government: national, provincial and local. Approximately 650 local municipalities had to convert themselves into negotiating forums and negotiate an interim phase of local government. This was done under the auspices of the Local Government Transition Act. On the national level an Interim Constitution was negotiated, which led to a Government of National Unity being elected in the first non-racial democratic elections in April 1994.
At the same time, a Constituent Assembly was appointed which negotiated the Constitution that was finalised in 1996. The 1999 election was the first under the final constitution. During the transition years, much use was made of the “co-optation” technique to rope in leaders from minority communities in sensitive command positions. After the ANC triumph in the 1994 elections, Mr. F.W. de Klerk, the former President, was appointed as a Deputy President alongside Thabo Mbeki under Nelson Mandela as President. For the next two years, two investment bankers, Derek Keys and Chris Liebenberg, were consecutively roped in as Minister of Finance to assist Deputy Finance Minister, Trevor Manuel, to find his feet. Dr. Chris Stals stayed on as Governor of the Reserve Bank for two years to assist with the training of his successor, Tito Mboweni. Piet Liebenberg served as Receiver of Revenue for a period to facilitate the eventual take-over by Pravin Gordhan. After 1997, Thabo Mbeki roped in Marthinus van Schalkwyk, the former National Party leader, as his Minister of Environmental Affairs and Tourism. When Jacob Zuma became President in 2009, to appease general uneasiness, he appointed several non-ANC members to his Cabinet: Pieter Mulder, Marthinus van Schalkwyk, Sue van der Merwe, Andries Nel, Derek Hanekom and Gert Oosthuizen. In order to accommodate a wide variety of sub-cultures, minorities and interest groups, Zuma enlarged his full Cabinet to 34 Ministers and Deputy-Ministers.

The country was on a knife edge during the transition years, 1994-1996. There were isolated bombings, massacres, assassinations and limited armed confrontations. But on the whole, it was a relatively peaceful transition. South Africa proved to have the leadership and the character to walk through this period of turbulence and keep the transition process on track. F.W. de Klerk and Nelson Mandela were subsequently joint recipients of the Nobel Peace Prize. Mandela deserves much praise for recognising the need for reconciliation; De Klerk for being prepared to sponsor the concessions made by his constituency. Van Zyl Slabbert, a former Leader of the Opposition in the South African Parliament, summed up the process in the following words: “Eventually all the major parties that could cause irreparable damage, came to the table and chose peace rather than violence. 
Their leaders have to be commended without exception: Mandela, De Klerk, Viljoen and Buthelezi.” (See Van Zyl Slabbert (2002), “Government and Opposition”, in Bowes and Pennington, South Africa – The Good News, pp.49-55)

So far the new constitutional dispensation has facilitated the achievement of a significant degree of democratic political stability; it has implemented an effective system of tax collection; it has achieved a peaceful, constitutional succession of political and executive leadership; it has tolerated the mobilisation of special interests; it has implemented a system of regional government to fit the ethnographic population composition (e.g. Zulus in KwaZulu-Natal, Xhosas in the Eastern Cape, Tswanas in the North West, Sothos in the Free State, and Coloureds and Whites in the Western Cape); and it has maintained a reasonable economic growth rate of between 2 and 5 percent over the course of 14 years.

Problem Areas

Despite many “good news” items, the New South Africa also produced a number of obdurate problem areas that needed to be addressed. Nelson Mandela deserves much credit for fostering a spirit of reconciliation and optimism – for leading by example. The jury is still out on the quality of his management skills – as managing director of South Africa Incorporated.

1. Law and Order – Crime statistics became a political hot potato, with even the Commissioner of Police finding himself under suspicion. The most popular explanation for the high crime rate offered by ignorant journalists was that it was somehow attributable to the apartheid system that existed between 1950 and 1990! Simple explanations are not helpful in dealing with a huge problem that particularly affects urban black communities. Several studies have revealed that 72 percent of the victims of violent crime knew the offender. One survey also revealed that 60 percent of respondents had been the victim of at least one crime between 1993 and 1998. In the period 1996 to 2000 the number of crimes, especially rape, carjacking, serious assault, housebreaking and common robbery, had been steadily increasing – particularly violent crime. During the 35 apartheid years a total of 2,700 people were killed confronting government forces (The Economist, “Survey of South Africa”, February 24th, 2001, p.7). During the 3 years to March 2000, police officers killed 1,550 persons. In 2001 and 2002 alone, police statistics recorded 21,000 murders each year.

Between 1994 and 1997 a total of 554 farmers were killed on their farms, averaging almost 200 murders per 100,000 farming population. In 1999 the number of farm killings rose to 809. When caught, criminals are handled by a slow, over-burdened justice system. Only 18 percent of murder cases led to a conviction. Part of the problem is the poor quality of the police force. Some are said to be corrupt, many are untrained and more are under-equipped. The Economist (op.cit., p.7) reports that one-quarter are functionally illiterate and 10,000 do not have driving licences. In view of the high cost of crime to the business community, businesses spent more than R11 billion on private security services in 1999. The police budget for the same year was R15.5 billion. The private sector also launched an organisation called Business Against Crime (BAC) to work jointly with government agencies to fight crime. Actions initiated included, for example:
- The streamlining of courts to handle cases faster and more efficiently.
- The addressing of corruption at vehicle registration offices.
- The elimination of the disappearance of dockets at many courts.
- The elimination of the illegal re-registration of vehicles (used to recirculate hijacked cars).
- Video surveillance in city centres, like Johannesburg, to reduce street crime.
- Support to victims of sexual offences.
- Reduction of commercial and organised crime.
- Improvement of safety in areas frequented by tourists.
- The launch of a schools crime prevention programme.
- Action against syndicated crime in the drug trade, corruption, illegal firearms and vehicle theft (which exceeded 120,000 in Gauteng alone).
- Improvement of security in the outsourcing and handling of cash in transit to commercial banks.
By 2006 a United Nations report claimed that South Africa still had the highest rate of gun-related crime in the world after Colombia. It also reported an increase in well-organised armed robberies.
To protect their own safety and their property, private individuals and businesses hired private security guards, who appeared to outnumber police in a November 2006 count by at least two to one. South Africa spends a lot on police, courts and prisons. In 2004 it spent 3 percent of GDP – or $130 per person – on criminal justice, compared with 1 percent and $66 in Europe. South Africa employed about 260 policemen per 100,000 people in 2004, compared with the international average of 380.

2. An Over-Embellished Bill of Rights – The constitutional structure that was jointly designed by the outgoing National Party and the upcoming ANC, and implemented in 1994, has served the country reasonably well in the subsequent years in terms of reconciliation and reconstruction. The new constitution was formally adopted by an elected Constitutional Assembly in 1996 and formally came into force in February 1997. Acknowledging injustices of the past, the new constitution makes a commitment to improve the quality of life of all citizens and to free the potential of each person by building a non-racial and non-sexist society in which fundamental human rights are respected. The founding values of the legal order thus established were stated as: human dignity, the achievement of equality, the advancement of human rights and freedoms and respect for certain fundamental principles of democracy – the rule of law, universal adult suffrage, a common voters’ roll, regular elections and a multi-party system of democratic government aimed at ensuring accountability, responsiveness and openness. These founding values of the Constitution were articulated in a Bill of Rights. These rights, however, are not absolute.
They may be limited in terms of a law of general application, and only to the extent that the limitation is “… reasonable and justifiable in an open and democratic society based on human dignity, equality and freedom.” (See Section 36 of the Constitution) The critical issue remained testing the scope and nature of the abovementioned provisions under the Constitution. Chaskalson’s response was not encouraging. He stated that it “… calls for a proportionality analysis, involving the balancing of different interests in the context of the relevant legislative and social setting.”

(See Chief Justice Chaskalson, “The Constitution and the Constitutional Court” in Brett Bowes and Stewart Pennington, South Africa, the Good News, Rivonia: South Africa, the Good News Pty Ltd, 2003, pp.77-83) The equal protection clause guarantees that “… everyone is equal before the law and has the right to equal protection and benefit of the law” (Section 9(1) of the Constitution). Discrimination is presumed to be unfair unless the contrary is established. However, ironically, “positive discrimination” in favour of black persons is exempted. The extent of this exemption could ultimately nullify all the constructive intentions of the constitution-makers. The Constitution provides for a Constitutional Court to function as a court of appeal and its decisions are binding on all courts and all organs of the state. In addition, the Constitution is premised on a separation of powers between the legislature, the executive and the judiciary. The powers of each branch are defined in the Constitution.

A potential problem with the over-the-top scope of the “rights” granted in the Constitution lies in their enforceability. In its original form, a “Bill of Rights” served as a charter guaranteeing fundamental liberties to the individual and setting firm limits on how far the state may encroach on the lives of its citizens. These conventional rights, as in the 1791 amendments to the US Constitution, cover freedom of speech, religion and association, and freedom from self-incrimination. But the new SA Constitution also includes socially inspired “rights” such as “access to adequate housing”, “reproductive health care”, “adult basic education” and even green rights to “have the environment protected”. The Bill of Rights thus converts the constitution into an instrument of social engineering for “righting wrongs”, e.g. all forms of social inequality and material disadvantage, and for promoting the rights of those who believe they have a special claim against society.
It imposes positive duties on the state to achieve the realisation of certain socio-economic rights, e.g. rights to housing, health care, food, water and social security, land reform and access to land. These claims on society, elevated to constitutional imperatives and entrenched in a Bill of Rights, constitute a legal basis for aggrieved persons or interest groups to sue everybody. It would allow people to pursue politics by litigation. Self-righteous reformers do not see the role of courts as simply to enforce the law, but to remake society in their own image. Obviously the courts have no power to order the government how to spend its budget! The broad range of issues covered in the Bill of Rights could open the door to an avalanche of frivolous litigation by opportunists. Grandiose designs are bound to end in disaster. No government can deliver equality as an outcome. The state can try to improve equality of opportunity, but to delude the man in the street that it can ensure equality of outcome as a legal “entitlement” is surely dishonest.

3. Nepotism and Corruption – As a result of the conflation of party and state, both embedded in a culture of reverse discrimination and compensative entitlement, nepotism and corruption have become unrestrained. ANC wheeling and dealing are by definition seen as actions of the previously disadvantaged, and accordingly above reproach and beyond criticism. Parliamentary control over the executive branch through its committees (e.g. Public Accounts) is neutralised, a problem exacerbated by a shortage of opposition members adequately knowledgeable to penetrate and decipher the intricacies of public accounts. The traditional checks and balances between the different branches of government (legislative, executive and judicial), which are essential components of an accountable and responsible democratic system, are slowly being eroded. Parliament has become a rubber stamp, the public broadcaster a government mouthpiece.
Cost effectiveness and efficiency are sacrificed, since they do not count in an open-ended timeframe of redress and compensation. Many ANC-connected members have promoted and enriched themselves in the name of “transformation” and “affirmative action”. R.W. Johnson lists, in unrelenting detail, a web of ties between ANC leaders and their families and the new rich and powerful: crony capitalism for comrades and camp-followers.

4. Reverse Discrimination – “Affirmative action” was intended to redress past imbalances and injustices. With the passage of time it established a different kind of institutionalised race-based discrimination. It is underpinned by an elaborate legislative framework. The Employment Equity Act determines that “designated employers” should have “demographic proportionality” in employment (70 percent Black, 45 percent women, 5 percent disabled) and submit plans to attain such equity. The Equity Act reinforces the constitutional ban on discrimination, and those who are accused of “unfairness” have to prove their innocence. The Black Economic Empowerment Act imposes a host of obligations on companies. To be BEE compliant, companies have to meet different criteria, which include having a designated proportion of Blacks in upper and middle management, paying for skills development, etc., thus creating a legal minefield benefiting an army of specialist lawyers, consultants and accountants – all part of a new industry of racial auditing. The Mbeki government, which openly spoke of the need to build a black bourgeoisie, took various legislative steps to push Black Economic Empowerment (BEE). BEE firms were privileged in public procurement. In tenders up to R500,000, the evaluation of tenders allowed 20 points for bids from HDIs (historically disadvantaged individuals) and 80 points for price. Above R500,000 a 90:10 ratio would apply. On 22nd March, 2002, the Mercury Business Report announced that to date 66 percent of contracts awarded had gone to BEE companies. Companies strove hard to qualify as “black empowered” by appointing black executives and board members, transferring equity to black shareholders, adopting BEE charters, setting numerical targets for black employment and committing to an annual social spending percentage on black groups out of post-tax profits. The government announced its overall target of placing at least 35 percent of the economy in black hands by 2014. But the downside is clear: once the principle is established that merit or price is irrelevant, there is a steep and slippery slope to the exercise of blatant racial choice and a dilution of standards.
(See Johnson, R.W., South Africa’s Brave New World: The Beloved Country Since the End of Apartheid, London: Penguin Books, 2009, pp.381-444) The momentum of the BEE campaign has not only inhibited new foreign direct investment, it has also encouraged disinvestment. The affirmative action programme has also spawned a populist campaign to nationalise mines, industries, banks and farmland. It created the spectre of unemployed youths who do not understand basic economic essentials such as the need for fixed investment, the laws of supply and demand, the role of profit in generating taxable income or the role of positive cash flow in running a viable business.

5. AIDS – Although South Africa’s health care is superior to what is available elsewhere in Africa, its response to the AIDS pandemic has failed. For a decade, the country’s political leadership was in denial. The result is more HIV-positive people than anywhere else in the world. AIDS came later to South Africa than to many countries further north. The first case was reported in 1987 and the epidemic did not really begin until 1993. Little is known for certain about how many people are infected and how many have already died. UNAIDS estimated the number of HIV-positive people at 4.2 million in 2001. Deaths were expected to rise to around 400,000 in 2005 and 600,000 in 2010. Average life expectancy was set to fall from 60 years to 40 by 2010. It was claimed that in 2006, AIDS killed as many as 900 people a day. For a long time Mr. Mbeki questioned medical opinion on the causes and treatment of AIDS – particularly on the use of anti-retroviral drugs. This led to little public education and opinion leadership on the issue. Mr. Mbeki’s agenda was to find African solutions rather than rely on the expensive anti-retroviral drugs. In terms of funding priorities, it is clear that there is more merit in preventative measures than in merely extending the lives of terminally ill AIDS patients.
But such an approach requires very intensive educational programmes aimed at teenagers and sexually active young adults. It also requires a stronger focus on properly researched treatment of early-stage HIV-positive persons. However, the expensive anti-retroviral treatment is essential to extend the lives of people who have already contracted AIDS. But much more needs to be done to make the general public aware of the causes and consequences of AIDS.

6. The Zimbabwe Tragedy – In the period April to September 1994, Hutus in Rwanda carried out a systematic genocide of Tutsis, killing 800,000 to one million persons. The international community stood by, failing to intervene or to do anything to prevent this horrible massacre. When Robert Mugabe’s supporters turned on white farmers in the mid-1990s, the UK half-heartedly offered to fund the cost of a farm redistribution programme. Over a period of 10 years, Mugabe has virtually destroyed civilised life in Zimbabwe: he cowed the judiciary, silenced the media, rigged three elections, killed and imprisoned his opponents, dispossessed Zimbabwe’s most productive citizens and printed so much money that inflation destroyed his currency and brought his country to a standstill. He reduced Zimbabwe to the level of a failed state. Yet the free world stood by, comfortably leaving the problem in South Africa’s hands. Mr. Mbeki strove, over almost a decade, to set South Africa up as a symbol of African potential, with its own institutions (NEPAD) and its own mechanisms for solving problems (silent diplomacy). His ambition was that African countries should help each other uphold standards of good governance, human rights, democracy and economic progress. But despite his repression of his own people, Mugabe remained a revered icon of the liberation struggle, the man who could throw scornful insults at Western leaders in measured English. After Mr. Mbeki had exhausted his arsenal of “quiet diplomacy” measures, the recalcitrant Mugabe remained in power and continued to ransack his country. South Africa also failed to help the people of Zimbabwe – except, perhaps, by way of allowing an estimated 2 to 3 million refugees to cross the borders into South Africa.

7. Political Correctness – Spin and propaganda are facts of life in the modern communication world. They involve avoiding or softening the unfavourable parts and stressing or focusing exclusively on the good news. The negative outcome is that the truth is either distorted or totally concealed. But you cannot build a future on ignorance. The impact of political correctness on life in the New South Africa is well expressed in the words of an ex-ANC foreign correspondent: “The educational system deteriorated markedly in ANC hands and, effectively, the government insisted that expertise and good qualifications did not really matter. 
Anyone who argued that merit was the vital criterion in choosing future doctors or competent managers was accused of being racist in principle. Indeed, all the talk of meritocratic criteria was regarded as intrinsically racist. This left the government free to appoint to positions throughout society people who lacked the skills or qualifications necessary to do their jobs properly: young black women with no technical background to run the railways, ambassadors who had, at best, spent a few weeks learning the skills serious foreign officers spend years inculcating, judges who, even as lawyers, had been inexperienced, incompetent and drunk, and senior policemen who had been thugs or crooks. All these are real examples.” “In order for this to pass muster South Africa became a society of ubiquitous pretence, not only by the government but by politically correct whites. It was so nice to cheer the arrival of so-and-so to this or that leading position because he/she was young, black, a woman or a disabled person, and this said such nice things about the new society. To notice that such a person could not possibly do their job properly was the height of bad manners. ANC municipal office-holders, who needed no encouragement to treat their towns as merely part of a spoils system to be ransacked, happily took the lesson and would gaily get rid of trained planners, accountants, engineers and their ilk in order to be able to hand their jobs to unqualified cronies or relatives.” (R.W. Johnson, op.cit., p.430)

Conclusion

It cannot be said that South Africa has completed its transition from a polarised society to a smoothly functioning liberal democracy. The temptation to suspend democracy in favour of some authoritarian alternative, as has often happened elsewhere in Africa, remains a danger. The practice of “reverse discrimination”, or what is euphemistically called “affirmative action”, remains an open question. If all problems faced by black societies are explained and treated in terms of a racialist paradigm, the incentive to find constructive self-reliant solutions will remain stunted.

Confronting the African Dilemma

Africa’s deficiencies are well documented: widespread poverty, unemployment, starvation, dysfunctional politics, as well as social and institutional disintegration. In 2000, The Economist dubbed Africa the “hopeless continent”. But a decade later Africa’s progress appears more promising: significant increases in its annual output and rapidly growing levels of foreign direct investment. The continent’s vast oil and mineral resources, its expanding market for consumer goods and the growth potential of its expanding working-age population as a manufacturing platform are hopefully opening the cages of the African lions to take their place next to the proverbial dragons and tigers.

What practically all African countries have in common is a legacy of colonialism that disturbed, and still today distorts, African economies. Whatever merits one may accord to the colonial era in bringing knowledge and technology to Africa, it certainly was not intended to promote balanced economic growth. Africa’s colonial heritage is one of artificial borders and little or no industrial development, with single-crop or commodity dependency. There is no quick fix for its inherent problems. But Africa is also rich in human and natural resources as well as limitless potential.

One of the major dilemmas confronting African countries is to reconcile the demands of economic liberalisation and political liberalisation within a relatively short time span: i.e. reducing the role of the state in the economy concomitantly with increasing the role of the people in the political process, without succumbing to unrealistic expectations and rampant populism. Successful Pacific Rim countries pursued economic growth before they started to pay proper attention to political democratisation. Ghana’s Kwame Nkrumah opted for the primacy of politics: “Seek ye first the political kingdom and all else will be added unto it.” Nkrumah’s recipe of the 1950s, which also implied complete ownership of the economy by the state, was not successful in Ghana, Zaire, Zambia, Mozambique, Uganda, Tanzania, Angola, Zimbabwe or any other African state. 
The simple truth is that a stable, free and responsible political system can only be built upon the foundations of a sound and productive economic sub-structure. From the beginning of the 1980s the World Bank and the IMF started to introduce comprehensive market-friendly economic reform measures in Africa. Much emphasis was placed upon measures to restore African economies to market forces and to entrust them to private ownership, private control and private initiative. The World Bank, the IMF and the so-called donor countries made it clear that they wanted to wean African countries from thinking of aid as an entitlement or a permanent fact of life. They felt that the principal cause of the continent’s economic decline was a fundamentally wrong approach to economics. In their eagerness to industrialise, many African countries neglected and even actively discouraged agriculture, which in most cases constituted their primary strength. Socialised, centrally planned and directed economies coupled with grandiose capital schemes were seized by corrupt and overstaffed bureaucracies and their fellow travellers.

Much of the real energy of Africa, and its future, lies outside government structures in the creative potential of its people with their rich diversity of culture, experience and tradition. Although governments can facilitate progress, only people can make things happen: people with entrepreneurial talents, skills and access to appropriate resources. Mobilising the creative talents and energies of people requires an understanding of the cultural preconditions of both political democratisation and economic progress. The design of an appropriate strategy for sustainable and equitable economic growth requires a proper understanding of its cultural viability. Without that understanding, the result would be an endless series of economic and social dislocations, making realistic solutions more and more unattainable.

References

Arnold, G. (2005) Africa – A Modern History, London: Atlantic Books
Bogucki, P. (1999) The Origins of Human Society, Massachusetts: Blackwell Publishers
Coertzen, P. (1988) Die Hugenote van Suid-Afrika 1688-1988, Kaapstad: Tafelberg Uitgewers
Couzens, T. (2004) Battles of South Africa, Claremont: David Philip Publishers
Davenport, T.R.H. (1977) South Africa – A Modern History, Johannesburg: MacMillan
De Klerk, W.J. (1971) Afrikanerdenke, Potchefstroom: Pro Rege Pers
De Wet, C.R. (1902) Three Years’ War, New York: Charles Scribner’s Sons
Dia, M. (1991) “Development and Cultural Values in Sub-Saharan Africa”, Finance and Development, December 1991, pp.10-13
Doyle, Arthur Conan (1987) The Great Boer War, Scripta Africana Edition
Gey van Pittius, E.F.W. (1941) Staatsopvattings van die Voortrekkers en die Boere, Pretoria: J.L. van Schaik
Giliomee, H. (2003) The Afrikaners – Biography of a People, Cape Town: Tafelberg Publishers
Giliomee, H. & Mbenga, B. (2007) New History of South Africa, Cape Town: Tafelberg Publishers
Guest, R. (2004) The Shackled Continent, London: MacMillan
Hammond-Tooke, D. (1993) The Roots of Black South Africa, Johannesburg: Jonathan Ball
Harrison, D. (1981) The White Tribe, London: MacMillan
Heese, H.F. (2005) Groep Sonder Grense – 1652-1795, Pretoria: Protea Boekehuis
Johnson, R.W. (2004) South Africa – The First Man, The Last Nation, Johannesburg: Jonathan Ball Publishers
Johnson, R.W. (2009) South Africa’s Brave New World, London: Penguin Group
Klitgaard, R. (1988) Controlling Corruption, University of California Press
Kotecha, K.C. & Adams, R.W. (1981) African Politics: The Corruption of Power, Washington: University Press of America
Marsden, K. (1990) “Africa’s Entrepreneurs”, IFC Discussion Paper, International Finance Corporation, Washington DC
Mazrui, A.A. (1992) “The Liberal Revival, Privatisation and the Market: Africa’s Cultural Contradictions”, Arusha Conference Paper, Friedrich Naumann Foundation, pp.1-42
Meredith, M. (2007) Diamonds, Gold and War – The Making of South Africa, Jeppestown: Jonathan Ball Publishers
Millin, S.G. (1951) The People of South Africa, London: Constable & Co.
Oliver, R. (1999) The African Experience, London: Weidenfeld & Nicolson
Pakenham, T. (1993) The Boer War, Johannesburg: Jonathan Ball Publishers
Pama, C. (1983) Die Groot Afrikaanse Familie-naamboek, Kaapstad: Human & Rousseau
Raidt, E.H. (1991) Afrikaans en sy Europese Verlede, Kaapstad: Nasionale Opvoedkundige Uitgewery
Theal, G.M. (1917) South Africa, New York: G.P. Putnam’s Sons
Thompson, L. (2001) A History of South Africa, Jeppestown: Jonathan Ball Publishers
Turok, B. (1989) Mixed Economy in Focus: Zambia, London: Institute for African Alternatives
Van Dijk, L. (2004) A History of Africa, Cape Town: Tafelberg Publishers
World Bank (1989) Sub-Saharan Africa: From Crisis to Sustainable Growth, Washington DC
The Economist: “The Survey”, September 23rd, 1989, pp.1-57; “Rediscovering the Middle of Africa”, December 1990, p.75; “The Hopeless Continent”, May 13th, 2000; “How to Make Africa Smile”, January 17th, 2004


7 The Constraints of the Islamic World

Islam, the proper name of the religion traditionally called Mohammedanism in the West, is based on the revelations uttered by the prophet Muhammad (Mohammed), who lived in Arabia from about AD 570 to AD 632. His revelations were collected after his death in the volume called the Koran (Arabic Qur’an). From the Koran, supplemented by statements and rulings traced back to Muhammad, a system of law and a theology were derived in the subsequent centuries. These combined with elements from other sources to form a distinctive Islamic civilisation which has continued to grow into modern times. The total number of adherents today is variously estimated at between 1 billion and 1.4 billion, spread over more than 50 countries around the world. The world of the Arabs is considered by Arabs themselves as the backbone of the Islamic world. It includes around 360 million people inhabiting some 22 countries from the Atlantic to the Persian Gulf and from the Sahara Desert to the foothills of Anatolia. Islam is the dominant religion of the Arab world, but most of the world’s Muslims are not Arabs. They live outside of what was traditionally called Arabia. The non-Arabic Muslim world encompasses many countries where the majority of the population are followers of the Muslim religion. It includes countries such as Turkey, Afghanistan, Pakistan, Bangladesh, Malaysia and Indonesia, several sub-Saharan countries such as Mali, Nigeria, Niger and Tanzania, and also former republics of the USSR such as Turkmenistan, Uzbekistan, Tajikistan, Kyrgyzstan and Kazakhstan. Altogether the Muslims in these countries add up to around 750 million. In addition there are millions of Muslims living in minority enclaves in many other countries such as India, China, France, Germany, the UK, Canada and the USA.

The Islamic Religion

As applied in the Koran, the term “Islam” denotes “surrendering” or “committing” to the will of Allah (God) – which is the characteristic attitude expected of all adherents. The adherents themselves are called Muslims (Arabic Muslimūn) or “Believers” (Mu’minūn). The religion of Islam was spread through the conquests by the Arabs in the 7th century AD over Western Asia and North Africa and in the 8th century AD into Central Asia as well as into Spain. From the 11th century AD, under Turkish leadership, it spread into Southern Russia, India and Asia Minor and under Negro leadership into the Niger basin. In the 14th century it became politically dominant in the Balkans under the Ottoman Sultans and in India under the Sultans of Delhi. It also spread, largely by missionary endeavour, into Indonesia and parts of China. By the end of the 15th century AD it was expelled from Spain and in the 19th and 20th centuries AD, it lost ground in the Balkans, where it survived only in local communities in Bosnia and Albania. It continued to advance in East and West Africa.

Muhammad and the Koran

The history of Muhammad’s life is essentially known from the oral recollections of his followers, which were subsequently collected in biographical works of the 8th and 9th centuries AD. According to these sources Muhammad was born in Mecca, then a prosperous centre of the caravan trade between southern Arabia and the Mediterranean countries. In mid-life he developed “contemplative habits” and proclaimed the worship of one God against the prevailing polytheism and idol-worship of his fellow-Arabs. He succeeded in winning over a few prominent citizens of Mecca, Abū Bakr and ’Umar, who later, after his death, launched Arab armies on expeditions that led to the expansion of Islam. Although orthodox Islam rejected any kind of worship addressed to Muhammad himself as a human being, he gradually came to be regarded as an “eschatological” figure – a privileged intercessor for the whole community of Muslims – a link between Allah and all creation. The human Muhammad came to be seen as the intermediary of Allah’s revelation to mankind: the messenger and recorder of Allah’s word. An essential article of belief of all Muslims is the doctrine of the verbal inspiration of the Koran. Its verses, when quoted, are introduced by the phrase “Allah has said”. The Prophet’s part is understood to have been wholly passive. The Koran is the source of the guidance and instructions required by all Muslims for their daily lives: the religious obligations of prayer, alms, fasting and pilgrimage; the definition of the basic institutions of marriage, divorce and inheritance; and the outline of the general structure of law. Muhammad preached that men and women are worthless unless they surrender their will to Allah. In the manner of Christians, he preached that a day of judgment would come and that all must so order their daily lives that they would not be judged unfavourably by Allah and thereafter be punished in hell with all its terrors.

After the death of Muhammad, the Community of Islam was involved in a civil war over succession. The majority faction is called “Sunnis”, or followers of the Sunna (Practice) of the Community at large. Opposed to them were two dissident groups: one which maintained that the headship belonged solely to Muhammad’s cousin and son-in-law, Ali (and his descendants), whose members were called the Shī‘at ‘Ali (partisans of Ali) or “Shī‘a”; the other, which rejected both Sunni and Shia positions and maintained the right of the Community not only to elect its own head but to depose him if found guilty of sin (these were called Kharijites). During subsequent centuries the Sunnis predominated, not only in the Arab world, but also amongst the numerically preponderant non-Arab converts. The Shia are largely concentrated in Iran. The Kharijites survive only in small enclaves in Oman, Zanzibar and Southern Algeria.

Pillars of the Faith

Each individual Muslim believer is subject to certain duties called the “Pillars of the Faith”:

1. Confession of Faith (shahada) by repetition of the Word of Witness: “There is no God but the one God; Muhammad is the Apostle of God”.
2. Regular performance of the ritual of Prayer (salat) at the five appointed times, with certain prescribed ritual movements.
3. The giving of alms (zakat), a fixed percentage set aside for the relief of the poor and needy.
4. Observance of the annual fast during the month of Ramadan.
5. Once in a lifetime, a Pilgrimage to the Sacred Mosque at Mecca, with participation in certain prescribed ceremonies.

In addition to these five duties, certain other obligations are laid on Muslims by the Koran: they are forbidden to drink wine, to eat swine’s flesh, to gamble and to practise usury, and they are enjoined to refrain from unethical conduct such as perjury or slander. It is also obligatory to accept the Shari’a as both a system of law and a rule of life – setting out the ethical ideal. It is the Shari’a which confirms to each individual, as a Muslim citizen, those personal rights of liberty, prosperity and function awarded by God, and which frees each person from the capricious restrictions and classifications of a purely secular society. The common interest of the Community of Muslims requires each believer to join with other members similarly aware of their responsibilities to “strive in God’s path” for its defence against external and internal enemies. This “Holy War” (jihād fi sabil Allāh) has taken different forms in different ages. Holy War was waged vigorously against those who failed to submit peacefully to God’s will, though Jews and Christians were given special status as “protected peoples of the book” (dhimmis), since their scripture was believed to be based on the “partial revelation” of lesser prophets.
As dhimmis they could continue to follow their own faiths, provided they paid a special “head tax” (jizya), about 6 percent of each individual’s total monetary worth, to their Muslim rulers. Pagans, however, were offered only the options of Islam or death.

The conquest of Mecca in the Prophet’s own lifetime gave Muslims a solid core of Arab power around which to expand their brotherhood in the decades following Muhammad’s death. Within a single century, Islam burst explosively across North Africa, into Spain and over the Tigris and Euphrates to Persia and India. Never before in world history had an idea proved so contagious and politically potent. Martial fervour, combined with the ethic of social unity that replaced Arab intertribal conflict, made Muslim forces virtually invincible during the first century of their zealous expansion. The last judgment day, when all the dead would be raised to hear Allah’s eternal decisions, was a concept vividly articulated by Muhammad. None were promised better prospects in Allah’s paradise than those valiant warriors who died in righteous battle. (See H.A.R. Gibb (1977) “Islam” in Encyclopaedia of the World’s Religions, Barnes & Noble, pp.166-199 for a detailed analysis of Islamic theology and dogmatics.)

Trends in Islamic Doctrine

The elaboration of doctrine and scholastic theology was a relatively late development in Islam. The earlier generations were satisfied with simple, unspeculative piety and fear of God (taqwā), together with the performance of the ritual obligations. Influential religious teachers generally disapproved of speculative scholastic theology: what is explicitly stated in the Koran was to be accepted without asking questions. The “madrasas” served as the main repositories of stereotyped orthodox scholasticism.

Sufism The first challenge to orthodoxy came in the 1100s with the spread of Sufism. For centuries the Sufis propagated a spiritual and mystical form of Islam. Sufism was influenced by the traditions of monastic asceticism and mysticism found in Buddhism, as well as in the Gnostic, Hermetic and Christian traditions. Richly endowed convents, madrasas or lodges were set up in centres such as Cairo, Damascus, Baghdad, Istanbul and North India, where the Sufis could engage in the pursuit of spiritual experience, the bodily discipline of celibacy and mystical intuition. Unlike orthodox Islam, which sets all believers on the same level, Sufism isolated its practitioners from the general body of the Islamic community. Each Sufi group constituted an “ekklesia” in the form of a “church” or “order”, with “shaikhs” at the top and a hierarchy of disciples and underlings.

Wahhabism The next phase saw a return to Islamic fundamentalism in the 1700s with the emergence of Wahhabism under the influence of an Arab sheikh named Muhammad ibn ‘Abd al-Wahhab. The Wahhabis called for a return to the doctrines and practices of early Islam. Their hostility towards Sufism led to a “purification of Islam” from Western influences. It also led to the development of the Muslim Brotherhood in Egypt in 1928. The Muslim Brotherhood grew from a grassroots organisation that interpreted Islam as a system of government into a mass movement that provided key popular support for the 1952 Revolution of the Free Officers, a military coup led by Col. Gamal Abdul Nasser that ousted the Egyptian monarchy. Similar movements in Palestine and North Africa later emerged as significant actors in the political sphere. Many branches of the Brotherhood based on orthodox revivalism were directed towards missionary activities in Africa, India and Indonesia. Wahhabism is now deeply entrenched in Saudi Arabia, where its clerics have a stranglehold on the Al Saud royal family. In a Special Report on Saudi Arabia written by Max Rodenbeck for The Economist of January 7th 2006, it is reported that at a giant state-run press outside Medina, some 10 million beautifully printed Korans a year are produced in 40 languages and distributed free. These editions of the Koran are annotated by Wahhabist scholars, who pronounce, among other things, that jihad is one of the “pillars” of Islam in addition to the well-known five “pillars”. One footnote says “Jihad is an obligatory duty”. Some estimates put the number of Saudi volunteers for jihadist campaigns in Iraq, Afghanistan and elsewhere at around 30,000. Generous funding also flowed to jihadist causes, often without the knowledge of the Saudi donors. The Wahhabist establishment has been given control of the Saudi kingdom’s mosques and schools. Wahhabist schools and sharia courts have supplanted older institutions across the kingdom. The powers of the mutawaa (religious police) have gradually been widened and rules on such things as female dress more rigidly enforced. Huge sums went to religious causes around the world, from the founding of Islamic universities to the building of mosques and pilgrimage assistance. Osama bin Laden and his deputy Ayman al-Zawahiri, the founders of al-Qaeda, are both alumni of the Wahhabist school. The country’s main universities also remain steeped in Wahhabist thought.

Modernisation Trends Sheikh Muhammad ‘Abduh of Egypt (1849-1905) was the first major Islamic theologian to seek a balance between reason and revelation, interpreting the fundamental principles of Islam without being bound by imitation of traditional authority. He argued that it was necessary to find a balance between God’s truth as revealed in nature and His truth as spoken in the Koran. In the 19th century, social institutions began to undergo major changes with the setting up of new civil courts and their gradual assumption of the powers of the Shari’a courts, whose jurisdiction became limited to the area of family law. Turkey was the only Muslim country to abolish the jurisdiction of Shari’a law completely. In other Muslim countries, the increasing tendency is to substitute newly legislated codes for the traditional rulings of the orthodox courts. These new codes, while deriving rules from the Koran, the traditions of the Prophet and decisions reported on the authority of early jurists, leave the legislators a free hand to disregard all “school” decisions. Although the rules applied in different countries vary, the new codes introduce restrictions on the contracting of marriage, lay down minimum ages for legal marriage, restrict the ability of husbands to divorce their wives and make provision for wives to seek annulment or dissolution of marriage. Polygamy is also restricted, but prohibited outright in only one or two countries. This transformation brought into question two basic principles upon which the unity of the Muslim community was built. The first was the rejection of the principle that every ruling, to become valid, must be endorsed by a general consensus (ijma’), and the resort instead to an eclectic choice of rulings. This change challenged the “catholic” structure of traditional Islam by introducing an element of “protestantism”. It brought a diversity of interpretations of Islam’s constitutional documents – the Koran and the Traditions of the Prophet.
The resultant diversity of interpretations challenged the inner function of the ijma’ as the instrument by which the Community regulates its spiritual life. Traditionally, the function of the ijma’ is to secure and preserve the integral spiritual unity of all Muslims as a spiritually governed society. The second implication of the transformation was the state’s assertion of its independent legislative authority. Instead of being content to remain a parallel system of rulings and jurisdictions in the field of public administration, the state now claimed an exclusive right of legislation, overriding the Shari’a – even in the face of established ijma’ – and binding upon all Muslims under its jurisdiction, irrespective of the legal schools to which they individually adhere. It meant that the true character and function of the ijma’ was now challenged by the principle of ijtihad – the formulation of rulings based on a critical study of the sources. It raised the possibility of a secular power usurping the “spiritual rights” of Muslims, and carried with it the possibility of disrupting the Shari’a and consequently of destroying the peculiar and divinely ordained constitution of the Muslim Community. (See Gibb, op. cit., pp.195-197.)

Twentieth Century Developments

One of the major debates within Islam in the 20th century has been over the relationship between religion and the state. The principal issues in this debate encompass national unity versus Islamic unity; the response of Islam to socialism and capitalism; and the role of Islam in a secular state.

Pan-Islamism The concept of Pan-Islam was first advocated by Sayyid Jamal al-Din al-Afghani (1838-1897), a Persian Shi’ite, as a necessary response to what he perceived as the threat posed by the West to Islam. He also insisted that Islam should be rebuilt on its classical foundations. In the 20th century, the idea of Arab unity became prevalent. The common cultural, linguistic, historical and religious links between the Arab countries were emphasised and the Arabs held up as the backbone of Islam. Al-Bazzaz maintained that there was no immediate possibility of unity among all Muslims for a variety of political and social reasons. Hence it was argued that Pan-Arabism was a necessary prerequisite of any future Muslim unity. These ideas were also supported by Sati al-Husri (1880-1964), who believed that modern world conditions militated against Pan-Islamism, but that Arab unity was a feasible goal for the foreseeable future. In Egypt Ahmad Lutfi al-Sayyid (1872-1963), the founder of the People’s Party, was sceptical of Pan-Arabism and instead propagated Egyptian nationalist sentiment. The idea of Islamic nationalism featured prominently in the birth of Pakistan and its subsequent development. The main intellectual force behind the creation of Pakistan was the poet, mystic and philosopher Muhammad Iqbal (1875-1938), who perceived the creation of a separate Muslim state in India as a step towards the realisation of an ideal Islamic state and also as a means of freeing Islam from the influence of Arab imperialism. The founding of Pakistan was originally opposed by Sayyid Abul A’la Maududi (1903-1979), founder of Jamaat-i-Islami, a militant movement whose purpose was to establish an Islamic world order. Maududi argued that any form of nationalism was contrary to Islamic ideals. At a later stage he accepted the reality of Pakistan and spearheaded a movement to transform the country from a Muslim homeland into an Islamic state.
This change of opinion highlights the inherent dichotomy in Muslim thought between the ideal of Islamic unity and the practical realities of loyalty to the individual state. The latter bears the danger of delegating the sovereignty which belongs to God, to a human ruler or to the people.

Islam and Socialism The relationship between Islam and conceptions of socialism and capitalism is complex. Capitalism has been associated with the corruption and decadence of the Western world and with Western imperialism. It thus aroused strong feelings of nationalism and religious morality amongst Muslims. But socialism, which is also considered to be materialistic at its roots, was not readily accepted as an alternative. As a result, a division arose between Muslim countries committed to socialism and those which were not, and there was a broad spectrum of socialist stances amongst Muslims. The Arab Ba’ath Socialist Party, which came to power in Syria and Iraq in the latter half of the 20th century, combined the ideas of socialism with those of Pan-Arabism. The Algerian Popular Democratic Republic in its 1976 charter described social revolution as the only hope for a declining Muslim world. Several Islamic intellectuals, such as Mustafa Mahmud, attempted to show that Islam can provide a system far superior to both Marxism and capitalism. By building on the principles of the Koran and the Sunna, the needs and desires of the individual and the group could be accommodated in balance. This view of Islam as the middle path between capitalism and socialism was also reflected in the ideas of Ayatollah Mahmud Taleghani, a principal leader of the Iranian revolution. He argued that Islamic economics, based on the right of individuals to own the fruits of their labour, was the inevitable end of any economic system.

Islamising the State In Egypt, political activity by the Muslim Brotherhood, an activist group founded by Hassan al-Banna (1906-1949), began with its campaign, launched in 1928, to create an Islamic state by direct action, while trying to work within the framework of the existing state. After initial successes, which included the forced abdication of King Farouk in 1952, the Brotherhood formed an alliance with the then ruling Revolutionary Command Council. Disagreements between the two groups led to an assassination attempt on Nasser, leader of the Free Officers who had put the Revolutionary Command Council in power, and many members of the Brotherhood were imprisoned or executed. In the 1970s the Brotherhood was again active in opposition and continued its campaign to introduce more Shari’a law as a means towards Islamising the state. In Iran the reformist but inept policies of the Shah in the 1960s and early 1970s engendered widespread hostility. The Shah antagonised the mullahs by encroaching on the autonomy of the religious establishment. He also alienated conservative Muslims by banning the chador from universities and government offices. A broadly based movement backed the revolution and brought the exiled Ayatollah Ruhollah Khomeini to power to establish the Islamic Republic of Iran in 1979.

Reactionary Trends The role of women in modern Islam has also been affected by the reaction against Western culture in many Muslim countries. Increased freedom of activity for women and the abandonment of traditional dress have been seen by many as the result of corruption by Western decadence. The Muslim Brotherhood, in particular, has striven to encourage traditional Islamic values such as polygamy, which is seen as a protection from the evils of adultery and prostitution that, in this view, are encouraged by enforced monogamy. As a result of a fast-changing world, many Muslim groups are placing a growing emphasis on a return to Islamic values. But the situations created by an increasingly complex world confront Muslims with ever-changing challenges to interpret the ethical and social values of Islam in a meaningful way. The 1990s saw the formation of al-Qaeda. Composed of loosely affiliated terrorist cells, its activities have fuelled a resurgence in Islamic fundamentalism. Founded by Osama bin Laden, it operates in around 80 countries, acting more or less like an “ideological franchise” coupled with a “terrorism central”. It supports jihadi groups who draw their strength from a common pool of self-righteous anger at what they see as the humiliation of Muslims at the hands of Western “infidels”. America’s wars in Iraq and Afghanistan, along with its support for Israel, are interpreted as a war on Islam.

The Arab World in the Twenty-first Century

The 22 countries that belong to the Arab League are not as homogeneous as is generally assumed. Apart from pockets of Kurds, Maronites, Copts, Berbers and Africans, and apart from the fact that Arabic is widely spoken and understood, there are many Arabic dialects that are not commonly understood. The Islamic religion provides some glue, but in a few cases it is a divisive force. The only source of consensus is shared hostility towards Israel. Within the borders of some Arab countries the problem is the missing glue of nationhood. Iraq is saddled with perennial sectarian conflict between Sunni and Shia factions and the separatist strivings of the Kurds. Lebanon is notoriously fragile as a result of its religious factions. Sudan has been plagued by civil wars between its Arab-dominated centre and the non-Arab minorities in its south and west.


Demographic Trends

Country                   Total Population   Muslim Percentage

Algeria                         34,178,188     99.0
Bahrain                            727,785     82.0
Comoros                            752,438     98.0
Djibouti                           516,055     94.0
Egypt                           83,082,869     90.0
Iraq                            28,945,657     97.0
Jordan                           6,342,948     95.0
Kuwait                           2,691,158     85.0
Lebanon                          4,017,095     60.0
Libya                            6,310,434    100.0
Mauritania                       3,129,486    100.0
Morocco                         34,859,364     98.7
Oman                             3,418,085    100.0
Palestinian Territories          4,000,000    100.0
Qatar                              833,285    100.0
Saudi Arabia                    28,686,633    100.0
Somalia                          9,832,017    100.0
Sudan                           41,087,825     70.0
Syria                           20,178,485     74.0
Tunisia                         10,486,339     98.0
United Arab Emirates             4,798,491     96.0
Yemen                           23,822,783     99.0

Source: population data from the 2009 CIA World Factbook.

The region’s population has doubled over the past 30 years to around 360 million. The majority of Arabs are under 25 years old. The rapid population growth has coincided with a massive influx into the cities. Cairo burgeoned from 9 million in 1976 to 18 million in 2006. Saudi Arabia’s capital, Riyadh, hardly a noteworthy town 50 years ago, is a city of 5 million people today. Around 90 percent of Lebanese and Jordanians live in cities. The Arab population is expected to surge by a further 40 percent over the next two decades – some 150 million additional people. With low employment rates and particularly high youth unemployment (around one person in five is out of work), the Arab world faces looming problems. Particularly in the most populous states (Egypt, Algeria and Morocco), the prospects of creating enough job opportunities seem remote.

Authoritarian Regimes Throughout the Arab world, authoritarian rule is the order of the day. Hardly any of the 21 actual states can plausibly claim to be a genuine democracy. The cartel of authoritarian regimes is well practised in the arts of repression needed to stay in power. In Egypt radical Islamist movements such as Islamic Jihad and the Jamaat Islamiya continue to claim lives. One of Egypt’s jihadists, Ayman al-Zawahiri, Osama bin Laden’s number two, co-founded al-Qaeda.

The stubborn conflict in Palestine has created a deadly international stalemate – despite continued efforts by American presidents, or perhaps on account of continued American support of Israel. Much of Arab opinion remained fixated on the struggle with Israel. After the 9/11 attacks George W. Bush sent in the American army to destroy the dictatorship of Saddam Hussein in Iraq – ostensibly to rid America of an unpredictable enemy plausibly suspected of holding weapons of mass destruction. The weapons were not found (despite the fact that Saddam Hussein had found it necessary to expel the UN’s weapons inspectors). President Bush pressed on with his “Freedom Agenda” to spread democracy in the region – perhaps naively, not realising that had he been more successful, several of the Arab kings, emirs and presidents (who depended on American markets, aid and military protection) would have been thrown out of their positions in the event of real democracy in their countries. The “Freedom Agenda” of the Bush administration failed and the Obama administration decided to bow out respectfully. The fundamental problems relating to the political stagnation of the Arab world were left unresolved. Although the local details vary, most Arab regimes maintain their power in remarkably similar ways. At the top of the system sits either a single authoritarian ruler (a monarch or a president), or an ever-ruling party or royal family. The ruler is shored up by an extensive mukhabarat (intelligence service) employing a vast network of informers. Some estimates put the Egyptian internal security apparatus as high as 2 million people. The second instrument of control is the government bureaucracy. With no rotation of power, Arab countries have blurred the distinction between the ruler and the state. Bloated civil services provide the regimes with a way to dispense patronage and “pretend jobs” to mop up new university graduates.
The size of these administrative armies is staggering. In 2007, Egypt’s civil service numbered about 7 million (population 83 million). As a proportion of their populations, the Gulf oil producers’ public sector payrolls are higher still. The most effective instrument of control used by the Arab regimes is “sham democracy”. Most Arab countries have parliaments and hold formal elections, but the parliaments have few powers and elections are rigged to ensure that the ruler or his party cannot be unseated. News media are government controlled. Hosni Mubarak has remained President of Egypt for 28 years. In 2007 a new constitutional amendment banned political parties with a religious orientation and increased the already extensive powers of the president – including the emergency laws under which Egypt has been governed for most of the past half-century. The government has stopped the broadcasting of parliamentary debates and their reporting in newspapers. Members of the opposition movement, the Muslim Brotherhood, claim that they are repeatedly harassed by the police. Elsewhere in the Arab world the democratic outlook is equally bleak. In Syria, Hafez Assad died after 30 years as the country’s ruler, but his son Bashar Assad took his place. Ali Abdullah Saleh has been president of Yemen for more than 30 years. Jordan is still run by the Hashemite family, Morocco by the Alouite family, Saudi Arabia by the al-Sauds and Kuwait by the al-Sabahs. Muammar Qaddafi has been imposing his unique brand of “Islamic Socialism” on Libyans since 1969 and is grooming one of his sons to take over. In Egypt there is also talk of a favoured son inheriting his father’s position. It is clear that for the Arab world the reality of government of the people, by the people and for the people still lies in the distant future. Change will have to come from within. But it is not clear how this change might come about.
One possibility is that the impetus for change might grow as power passes down the generations, bringing to the fore a brand of leaders (and followers) with a more modern outlook. Morocco’s King Muhammad VI is more of a moderniser than was his father, King Hassan II. Saudi Arabia’s Abdullah, who eventually ascended the throne in 2005 at the sprightly age of 80, has cautiously accelerated the careful reforms he initiated during his time as crown prince, when his older half-brother Fahd was king. Jordan, however, has not advanced since Abdullah took over from Hussein. It seems that change will have to come from below.


Economic Stagnation With the exception of the oil-rich areas such as Libya and the Gulf States, the economic statistics of the Arab world paint a bleak picture. The broad pattern is one of under-performance in investment, productivity, trade, education and social development. The total manufacturing exports of the entire Arab world are lower than those of the Philippines (with less than one-third the population) or Israel (with a population the size of the Saudi city of Riyadh). In the booming Gulf States, most of the migrant workers are drawn from Asia, but they are also sucked in from poorer Arab countries. Many Arabs serve as migrant workers in Europe. The remittances of migrant workers are estimated to make up around 20 percent of GDP in Lebanon and Jordan. Millions of Egyptians work abroad, many of them in the Gulf. Some of the countries that lack oil but are close to European markets and influence, such as Morocco and Tunisia, have begun to create diversified economies. It is clear that all the Arab countries, both oil and non-oil states, are disproportionately dependent on collecting rents – if not from oil, then from some other source such as remittances, foreign aid or loans. The Arab countries need to address the imbalances in their economies: skills shortages, rigid labour markets, over-sized bureaucracies and an over-dependence on rent collection. They are hampered by chronic weaknesses in their government bureaucracies, defective judicial systems, a lack of political transparency or accountability, and exploitative vested interests. Over the last quarter of a century, real GDP per capita has fallen throughout the Arab world. In 1999, the GDP of all the Arab countries combined stood at $531.2 billion – less than that of Spain. Today, the total non-oil exports of the Arab world (population around 360 million) amount to less than those of Finland (a country of only 5 million inhabitants).
The situation regarding science and technology is as bad or worse. The number of patents registered in the USA between 1980 and 2000 coming from the Arab world totalled 867 – compared with 16,328 from South Korea and 7,652 from Israel. The Arab countries also have the highest illiteracy rates. Only sub-Saharan Africa has a lower average standard of living.

The Gulf States and the Oil Bonanza Barely eight decades ago, all six desert monarchies that make up the Gulf Co-operation Council (Saudi Arabia, Oman, the United Arab Emirates, Qatar, Bahrain and Kuwait) were wretchedly poor, thinly populated and so loosely governed that they could barely claim the status of nation-states. All were catapulted into modernity by the discovery of oil by American prospectors in 1932 – the region holds some 45 percent of the world’s known oil reserves – and by its subsequent extraction and marketing around the world. In 2006 oil revenues contributed 80 percent of government revenue in the GCC countries, with their combined population of around 40 million people. The oil bonanza brought many benefits to the GCC countries. In a special report on the Gulf States in 2002, The Economist noted that since 1970 the six desert monarchies had trebled literacy levels to 75 percent, added 20 years to average life expectancy and created a world-class infrastructure by spending a total of $2 trillion. Dubai, with relatively little oil and gas, has become a successful business, shopping and tourism hub along the lines of Singapore and Hong Kong. Kuwait pioneered the idea of safeguarding its own future by becoming a long-term investor in the economies of others. The huge sovereign wealth funds of the Gulf States have turned the Arab countries into a big force in the world economy as strategic investors. A quick glance at the demographic trends in the Gulf States shows that there is huge scope for investment in the development of the home region. With 60 percent of the Gulf’s native population under the age of 25 and with more of its citizens in school than in the workforce, the region faces at least a generation of rocketing demand for employment. In every single GCC country the native workforce will double by 2020; in Saudi Arabia it will grow from 3.3 million in 2002 to 8 million. This challenge is particularly daunting for the Gulf region for several reasons.
The first is its lopsided labour structure, caused by importing millions of foreign workers to do the heavy lifting while dispensing cosy jobs to locals. The result is a two-tier workforce, with outsiders working mostly in the private sector and locals monopolising the public bureaucracy. In Kuwait in 2002, for example, 93 percent of the natives who had jobs were employed by the government, whereas 98 percent of the 900,000 people working in the private sector were foreigners. Private sector workers were found to be productive, whereas government workers were found to be worth only a quarter of what they were paid. The second is the poor quality of the education systems, which were largely based on outdated Egyptian models designed chiefly to instil patriotism and religious values. The system discourages intellectual curiosity and channels students towards prestige certificates rather than marketable skills. Of the 120,000 graduates that Saudi universities produced in the period 1995 to 2000, only 10,000 had studied technical subjects. They accounted for only 2 percent of Saudis entering the job market. Government largesse has tended to spoil Gulf-state citizens. Jobless youths do not bother to find work because their families are wealthy and willing to keep them in comfort until “appropriate” positions arise. Years of easy money and state coddling seem to have weakened the work ethic. Businesses are said to be reluctant to hire locals because they “won’t show up, won’t care and can’t be fired”. Indians form the largest proportion of foreign workers. They work long hours for low wages, which are set by the marginal productivity of labour – not at the levels prevailing in the Gulf States, but at those in countries such as Bangladesh, from which many of them come. Non-residents cannot acquire citizenship, nor have they been allowed to own property – unless they form a minority partnership with a local. The security of the oil-rich Gulf States has long been a strategic problem for the oil-dependent world. Saddam Hussein, as Iraq’s leader, invaded Iran, gassed Kurds, invaded Kuwait, lobbed ordnance at the Saudis and developed (concealed?) chemical weapons.
Iran also tends towards erratic behaviour, is armed with missiles and harbours nuclear ambitions. As a result of its volatility, the Gulf States have enlisted American military protection. Although the USA tends to keep a low profile, it keeps upwards of 30,000 troops in the region at Kuwait and at the Prince Sultan air base near Riyadh. America’s 5th fleet is headquartered at Bahrain. The USA also keeps military facilities at Qatar, Oman and the UAE. These facilities are crucial to the American campaign in Afghanistan.

Family Dynasties A remarkable characteristic of the Arab world is the resilience of the family dynasties. The archetypal example is the Al Sauds. Several observers have commented on the surprising degree of acceptance enjoyed by the ruling dynasties. Saudi Arabia (population 29 million) is the site of the holiest places in Islam, which carries with it both heavy responsibilities and wide influence among the world’s 1.4 billion Muslims. It was created by holy war (jihad). Its present territory was captured between 1902 and 1925 by a crusading puritan army under Abdul Aziz ibn Saud, who declared himself king in 1932 – the same year that oil was discovered in the east of the kingdom. Oil and Islam define Saudi Arabia in many ways: the relationship between citizen and state, the link between effort and reward in the workplace, the immense wealth and continuous patronage financed by the income stream derived from oil, and the kingdom’s strategic importance to the oil-hungry world at large. The sheer size of the Al Saud clan is quite remarkable. There have been eight generations of Saudi rulers dating back to 18th century sheikhs who held sway in a few oasis towns near the present Riyadh. Many have been prolific. Abdul Aziz himself sired some 36 sons and even more daughters. The first son to succeed him, King Saud, fathered 107 children. King Abdullah is believed to have 20 daughters and 14 sons. The extended Al Saud family is now estimated to number some 30,000, with around 7,000 regarded as princes. Of these around 500 occupy government positions, perhaps 60 in important decision-making roles. Tribal links are maintained through strategic marriages and the selective manning of key institutions such as the Saudi National Guard. Forty years ago, King Feisal, a renowned reformer, decreed that the royal family’s take from oil exports should be capped at 18 percent. State budgets are too opaque to audit the current

percentages, but the combined wealth of the family is estimated to add up to hundreds of billions of dollars. The Al Sauds and their loyalists control all of the dozen Saudi daily newspapers, the two most respected pan-Arab dailies, and four of the five most popular Arab satellite TV channels (al-Jazeera excluded). Despite the lack of any constitutional constraints, the Saudi King is restrained by the Wahhabist religious establishment. Although kings appoint senior members of the clergy, they have no direct oversight over the 700 judges who run the sharia courts, the backbone of the Saudi legal system. The king must also answer to his own enormous family. By tradition, succession is not vertical, passing to sons, but horizontal, passing to brothers in order of age. The king’s sons, of whom dozens are still in line for succession, have used their long wait to create their own powerful fiefs. All have appointed their sons to top positions.

The Condition of Women In most Arab countries women have no political power. The depressed and down-trodden status of women is one of the main reasons for the under-development of their society as compared with the West and the rapidly developing East. Some countries such as Iraq and Tunisia have made significant progress towards the emancipation of women by increasing opportunities for them: access to higher education and a widening range of professions. The spectacle of women peeping out of small holes in burkas is still a familiar sight. It symbolises the perspective on the world accorded to female Arabs. How can a society prosper when it stifles half its productive potential? Despite development initiatives over the recent few decades, one in every two Arab women still cannot read or write. In nearly all Arab countries women suffer from unequal citizenship and legal entitlements.

Stirrings of Social Change Apart from the plutocratic cliques at the heart of the various Islamic regimes, there are faint signs of stirrings: scattered street demonstrations, protesting voices in the social-networking sites of the blogosphere, some courageous newspapers nibbling at sensitive subjects such as bureaucratic excesses and corruption. But these stirrings manifest themselves within severely circumscribed limits. Satellite television plays an important part in spreading information about the world; private investors and entrepreneurs are playing a growing role in economies that used to be dominated by the state; business associations and chambers of commerce are increasingly involved in public policy making in several countries. Although most businessmen avoid “politics”, they press the need for “modernisation” – of procedures, regulations, education and training. The expanding role of business means that the circle of consultation and decision making has grown beyond the theocratic and plutocratic elite. Arabs today enjoy unprecedented access to information and divergent opinions. After 1996 the emir of Qatar, Sheik Hamad bin Khalifa, established the al-Jazeera television station in his capital, Doha. The new station was allowed to broadcast news from across the Arab world. Although al-Jazeera chose not to dwell much on the blemishes of Qatar itself, its many Palestinian journalists tackled sensitive issues elsewhere in the region and soon spawned imitators and competitors. Leaders were obliged to explain and justify themselves as never before. The Saudi-sponsored al-Arabiya subsequently also entered the terrain. Both stations pay a lot of attention to the plight of the Palestinians and take a strong stand against the American support for Israel and its involvement in Iraq.
Both al-Jazeera and al-Arabiya have been careful not to antagonise their respective Qatari and Saudi sponsors, but they have created platforms for debating Arabic issues and for exposing problem areas. (See The Economist, “Special Report on the Arab World”, July 25th, 2009, pp.11-13)

The Battle of Ideas Although the Arab world has produced few critical minds from within, the growing platforms for public debate have raised the level of participation: from liberals as well as

conservatives. Radical imams as well as milder clerics are active bloggers. The overwhelming messages pumped into the airwaves are by no means all congenial to the West. In recent times reactionary conservatives seem to have gained ground across the region promoting extreme piety and an apolitical stance – sometimes called Salafism. The Salafist movement champions a return to the pure Islamic traditions practised by Muhammad and his contemporaries. They adhere to a utopian vision of Islam mastering the world. They do not pursue jihad against the West nor do they attack the legitimacy of Arab regimes. They do not promote specific political agendas, but are still well organised. They are well funded from sources in Saudi Arabia. They are unfriendly to liberal causes such as female emancipation. They subject those Muslims whose practice of the faith falls short of their own exacting standards to takfir – denouncing them as unbelievers or even apostates. Peter David writes in his article “Waking from its sleep” that “Religious fervour is growing among Arabs ... Access to the airwaves and the internet has democratised Islam, forcing rival interpreters of the faith to compete on their own merits for an audience that crosses sects and borders. And this cacophony inside Islam is itself part of a wider, and surprising, paradox of today’s Arab world, which is that, behind the stagnation of its formal politics, it is engaged in a fierce and potentially history-altering battle of ideas.” (See The Economist, July 25th, 2009, p.14) The Economist’s analyst contends that the causes of conflict in the Arab world are self-reinforcing: the competition for energy, the conflict with Israel, the weakness of statehood and the stagnation of politics. The USA is deeply involved in the Arab world as a consequence of its association with Israel, its strong military presence in the Gulf States and its vital interests in the region’s oil.
The American presence is strongly challenged by Iran with its close ties with Russia, China and Latin American allies and its export brand of theocratic domination strongly buttressed by Hamas, Hezbollah and its Shia militant allies. In Iraq, the only large Arab country where Shias outnumber Sunnis, the Shia religious establishment has not embraced the Iranian idea invented by Ayatollah Khomeini of setting up a supreme Islamist jurist above the elected leadership of the state. The political debate in the Arab world is overshadowed by the issue of Israel. It looms larger than anything else in Arab minds and distorts the internal Arab debate about politics and government. Iran has turned the Palestinian conflict into a tool against America’s Arab allies, arousing anti-American passions on the Arab street. Pro-American regimes lack democratic legitimacy and are presented as lackeys of a resented superpower. Many Arabs reject the idea of peaceful coexistence with Israel. This conflict tends to override internal quarrels between secular and religious, Sunni and Shia, or left and right. Their hatred for Israel is an intoxicating way to ignore their own failings and to blame someone else. It enables the plutocratic regimes to maintain states of emergency at home and to postpone reform. There will be no spring or new dawn without solving the Palestinian problem.

Non-Arab Muslim States

Along the borders of the Arab world and further away are a number of outlying states with predominantly Muslim populations. To the north of the Arab world lies Turkey (99.6 percent out of 76.8 million) and to the north-east lies Iran (99 percent out of 66.4 million). In South-Central Asia lie Afghanistan (99 percent out of 33.6 million), Pakistan (97 percent out of 154 million) and Bangladesh (83 percent out of 156 million). Further east lie the most populous Muslim state, Indonesia (86 percent out of 232 million), and Malaysia (52 percent out of 25.7 million). Towards Central Asia lie the former Soviet Republics of Kazakhstan, Uzbekistan, Tajikistan, Kyrgyzstan and Turkmenistan (90 percent out of 53.6 million). In Africa these include Chad (54 percent out of 10.3 million), Mali (90 percent out of 12.6 million), Niger (80 percent out of 15.3 million), Nigeria (75 percent out of 149 million), Sierra Leone (60 percent out of 6.4 million), Tanzania (65 percent out of 41 million) and (100 percent out of 405,000).

Together these countries have a combined population of around 1 billion people of which around 75 percent are Muslim. As the spearheads of Islam continued to probe in all directions, they soon reached the Strait of Gibraltar in the west. Far to the east they reached the mouth of the Indus on the Indian Ocean, the areas north of the Ganges as well as the banks of the Brahmaputra above the Bay of Bengal. After conquering the lands of the Persians they penetrated Central Asia all the way to Samarkand and the areas west of the Great Wall of China. In many of the captured areas, Arab warriors were actually welcomed by local inhabitants who had long resented the previous rulers. The Muslim invaders required armies that were capably led, high in morale and skilled in using horses. Jews and Christians in conquered areas were usually treated as second-class citizens and forced to pay high taxes. Penetrating into Africa, Islam extended beyond the outer reaches of the Roman Empire. From the 8th century the first mosques appeared in the ports of East Africa. South of the Sahara Desert, a long stretch of dry territory extending to the Atlantic coast in Mauritania was visited by Islamic merchants. The Islamic religion gave meaning to the wandering nomads of the desert regions – requiring no priest and no church – not even for a burial. The language of the Koran was Arabic, but converts could learn passages by heart. Islamic traders did business as far apart as Mombasa, Canton and Timbuktu. Although Islam proclaimed the kinship of all peoples, the idea did not extend to slaves. In the course of centuries, Islamic merchants served as major slave traders from the upper reaches of the Niger River to Central Asia and Northern India.

Turkey The Turks originated in the Turkestan region of Central Asia. As a warrior tribe they moved west in stages in the wake of the Mongols’ advance. Osman I, a commander of the Ottoman Turks and founder of the Ottoman dynasty, established a base in the Anatolian interior. From that base they raided far and wide, chipping away at the Byzantine frontier. A succession of Ottoman rulers, styling themselves “Sultan”, conducted the conquest of Asia Minor, overwhelming the Christian Greek settlements with Muslim Turkish colonists. The Ottoman Sultans led a supreme nation of warriors, calling themselves ghazis (“warriors of Islam”). By 1400 the Ottoman Turks held nearly all of the territory of present-day Turkey and extended their rule far into Christian Europe. They occupied long reaches of the Danube River and large parts of what are now Albania, Serbia, , Bosnia, Bulgaria and Romania. Many peasants accepted the religion of the Turks and joined their army as mercenaries. In 1453 the Turks conquered Constantinople, then moved into Greece and the south of Italy. In the 16th century they captured Damascus and Cairo. As the Europeans were establishing their empires in America and Asia, the Ottoman Turks from Asia were forcing their way into Europe. In the late 1680s, the Turks were driven back first from the outskirts of Vienna and then from Buda, Belgrade and finally also from Athens. But the Turks clung to the Holy Land and much of the Middle East until they lost control in the First World War (1914-1918). By 1918 Mustafa Kemal, the Turkish general who earlier became the hero of Gallipoli when he routed the Allied Powers from Turkish soil, surfaced at the head of a Turkish national movement dedicated to the creation of a national republic based on a modern secular society. With his headquarters in Ankara, the Turkish-speaking heartland, he was a sworn enemy of the Sultan, the mosque and the veil.
Mustafa Kemal’s movement played a central role in consolidating the remnants of the Ottoman state after the Armistice of Mudros which ended Ottoman involvement in World War I. He had to deal with local uprisings of irregular forces, remnant Ottoman forces and Greek military encroachment. The first step in establishing a legitimate basis of action was to convene a parliament, the Grand National Assembly, at Ankara in 1920. In the Fundamental Law of January 20th, 1921, the assembly declared that sovereignty belonged to the nation, that the assembly acted as the true and only representative of the nation, that the name of the state was declared to be Turkey and that executive power was entrusted to an executive council headed by Mustafa Kemal.

After he drove out foreign forces from Turkish soil and brought irregular forces including remnants of the sultanate under control, a comprehensive settlement was achieved via the Treaty of Lausanne (1923). A compulsory exchange of populations was arranged as a result of which 1.3 million Greeks left Turkey and 400,000 Turks were repatriated from Northern Greece. The city of Mosul was allocated by the League of Nations to become part of the new state of Iraq. Turkey only regained control of the Straits in 1936. Construction of a new political system began with the abolition of the sultanate and the assembly’s declaration of the Republic of Turkey on October 29th, 1923, with Mustafa Kemal elected as president. All members of the Ottoman dynasty were expelled from Turkey and a full republican constitution was adopted on April 20th, 1924. It retained Islam as the state religion, but in 1928 this clause was removed and Turkey became a purely secular republic. Mustafa Kemal came to be known as “Ataturk” (Father of the Nation). He is said to have been an autocratic, dominating and inspiring personality. He died in 1938. His policies were based on six fundamental principles: republicanism, populism, nationalism, statism, secularism and revolutionism. These concepts were given specific content within the context of Kemalism. The secularism of Ataturk played a major role in pushing Turkey into a process of modernisation. Under his leadership the caliphate – the supreme politico-religious office of Islam – was abolished. The secular power of all religious authorities and functionaries was reduced and eventually eliminated. The various religious foundations were nationalised and religious education was restricted. The influential and popular mystical orders of the dervish brotherhoods (tarika) were also suppressed. Although secularised at the official level, religion remained a strong force at the popular level.
Kemalist secularism did not merely mean separation of state and religion, but also the separation of religion from educational, cultural and legal affairs. It meant independence of thought and independence of institutions from the dominance of religious institutions and religious thought. The Kemalist principle of secularism did not advocate atheism – it was not an anti-God campaign. It was a rationalist, anti-clerical secularism. Kemalist secularism was not against an enlightened Islam, but against an Islam that was opposed to modernisation. Ataturk replaced traditional Islamic institutions with modern, nation-based, anti-imperialist institutions focused on economic and technological development. His concept of “statism” was based on the principle that the state had a legitimate role to regulate the economy, to promote the national interest, to engage in areas where private enterprise was inactive, or where private enterprise proved to be inadequate. Under his influence, the state emerged not only as the principal source of economic activity, but also as the owner of the major industries of the country. The Kemalist reforms brought about a revolutionary change in the status of women through the adoption of Western codes of law, in particular the Swiss Civil Code. In 1934 women received the right to vote. Kemalism opposed class privileges and distinctions and considered Turkish citizenship as the supreme value to spur people to work hard and to achieve a sense of unity and national identity. Kemalist nationalism exhorted citizens to accept the principle that the Turkish state is an indivisible whole comprising its territory and people. Hence, Kurdish separatism was considered unacceptable.
Secularisation involved the abolition of religious courts and schools, the adoption of a purely secular system of family law, the substitution of the Latin alphabet for the Arabic in writing Turkish, the adoption of the Gregorian calendar, the replacement of Friday by Sunday as the weekly holiday, the adoption of surnames, the abolition of the wearing of the fez or the veil and the wearing of clerical garb outside places of worship. After the death of Ataturk in 1938, his closest associate Ismet Inönü was elected president. During World War II, Turkey clung to neutrality and only joined forces with the Allied Powers towards the end of the war. After 1947, it received extensive military aid and economic assistance from the USA. During the post-Ataturk period the conservative Islamic institutions gradually clawed back special privileges and status, religious schools, reinstatement of Arabic and radio readings of the

Koran. Under continued pressure the government gradually relaxed the secularist policies of pure Kemalism. The years 1958-60 saw the economy rapidly worsening and unemployment rising. This led to a bloodless coup carried out by officers and cadets from the Ankara and Istanbul war colleges. A 38-member National Unity Committee was established. After the military coup of 1960, a new constitution was drafted and approved in a referendum in 1961. It provided for a bi-cameral parliament, proportional representation, a constitutional court and a president elected jointly by the Senate and National Assembly. In 1971, martial law was required to restore order and in 1973 the army retreated to their barracks again. But governments continued to depend on minor party coalitions and in 1980 military intervention was again required to restore order in the form of a bloodless coup. The army feared an Islamic revolution along the lines of the Iranian Revolution of 1978-79 and the possible spread of a Kurdish uprising. A new constitution was drafted based on the 1958 French constitution: a strong president (seven-year term) who appoints a prime minister, senior judges and a unicameral parliament that could be dismissed by the president. The political parties, the press and trade unions were subjected to stringent control. During the next decade the country flourished under free-market policies and growing foreign trade. After the 1980s, Turkey experienced growing tension around the position of its Kurdish minority. Several groups emerged espousing demands ranging from freedom of cultural expression to outright independence. The most important of these groups was the Kurdistan Workers’ Party (Partiya Karkeran Kurdistan – PKK). The PKK sought an independent Kurdish state or, failing that, full autonomy. The PKK received support from Kurds living abroad and in neighbouring countries (Iran, Iraq and Syria).
Kurdish political parties remained forbidden and the government relied on military suppression and martial law. Thousands of persons were killed and Turkish troops attacked Kurdish safe havens in Iraq. During the 1980s and 1990s, Islamic groups increased their influence in Turkish public life: changes in dress, segregation of sexes, growth of Islamic schools and banks and support for Sufi orders. This upsurge was also reflected in the first decade of the 21st century when the Justice and Development Party (AKP) led by Erdogan swept into power. Turkey refused to grant transit through its territory to the US military during the Iraq War. A major source of tension between Turkish and Armenian communities (and sympathisers of the latter elsewhere in the world) is the early 20th century treatment of Armenians at the hands of the Ottoman Empire. Thousands of secularist protesters also showed unease about Erdogan’s Islamic roots and his wife opting to wear the headscarf. The military also issued a memorandum on the internet criticising the rising role of Islamists in the government. Since 2007, the Turkish armed forces have conducted a series of military strikes across the Iraqi border on Kurdish targets. In 2008 the parliament voted to amend Turkey’s constitution, eliminating the ban on wearing the headscarf on university campuses. This aggravated a long-standing fault line within Turkish society. It is clear that the pendulum has swung back in favour of the Islamist fundamentalists.

Iran Human habitation in Iran dates back to some 100,000 years ago, but recorded history began with the Elamites about 3000 BC. The Medes who flourished from 728 BC were overthrown in 550 BC by the Persians who were in turn conquered by Alexander the Great in the 4th century BC. Arab Muslim armies conquered the Sassanid (Persian) Empire in the mid-7th century. Subsequently the area was governed by a succession of Safavid and Qajar dynasties until the 19th century when the country was first controlled by the Russians and then by the British. Reza Khan seized power in a coup in 1921. His son, Mohammad Reza Shah Pahlavi, alienated religious leaders with a programme of modernisation and Westernisation and was overthrown in 1979. Shi’ite cleric Ruhollah Khomeini set up an Islamic Republic in 1979 and started a process of suppressing Western influence.

Today Iran is an Islamic republic with one legislative house. The head of state and government is an elected president, but supreme authority rests with the rahbar, a ranking cleric jurist. The capital is Tehran and the 2008 population estimate stood at 72,269,000. Persians constitute the largest ethnic group. Other ethnic groups include Azerbaijanis, Kurds, Lurs, Bakhtiari and Baloch. The predominant religion is the Shi’ite version of Islam with small pockets of Zoroastrianism. Most of Iran’s surface area consists of deserts and other wasteland. Only about 10 percent of the country is arable and about 25 percent suitable for grazing. Its rich petroleum reserves account for about 10 percent of the world’s known reserves and are today the mainstay of its economy. When Islamic forces combined in 1979 to overthrow the regime of the Shah, Iran appeared to return into the reactionary fold of the Shi’ite branch of Islam. The Shah, probably naively, tried to force his version of secular modernity on a complex, traditional and devout society. It took Iranian society another 30 years to 2009 to spawn a new eruption of popular reformist protest in Tehran. It ended with a crackdown by the forces of Mahmoud Ahmadinejad with the blessing of Iran’s supreme leader Ayatollah Ali Khamenei. Subsequently, hundreds of people including academics, journalists and former officials were incarcerated and accused of conspiracies to foment a secularist overthrow of the Islamic state. The possibility of a regime change from within remained as remote as ever. The stability of the regime is based on the dominant role played by the Islamic Revolutionary Guards Corps (IRGC), a force of some 120,000 essentially focused on “internal threats”. Ahmadinejad, himself an ex-guardsman, is strongly supported by the hard-line faction centred on the IRGC embracing a network of former officers and like-minded men in other security branches.
Despite outrage over the post-electoral crackdown, this faction escalated its offensive against dissent and further consolidated its hold over Iran’s politics and economy. Subsequent to the 2009 election, state television broadcasts showed a series of show trials of prominent reformists. With dramatic prosecutor’s accusations and defendants’ confessions, the outcome was to destroy the reformist opposition and to purge powerful centrists too. As actors in an alleged plot to discredit the June 2009 elections, they have been used as scapegoats to ban opposition parties outright and to jail their leaders. The IRGC cronies fill most of the key cabinet posts (intelligence and oil ministries). The IRGC controls the 70 percent of Iran’s economy that is state run, with stakes in everything from dental and eye clinics to car factories and construction firms. Even “privatised” assets fall into the hands of ex-guardsmen or close friends. A special “privatisation” agency has been set up to “safeguard” the process. During his first term, Ahmadinejad steered billions in uncontested oil, gas and large-scale infrastructure contracts to the IRGC. In 2006 alone, the IRGC’s main construction firm, Khatam al-Anbya, received $7 billion to develop gas and oil fields and for the refurbishment of the Tehran metro system. The IRGC is also widely rumoured to control a near monopoly over the smuggling of alcohol, cigarettes and satellite dishes. They appear to have become a mafia abusing their access to key points of governmental decision making and exploiting their intelligence capabilities to spy on competitors. (See The Economist, August 29th, 2009, p.40) On various platforms around the world, Ahmadinejad expressed his intention to provoke a “clash of civilisations” in which the Muslim world led by Iran takes on the “infidel” West led by the United States and defeats it in a protracted contest. In Ahmadinejad’s analysis, the rising “super power” has decisive advantages over the infidel.
Islam has four times as many young men of fighting age as the West with its ageing populations. Hundreds of millions of Muslim ghazis (holy raiders) are keen to become martyrs while the infidel youths, loving life and fearing death, hate to fight. Islam also has 80 percent of the world’s oil reserves and so controls the lifeblood of the infidel. The USA, the only infidel power still capable of fighting, is hated by many nations – albeit instigated by misinformation. According to Ahmadinejad’s strategy, Iran should wait out the USA’s decline and in the interim develop its own nuclear strategy, thus matching the only advantage the infidel enjoys. Further components in its strategy are to build its outer defences in

Syria, Lebanon, Iraq, Afghanistan and Pakistan. These strategies should be supported by strengthening Iran’s network of Shia organisations in Bahrain, Kuwait, Saudi Arabia, Pakistan and Yemen. Close contact should also be resumed with Sunni fundamentalist groups in Turkey, Egypt, Algeria and Morocco. Shia boys should be taught to cultivate two qualities: the first is entezer, the capacity patiently to wait until the time for action is right. The second is taajil, devising actions needed to hasten the process. As soon as the infidel loses its nuclear advantage, it could be worn down in a long, low-intensity war at the end of which surrender to Islam would appear to be the least bad of options. He believes Americans are impatient and run away at the first sight of a setback. Muslims, in contrast, know how to be patient. That is why they have been able to weave carpets for thousands of years. The Israeli Prime Minister, Benjamin Netanyahu, has described the Iranian regime as a “... messianic apocalyptic cult”. In truth, Iran is not that easy to read. It is a self-proclaimed Islamic theocracy. Its leaders have repeatedly vowed to “wipe Israel off the map” and the country appears to be moving relentlessly closer to the point where it could build a nuclear bomb. Containment of a nuclear Iran seems to be a matter of strategic urgency. The mere possession of a nuclear capability might encourage its regime to adopt a more aggressive foreign policy in Iraq, Lebanon, the Palestinian territories and even in Afghanistan and Pakistan. (See The Economist, July 21st, 2007, “Special Report on Iran”, pp.2-16)

Afghanistan The area that is today Afghanistan was originally part of the Persian Achaemenian Empire in the 6th century BC, ruled by kings such as Darius and Xerxes. The Empire extended as far west as Macedonia and Libya, in the north to the Caucasus Mountains and the Aral Sea and to the Persian Gulf and the Arabian Desert in the south. In the 4th century BC it was conquered by Alexander the Great. Islam entered around 870 AD. Subsequently the area that is today called Afghanistan was controlled by the Mughal Empire of India and then by the Safavid Empire of Persia. Its current boundaries were drawn after Britain conquered the area in the 19th century. From the 1930s the country had a reasonably stable monarchy but it was overthrown in the 1970s by Marxist revolutionaries. Marxist reforms sparked rebellion amongst the local tribes and Soviet troops invaded in 1979. Eventually the Afghan guerrillas prevailed and the Soviets withdrew in 1989 when their involvement became too costly. In 1992 rebel factions overthrew the government and established an Islamic republic. In 1996 the Taliban militia took power in Kabul and enforced a harsh fundamentalist Islamic order. Osama bin Laden’s September 11 al-Qaeda attacks in 2001 led to military conflict with the USA and allied nations. It was followed by the overthrow of the Taliban and the establishment of a volatile interim government under Hamid Karzai. Afghanistan has three distinctive regions. The northern plains are the major agricultural area. The south-western plateau consists primarily of desert and semi-arid landscape. The more densely populated highlands cover the central parts of the country. After fighting on the frontiers of British India against Afghan tribesmen, Winston Churchill wrote of the Pashtuns: “To the ferocity of the Zulu are added the craft of the Redskin and the marksmanship of the Boer”. Little seems to have changed.
Several problems stand in the way of creating a stable society in Afghanistan. The first is that the forces of local disintegration are greater than the forces of national integration. Tribal competition and conflict based on ethnic differences are a major source of internal cleavage. Real power lies with the “warlords”. Vengeance is justice. About 40 percent of the people belong to the Pashtun ethnic group; other ethnic groups include Tajiks, Uzbeks and Hazara. The religion is mainly Sunni Muslim, but Zoroastrianism is also present. The southern and south-eastern areas are the ideological heartlands of the Taliban. The second problem is the insecurity that prevents reconstruction efforts. Unless more secure conditions are created, nation-building activities such as road building and the provision of housing and school facilities cannot be undertaken. Thirdly, the poverty at the community level is overwhelming.

About 80 percent of Afghans depend on what they can grow, but the country lacks water and cultivable land. There are no noteworthy irrigation systems. Three-quarters of the world’s opium and nearly all of Europe’s heroin originate in Afghanistan. Pashtun traders move the opium. They lock in farmers by paying advances on next year’s harvest. Drug money (estimated at a third of GDP) appears to be funding the terrorism in southern Afghanistan. Planting poppies is the surest way of generating income. The only alternative source of income for the 23 million Afghans is foreign aid. Afghanistan is a highly fractured and complicated country that verges on being a failed state. Without military and financial support from the USA and its allies ($32 billion since 2001), the current Afghan regime would revert to Taliban control. Sustaining a stable democratic regime would require much development in the social, economic and political spheres for generations.

Central Asian Republics The central region of Asia extends from the Caspian Sea to the border of China in the east. In the north it is bounded by Russia and in the south by Iran, Afghanistan and China. The region consists of Kazakhstan, Uzbekistan, Tajikistan, Kyrgyzstan and Turkmenistan, with a total combined population of around 54 million of whom around 90 percent are Muslim. About 60 percent of this area consists of desert land, except for the vast grassy steppes along the margins above the mountain ranges to the south and east. The scarcity of water has led to a very uneven population distribution, largely concentrated along the fertile river banks in the south-east. The various ethnic groups (Uzbek, Kazakh, Tajik, Turkmen and Kyrgyz) speak languages related to Turkish, with the majority adherent to the Sunnite branch of Islam. Under Soviet rule the area was often used for nuclear weapons testing and it supplied most of the USSR’s cotton, coal and other minerals. Human occupation of Central Asia dates back 35,000 years to the late Pleistocene Epoch. The Uighurs were amongst the earliest Turkic peoples in the area. The region was gradually Islamised in the 11th-15th centuries. From the 13th century the area was ruled by the Mongols until it was conquered by the Russian tsars in the 18th and 19th centuries. After the Communist Revolution of 1917, the area was divided into five Soviet socialist republics of the USSR and brought under the Soviet system of central planning. When the Soviet Union collapsed in 1991 they became sovereign independent nations.

Pakistan The total population of Pakistan is estimated at around 154 million, living in an area of around 800,000 square kilometres. The population is a complex mix of indigenous peoples who have been affected by successive waves of migrations of Aryans, Persians, Greeks, Pashtuns, Mughals and Arabs. The languages spoken are Urdu (official), Punjabi, Pashto, Sindhi and Baluchi. The official religion is Islam, predominantly Sunni, but there are also pockets of Christians and Hindus. The country is considered to consist of four regions: the northern mountains (Himalayas), the Baluchistan Plateau, the Indus Plain and the desert areas. The mixed economy is largely based on agriculture, light industries and services. The first Muslim conquests started in the 8th century. Subsequently the area was controlled by a long succession of Muslim dynasties, most notably the Mughal dynasty, which started in 1526 and lasted until British India started sidelining the Mughals after 1757. Pakistan as a “nation state” was born of expediency. At the end of the Second World War, British colonialists were under pressure from their American allies to get out of India. The Muslim League, led by Mohammed Ali Jinnah, had co-operated with Hindu nationalists in the movement against Britain, but sought a separate Islamic state. The leaders of India’s Congress Party, despite the Gandhian ideal of a Hindu-Muslim brotherhood, were anxious to achieve Indian independence. The product of these diverse motivations was independence for India and a Muslim nation of Pakistan, divided into two territories separated by more than a thousand miles of Indian land. British India was partitioned into two independent “dominions” to be

known respectively as India and Pakistan. Pakistan was to consist of two parts: West Pakistan and East Pakistan. Although most of the people in both East and West Pakistan were Muslim, the population, language and levels of modernisation in the two divisions contrasted sharply. The East had 55 percent of the population but received less from national expenditures and less from foreign aid than the West. Easterners had a lower standard of living, although the East generated a major portion of Pakistan’s foreign currency earnings. By 1967-68 the average per capita income of East Pakistan was 62 percent of that of West Pakistan. People from the East were mostly Bengali, whereas the dominant group in the West was Punjabi. The Bengali-speaking East was led by the Awami League, whereas the Urdu-speaking West held sway in the military regime of Field Marshal Ayub Khan. Ethnic polarisation was pervasive, and when in March 1971 West Pakistani troops descended upon the East to quash Bengali separatism, the estimated number of Bengalis killed exceeded 500,000. Millions fled into India. On December 16th, 1971, Pakistan was formally split into two separate countries: East Pakistan became Bangladesh and West Pakistan continued to be called Pakistan. The Kashmir region remained a disputed territory between Pakistan and India, with tensions resulting in sporadic military clashes. Many Afghan refugees migrated to Pakistan during the Soviet-Afghan war in the 1980s and remained there during the Taliban and post-Taliban periods. Political instability led to an army coup in 1999. General Pervez Musharraf became Pakistan’s president. The strongest opposition to Musharraf was concentrated in the hands of Fazlur Rehman, head of the Muttahida Majlis-e-Amal (MMA), Pakistan’s powerful coalition of Islamic parties. Since 2001 Pakistan’s army has lost many soldiers hunting “terrorists” in the Federally Administered Tribal Areas (FATA) along the Afghan border. 
The USA provided billions of dollars in military aid to assist Musharraf’s efforts to improve his military capability to control the militant insurgency. The boundary between Pakistan and Afghanistan is based on the “Durand Line”, drawn in British colonial times. This borderline cuts the Pashtun tribe in two: one part living in Afghanistan and the other in Baluchistan, a province of Pakistan. Tribesmen move freely across the border, making it impossible to identify “insurgents”. Many tribesmen on both sides of the border are Taliban supporters. The colonial-era model of governance on both sides of the border is based on maliks (tribal leaders) who mediate with the central administration through political agents. These interlocutors are often ulema (religious scholars) who are amongst the most active militants. Musharraf described these interlocutors as charasi (hashish-smoking) Taliban, i.e. thugs using the Taliban’s mantle. Most interlocutors run madrassas across the frontier, enabling them to be used as Taliban recruitment centres. It would seem that the Pashtun-dominated Taliban has taken over the role of al-Qaeda and has taken root along the Durand Line frontier. The potential instability along the border of Pakistan makes it the most dangerous hot-spot in South Asia.

Bangladesh Bangladesh lies on the northern coast of the Bay of Bengal. It is surrounded by India, with a small common border with Myanmar (Burma) in the south-east. The country is a low-lying riverine land traversed by the many branches and tributaries of the Ganges and Brahmaputra rivers. Tropical monsoons and frequent floods and cyclones annually inflict heavy damage in the delta region. The earliest reference to the region was to a kingdom called Banga (around 1000 BC). Buddhist rulers held sway in the region for centuries, but from the 13th century Bengal came under Muslim rule and the majority of the East Bengalis converted to Islam; by the 16th century it formed part of the Mughal Empire. Bengal was ruled by British India from 1757 until Britain withdrew in 1947. Pakistan was then founded out of the two predominantly Muslim regions of the Indian sub-continent. The area that was East Bengal became the part of Pakistan called East Pakistan.

Tension between East and West Pakistan existed from the outset because of their vast geographical, economic and cultural differences. East Pakistan’s Awami League, a political party founded by the Bengali nationalist Sheikh Mujibur Rahman in 1949, sought independence from West Pakistan. Although 56 percent of the population resided in East Pakistan, the West held the lion’s share of political and economic power. In 1970 the East Pakistanis secured a majority of the seats in the national assembly. President Yahya Khan postponed the opening of the national assembly in an attempt to circumvent East Pakistan’s demand for greater autonomy. As a consequence, East Pakistan seceded and the independent state of Bangladesh was proclaimed on March 26th, 1971. Civil war erupted and, with the help of Indian troops in the last few weeks of the war, East Pakistan defeated West Pakistan on December 16th, 1971. An estimated 1 million Bengalis were killed in the fighting or later slaughtered. Ten million more took refuge in India. In February 1974, Pakistan agreed to recognise the independent state of Bangladesh. Founding President Sheikh Mujibur was assassinated in 1975, as was a later president, Zia ur-Rahman. Subsequently the army chief took control in a bloodless coup, but was forced to resign. Thereafter a succession of prime ministers governed in the 1990s. In 2009 the Bangladesh population was estimated at 156 million living on a land area of 133,911 square kilometres (just over half the size of the UK), giving it an extremely high density of about 1,165 people per square kilometre. Its per capita income is amongst the lowest of all countries in the world.

Indonesia Indonesia comprises some 17,500 islands, of which 7,000 are uninhabited, covers 1,860,360 square kilometres and carries a population of around 232 million, divided into some 300 ethnic groups speaking nearly as many languages. The predominant religion is Islam, but there are also significant pockets of Christians, Hindus and traditional beliefs. The Indonesian archipelago stretches 5,100 kilometres from west to east. More than half of the population live on two major islands, Sumatra and Java. The other populated islands include Bali, Lombok, Madura, Borneo, Celebes, the Moluccas and the western portions of Timor and New Guinea. The islands are characterised by rugged volcanic mountains and tropical rainforests. The area is geologically unstable; 20 percent of the land is arable and rice is the staple crop. Petroleum, natural gas, coal, timber products, garments and rubber are major exports. Indonesia is a republic with a two-chamber legislature and an elected president. Indian traders brought Hindu and Buddhist influences into the country, and Muslim traders later brought Islam. Today Indonesia has the largest Muslim population in the world. European influence began in the 16th century when the Dutch East India Company established a major trading post in Java. The Dutch held control until the Japanese invasion in 1942. After World War II, Sukarno declared Indonesia’s independence in 1945, retaining a nominal union with the Netherlands until 1954. An alleged coup attempt in 1965 led to the deaths of hundreds of thousands of people claimed to be communists. By 1968 General Suharto had taken power; his regime later forcibly incorporated East Timor into Indonesia with much loss of life. In the 1990s the country was beset by political and economic turmoil. Suharto was forced from power in 1998. Since 1998, Indonesia, the world’s fourth most populous country and the largest with a Muslim majority, has undergone a profound political transition. 
Its political system has been overhauled and the foundations for a better system of governance have been put in place. In the wake of ten years of dictatorship under President Sukarno and more than three decades of iron rule by President Suharto, the country’s political institutions were weak. Indonesia’s democratic transformation, known as Reformasi, began in 1998. A new national parliament was chosen in 1999 in the first openly contested elections and Abdurrahman Wahid became president through an indirect vote. In mid-2001, Wahid was forced out of office because of his erratic leadership and Megawati Sukarnoputri – Sukarno’s eldest daughter – ascended to the presidency. In the elections of April 5th, 2004, the secular Democratic Party (PD), led by former general Susilo Bambang Yudhoyono (SBY), and the Prosperous Justice Party (PKS), an urban-based Islamic party campaigning on a platform of “clean government”, paved the way for the direct

election of SBY as president for a five-year term. The outcome of these elections reaffirmed the potential strength of moderate Islam in Indonesia. The keystone of Indonesia’s political system, rooted in the constitution of 1945, is a strong presidency. But in view of past experience, a set of constitutional amendments adopted between 1999 and 2003 injected critical checks and balances. The 1945 constitution previously allowed the president to govern by decree. Political life was dominated by the military, and the parliament (DPR) routinely rubber-stamped legislation put forward by the president’s cabinet. The amendments introduced a senate of 128 directly elected members representing each of Indonesia’s 32 provinces. The parliament and the senate, sitting jointly as the MPR, are now empowered to amend the constitution, to swear in the president and vice-president and to dismiss them for specific violations. The amendments also created a constitutional court to review laws and resolve electoral disputes, provided for a “general election commission” and set out basic human rights protections. Public life in Indonesia is closely linked to its underlying “primordial” societal structure: the rural peasantry (abangan), the secular aristocracy (priyayi) and the Islamic clerics (santri). These structural elements represent influential “currents” or societal forces that permeate and underlie predominant trends. Every political movement relates to the “primordial” forces in its own peculiar way: exploiting its symbols, dispensing patronage, mobilising support. Golkar, the centrist bureaucratic party nurtured by Suharto and entrenched in the large military sector, dominates the political scene together with Megawati’s secular nationalist party (PDI-P). But the Reformasi introduced a trend towards election outcomes being defined more by personalities than by issues. 
The Indonesian armed forces (known by their acronym TNI) still cast a long shadow over the political life of Indonesia. They remain the dominant force in government policy making, with an effective veto over important decisions. The TNI is given credit for its role in winning independence from the Netherlands in 1949 and for the stability and economic growth of the Suharto era, but as an institution it is said to be less prominent than two decades ago. Under Reformasi the police force was separated from the armed forces. According to Lex Rieffel, a Visiting Fellow at the Brookings Institution, a large amount of off-budget funds, estimated to be twice as large as its formal budget allocation, goes to the military. Since independence the TNI has maintained a command structure that parallels the civilian government down to the village level. Although it stresses the importance of defence against external enemies, it deployed its forces in the past mainly to combat domestic insurgencies, e.g. in Aceh and Papua. Since the birth of the nation, military units have operated a mix of legitimate and illegal businesses. In the 1970s and 1980s a sprawling network of military foundations and co-operatives sprang up, running a range of murky activities and drawing on questionable sources of funding. After the mid-1980s, the military’s economic power declined as the network of businesses owned and controlled by Suharto’s relatives and friends gained prominence. The vacuum in the economy resulting from the collapse of Suharto Inc. has been filled by thousands of small- and medium-sized enterprises, not by the military. Not a single successful military-operated business can be found in the top ranks of Indonesian companies. Hopefully a strong undercurrent of opposition to military domination represents a credible check on creeping militarism. The 1997-98 financial crisis virtually shipwrecked the Indonesian economy. 
Relations with the IMF and the consortium of donor countries were strained. But since the “Dream Team” of economic ministers appointed by Megawati was less political and more technocratic than its predecessors, the turnaround has been remarkable. Co-ordinating Minister Dorodjatun Kuntjoro-jakti and Finance Minister Boediono managed to set the stage for a successful transition during 2004 out of IMF balance-of-payments financing and debt relief. This step signalled the end of the recovery from the 1997-98 financial crisis and a return to normal relations with official and private sources of external financing. All of Indonesia’s macro-economic indicators improved from mid-2001 to 2004. Inflation declined from 12 percent to 5 percent, interest rates dropped from 17 percent to 7 percent,

foreign exchange reserves rose while the rupiah appreciated in real effective terms. The ratio of government debt to GDP declined from 100 percent in 2000 to less than 70 percent in 2004. The key to success was fiscal discipline. Indonesia faces several obstacles to realising its economic potential. First is the diverse array of ethnic groups scattered over 6,000 islands. The country needs to develop the social consensus to achieve reforms that produce sustained rapid growth. Second is the widespread resentment of the commercial power of the Indonesian-Chinese business community, although this community is the source of investment in job-creating economic growth. Third is escaping the “curse” of abundant natural resources, which hampers balanced growth of all economic sectors. Fourth is the high level of unemployment: at least 30 percent of the labour force (40 million people) remains underemployed. Indonesia needs to overcome the various impediments to domestic and foreign investment. Most critical are the unreliable judicial system, a weak banking system and corruption. Indonesia needs to build on a vibrant press, dynamic non-governmental organisations and improved standards of public accountability to reinforce its reform process and business confidence. Last, but not least, Indonesia needs to step up its own internal war on terrorism. This requires preventing foreign terrorist organisations from operating in Indonesia and keeping its own domestic Muslim fanatics under control. Hundreds of thousands of unarmed civilians were slaughtered by ideologically driven mobs during the mayhem following the failed September 1965 coup that led to the demise of the Sukarno regime. The Bali bombings in 2002 and the hotel bombings in 2003 and 2009 may have been isolated incidents, but Jamaah Islamiyah is an example of religious fanaticism that can spread like a forest fire. Indonesia’s first experiments with constitutional democracy failed. 
The second round of experiments, which began with the Reformasi of 1998, is still in progress. It still needs to deliver more employment opportunities and less corruption, and to stamp out senseless violence, to ensure that the political transformation is not stalled or allowed to slip into reverse. (See Lex Rieffel (2004), “Indonesia’s Quiet Revolution”, Foreign Affairs, Vol.83, No.5, pp.98-110)

Islamic Statehood

All the countries of the Islamic world mentioned in this survey are recognised as sovereign entities in which religion, combined with other social characteristics such as race, language or culture, is explicitly or implicitly recognised as the basis of the state.

Closed Societies All Islamic states are closed societies. With the exception of Dubai, they do not readily allow or attract immigrants, but they do generate millions of emigrants to Western countries. Only three Islamic countries have in recent years experimented with constitutional democracy: Turkey, Indonesia and Lebanon. In both Turkey and Indonesia the preservation of a fair degree of secular civil rights depends heavily on the intervention of the military. Lebanon is the one country in the entire Islamic world with a significant experience of democratic political life. It has suffered not for its faults but for its merits – the freedom and openness that external forces have exploited with devastating effect. Syria was finally induced to withdraw its forces in 2005 and the Palestinian leadership has been withdrawn. But Hezbollah, trained and financed by Iran, still retains a powerful presence. Lebanese democracy is likely to remain volatile as long as the country is surrounded by meddlesome Islamic authoritarian regimes.

Creations of the Colonial Era In the case of Indonesia, the mere existence of the state as a political entity is a product of its colonial history: its pre-colonial political systems were much smaller in scale, consisting of a number of traditional monarchies which were absorbed within the new political system. Some Islamic states are examples of traditional states which never fell under prolonged colonial

occupation: Afghanistan, Iran and Yemen. In Iran there have been brief occupations and some border alteration, but these did not involve the imposition of European colonial rule – which explains the delayed impact of social change upon customary institutions. In some cases Islamic states were created artificially following the break-up of colonial empires, such as the Hashemite kingdoms of Jordan and Iraq – both more or less fabricated to reward those who rendered wartime services to the British Empire. Similarly, petty monarchs were awarded (or permitted to seize) territory ranging beyond the ambit of any justifiable claims (the Saudis in Saudi Arabia and the Senussi leaders in Libya). In the cases of Pakistan and Bangladesh, religion combined with ethnic factors has been the organising foundation of the emerging political entities. The colonial era left many Islamic states with territorial boundaries arbitrarily dissecting religious or ethnic communities. An obvious example of trans-territorial ethnicity is the case of the Kurds. They are situated as an ethnic-linguistic minority in each of four distinct states: Syria, Iraq, Turkey and Iran. Their aspiration to an independent Kurdish republic has brought them into conflict with ruling groups in all four sovereignties. Kurdish self-consciousness has been stimulated by the development of militant nationalism among dominant groups in the four states. Kurdish cultural pride is traced back 4,200 years to the ancient Medes. The great majority of Kurds are Sunnite Muslims, a point of particular importance for those in Iran, who confront a Shi’ite theocracy. Under Saddam Hussein the persecution of Kurds led to mass killings of many thousands. The Turkish army has often conducted raids into Kurdish territory in Iraq. Thus religion, a shared sense of persecution and resentment at minority status, a sense of historic uniqueness and the impact of intensified nationalism in surrounding areas brought about the Kurdish dilemma. 
Similarly, the creation of the state of Israel with the support of the Western world (particularly the USA and the UK) led to the displacement of millions of Palestinians. The neglect of the legitimate territorial claims of the Palestinians has created one of the most volatile flashpoints in the modern world. The British colonial office must take responsibility for drawing many arbitrary lines during the colonial era. The arbitrary territorial demarcation of Iraq, with its eastern boundary dissecting the Shia community, created an Iraqi political entity with a virtually unmanageable conflict potential. A similar artificial situation exists in Sudan. The inhabitants of the western region of Darfur as well as the “blacker-skinned” Christian and animist tribes in the south have little in common with the Muslim Arabs in the north. Millions of westerners and southerners have been killed, and an estimated 9 million people depended on food handouts from abroad by 2010. Also on the western boundary of Pakistan, the colonial-drawn Durand Line bisects the Pashtun tribes. Afghanistan’s poverty is exacerbated by Taliban fundamentalism coupled with the fluidity of the Pashtun tribes moving freely across the Durand Line between Pakistan and Afghanistan.

Islam’s Sacred Mission Throughout modern history, religion has been a major foundation for identity and cohesion. The world’s great religions have all provided significant politically relevant affiliations: Christianity, Buddhism, Hinduism and Judaism. The profound hold which religion is capable of exerting upon man’s emotions and imagination renders these cleavages peculiarly intractable. The common bond of religion can produce both militant cultural identity and a sense of sacred mission. The role of religion takes on an added importance where religion regards the sacred and secular realms as inseparable – as is the case in Islamic countries. Coexistence of different religious or secular communities within Islamic states is particularly difficult if not impossible. Religious parties see it as their sacred duty to suppress and crush what they see as anti-religious, anti-Islamic movements.


Islamic Finance

The roots of Islamic finance stretch back 14 centuries. It rests on the application of Islamic law, or sharia, whose primary sources are the Koran and the sayings of the prophet Muhammad. Sharia emphasises justice and partnership. In the world of finance that translates into a ban on speculation (gharar) and on the charging of interest (riba). The idea of a lender levying a straight interest charge, regardless of how the underlying assets fare in an uncertain world, offends against these principles. Some Muslims dispute this, arguing that the literature of sharia covering business practices is small and that terms such as “usury” and “speculation” are open to interpretation. Companies that operate in immoral industries, such as gambling or pornography, are considered out of bounds. So too are companies that have too much borrowing, defined as debt totalling more than 33 percent of the firm’s stock-market value. Such criteria mean that sharia-compliant investors steer clear of highly leveraged conventional banks. Islamic financiers are confident that they can create their own versions of the useful parts of conventional finance. What is allowed or not allowed under sharia is decided by boards of scholars. They act as a kind of spiritual rating agency, working closely with lawyers and bankers to create instruments and to structure transactions that meet the needs of the market without offending the requirements of the Islamic faith. The distinctions between conventional finance and Islamic finance may seem contrived. An options contract to buy a security at a set price at a date three months hence is frowned upon as speculation. But a contract to buy the same security at the same price, with 5 percent of the payment taken up front and the balance taken in three months upon delivery, is sharia compliant. Who is the ultimate authority on sharia compliance? There are no simple answers, and interpretations diverge. 
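The two screens just described – an exclusion list of forbidden industries and a cap on debt at 33 percent of stock-market value – can be sketched as a simple filter. This is an illustrative reconstruction, not any board’s actual screening standard; the function and field names are hypothetical.

```python
# Illustrative sharia equity screen: the industry list and the 33 percent
# debt-to-market-value threshold follow the text; everything else is a
# hypothetical sketch, not an actual screening standard.

EXCLUDED_INDUSTRIES = {"gambling", "pornography"}
MAX_DEBT_TO_MARKET_VALUE = 0.33  # debt may not exceed 33% of market value


def is_sharia_compliant(industry: str, total_debt: float, market_value: float) -> bool:
    """Return True if a company passes both screens described in the text."""
    if industry.lower() in EXCLUDED_INDUSTRIES:
        return False  # immoral industries are out of bounds
    if market_value <= 0:
        raise ValueError("market value must be positive")
    return total_debt / market_value <= MAX_DEBT_TO_MARKET_VALUE


# A casino operator fails the industry screen outright:
print(is_sharia_compliant("gambling", 0.0, 1_000.0))   # False
# A highly leveraged conventional bank fails the debt screen:
print(is_sharia_compliant("banking", 900.0, 1_000.0))  # False
# A lightly geared retailer passes both:
print(is_sharia_compliant("retail", 200.0, 1_000.0))   # True
```

Note that the leverage test alone is enough to exclude conventional banks, as the text observes, without naming banking in the industry list.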
Malaysia has tackled the problem by creating a national sharia board. The Accounting and Auditing Organisation for Islamic Financial Institutions (AAOIFI) in Bahrain is attempting to lay down the ground rules for common standards. But differences between national jurisdictions, e.g. between pious Saudi Arabia and more liberal Malaysia, are likely to remain. Both countries feature in the top three markets for Islamic finance, measured by the quantity of sharia-compliant assets. In 2007 Iran stood at the top with $154.6 billion, followed by Saudi Arabia with $69.4 billion and Malaysia with $65.1 billion. The Gulf States, awash with liquidity and with a roster of huge infrastructure projects to finance, are seen as the most dynamic markets. Britain is the most developed Western centre. France, with a much larger Muslim population, is working to close the gap. Islamic banks are opening their doors across the Gulf and sharia-compliant hedge funds have been launched. Western law firms and banks are expanding their Islamic-finance teams. Indonesia announced in 2008 that it would issue the nation’s first sovereign sukuk (Islamic bond). The British government, keen on retaining its lead as the West’s front-running centre for Islamic finance, has also taken steps to issue a short-term sovereign sukuk. Compared to the highly questionable American sub-prime lending, the Middle Eastern sovereign wealth funds look attractive. They frown on speculation and support risk sharing. In 2008 the amount of Islamic assets under management stood at around $700 billion, with much scope to grow, provided that their ethical standards are met. Sharia-compliant mortgages are structured in such a way that the lender itself buys the property and then leases it out to the borrower at a price that combines a rental charge and a capital payment. At the end of the mortgage term, when the price of the property has been fully repaid, the house is transferred to the borrower. 
The additional complexity adds to the direct transaction costs, since the property changes hands twice – which makes it liable to double stamp duty. Britain, as can be expected, ironed out this complexity in order to get the business. It is argued that with increasing volumes, economies of scale would cut costs: recycling documentation, using standard templates, streamlining processes. Sharia-compliance is constantly redefined. An example is short selling by hedge funds, which is essentially based on selling something that an investor does not actually own. After several years of

cutting corners and redefining terms, the financial wizards developed a technique called arboon. This technique ensures that investors, in effect, take an equity position in shares before they sell them short. Persons with a knowledge of Islamic law and Western finance, as well as fluency in Arabic and English, have become highly prized members of the boards of banks with Islamic clients. Assets have also become a bottleneck. The ban on speculation means that Islamic transactions must be based on tangible assets, such as commodities, buildings or land. Exotic derivatives are frowned upon. This limits the scope for securitisation that is not backed by sharia-compliant assets. Islamic financiers are concerned about a possible mismatch between the duration of a bank’s liabilities and its assets. The flexibility imposed on Islamic finance by Western financial architects and manipulators holds the danger of sharia being twisted for short-term business purposes. Scholars are paid for their ingenuity in evading strict sharia standards instead of producing functionally sound innovations. Unscrupulous operators could harm the reputation of the entire industry. (See Briefing called “Islamic Finance”, The Economist, September 6th, 2008, pp.72-74)
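The sharia-compliant mortgage structure described above – the lender buys the property and leases it to the borrower, each payment combining a rental charge and a capital payment until the full price is repaid and the house is transferred – can be sketched numerically. This is a minimal illustration assuming equal annual capital instalments and a flat rental rate; the figures are hypothetical round numbers, not a real product.

```python
# Minimal sketch of the sharia-compliant (ijara-style) mortgage described
# in the text. Assumptions (hypothetical): equal annual capital instalments
# and a flat rental rate charged on the lender's outstanding share.

def mortgage_schedule(price: float, annual_rent_rate: float, years: int):
    """Return a list of (rental, capital, outstanding) tuples, one per year."""
    outstanding = price
    capital_per_year = price / years  # equal capital instalments
    schedule = []
    for _ in range(years):
        # Rent is charged on the share the lender still owns, so it shrinks
        # each year as the borrower's equity grows.
        rental = outstanding * annual_rent_rate
        outstanding -= capital_per_year
        schedule.append((rental, capital_per_year, max(outstanding, 0.0)))
    return schedule


sched = mortgage_schedule(price=100_000.0, annual_rent_rate=0.05, years=20)
total_capital = sum(capital for _, capital, _ in sched)
print(round(total_capital, 2))  # capital payments sum to the purchase price
print(round(sched[-1][2], 2))   # nothing outstanding: the house transfers
```

The declining rental component plays the economic role that interest plays in a conventional repayment mortgage, but it is framed as rent on the lender’s remaining share of the asset rather than as a charge on a loan.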

Islamic Politics

A specialist in Islamic Studies at Princeton University, Bernard Lewis, remarked early in 2009 that throughout the Islamic world two opposite trends are competing for ascendancy: Islamic theocracy at one end of the spectrum and secular liberal democracy at the other. The Islamic theocracy movement is currently by far the most prominent. The momentum of secular liberal democracy is sporadic and faces many severe obstacles. The forces representing Islamic theocracy have several obvious advantages. Their messages are cast in religious rather than secular political terms. They express both their critiques and their aspirations in terms that are familiar and easily accepted, unlike those of Western-style democrats. They have access in the mosques to a communications network that bears the authentic stamp of Islam and therefore provides the tools to disseminate propaganda. Secular democratic opposition groups are required by their own ideologies to tolerate the propaganda of their opponents, whereas the religious parties have no such obligation. Rather, it is their sacred duty to crush the anti-religious, anti-Islamic movements. Their diagnosis is that the ills of the Islamic world are all due to infidels and their local dupes and imitators. The remedy is to resume the millennial struggle against the infidels in the West and return to God-given laws and traditions. At the opposite end of the political spectrum are the secular liberal democrats who argue that it is the old ways, represented by degenerate and corrupt power centres, that are crippling the Islamic world. For them the cure is openness and freedom in the economy, society and the state – in genuine democracy. But the road to democracy, and to freedom, is long and difficult with many obstacles along the way. The bulk of the Islamic populations find themselves somewhere between these opposite positions of the political spectrum, depending on the political complexion of the country where they live.
Hence, Islam’s main political arms differ greatly in both tactics and aims: from jihadist militancy against infidels to pragmatic co-existential participation in the democratic process. “Across the Muslim world, a new generation of activist bloggers and preachers is discovering ways to synthesize Islam and modernity”. This claim is made in an essay in Time Magazine (March 30th, 2009, pp.28-32) by Robin Wright. The title of the essay is “Islam’s Soft Revolution”. Is this wishful thinking or an empirically based factual description of current trends? Wright contends that this revolutionary change is more vibrantly Islamic than ever. It is decidedly anti-jihadist and ambivalent about Islamist political parties. Culturally it is deeply conservative, but its aim is to adapt to the 21st century. Politically it rejects secularism and Westernisation, but craves changes compatible with modern global trends: it is more about groping for identity and direction than expressing piety.

According to Wright, the new revolutionaries are synthesizing Koranic values with the ways of life spawned by the internet, satellite television and Facebook. For them Islam is a path to change rather than the goal itself: they are trying to mix modernity and religion. The new Muslim activists are said to be part of a post-9/11 generation, disillusioned with extremists who fail to construct credible alternatives to advance their aims. They seek answers within their own faith and community rather than in the outside world. Wright claims that the “soft revolution” is to be found in hundreds of schools from Turkey to Pakistan. Its themes echo in Palestinian hip-hop, Egyptian Facebook groups and the flurry of Koranic verses text-messaged between students. Telephones are answered by young Egyptians saying “Salaam Alaikum” (Peace be upon you) instead of “Hello”. When discussing everything from the weather to politics, they add the tagline “bi izn Allah” (if God permits). They watch satellite broadcasts of young preachers explaining that you can be a good Muslim and yet enjoy life – exploiting the middle ground between the devout and the liberal. Traditional clerics denounce the televangelists for preaching “easy Islam”, “yuppie Islam”, or even “Western Islam”. Elvis Presley is used as an example to illustrate that life without spirituality is empty. In Egypt several bloggers have emerged within the ranks of the Muslim Brotherhood who argue that Muslims should eschew both Iranian-style theocracy and Western-style democracy. They want a blend, with clerics playing an advisory role in societies, not ruling them. As a consequence Islamic parties are increasingly placed under intense scrutiny. In Turkey a group of scholars at the Kocatepe Mosque in Ankara have initiated a “Hadith Project” to investigate the recorded actions and sayings of the Prophet Muhammad in order to assess their validity and consistency with Koranic scripture, e.g. 
the stoning of adulterers, honour killings and the tradition which says that women are religiously and rationally not complete and of lesser mind. There is a growing awareness about dealing with reason, constitutionalism, science and other big issues that define modern society. The West is no longer the only world view to look up to. Forty years ago, Islamic dress was rare in Egypt. Today more than 80 percent of women are estimated to wear the hijab. Piety alone is not offered as an explanation for the change in dress. It is claimed that the veil is the mark of Egyptian women in their power struggle against the dictatorship of men. The veil gives women more power in a man’s world – it provides protective cover and legitimacy for their campaigns. Many young Arabs are angry at the outside world’s support for corrupt and autocratic regimes such as the Gulf States and Libya. At the top of their gripe-list is the West’s support for Israel and its neglect of the suffering of the Palestinians.

Islam’s Global Networks

Since the 7th century, the word of the Prophet Muhammad was carried out of Arabia by invading Muslim armies and settlers joining their soldiering family members. In some instances local populations took to the road as refugees as the Arabs advanced, leaving their homes invitingly empty. In other instances, the invaders made treaties with townspeople whereby they promised to leave cities intact as long as half the properties were handed over as homes for Arabs and half the churches were handed over for conversion into mosques. In this way, Muslim custom and Arab populations took root in the new Islamic Empire. After the fall of the Ottoman Empire in 1923, the Muslim heartlands were occupied by Britain and France. The Muslim shock at the advance of European colonialism gave rise to a movement called Salafism which reaffirmed the centrality of Muhammad’s spiritual guidance. Salafism, in turn, became allied with the puritan school of Sunni Islam known as Wahhabism, practised by the Saudi clergy. The Muslim Brotherhood was formally established by Hassan al-Banna in 1928 and its nerve centre is believed to be located in Egypt. It is a global fraternity of Sunni Islamist groups with branches in some 70 countries. Although full details are not generally known, it is believed that on joining the Brotherhood, followers are required to take an oath which pledges them to “work for God’s message” and to “believe and trust” in the movement’s leaders. A Brotherhood member is expected, with his comrades’ help, to cultivate certain virtues, including bodily health, a sound mind and punctuality. The practice of working through other movements and fronts has had some spectacular successes and has brought the Brotherhood and its proxies a high degree of influence. The Muslim Brotherhood is said to have millions of adherents all over the world. It believes in participating in any democratic or other process that is available and in taking advantage of the freedom the Western World allows. In some countries the members of the Brotherhood operate under different names and they also work within different groups to spread their ideas. Much of their activity is financed out of Saudi Arabia. An important offshoot of the Brotherhood is the World Assembly of Muslim Youth (WAMY), set up in 1972. Young Muslims, invited from all over the world, have been mentored by this association before returning to countries such as Malaysia, Indonesia and Turkey. Other key institutions co-founded by Brotherhood members are the Muslim Association of Britain (MAB) and the Union of Islamic Organisations of France. The methods and aims of the Brotherhood’s work vary with local circumstances. The stated aim of the Brotherhood is primarily to promote Islam. Introducing sharia law as laid down by the Koran cannot be questioned, but the Brotherhood’s policy is that the process should not be rushed. Sharia can come into being only when the people have freely convinced themselves of its virtues. The Brotherhood is said to be shadowy but not secret. Its leader is an Egyptian, Mehdi Akef, and its spiritual guide is Sheikh Yusuf al-Qaradawi, whose broadcasts and pronouncements on the internet are followed by Muslims around the world. A prominent offshoot of the Muslim Brotherhood in the Middle East is Hamas. 
After winning the Palestinian election contest, Hamas demonstrated its willingness to move beyond jihadist militancy by facing the reality of exercising governmental power and the day-to-day challenges of governmental administration. It had to water down its Islamist fervour by entering policy debates with its secularist, Palestinian-nationalist rivals in the movement. It may even face the challenge of deliberating the pros and cons of a tactical compromise with Israel. In the past the Brotherhood opposed the recognition of Israel’s existence. Al-Qaeda shares common ideological origins with the Muslim Brotherhood. Both have their roots in the anti-secular opposition in Egypt, a conservative reading of Sunni Islam and the wealth and religious zeal of the Saudis. But they differ hugely over politics and tactics. The ideologists of al-Qaeda reject the division of the world into modern states. To them, the only boundaries that matter are between Islam (of which they believe they are the only authentic representatives) and infidels. By contrast the Brotherhood thinking is pragmatic, accepting the reality of national boundaries. The founders of al-Qaeda, Osama bin Laden and Ayman Zawahiri (an Egyptian doctor whose ideological roots lay in the Brotherhood) are focused jihadists intent on keeping the war against “crusaders” and “Jews” alive. One of the most powerful and best connected of the networks that are competing to influence Muslims around the world – especially in places far from Islam’s heartland – is Fethullah Gulen’s Pennsylvania-based movement. As a global force, the Gulenists are especially active in education with more than 500 places of learning in 90 countries. The Gulen movement is rooted in Turkey and is particularly active in Central Asia on the crossroads between Turkey, Iran, Russia and China. The Gulenists claim that they embrace democracy, as a matter of principle, not merely for tactical purposes. 
Key assets of the Gulenist network in Turkey include a university, a newspaper, a raft of business enterprises and a chain of dormitories for students. The network also maintains a close association with the ruling Justice and Development (AK) party in Turkey. Less well known are the complementary organisations devoted not to direct action but to ideological struggle. Of these, the most important has been Hizb ut-Tahrir (HT), or the Party of Liberation, a transnational movement that has served as radical Sunni Islamism’s ideological vanguard. HT is not itself a terrorist organisation, but it can usefully be thought of as a conveyor belt for terrorists. It indoctrinates individuals with radical ideology, priming them for recruitment by more extreme organisations where they can take part in actual operations. By combining fascist rhetoric, Leninist strategy and Western sloganeering with Wahhabi theology, HT has made itself into a real and potent threat that is extremely difficult for liberal societies to counter. HT is composed of secretive cells, with an estimated membership in the hundreds in European countries such as Denmark and up to tens of thousands in Muslim countries. HT is banned in most of the Muslim world as well as in Russia and Germany, but it has been allowed to operate freely elsewhere, most notably in the United Kingdom, where it has played a major role in the radicalisation of disaffected Muslim youth. After it was revealed that the bombers in the 7/7 attacks in the UK were members of an HT splinter group, the British government announced a series of measures to address the threat of Islamist extremism. These included the compiling of lists of suspect web sites, bookshops and organisations as a prelude to the possible deportation of foreign nationals suspected of unacceptable activities. HT web sites offer easily accessible literature and news to Muslims living in Western countries to help them feel part of the global Muslim community. In an article called “Fighting the War of Ideas” (Foreign Affairs, Nov/Dec 2005, pp.68-78), Zeyno Baran, Director of International Security and Energy Programmes at the Nixon Centre in Washington D.C., had the following to say about the challenges facing the Western World: “Taking advantage of the West’s own freedoms of speech, assembly and the like, HT has spread hate-filled anti-Semitic and anti-constitutional ideas and created a fifth column of activists working to undermine the very systems under which they live”. Western governments and societies must find ways to protect themselves not just from terrorism, but also from indirect incitement by militant organisations. Many Westerners do not understand the ideological threat posed by radical Islam and fail to take sensible precautions.

Islam and the West

Muslims have lived in India for many centuries, so that even after the partition of 1947 India still has a very large Muslim population of 156 million – around 13 percent of its total population and often a source of strife and conflict. Russia is another country with a large Muslim component – around 20 million, or 14 percent of its population, even after the Central Asian republics broke away when the USSR collapsed in 1991. Most of the Muslims remaining in Russia have been living there for many generations. In the middle of the 20th century there were very few Muslims in Europe and North America. Since the 1960s, however, growing waves of Muslims arrived in Western countries: some as migrant labourers from Turkey and North Africa, but many others as refugees from Muslim countries, particularly Iraq, Palestine, Pakistan, East Africa and North Africa. European policy makers imported immigrants to fill job shortages and immigrant numbers continued to grow under “family reunion” rules. As a result of this influx many West European societies became “multi-ethnic”. Today immigrants account for about 10 percent of the total population of most European countries and up to 30 percent in some cities, such as Rotterdam and Brussels. For decades policy makers assumed that immigrants would quickly adopt the mores of their host societies, and there are many examples of upward mobility and successful integration of immigrants. But a surprising number of immigrants have proved to be “unmeltable”. Muslims, in particular, have failed to assimilate in large numbers.


The Muslim component in selected Western countries is as follows (2008):

Country                  Number      % of Total
France                   6,405,779   10.0
Netherlands (estimate)   1,000,000   6.0
Belgium                  374,916     3.6
Italy                    1,743,786   3.0
Germany                  3,046,200   3.9
Austria                  344,832     4.2
Canada                   2,300,000   1.9
Denmark                  110,010     2.0
Ireland                  84,064      2.0
Macedonia                620,015     30.0
Sweden                   280,849     3.1
Switzerland              235,738     4.6
United Kingdom           1,650,057   2.7
USA (estimate)           4,000,000   1.4

Source: Population data – 2009 CIA World Fact Book
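Since the percentage column is derived from the Muslim count and the national population, the table can be sanity-checked in a few lines. The sketch below is purely illustrative; the three countries are an arbitrary selection from the table.

```python
# Dividing each Muslim population by its share of the total recovers the
# country's implied total population, which can be compared with known
# national figures. Data taken from the table above.

muslim_population = {            # country: (number, percent of total)
    "France": (6_405_779, 10.0),
    "Germany": (3_046_200, 3.9),
    "United Kingdom": (1_650_057, 2.7),
}

implied_totals = {
    country: number / (percent / 100)
    for country, (number, percent) in muslim_population.items()
}
# France's implied total of roughly 64 million matches its actual
# population around 2008, which supports the table's internal consistency.
```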

Today there are 15 to 17 million Muslims in Europe, and Muslims account for about 50 percent of all new arrivals in Europe. For the most part European countries have bent over backwards to accommodate the sensibilities of the newcomers. The British pensions department has a policy of recognising (with benefits included) “additional spouses”. Immigration departments in Scandinavia adopted flexible interpretations of “family” reunion and of the residential requirements to qualify for social benefits. Demographers estimate that the Muslim population of the European Union’s existing 25 members may, on present trends, double from around 15 to 17 million to around 30 million by 2025. That leaves out EU-applicant Turkey, with an almost entirely Muslim population of around 70 million. It is not clear how many Muslims there are in Europe – or for that matter in Western countries at large. In France, for instance, the secular authorities never ask a religious question on a census form. Current statistics are at best educated guesswork. Extrapolating from differences in birth rates, the figure for France might rise to 20 percent in 2020. But that estimate ignores the possibility that Muslim women may have fewer babies the longer they live in France. The great majority of Muslims in Germany have their origin in Turkey. Several opinion surveys have shown that German Turks seem to be growing more fervent in their attachment to Islam. A rising number think women should cover their heads. In the Netherlands, the Muslim community of around 1 million originated largely from Turkey and Morocco, mostly from the poorest parts of those countries. In both groups, young people usually take spouses from the home country. Thus both groups retain strong links with their homelands. Surveys found, however, that around 37 percent of second-generation Muslims attended mosques less frequently than their parents, but still preferred to marry within their own groups. 
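The doubling projection above implies a definite growth rate, which is easy to check. The calculation below is a back-of-the-envelope sketch under an assumed constant compound growth rate; the 17-year horizon (2008 to 2025) and the implied rate are illustrations, not demographic estimates.

```python
# If a population doubles in n years at a constant annual growth rate r,
# then (1 + r)**n = 2. Two views of the same identity:

import math

def years_to_double(annual_growth_rate):
    """Years for a population to double at a constant growth rate."""
    return math.log(2) / math.log(1 + annual_growth_rate)

# Doubling from ~15m (2008) to ~30m by 2025, i.e. in about 17 years,
# implies a sustained growth rate of 2**(1/17) - 1, roughly 4.2% a year.
implied_rate = 2 ** (1 / 17) - 1
```

A sustained 4.2 percent a year is a very rapid rate for any population, which is one reason such projections are sensitive to the convergence of birth rates noted in the text.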
During his electoral campaign, Pim Fortuyn, who was murdered in 2002, argued that the rise of a fundamentalist subculture in the Netherlands threatened the country’s democratic values and had to be seriously addressed before it was too late. He argued that immigrant communities that refuse to align their values to those of Western democracy are ticking time bombs – and that too much stress has been placed on accommodating different values and faiths.

Most of Britain’s Muslims come from Pakistan and Bangladesh. Only a small proportion of the women go to work, which explains why, statistically, Muslims remain at the bottom of the economic pile – compared to British Hindus and Sikhs. Yet some Muslim sub-groups, such as the Ismailis who came from South Asia via East Africa, have soared ahead, showing that Islam itself is no barrier to economic advancement. According to several surveys, there is a clear trend amongst European Muslims to see Islam increasingly as a symbol or badge of identity. Their faith, rather than their passport or skin colour, is seen as the main thing that defines them. Islam, rather than the country or city where they live, is their true home. European Muslims claim that their experience of the “Islamic diaspora” makes them feel that the umma – worldwide Islam – tugs hardest at their heartstrings. For the same reason as in France, the US statistical services do not ask questions about religion. Hence it is hard to estimate the size of the country’s Muslim population. The guesses range between 3 million and 7 million. American Muslims do not see themselves as being radically at odds with American society. Freedom to practise and preach Islam is protected by the American system. Americans are used to exuberant displays of religiosity, so the daily prostrations of a devout Muslim are less shocking to an American than to a lukewarm, secular European Christian. Hence Americans are less gloomy about Islam than Europeans. America’s free-speech culture offers many opportunities for Muslims to express their opinions. The right to say almost anything on most subjects is deeply entrenched in the American political culture – in contrast to European “political correctness”. Europe’s Muslims are not a homogeneous group. 
In fact they fall into at least five categories: those from Eastern Europe (Bosnia, Albania and parts of Russia); first-generation immigrants; second- or third-generation Muslims born in Europe who speak only European languages and are indistinguishable from others; converts; and those who have become largely secular. It is not easy to quantify these categories, but the secularised group is probably the smallest. According to Olivier Roy, a French academic, when Muslims are torn from their traditional moorings – customs, family life and cuisine – they tend to become more fundamentalist and in some cases fanatical. Alienated from their parents’ way of life and their host societies, young European Muslims can be easily attracted by a simple, electronically disseminated, back-to-basics version of Islam that acknowledges no national boundaries and has been spread with the help of plenty of Saudi oil money. It is a simplistic solution to a complicated set of challenges: the proper status of Muslim women in modern society; the conflict between pluralistic, democratic Western society and rigid, doctrinaire, authoritarian hierarchies; the differentiation between religion and politics in Western societies; accountability to the civic society in which they live in the face of subordination to the preachings and dictates of Muslim clerics; reconciliation of allegiance to Allah as an act of faith with obligatory duty to the state as an act of reason. Another Islam expert, Antoine Sfeir, has identified relations between the sexes as a big factor in the re-Islamisation of second-generation Muslims in Europe. Because young Muslim women often do better than men at adapting to the host society, old patriarchal structures are upset and young men acquire a strong incentive to reassert the old order. British specialists say groups of young, disaffected Muslims goad one another down the path to extremism. They develop a common interest with suffering Muslims across the globe. 
Websites and satellite television channels then supply visual images and incendiary rhetoric from any place where Muslims are fighting non-Muslims. The favourite war used to be Chechnya, then it was Iraq and later still Afghanistan. The internet is a major source of training and inspiration for militant Muslims. These patterns of self-recruitment and self-radicalisation are a headache for security services. The target groups are recent arrivals, second-generation members of immigrant communities and converts. As long as some people feel economically deprived or socially excluded, the pool of potential killers and bombers will grow. Many countries are tightening their immigration laws, shifting to a skills-based immigration system and setting citizenship tests for would-be immigrants. The French have banned girls from wearing veils in schools. British politicians, such as Tony Blair and Jack Straw, have denounced the veil as a symbol of separation. The old welcome-mat has been removed. Christopher Caldwell, who writes for the Weekly Standard and has spent more than a decade studying European immigration patterns, has expressed deep concerns about Western Europe’s naive and lax immigration policies. In a recent book entitled Reflections on the Revolution in Europe: Immigration, Islam and the West, he states that Europe is no match for Islamic self-confidence: “When an insecure, malleable, relativistic culture meets a culture that is anchored, confident and strengthened by common doctrines, it is generally the former that changes to suit the latter.” The strongest challenge facing Western democracies is how to respond to the dangers of terrorist enemies within their own borders. The 9/11 attack on the World Trade Centre and the 7/7 bombings in London illustrate the possibility of a “clear and present danger”. In the first instance the atrocity was initiated by people in a distant war zone who then used Hamburg, Germany, as an easy platform to launch their attack on the USA. In the second instance, a small band of British-born malcontents travelled by train from Leeds to London in order to plant bombs, killing themselves and many others. They were influenced by ideas, images and interpretations of Islam that circulate electronically on a continuous basis – and would continue to do so even if every extremist who tried to enter Britain were intercepted. The best that terrorist-hunters in Britain and elsewhere in Europe can do is to trace how disaffected people from their own tranquil suburbs form connections with ideological mentors and ultimately terrorist sponsors who live overseas, and how those godfathers find recruits in Western countries.

Prospects

Which trend will prevail among the world’s 1.4 billion Muslims: violent confrontation or peaceful coexistence? Will secular-minded middle-class democrats gain ground on Islamic fundamentalists? The answer is not clear. When the Western World faced its previous epic ideological struggle during the Cold War, which lasted half a century, it was only able to prevail after coming up with a durable strategy based on thorough study of communist ideology and tactics. That strategy was to contain the enemy’s military threat while offering a better ideological alternative, based on political and personal freedom combined with widespread economic prosperity grounded in free trade and integration into the global economy. It is clear that another struggle is unfolding on the Islamic front and that it requires a comparably durable strategy. This time, however, close to 20 million Muslims live in Western democracies. Another 160 million live in India. In addition, the West is saddled with a perceived bias in dealing with the Israeli-Palestinian conflict – which is a major binding factor in the Muslim world. The Western world will have to deal with its disaffected Muslim minority problem. It will require an effective ideological campaign highlighting values common to the Western and Muslim worlds. It will require an even-handed treatment of the Israeli-Palestinian problem. It will require finding effective (not counter-productive) ways of helping moderates and reformists win the theological and ideological civil battles currently taking place within the World of Islam. It can assist them in developing school curricula that emphasize critical thinking, ethics and those Islamic values that are compatible with democracy and secularism. It will have to realise that any strategy will require patience and determination. Ideological and theological struggles can last many generations.


References

Baran, Z. (2005) “Fighting the War of Ideas”, Foreign Affairs, Vol. 84, No. 6, pp.68-78
Caldwell, C. (2009) Reflections on the Revolution in Europe: Immigration, Islam and the West, Doubleday
David, P. (2007) “Special Report on Iran”, The Economist, July 21st, 2007, pp.3-16
David, P. (2009) “Special Report on the Arab World”, The Economist, July 25th, 2009, pp.3-16
Gibb, H.A.R. (1997) “Islam”, Encyclopaedia of the World’s Religions, Barnes & Noble, pp.166-199
Greenstock, J. (2004) “What Must be Done in Iraq”, The Economist, May 8th, 2004, pp.23-26
Grimond, J. (2003) “A Survey of Iran”, The Economist, January 18th, 2003, pp.3-16
Kepel, G. (2004) The War for Muslim Minds, Belknap Press
Lewis, B. (2009) “The Arab World in the Twenty-first Century”, Foreign Affairs, Vol. 88, No. 2, pp.77-88
Lichfield, G. (2008) “Special Report on Israel”, The Economist, April 5th, 2008, pp.3-16
Nasr, V. (2009) Meccanomics: The March of the New Muslim Middle Class, One World
Rieffel, L. (2004) “Indonesia’s Quiet Revolution”, Foreign Affairs, Vol. 83, No. 5, pp.98-110
Rodenbeck, M. (2002) “A Survey of the Gulf”, The Economist, March 23rd, 2002, pp.3-28
Rodenbeck, M. (2006) “A Survey of Saudi Arabia”, The Economist, January 7th, 2006, pp.3-12
Roy, O. (2004) Globalised Islam: The Search for a New Ummah, Columbia University Press
Wright, R. (2009) “Islam’s Soft Revolution”, Time Magazine, March 30th, 2009, pp.28-32
The Economist (2003) “Dealing with Iraq”, February 15th, 2003, pp.23-25
The Economist (2003) “The United Nations and Iraq”, February 22nd, 2003, pp.25-27
The Economist (2005) “Special Report on Muslim Extremism in Europe”, July 16th, 2005, pp.25-27
The Economist (2006) “Special Report on Political Islam”, February 4th, 2006, pp.21-24
The Economist (2006) “Special Report on Dubai”, December 16th, 2006, pp.73-75
The Economist (2007) “Briefing on Turkey”, July 21st, 2007, pp.24-26
The Economist (2008) “Briefing on Afghanistan”, May 24th, 2008, pp.31-32
The Economist (2008) “Briefing on Islamic Finance”, September 6th, 2008, pp.72-74
The Economist (2009) “Egypt and Global Islam”, August 8th, 2009, pp.48-49


8 The Indian Enigma

There is no country more remarkable than the enigma that is India. Direct exposure to the frantic life of its teeming millions is a baffling, often confronting, experience. It confounds any simplistic analysis. The history of Indic Civilisation covers more than four millennia. Successive waves of invaders have washed over the Indian subcontinent like monsoon storms, each flood line leaving its own debris. Despite the turbulence of each fresh wave, India’s cultural currents carry on in an endless cycle of existence. Around one-fifth of the world’s population lives on less than three percent of the earth’s surface, spread among the 28 states of India’s unitary republic – a country larger than all of Western Europe. India’s uniquely complex history is reflected in the tragic religious and regional conflicts and unrelenting poverty that continue to plague its people.

Ecological Setting

The subcontinent of South Asia encompasses an area of more than three million square km, from the Hindu Kush and Baluchi Hills in the west and the Great Himalayas in the north, to the Burmese Mountains in the east and the Indian Ocean in the south. Within this triangular subcontinent, the north-south and east-west axes extend roughly 3500 km each. Virtually every sort of topography, climate and geological formation can be found: from sub-sea-level desert to the world’s highest peaks; from perennial drought to some of the earth’s most drenched terrain; from ancient granite to “youthful” mountains in the north. Geographically the subcontinent may simply be divided into three major horizontal zones: the northern mountain belt, the Indo-Ganges plains and the peninsula in the south. Its climate is mostly subtropical, but heat is a pervasive fact of life. The earliest traces of human habitation appear to indicate that between 200,000 and 400,000 years ago, humans migrated to South Asia over the mountains of the northwest from their original habitations in Central or East Asia. The Indus River is fed by the glaciers of southern Tibet and flows through Kashmir before it veers to the south. North India’s two other great river systems, the Ganges and Brahmaputra, originate in the same region of Tibetan ice, but their waters are driven south by the Himalayas. Today the three river systems are politically reflected in the threefold division of the subcontinent into Pakistan, India and Bangladesh. These three nations depend most vitally upon the Indus, Ganges and Brahmaputra waters respectively. North and South India are separated by the Deccan Plateau, with the Western Ghats and the Eastern Ghats on either side. The southern regions were originally populated by the “Dravidians” with their own set of languages – in contrast to the Indo-European and Indo-Aryan languages of the Northern “Aryans”.

Constitution and Government

India is a unitary republic. Its president is elected for a five-year term by an electoral college consisting of the elected members of the upper and lower houses of parliament and the legislative assemblies of the states. The president appoints a prime minister, who is the head of government. On the prime minister’s advice the president appoints a Council of Ministers, which is responsible to parliament. The 545 members of parliament are elected directly by universal suffrage on a constituency basis for five-year terms. The upper house of 250 members is indirectly elected by the state assemblies for six-year terms – one-third replaced every two years. The regional government level comprises 28 states and 7 union territories. The constitution stipulates the jurisdictions of the union and the states. The Supreme Court, whose members are appointed by the president, has jurisdiction in all disputes between the states and the union. Each state has its own High Court and subordinate courts of law.

Population

The total Indian population is estimated to be around 1.2 billion, of which around 30 percent live in urban areas. The largest cities are Mumbai (±12 million), New Delhi (±10 million) and Calcutta (±5 million). The ethnic composition is made up of Indo-Aryan (72 percent), Dravidian (25 percent), Mongolian and other (3 percent). In terms of religious affiliation the Hindus are by far in the majority (80 percent), followed by Muslim (14 percent), Christian (2 percent), Sikh (2 percent) and other (2 percent). English is the most important language for national, political and commercial communication. Hindi is the national language and the primary language of 30 percent of the people. But there are 14 other official languages: Bengali, Telugu, Marathi, Tamil, Urdu, Gujarati, Malayalam, Kannada, Oriya, Punjabi, Assamese, Kashmiri, Sindhi and Sanskrit. Hindustani is a popular variant of Hindi/Urdu which is widely spoken throughout Northern India, but it is not an official language.

Early History

The Indus culture is traced back to ca. 2500-1600 BC. It developed in the valley of the Indus River in the area that was later called the Punjab and Sind. Archaeological excavations unearthed early urban settlements at the ancient citadels of Harappa, Mohenjo-Daro and several others. Hundreds of thousands of burnt bricks, granaries, seals, statues, beads and other artefacts were revealed. Indications were found that the early Indus valley dwellers conducted a brisk trade with the Sumerians of Mesopotamia, in modern Iraq. Huge granaries beside the river at Harappa seem to indicate that Indus merchants exported surplus grain to Sumer and elsewhere. They spun cotton into yarn and wove it into cloth. It appears that the use of cotton for textiles is one of India’s major gifts to world civilisation. It was exported to Mesopotamia. Around 2000 BC the original Indo-European speaking, semi-nomadic barbarians who lived in the region between the Caspian and the Black Seas, the Aryans, were driven from their homeland by some natural disaster or Mongol invasions. Some tribes moved west across Anatolia and some to the east across Persia (now Iran, a cognate of Aryan) and eventually advanced still further east across the Hindu Kush Mountains into India. From the Aryans’ religious “Books of Knowledge” (Vedas), particularly the Rig Veda (Verses of Knowledge), consisting of 1017 Sanskrit poems, the early Aryan Indo-European history has been pieced together. Unlike the pre-Aryan peoples of the Indus valley who lived in citadels, the Aryans lived in tribal villages with their migrant herds of cattle, sheep, goats and domestic horses. They wielded bronze axes as well as long bows and arrows and their literature describes how they conquered the Dasas (dark-skinned slaves). The term Aryan, while primarily a linguistic family designation, had also the secondary meaning of “highborn” and “noble”. The foremost Aryan tribe was called Bharata, probably the name of its first raja (king). 
Each Aryan tribe was ruled by an autocratic male raja and each family was controlled by its father – the origin of the patriarchal household. They occupied and settled in the catchment area and valley of the Indus River and its seven tributaries. The tenth and final book of the Rig Veda explains that the four great “classes” (varna) of Aryan society emerged from different parts of the original cosmic man’s anatomy: the brahmans issuing forth first, from the mouth; the kshatriyas second, from the arms; the vaishyas third, from the thighs; and the shudras last, from the feet. All rajas, who were kshatriyas by birth, fell below all brahmans, who alone were associated with the cosmic “head”. As they expanded eastwards toward Delhi and the Gangetic plain, the Rig Vedic Indians (descendants of the Aryans), developed a range of occupations: carpenters and wheelwrights, blacksmiths and tanners, weavers and spinners, farmers and herders, who settled down to the routine of village interdependence. Each varna (class) – the word originally meant “covering” and came to be associated with skin covering and its varying colours – had its distinguishing colour: white for brahmans, red

for kshatriyas, brown for vaishyas and black for shudras. Acute colour consciousness thus developed early during India’s Aryan age and has since remained a significant factor in reinforcing the hierarchical social attitudes that are so deeply embedded in Indian civilisation. The religion of the early Aryans centred on the worship of a pantheon of nature gods, to whom sacrificial offerings were periodically made for the good things of life and for repose thereafter. To the seeming simplicity of Aryan nature worship was added the Vedic quest for an understanding of cosmic origins and control over cosmic forces. Sacrifice had as its immediate purpose the securing of some divine favour, but it also had cosmic meaning in that its proper performance helped maintain the balance of order in the universe. By the sixth century BC there were around sixteen major kingdoms and tribal oligarchies in North India: from Kamboja in Afghanistan to Anga in Bengal. The most powerful of these mahajanapadas (tribal regions) were Magadha and Kosala south and north of the Ganges River artery. The Kosala region contained the epic city of Ayodhya and the thriving river city of Kasi (later called Banaras and now Varanasi, the capital of Hindu worship). Kosala’s capital was at Sravasti near the Himalayan foothills. Magadha’s capital, Rajgir, commanded the eastern Gangetic trade and the rich mineral resources of the Barabar Hills. It became the richest and most powerful kingdom of North India. Magadha also bequeathed to the world one of its great religious philosophies: Buddhism. As the Aryan tribes migrated eastwards, they took with them the secrets of “Aryan civilisation” such as the manufacture of iron weapons and tools and the use of ploughs, seed and irrigation to ensure a grain surplus all year round. Hostile tribes were subdued by the northern invaders who believed the gods were on their side. 
As more of the different tribal peoples were absorbed within the spreading boundaries of Aryan society, it soon became necessary to add a still lower class. It was one whose habits or occupations were so strange and “unclean” that even shudras did not wish to “touch” them. Hence the emergence of those beyond the pale of the four-varna system: the untouchables, also called “fifths” (panchamas), or outcasts. The actual pattern of social hierarchy that emerged varied greatly from region to region as a result of Aryan and pre-Aryan interaction, e.g. in South India and Bengal. Plough and irrigation agriculture greatly increased the food supply available to Aryan settlers, permitting rapid expansion of India’s population as a whole and the growth of extended family units within villages and towns. Bonds of kinship and marriage alliances as well as economic interdependence linked villages within each territorial kingdom. So the eastward and southward expansion and cultural synthesis continued: a constant blend of the “great” Sanskritic Aryan and “little” pre-Aryan traditions: conquest and assimilation. The first imperial unification of North India was completed under Mauryan rule around 326 BC. The Maurya family, led by Chandragupta, originated in a pre-Aryan clan living in Magadha south of the Ganges River. Chandragupta sustained his array of spies, soldiers and bureaucrats totalling more than a million men. He claimed a share, usually one-fourth (sometimes as high as one-half), of the value of all crops raised throughout his domain. Trade, gold, herds and other forms of wealth were also taxed. Mauryan officials were shrewdly chosen and supervised. The Mauryan Empire was divided into janapadas (districts) which reflected tribal boundaries and were administered by the emperor’s closest relatives or most trusted generals. The army included infantry, cavalry, chariots and nine thousand elephants. 
It is estimated that there were close to fifty million people in South Asia by the third century BC. The Mauryan state owned and operated all mines, vital industries such as shipbuilding and armament factories, large centres of spinning and weaving. The country was governed as a socialised monarchy with strict enforcement of working regulations on artisans, professionals and officials. Weights and measures and currency were state controlled. While in theory the king owned all land, large tracts of tax-free property were bestowed upon valiant servants of the crown and their heirs. Many artisan and merchant guilds (shreni) were also privately owned corporate bodies. Shreni exercised judicial autonomy over their members in towns and cities. An interdependent private enterprise and state-controlled economy has thus existed for more than 2000 years in India. 

Chandragupta abdicated his throne in 301 BC to become a Jain monk in South India, where he fasted until his death, while his son, Bindusara took control of the capital city Pataliputra. His son Ashoka invaded the frontier tribal Kingdom of Kalinga to the south (modern Orissa). After his violent conquests, Ashoka abandoned violence and adopted Buddha’s law of non-violence. Only three Southern Dravidian “kingdoms” (Kerala, Chola and Pandya) remained independent, as did Ceylon. Ashoka appointed many overseers of the law (dhamma-mahamattas) to supervise local officials and thus establish a central bureaucracy over India’s vast differences in customs, laws and languages in its diverse regions. There are today many inscriptions carved into rocks and pillars of sandstone throughout his enormous empire, setting out his edicts, policies and admonitions such as “compassion” and “truthfulness”. The word dharma, which means law, duty and responsibility, was used more than any other term by Ashoka. In his 26th year of rule, Ashoka inscribed the message “... this is my rule: government by the law, administration according to the law, gratification of my subjects under the law and protection through the law”. His emissaries were sent to distant regions of Southeast Asia to convert people to Buddhism and through Buddhism to Mauryan pacification and Indian civilisation. Sometime between 250 and 240 BC, Ashoka hosted the Third Great Council of Buddhism at Pataliputra, which had by then become Asia’s foremost centre of art and culture. The Ashokan pillars were topped by capitals decorated with animal sculpture, the most famous of which are the four lions of Sarnath, three of which have become the national symbol of modern India. After Ashoka’s death, the Mauryan Empire lost much of its vitality, falling into economic and spiritual decline. His many sons contested the throne but the Empire disintegrated through fragmentation, local reassertion of independence and regional rivalries. 
India’s first great unification lasted 140 years. It was won by the swords of Chandragupta and Bindusara, ruled in accord with the shrewd pragmatism of the Arthashastra and consolidated under the royal paternalism of Ashoka. The Mauryans ruled India roughly as long as the British would more than two thousand years later. After the collapse of the Mauryan Empire, India remained politically fragmented for around five centuries. The next dynasty was the Guptas. They also established their base in Magadha where they controlled the rich iron ore veins from the Barabar Hills. Chandra Gupta I was crowned Maharajadhiraja in 320 AD. The Guptas gradually expanded their frontiers to the Punjab, Bengal and Kashmir. The peak of their power was attained under Chandra Gupta II (ca. 375-415). During the Gupta reign, royal support was given to Hindu, Buddhist and Jain faiths and the Hindu temple emerged as India’s classic architectural form (e.g. at Deogarh in Central India and Aihole in the south). They are characterised by their imposing towers and extravagantly ornate structures. Commerce and Buddhism stimulated Indian interaction with China and Southeast Asia. Indian ships carried cottons, ivory, brassware, monkeys, parrots and elephants to the Middle Kingdom and brought back from China musk, woven silk, tung oil and amber. Various centres of Hindu power were established in Bali, Vietnam, Sumatra and Java. Indian agriculture blossomed in this period, providing a rich variety of succulent produce: mango, melons, coconut, pears, plums, peaches, apricots, grapes and oranges as well as such staples as rice, wheat and sugar cane. Though meat was not widely consumed, fish played a vital role. After Skanda Gupta’s death in 467, the Gupta dynasty rapidly declined. When the Punjab was wrested from Gupta control by 500 AD, India again reverted to fragmentation. South India displayed its own kind of “multicentre power” through many centuries. 
It consisted of a series of nuclear areas of village-based agricultural clusters centred around the drainage basins of the major peninsular rivers. The upland and forest clusters retained their own tribal structures. The warrior families who asserted their authority over these areas were brahman and sat-shudra transmitters of Aryan culture and Hindu civilisation. The better-known ruling families were the Pallavas, the Cholas and the Pandyas. Kanchipuram served as a major seat of control for the Pallava kings and Mahabalipuram (south of Madras) served as a major

port. Much of the art and architecture of Cambodia and Java (including Angkor Wat and Borobudur) was inspired and produced by Pallava artists and craftsmen. (See Stanley Wolpert, A New History of India, New York, Oxford University Press, 1993, pp.24-103)

Buddhism

Buddha set his “wheel of the law” (dhamma) in motion in about 527 BC by preaching his first sermon after achieving enlightenment. That sermon on the four noble truths embodied his message and was to become the philosophical core of Theravada (Teaching of the Elders) Buddhism. The first noble truth was “suffering” (dukkha) and how all existence was inexorably bound up with it: from birth to death, through sickness and old age, sorrow was everywhere, gaining poignancy in separation from those we love, intensified by proximity to those we hate; no facet of life could escape it. The second noble truth was “ignorance” (avidya), the basic cause of suffering. The root of suffering lies in ignorance of the fundamental nature of reality. Buddha spent much of his wandering years trying to understand the nature of reality. He explained in his sermon at Sarnath that had we the wisdom to understand reality’s soulless, transient misery, we would be able to elude or diminish suffering. Buddha suggested abandonment of the passions of sense organs and cravings which chain us to the wheel of cyclical suffering, rebirth and redeath. Buddha prescribed as his third noble truth that any “ill” which was understood could, in fact, be cured. The fourth and final of the Buddha’s truths was the noble eightfold path to the elimination of suffering: to hold, practice and follow right views, right aspirations, right speech, right conduct, right livelihood, right effort, right mindfulness and right meditation. The difficulty, of course, was in properly defining “right”. But Buddha taught that if one followed the eightfold path without misstep, the goal of nirvana (which literally meant “the blowing out” as of a candle’s flame) could be achieved and the pain of suffering would finally be overcome. Nirvana was thus the Buddhist equivalent of moksha, a “paradise” of escape rather than pleasure. 
The Buddha spent the next forty-five years of his life teaching these four noble truths to disciples who gathered around him in such numbers that he established a monastic “order” (sangha) which continued to grow and to spread throughout the world after his death. Initially only men could join the sangha and the vow of chastity (brahmacharya) was as important as those of non-violence (ahimsa) and poverty (aparigraha) – three vows that would become integral to Hindu concepts of piety (all three were taken by Gandhi during the latter half of his life). Nuns were also admitted to the sangha, but the Buddha was doubtful about the nature of female influence upon his monks. His foremost disciple, Ananda, warned followers to “keep alert” in the company of women! All members of the sangha were expected to pursue a rigorous course of “right discipline” (sila), yogic concentration and thoughtful study in their search for nirvana. Not only did they have to abandon all family bonds and prospects of progeny, but they were enjoined daily to beg for their food, bestowing “merit” upon those who placed rice in their bowls. With heads shaved, the saffron-robed, barefoot disciples of the Buddha marched the length and breadth of the Gangetic plains and beyond, teaching his message of moderation, non-violence and love for all creatures. The idea of monasticism achieved such popularity that it attracted religious leaders in other parts of the world, spreading to the West, to the Near East and to Europe, wandering north and east to China and Japan. In India the monks became a formidable ideological force against Brahmanism and attained great political significance in Magadha. Related heterodox faiths developed alongside Buddhism in India and elsewhere. Jainism practised asceticism, self-torture and even death by starvation. Its central doctrine is that all of nature is alive: everything from rocks and earthworms to gods has some form of “soul” called jiva. All jiva is eternal. 
Also central to Jainism is the concept of non-violence (ahimsa). Thanks to Jainism, ahimsa became a significant aspect of Hinduism – all following the Buddhist prohibition of killing any living creature. The Jain guru taught that “... all things breathing, all things existing, all things living, all beings whatever, should not be slain, or treated with violence, or insulted, or

tortured, or driven away”. Like Buddhism, Jain philosophy soon acquired all the characteristics of a religious faith, practised by an order of male monks, joined later by nuns and a supportive lay community. The Jain community, centred in Gujarat, refrained from agriculture, but turned to commerce and banking. They became wealthy and remained a mercantile community. Gandhi revived the “fast-unto-death” as a political weapon. After more than 1700 years, India’s major centres of Buddhism were destroyed in 1202 by Turko-Afghan Muslims, including the university at Nalanda where more than 10,000 monks lived and studied. Thousands of Buddhists fled towards Nepal and Tibet, and countless others who were not swift enough to escape were killed. The severity of Turko-Afghan persecution directed against centres of Buddhist monasticism was so unrelenting that the religion of Buddha was now sent into exile from the land of its birth, never to return in significant numbers until 1954. Though Buddhism flourished in Nepal, Tibet, China, Japan and most of South East Asia after its diaspora, the sangha found no sanctuary on Indian soil for some 750 years. (See Stanley Wolpert, op.cit., pp.50-54)

Hinduism

Unlike Christianity or Islam, Hinduism had no single founding figure. It is a very ancient religion in which many primitive aspects survive beside highly developed philosophical systems. Broadly speaking, it is a religion of an ethnic character, unlike the more recent missionary religions such as Buddhism, Christianity and Islam. Like Judaism, Hinduism is the faith of a single cultural unit. It did not, in the past, make any special attempts to attract the support of people outside the cultural community. Generally speaking, a Hindu is a person who chiefly bases his or her beliefs and way of life on the complex system of faith and practice which has grown organically on the Indian subcontinent over a period of at least three millennia. Hinduism is the oldest and most enduring of the Eastern group of religions which maintain that the soul inhabits many bodies in its journey through the cosmos, until it reaches its final goal, which is described in varying terms by different sects. The corollary of this doctrine is that all life, whether supernatural, human, animal, insect, or with some sects even vegetable, is governed by the same law. Whereas Western religions generally teach that man is a special creation, possessing an immortal soul which is denied to lower animals, Hinduism maintains that all living things have souls, which are essentially equal and are only differentiated through karma, or the effect of previous deeds, which conditions successive re-births in different types of body. This doctrine of samsara has given a very distinctive character to much Hindu thought and philosophy. For the religiously minded Indian, the main spiritual quest for at least 2500 years has been to rise above the cycle of transmigration and to achieve union or close contact with the ultimate Being. To many Hindu sages or philosophers this amounted to complete identification with the divine and a total loss of individual personality. 
It was generally agreed that sacrifices and good works were not enough, but would merely result in a very lengthy residence in one of the heavens. For the ordinary worshipper that would be a desirable goal, but not possessing that finality which the truly spiritual soul desired. Various schools of philosophy identified different means of achieving the supreme goal and gave diverging interpretations of the experiences of the mystics. For a Hindu it is not a question of great concern in which god a person believes. What is really important is that he/she should believe in the Hindu way of life and follow it to the best of his/her ability. The whole life of a Hindu is punctuated by ritual acts, ceremonies, sacraments and social customs. These refer to marriage, cremation, animal sacrifices, the pouring of ghee, recitations of verses, the use of sacred images, initiation rites, patriarchal marriage relationships and ascetic widowhood. Although many domestic rites, the caste system, pilgrims bathing in the Ganges and sacred cattle still survive in India, new trends have appeared in the last hundred years. In the new Hinduism there is a strong sense of social purpose. Many ancient prejudices concerning “untouchables”, divorce and widow re-marriage have disappeared; polygamy has been forbidden; and taboos and ideas of ritual impurity have weakened. Feeding the hungry and caring for the sick is part of the social 

conscience in Hinduism. Western missionary schools played an important role in sensitising communities to many aspects of modernisation. The old Hinduism is still alive, but it is gradually giving way to new interpretations, not by a religious revolution, but by a steady process of adaptation. (See Basham, A.L., “Hinduism” in Encyclopaedia of World Religions, Barnes & Noble, 1997, pp.217-254)

Islam’s Penetration

It is difficult to imagine two religious ways of life more different than Hinduism and Islam. The penetration of Islam into South Asia dates back to the early 8th century when Muslim ships started calling at the western coastline of India around the mouth of the Indus River. Islam’s major invasions were launched only by the end of the 10th century from the Afghan highlands through the Khyber Pass. By then Islam was embellished by Persian civilisation and protected by Mamluks, who were Turkish armed slaves. Entire tribes of Turkish nomads were driven into Persia and Afghanistan by China’s expansion to the west. The first independent Turkish Islamic kingdom was founded by a Samanid warrior slave named Alptigin, who seized the Afghan fortress of Ghazni in 962 and established a dynasty that lasted two hundred years. His grandson, Mahmud, the “Sword of Islam”, led many bloody forays into India from his Ghazni perch. He descended each winter into the Punjab plains, smashing Hindu temple idols considered abominations to Allah and looting India’s cities of as many of their jewels, specie and women as his Turkish cavalry could carry across the Afghan mountain passes. Many thousands of Hindus were slain, leaving a legacy of bitter Hindu-Muslim antipathy. After the plundering Ghaznavids, waves of Turko-Afghan Muslims invaded North India, first in the Punjab, then further south and east, capturing Lahore in 1186 and Delhi in 1193. Indian Buddhists were driven to Nepal and Tibet and so exiled from the land of their birth. The founding of the Mamluk dynasty transformed North India into Dar-ul-Islam (Land of Submission). The sultanate of Delhi lasted 320 years, spanning five successive Turko-Afghan dynasties. Bengal declared its independence from the Delhi sultanate in 1338 and retained it until the 16th century Mughal conquest. 
The Bengalis preferred to practise Sufism, the mystic form of Islam that was more peculiarly attuned to their cultural character, heritage and religious consciousness. It appealed to the mystical passionate yearnings experienced by many Hindus, Buddhists and other “God-intoxicated” seekers the world over. In April 1526, the king of Kabul, Babur – descended from Turkish and Mongol forebears – founded the Mughal Empire with Delhi and Agra as its twin capitals. The Mughal Empire lasted until the end of the 18th century when it was sidelined by the British Raj. (See Stanley Wolpert, op.cit., pp.104-134)

The British Conquest of India

In the age of mercantilism, the British penetration of India was led by the East India Company which operated like a state-licensed monopoly. The merchants who founded the company pooled their resources for what were large and risky ventures under protection of government charters. Starting in the 1630s, it established several trading posts on the Indian coast which later became the cities of Madras, Calcutta and Bombay. Political power initially remained in the hands of the Mughals, but the company’s textile trade rapidly expanded. It was soon augmented by the “interloper” trade of the company’s officials in partnership with Indian businessmen. Alongside the official trade of the company an enormous private business operation was developing. Pleasing the Mughal Emperor and his local subordinates required mercurial talents of wheeling and dealing as well as buying and selling – which the British had in ample supply. A fine specimen was Thomas Pitt who understood the importance of fortifying the trading posts and their surrounding European settlements. The East India Company began to raise its own regiments from among the local warrior castes, equipping them with European weapons and

subordinating them to English officers. Having begun as a trading operation, the East India Company now had its own fortified settlements, its own diplomats and even its own private army. During the subsequent century, Britain was engaged in intermittent warfare with France. Both countries were driven by a quest for universal dominion. While Britain’s Prussian ally contained the French in Europe, the British Navy carved up France’s empire on the high seas, leaving scattered British armies to drive out the remaining French colonial forces in Canada, the Caribbean and in the East. Britain’s success depended on its pre-eminence in shipbuilding, metallurgy and gun founding. The new alliance between science and strategy enabled Britain to traverse the oceans and establish its naval superiority. Its commercial foothold in India was gradually strengthened by exploiting the Indian feuds. The Indians allowed themselves to be divided and, ultimately, ruled by the British. Under the Treaty of Allahabad, the Mughal Empire granted the East India Company the civil administration – known as the diwani – of Bengal, Bihar and Orissa. It gave the company the right to tax over 20 million people – a stream of revenue larger than the trading profits. After an initial period of “smash-and-grab” tactics by Robert Clive, Warren Hastings was appointed Governor-General of Indian Territories by 1773. He is said to have introduced a more benevolent multicultural approach. Indian historians, however, claim that the aims of the colonial “nabobs” were to “drain” capital from India to Britain. By the time of Hastings, the East India Company had more than 100,000 men under arms and was in a state of near perpetual warfare. What had started as an informal security force to protect the company’s trade became a progression of new battles, conquering new territory to pay for previous battles. The ratcheting up of taxes on the Indian population coincided with local famine and poverty. 
On his return to Britain in 1785, Hastings was impeached by the House of Commons and charged with “high crimes and misdemeanours” which included “cruelty and treachery”, “extirpating innocent and helpless people”, “impoverishing and depopulating the country”, the “wanton and unjust and pernicious exercise of his powers”, “enormous extravagances and bribery” and seeking to “enrich his dependants and favourites”. In the end Hastings was acquitted by an exhausted House of Lords. The Hastings trial changed the face of British India. A new India Act was passed aimed at cleaning up the East India Company and bringing to an end the freebooting “nabob”. Governors-General were no longer company officers, but appointees of the Crown. The Earl of Cornwallis (fresh from defeat in America) was appointed to introduce a new institution called the “Indian Civil Service”, English-style private property rights and fixed landowners’ tax obligations. The effect was to strengthen the position of a rising Bengali gentry. Oriental corruption was out and classical virtue was in. British power in India continued to be based on the sword. War after war extended British rule beyond Bengal – against the Marathas, Mysore and the Sikhs – until finally the Mughal Emperor himself accepted British “protection”. By 1815 around 40 million Indians were under British rule. Nominally, it was still the East India Company that was in charge. It was now the heir to the Mughals and the Governor-General was the de facto Emperor of a subcontinent. Great Britain was now in charge of the largest empire the world had ever seen, encompassing 43 colonies on five continents. In the words of Niall Ferguson: “They had robbed the Spaniards, copied the Dutch, beaten the French and plundered the Indians.” Now they ruled supreme. (See Niall Ferguson, op.cit., pp.33-52)

The Legacy of British India

Indian nationalists have long claimed that British rule of India was good for the British but not for Indians: complaining that the wealth of India was systematically being drained into the pockets of foreigners. Between 1757 and 1947 British per capita income increased in real terms by 347 percent, Indian by a mere 14 percent. A substantial share of the profits, which accrued as the Indian economy industrialised, went to British managing agencies,

banks or shareholders. The free trade imposed on India in the 19th century exposed indigenous manufacturers to lethal European competition at a time when the USA sheltered its infant industries behind high tariff walls. It is also argued that Indian indentured labourers supplied much of the cheap labour on which the British imperial economy depended. Between the 1820s and the 1920s, close to 1.6 million Indians left India to work in a variety of Caribbean, African, Indian Ocean and Pacific colonies, ranging from the rubber plantations of Malaysia to the sugar mills of Fiji. The conditions under which they travelled and worked were often similar to those inflicted on African slaves a century earlier. On the other side of the balance sheet were the large British investments in Indian infrastructure: irrigation, industry, railways and roads. By the 1880s the British had invested £270 million in India, not much less than 20 percent of their entire investment overseas. By 1914 the figure had reached £400 million. They created the Indian coal industry from scratch and increased the number of jute spindles by a factor of 10. There were also marked improvements in public health which increased Indian average life expectancy by eleven years. The British introduced quinine as an anti-malarial prophylactic, carried out public programmes of vaccination against smallpox and laboured to improve the quality of water supplies. The British also believed that there were some advantages in setting up an incorruptible bureaucracy to serve as the Indian Civil Service. The British had called into being an English-speaking, English-educated elite of Indians, a class of civil service auxiliaries on whom their system of administration had come to depend. (See Niall Ferguson, op.cit., pp.216-220)

Independence and Partitioning

The formal roots of Indian nationalism and partitioning sentiments can, inter alia, be traced to British India’s decision to partition the Province of Bengal between Muslim and Hindu regions. In 1909 the Morley-Minto reforms provided for a limited extension of Indian participation in government and introduced separate electorates for the country’s different religious communities. After World War I, the British-trained lawyer Mahatma Gandhi emerged as the leader of the nationalist movement. Gandhi advocated extra-constitutional, but non-violent, methods of struggle. His first civil disobedience campaign began in 1920 but ended in 1922 when violence erupted. Gandhi and his Congress Party dominated the nationalist movement for the next 30 years, supported by his lieutenants Vallabhbhai Patel and Jawaharlal Nehru. Throughout the next fifteen years sporadic civil disobedience campaigns were undertaken. In 1935 the colonial British government introduced elected responsible government at the provincial level, based on a narrow franchise. The Congress Party agreed to participate in the ensuing election and gained 8 out of 11 provincial governments. The Muslim League, led by Mohammed Ali Jinnah, did poorly. It marked the beginning of a growing rivalry between the League and Congress. In 1939 the Congress Party withdrew from the provincial administrations in protest at the British decision to declare India a party to World War II. The Muslim League, however, co-operated with the British administration during the war and called for “independent states” in Muslim majority areas. In August 1942 Gandhi launched the “Quit India” movement to force the British to leave. In an effort to contain mass civil disobedience, the administration imprisoned thousands of Congress Party supporters, including Gandhi and Nehru. The end of World War II saw a major upsurge in nationalist sentiments. The British Labour Government began to prepare for Indian independence.
In February 1947 the British government announced its intention to withdraw from India by June 1948 and appointed Lord Louis Mountbatten as Governor-General to oversee the process. In the face of escalating communal strife, particularly between Muslim and Hindu groups, Mountbatten, after widespread consultation, recommended partitioning: a Hindu India and a Muslim Pakistan. A London lawyer, Sir Cyril Radcliffe, was commissioned to determine the borders. The Punjab and Bengal provinces were to be partitioned between Muslim and Hindu majority areas. On August 15th, 1947, British India was partitioned into two independent “dominions” to be known respectively as India and Pakistan. Pakistan was to consist of two parts: West Pakistan and East Pakistan, with a thousand-mile stretch of India in between! As the appointed day of partition approached, Hindus and Muslims in the Punjab region began to commit appalling atrocities against their neighbours. Mobs rampaged through villages, destroying property and murdering indiscriminately. Rape and mutilation were common. Men killed their own wives and daughters as “pre-emptive” steps. A usually peaceful population became militant and bloodthirsty. The police and army were as partisan and factionalised as the population at large. Sikhs were targeted in the Punjab by Muslims. In turn the Sikhs formed militia groups to execute revenge attacks on Muslims. In a two-way migration, millions of people moved across the borders. An estimated ten million Sikhs and Hindus came to India from Pakistan territory and six million Muslims escaped Hindu violence by moving into Pakistan. This vast impromptu migration was not anticipated. Some travelled in buses or on bullock carts, but most went on foot. Marauders constantly attacked the columns. Sometimes entire trainloads of refugees were slaughtered. Although the populations of East and West Pakistan were Muslim, the East was mostly Bengali and the West Punjabi – each set apart by regional identity, cultural heritage and language. Many Bengalis continued to live just across the East’s frontier in the Indian federal state of West Bengal. When the conflict between East and West Pakistan boiled over in March 1971, millions of the East’s Bengalis took refuge across the border in India – many of them Hindu Bengalis.
Pakistan’s inter-ethnic conflict became a full-scale international conflict. By the end of 1971, Indian troops and planes were fighting West Pakistan’s forces. The USSR supported India and East Pakistan, while the USA and China found themselves backing Pakistan. With the help of Indian forces, the Bengalis regained control of the East’s capital, Dacca, and declared Bangladesh an independent nation. The dominant political grouping, the Awami League, had to come to terms with the militant Mukti Bahini guerrillas, who had provided a major portion of the armed forces for the nationalist movement, as well as the Biharis, a major ethnic minority in the East which showed sympathy with the West Pakistani forces. On December 16th, 1971, Pakistan in turn was split in two: East Pakistan became Bangladesh and West Pakistan continued to be called Pakistan.

India’s Cultural Diversity

India is a unique laboratory of diversity: linguistic, religious and caste. These cleavages have all played an important role in defining areas of conflict. In some areas caste and language have proved to have similar boundaries and have been mutually reinforcing. Religion divided the population along different lines and tended to overshadow linguistic differentiation. Indian linguistic clusters have deep roots in Indian history. As early as the 12th century, the major regional languages had not only scripts, but also scholars and literatures. In the 13th century an alien language, Persian, was introduced by the Islamic conquerors as a language of administration to supplant Sanskrit. Under the Moghuls as well as under the British, linguistic frontiers were ignored in the determination of administrative regions. The first evidence of the recognition of linguistic sentiments came in 1920 when the Congress Party introduced linguistic sections in its internal organisational structure. Linguistic cleavages paled into insignificance when the Muslim League polarised India into two religious communities, culminating in the holocaust of 1947 with 500,000 dead and 12 million refugees. The influence of religion was so strong that the decision in 1947 to make Hindi the official language for all of residual India was met with little objection in non-Hindi areas. The excision of Pakistan removed the Hindu-Islam conflict from the centre of domestic concerns. It also brought an end to linguistic tranquillity. A cry for linguistic self-determination and the redrawing of provincial frontiers along linguistic lines was raised from all corners. Since the language of the independence movement was English, it also became the lingua franca of the new Indian elite. Though the shared use of the English language and a spirit of nationalism unified the elite at the top, a contrary process was underway at other levels, where a new social consciousness was being expressed through the regional languages. It created a new market for films, literature and newspapers – all in the regional languages. The bulk of schoolchildren received their education in their own regional language. Although approximately 40 percent of the Indian population are Hindi-speaking, the balance, especially in the Dravidian zone of southern India, have developed an intense antagonism towards Hindi. The problem is further complicated by the fact that there is no consensus over what Hindi is. Purists demand a return to the Sanskrit sources; others prefer a standardisation of the Hindi of the marketplace or varying degrees of Urdu admixture. Village Hindi is a diverse series of dialects, often mutually unintelligible and far removed from the Hindi of the towns. Beyond this, there are major sub-categories in Hindi such as Rajasthani, Bihari and Punjabi. In 1953 a States Reorganisation Commission was established to look into the merits of linguistic boundary claims. In 1956, on the basis of the Commission’s report, a major reshuffling of the provincial boundaries took place. In 1960 the last remaining bilingual state, Bombay, was dissolved with its partition into the states of Gujarat and Maharashtra. Assam declared itself a unilingual state. Caste identity has been extended over broader areas by social change, but it halts at linguistic frontiers. Marriage or kinship across the language boundary remained relatively rare. Three castes, the Brahmans, Kayasthas and Banias, favoured by a literary tradition, were the first to take advantage of British education.
This resulted in their domination of the services and professional sectors, where they tended to form closed preserves, excluding members of other castes. Individual failure and frustration tend to be translated into caste terms and result in increased tension and bitterness. Elections to village councils heighten caste feelings through electoral invocations of caste loyalty. Traditionally dominant castes struggle to preserve their hegemony, while the others strive to overthrow their control. India’s diversity has come into sharper focus since independence as a result of the emotional power of sub-national sentiments and feelings of solidarity. With linguistic states a reality, linguistic loyalties have become more firmly rooted. Since inter-ethnic conflict in South Asia does not neatly confine itself within national boundaries, India itself has to accommodate its large Bengali population in the federal state of West Bengal. The state has long been plagued by severe poverty and has given rise to both Maoist and more orthodox Communist parties.

Indian Politics

In 1950 the new Indian constitution came into effect, declaring the country a republic and a “federal union” with a parliamentary system of government. The first post-independence elections in 1951-52 gave the Congress Party 364 of the 489 seats, but only 45 percent of the popular vote. Under Nehru the government pursued moderate left policies centred on state-directed industrial development at home and non-alignment in foreign affairs. Elections in 1957 and 1962 confirmed the influence of the Congress Party and Nehru’s personal popularity. In the autumn of 1962 a border conflict broke out with China, which culminated in a Chinese invasion of the north-east border area and a humiliating defeat for India. Nehru died in May 1964 and was succeeded by L.B. Shastri. In August 1965, war broke out again with Pakistan over Kashmir. The Indian army repelled Pakistan’s forces. Both countries accepted a Soviet offer of mediation and peace talks were concluded at Tashkent in January 1966. Shastri died at the end of the peace conference and was succeeded as Prime Minister by Nehru’s daughter, Indira Gandhi. In the 1967 elections the Congress Party lost a lot of ground and Indira Gandhi formed an alliance with the radical wing of the Congress Party, but was opposed by the party’s right wing.

In 1969 the Congress Party split in two, but Indira Gandhi succeeded in winning the 1971 election with the slogan “Abolish Poverty”. In 1971 war again broke out with Pakistan as India intervened in support of the secessionist forces in East Pakistan, underwriting the emergence of independent Bangladesh. In 1974 opposition to the Congress Party escalated, leading to campaigns of civil disobedience. In 1975 Gandhi responded by imposing a state of emergency, suspending established civil liberties and postponing the 1976 elections. When elections were finally held in March 1977, Gandhi’s Congress Party lost more than half of its seats to the Janata Party. Under Morarji Desai, the Janata Party formed the first non-Congress Party government since independence. However, factional conflict and the lack of a coherent set of policies forced Desai to resign as Prime Minister in July 1979. He was briefly succeeded by Charan Singh, who did not succeed in forming a viable government and resigned a month later. The January 1980 elections restored Indira Gandhi’s Congress Party to power. Her autocratic style of leadership weakened the party and distorted the conduct of public affairs. Beginning in 1983, serious unrest developed in the state of Punjab over Sikh demands for regional autonomy. Although Prime Minister Gandhi opened negotiations with Sikh political leaders, political influence in the Punjab shifted towards the militant factions led by Jarnail Singh Bhindranwale. His supporters organised the murder of political opponents and attacks on security forces. In June 1984 the army launched an assault on the Golden Temple of Amritsar, the power base of the militants. The militants were routed after a fierce battle. In October 1984, Indira Gandhi was assassinated by two of her Sikh bodyguards in revenge. Communal violence swept over the capital New Delhi and some 2500 Sikhs were massacred.
Gandhi’s son, Rajiv, succeeded her within hours of her death and called elections for December 1984, which the Congress Party won in a landslide. In July 1985 Rajiv Gandhi signed an accord with the Sikh leader Longowal. After Longowal was killed by an extremist assassin, his lieutenant Barnala led the Akali Dal to victory in the state elections in September 1985. However, in May 1987 the Barnala government was dismissed and the Punjab was again placed under central control to contain the militants. Rajiv Gandhi began his period in office promising to clean up Indian politics and to bring the country into the twenty-first century. But increasing divisions within the party and his style of leadership hindered reform. In 1987, V.P. Singh was moved to the Defence Ministry after clamping down on business tax avoidance. He was subsequently forced to resign after launching an investigation into corruption in defence contracts. Singh joined with other dissident Congress Party figures to launch the Jan Morcha Party – later merged into the Janata Dal – and formed a broad coalition of opposition parties under the banner of the National Front. Elections for the lower house held in late November 1989 resulted in victory for the Janata Dal-led National Front coalition with the support of the right-wing Hindu revivalist Bharatiya Janata Party (BJP) and of Left Front groups. In early December 1989, V.P. Singh was sworn in as Prime Minister. The unstable coalition which held Singh in position did not last long, and he was forced to resign in November 1990. He was followed as Prime Minister by Chandra Shekhar, leading a minority National Front government which fell in February 1991. New elections were called, but before the election process was completed, Rajiv Gandhi, who was widely expected to be returned to power, was assassinated by Tamil Tigers from Sri Lanka. The election was postponed until June 1991.
The Congress Party failed to win an outright majority, but formed a government with the support of a few independents. P.V. Narasimha Rao became India’s tenth Prime Minister. V.P. Singh’s Janata Dal and its National Front affiliates suffered heavy losses in the election and the BJP emerged as the largest opposition party, sweeping the key northern states of Bihar and Uttar Pradesh. The BJP campaign called for the destruction of the controversial 16th century Babri mosque at Ayodhya and the construction of a Hindu temple on the site it claimed was the birthplace of the Hindu god Lord Rama. On 6th December, 1992, the BJP’s campaign against the Babri mosque was realised. A Hindu crowd demolished the 464-year-old mosque. It fuelled riots all over the country, causing grave problems for the Rao government. Relations with Pakistan also became strained after a series of bomb explosions in Mumbai on 12th March, 1993, which killed 257 people, was blamed on Pakistani military intelligence. Relations with Pakistan were further set back in 1994 by repeated Indian claims of Pakistani interference in Kashmir. In October 1996, Prime Minister Rao was indicted on three counts of corruption. After the 1996 elections, the BJP formed a government which lasted only 13 days before being replaced by a coalition under Deve Gowda who, in turn, was soon toppled for tackling corruption too eagerly for some Congress Party members. He was quickly replaced by I.K. Gujral. In March 1998, the Congress Party finally persuaded Sonia Gandhi, the widow of their assassinated former leader, Rajiv Gandhi, to become party leader. New general elections in 1998 saw the BJP form a coalition government under Vajpayee as Prime Minister. Their victory was a source of great concern to India’s large Muslim minority, given the BJP’s strong anti-Muslim rhetoric. India proceeded with a series of nuclear test detonations in May 1998, leading to the imposition of mild trade and diplomatic sanctions against India by several countries. Pakistan soon countered by detonating its own nuclear devices shortly afterwards. Vajpayee called an election in 1999 after losing a no-confidence motion, but his government was returned with an increased majority. India’s population officially reached one billion in May 2000. At the start of the new millennium, India continued to be plagued by internecine violence in Jammu and Kashmir. A devastating earthquake measuring 7.9 on the Richter scale hit the western state of Gujarat, leaving 20,000 people dead and 600,000 people homeless. On 13th December, 2001, six gunmen representing a Pakistan-based militant group stormed into the national parliament in New Delhi, leaving 14 persons dead. The Muslim militants (JeM and LeT) also launched an attack on the Jammu and Kashmir state legislative assembly in Srinagar.
Cross-border road and rail links had to be suspended and both India and Pakistan amassed hundreds of thousands of troops along their shared borders. A train carrying Hindu activists was set alight by an angry mob at Godhra Station in Gujarat, prompting thousands of Hindus to go on the rampage, leaving over 1000 persons dead and thousands displaced. Tensions were further stirred over plans by the Vishwa Hindu Parishad (VHP), a right-wing Hindu group, to build a temple on the ruins of the Babri mosque in Ayodhya. India’s north-east states Assam, Arunachal Pradesh, Nagaland, Manipur, Mizoram and Meghalaya have for decades harboured separatist movements. These Indian states border on Myanmar, Bhutan and Bangladesh, where militant separatists set up camps in the border areas. Since 2002, India’s defence ministry has engaged the co-operation of neighbouring countries to crack down on the militant camps in the border areas. A milestone was reached in India when a BJP nominee, Dr Abdul Kalam, a Muslim and architect of India’s missile arsenal, won the vote on 18th July, 2002, to become India’s 11th President. Relations with Pakistan over the disputed region of Kashmir also improved in the next few years. Air services, passenger and freight train services as well as cricket matches were restored. Peace dialogues continued. Italian-born Sonia Gandhi led the Congress Party to victory in the 2004 elections and asked former finance minister Manmohan Singh to become Prime Minister. He became India’s first Sikh and first non-Hindu to occupy the Prime Minister’s office. His Congress Party-led United Progressive Alliance (UPA) government announced a package of reforms.
During his period in office, Manmohan Singh faced several national emergencies: Maoist Naxalite guerrilla insurgencies in the north-east states, a massive earthquake in Kashmir, a series of Pakistan-based LeT bomb attacks on commuter trains and stations in Mumbai, separatist violence campaigns in Assam and violent protests in West Bengal over the establishment of special economic zones (SEZ) designed to stimulate foreign investment. But the aged Manmohan Singh persevered with his remarkable reform programme to build a better economy in India. He also signed a nuclear co-operation treaty with Pres. Bush under which India would separate its military and civilian nuclear programmes and accept a degree of IAEA scrutiny in exchange for uranium supplies and access to US technology. According to a December 2008 survey by The Economist, of the 522 members of the 2008 Indian parliament, 120 were facing criminal charges. Around 40 of these were said to be accused of serious charges, including murder and rape. Sadly, most Indian politicians are presumed to be corrupt. In view of India’s poor and fractious society, patronage politics is virtually inevitable. The Economist claimed that Indian politics had become much murkier in recent years because of two factors: the rise of regional caste-based parties, nakedly dedicated to delivering patronage, and the mutinous coalitions engendered thereby. When the Congress Party returned to power in 2004 after eight years in the wilderness, it won only 145 of the 543 seats. To form a government, for which 272 seats were required, Congress had to put together the United Progressive Alliance (UPA) with 12 other parties. Putting together a coherent support base under this arrangement would have been hard enough, but the UPA was still short of a majority. So Congress recruited “outside” support from another five parties, the most important of which was a coalition of Communist parties, the “Left Front”. This absurdly complicated and unrepresentative coalition government turned out to be more enduring than expected. Much credit must be awarded to Prime Minister Singh, but his lone effort had its limitations. The Communists proved to be the most obvious blockage, opposing every liberal proposal on principle. Like India’s vast bureaucracy, the government was forced to expend far too much energy merely to sustain itself in power. In 2007 the Communist contingent walked out over the civil nuclear co-operation pact with the USA. The government survived only after Mr. Singh threatened to resign, and the coalition stayed in office by recruiting a new ally, the low-caste Samajwadi Party (SP) from Uttar Pradesh. It is a troubling fact that the Indian government is forced to take risks and to make concessions to get support for any bold, but essential, policy initiatives. The fragmented polity makes reaching a consensus virtually impossible.
Every government is bound to be a coalition, led either by the Congress Party or by the BJP. The Congress Party ruled India for almost four decades by relying on three main groups for support: Muslims, high-caste Hindus and Hindu dalits (“untouchables”). The Congress Party also relies on a residual fondness for the Gandhi family. But Mrs. Gandhi’s son, 38-year-old Rahul Gandhi, does not appear to display obvious leadership qualities. The Congress Party does not have a clear-cut ideological base to unite its squabbling factions. The BJP is the only major alternative source for coalition building. It is built on a base of about 15 percent of Indian voters – typically high-caste and from the north – who feel attracted to its Hindu-chauvinist creed, known as Hindutva, or “Hinduness”. It occupies a right-of-centre nationalist stance, but has recently relinquished overt stress on its Hindutva image in order to avoid offending potential Muslim support. Indians are proud of their democracy. It has been interrupted only once, in 1975, by Indira Gandhi’s 21-month state of emergency. At the next opportunity India’s voters threw out Mrs. Gandhi and her Congress Party for the first time in its history. They thereby indicated the importance of timely elections. India’s Election Commission enjoys much status and effective power. It can, and does, remove any official suspected of undue bias. Every five years India holds a reasonably orderly and fair election. Its 28 states do the same, according to their own electoral calendars. These must be considered significant accomplishments. The Indian political scene is a kaleidoscope of constantly shifting coalitions. It reflects the highly fragmented structure of a complex society influenced by region, religion, caste, language, ideology, traditions, loyalties and constantly changing public opinion. It imposes very demanding tasks on India’s leaders to muster enough support to move the country forward.

India’s Economy

India’s GDP in 2008 was estimated at $US1,362 billion (per capita $1,190) with an economically active population of around 530 million and unemployment estimated at around 7.5 percent. The agricultural sector employs around 60 percent of the workforce and contributes around 21 percent of GDP; industry employs around 17 percent of the workforce and contributes around 28 percent of GDP; and services employ 23 percent of the workforce and generate around 51 percent of GDP. India produces around 1 million barrels of crude oil per day at its Assam oil fields and around 32 billion cubic metres of natural gas. It produces around 450 million tonnes of coal and has the fourth largest coal reserves in the world, based in the states of Bihar, West Bengal and Madhya Pradesh. The bulk of its electricity (83 percent) is generated from thermal sources and 14 percent from hydro. About 50 percent of the land is arable and, since agriculture is the principal industry, much of India’s vast population is dependent on the land for a living. India is self-sufficient in food grains (rice and wheat) and a net food exporter. India is a legal producer of opium poppy for the pharmaceutical trade, but also produces opium poppy and cannabis for the international drug trade. The total cattle stock was estimated (2007) at 180 million, as well as 62 million sheep, 124 million goats, 475 million chickens, 100 million buffalo and 630,000 camels. A total of 23 percent of the country is forested. In 2006 a total of 7 million tonnes of fish were caught. India’s exports in 2007 totalled $US141 billion, mainly textile goods, gems and jewellery, engineering goods and leather manufactures. Imports of $US224 billion included crude oil, machinery, fertiliser and chemicals. Its main export destinations are the USA (17 percent), the UAE and China. The main sources of its imports are China, the USA and Germany. Around 4 million tourists visit India annually. Some Indian companies have achieved considerable global success. Prime examples are Wipro, Infosys, Tata Steel (with its acquisition of Britain’s Corus) and Mittal, now one of the largest steel companies in the world. But more impressive than the success of India’s best companies is the zest for business shown by millions of Indians in dusty bazaars and shack factories. Indians are truly entrepreneurial.
Indians have prospered everywhere outside India. Inside India the challenge is daunting. Shifting the bulk of the population from subsistence agriculture to more productive livelihoods would be very difficult even if the number of people of working age was not growing at such a rapid pace. Roughly 14 million Indians are now being added to the labour market each year and the figure is rising. Half of India’s people are under 25 and 40 percent under 18. Only about 20 percent of jobseekers have had any sort of vocational training. But job-creating economic growth is stifled by India’s cumbersome labour laws. To escape throttling labour laws, Indian entrepreneurs tend to keep their operations small: 87 percent of manufacturing jobs are with companies that employ fewer than ten people. For the period 2003 to 2007, the Indian economy managed to grow at an average annual pace of around 8.8 percent, compared with around 6 percent in the 1980s and 1990s and a measly 3.5 percent during the three decades before 1980. In earlier years interventionist policies shackled the economy. Since the start of the new millennium India has been reaping the rewards of reforms that began in the 1980s and were deepened in the early 1990s – chiefly under the positive influence of Manmohan Singh’s deregulation. These reforms lowered barriers to trade and liberalised capital markets. As a result total trade in goods and services leapt to 45 percent of GDP from 17 percent in 1990. The biggest obstacle to higher growth levels is India’s lagging infrastructure – especially roads, ports and power. According to the World Bank, the average manufacturing firm loses 8 percent of sales each year from power cuts. India spends 4 percent of its GDP on infrastructure compared to China’s 9 percent. India’s government has indicated ambitious plans to increase total infrastructure spending. The idea was for the bulk of it to be financed by public-private partnerships.
Private investors, particularly foreign ones, seem to shy away from sectors like electricity and roads because they are uncertain of earning a reasonable return. It is said that only half of all electricity used is paid for, because power is stolen and bills are left unpaid. Regulatory reforms are required to protect the interests of both investors and consumers. Another big obstacle to growth in manufacturing is India’s labour laws, which are among the most restrictive in the world. Firms employing more than 100 people cannot fire workers without government permission. This is a major impediment to employment growth since it discourages expansion. Central government, relying on the Communist Party as a coalition partner, is totally hamstrung and cannot reform the system. In theory the state governments can apply the laws with more flexibility in the special economic zones, but this exemption has not yet led to more flexible labour markets. The third obstacle is the dreadful quality of public services. This problem is felt across the board, from education and health to water supplies and refuse collection. Half of urban households lack drinking water within their homes; one-quarter have no access to a toilet, either public or private. It is claimed that public services have worsened in recent years. In Bangalore water is irregularly available, for only three hours per day. This partly explains why people are reluctant to move to towns and cities. The fourth obstacle is the quality of education and healthcare. A survey in 2003 found that only half of paid teachers were actually teaching during school hours. Another survey found that healthcare centres in poor parts of Delhi had a more than 50 percent chance of prescribing a harmful therapy for common ailments. Government spending accounts for only 21 percent of total health spending and 40 percent of education spending. People go to sub-standard private providers because public provision is even worse. Despite the strong reformist instincts of Manmohan Singh, the Prime Minister, the need to maintain the coalition of splinter groups overwhelms the appeal of reform. Yet many economists in Delhi reckon that annual growth of 8 percent is sustainable even without significant reform, despite the formidable supply-side constraints of infrastructure, labour laws and public services. This view seems over-optimistic. Better education, labour market flexibility and less red tape are essential prerequisites for sustained growth. The country has to put in place the right policies. While infrastructure suffers from too little investment, the rest of the economy is burdened by too heavy a government hand.
Seven of India’s ten biggest companies, measured by sales, are majority-owned by the state (five oil-and-gas firms, a steel producer and a bank); state-owned banks control nine-tenths of deposits; the railways employ more people than any other commercial organisation in the world. Privatisation started in 1991 with the sale of minority stakes in some state enterprises. In 1999 the previous BJP-led government shifted the focus to the sale of ownership and control to strategic investors. Then, in 2003, the Supreme Court ruled that some sales required parliamentary approval. Subsequently, in deference to its left-wing allies, the Singh government agreed that “generally” profit-making companies (i.e. those that investors might want to buy) would not be privatised. The government also mooted that the proceeds of privatisation would be earmarked for social-sector schemes. The private sector is hamstrung by many controls. The over-active “control raj” is alive and well: for example, some 670 items including 21 textile and hosiery products, are “reserved” for small-scale producers. The Institute of Planning and Management, a think-tank, reported in 2004 that each industrial unit is visited by between 40 and 60 inspectors in the course of a month. It estimated that Indian manufacturers spend 16 percent of their time dealing with government officials. It has also been hinted that the government considered extending affirmative action policies to the private sector. These policies would “reserve” a proportion of jobs for minorities and “backward” castes. The business community argued that it would impose a further restraint on investment and open another door to corruption. They stressed that more flexible labour laws and fewer bureaucratic obstacles would be a more cost-effective remedy. Since Mr. Singh left the finance ministry, the public sector deficit, including state and central governments, has been hovering around 9-10 percent of GDP. 
This deficit is financed domestically by a timid banking system that keeps 40 percent of its assets in government debt. This debt, however, stands between the government and its development targets. A large chunk of tax revenue goes into interest payments, civil service pay and pensions, defence and subsidies. The deficit also threatens to choke private-sector investment by pushing up the cost of credit. (See The Economist’s Special Report on “India’s economic reforms”, June 12th, 2004, pp.65-67)

India needs faster growth to create more jobs for its expanding population and to relieve poverty. Although the educated middle class has made significant gains, the 60 percent of the population close to or below the poverty line has been left behind. Measured by the commonly used Gini coefficient, India has less income inequality than China or America. But it has much more poverty. An estimated 260 million people still live on the equivalent of less than $1 a day. Half of all children under five are malnourished. Better education, improved infrastructure and better public services can not only increase growth but also spread the rewards. As The Economist commented, India’s economy is not likely to sprint ahead like a tiger; rather, it will amble along like an elephant. But an elephant has a lot of stamina and can travel far if its way is not blocked. (See The Economist’s briefing “India on Fire”, February 3rd, 2007)
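The Gini coefficient mentioned above summarises income inequality as a single number between 0 (perfect equality) and 1 (all income held by one person). A minimal Python sketch of the standard calculation, using illustrative income figures rather than real survey data:

```python
def gini(incomes):
    """Gini coefficient via the mean-absolute-difference formula.

    For incomes sorted ascending: G = sum((2i - n - 1) * x_i) / (n * sum(x)),
    with i running from 1 to n. Returns 0 for perfect equality.
    """
    xs = sorted(incomes)
    n = len(xs)
    weighted = sum((2 * i - n - 1) * x for i, x in enumerate(xs, start=1))
    return weighted / (n * sum(xs))

# A more skewed distribution yields a higher coefficient.
print(gini([1, 1, 1, 1]))    # perfect equality -> 0.0
print(gini([1, 2, 3, 10]))   # skewed toward one earner -> 0.4375
```

The example data and function name are purely illustrative; actual country-level Gini figures, such as those cited for India, are computed from household survey distributions.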

The Infrastructure Handicap

Every day more than 1000 children die in India of diarrhoeal disease. The Ganges in Varanasi contains 120 times more faecal coliform bacteria per 100 millilitres than is considered safe for bathing. Four miles downstream, with inputs from 24 gushing sewers and 60,000 pilgrim-bathers, the concentration is 3000 times over the safety limit. In places the Ganges becomes black and septic. Corpses of semi-cremated adults or enshrouded babies drift slowly by. India’s sanitation is abhorrent. In 2008 it was estimated that only 13 percent of the sewage its over 1 billion people produce is treated. An estimated 700 million Indians have no access to a proper toilet. Water-borne diseases caused by poor sanitation are a big reason why so many of India’s children are malnourished. The constricting impact of India’s infrastructure on its economy is particularly illustrated by the derelict condition of its roads, ports, railways and airports – all operating close to or beyond capacity. In 2008 it took an average of 21 days to clear imports in India. In Singapore it takes three. The Jawaharlal Nehru Port Trust in Mumbai, which handles 60 percent of India’s container traffic, has berths for nine cargo vessels. Singapore’s main port can handle 40. With the number of air passengers in India growing at 30 percent a year, the inadequacy of its four main airports is obvious. India’s 3.3 million kilometre road network is the world’s second-biggest, but most of it is “pitiful”. Its national highways account for only 2 percent of the total. Only 12 percent, or 8000km, are dual carriageways. China, by contrast, had some 53,000km of highways with four lanes or more by the end of 2007. India’s urban roads are choked. The average speed in Delhi has fallen from 27 kph in 1997 to 10 kph. All of the country’s roads are perilous, even before a million Nanos a year are added to them, as predicted by Tata, the car maker. India’s shortage of power is an even bigger concern. 
Peak demand in 2007 outstripped supply by almost 15 percent. In some industrial areas businesses were cut off for 24 hours at a stretch. According to the World Bank, 9 percent of potential industrial output in India is lost to power cuts. Some 600 million Indians have no mains electricity at all. Despite the objections of its Communist Party coalition partners, the Singh government has pushed public-private partnerships for building roads and airports. New airports were opened in Hyderabad and Bangalore in 2008. The airports in Mumbai and Delhi were scheduled to be modernised by 2010. Government plans have scheduled 1500km of new road and rail linkages between Delhi and Mumbai, studded with manufacturing hubs. It will require an investment of $100 billion and is meant to be completed by 2013. Using Mumbai as an example, James Astill argues that there are two main reasons for the decrepitude. The first is that tight land and rent controls have destroyed Mumbai’s land and property markets. For fear of being stuck with immovable tenants, landlords, for example, have left an estimated 40,000 properties vacant. The second reason is longstanding under-investment in Mumbai by the state government. It diverted Mumbai’s revenues to rural areas which had more voters. India’s cities do not have influential centres of decision making to manage their development.

India has introduced a planning scheme to double its investment in infrastructure to $475 billion over a five-year period, representing 8 percent of GDP per annum. In 2008 the investment had already been trimmed down to only 4.6 percent of GDP. A major obstacle to mobilising the necessary capital resources for infrastructure investment is the shallowness of India’s corporate debt markets. In addition, innumerable bureaucratic and legal impediments stand in the way – colloquially referred to as the “permit raj”. Private and public sector projects get equally bogged down. Attracting private investment to where it is most needed, in power generation, is the most difficult. Private investors fear they will not get paid for their electricity because state governments like to give it away for free to voters, or allow it to be stolen. The central government tried to “unbundle” power generation, transmission and distribution. But the state governments have undermined this scheme, so that 35 percent of India’s power is still stolen. In Delhi, where distribution has been privatised, the theft rate has dropped from 48 percent to 18 percent. Five of the states contribute 80 percent of the losses of India’s state utilities and five better-governed ones contribute 78 percent of cash profits. Education and healthcare have an equally abysmal record. Illiterate children cannot be taught basic hygiene. Illiterate men are not equipped for productive employment. In 2001 only 65 percent of the population was defined as literate (compared to 90 percent in China). In 2007 the overall education budget represented 2.8 percent of GDP, about half the figure in Kenya. Where children do attend school, the problem is quality. According to a World Bank study, only half of India’s teachers show up for work and half of Indian children leave school by the age of 14. The higher education system is also appalling. It has been called “the collateral damage of Indian politics”. 
Politicians, or their lackeys, collect bribes for appointing faculty, admitting students and awarding good grades. They insert their supporters to run the racket. Having destroyed the public universities, they then grant themselves permission to open private universities from which they milk the profits. (See James Astill, op.cit., pp.11-14)

Burdens on Business

In a ranking of 155 countries by ease of doing business in 2006, the World Bank and its affiliate, the International Finance Corporation, list India at 116, two places below Iraq, 56 below Pakistan and 25 below China. The “licence raj”, the “inspection raj” and the “infrastructure deficit” impose heavy costs on business firms. Power shortages and traffic jams make doing business a trial. Vineet Agarwal of the Transport Corporation of India, a freight firm, gave an excellent description of the effect of both “hard” and “soft” constraints on Indian business. He described the trials and tribulations of a 2150km journey of a typical cargo between two of India’s great “metros”, Kolkata and Mumbai. The story appeared in The Economist under the heading “The Long Journey – Why Indian business moves so slowly”: “The lorry is loaded at 2pm in central Kolkata. But it cannot leave until after 10pm, because heavy vehicles can use the city streets only at certain times. By then, there is a jam and it is 4am before the lorry hits the National Highway 6. It takes a good 14 hours to travel the 180km to the border of this state, West Bengal, with Jharkhand. By then the border is closed for the night. At 5am the following morning, the lorry joins the border queue. It takes two hours for the documents to be cleared, and the same time again to cross a sliver of Jharkhand. After another two-hour queue, it enters Orissa and enjoys a relatively uneventful 200km. But then it has to stop for the night, because the road is closed to avoid the danger of attacks by bandits or Maoist insurgents. Day four begins again at 5am, and after 12 hours on the road the lorry reaches the next state border. Here it queues for four hours, but at least it can cross at night, making a creditable 350km in one day. So by day five, the lorry is in Maharashtra, the state of which Mumbai is the capital.

However, the lorry still has to pass a further 12 toll-booths and inspection points after the 14 it has already negotiated, so it takes another two days to get to Mumbai itself. The driver then has to telephone the octroi agent and get the tax processed, which takes all night. It is the morning of day eight before he reaches his customer in Mumbai, having achieved an average speed of 11km per hour and spent 32 hours waiting at toll-booths and checkpoints.” (See The Economist, “A Survey of Business in India”, June 3rd, 2006, p.11)

Democracy’s Drawbacks

India’s major obstacles are well-known: a lousy infrastructure, bumbling and burdensome regulation and restrictive labour laws. But its reform efforts seem to be perpetually stalled in political recriminations and horse-trading. Delicate coalition arrangements have kept Manmohan Singh in office as Prime Minister since 2004. A co-ordination committee was set up between the left-of-centre Congress Party and its coalition partners (the United Progressive Alliance or UPA) on the one side and the Left Front of Communists and other left-wing parties on the other. The Communists staged a four-month boycott of the co-ordination committee to press their policies and then, in concert with the trade unions, called a one-day strike. The banks, along with the Communist stronghold of Kolkata (Calcutta), were paralysed. As finance minister in the 1990s, Manmohan Singh pushed through the measures that kick-started reform in India. Without the support of the Left Front, his reform efforts stalled. Singh introduced a measure that guaranteed 100 days’ employment to every household in India’s poorest districts to appease the Left Front. Much of the money, as much as 1 percent of GDP by some estimates, has been wasted or stolen. The list of what Singh was prevented from doing is much longer. Progress in liberalising India’s notoriously rigid labour laws is a key political battleground. No company with more than 100 employees can make workers redundant without obtaining approval from local labour boards. According to the Left Front this protects workers from unscrupulous employers. In fact, it makes employers wary of taking on new staff, opening new factories or growing beyond the threshold of 100. It protects unionised labour at the expense of those not in work. The Left Front benefits from the system in West Bengal and Kerala, the two biggest states where the Communists are strong. 
Those who are losing out as a result of unreformed labour laws are the hundreds of millions of people who are now marginally employed in the countryside – despite the booming labour-hungry textile industry. India’s antiquated laws are preventing it from exploiting the textile boom – in contrast to China’s successful expansion of its textile industry. India is also losing out in terms of foreign direct investment. China attracted $60 billion in 2004 alone and benefited from the technology, expertise and marketing relationships that this money represents. One chief reason for the discrepancy is that India imposes caps on foreign direct investment in a host of economically important but politically sensitive sectors: insurance, aviation, media, retailing, etc. Direct foreign ownership in retailing is banned, which explains why even Delhi’s smartest shopping areas are scruffy and chaotic. The Left Front is vehemently opposed to lifting the rules on FDI. Given the recalcitrance of the Left Front and its pivotal position in the coalition, Mr. Singh’s hands are tied. The same applies to privatisation, or its younger sibling disinvestment, meaning the selling of minority stakes in state-controlled companies. The coalition was prevented by the Left Front from privatising nine so-called “crown jewels”, or leading state-owned companies – most of which are loss-making operations. Nothing has been done since 2004 to reduce the mountain of subsidies that distorts the Indian economy. These subsidies consume a shocking 14-15 percent of GDP. Worst of all could be the heavy hand of bureaucracy. The “inspector raj”, the “licence raj” and the “permit raj” are kept alive by the left-wing members of the coalition. They live on in the cascading excise and sales taxes, one of the biggest handicaps facing manufacturers. They are also reflected in the chronic budgetary deficits of around 8 percent per annum. 
Much of the deficit goes into interest payments (40 percent of recurrent spending), defence, subsidies and civil service wages and pensions. It leaves little scope for capital investments to reduce the infrastructure deficit. West Bengal is a state of 82 million people with Kolkata (Calcutta) as capital, run for 28 years by the Communists and their allies. It is ironic that the “free market model” works more freely in Communist China than in democratic India – thanks to India’s own democratically elected Communist and left-wing politicians. (See The Economist’s Special Report on “Reform in India”, October 29th, 2005, pp.23-25)

Intergroup Conflict and Violence

India is a prime example of a pluralistic country where solidarity patterns exist that rival the commanding loyalty which the state itself is able to generate. These solidarity patterns have been produced by India’s convoluted history and are based upon shared religion, language, ethnic identity, race, caste or region. Historically, in some situations or regions of India, they led to the constitution of autonomous political communities such as Pakistan in 1947 and Bangladesh in 1971. Today India still faces the fall-out of remaining conflicting solidarity patterns in Kashmir, on its frontiers with Pakistan and on its north-eastern frontiers in Assam and Bihar. In many ways India remains a veritable compound where some of the most durable and persistent cleavages which cause men to rise up against other men are piled up within the confines of a single political system. These cleavages have produced a horde of peasant revolutionaries, regional separatists, low-caste champions and Muslim jihadists. This implies that India has a much larger conflict potential than is commonly supposed. In the recent past several troubling “hot spots” erupted. Towards the end of November 2008 an outrageous terrorist attack was carried out in the commercial heart of Mumbai by Pakistan-based Islamist militants that indiscriminately killed 180 people, including foreign tourists and businessmen. India’s response was mercifully restrained – possibly because Indians have long been used to conflict and terror. Even before the Mumbai terrorist attacks, 2008 had been a violent year. In Jaipur, Ahmedabad, Bangalore and Delhi, dozens of people were killed in summer bombings by a terrorist group. These seem to have sprung from a long campaign by Pakistani and Bangladeshi militants to stir revolt amongst India’s 150 million Muslims. Poor and often marginalised, they have many grievances. 
The Indian Mujahedeen circulated a list of allegedly state-sanctioned crimes against Muslims and went on to say: “If you still think that the arrests, expulsions, sufferings, trials and tribulations inflicted on us will not be answered back, then here we remind you ... those days are gone.” Apart from Kashmir, which carries the threat of an international conflict, religious violence appears to be the most troublesome of India’s conflicts. The rise of the BJP is symptomatic of the rise of Hindu consciousness. It advanced from the Ayodhya mosque issue into a widespread sense of animosity against Muslims. Under Vajpayee, the BJP also became known for its liberal economic management, and many of its leaders are less Hinduist than nationalist. But the Hindu fanatics are strongly based in BJP-ruled Gujarat, one of India’s most prosperous areas. Also in BJP-ruled Orissa there is an ongoing campaign against Christians, in which many Christian houses and churches have been torched by Hindu fanatics. In most of India, Hindus, Muslims and Christians live together peacefully. Secular Indians also constitute a numerous category. But the conflict pattern seems to have been set: terrorism from aggrieved Muslims drawing a violent response from Hindus. Kashmir is India’s only Muslim-majority state, with an ever-present potential for pro-independence protests. India’s north-eastern states are among India’s poorest and most rebellious. In Manipur, on the border with Myanmar, there are more than 20 tribally based separatist groups. The army’s counter-insurgency strategy in Manipur and across the north-east has been to bribe the insurgents to keep quiet and to quash those who refuse to co-operate. India has made major investments in the region’s road network to boost its economy. The Maoist insurgency, known as the Naxalites in the West Bengal area, has formed the Communist Party of India (Maoist). Its influence has spread to 220 of India’s 611 districts, of which 76 are considered “seriously affected”. The Naxalites’ stronghold is in the roadless forests of Chhattisgarh and Maharashtra, where they hide an army of 12,000 ragged revolutionaries. They crop up wherever there are local grievances, thriving in poor and crowded parts of Uttar Pradesh (UP), Madhya Pradesh and Bihar, where the district administrations are weakest. They represent a law-and-order problem which can only be solved by the Chinese formula: rapid economic development. Economic development has already lessened India’s caste divisions. Despised dalits have migrated to India’s cities, where the caste system has diminished. It survives mainly in the marriage market and in advertisements for a spouse. This centuries-old separation will not die soon, but the urban trend is against endogamy. Some observers predict that as caste stratification reduces with economic development, religious conflict may increase. (See James Astill, op.cit., pp.14-15)

The Impact of the World Recession

During the first 18 months of the world meltdown the Indian economy remained relatively undamaged. Its banks were sound and its foreign debt manageable. In common with many others, its stock market crashed, losing 60 percent of its value in 2008. The rupee lost about 20 percent of its value against the US dollar as a result of the outflow of foreign portfolio investments. Many Indian companies were selling rupees for dollars to finance their foreign operations. The Reserve Bank of India had been selling up to $2 billion a day from its foreign exchange reserves, which dropped by nearly $63 billion from a high of $316 billion in May 2008 by the end of December 2008. Short-term lending rates were cut to 6.5 percent. The good news at the end of 2008 was that India’s economy was expected to keep growing, albeit at a lower rate. The weakening rupee assisted Indian exporters, especially the computer-services industry whose main market is American banks. Merchandise exports were down. India is less reliant on exports than most emerging nations since exports amount to only about 22 percent of GDP. If India could restore its growth rate to the 8.8 percent of the recent five-year period, India could be transformed as China has been. Its $1 trillion economy would double in size in less than 10 years and poverty could be reduced at an unimaginable rate. During the 1990s, India’s investment rate averaged around 25 percent of GDP, but since 2003 it has averaged 35 percent of GDP. India particularly needs more investment in manufacturing capability to create jobs. India’s savings rate increased from 28 percent in 2003-04 to 35.5 percent in 2007-08. India’s bulging working-age population gives it a high ratio of earners to elderly dependents. India’s young population structure should enable it to keep its savings rate close to the current level for the next two decades. The darker side of this rosy picture is the danger of inflation. 
As credit soared, the current-account deficit widened and inflation jumped to 7 percent. In addition, the government increased its spending levels by over 20 percent in 2006 and 2007. This has thrown India’s public finances into troubled waters, with a budget deficit in excess of 8 percent. Unfortunately the extra largesse did not go to productive assets but was handed out in wasteful ways such as public service pay increases, subsidies on oil and fertiliser and some high-profile welfare schemes. India spends 3 percent of its budget subsidising manufacturers of urea-based fertilisers (which farmers overuse, poisoning their fields and themselves) and increasing its use of oil. India imports 75 percent of the oil it uses. It also subsidises petroleum products including petrol, kerosene and diesel by fixing the price at which they are sold to consumers. It also taxes these products, but the subsidy makes the budget hostage to the oil price. Because India’s government absorbs so much of India’s savings, the country relies heavily on foreign capital to sustain its high investment rate. Its current-account deficit recently widened to 3.5 percent of GDP. High public spending also contributes to inflation, which limits the Reserve Bank’s scope to keep interest rates low. Higher interest rates will dampen private sector investment.

Lifting the growth rates depressed by the world recession requires reform action on the part of the Indian government. The most pressing reforms include fiscal reform (including subsidies), privatisation of public enterprises, opening state-owned banks to private ownership, reform of labour laws and deregulating the coal and sugar industries. Priority should be given to selling the government’s loss-making companies. (See James Astill’s Special Report on India, The Economist, December 13th, 2008, pp.7-10)

International Perspective

What sort of rising power is India? Until recently India had little interest outside Asia. Its foreign service is still small, with around 600 diplomats. Its foreign trade, though rapidly growing, is still relatively small. But like China, India is developing a foreign policy to meet its economic needs: chiefly access to natural resources and foreign markets. In April 2008 India held a summit in Delhi for 14 African leaders. Its trade with Africa is half the size of China’s, at around $30 billion. The summit was dominated by private companies, which are leading India’s overseas investments. This helps to ensure that India escapes much of the opprobrium heaped on China for consorting with dictators. But India is as pragmatic as China. It trades with Myanmar, which has oil and gas that India needs. It also trades with Iran, with which it is negotiating to build a $7.5 billion gas pipeline. India’s major foreign headache is its own messy regional environment, especially Pakistan, which is one of the most worrisome powder kegs in the world. India clearly is deeply concerned about Pakistan’s problems. After the Mumbai bombings, India did not accuse the Pakistan government. It did not threaten Pakistan with a military reprisal. It did not withdraw from a four-year diplomatic effort to “normalise” its relations with Pakistan. Kashmir remains a sticking point. India and Pakistan both claim all of Kashmir. India controls the rich valley of Kashmir and Pakistan a poorer portion. On both sides leaders have mentioned the possibility of formalising the existing divide as a “soft border”. But India also has a long history of meddling in Pakistan’s politics. Bangladesh is, to India, a semi-hostile nation of 153 million delta-dwellers under military rule. Illegal Bangladeshi migrants are a constant source of tension in India’s north-eastern state of Assam. During floods in the delta, millions of Bangladeshis seek refuge in India. 
But South Asia is the least integrated region in the world. Trade between its members accounted for less than 2 percent of their combined GDP in 2007. Two-way trade between India and China climbed from $2 billion in 2002 to $38 billion in 2007. It represented a small portion of China’s trade, but formed an encouraging basis for a relationship between two countries that fought a border war in 1962 and still claim portions of each other’s territory. India’s armed forces are, like its economic progress, at least a decade behind China’s. India also spends less than half of what China spends on defence. But India does have one important advantage over China. It lies in the way much of the world perceives it: as well-intentioned and democratic; chaotic, but not inscrutable and malign. The Economist of December 13th, 2008, made some snide and cynical remarks about George Bush entering into a civil nuclear co-operation agreement with India. It stated that “it is safe to assume that Mr. Bush’s fear of a rising China, and his wish to bolster India against it, was the main motive for the nuclear detente”. It also described Manmohan Singh’s remark to Bush “... that the Indian people love you, and all that you have done to bring our two countries closer to each other” as “unfashionable” and something to be “disliked”. The editorial staff of The Economist are possibly not fully aware of the extent to which Britain used Indian troops to fight its wars. In World War I about one-third of British forces in Europe were Indians (about 2 million). In World War II over two and a half million Indians served in the British forces – even though they were paid only 18 rupees a month in comparison with the equivalent of 75 rupees for a British soldier. With its huge population, India has an important strategic role to play in the modern world. Hopefully that role will be a benevolent one, serving the interests of the free world.

Prospects

India’s main challenge in the next few decades lies on its domestic front. It has to lift the bulk of its huge population out of abject poverty and provide the necessary quality of public services and the enabling policy environment for its economy to grow. Nandan Nilekani, a prominent young Indian businessman (co-founder of Infosys, the country’s second largest IT company), recently wrote Imagining India: The Idea of a Renewed Nation – an interesting analysis of India’s prospects. The first part of the book explains why democratic, English-speaking India is starting to achieve its potential and how that could lead to a globally influential position for the country. The second catalogues the alarming reasons why the country could still fall apart. Nilekani believes India now stands evenly balanced between the reluctance to change in the face of immense challenges on the one hand and, on the other, the possibilities that could arise out of tackling these issues head-on. India will either become a country that greatly disappoints when compared with its potential, or one that surpasses all expectations. India has already taken some steps to reduce the dead weight of bureaucracy with its near-infinite paperwork and corruption, to maintain its food security, to cut its patronage-based subsidy system, to build an integrated national gas grid to transport India’s growing supply of relatively clean natural gas, to deregulate its labour market, to strengthen its “deliberative democracy” and to bridge the yawning gap between rich and poor. Unfortunately no outline has been provided of the steps to be taken to transform Indian politics into a reliable vehicle for reform. India’s economy is the vehicle required to transport its people to a better future with better schools, health services, housing, job opportunities and a higher standard of living. It requires a sustained high rate of economic growth. As long as that happens, India’s emergence will continue.

References

Anderson, C.W., Van der Mehden, F.R. & Young, C. (1967) Issues of Political Development, Englewood Cliffs: Prentice-Hall Inc.
Astill, James (2008) “A Special Report on India”, The Economist, December 13th, 2008
Basham, A.L. (1997) Hinduism, Encyclopaedia of the World’s Religions, Barnes & Noble
Duncan, E. (1995) “India Survey”, The Economist, January 21st, 1995, pp.4-30
Enloe, C.H. (1973) Ethnic Conflict and Political Development, Boston: Little, Brown & Co.
Ferguson, Niall (2004) Empire – How Britain Made the Modern World, New York: Penguin Books
Long, Simon (2005) “A Survey of India and China”, The Economist, March 5th, 2005, pp.3-20
Long, Simon (2006) “A Survey of Business in India”, The Economist, June 3rd, 2006
Nilekani, N. (2009) Imagining India: The Idea of a Renewed Nation, London: Penguin
Wolpert, S. (1993) A New History of India, New York: Oxford University Press


9 The Momentum of East Asia

Since World War II several East Asian countries have achieved spectacular economic growth. Japan emerged as an economic superpower. In its wake followed the “Four Tigers”, usually identified as Hong Kong, South Korea, Singapore and Taiwan. Another four countries, usually identified as newly industrialising economies (NIEs), have also emerged as fast-growing “little tigers” (as they were then called): Malaysia, Thailand, Indonesia and China. Straggling far behind the “little tigers” are an additional number of countries in East Asia that are often overlooked: the Philippines (93 million), Vietnam (86 million), Myanmar (53 million) and North Korea (24 million). Together these countries today represent a total population of some 820 million. Mainland China, also located in fast-growing East Asia, is an emerging giant with a population of approximately 1300 million – the largest in the world. In view of its growing international role and its unique character as a modernising totalitarian communist country, it will be analysed separately. With the exception of the stragglers, these countries grew, on average, three times faster than the OECD economies during the 1980s and early 1990s. The top eight economies proved that they could sustain growth rates of over 7 percent a year – a rate at which an economy doubles in size each decade. Pundits predict that by the middle of the 21st century there will have been a shift in economic power away from Europe and North America to the western side of the Pacific Rim. What makes these countries more successful than others? Many observers and commentators have attempted to identify and define the sources of the East Asian achievement, and the literature grows increasingly voluminous. More complex analytical models have emerged incorporating not only economic variables but also political and cultural factors (see Diagram 1). This survey will focus on some of the lessons to be learned from East Asia’s emerging economies. 
Most of the conclusions are based on the experience of Japan, Taiwan, South Korea, Hong Kong and Singapore. Where relevant, reference will be made to specific countries.
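The rule of thumb that 7 percent annual growth doubles an economy each decade follows from the compound-growth doubling-time formula t = ln 2 / ln(1 + r). A quick Python check of that arithmetic (the 7 percent figure is the survey's; the function name is illustrative):

```python
import math

def doubling_time(annual_growth):
    """Years for output to double at a constant annual growth rate,
    from (1 + r)^t = 2, i.e. t = ln 2 / ln(1 + r)."""
    return math.log(2) / math.log(1 + annual_growth)

# At 7 percent a year an economy doubles in roughly a decade.
print(round(doubling_time(0.07), 1))   # about 10.2 years
# At 5 percent, by contrast, doubling takes over 14 years.
print(round(doubling_time(0.05), 1))
```

The same formula underpins the earlier observation that India's $1 trillion economy, growing at 8.8 percent, would double in under ten years.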

Japan’s Post-War Recovery Template

For more than 250 years the Tokugawa shoguns governed Japan as a reclusive feudal state. Then in 1853, the American Commodore, Matthew Perry, steamed into Tokyo Bay and opened the country to trade. Japan then emerged as a rapacious colonial empire, powerful enough to conquer most of the Pacific Rim countries up to the Indian border and then, as part of the Axis Powers, to attack the Americans at Pearl Harbour in 1941. When World War II ended with the American nuclear bombs in 1945, Japan was on its knees. Its economy was shattered and its infrastructure devastated. Reconstruction had to start from scratch. The American occupation of Japan after 1945 brought in its wake direct exposure to the American way of doing things. After initial hardships brought about by dislocations, shortages and high inflation, the Americans introduced the “Dodge Plan” which set the ball rolling for the Japanese economic recovery. In addition, Japan served as supply base for American forces involved in the Korean War, which began in 1950 and stimulated an export boom. By the mid-1950s Japan’s steep growth path took off. By 1964, when the Olympics came to Tokyo, Japan’s national income was approaching the West European level. Consumers were acquiring the “three sacred treasures” – televisions, washing machines and refrigerators. By 1970 they graduated to the “three C’s” – car, colour television and air conditioning. By the 1980s the economy moved into a rapid technological growth phase. The Tokyo Stock Exchange was equal to that of New York, and of the world’s ten biggest banks, eight were Japanese. By 1990 Japan’s exports produced such high levels of foreign reserves that Japanese companies began a shopping spree, acquiring foreign assets such as New York skyscrapers, Hollywood film studios and French paintings. Japan’s spectacular growth was based on several fundamentals: strong industrial-financial combinations (keiretsu), a large and educated workforce, low inflation and a high savings rate. It absorbed as much American and European technology as it could buy and copy (e.g. transistors). It furthermore nurtured inherent strengths of Japanese culture: an incredible work ethic, an intense identification with the employer-firm, a strong sense of national identity, a keen desire for a better life – and a searing memory of the humiliation of their World War II defeat. Also central was Japan’s commitment to exporting its way to growth. International trade became the bedrock of Japanese economic progress. It moved up the product chain: from textiles and simple manufactures to ships and steel to complex mechanical goods, electronics and high technology. (See Yergin and Stanislaw, op.cit., pp.160-162) Japan’s growth pattern became the template for its East Asian neighbours.

Industrial Development and Export Promotion

In the early stages of the economic development of Japan, Taiwan and South Korea, their economies were characterised by limited natural resources, an over-supply of unskilled labour and a shortage of capital. The private sector was weak and the government played an active role in planning and encouraging industrial development. Many measures were taken, and most of them were restrictive or protective in nature or took the form of a subsidy (such as tax reductions, tax exemptions and the provision of finance at lower interest rates) for the setting up of specific industries or the manufacture of specific products. The protective measures took the form of high tariff and non-tariff barriers to support the growth of infant industries, as well as the control of foreign exchange in order to make effective use of this scarce resource for importing the machinery, equipment and raw materials needed for industrial production. In addition, several policies were implemented and measures were taken to encourage the process of industrialisation:
- encouragement of investment;
- export processing zones;
- science-based industrial parks;
- infrastructure development;
- an appropriate education system.

A common feature of the economic growth of particularly Japan, Taiwan, Hong Kong, Singapore and South Korea was the export expansion strategies that unleashed the potential productive resources – especially labour. After a period of import substitution based on tax relief, loans at low interest rates, and tariff and non-tariff barriers, Japan, Taiwan, South Korea, Hong Kong and Singapore pursued very active export expansion measures. In the case of Taiwan this programme of export promotion entailed several measures after 1955:
1. There was a significant devaluation of the currency. Initially a dual exchange rate was adopted, with one rate as the basic official exchange rate and the other applied to export proceeds and inward remittances. After five years a single exchange rate was adopted.
2. Tariffs were reduced and strict import controls were eased, especially for imports of materials and equipment used in the production of exports.
3. The scheme of export incentives was expanded to include not only rebates of customs duties on imported raw materials, but also exemption from stamp duties, a lower taxable income base, special low-interest loans, direct subsidies, and government-financed export promotion facilities and market research.
4. Tax-free and duty-free export processing zones were created to attract foreign investment.

As a result, export trade rose by 12 percent a year between 1955 and 1962 and at a rate of 25 percent between 1963 and 1972. The share of employment in the export industry, as a percentage of total employment, went up from 12 percent in 1961 to 34 percent in 1976. Export expansion has been a key factor contributing to employment growth (Chi-ming Hou, 1988, pp.39-45).
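Compounded over the cited periods, those export growth rates imply an expansion of more than an order of magnitude. A small illustrative sketch (the period lengths of 7 and 9 years are our reading of the cited year ranges, not stated in the text):

```python
# Compound the cited Taiwanese export growth rates over their periods:
# 12 percent a year over 1955-1962, then 25 percent a year over 1963-1972.

def compound(rate: float, years: int) -> float:
    """Multiplicative expansion after `years` of constant annual growth at `rate`."""
    return (1 + rate) ** years

phase1 = compound(0.12, 7)   # 1955-1962: roughly 2.2x
phase2 = compound(0.25, 9)   # 1963-1972: roughly 7.5x
print(f"Overall expansion of export trade: about {phase1 * phase2:.0f}x")
```

On these assumptions, export trade would have grown more than sixteen-fold over the two phases combined, which is consistent with the employment-share figures that follow.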

Encouraging Savings and Investments

In contrast to less successful countries such as Brazil, Mexico, Venezuela, Morocco and Tanzania, where low-interest-rate policies were coupled with double-digit inflation, the East Asian countries encouraged their people to save more in order to generate domestic finance for sustained rapid growth. All these countries allowed their domestic interest rates to rise to a reasonable level in their effort to curb domestic inflation. Consequently, saving through domestic financial institutions was encouraged. At the same time, this realistic high-interest-rate policy prevented the wasteful use and misallocation of scarce capital and thus ensured fair returns on investment projects. All four countries took care to promote the development of appropriate financial institutions and financial markets to channel funds from savers and lenders to borrowers and investors by expediting the creation and trading of financial instruments in the foreign exchange market, the money market and the capital market. High positive rates of return on savings deposits have helped to mobilise savings to financial institutions, but have also made capital markets less attractive. Tax measures were used to encourage savings and investments, e.g. exempting from personal income tax the interest income from savings and fixed-term deposits with maturity terms of two years or more, as well as exempting from corporate income tax the profits that were ploughed back into investment. These inducements resulted in an inflow of voluntary savings into the banking system, which provided much-needed non-inflationary financing for domestic investment. The investment activities made possible by these non-inflationary sources of finance brought about a rapid increase in productivity. In Taiwan the rapid increase in real income also enhanced savings. In this way Taiwan was converted into a country whose people had a high propensity to save.
In 1952 Taiwan saved 5.5 percent of its national income. By 1963 the figure stood at 13.2 percent, and by 1980 savings in Taiwan had climbed to the high level of 35 percent, compared with 22 percent in Japan and less than 9 percent in the UK and the USA during the same year (Lee Tsai, 1988, pp.232-253). Foreign investment has been encouraged as a national policy in all four newly industrialised economies (NIEs). The contribution of foreign investment was not only financial but also extended to better technical know-how, more efficient management, the opportunity to import parts and appliances, and excellent marketing contacts for export expansion.

Balancing Market Forces and Economic Planning

Most East Asian economies, at the start of their economic development, faced a series of obstacles: high unemployment, a lack of infrastructure, a high inflation rate, insufficient capital and a lack of entrepreneurial confidence. The private sector was characterised by small, traditional and even unsophisticated enterprises. The governments of these countries had to adopt measures to overcome these difficulties and took a leading role, giving direction to national economic development while at the same time encouraging the free play of private enterprise. The nature of economic planning in these countries should not be confused with the rigid systems associated with centrally planned economies. East Asian governments did not invoke central authority to compel private enterprise to adhere to government guidelines or to meet the targets that the government set. Instead, the various governments employed policy measures such as tax reductions or tax exemptions, as well as financial measures, to induce private enterprises to develop industries with the potential for growth in terms of comparative advantage in the world market (Tzong-shian Yu, 1988, pp.124-126).

In all the successful economies of East Asia, industries were privately owned rather than belonging to the government, except in naturally monopolistic industries such as public utilities. This choice turned out to have enormous economic significance because the owner-managers were motivated to make profits and hence prepared to put their own equity capital at risk. These owner-managers had the incentive to maximise profits and minimise losses and to pay careful attention to the consequences of what they did or did not do. A further factor common to the successful economies was the extension of the rule of law to the economic sphere. This implied not only the enforcement of legal contracts but also the guarantee of due process and the reduction of discretionary administrative powers in economic matters. The rule of law enhanced predictability, the security of returns on investments and the curtailment of nepotism. It meant that the rewards of economic success went to the efficient and not to the merely politically powerful or well-connected, and certainly not to the government of the day. This is not to say that East Asian governments were never interventionist; they intervened where they could attain certain recognised public policy objectives. But on the whole these governments have not directed the activities of private firms. They have allowed owner-managers to run their own firms and retain their taxed profits. An additional important factor was the presence of competition. Most governments have generally refrained from creating or supporting domestic monopolies. The East Asian governments did not compete with private firms. Moreover, the export orientation of these economies implied that most enterprises had to compete internationally and to comply with the discipline imposed by the competitive world market.
Private enterprise provided the profit motive, but competition and the rule of law ensured the efficient allocation of resources (Lau, 1990, pp.239-241). In Japan and Taiwan the government allowed companies to run their enterprises as they saw fit within the bounds set by government policy. In sluggish India, on the other hand, the government tended to control everything and ended up serving only the vested interests of a handful of big businesses.

Equal Opportunities, Upward Mobility and Political Stability

The Economist maintained that it was the feeling that everyone was in the same boat that allowed the “tiger” governments to get away with being interventionist for so long. It gave governments the power to push through unpopular measures in times of economic crisis. This frame of mind was enhanced by conscious efforts to couple growth with the removal of unearned disparities in opportunities and income. Relatively equal opportunities tended to give people a common bond and a sense of destiny. The Economist claimed that the president of Korea’s largest company earned only nine times the pay of one of its production-line workers. At any American company of comparable size, the difference between the two would have been closer to a hundred times – if not more! In Japan, South Korea, Taiwan and also China, comprehensive land reform policies were introduced. Under Taiwan’s “land-to-the-tiller” policy, which was implemented in the 1950s, large tracts of land originally owned by a small number of landlords were expropriated with government compensation and transferred to the tillers. These tenant farmers were then in a position to manage their own farms after obtaining ownership of the land. The former landlords, in turn, invested their money in industries, thus creating job opportunities for the under-employed surplus labour force on the farms. The successful implementation of land reform solidified Taiwan’s agricultural development and helped stabilise social and political conditions. This achievement also provided a favourable climate for the development of the industrial sector. (See The Economist, November 16, 1991)

An obvious characteristic of the political environment of the successful East Asian countries lies in their stability. Over the period 1960-1990 there was not a single instance of the transfer of power to another political group in any of the eight economies. China remained a communist dictatorship. Singapore, Indonesia and Malaysia have each been governed by the same authoritarian party since achieving independence. South Korea and Taiwan have begun to liberalise, but continued to be ruled by the same “old guard” elite structure. Hong Kong remained a colony of the United Kingdom until 1997. Indonesia’s long-serving President, Suharto, had military backing. In Thailand the generals and civilian politicians took turns in forming the government, while a small group of technocrats appeared to be running the country. This elite group’s first loyalty was to the king, who gave stability to the nation. Although the political lessons learned from the emerging economic powers of East Asia may be unacceptable to persons with a strong liberal-democratic persuasion, such examples do not necessarily mean that economic growth requires authoritarian government per se. Emerging Asia’s governments have been economically enlightened. They have not flinched from taking tough measures to maintain macro-economic stability and have ensured that economic policies are predictable and transparent. An additional feature of the prevalent style of government in East Asia is the predominant role played by small cliques of unelected technocrats. These bureaucratic elites are found in the feudal fiefdoms of the Japanese manufacturing industry (“samurai” and “keiretsu”); in the corridors of Japan’s Ministry of International Trade and Industry (MITI); among the modernising Chinese Mandarin elite officials who occupy key positions in Taiwan, Hong Kong and Singapore; and in the so-called “Berkeley Mafia”, a group of hand-picked technocrats who have won accolades for their role in assisting Indonesia’s growth in the 1980s.

Education, Training and Technology

The single biggest source of comparative advantage for the successful East Asian countries lies in their well-educated and well-trained workforce. An old Chinese maxim, often attributed to Confucius, holds: “If you plan for a year, plant a seed. If you plan for ten years, plant a tree. If for a hundred years, teach the people.” This advice has not been forgotten in East Asia. Japan introduced compulsory elementary education in 1872 and became one of the world’s most education-conscious societies. This obsession rubbed off on Japan’s former colonies, Korea and Taiwan. Today a Korean teenager is said to be more likely to go to university than his Japanese peer. Korea spent 5 percent of its GDP on research and development in 2000. Taiwan’s achievement is equally impressive. It has 42 universities and 75 polytechnics turning out 40,000 engineers and 140,000 technicians each year. In 1980 one in every four candidates for doctorates in electrical engineering at American universities came from Taiwan. Taiwanese companies were not following technology; they were leading it in many fields. From the experience of these countries it is clear that sound education improves the quality of the labour force and enables its members to adopt advanced and sophisticated productive technology. It also induces innovation and invention which, in turn, enhance efficiency and productivity.

An Effective Entrepreneurship Culture

East Asia’s miracle economies have been driven by Asian enterprises which flourished long before the arrival of the Europeans, with skilled craftsmanship, trade and commerce forming an integral part of traditional life. The growth impetus came from entrepreneurially driven enterprises: some large, many medium-sized and a multitude of smaller ones. K. Imai described the post-war economic growth process in Japan as follows: “In the process of the post-war economic growth, large oligopolistic firms grew up through introducing innovations of large-scale technology which enabled them to enjoy advantages of mass production and mass marketing. They gave the driving force of rapid growth, thereby expanding industrial fields subject to large-scale production. On the other hand, however, the process of rapid growth, accompanied by changes in the industrial structure, diversified consumer demands and helped to create a great many opportunities for different kinds of goods to be produced in small lots by small firms, thereby giving rise to the new types of distribution and services. Such opportunities created favourable conditions for the growth and development of firms of various sizes, including small firms.” (Imai, 1980, p.103)

By 1990, Japan had 6.5 million businesses in operation. Only 46,000 of these could loosely be described as large corporations; the rest were small to medium-sized firms. The vast majority of these – some 5.6 million firms in all – were active in the services and tertiary fields. The remaining 900,000 have traditionally been the loyal burden-sharers for Japanese manufacturing as component suppliers to big manufacturers. These suppliers usually have to absorb the unemployment costs when big firms take back their sub-contracted work and do it in-house during economic downturns. Subsequently, the proportion of small firms earning their living by sub-contracting declined to less than 50 percent of the total. Many became market producers in their own right, while others captured new markets – at home and abroad. Empirical research by Redding, which focused on the sources of real growth in value added in Taiwan and South Korea, found that 72 percent of the growth in value added was attributable to expansion by existing firms, whereas 28 percent came from new entrants. It was concluded from this finding that expansion, not entry, was the critical factor. The problem seems to have been less a matter of entrepreneurial quantity than one of quality (Redding, 1988, p.7). It was also established that many new firms failed initially: roughly 25 percent failed after four years and a further 10 percent after six years. But the survivors were generally those that expanded rapidly, and it was on their shoulders that the economy rested – not so much on the large number of newcomers churning around at the bottom of the pile (Redding, 1988, pp.7-8). The key entrepreneurial skills found crucial to successful business development were initiating and co-ordinating skills.
All the emerging countries of East Asia were found to show certain values and structural characteristics embedded in their societies which facilitated and enhanced entrepreneurship in both of its main features: the initiating as well as the co-ordinating side of business operations. East Asian society accords high status to businessmen and has developed systems which facilitate innovation and support such values as risk taking, achievement, wealth creation, co-operation, trust and professionalism. In addition, favourable “structural factors” were present, such as a sound education system that provided training in professional and managerial skills. The Chinese family business, dominated by the pater familias and internally owned and controlled, plays a prominent entrepreneurial role in most East Asian economies. The so-called “Overseas Chinese” form a highly significant economic group, particularly in Taiwan, Hong Kong and Singapore. The Economist stated in 1991 that their influence extends to other East Asian countries too: “Indonesia’s Chinese minority, only 5 percent of the total population, controls an estimated 75 percent of corporate assets. For the past 30 years Malaysian politics has been dominated by efforts to redistribute wealth from the Chinese minority to the Malay majority. Half of Thailand’s GDP is produced by Bangkok, a Chinese city in Thai disguise.” (The Economist, November 19th, 1991, p.8). Various research projects have found that the “Overseas Chinese” maintain value systems and kinship structures which facilitate the initiating facet of entrepreneurship. The heavy reliance on family as the principal unit of society means that survival and security are closely allied to the success of the family business. The capital resources required are provided by the savings of family members – or by private loan associations consisting of relatives and friends, by means of drawing lots. Family members share all kinds of jobs – from manual labour to management.
Their willingness to work hard has kept labour costs low and created a favourable environment for business enterprise. This environment has inevitably spawned a large number of small firms, further spurred on by values which support being an owner rather than an employee. On the negative side, it was found that the family business context poses barriers to the higher levels of co-ordination necessary for the growth of large firms, as the forces of family control struggle with those of neutral professionalism (Redding, 1988, pp.14-15). As regards the other indigenous population groups, it was found that prevailing socio-cultural values have been barriers to the crucial first entrepreneurial stage of initiating. Their values about going into business appeared to be less encouraging than in the Chinese case. Family survival is less sharply an issue in people’s perceptions. Risk-taking is less of a norm, and conservative orientations are likely to dampen innovative tendencies. The structural factors, particularly the labyrinthine bureaucratic controls, but also misguided economic policies and financing systems, are less conducive to either initiation or co-ordination activities than they are in the cases of Japan, Taiwan, South Korea, Hong Kong and Singapore. These findings suggest a close association between the emergence of entrepreneurial activity and the economic milieu created by government policy and by the socio-cultural features of society.

Effective Business Networking

Another noteworthy characteristic of the successful East Asian economies is their highly effective system of business networking – the best example of which is found in Japan. The key to the Japanese business world (“zaikai”) is said to lie in its hierarchical structure, which in turn is largely patterned on the traditional feudal structure. For centuries Japan was divided into small feudal fiefdoms called “han”. Each “han” was controlled by one man – the “daimyo” – who lived in a fortified town surrounded by the agricultural land that provided his tax base and his military power. Below the daimyo’s family were his most trusted retainers – “samurai” of the highest rank who served his household – and below them additional layers of “samurai”. At the bottom of the pyramid were the hierarchies of common people: farmers, artisans and merchants. Giant parent companies like Matsushita, Toshiba, Hitachi, Sony, Fujitsu and others stand at the apex of their vertically integrated business pyramids. Each manufacturing “family” of dozens of related companies includes some that are large and powerful in their own right, many of which are listed on the stock exchange. Some family members may be related in a “federal”, loose-jointed structure, while others are more centralised and “unitary” in structure. Beneath the parent company’s directly associated corporate “family” are the trusted retainers – primary sub-contractors – beneath which lie further layers of sub-contractors. Rather than doing their own designing and manufacturing, the various parent companies, called “dai kigyo” or “keiretsu”, co-ordinate a complex design and manufacturing process that involves thousands of medium-sized and small companies. Small to medium-sized companies called “chu-sho kigyo” make up the bulk of Japanese industry and form the real foundation of the Japanese economy.
In 1990 it was estimated that over 75 percent of Japanese companies were capitalised below US$70,000 and only 1 percent of all companies in Japan were capitalised over US$700,000. Small and medium-sized enterprises employed over 80 percent of the national workforce (K Sakai, 1990:40). But even the giant “keiretsu” manufacturing companies are integrated into large industrial groups. Some are part of the influential pre-war Big-Six “zaibatsu” cliques, e.g. Mitsui (Nimoku club), Mitsubishi (Kinyo club), Sumitomo (Hakusui club), Fuji (Fuyo club), Sanwa (Sansui club) and Dai-Ichi Kangyo (Sanzen club). Others include non-zaibatsu groups formed around top manufacturing companies, banks and retailing conglomerates such as Toyota, Hitachi, Matsushita, Seibu and Tokyu. In step with the era of high technology, industrial groups are pursuing greater co-operation among member firms, organising joint research and development projects in the fields of new materials development, biotechnology, electronics and data communications. Japan’s awe-inspiring trade structure hinges on large trading groups called “sogo shosha”. These general trading companies differ from other giant groups in that they are not necessarily involved in the manufacturing field. Instead, they tend to be orientated to both the supply and demand side and to function as problem solvers. Their investments are directly connected with trade and international business. The most prominent among them are the Mitsubishi Corporation, Mitsui & Co., Itoh & Co., the Sumitomo Corporation, the Marubeni Corporation, Nissho Iwai, Toyo Menka Kaisha, Kanematsu-Gosho and Nichimen. With the support of MITI they became the advance guard of Japan’s export drive. Since they played such a crucial role in exports and imports, they served as catalysts for Japan’s rapid economic growth. They have vast communication networks spanning the world, collecting and transmitting data on day-to-day commodity price fluctuations, markets, and areas of surplus and shortage. They are involved worldwide in virtually any kind of commercial transaction, reaching far beyond trade itself into resource development, finance and the organisation of industrial projects, and they offer clients a wide range of services: information, expertise, insurance, shipping, etc. Their total annual sales amount to over 40 percent of Japan’s GNP.

Japan’s business world is well organised to protect and further its interests in government, political circles, organised labour, the media and foreign countries. There are four core organisations that function as vehicles for this rather intangible business activity: the Federation of Economic Organisations (“Keidanren”), the Japan Committee for Economic Development (“Keizai Doyukai”), the Japan Federation of Employers’ Associations (“Nikkeiren”) and the Japan Chamber of Commerce and Industry (“Nissho”). Officials who serve in these organisations are top executives from leading corporations, and many have posts in two or more organisations. By far the most important and powerful is the “Keidanren”, a conglomeration of big businesses. Its chairman is often referred to as the “Prime Minister of Commerce”. The “Keizai Doyukai” is an assembly of individual business leaders, and the organisation’s influence is more of an ideological or theoretical nature. The “Nikkeiren” is, broadly speaking, an employers’ bastion, and its activities are primarily targeted at the affairs of labour. The Japan Chamber of Commerce and Industry (“Nissho”) is an organisation for small and medium-sized enterprises. The “Keidanren” is a nationwide body with 117 associates and numerous corporate members. Its central organ of decision-making is the board of directors with 470 members.
It performs the following roles:
- It consolidates and co-ordinates the views of the business community and conveys these views to government and political parties;
- It responds to official and unofficial requests for counselling and recommendations;
- It provides representation on advisory organs of government, enabling it to participate in the formulation of national economic policies and strategies;
- It maintains close day-to-day communications and contacts with officials of various government agencies involved in economic matters;
- It maintains bilateral contact with countries of particular economic importance to Japanese business.

Work Ethic and Non-Disruptive Labour

In societies where work is regarded as a virtue and idleness as a vice, it is hardly surprising that East Asia’s economies have managed to maintain non-disruptive labour relations and continuous economic expansion. The Confucian philosophy coupled with Chinese and Japanese tradition appears to have engendered the ability and willingness to work hard and for long hours. The general labour relations picture is characterised by the employee’s strong sense of affiliation to the enterprise, weak umbrella unions and the dual structure of labour-management relations. The strong sense of affiliation to one’s employer apparently arose out of Japan’s feudal history and traditional social structures. The greatest assurance of economic security for a worker lay in gaining a permanent attachment to the employer company. This pattern led to the emergence of “enterprise” unions, consisting of workers in a given company or plant. These unions cross trade or skill barriers and usually include both blue-collar and white-collar employees. Any antagonism between management and workers is perceived as a family or household dispute. A specific time in each year is set aside for handling disputes between labour and management and this is known as “Shunto”. This annual bargaining for higher wages each spring began under the leadership of Sohyo (The General Council of Trade Unions of Japan) to support the position of “enterprise” unions and the largely non-unionised sector – Japan’s myriad small and medium-sized enterprises.

Post-war industrial peace in Japan as well as in other East Asian countries has been based on the realisation that the enterprise encompasses a community of people – both management and other employees – who are bound together by a common destiny. Employees in major Japanese corporations form the heart of the corporations in the same way as shareholders form the nexus in US and European corporations. In addition, the Japanese worker has become both producer and supervisor on the shop floor, taking a personal interest in improving production methods in the certain knowledge that the company has to survive in Japan’s highly productive and competitive environment. Recognising satisfactory performance is another way of linking the fortunes of the worker to those of the enterprise. Japan has a system of paying bonuses annually to all employees, depending on how the company has fared. This system forms part of the catalyst that makes Japanese enterprises highly efficient in marshalling their employees’ energies. Also in the other high-performance economies that share the Confucian heritage, enterprises adhere to the Chinese tradition in labour relations, which promises both flexibility and employment stability. Bonuses are paid to workers at major festivals and at the end of the year. Relations between employer and employee are more permanent than in the West, and employers are subject to moral pressure to take proper care of their workers.

Integration of Traditional and Modern Management Styles

The high-performance economies of East Asia followed in Japan’s wake in combining traditional Pacific values and styles with modernised management methods, styles and systems. For several decades the Japanese were perceived as having perfected the art of taking apart other people’s products and working out how to make them almost as well and a great deal more cheaply. They have eventually shed the image of merely being good imitators and have entered a new era of creativity in management and industry. For many years there was a heavy reliance in Japan on Western, particularly American, management approaches and methods. Of particular relevance were Frederick Taylor’s “scientific management”, Deming and Juran’s “quality control”, Larry Miles’ “value management”, Chester Barnard’s “managerial co-ordination” and Peter Drucker’s “management by objectives” (MBO). Although these approaches and systems have been superseded in Western business schools by other fashionable concepts and fads, the original theories not only thrive in East Asia but have been adapted and developed into company-wide systems. One such example is the “value engineering” (VE) methodology, which aims to improve product and service functions, reduce costs and improve work efficiency. The technique analyses product function improvements through creative techniques such as brainstorming and the analysis of alternatives. Other examples are Quality Assurance (QA), to assure product reliability, and Technology Promotion (TP), focussing on product promotion and on rewards for invention and creative ideas. Japan’s success in adapting and extending Western management techniques is akin to its successful exploitation of Western electronic technology.

Low Dependency Ratios

A demographic factor that underpinned much of East Asia’s economic success has been the favourable ratio of the region’s dependants – those 15 and under or 65 and over – to its working-age population. Since the 1970s it has fallen from 80 percent (including many children) to 55 percent. For the whole region it is estimated to reach its lowest point around 2015 at 49 percent. This demographic dividend (which excludes Japan) means a swelling cohort of working-age persons – comparable to the USA’s post-World War II baby-boomers. East Asia’s baby-boomers are having a favourable effect on consumption patterns and expectations of a better future which includes more discretionary spending. It is also argued that the region’s unofficial or black-market economy is as much as 50 percent of the size of the official economy (Japan excluded). This part of the economy could act as a better engine of higher consumption because it is not taxed at source.

As high savings fuelled much of the initial export-driven growth of the region, the new generation of baby-boomers could lead to a virtuous cycle of higher domestic consumption. It would encourage a more balanced development pattern, less dependent upon exports to unreliable foreign markets and more dependent on local consumption.

Reconstruction of Singapore Under Lee Kuan Yew

Singapore is an island city-state with a population of 4.6 million. Chinese make up 77 percent, Malays 14 percent and Indians 8 percent. For over 140 years of British rule until 1959, it had a simple entrepot trade economy with little agriculture and no industry. Since 1959 it has striven towards inter-racial harmony by ensuring minority representation through group representation constituencies. This required a group of four candidates for each constituency to include at least one candidate from a minority racial group. Thirty years ago Singapore suffered from political unrest, high unemployment and low economic growth. The country faced communist insurrection with strikes, go-slows, arson, political assassinations and general disorder. When Mr. Lee Kuan Yew took over as Prime Minister in 1959, the establishment of stability became his first priority. Thereafter came the restoration of social and work discipline. These objectives took several years to achieve, but they paved the way for economic growth through investments and trade. It was not easy to reverse the habits of a society that had become used to disruptive and destructive behaviour. Revolutionary political movements had to be converted from agitation and disruption to a restoration of law and order, to establish stable conditions so that learning, hard work and constructive endeavour became rewarding. It was not easy to enthuse a people to make a U-turn in attitudes and behaviour when the prospect offered was not a quick victory and acquisition of wealth, but a long hard slog to create conditions for investment and growth. It was a battle for the hearts and minds of the people against the communists who were still intent on continuing destruction. According to Lee Kuan Yew it took ten years to win the people over and get them to relearn the habits of learning, working, co-operating and succeeding. By the mid-1960s the preconditions for economic growth were established and big investments started to flow in.
An Economic Development Board was established with offices in Europe, the US and Japan to woo and assist multi-national companies to invest. They were offered tax holidays and incentives, but the real incentive was Singapore’s stability, with its former militant labour force becoming an industrious and keen partner to management in the pursuit of skills, knowledge and higher productivity. It required a painstaking and sustained effort to explain the lessons to the workers and re-educate the union leaders, managers and government officials. Tripartite bodies of labour, management and government were established in a National Wages Council to set guidelines on what wages the economy could bear in each sector. Similarly a National Productivity Board was established to get workers and management to learn productivity techniques and to build up the spirit of co-operation so that productivity could improve and so increase profits and wages. Once economic growth had got going in earnest, the government launched a massive programme of public housing to give each worker a stake in the country’s progress. A compulsory savings scheme was built up to which employers and employees each initially contributed 5 percent of wages. Today this contribution has increased to 20 percent of wages from each side. Workers can borrow from this fund to buy their own homes and still have a retirement nest egg. Today nearly 90 percent of workers own their own homes bought with these savings. Singapore’s key strengths in fostering growth have been meritocracy and an efficient and corruption-free administration. A person earns his place on merit. Race, status, connections or the influence of his parents or friends are conscientiously screened off or filtered out. This was not easy because in the aftermath of victory in the post-independence elections, party stalwarts wanted to be rewarded for their loyalty and support – the only virtues they believed merited recognition.
Meritocracy had little appeal to the previously disadvantaged. But the leaders had to moderate and temper expectations. They had to resolve the contradictions between the aspirations of their supporters and the realities of the economy. They were faced with enormous pressures to redress the inequities of past distribution of opportunities and wealth. According to Lee Kuan Yew the art of government in such a situation is to redress grievances in a manner and at a pace that does not turn the rules of economics upside down.

The Malaysian Experience

The Malaysian success story involves not only remarkable economic development, but also the maintenance of reasonable stability in its multi-racial and multi-religious community. Its population of just over 25 million includes 58 percent indigenous Malays, 24 percent Chinese and 8 percent Indian, Pakistani or Tamil. It is a parliamentary monarchy which reflects the British colonial influence during the period 1896 to 1963. In 1963 Malaysia was formed, consisting of the eleven states of peninsular Malaya, Singapore, and the two states of Sarawak and Sabah along the northern coast of the island of Kalimantan (Borneo). In 1965 Singapore broke away from Malaysia to become an independent state. Friction between the Malay and non-Malay communities has remained a source of tension in Malaysia for many decades and is not likely to disappear soon. Though relations between the Chinese and Malay communities are civil enough, they live their lives apart, attending different schools and universities, speaking different languages and working for different employers. Political parties are divided along racial lines. Malays and other indigenous groups felt themselves at a serious disadvantage when independence came in 1963. Under British rule, Malaysia’s Chinese traders and businessmen had prospered. The Bumiputras, who lived mainly in the rural areas and had little access to education, owned only 2.5 percent of the country’s corporate assets, against over 30 percent for the Chinese. Foreigners, mainly British, owned the rest. As in many East Asian countries, the Chinese minority traditionally controlled the lion’s share of the local economy, while the Malays worked as farmers and fishermen. In order to redress the economic disparity between Chinese and Malays, which was claimed to be the underlying cause of continuous friction, the New Economic Policy (NEP) was established in 1971 by Tun Razak.
It was designed to defuse resentment and to eradicate poverty by creating a Bumiputra (Malay) commercial and industrial community that would own a 30 percent share in the economy by 1990. A constitutional amendment was passed, making it seditious to question the special rights accorded to Malays. It meant that racial discrimination was explicitly written into the constitution. The category “Bumiputras” (sons of the soil) covered not only Malays, but other marginally situated indigenous ethnic groups as well. Most jobs in the bureaucracy were reserved for Malays, as were the majority of government contracts. Quotas were set for university admissions, allowing Malays to win places ahead of better qualified Chinese or Indians. Developers were required to sell a certain proportion of housing and commercial property to Malays, often at a discount. Publicly quoted companies were forced to ensure that at least 30 percent of their shares were held by Bumiputras and had to hand out a similar share of jobs to them. At the same time the government “Malayised” education: Malay schools had to teach solely in Malay, not English, though Chinese-language primary schools, paid for by the government, and Chinese secondary schools, mainly funded by the Chinese community, were allowed to continue as before. These policies had a big impact, though not all of it intended. Malay and Chinese children lived in two separate worlds, educated in different establishments and languages. Because university places are based on quotas, many Chinese Malaysians study abroad. The impact on business has been that Chinese Malaysian business has been kept small and private, rather than growing to the point of having to comply with NEP requirements. The encouragement given to Malay businessmen has led to spectacular and expensive misjudgements as well as to a heavy infusion of crony capitalism. 
Despite the downside of the NEP, Chinese Malaysian businessmen consider themselves still better off than in neighbouring Thailand or Indonesia, where they were forced to assimilate completely. In Thailand they were made to take Thai names. In Indonesia they experienced repeated pogroms. The Chinese Malaysians feel the NEP made them more resilient, more competitive and tougher. In the political arena, the conflict between the two big Malay parties (PAS and UMNO) has given the Chinese Malaysians the balance of power. Chinese Malaysians are not keen on civil-service jobs. Even Dr Mahathir, who is Malay, expressed uncertainty in his 1970 book The Malay Dilemma over the future of the NEP, calling on Malays to “throw away their crutches”. University quotas have been relaxed and the 30 percent requirement is not consistently observed in business appointments. Dr Mahathir became Prime Minister in 1981. He introduced a series of positive reform measures in his governing style of “democratic dictatorship”. He relaxed some of the NEP requirements in respect of foreign firms manufacturing for the export market. Where the NEP does apply, he encouraged the practice of drawing in well-managed Bumiputra firms as partners. In addition he modernised the education system and introduced English as a language of instruction. After Asia’s economic crisis of 1997-1998, Malaysia moved swiftly to reform itself. The government set up a fast-acting asset management company, Danaharta, to handle non-performing loans in bank portfolios. In time, Danaharta took on more than 50 percent of non-performing loans at market value. A Debt Restructuring Committee was set up to resolve syndicated debt, functioning with a bottom-line discipline. Bank restructuring was an important part of the clean-up. The government injected a large sum into the banks to avoid them selling assets at forced-sale value. The banking system was consolidated into a few larger “anchor” banks and specialist financial institutions. The entry of foreign banks was restricted. The business culture was changed by forcing listed companies to separate the functions of owner and chief executive and to submit quarterly results, complete with balance sheets and profit-and-loss accounts.
Efforts were also made to clamp down on cronyism. According to Transparency International, an organisation dedicated to fighting corruption, cronyism remained alive and well in Malaysia. It pointed to the builders of a new container port at Tanjung Pelepas, the Bakun dam, mining company Pernas, and many others. The problem is that it is difficult to distinguish between cronies and merely successful businessmen whose affairs are clouded in secrecy. Overcoming the economic crisis of 1997-98 also involved the imposition of capital and exchange controls and the introduction of a dollar peg for the Malaysian ringgit. Despite only grudging approval from the IMF, Dr Mahathir’s medicine apparently assisted in the subsequent economic recovery. The peg of the ringgit implied a devaluation of the currency. Coupled with capital controls, it placed obstacles in the way of currency speculation. It also brought about growing trade surpluses for five consecutive years and the piling up of foreign reserves. Under British rule, the Malaysian economy relied heavily on commodities. It served as an important supplier of tin, rubber and palm oil. Dr Mahathir’s reforms made a big difference to Malaysia’s fortunes. In contrast to most of its neighbours, Malaysia became a favourite destination for direct foreign investments until it was outpaced by China after 1993. Initially foreign investments were highly focused on the electronics sector – re-exporting finished products to the USA. Despite the contraction of the USA market following the dot-com bust, Malaysia’s economy remained robust, growing at 4.2 percent in 2002. Under Mahathir’s leadership the economy kept on reinventing itself. Dependence on manufacturing based on re-exporting had to be reduced. It was replaced with a strategy to make Malaysia less dependent on the global economic cycle. One element was to boost domestic demand.
The second was to pay more attention to the traditional commodities that had supported Malaysia over many years: palm oil, rubber and crude oil. The third was a new emphasis on services. But manufacturing still remained important, accounting for more than 30 percent of the economy. In contrast to Singapore and Hong Kong, Malaysia also had the benefit of a sizeable domestic market. It helped to keep the economy growing by 4 to 5 percent a year, despite a depressed global trading environment. A big part of its domestic focus was infrastructure development: roads, dams, a new airport and housing projects. High savings and big foreign reserves assisted in financing the numerous projects – albeit by way of budget deficits stretching over six years. But most of the borrowing was done in the domestic market, keeping the external debt-service ratio below 5 percent. High oil prices obtained by Petronas, the state oil and gas company, along with price rises for palm oil and rubber exports, largely assisted a healthy balance-of-payments account. Malaysia also branched out into tourism (5 percent of GDP) as well as cosmetic surgery, health care, retirement villages and nursing homes for the affluent of the whole of East Asia. Malaysian-owned universities were also attracting the well-heeled from elsewhere in the region with their lower fees. Mahathir also pushed through the creation of high-tech services business parks. Cyberjaya was established as a gigantic electronic business park, mid-way between the Petronas Towers and the new airport. It lies at the heart of the much larger Multimedia Super Corridor (MSC), which runs from the towers to the airport. With a total area of 750 sq. km it is larger than the whole of Singapore. Companies providing high-tech services were given MSC status – packages of lavish benefits to locate in the MSC. The MSC was essentially developed by the private sector, but enthusiastically supported by the government. The MSC attracted several large job-creating enterprises. HSBC, the London-based bank, has chosen Cyberjaya as one of five world-wide back-office data-processing centres (alongside two in China and two in India). Ericsson, Fujitsu, DHL, Shell, Standard Chartered, Citibank, Nokia, Western Union, Hewlett-Packard, Intel and BMW are among the long list of companies that have regional data-processing or customer-service centres in Malaysia, mostly within the MSC.
Labour costs are lower than in Singapore and land is cheaper. The MSC is also attracting software developers like Japan’s giant NTT. AccTrack 21, a developer of accounting software, has also relocated there because its costs are said to be only 5 percent of those in the USA. Malaysia has done well in fostering racial harmony, but racial discrimination remains legally embedded in most walks of life. It is clear that the NEP seriously needs fixing. Many people, Malay and non-Malay, seem to wonder whether the NEP is needed at all. By some reckonings the NEP has been a success. Race riots have disappeared and Malaysia has continued to prosper. The Malay professional class has grown rapidly, so that over 30 percent of lawyers and almost 40 percent of doctors are now Malay. Between 1970 and 1990, the Malay share of Malaysian firms rose from 2.4 percent of equity to 19.3 percent, but has stagnated at that level since then. In 2005, the Prime Minister, Abdullah Badawi, stated that the NEP had “imbued Malays not with the intended spirit of entrepreneurialism, but with an unfortunate proclivity for rent-seeking”. But the biggest failing of the scheme is the culture of cronyism it has engendered. It was disclosed in 2005 that of the Malays who were granted valuable permits to import foreign cars, the main beneficiaries turned out to be not struggling entrepreneurs, but former officials at the Ministry of Trade. It was also revealed that Mahathir’s government built up a coterie of Malay tycoons through lucrative concessions, only to see many of them go bust during the Asian crisis. As a result, disenchantment with affirmative action has grown among Malaysians of all stripes. The NEP was originally billed as a temporary measure, but when it expired in 1990 it was renewed. Indians have supplanted Malays as the most disadvantaged group, but do not enjoy the same privileges. Today stronger voices are heard arguing that the system of racial preferences should be abolished altogether.
It is contended that it has fostered a culture of dependency. Anwar Ibrahim, a former deputy prime minister, also argues that all forms of affirmative action should be abolished. This position is also supported by Malaysia’s biggest Islamic party. Even Mahathir now argues that Malays are becoming too accustomed to leg-ups and hand-outs. Malaysia is an economic success story – albeit not a shining example of how a democracy should function. It has nevertheless succeeded in improving the living standards of its people over the past 50 years. It also serves as a good role model for other Islamic countries where economic failure and government heavy-handedness are the predominant order of the day. (See The Economist, “A Survey of Malaysia”, April 5th, 2003)

Japan’s Regression After 1995

For much of the post-war decades, Japan served as a role model for its East Asian neighbours. Many countries looked with envy at Japan’s rapid economic growth coupled with social equity. The Japanese success formula seemed to encompass a strong export capacity based on low wages, high savings rates to finance investments, as well as strong government support of the private sector. Japan’s GDP increased at a rate of 10 percent per annum during the 1960-1970 period. By 1975 the growth rate had declined to 4 percent and by 1995 to around 1 percent, as the good times came to an end. The country was on the verge of financial and economic meltdown with a political establishment apparently incapable of facing the crisis. Prices started to fall in 1995 and in 1997 nominal GDP shrank by 6 percent. The Governor of the Bank of Japan officially confirmed that the country faced a problem of deflation for which macro-economic textbooks offered no solution. Interest rates were brought down to zero, rendering monetary policy impotent. The scope for fiscal easing was reduced because public-sector debt was already dangerously large. The option of halting deflation by printing money was also diminished by the fact that the bank was already creating lots of money by buying government bonds – to the extent of increasing the monetary base at an annual rate of 25 percent over a period of two years. The money-transmission mechanism became blocked because banks, being saddled with bad debts, could not lend more unless the banking system was properly fixed. Part of the bad-debt problem was caused by the fall of 80 percent in land prices in the decade following 1990. This meant that the banks had undervalued the portion of loans without collateral – i.e. the portion against which they needed to hold reserves.
As a result it was difficult for the Financial Services Agency (FSA), the industry watchdog, to monitor the true scope of problem loans, which remained as unrealised losses in the loan portfolios of the banks. The Japanese model also came in for heavy criticism for a variety of reasons. Many countries complained that Japan unfairly took what it could from the world economy – profits from export markets and the absorption of Western technology – without giving much back. Japan had consistently resisted inward investment by other countries, while Japanese business interests were simultaneously combing the rest of the world for lucrative investment opportunities. Others complained that Japan protected its own farm production behind a facade of protectionist phyto-sanitary requirements and other bureaucratic impediments on imports. Foreign critics were also negative about Japan’s macro-economic and financial policies. The central bank, the finance ministry, the bank regulators, the prime minister and the ruling party all blamed each other for failing to deal with the acute financial problems experienced by the country for more than a decade. Politicians and bureaucrats who prevent competitive pressures from driving change are themselves protected from political competition. The dominant Liberal Democratic Party (LDP) had been in power for generations. Another problem area was the socialised financial system. The tax code, the public works budget and hidden subsidies played a role in propping up moribund firms. The banks were under no pressure to cut off lending to such firms and were allowed to carry a large volume of non-performing loans on their books without being forced to write them off as bad debts. In this sense the economic system also functioned as an extension of the welfare system. The bureaucrats of Japan operate as a major influential “special interest” group.
It is common practice for bureaucrats, after retirement, to start working for the companies they used to regulate. Inevitably this practice creates “conflicts of interest”, nepotism and corruption. In Japan inordinate power is vested in its much-revered conglomerates. These firms are unwieldy and inefficient giants. In 2001 it was estimated that Hitachi consisted of more than 1,000 subsidiaries and Fujitsu had more than 500. Within the framework of these giant conglomerates, lifetime employment is practised unofficially, with the result that the true level of unemployment is not portrayed by employment figures. When Prime Minister Junichiro Koizumi came to office in 2001, expectations improved that a profound change would come about in the political establishment and its ability to deal with the economic problems which had festered since Japan’s asset bubble burst in 1990. Koizumi forced banks and the conglomerates to clear up the piles of bad debts. Companies thus unburdened started to be profitable again, and by 2002 the economy began to grow again. The power of special interests (farmers, senior bureaucrats) was curtailed. In 2005 Koizumi obtained a stunning general-election victory for the “reformed” LDP and its coalition partner, the New Komeito. When Koizumi retired a year later, there was a widespread belief that Japan was set for economic modernisation. After a four-year recovery with an average growth rate of 2 percent, the economy was again falling into recession by early 2008. Koizumi’s successor, Shinzo Abe, resigned under nervous and physical strain as the country was again thrown into political chaos under the baton of an untested, double-talking successor. Japan’s economic prospects are also darkened by demographics, as its population is greying faster than that of any other big economy. No less than 20 percent of Japanese are over 65, and it is estimated that by 2015 the proportion will grow to 25 percent – about 30 million people. Birth rates are below replacement at 1.32, with virtually no immigration expected. The population of 127 million has already started to shrink and is expected to drop below 100 million over the next half-century. In addition, Japanese society carries the burden of one of the most entrenched bureaucracies in the world.
The Economist of February 2008, under the heading “Japan’s Pain”, made the following summary: “Japan needs a mass of economic reforms – a more open climate to foreign investment, for instance, lower tariffs on imported food, fewer subsidies for farmers, freer trade, better tax treatment of foreign companies, the abolition of a welter of business subsidies, a more flexible labour market, greater fiscal rectitude (national debt is currently around 180 percent of GDP), more accountability by pension funds and insurance companies, further privatisation of services and much more.”

Impact of the Financial Crisis of 1997-98

After Thailand devalued the baht on July 2nd, 1997, capital rushed out of the region’s economies, and in rapid succession most of them collapsed. The resulting panic soon spread and for a while posed a serious threat to the world economy. But the region’s economies were hard hit. The currencies of Thailand, South Korea, Malaysia and the Philippines were all down by 40 to 60 percent, stock markets lost 75 percent and real GDP shrank by 11 percent. Millions lost their jobs. The IMF, which had also failed to foresee the crisis, assisted in the recovery. It insisted, though controversially, on tight fiscal and monetary policies, but also advocated a swift disposal of bad debts, which rescued the financial system. Foreign-currency debt was either repaid or rescheduled and currencies were allowed to float. Long-standing governments fell in South Korea, Taiwan, Indonesia, Thailand and the Philippines. Throughout the region domestic consumption picked up significantly, reducing the high level of dependency on volatile export markets. South Korea led the way to recovery. Its banks were recapitalised by the government and a public asset-management company was set up to buy up bad loans. This freed up the banks to get on with business with fresh capital and healthier loan portfolios – less subject to political pressure. The government planned to retrieve its bail-out-related debts by reselling equity in banks and recouping a portion of their bad loans and collateral. Financial regulators laid down new guidelines for managing credit risks, including independent credit committees insulated from outside meddling. A new emphasis was laid on lending to small and medium-sized firms as well as consumer lending. It boiled down to an all-out reform of the financial system that triggered the upturn. The IMF had less success in persuading Malaysia and Thailand to refrain from mixing business with politics. In Hong Kong and Singapore it found a strong preference for maintaining a high ratio of bank deposits to bank lending. New York banking experts from Merrill Lynch typically advised banks to persuade savers to channel more money into bonds and equity. After the unseating of the long-standing dictator Suharto, the Indonesian government took over most of the banking sector’s non-performing loans, but also moved in to control much of the Indonesian economy, from telecoms to plantations, and nationalised nearly all the banks. As a result foreign investment continued to avoid the country. On balance, the IMF bail-out helped to restore confidence, bolstered by reforms to restructure and strengthen banking. Income per head returned to pre-crisis levels in South Korea and Malaysia by 2000. In Thailand and Indonesia it took until 2003 and 2004 respectively.

The New Millennium Fluctuations

After 10 years, East Asia’s economies were booming again at growth rates averaging 7.5 percent. It appeared that after restructuring and reform, the region had become more dynamic and resilient. All economies in the region started showing current-account surpluses, much less foreign debt and reserves strong enough to cover their short-term foreign debts. With the onset of the Global Financial Crisis in 2008, the tiger-countries of East Asia were again severely affected. This time they were tripped by their heavy dependence on exports. Japan reported that its exports fell by 35 percent in the 12 months to December 2008. In the same period Taiwan’s exports dropped by 42 percent and industrial production was down by 32 percent. East Asia’s export-driven economies had benefited more than any other region from America’s consumer boom. Consequently, their manufacturers were bound to be hit hard by the sudden downward lurch. The plunge in exports was exacerbated by the global credit crunch, which made it harder to get trade finance. Exports to China dropped even more sharply, to 27 percent lower than a year earlier – reflecting a weaker demand for components for assembly into goods for re-export. Weaker domestic demand also explained a large part of the slump. South Korea was the only exception, managing to retain a positive contribution to growth from its exports. But its consumer spending and fixed investment dropped significantly. Its households’ debt increased to 150 percent of disposable income (higher than in the USA), badly hitting its banking system, which had borrowed heavily abroad to finance the surge in domestic lending. Domestic spending also collapsed in Taiwan, Singapore and Hong Kong. House prices and stock market levels declined by 20 percent and 30 percent respectively. East Asia’s recovery from the 1997-98 recession was led by a rebound in exports to the rich world.
The question is whether the next recovery would be pushed along by increased domestic demand or by the impact of monetary and fiscal expansion. In contrast to America and Europe, most East Asian households had modest debt levels. It could also be expected that fiscal pump-priming in East Asia might be more effective than elsewhere on account of the private sector being in better shape and able to respond by spending more. Governments across East Asia reduced interest rates across the board and announced stimulus packages of 3 to 8 percent of GDP. All the main members of the East Asian community had relatively low ratios of public debt to GDP. In view of considerable government reserves, the governments of South Korea and Singapore had much scope for fiscal easing in the form of personal and corporate tax cuts. Taiwan reverted to boosting consumer spending by issuing shopping vouchers. In addition all East Asian governments implemented plans to boost infrastructure spending – expecting thereby to boost productivity with better roads and railways. Western economists advised that the decline in export earnings could be offset by policies to lift households’ share of national income. By scrapping the subsidies and tax breaks which favour manufacturing over services, and by attacking monopolies and other barriers to services, the bias towards capital-intensive manufacturing could be reduced. Stronger exchange rates could also shift growth away from exports and boost households’ real spending power by reducing the cost of imports in local currency terms. Another option was to look at higher public spending on health, education and welfare support in order to encourage households to save less and spend more. It was argued that inadequate social-welfare networks (private and public) encourage people to save for a rainy day.

Conclusions

The impressive growth experience of the East Asian “tigers” is indeed remarkable. Free marketeers point to their reliance on private enterprise, markets and trade liberalisation. Interventionists point with equal assertiveness to the non-market and centrally planned allocation of resources, the role of clever bureaucratic elites and regulated trade regimes. The truth of the matter is more complex and requires the careful balancing of a multitude of factors, including the cultural traits of the societies involved. Account must also be taken of the interaction of these economies with the lucrative markets in the USA and Europe. The wellsprings of growth and transformation in the East Asian region included several forces: sound macroeconomic policies in the sense of getting the fundamentals right, incentives for increasing productivity, openness to foreign ideas and technology, an export orientation and the development of human resources. A positive government role was crucial. It was buttressed by a leadership committed to broad-based development and an efficient bureaucracy. Government did not act as planner or owner of industrial enterprise, but as a guide and facilitator, developing infrastructure and a framework for effective policy implementation, encouraging the accumulation of physical and human capital and allocating it to productive activities. International competitiveness was recognised as the ultimate aim, and the private sector was relied on as the engine of growth. The East Asian success stories demonstrated that targeted and controlled government activity can be beneficial to the common good – government involvement is not bad per se. The East Asian achievement is all the more remarkable when one considers that several countries (e.g. Japan, Singapore and Hong Kong) were not endowed with rich natural resources. Several were faced with population pressures. 
In addition, the successful governments did not place development under any particular ideological umbrella. They were pragmatic in the sense of placing emphasis on what worked in terms of evidence-based policymaking. The spectacular success of Japan and its neighbours in East Asia illustrates that modernisation can proceed without fully-fledged cultural westernisation. Western countries put a high value on individualism, freedom of expression, competitive politics, loose-jointed societies and diverse lifestyles. East Asians, by contrast, prefer working in groups and tend to be more disciplined and conformist, socially conservative and hierarchical, and more accepting of interventionist power. Cultural factors undoubtedly did play an important role: attitudes to work, discipline, family loyalty, thrift, an inherent commercial instinct and the high value placed on learning and education. The influence of cultural factors is well illustrated by the Confucian tradition. Under Confucianism, government has an absolute right to regulate all aspects of social and business relations for the common good. This may explain why there was no jury system and little right of appeal – even in commercial law cases. There is a stark contrast with the Western legal tradition, based on individual rights and freedoms, which dates back to the Enlightenment. In countries with Confucian traditions such as Japan, Korea, Taiwan, China and Indonesia, the freedom of action of a person or a company stems not from a fundamental right, but from a “grant of benefit” from the state. It must be pointed out, however, that these cultural traits were expressed in various forms and proved to be subject to change. After the 1997-98 financial crisis, and more recently the Global Financial Crisis of 2008, many foreign observers were quick to pronounce the end of the East Asian “miracle”. But they were wrong. 
The East Asian economies bounced back, partly because of the markets in the West, but also because the tiger-countries still had the inherent impetus to continue their growth-path: high savings to finance investment, high productivity and social stability. These are likely to help East Asia remain the fastest growing region in the world for the foreseeable future. If a bigger share of those gains goes to workers and consumers, the next growth phase could have widespread benefits for its more than 800 million people.

References

Balassa, B. (1981) The Newly Industrializing Countries in the World Economy, New York: Pergamon
Chen, E.K. (1979) Hypergrowth in Asian Economies, London: Macmillan
Estévez-Abe, Margarita (2008) Welfare and Capitalism in Postwar Japan: Party, Bureaucracy and Business, New York: Cambridge University Press
Hofheinz, R. & Calder, K.E. (1982) The East Asia Edge, New York: Basic Books
Hou, C. (1988) “Strategy for economic development in Taiwan and implications for developing economies”, Conference on Economic Development Experiences of Taiwan and its New Role in an Emerging Asia-Pacific Area, Taipei: Institute of Economics, pp.31-57
Imai, K. (1980) “Japan’s industrial organization”, in K. Sato, Ed., Industry and Business in Japan, London: Croom Helm
Johnson, C. (1982) MITI and the Japanese Miracle, Stanford: Stanford University Press
Lau, L.J. & Klein, L.R. (1990) Models of Development, San Francisco: ICS Press
Lee, Y. & Tsai, T. (1988) “Development of financial systems and monetary policies in Taiwan”, Conference on Economic Development Experiences, op.cit., pp.205-25
Li, K.T. (1989) “Sources of rapid economic growth: the case of Taiwan”, Journal of Economic Growth, pp.4-12
Liu, P.K.C. & San, G. (1988) “Social and institutional basis for economic development in Taiwan”, Conference on Economic Development Experiences, op.cit., pp.257-75
Mahbubani, Kishore (2007) The New Asian Hemisphere – The Irresistible Shift of Global Power to the East, New York: Public Affairs
Redding, S.G. & Wong, G.Y.Y. (1986) “The psychology of Chinese organizational behaviour”, in M.H. Bond, Ed., The Psychology of the Chinese People, Hong Kong: Oxford University Press
Sakai, K. (1990) “The feudal world of Japanese manufacturing”, Harvard Business Review, November-December, pp.38-49
Scitovsky, T. (1990) “Economic development in Taiwan and South Korea”, in Lau & Klein, Models of Development, San Francisco: ICS Press
Tsiang, S.C. (1988) “In search of a growth theory that would fit our own conditions”, Conference on Economic Development Experiences of Taiwan and its New Role in an Emerging Asia-Pacific Area, Taipei: Institute of Economics, pp.11-28
Wade, R. (1991) Governing the Market: Economic Theory and the Role of Government in East Asian Industrialisation, Princeton: Princeton University Press
Yu, T. (1988) “The role of the government in industrialisation”, Conference on Economic Development Experiences of Taiwan and its New Role in an Emerging Asia-Pacific Area, Taipei: Institute of Economics, pp.121-159
Special Report (1991) “The growing power of Asia”, Fortune International, October 7th, pp.32-88
The Economist (1991) “Where tigers breed: a survey of Asia’s emerging economies”, November 16th, pp.5-24
The Economist (1994) “Oriental Renaissance – A Survey of Japan”, July 9th
The Economist (2002) “Report on East Asian Economies”, July, pp.65-67
The Economist (2005) “A Survey of Malaysia”, April 5th
The Economist (2005) “The Sun Also Rises – A Survey of Japan”, October 8th
The Economist (2007) “Troubled Tigers”, January 31st, pp.67-69
The Economist (2007) “Briefing East Asian Economies”, June 30th
The World Bank (1993) The East Asian Miracle, New York: Oxford University Press

10 China – The Emerging Giant

China covers an area of 9,596,961 square kilometres and its population is estimated at around 1,400 million. The Han Chinese comprise 92 percent of the population and the remaining 8 percent include several minority ethnic groups – each more numerous than the populations of more than 20 countries represented in the United Nations. It has been said, with good justification, that China is like another United Nations. It has 31 provinces, five “autonomous” regions (Inner Mongolia, Xinjiang, Guangxi, Ningxia and Tibet) and two “administrative” regions (Hong Kong and Macau). China’s largest city, Shanghai, is five times the size of Singapore. The most populous province has over 120 million people – almost as many as Japan. Guangdong and several other provinces each have between 60 million and 100 million people. The Guangxi “Autonomous Region”, with 50 million people, is more populous than Poland.

Historical Background

The early Chinese state can be traced back more than two millennia, to when the First Emperor consolidated various tribal groups into a single political unit. During that early period perhaps 60 million people crowded what was to become the northern edge of China. This number more or less held over the next millennium, but from about the 12th to the beginning of the 13th century, the population doubled to around 120 million. At that point the pandemics that were also scourging Europe and the Middle East reduced the population to around 70-80 million. Around 1400 the population rose again to over 100 million, and to over 200 million by the middle of the 17th century. In modern times, the era of the Manchu dynasty (1644-1911) ended when Sun Yat-sen and fellow revolutionaries overthrew the imperial regime in February 1912. A succession of unstable regimes and civil war followed. Sun Yat-sen’s Kuomintang (KMT Nationalists) established a government over parts of the country in Canton in 1919. The Chinese Communist Party (CCP) was founded in 1921. By 1927 it controlled large tracts of territory in the south and its armies carried on an armed struggle against the Kuomintang’s rule led by Chiang Kai-shek. In 1931 the Japanese occupied the Chinese region of Manchuria and in 1933 gained control of Jehol. In 1934 the KMT advance forced the Communists out of their southern bases. The CCP undertook the Long March (1934-1935) through southern and western China, finally ending at Yanan in Shaanxi province, where Mao Zedong emerged as the CCP leader. In 1936 Chiang Kai-shek joined forces with the CCP to fight the Japanese, but in 1937 Japan occupied much of North East China and established puppet governments in Beijing and Nanking. The Nationalists were forced to relocate their capital to Chongqing. After the Japanese surrender in 1945, the Kuomintang Nationalists took control of Japanese-occupied areas. The US attempted to mediate a negotiated settlement between the KMT and the CCP. 
Beijing fell to the Communists (the People’s Liberation Army) in January 1949. The People’s Republic of China was formally established on October 1st, 1949. The Communist forces marched south and stopped at the border with Hong Kong. In October 1951 Tibet was re-annexed. Taiwan and surrounding islands were left in the hands of the KMT. China signed a friendship treaty with the USSR and the USA became the protector of Taiwan. Between 1950 and 1958 a series of land reforms were implemented involving the redistribution of land among landlords, rich peasants, poor peasants and the landless. Agricultural co-operatives were formed which were later amalgamated into communes of about 200,000 people. The 1954 constitution organised China into five autonomous regions, 23 provinces (including Taiwan), 175 municipal administrations and 2,000 districts. A central council under Mao Zedong, formed by the Communist Party, held absolute power. Zhou Enlai became Premier of the Administrative Council. In May 1957 the “Hundred Flowers Campaign”, ostensibly organised

to invite constructive criticism by intellectuals, culminated in the “Anti-Rightist Campaign” against those who had spoken out. The movement to root out the “White Flags” followed. The 1958 “Great Leap Forward” launched by Mao was intended to mobilise the support of the masses for his economic policies. Farmers were herded into regimented communes and backyard pig iron furnaces became the symbols of the Great Leap. It proved to be a disaster. Millions of people died of starvation as the campaign disrupted agricultural and industrial production and internal trade plummeted. Refugees streamed into Hong Kong. Mao’s standing deteriorated. In 1964 China detonated its first nuclear bomb and in 1966 Mao Zedong introduced the “Great Proletarian Cultural Revolution” as a means to bolster his grip on supreme power. The charter of the “Cultural Revolution” allowed the masses to attack those in authority and the “Red Guard” was formed to eliminate unacceptable “old ways”. The Red Guards proceeded to put numerous local authorities on trial. A Central Cultural Revolutionary Committee was formed with Chen Boda, Mao’s secretary, as chairman and Mao’s wife, Jiang Qing, as vice-chairman. The Revolutionary Committee, together with the Military Commission and the State Council, ruled China under Mao’s guidance. The Red Guards ran riot through cities, ransacking property and humiliating foreign diplomats. Mao’s Little Red Book served as the bible of the Cultural Revolution. By 1967 Mao used the army to restore order and purged all authoritative bodies of members he considered undesirable. In 1969 Mao regained his influence as chairman of the Communist Party and the Central Committee. The army now filled the majority of the posts in the Central Committee, the Politburo, as well as provincial and local councils. Mao Zedong died on 9 September 1976. After a prolonged internal power struggle in the CCP, Deng Xiaoping emerged as the new leader in 1980. Deng Xiaoping was the son of a prosperous landowner turned local-government official. 
As a boy he attended a traditional Confucian school, but amid the tumult following the Chinese Revolution of 1911, he proceeded to France, where he met Zhou Enlai and Ho Chi Minh (from Vietnam) and became exposed to the communist movement which was then fashionable amongst students and intellectuals in Paris. Deng also studied in Moscow, where the Comintern, Stalin’s international apparatus, was teaching students revolutionary strategies. When Deng returned to China as a convinced communist, his organisational skills and sharp intellect carried him forward to become chief secretary of the Central Committee of the Communist Party at the age of 23. He allied himself with the Mao Zedong faction and took part in the Long March of 1934-35. His later wartime role carried him into a key position when the Communists came to power in 1949. During Mao Zedong’s rule, Deng Xiaoping’s fortunes fluctuated between key roles and imprisonment. The ideological revolutionaries considered Deng a “capitalist raider”. He was protected by his personal friendship with Zhou Enlai and his networking ties with the army. After the Cultural Revolution had run its course, Deng came back into the leadership circles. He believed in education and economic incentives as the road to development and modernisation, rather than ideology and exhortation. In December 1978 the Third Plenum of the 11th Congress of the Chinese Communist Party made the fundamental decision to re-orient China toward market economics. They made a break with ideological orthodoxy and adopted instead a strategy of pragmatic adjustment – practical steps that deliver economic results as long as the party remained in control. Said Deng: “We have two choices – we can distribute poverty or we can distribute wealth”.

Deng Xiaoping’s Reforms

The initial reform effort centred on agriculture in view of the dismal results produced by Mao’s collectivised agriculture system. During 1978 Anhui province experienced a severe drought, so that agricultural output was further diminished and starvation became endemic. Diseases swept through the region and people flocked into Shanghai. People appealed for a return to the “old ways”, meaning the “household responsibility system” which allowed a family to keep some of the benefits of its labour. The peasants got their wish: the household responsibility system was adopted throughout the country and material incentives replaced the Maoist strictures. With the communal collectivised system undone, each family could take responsibility for the land it tilled. They had to deliver a certain amount of their production to the state, but the portion above that they could keep for their own consumption or sell. Thus the first steps towards free enterprise were taken. The results were stunning. Over the next sixteen years output increased by 50 percent. The introduction of markets in agricultural products generated an entire trading apparatus. Farmers became involved in transportation, home building, repairs, private food markets and hiring workers. The changes created a whirlwind of entrepreneurship. In 1978 just 8 percent of agricultural output was sold in open markets; by 1990 the share was 80 percent – raising the real income of farm households by more than 60 percent. The rapid improvement in agriculture spurred economic reforms in other areas. It created a pro-reform constituency, not only among farmers but also among city dwellers, who could find more food and more variety in the marketplace. These improvements created a momentum for additional reform such as the de-control of prices. The reform process was set into motion with a clear winner. The reform of the industrial sector was more difficult. It was highly interconnected, controlled from the centre, the scale was large and it generated much of government’s revenues. Change in the system could throw the country into economic disarray. Moreover, Marxist economics was focused on industrial production. The desperate need for reform in the industrial sector spurred an acrimonious debate over the relationship of the state and the marketplace. One argument was that the way the state collected revenues from enterprises ended up “whipping the fast ox” – punishing firms that were more efficient. 
The higher the firm’s profits, the greater the proportion of profits taken away by the government. Arguments were made for increasing the autonomy of enterprises and moving the system towards “market socialism”. Yugoslavia’s self-governing firms were seen as a model, but with the retention of the dominant position of the state. At the Wuxi Conference in 1979 economists gave expression to the general sentiments by stating that China “… cannot allow Adam Smith’s invisible hand to control our economic development (because) … if individual consumers in the market make decisions based on their own economic interests, this will not necessarily accord with the general interest of society”. Planning had to be made more effective, but giving over to the “blindness and anarchy of capitalism” was to be avoided. The main spokesman of the “go slow” reform movement was Chen Yun, a party elder like Deng. Although he too was purged during the Cultural Revolution for not being adequately supportive of Maoism, he came to be seen as the CCP’s expert on economics. He favoured steadiness and opposed “rashness” as experienced during the Cultural Revolution. But he was a socialist technocrat, a fervent believer in planning, and had no desire to support the introduction of a full-blown market system, nor was he keen to attract foreign investment. He feared “foreign pollution of Chinese socialism”. He was unhappy with existing central planning, but he did not believe a country as large and as poor as China, with limited resources, could jettison planning. He wanted to improve it – make it more scientific and more balanced – less reform and more “readjustment”. He felt the “planned economy” should remain “primary”, while the “market economy” should play a “secondary”, supplementary role. Chen’s approach became known as the “birdcage thesis”. 
The “re-adjusters” carried the day in the early 1980s, bolstered by other factors such as the Solidarity movement in Poland, which raised alarm amongst Chinese leaders. In addition the leaders were ambivalent about the legacy of Mao and about how much change the system could absorb. Deng went along with the cautious re-adjusters. He argued that the CCP was essential to the central goal of modernisation. He believed that without such a party, China would split up and accomplish nothing. But by the mid-1980s the “go slow” argument was losing its credibility. The economy was growing much faster than anticipated, without the severe problems that Chen Yun had forecast. 

Improvement in agriculture stimulated the emergence of rural industry and commerce. Reform now had both a constituency and a track record. Chinese economists took notice of developments in Hungary which involved experiments with market mechanisms as reflected in the influential writings of Hungarian economist Janos Kornai. But the most pressing example came from their nearby neighbour, Japan. Visiting Japan and seeing its dynamism firsthand shocked the Chinese Communists. The head of the CCP’s propaganda department noted that one in every two households in Japan owned an automobile; over 95 percent of households possessed TV sets, refrigerators and washing machines; the majority owned a variety of clothing and changed clothes every day! (See Yergin and Stanislaw, op.cit, pp.198-216)

Socialism with Chinese Characteristics

By the mid-1980s, the Chinese economy entered a period of high-speed growth. The leadership of Deng Xiaoping embraced economic reform and liberalisation even while striving to maintain political control. To appease his comrades, who feared seeing capitalism replace the socialism and communism they had strived for all their lives, he described what was happening as “Building Socialism with Chinese Characteristics” – which became the title of a book he published at the end of 1984. Deng was constantly harassed by the opposition and rivalry of Chen Yun. Both were veterans who joined the CCP at the beginning, both were victims of the Cultural Revolution, both were intent on redressing the deep wounds inflicted upon society by Maoism, but they disagreed over the terrain of the reform package. Fortunately for China, Deng’s pragmatism carried the day – often by adjusting terminology: replacing “hired labour” (which sounded bad to Marxist ears) with “asked-to-help labour”! From 1984 the debate about the future moved on beyond Marxist ideology to the practicalities of creating a market economy. Economic data replaced Marxist catechisms in arguments about market allocation of resources versus planning. Deng was the paramount champion of reform, while Chen served as the paramount critic. At the heart of the debate was the question of the proper relationship between state and market. The Chen-hardliners wanted to reassert centralisation, stabilisation and mandatory planning because they feared chaotic dislocations and inflationary pressures – and loss of political control by the CCP. Deng also feared the erosion of CCP control, but wanted to reduce bureaucratic control by party secretaries and instead make enterprises responsive to market signals. The introduction of the “contract responsibility system” (echoing the “household responsibility system”) allowed state enterprises to keep earnings above a certain target. 
By December 1987, 80 percent of China’s large and medium-sized firms had adopted such a system. The reforms were still inadequate to sufficiently reduce the inefficiency of state firms. They were losing out to the growing competition from new companies established by local villages and towns. It became clear that the most important missing element was property rights: only ownership could introduce responsibility into decision making and channel motivation. So the debate moved from Marx to Mao and ultimately to Hayek. Deng was essentially interested in results, which for him meant increasing China’s wealth and power. But Deng was constantly compromised by Chen’s pressure to step on the brakes and to oppose the reformist incumbents of key positions. Deng was forced by Chen’s pressure to remove CCP general secretary Hu Yaobang on account of his being regarded as too liberal. Deng replaced him with Zhao Ziyang, who promoted the idea of building up new industries geared to export, particularly in the coastal areas. The benefits of focusing on export industries were well illustrated by the success of neighbouring East Asian economies. It offered the solution to multiple problems: earning hard currency and absorbing surplus labour coming out of inland regions. The central focus of the strategy would be Special Economic Zones (SEZ’s) to engage with the world economy. The original SEZ’s were created in 1980, some in Guangdong province, including Shenzhen (close to Hong Kong), and others in Fujian province, across from Taiwan. Their whole orientation was outward as export-processing zones and they were also meant as magnets to draw in foreign investment. Beijing gave local authorities in the SEZ’s unprecedented autonomy in trade and investment decisions. From then on the Chinese economy was driven forward by the coastal cities. Accelerating inflation by the end of 1988 forced Zhao and his allies onto the defensive. A “Mao Zedong craze” was unleashed which forced the reformist leaders onto the back foot. They were accused of “capitalist-style crime and corruption” and of being materialistic drivers of bourgeois inequality and democracy. Deng remained reform’s prime cheerleader, but in 1988 anticipation of price reform ignited a run on the banks and panic buying of goods. Deng’s government was shaken and it changed course. The focus then turned to stability.

The Tiananmen Square Clampdown

Thousands of students occupied Beijing’s Tiananmen Square in April 1989, mourning the death of Hu Yaobang, the deposed reformist CCP general secretary, and expressing their displeasure with the suppression of the democracy movement. To the hardliners, it was an act of rebellion – the consequence of too much reform and too little control. To people like Deng, it challenged the sacred supremacy of the CCP, which the old veterans considered the bulwark against disorder and chaos. To Deng it also resembled the militant mass action of the Cultural Revolution. He feared that the leadership core could be in danger. The events in Poland and other East European communist states convinced Deng that concessions lead to more demands and more concessions beget more chaos. Moreover, Tiananmen Square carried a lot of symbolism in the history of the CCP. In 1949, Mao declared victory and the establishment of the People’s Republic of China on Tiananmen Square, and thirty years before that, on May 4th 1919, it had been the scene of the nationalist student demonstrations that paved the way for the birth of the Chinese Communist Party. In June 1989, the order was given by Deng to the military to crush the protest. About a thousand people are thought to have been killed in the ensuing struggle. Zhao Ziyang, who was Party Secretary at the time, tried to peacefully disperse the protesters in Tiananmen Square before the fatal confrontation with the military. Li Peng, who was Prime Minister at the time, forced Zhao to defend his actions before a disciplinary meeting of the CCP’s Central Committee. Zhao was put under house arrest, where he stayed for the rest of his life. His memoirs, dictated on tapes, were smuggled out of China and published in 2009 under the title Prisoner of the State: The Secret Journal of Zhao Ziyang (Simon & Schuster). In his memoirs, Zhao describes Deng as the enabler, not the architect, of detailed reform measures. 
Zhao credits himself as the real architect of China’s reforms in the 1980s, with Deng helping him to keep the hardliners at bay. The collapse of communism in Eastern Europe and eventually in the Soviet Union in the period 1989-1991 reinforced the resolve of the CCP hardliners to rein in reform. Economic growth slowed and dissent was stifled. Although Deng remained in power, he was by then advanced in age and of frail health. Reform was in retreat and so was Deng’s influence. His old rival Chen Yun was in the ascendancy again and “Chen Yun Thought” enjoyed prestige comparable to “Mao Zedong Thought”.

Deng’s Nanxun Campaign

In 1992, the 88-year-old Deng set out in his private railway car on yet another campaign. He headed south on his “nanxun” (southern) journey. During four weeks he visited the Pearl River delta in Guangdong province, and in particular the Shenzhen SEZ, which borders on Hong Kong. He gave speeches, met local officials and business leaders and visited construction sites. Having seen the modern high-rise urban area and all the vast changes in the surrounding areas, Deng regained his confidence. He declared Shenzhen a “flying leap” and a model for the future. Deng returned from his “nanxun” journey with several important messages. One of them was that market economies need not be surnamed capitalism, because socialism has markets too. 

Plans and markets are simply economic stepping stones to universal prosperity and riches. The other oft-quoted message was a warning to party members to “… watch out for the Right, but mainly defend against the Left”. After the hardliners tried to suppress Deng’s messages for several weeks, the news leaked out and became the subject of much discussion and debate. Deng’s messages found wide resonance. At the 14th Party Congress in 1992 a new commitment to reform was affirmed. It was explicitly decided that China should shift from a “socialist planned economy” to a “socialist market economy”. At the ripe age of 88, Deng reaffirmed his position as paramount leader. Deng indicated that Guangdong should be considered the engine of China’s growth and that China should overtake the four tigers – Korea, Taiwan, Singapore and Hong Kong – within 20 years. In reality China was already on its way. Between 1978 and 1995, under Deng’s rule, China’s economy grew at an average annual rate of 9.3 percent.

Deng’s Legacy

After Deng’s trip to the Pearl River delta in 1992, he remained the paramount leader. Though he held no formal title, even in semi-retirement, he ruled China from his modest Beijing courtyard home. Leaders still vied for his attention. His growing deafness made communication difficult. Deng Xiaoping died early in 1997 at age 93. At his funeral Chinese President Jiang Zemin referred to Deng’s “three rises and three falls”. But Deng broke with conventions and paved the way for China’s ascendancy as a world power. When he came to power China was desperately poor, but he launched a growth path that enabled China’s foreign trade to increase from $36 billion in 1978 to $300 billion in 1995. Per capita income doubled between 1978 and 1987 and doubled again between 1987 and 1996 – a rate almost unheard of in modern history. Deng lifted upward of 200 million people out of poverty in just two decades. At Deng’s funeral President Jiang declared that henceforth “Deng Xiaoping Theory” would be the “guiding ideology” of China and, alongside Marxism-Leninism and Mao Zedong Thought, it would be the Party’s “guide to action”. The only way Mao’s theory could be made compatible is through a healthy dose of pragmatism. By lifting the banner of Deng Xiaoping high, the communist leadership enshrined his adherence to the principle of pragmatism. After his death, a Shanghai newspaper reported a generally unknown fact about Deng’s career. While studying in Paris in the early 1920s, Deng also opened a restaurant called “China Bean Curd Soup”. It was reported that the bean curd was good, the restaurant was a success and Deng expanded both his menu and his seating space.
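The growth figures quoted above can be sanity-checked with a simple compound-growth calculation. The sketch below is purely illustrative (the function name and rounded inputs are the author's of this note, not from the source): trade rising from $36 billion to $300 billion over 1978-1995 implies roughly 13 percent annual growth, and per capita income doubling twice over 1978-1996 implies roughly 8 percent a year, both consistent with the 9.3 percent average GDP growth cited for the period.

```python
def cagr(start, end, years):
    """Compound annual growth rate implied by a start value, an end value
    and the number of years between them."""
    return (end / start) ** (1 / years) - 1

# Foreign trade: $36bn (1978) to $300bn (1995), i.e. 17 years
trade_growth = cagr(36, 300, 17)

# Per capita income doubling twice between 1978 and 1996, i.e. 4x over 18 years
income_growth = cagr(1, 4, 18)

print(f"implied trade growth: {trade_growth:.1%}")    # roughly 13% a year
print(f"implied income growth: {income_growth:.1%}")  # roughly 8% a year
```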

Hong Kong’s Crucial Role

Hong Kong was part of Chinese territory ceded to Britain in 1842 after the Opium Wars. In 1997 it was returned to China and placed under control of a Chief Executive appointed by Beijing. During British rule, thousands of refugees from China entered Hong Kong in 1911, 1937 and in 1949. During the period 1941-1945 it was invaded and occupied by Japan. Down the years Hong Kong offered a secure trading outlet as well as a safe haven for the assets of Chinese businessmen and industrialists. Over the years Hong Kong acquired a business community with advanced education, entrepreneurial skills and a network of connections with mainland China that were particularly advantageous to British interests, but it also provided an important commercial outlet for China and an avenue for importing foreign investments and technology into China. The investments of displaced Chinese and the availability of cheap labour fostered a mushrooming of local assembly plants, textile workshops and factories for light manufactures. Economic life was freewheeling. There were no trade or exchange restrictions, no central bank, labour legislation was light and taxes were low. In the 1980s Hong Kong was closely linked to Deng Xiaoping’s reforms on the mainland. It re- opened the door to travel, trade and investment across the border. By establishing the first SEZ’s near Hong Kong in Shenzhen, Deng facilitated investment into China’s vast pool of labour and 234

resources. Labour-intensive production was shifted onto the mainland, turning the Pearl River Delta into a megalopolis with Hong Kong and Guangzhou as its twin poles. Hong Kong also emerged as one of the world’s pre-eminent financial centres in the 1980s. With all the major trading houses established in the city, it became a major source of financing and investment expertise for mainland enterprises. Hong Kong also served as a clandestine conduit for funds coming from Taiwan and the millions of overseas Chinese (called “guanxi”). The guanxi also provided important trade and marketing channels for Chinese exporters. Long before the Chinese take-over in 1997, China’s state-controlled firms invested heavily in Hong Kong real estate. A state-owned bank built one of Hong Kong’s most distinctive harbour-front skyscrapers. Even before the handover, Hong Kong’s wealth – in per capita terms – was significantly higher than that of the UK. Deng Xiaoping left his successors with an easy pragmatic solution to the special status of Hong Kong with the guiding concept “one country, two systems”. Today Hong Kong ranks as one of the most modern cities in the world. It has a population of over 7 million, safe deepwater harbour facilities and one of the most modern airports on the planet.

The Impetus of Guangdong

Merchants from Guangdong had dominated Southeast Asian maritime commerce for centuries until this trade was banned in the 16th century by the Ming Dynasty. Under Mao the coastal areas were also short-changed by his policies of concentrating resources on building up the internal economy far from the coast, which he feared would be vulnerable to enemy attack. The rebirth of Guangdong in the 1990s was endorsed by Deng’s famous “nanxun” tour in 1992. But much of the underlying impetus came from the “guanxi”. It is claimed that as much as 80 percent of the 30 million overseas ethnic Chinese trace their origins to Guangdong, and they subsequently invested billions in the province. In addition, the strategic location of Shenzhen, adjacent to Hong Kong, proved to be essential to the take-off of the region. The Pearl River Delta, which includes Shenzhen, has been described as the “crown jewel of the Chinese economy”, or “the fifth dragon”. Between 1978 and 1993, Guangdong’s economy grew at 13.9 percent – well above the national average. The delta’s growth was still higher at 17.3 percent. An estimated 40 percent of all China’s exports came from Guangdong and 70 percent of Guangdong’s exports, in turn, came from the Pearl River Delta.

Cutting the State-Owned Sector

China’s President, Jiang Zemin, continued Deng’s pragmatic policies. At the 1997 congress he persuaded the CCP to agree to start cutting the size of the state-owned sector of the economy. Although some companies were well managed and profitable, the overall sector was inefficient, loss-making and inflexible. As much as 40 percent of the loans to these enterprises by the state banks were considered to be non-performing. The word “privatisation” was unacceptable to ideological hardliners, but Jiang Zemin persuaded the CCP congress that as many as 100,000 of these enterprises should be divorced from the state and operated according to the principle of “min ying” (“people-run” companies). This ambiguous phrase did not exclude ownership by shareholders. Jiang Zemin explained to the CCP congress that public ownership can and should take multiple forms in its realisation, including mergers, bankruptcy and “downsizing”. Congress also endorsed the principle of direct elections from village level up to larger townships. In 1998 Zhu Rongji, an engineer by training and a successful former mayor of Shanghai, was appointed as the next Premier of China. Convinced that the roles of government and enterprise should be separated, Zhu initiated a swift restructuring of state-owned enterprises. Zhu reduced the size of government and moved toward more market-orientated systems in housing and banking. Instead of using the word “privatisation”, the emphasis was on the “corporatisation” of state-owned companies – making them more responsive to the discipline of the marketplace and to competitive pressures.

Rapid Growth

Over the past 25 years, China has achieved an average annual GDP growth rate of more than 9 percent – more than any other Asian country during a period of rapid development. It suffered sharp slowdowns in the early 1990s and again later in the decade, but quickly recovered. To maintain its rapid growth, China needed to tackle a number of serious problems, e.g. the big burden of non-performing loans carried by its banking sector and its massive and inefficient government sector. The social-security system also proved totally inadequate to prevent social unrest in the event of a downturn. But China also had other strengths beyond its rapid growth. Government revenue has grown rapidly, at some 18 percent a year since 1994, while the budget deficit has been held below 1.6 percent of GDP. China also enjoys one of the world’s highest savings rates at some 40 percent of GDP and very large foreign exchange reserves. It has halved the total numbers employed by state-owned enterprises by way of closures, mergers and privatisations. Those made unemployed by these measures have been absorbed by the rapidly expanding non-state sector. By 2006 the state-owned enterprises accounted for less than one-third of GDP, against almost all of it in the early 1980s. But this still left nearly 140,000 state enterprises employing 40 million people – accounting for more than 50 percent of all industrial assets. In 2003 a new body was created to manage non-financial state assets in order to put a buffer between government and enterprise management.

The Tangled Web of Business Relationships

Two economists at the City University of Hong Kong, Shuhe Li and Shaomin Li, were cited by The Economist of April 8th, 2000, to explain the complexities of doing business with the Chinese. According to Messrs Li, Chinese business relationships are essentially based not on contracts but on personal agreements. The rule-based system in advanced economies is conducted in a publicly verifiable manner (using contracts), under laws that are widely known and consistently enforced (at least in theory). Such a system has grown up over generations and carries large fixed costs involved in the establishment of legislation, judicial interpretation and the enforcement of contracts. Once such a system is in place, people take it for granted – albeit sometimes at their peril. In the Chinese relationship-based system, transactions are purely private affairs, neither verifiable nor enforceable in the public sphere. To avoid being ripped off in this system, you thoroughly check a person’s background, status and assets. Cheaters have to be dealt with in kind – seizure of assets, or other forms of tit-for-tat. A rule-based system requires a high and costly level of public order – including, normally, a huge parasite economy. A relationship-based system needs only minimal public order. But the marginal costs of finding, screening and monitoring a potential partner are extremely high. Relationships have to be managed personally. Interaction cannot be delegated – forcing executives to answer their own phones. As a result of the high marginal cost of cultivating new relationships, doing business begins first with close family, then extended family, then neighbours from your home town, then former classmates, and only then, reluctantly, with strangers. This underlying network is the essence of the “guanxi” system. Foreigners are well advised to scrutinise their business contacts closely: where outsiders see an opportunity to profit, insiders may see an opportunity to loot. 
James McGregor, who spent nearly two decades in China – first as Wall Street Journal correspondent, then as Dow Jones representative and finally as venture-capital consultant – shares his experience in his book One Billion Customers – Lessons from the Front Lines of Doing Business in China (Free Press, 2005). He advises persons venturing into China to avoid joint ventures with government entities, to be wary of the person who hires your staff assistants, to keep asking questions about how the system works, not to rely on the protection of the law, not to rely on one individual for access to government officials, and to avoid quick deals.

McGregor maintains that “... the Chinese always need to get concessions from you … The overall system is almost incompatible with honesty … Assume that all procurement departments are corrupt, that suppliers need to be told not to bribe and that technology firms will always be ripped off. Another expectation is that you will always be cheated; that business in China is about survival, … for someone to win, somebody has to lose.”

Banking

The major banks in China – among them the Bank of China and the Industrial and Commercial Bank of China – are all government-owned. In 2005 a programme was launched to sharpen their operations as commercial institutions with public share offerings in Hong Kong. In advance of their listings the banks also sold strategic stakes to selected foreign financial institutions, but foreign ownership of banks is capped at 25 percent. The CCP regards state ownership and control of the banks as a vital bulwark against financial instability. It believes it is essential to shore up public confidence in the banking system which, in turn, is needed for maintaining the flow of savings and foreign investment. It is feared that encouraging the development of privately controlled banks in China would exacerbate financial risk. With its huge foreign exchange reserves, strong revenues and low deficit, China has to date remained in a strong position to prop up its banks. Given the widespread concerns about old age and other social-security provisions, the savings rate is likely to stay high in the foreseeable future. The scope of non-performing loans in the banks’ portfolios is difficult to establish in view of the commonly used accounting trick of moving such loans into asset-management companies. Boosting private consumption to reduce China’s heavy reliance on exports is not an easy option in view of the low levels of per capita income.

Manufacturing Business

Some Chinese companies have come in for stringent criticism of the quality of their manufacturing. After poisoned dairy products killed six children in 2008, the chairwoman of Sanlu, the most notable producer, was sentenced to life in prison. Two suppliers were condemned to death. Undoubtedly, lots of high-quality goods are made in China, from sporting goods and MP3 players to luxury clothing. China has also become the world’s largest exporter of information and communications technology. According to 2008 figures, in transport alone there are a dozen sizeable car-makers, 300 tyre-makers, 1,000 bicycle-makers and several thousand scooter-makers. More than 3,500 watch-makers list their services on Alibaba, a sourcing website, as do 8,000 razor-makers. A myriad of companies turn out fake Gillettes and Rolexes sold on the streets. Explicitly state-controlled firms make up half of the economy. But because even private firms understand that their existence depends on their relations with the bureaucrats, the true extent of government intervention is probably understated. In state-controlled companies, senior managers are rotated at the behest of government officials. Most factories inevitably occupy land that was once state-held. As a consequence, their shareholding often includes local government. Officials have little interest in industrial efficiency. They consider mergers unattractive if they mean losses of local jobs. The picture on the wall of a typical corporate office is invariably a photograph of a visit by a senior government official. Blurred ownership tends to distort finance, management structure and long-term planning. To insulate themselves from the vicissitudes of state control, companies go through all manner of legal contortions when they list shares. Securities offerings must be approved by government officials and the bulk of legal financing comes from state-controlled banks. With this myriad of political ties, innovation is difficult.

Theoretically the smaller private firms are more flexible. But raising money is hard. Loans to such firms account for only a small part of total lending by state-controlled banks. The source of small firms’ money is one of China’s great mysteries. The hints are that grey-market financiers – including pawn shops, “credit-guarantee” firms and small industrial companies that lend to other smaller companies – are filling the financing gap. Because such financing is informal, short-term and based on personal relationships, it is uncertain and unreliable. None of these serious impediments has prevented China’s economy from growing. The extraordinary way in which money, people and enterprises interact illustrates the country’s adaptability and the resilience of the spirit of enterprise. (See The Economist, Chinese Business, February 21st, 2009, pp.62-64)

Trade Patterns

Angus Maddison, an economic historian at the University of Groningen, has estimated that between 1600 and the early 19th century, China accounted for between a quarter and a third of global output. During the period 1950 to 1973 its share declined to 4 percent; it then rose to 11 percent in 1998 and 18 percent in 2006. (Quoted in The Economist, March 13th, 2006, p.4) With a trade-to-GDP ratio of around 75 percent in 2005 and a large volume of foreign investment, China has become one of the world’s most open economies: the sum of its total exports and imports of goods and services amounts to around 75 percent of its GDP. To focus only on China’s growing share of global output and exports is to miss half the story. China’s imports have risen at the same pace as its exports. Thus China is giving a big boost to both global supply and demand. The “positive supply-side shock” given to the world economy by China’s entry into world trade patterns has increased the world’s potential growth rate, helped to hold down inflation and triggered changes in the relative prices of labour, capital, goods and assets. The entry of China’s vast army of cheap workers into the international system of production and trade has reduced the bargaining power of workers in developed economies. The threat that firms could produce offshore helped to keep a lid on wages. (See The Economist, July 30th, 2005, pp.65-67) Not only were the prices of the goods that China exported falling, but the prices of the goods it imported were rising, notably oil and other raw materials. China became the world’s biggest importer of many commodities, such as steel, copper and coal, and the second biggest consumer of oil. The upward pressure that Chinese imports exerted on the prices of commodities and raw materials was offset by the downward pressure of Chinese manufactured exports. Hence China played a role in keeping a lid on inflation. 
Much of the world’s low-cost manufacturing has shifted to the Chinese mainland. The Chinese manufacturing machine has sucked up vast quantities of raw materials and resources from many parts of the world. This gigantic manufacturing machine produces goods for the domestic market, but also vast quantities for its booming export market. China’s manufacturing machine has also sucked up vast quantities of parts and components for final assembly from other parts of Asia – Thailand, Malaysia, Singapore, the Philippines and Indonesia, as well as richer Taiwan and South Korea. China has been integrated into existing and highly sophisticated pan-Asian production networks. All members of the pan-Asian network have benefited – even rich Japan, which in 2002-2003 was pulled out of a decade-and-a-half slump by Chinese demand for top-notch components and capital goods. Economic inter-dependence between the two countries has grown by leaps and bounds since 2003. China has not only lifted Japan out of stagnation, it became its biggest trading partner in 2004. Japan now imports finished goods such as office machines and computers from China. In 2005, the total volume of trade between the two countries was nearly $190 billion. Japan also accounted for 11 percent of foreign direct investment in China, making it the largest foreign investor in China.

South-East Asia also received a major boost from its trade with China. Rich in resources, including rubber, crude oil, palm oil and natural gas, it is likely to profit from China’s appetite for raw materials for a long time to come. For South Korea, Taiwan, Hong Kong and Singapore, trade has turned from the rich world towards China as their biggest trading partner. In China itself, the processing and assembly of imported parts and components now account for more than half of its exports. Much of China’s growing trade surplus can be explained by this assembly-based trade formula. China has become a major economic power, not only on account of its fast-growing exports, but also increasingly from being a major buyer, investor and provider of aid. China’s huge imports are a major source of influence in many countries. It is the source of a new kind of power: intertwined economic and political power. China’s presence as a commercial force is rapidly being felt around the world, through its growing investments overseas and through an apparently insatiable hunger for resources to fuel its own industrial revolution at home. Planeloads and shiploads of oil-drillers, pipe-layers and construction workers are sent from China to work on oil rigs or build ports, highways or railways in South-East Asia, Africa, Latin America or the Middle East. Chinese workers are also fanning out to neighbouring countries in less formal ways to work on farms, forest plantations and market gardens. Throughout the 19th century, thousands of indentured workers were lured by Chinese and Western recruiters to work on the guano deposits of Peru, the cane fields of the Caribbean islands or the goldfields of Australia and South Africa. Now Chinese workers are present in many parts of the world with Chinese capital behind them.

China’s Exports by Destination (Percentage of Total)

                   1996    2006
Japan              20.4     9.5
Hong Kong          21.8    16.0
United States      17.7    21.0
European Union     13.1    18.8
Taiwan              1.9     2.1
South Korea         5.0     4.6
Britain             2.1     2.5
Singapore           2.5     2.4
Other              15.5    23.1

(Source: The Economist, March 21st, 2007, p.9)

Foreign Acquisitions

Over the years the Chinese government and the large companies it controls have not concealed their hunger for foreign assets. But despite their overflowing coffers of foreign reserves, they face some obstacles in making the desired acquisitions. In 2005 America’s Congress blocked the efforts of the part state-owned China National Offshore Oil Corporation (CNOOC) to buy Unocal, an American rival. Chinalco, a state-controlled aluminium firm, has stirred up concern in Australia with its bid to enlarge its stake in Rio Tinto, an Anglo-Australian mining giant. As a result, big Chinese firms with government ties have been looking for more oblique and less obtrusive ways to expand abroad. On May 24th, PetroChina, another partially state-owned oil firm, announced that it would buy (at a substantial premium to the stock market price) a big stake in Singapore Petroleum, a refiner. This acquisition hints at a new tactic, bypassing the unease shown by national governments about approving outright acquisitions by Chinese firms. One reason for the unease is the lack of reciprocity. China itself tolerates little involvement by non-Chinese companies on its own turf: news media, banking and the energy industry. To overcome obstacles, several Chinese investments have taken more roundabout forms. China has agreed to lend billions of dollars to state-controlled Brazilian and

Russian oil firms in exchange for long-term supplies of crude. Also, CNOOC and PetroChina have each done billion-dollar deals tied to the development of specific gas projects in Australia, thereby blurring the line between investment and supply contracts. PetroChina has also indicated that Singapore Petroleum could serve as a platform for other transactions in the future. But the Sino-Trojan-horse formula is not likely to go undetected.

Demographic Patterns

A look at China’s GDP statistics reveals the complexity of making sense of the available data. Economists have long doubted the credibility of Chinese official statistics, particularly the overstatement of GDP growth. Dragonomics, a research firm in Beijing, estimates that GDP growth was 5 percent in 1980, -1 percent in 1990, 5 percent in 2000, 13 percent in 2005 and 8 percent in 2008. The problem with government-massaged statistics is that politically embarrassing bad news is often understated or not published at all. The more eyes there are on China, and the more crucial its economic performance becomes for the rest of the world, the harder it becomes for officials to tamper with statistics. GDP figures are significantly distorted by regional variation. National averages conceal regional inequalities, such as those between the poor interior regions and rich Guangdong. The cities of Shanghai and particularly Hong Kong have higher per capita incomes than the UK. Besides the regional inequalities, there is a serious wealth gap between city and countryside. Where city dwellers have washing machines and colour TVs, the most widely owned consumer durable, found in 70 percent of farm households, is the sewing machine. Helped by the one-child-per-couple policy introduced in the late 1970s, China has a large working-age population with a small number of dependents. But as the number of young workers starts declining, the “demographic bonus” will run out. According to the UN’s World Population Prospects (2004 revision), China’s 15-24 age category stood at 18 percent in 1990 and is expected to decline to 12 percent in 2010 and to 10 percent in 2050. China’s growing dependency ratio could be mitigated by the rising productivity of an increasingly well-educated workforce. It could also be counteracted by productivity gains as more people migrate from rural to urban areas, and by an easing of the one-child policy. 
China’s population control measures are said to have resulted in some 300 million fewer births in the last 30 years. But while such measures may have helped to ease pressure on scarce resources and reduce poverty, they are also aggravating demographic imbalances that could undermine these gains. In the next two decades, the proportion of China’s population aged 65 and over will begin swelling rapidly while growth of the working-age population slows. If current trends continue, the ratio of working-age persons to retirees will fall from six in 2004 to two in 2040. That will impose huge financial burdens to meet pension commitments to the elderly. Urban China, in particular, is facing the “4-2-1 phenomenon”, which refers to four grandparents and two only-child parents being supported by only one child. The sex ratio is also becoming increasingly skewed. Cultural bias in favour of males has produced an officially recorded ratio at birth of 118 boys to 100 girls in 2000. The normal ratio is about 105 to 100. But some births are not recorded, in order to avoid reprisals by zealous family-planning officials. A further distortion is caused by selective abortions. The desire for large families has also been blunted by China’s transition in recent years to a market economy. Health care, education and housing, once provided virtually free, are now costly. Even in some rural areas where authorities have experimented with allowing farmers to have two children unconditionally, parents have shown little inclination to increase their families. But there is a growing feeling that a two-child policy would be more suitable. Rich families are increasingly willing to pay the fines or sometimes even buy in vitro fertilisation treatment. Some try to have a second child abroad, so that the child can get a foreign passport and not be counted by Chinese family-planning officials.

The Chinese government has introduced a new pension scheme whereby it invests on behalf of individual workers and then pays them pensions from their individual accounts. This scheme is highly dependent on the development of a mature bond and equity market for its success.

Centralised Government

China’s political system is said to be one of sharp elbows and centrifugal forces. But despite the tensions between the centre and the periphery, China’s political system has not disintegrated into a warring quagmire of fiefdoms. At the centre there are perhaps no more than 200 unelected, often elderly party veterans, who have kept control of the country as a whole and of the reform process. They have succeeded in holding on to control by devolving responsibility for economic growth to the local and regional levels, by way of the highly disciplined totalitarian single-party system. Party discipline is enforced by keeping tight control over criteria for party membership and especially over the hiring and firing of local and provincial officials. The collection of tax revenues is centralised, while the granting of credit to localities, regions and state enterprises is carefully apportioned and rationed. The decision-making process is not subjected to the glare of competitive news media or the pressure of public opinion. Dissemination of information in the public domain is carefully scrutinised and regulated.

Civil Rights

The CCP’s grip on power to date has largely rested on its control of the military and the police forces, buttressed by the benefits of strong economic growth. But in the wake of economic growth normally come rising expectations on the part of Chinese citizens. On the part of the wider world, the expectations are for a China wedded not just to capitalism but to the principles of open government, private initiative, sound corporate governance and legal impartiality. Elsewhere in Asia, rapid economic growth has often gone hand in hand with political change. In China there are no clear indications of the anticipated rising expectations manifesting themselves in civic life. Serious political reform does not currently appear on the CCP’s agenda. It appears that there are, as yet, no particular rallying causes active in China today. Sporadic public protests take place in the countryside as well as in towns and cities, but the central government still enjoys considerable support. Angry peasants from time to time direct their resentment at rural authorities rather than at the central leadership of the CCP. The traditional view of government is based on a positive and favourable image of the good emperor – if only you could get through to him through all the layers of bad bureaucrats. Thousands of people go to Beijing every year to seek redress of local injustices. Their inevitable disappointment does not dim their belief that it is the local authorities, rather than the top leadership or the system itself, that are the problem. Outspoken newspapers are closed to stifle public criticism of government. Dissidents who have posted their views on the internet are jailed. Party officials are subjected to intensive indoctrination campaigns. Congresses are mere rubber-stamp events held every five years to confirm party policy and name new leaders. Inter-personal contests amongst leaders take place behind the scenes and outside the public glare. 
Only a few isolated broad strands of political movement can be identified amongst intellectuals and party officials. Growing inequalities of wealth and access to public services have prompted strident criticism of economic “neo-liberalism”. Champions of a more caring, worker-friendly kind of capitalism are dubbed the “new left”. Wang Hui, editor of the outspoken literary journal “Dushu”, propagated the idea that workers should be allowed to have independent trade unions rather than impotent party-controlled unions. He also argued against the way state-owned enterprises are being privatised, fearing that an oligarchy of a wealthy elite controlling the country’s resources might be created, as happened in Russia. Opposition was also expressed by new-left intellectuals against allowing domestic private investment in the state’s hitherto jealously guarded preserves such as power, railways and telecommunications.

In response, the government has tightened control over management buy-outs of state-owned enterprises. The “new left” also expressed opposition to a new draft law on property ownership. The 2009 Report of Amnesty International reported that in 2008, the year that the Olympic Games came to Beijing, Chinese authorities “intensified their use of administrative forms of detention which allowed police to incarcerate individuals without trial”. Instead of ushering in a happier, freer China, the Olympic Games had brought “heightened repression throughout the country”, with tighter state control over human-rights activists, religious groups, lawyers and journalists.

Chinese Law

China’s legal structure is loosely based on a civil law system, largely derived from Soviet-era civil code principles founded on the political philosophy of state totalitarianism, incorporating elements of long-established imperial legal codes. It is essentially controlled within the framework of the one-party rule of the Chinese Communist Party. This means the judiciary is not independent; judicial proceedings are not open to public scrutiny and hearings are conducted in secrecy. The concept of state security overrides any competing legal claim; the rules applicable are not transparent or subject to established legal principles. In the Western world, the “rule of law” is understood in terms of established liberal constitutional democratic principles (involving the sovereignty of law conditioned by the separation of powers, the independence of the judiciary, the safeguarding of individual rights and the audi alteram partem rule). The “rule of law”, in this sense, is not applicable in China. The overriding characteristic is that the dictates and interests of the Chinese Communist Party government enjoy ipso facto pre-eminence and priority over the interests or claims of any individual person or company. It is a totalitarian one-party dictatorship. The legal drama which involved employees of the international mining company Rio Tinto in 2009/10 illustrates the opaque nature of Chinese law. There is a murky line in China between state and commercial interests or “secrets” in industries where government monopolies mean the big players are all state-owned. As a result it is not clear where the line is between legitimate commercial information gathering and “criminal” action to obtain “state secrets”. The same opaqueness applies to the realm of civil rights. Citizens’ rights are determined by totalitarian Communist Party rule.

Oil, Coal and Pollution

Behind the USA, China is the world’s second biggest oil importer, with imports meeting some 40 percent of its demand. The government is committed to a car-led development path, which implies continued growth in oil consumption. Some 45,000 km of expressways have been built or are under construction. The government is also supporting a domestic car industry, which it sees as an engine of future growth. The number of cars in China leapt from just 4 million in 2000 to 19 million in 2005. This figure is predicted to double by 2010 and reach 130 million by 2020. China is the world’s biggest producer of coal, digging out 2.2 billion tonnes in 2005. Coal also accounted then for 80 percent of China’s energy use. China is also breaking new ground in liquefying coal to make oil substitutes, which may in the long run help reduce its reliance on imported oil. But the abundant use of coal also meant that China overtook the United States in 2009 as the world’s biggest producer of carbon emissions. In 2007 its share was 17 percent of the world’s total, against the USA’s 22 percent. Twenty of the world’s most polluted cities are in China.

Strategic Issues

The rapid spread of internet technology in China in recent years has provided new forums for citizens to air their views. Unfortunately China’s internet censors closely monitor debate on internal issues, so that broad public participation is severely constrained. This also limits the ability of the free world to penetrate the “bamboo curtain”.

To the disquiet of the Free World, China has followed a consistent path of cosying up to pariah governments around the world – Venezuela, Zimbabwe, Sudan, Iran and North Korea. China imports 11 percent of its oil requirements from Iran – despite efforts by the UN to impose sanctions on Iran to prevent it from developing nuclear weapons. China’s growing military budget reflects its growing wealth and prestige, along with its desire to protect its rising shipments of oil and other commodities. But China has also test-fired a rocket that destroyed one of its old weather satellites in space. China is also building roads, ports and pipelines in Myanmar and Pakistan, connecting west and south-west China with the Bay of Bengal and the Indian Ocean. These links could serve as future supply routes for the Chinese navy. The extension of the Qinghai-Lhasa railway could facilitate the transport of military materiel to the Tibetan border – and, if necessary, put strategic pressure on India. China’s People’s Liberation Army (PLA) of 2.5 million may be the largest in the world and, assisted by North Korea’s 2 million, constitutes an impressive military force. In reality it is neither well trained nor well equipped. Its officer class is rife with party and family nepotism. Many of its soldiers are semi-literate rural peasants. China’s military technology also has much catching up to do. It nevertheless is huge in terms of sheer numerical preponderance. China has been systematically hiding from its population both its own history and conditions in the world outside. Much is done to constantly remind the population of the Japanese atrocities during the occupation of China in World War II. But the Chinese people are not told that, under past rulers, Chinese committed cruel atrocities on their own people. The civil war between the Communists and the Nationalist Kuomintang (KMT) claimed as many lives as the Japanese occupation.
After the Communist victory in 1949, an estimated 2-3 million landowners were killed in the early 1950s and many intellectuals died during the anti-rightist movement of 1957. To cap it all, no fewer than 30 million fell victim to the famine that followed the Great Leap Forward (1958-61). New research also suggests that millions more than first thought died during the state-sponsored anarchy of the Cultural Revolution of 1966-76. By those standards the few hundred killed in the Tiananmen Square massacre of 1989 are considered insignificant by the Chinese Communist Party. Although the power of Communist ideology is much reduced as the young generation immerses itself in the promise of the new prosperous era, the vacuum has been filled with a strong sense of chauvinistic nationalism. For the present that nationalism is focused on the reintegration of Taiwan. It appears that China might hold the preponderance of power to force Taiwan’s return to China’s fold. In the interim, both sides of the divide probably realise that an open conflict is too costly to contemplate. Reactionary nationalist sentiments have deep roots in the Chinese intellectual tradition. As the global economy sputtered, there were many signs of the revival of an extreme fringe group that pines for Maoist egalitarianism, state ownership and anti-West action. A clutch of websites in China actively spreads pro-communist rhetoric suffused with a sense of China as victim and a yearning for revenge. Although reactionary Maoism is not likely to make a comeback soon, its nationalism has a broad appeal. This boiling nationalism was further stimulated by China’s sporting triumph at the Olympic Games in Beijing in August 2008. The West presented a gratifying target for pent-up contempt. Even the normally cautious government felt tempted to flex its muscle on the world stage.
(See The Economist, “China and the West”, March 21st, 2009, pp.29-31) For most of the period since 1990, China has played a cautious game internationally. Deng Xiaoping set the tone with his concise guidelines: China should keep a low profile, not take the lead, watch developments patiently and keep its capabilities hidden. The global economic crisis and the West’s obvious weaknesses created new opportunities. In a speech at Cambridge University in February 2009, Wen Jiabao, China’s Prime Minister, stressed that China’s development was no threat to anyone because it is a “peaceful and co-operative great power”. Chinese leaders are careful not to fuel suspicions in the West that China is a threat. China would like to be number one, but would at this stage rather get there without making big enemies.

China and the Global Financial Crisis of 2008

In its briefing on China’s economy, The Economist of January 16th, 2010, reports that the Chinese economy rebounded more swiftly from the global downturn than any other big economy, due largely to its enormous monetary and fiscal stimulus. In the year to the fourth quarter of 2009, its real GDP is estimated to have grown by more than 10 percent. High saving and an undervalued exchange rate have fuelled rapid export-led growth and the world’s biggest current-account surplus. In the light of China’s notoriously dodgy official statistics, some foreign observers sounded warnings about the dangers of a “bubble economy”: overvalued asset prices, over-investment and excessive bank lending. The Economist rejected these concerns. As far as the “asset bubble” is concerned, it pointed out that price-earnings ratios of Shanghai A shares, at 28, were well below their long-run average of 37. It also noted that Chinese profits had rebounded faster than those elsewhere and were up to 70 percent higher than a year before. It further pointed out that the Chinese property market had avoided a credit-fuelled property boom because one-quarter of Chinese homebuyers pay cash and the average mortgage covers only 50 percent of a property’s value. China’s property boom is financed by saving, not bank lending, and is thus much less dangerous than a property boom fuelled by credit, where highly leveraged speculators are forced to sell, pushing prices lower and causing borrowers to default. As far as the “over-investment” charge is concerned (a high investment to GDP ratio), The Economist points out that China is still a relatively poor country with a very low capital stock per capita, at only 5 percent of America’s or Japan’s. In addition, most of the 2008/09 investment boom went into infrastructure, not manufacturing. China still needs a vast expansion of its infrastructure of roads, power grids, railways and housing.
Although China’s incremental capital-output ratio (ICOR) – calculated as annual investment divided by the annual increase in GDP – was, in 2009, more than double its average in the 1980s and 1990s, it is more informative to look at growth over a longer period. The Economist argues that the best measure of the efficiency of investment is total factor productivity (TFP) – calculated as the increase in output not directly accounted for by extra inputs of capital and labour. Over the past two decades China has enjoyed the fastest growth in TFP of any country in the world. China’s investment in roads, railways and power grids will help China to sustain its growth in years ahead. Concerning the fear that bank lending in China is growing too fast, The Economist points out that bank lending is flowing into useful investments, not fuelling asset prices and excess capacity. The Chinese central bank has raised banks’ reserve requirements and lifted interest rates, and needs to continue along this macroeconomic path to avoid the risk of bubbles and excess capacity. The Economist points out that since 2004, the rise in China’s excess credit (the gap between the growth rates of credit and nominal GDP) has been less than in most developed economies. Official gross government debt is estimated at less than 20 percent of GDP but excludes local government debt and the bonds issued by asset management companies that took over the banks’ non-performing loans. Total government debt could be as high as 50 percent of GDP – but still much lower than the average ratio in rich countries of around 90 percent. Moreover, the Chinese government owns plenty of assets (e.g. shares in listed companies) which are worth 35 percent of GDP. The strongest point of critique raised by The Economist is China’s official policy of holding down the value of the yuan.
China’s main excuse for holding down the yuan is to support its battered exports, but that argument is not tenable in the light of the rebound in exports. It is argued that China’s economic clout brings with it demands that the country should play a more responsible role in global affairs – taking into account the interests of the world as a whole as well as its own citizens. (See The Economist, January 16th-22nd, 2010, pp.61-63)
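The two yardsticks cited in this briefing – ICOR and “excess credit” – are simple ratios that follow directly from their definitions. As a minimal sketch, using purely hypothetical figures (not The Economist’s data):

```python
def icor(annual_investment, annual_gdp_increase):
    """Incremental capital-output ratio: annual investment
    divided by the annual increase in GDP."""
    return annual_investment / annual_gdp_increase

def excess_credit(credit_growth_pct, nominal_gdp_growth_pct):
    """Gap between the growth rates of credit and nominal GDP,
    in percentage points."""
    return credit_growth_pct - nominal_gdp_growth_pct

# A hypothetical economy invests 4,000bn to generate an extra 800bn of GDP:
print(icor(4000, 800))       # 5.0 -> five units of investment per unit of new output
# Credit grows 30% while nominal GDP grows 9%:
print(excess_credit(30, 9))  # 21 percentage points of "excess" credit growth
```

A higher ICOR means each extra unit of output requires more investment; the TFP measure mentioned above cannot be sketched this briefly, since it requires estimating a full production function.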


Conclusions

China’s spectacular recovery from the stagnant depths of the Maoist era in the 1945-1975 period is truly remarkable. A recent study by Goldman Sachs projects that China’s economy will be bigger than America’s by 2027, and nearly twice as large by 2050. Some futurologists predict that the world would by then live under a Pax Sinica, with the dollar replaced by the yuan as the world’s reserve currency, and New York and London replaced by Shanghai as the centre of finance. Global citizens will use Mandarin as much as, if not more than, English and the thoughts of Confucius will become as familiar as those of Plato. European countries will become quaint relics of a glorious past, like Athens and Rome today. With the West in financial turmoil and its leaders seemingly desperate for cash-rich China to come to its rescue, Chinese leaders can see many strategic opportunities: to acquire assets at bargain prices and to exploit political vacuums in many international hot spots like the Middle East and Sub-Saharan Africa. It is now the Chinese who are doing the lecturing. Simplistic political and economic extrapolations, however, do not take into account the many uncertainties and imponderables that can come into play. China, like many other countries, also faces the problems of an ageing population, of expectations rising faster than the capacity of the system to deliver, of the destructive power of corruption and nepotism and of the abuse of political power. The sheer task of governing this vast country with its huge population poses gigantic challenges to whichever system of organisation and management is brought to the task. China is also heavily dependent on its collaboration with the West: its technology, its markets, its natural resources and its investments. Optimistic expectations of the emergence of a free, open constitutional democracy in China are wishful thinking. China has never experienced an open pluralistic society.
It has developed neither the mindset of civic consciousness nor the associational, community-based framework to serve as a foundation for a democratic infrastructure. Sporadic and isolated outbursts of discontent may occur, but a deep-rooted democratic transformation appears to be still decades, if not generations, away.

References

Goodman, S.G. (1994) Deng Xiaoping and the Chinese Revolution: A Political Biography, London: Routledge
Lieberthal, K. (1995) Governing China: From Revolution Through Reform, New York: W.W. Norton
Yergin, D. & Stanislaw, J. (1998) The Commanding Heights: The Battle Between Government and the Marketplace That Is Remaking The Modern World, New York: Simon & Schuster
The Economist (2000) A Survey of China, April 8th, 2000, pp.3-23
The Economist (2002) A Survey of China, June 15th, 2002, pp.3-18
The Economist (2004) The Dragon and the Eagle, October 2nd, 2004, pp.3-24
The Economist (2006) A Survey of China, March 25th, 2006, pp.3-20
The Economist (2009) Chinese Business, February 21st, 2009, pp.61-64
The Economist (2009) China and the West, March 21st, 2009, pp.29-31


11 Australia - the Lucky Country

Australia is referred to as the “lucky country” on account of its bonanza of natural and human endowments. No country in the world is geologically endowed, in per capita terms, with such a treasure trove of marketable mineral resources. No other country has been bequeathed by its history, in per capita terms, with such a large proportion of educated, trained and skilled migrants. Comparatively speaking, Australia has been treated very kindly by its geography and its history. Australia’s land area comprises 7,682,300 square kilometres, making it the sixth largest country in the world. For most of Australia’s human history, Australia and the islands of New Guinea, Timor and Tasmania were joined, and Aborigines from both areas shared the same territories. As sea levels rose, Australia was distanced from the islands to the north and south and Indonesia was transformed into a chain of islands. The rising seas cut the Aborigines off from the world until the Dutch explorers came and, much later, Captain Cook’s exploratory journey identified Botany Bay as a suitable destination for convicts from the over-crowded British prisons. At the time of the arrival of the first European settlers, it was estimated that around 300,000 indigenous people lived in Australia. From its ancient indigenous origins through the British colonial period, followed by waves of European and international migration in the 20th century, people have settled in Australia from all over the world. Each wave of immigrants left a profound impact on Australian society and culture: language, political institutions, technology, value systems and everything that constitutes part of its predominant way of life. During the period prior to World War II, the ethnic composition of immigration was 80 percent British, 8.3 percent other European and 3.2 percent other. The imposition of restrictions on immigration in earlier times preserved an English-speaking culture.
Today, with a population of only around 21 million, Australia is one of the most sparsely populated countries. Its ethnic composition is 92 percent European descent, 5.5 percent Asian descent and only around 2.5 percent Aboriginal. Of the current population, around 5 million Australians were born overseas (about 1 in 4).

Patterns of Migration

After America refused to accept further shipments of convicts towards the end of the 18th century, England’s prisons began to overflow. Prisoners were first accommodated in decommissioned ships (“hulks”) for several decades, but the ultimate solution was New South Wales (Australia), following James Cook’s discoveries in the 1770s. The “First Fleet”, carrying 736 prisoners (188 of them women), set off from England in May 1787 and took 252 days to reach its destination in Botany Bay, Sydney, in January 1788. The second fleet arrived in 1790 and became known as the “Death Fleet” because the convicts were so badly treated that more than a quarter died during the voyage. By the 1830s around 3,000 convicts arrived in Australia each year to be assigned to “free” Australians (free settlers) and “emancipists” (those who had served their sentences) as cheap labour. Repeat offenders were sent for severe punishment to Macquarie Harbour in Tasmania and to Norfolk Island in the Pacific. The last convict ship left Britain for Australia in 1868. The records show that by that time 161,021 men and 24,900 women had been sent as convicts to Australia. (See Russell King ed., Origins – An Atlas of Human Migration, Marshall Editions, 2007, pp.95-105) From its national beginnings Australia imposed various restrictions on immigration intended to accomplish two main objectives: (1) to maintain the largely British ethnic composition of its population, and especially to bar Asians; and (2) to attract mainly agricultural rather than industrial workers. Australia pursued the first objective by firm adherence to its “White Australia” policy, which barred Asian immigration. It sought the second objective by encouraging immigration especially from Great Britain through such measures as maintaining agents abroad and paying part of the passage of desirable immigrants. The money used to subsidise the fares of migrants came from the sale of Australian land. Over the first hundred years many new towns and cities were established and by the mid-1830s more free migrants than convicts were arriving. By 1850 there were about 400,000 white people in Australia – mostly concentrated in the south-eastern corner of the country. Geoffrey Blainey, in A Shorter History of Australia (Random House, 1994), p.52, describes the impact of the immigration system as follows: “A government agent usually selected the migrants in the British Isles, and after the newcomers stepped ashore – they were often cared for by the colonial government during the first weeks – especially if jobs were scarce. Here was one of the mainsprings of the welfare state which emerged early in Australia and New Zealand. As most migrants were subsidised, they tended to lean on the government that initially cared for them. Self-help dominated American attitudes, but ‘lean on the government’ was common amongst Australian attitudes.” For most of the century – except in the gold rushes of the 1850s – Australia tended to attract those migrants who were slightly less willing to stand on their own feet than those going to the United States. Australia remained primarily a haven for British settlers at the very time when thousands of Germans, followed by Italians, Scandinavians and eastern Europeans poured into the USA. According to Geoffrey Blainey’s analysis: “… Nothing did more to give Australia an ethnic unity than the practice of selecting and subsidising migrants. This sense of unity was to encourage later generations of Australians to fight in Britain’s wars on the far side of the world”. (op.cit., p.53)

Agriculture, Forestry and Fisheries

In its early years, agriculture was the mainstay of the development of the Australian nation. The first grants of land were made in New South Wales in 1787, by way of grants and orders in the name of the Crown, exercised by the Governor under instructions issued by the Secretary of State in London. Initially, the Governor was only authorised to make grants to emancipated prisoners: free from all taxes, rents, fees and other acknowledgements for ten years. Unmarried males could be given 10 acres and married men 20 acres or more – plus a further 10 acres for each child living with his or her parents at the time of making the grant. By 1789, the privilege of obtaining grants was extended to free migrants and to such of the men belonging to the detachment of marines serving in New South Wales – which then included the whole of the eastern part of Australia. The maximum grant in such cases was not to exceed 100 acres and was subject to a quit-rent of one shilling per annum for every fifty acres. Australia is a relatively flat country with a mean elevation of 200 metres. The Great Dividing Range spans the length of the Eastern Seaboard. Most of its soils are shallow and infertile, and large areas are affected by salt or acidity. With the exception of Antarctica, Australia is the driest continent. The wet northern summer is suited to beef-cattle grazing inland and the growing of sugar and tropical fruits in coastal areas. The drier summer conditions of southern Australia favour wheat and dry-land cereal farming, sheep grazing and dairy farming in the higher rainfall areas, as well as beef cattle. Within regions there is also a high degree of rainfall variability from year to year, which results in long periods of drought. No less than 70 percent of the water stored is accounted for by the agricultural sector. Despite Australia’s harsh conditions, agriculture is the most extensive form of land use.
In 1999, the total area of agricultural establishments in the country was 453.7 million hectares, representing about 59 percent of the total land area – most of it used for livestock grazing. Cultivation of the soil expanded from 1,188,282 acres in 1860 to 8,812,463 acres in 1900. In time, the bulk of the cultivated soil was used to grow wheat, oats, maize, potatoes, green forage, vines and orchards. Poultry farming, along with dairy farming, developed into a major farming industry.

The land used for irrigation represents less than 1 percent of the total land used for agriculture. Most of it is located within the confines of the Murray Darling Basin, which covers parts of New South Wales, Victoria and South Australia. Vegetables, fruit and sugarcane are the most extensively irrigated crops. Australia is the world’s largest producer of wool, accounting for about 30 percent of world production. Sheep numbers reached a peak of 180 million in 1970. Poor market prospects for wool since 1990 led to a sharp decline in flock numbers to around 92.7 million in 2006 and a fall in wool production of about 40 percent – largely as a result of strong competition from other fibres and lifestyle changes towards more easy-care clothing. Until the late 1950s agricultural products accounted for more than 80 percent of the value of Australia’s exports. Since then, that proportion has declined markedly as a result of the diversification of the Australian economy. This decline is not due to a decline in agricultural activity but to the growth of the mining, manufacturing and services sectors; agricultural output has actually increased significantly since the 1950s. The direct contribution of agriculture to GDP has remained around 3 percent during the last decade, with wool, beef, cotton and sugar being the most important products. Dairy produce, fruit and rice also contribute significantly to the global rural trade. The farm-gate output of agriculture (as well as the 12 percent contributed by forestry and fisheries) understates the spill-over effect of primary industries in the national economy. The sector employs over 500,000 people and its upstream and downstream contribution to manufacturing, trade and services throughout Australia’s cities and towns is of critical importance. The strategic importance of food self-sufficiency in an international perspective is immeasurable.

Manufacturing

Before federation, the first Australian manufacturing was based on the waterfront: repairing visiting vessels, brewing beer and making biscuits. Later in the 19th century the fringe suburbs of the main coastal settlements created thousands of jobs for boilermakers, engineers, iron founders and brick makers – replacing the older trades of small workshops such as saddle making, coach building and dressmaking. After federation, manufacturing prospered but was devastated by the 1930s Depression. Manufacturing led the recovery from the Depression, accounting for 25 percent of total employment by 1940-41. World War II provided further fertile ground for the expansion of key industries for the production of munitions, ships, aircraft, machinery and chemicals, but also for more traditional industries such as wood working and clothing. During the 1950s and 1960s, the entire economy’s expansion was fuelled by large-scale immigration, scientific and technical innovation as well as the increasing availability of raw materials. The pre-war import licensing and tariff protection controls were retained. Manufacturing’s share of GDP and employment reached historic heights. Increased national income and population numbers drove the demand for consumer goods: motor vehicles, electrical goods, chemicals. During the 1960s, the relative competitiveness of Australian manufacturing declined and by the early 1970s the world economic environment had changed dramatically. The “stagflation” of the Australian economy reflected a world-wide recession, triggered by the oil price rises of 1973-74. Australia experienced substantial declines in employment levels during the 1970s: manufacturing’s share of employment fell from 25 percent in 1970 to 18 percent in 1985, and its proportion of GDP fell from its 1960 high to 18 percent in 1985.
Increasing competition from newly industrialised Asian economies and fluctuating exchange rates, together with domestic workforce developments, led to dramatic changes. With the entry of more women and migrant workers into the workforce, the number of people in paid work increased from 2.2 million in 1947 to 6.6 million in 1980. Sharp rises in real wage costs along with intensified import competition caused a downward squeeze on marketable manufacturing output. Import restrictions were imposed on textiles and white goods, but the manufacturing industry was not in a position to recover in the 1980s. Parts of the steelworks at Port Kembla, Whyalla and Newcastle were forced out of business by foreign competition. By the 1980s Australia was already a “post-industrial society” in which manufacturing had come to account for a declining proportion of employment and in which most net growth in employment occurred in service industries. By 1988-89 the largest share of manufacturing turnover had shifted to the food, beverages and tobacco industries, which also employed the greatest number in the sector. From 1980 to 2006 manufacturing’s contribution to Australia’s GDP fell from 17 percent to 10 percent. Australia’s products remained relatively competitive in specific industries such as non-ferrous metals, metal products and food products. In wearing apparel its competitiveness declined substantially.

Mining

Mining broadly relates to the extraction of minerals occurring as solids such as coal and ores, liquids such as crude petroleum, or gases such as natural gas. Australia ranks as one of the world’s leading mineral resource nations and the minerals industry is the nation’s largest export earner. Australia’s mining industry began in the 1850s with the discovery of gold deposits in Victoria. Later gold mining operations spread to New South Wales and Western Australia. In the 1920s and 1930s, the mining focus spread to other abundant mineral deposits: silver and copper at Broken Hill and then the silver-lead-zinc-copper deposits at Mount Isa. Petroleum (including crude oil and natural gas) was a latecomer on the mineral production scene in Australia. The real beginning of petroleum exploration in Australia dates back to 1906 at Roma in Queensland, to the Gippsland coast in Victoria in 1924 and later to the huge Bass Strait field in the 1960s. Extensive petroleum exploration really took off in the 1980s with the discovery of large resources of natural gas at the Jabiru oil field in the Timor Sea. Then followed Woodside’s North West Shelf liquefied natural gas project in the Dampier-Karratha area of the Pilbara. Although coal mining at Newcastle on the New South Wales coast dates back to 1799, the development of the coal export industry was largely coupled with the emergence of Japan as a major buyer in the 1950s and 1960s. Large quantities of black coal for export and domestic power generation came from open-cut mines in the New South Wales hinterland of Newcastle and the Queensland Bowen Basin hinterland of Gladstone. By the year 2000, the coal industry was Australia’s largest employer in the mining sector with 22,500 employees, or 44 percent of the total. Black coal was Australia’s biggest export-earning commodity, accounting for 11 percent of the total value of merchandise exports, with Japan as the strongest buyer.
Brown coal is mined in Victoria and used predominantly for power generation, but also for the production of briquettes for industrial and domestic heating in Australia and overseas. Iron ore also emerged as a major export earner and accounted for 5 percent of total merchandise exports in the year 2000. Japan was the largest market, taking 46 percent of exports. The major source of iron ore mining is the Pilbara, exporting from Port Hedland and Dampier. Apart from the mineral industry’s importance to Australia’s economy, it is also particularly important in providing jobs and infrastructure development in regional Australia. Since 1967 these industries built some 25 new towns, 12 new ports, 20 airfields and 1,900 kilometres of rail line within Australia. Mining and directly associated manufacturing employ over 400,000 Australians. In 1965, 41 percent of Australia’s mineral exports went to Europe (mostly the UK), 41 percent to Asia (32 percent to Japan) and 16 percent to the USA. By the year 2000 these figures had changed dramatically: 14 percent went to Europe (6 percent to the UK), 64 percent to Asia (26 percent to Japan, 12 percent to South Korea and 6 percent to Taipei) and 4 percent to the USA. Export earnings of minerals (including oil and gas) rose to $91 billion in 2005-06. Japan was consistently the main export destination for Australian minerals until 2005-06, receiving 28 percent ($24bn) of total mineral exports. The main minerals exported to Japan were aluminium, coal, copper ore and concentrate, iron ore and pellets, crude oil and other refinery feedstock, LNG and LPG. In addition, 54 percent of Australia’s steaming coal and 37 percent of its coking coal went to Japan. In the period 2008/09 China became Australia’s largest export market for iron ore and pellets, lead concentrate and LPG. India also became a major export destination with a sharp increase in gold exports.

Service Industries

The services industry is the largest and fastest growing component of the Australian economy in terms of number of businesses, employment and gross value added. It includes all industries other than the goods-producing industries: wholesale and retail trade, accommodation, cafes and restaurants, transport and storage, communication, finance and insurance, property and business services, government administration and defence, education, health, community services, cultural and recreational services and personal services. In 2005-06 the largest services-producing industry, in terms of gross value added, was property and business services, which accounted for 11.4 percent of GDP, followed by finance and insurance services at 7.1 percent. Communications services recorded the largest percentage increase in the period 2001-2006. Average annual total employment in the service industries in 2006-07 was 7,724,600 people, representing 75 percent of all employment. The largest employing service industry was retail trade, whose 1,492,500 people accounted for 19 percent of total employment in the sector. The other large employing industries were property and business services (1,238,000), health and community services (1,078,000) and education (718,600).

Economic Performance

Ranking in the top 10 countries in the world in terms of per capita income, Australia enjoys a very high living standard. After many decades of stagnant growth as a result of public ownership of the means of production, heavy government regulation and industry protection, increased economic liberalisation proved to be a force for strong economic growth. Real per capita income increased from $18,924 in 1972-73 to $31,363 in 1998-99 and is forecast to reach $39,000 in 2010. The importance of foreign trade to the Australian economy is shown by the ratios of exports and imports of goods and services to GDP. In 1974-75 the import and export ratios both stood at about 14 percent, while in 2005-06 the import ratio was 22 percent and the export ratio 20 percent. In 2006-07 Australia recorded a trade deficit of $12.6 billion – largely in its trade with the USA, Germany, Singapore and China. In the same year it recorded trade surpluses with Japan and India.
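The openness ratios quoted above are simple shares of GDP. A minimal sketch, using hypothetical round figures chosen only to mirror the quoted 2005-06 ratios (they are not official statistics):

```python
# Hypothetical figures, illustrative only.
gdp = 1000.0     # GDP, $bn
exports = 200.0  # exports of goods and services, $bn
imports = 220.0  # imports of goods and services, $bn

export_ratio = exports / gdp * 100  # share of GDP, percent
import_ratio = imports / gdp * 100
trade_balance = exports - imports   # negative -> trade deficit

print(export_ratio)   # 20.0
print(import_ratio)   # 22.0
print(trade_balance)  # -20.0 (a deficit)
```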


Table 1 Merchandise Exports 2006-2007

Destination     Value $m   Share of Total Exports %   5-Year Change %
Japan             32,627                  19.4                   7.4
China             22,845                  13.6                  23.9
South Korea       13,071                   7.8                   5.9
India             10,099                   6.0                  32.0
USA                9,821                   5.8                  -3.9
New Zealand        9,453                   5.6                   4.3
Taiwan             6,192                   3.7                   5.1
UK                 6,180                   3.7                   3.4
Singapore          4,625                   2.7                  -1.3
Thailand           4,260                   2.5                  13.2

(Source: ABS – 2008 Yearbook)

Table 2 Merchandise Imports 2006-2007

Destination     Value $m   Share of Total Imports %   5-Year Change %
China             27,138                  15.0                  19.2
USA               24,927                  13.8                   3.0
Japan             17,409                   9.6                   2.4
Singapore         10,135                   5.6                  20.6
Germany            9,274                   5.1                   6.6
UK                 7,402                   4.1                   3.5
Thailand           7,210                   4.0                  20.1
Malaysia           6,625                   3.7                  11.4
South Korea        6,010                   3.3                   4.9
New Zealand        5,605                   3.1                   3.4

(Source: ABS – 2008 Yearbook)

During the period 2002 to 2007, Australia recorded small annual surpluses in its international trade in services. The major contributors to services exports in 2006-07 were personal travel services and professional and technical services; the major contributors to services imports were the same categories, in the same order. For many decades foreign investment accounted for more than half of business investment in Australia – much more than in any other advanced economy. Likewise, Australia has for many decades been a net borrower of funds from overseas. Australians do not save enough to meet the local demand for business and housing lending. The banks supply the difference by raising funds offshore. Up to 2006, public sector debt to overseas lenders remained at modest levels, and in 2007 the public sector recorded a surplus, in contrast to the private sector’s increasing debt burden – particularly that of the banks.

Table 3

LEVELS OF FOREIGN DEBT — 30 June

                                 2003       2004       2005       2006       2007
                                   $m         $m         $m         $m         $m
Foreign debt assets(a)       −225 657   −267 649   −285 576   −344 412   −427 910
  Public sector               −55 337    −66 395    −73 023    −82 725    −96 975
  Private sector             −170 320   −201 254   −212 553   −261 687   −330 935
Foreign debt liabilities(a)   582 651    657 135    715 867    845 190    971 984
  Public sector                63 576     71 470     83 606     88 210     81 912
  Private sector              519 075    585 665    632 261    756 980    890 073
Net foreign debt              356 995    389 487    430 291    500 779    544 075
  Public sector                 8 240      5 075     10 583      5 485    −15 063
  Private sector              348 755    384 411    419 708    495 293    559 138

(a) Foreign debt levels between direct investors and direct investment enterprises are recorded on a gross basis for assets and liabilities.
Source: Balance of Payments and International Investment Position, Australia (5302.0).

Table 4

LEVELS OF AUSTRALIAN INVESTMENT ABROAD AND FOREIGN INVESTMENT IN AUSTRALIA — 30 June

                                               2003        2004        2005        2006        2007
                                                 $m          $m          $m          $m          $m
Levels of Australian investment abroad     −502 663    −608 327    −606 159    −768 206    −923 276
  Direct investment abroad(a)              −189 590    −232 047    −201 395    −274 304    −318 752
  Portfolio investment assets              −160 685    −199 132    −223 021    −280 653    −343 468
  Financial derivative assets               −40 735     −42 058     −38 790     −46 300     −56 717
  Other investment assets                   −70 894     −84 748     −86 784    −103 134    −124 657
  Reserve assets                            −40 760     −50 342     −56 170     −63 815     −79 682
Levels of foreign investment in Australia   918 568   1 061 653   1 111 837   1 320 776   1 565 715
  Direct investment in Australia(b)         252 561     274 082     271 698     289 934     331 398
  Portfolio investment liabilities          481 212     609 272     651 876     820 912     982 275
  Financial derivative liabilities           45 251      37 683      42 009      40 999      67 638
  Other investment liabilities              139 544     140 616     146 254     168 931     184 404

(a) Net direct investment abroad, after deduction of liabilities to direct investment enterprises abroad.
(b) Net direct investment in Australia, after deduction of claims of Australian direct investment enterprises on direct investors.
Source: Balance of Payments and International Investment Position, Australia (5302.0).


During the first decade of the new millennium, Australia’s balance sheet kept improving – largely courtesy of the strong surge in the export of its mineral resources. By 2008 exports stood at 22 percent of GDP. The continued strength of this export market provided a crucial shield against the onslaught of the world financial crisis which followed the collapse of Lehman Brothers in 2008. The continued demand for its resources in East Asia enabled Australia to escape a devastating decline in its economic output. During the period 2008-09, total mineral exports rose by 37 percent to a record $160 billion. The rise in Chinese demand more than offset the fall in Japanese demand. Despite all the political rhetoric about the demand-boosting effect of the stimulus package, Australia was one of the few countries in the world to actually export more amid the biggest collapse in international trade since the 1930s.

But despite its export fortunes, Australia’s reliance on foreign credit pushed up its foreign liabilities to unprecedented levels. The chronic current account deficit lifted net foreign debt to $768 billion by 2008 – or 57 percent of GDP. The current account deficit created a problematic level of external vulnerability which required credible government guarantees for wholesale bank loans placed in foreign markets as well as sympathetic and trusting creditors.

The “Fair Go” Model

According to Paul Kelly’s 100 Years – Australian Story, published in 2001, the formation of the Australian Federation of Colonies in 1901 was an epic experiment in nationhood. It involved, inter alia, a multiple experiment in the creation of an Australia-Britain nationhood in the South, economic egalitarianism, a utopian endeavour to humanise capitalism based on a partnership of economic justice and wealth creation, a replacement of being British with a self-confident Australian identity, multi-culturalism and globalisation. This eclectic Gladstone bag of aspirations has, in the course of a century, been shuffled like a deck of cards in the political arena. But the remote Australian island-continent experienced little violent external intervention and scant internal turmoil, adding justification to its description as the “lucky country”. It is well endowed with industrial minerals to be a major exporter of commodities, adequate arable land to produce large quantities of food for export and it has been able over more than a century to import large numbers of educated and trained migrants to turn the country into a prosperous and constructive member of the regional and international community – despite its relatively small population of around 22 million. Kelly maintains that Australia’s history is evolutionary, not revolutionary, its political character is guided by pragmatic self-interest, a skill for adaptation and a sense of conscience. As the “land of the fair go”, Australia offered a model to the world in its quest to “civilise” capitalism. It claims to be a mediator in the struggle between capital and labour – with a keen sense of public interest advanced through government power. The instinct of “fair go” is claimed by Kelly to have been implemented in Australia’s political culture from its convict origins. It was to be realised through state power for individual needs in contrast to the American self-realisation through individual liberty. 
Australians looked by instinct to government. Americans, by contrast, had fought a war of independence in the cause of freedom against government tyranny. Australia’s faith in state power was shaped by former convicts (“state people” they were called), military officers and a “colonial-secretary” mentality. In Australia land was settled not by small independent farmers, but by squatters bankrolled by finance houses and holding vast estates. (See Kelly, op.cit., p.98) Faith in government intervention in Australia has taken a bipartisan character. Despite the class divide at the heart of Australia’s party system, based on Labour versus non-Labour, both sides are philosophical interventionists. Their differences are only a pragmatic matter of degree. Over time it created a comprehensive welfare state, a pervasive judicial-based system of wage determination and industrial conciliation, vast public enterprises, government-owned monopolies and a rigid regulation of markets. Faith in government intervention is deeply ingrained in the Australian political culture. Its bipartisan support base is the key to its longevity. Its earliest expression was the introduction of tariff protection for local industry – often sanctified by its champions with altruistic rationalisations rather than the reality of self-interest. The initial champions of protectionism came from Australian “Liberals” like Alfred Deakin who had the “subtle ability to cloak self-interest with moral principle”. (See Kelly, op.cit., p.100) The next phase of the Australian interventionist saga involved the creation of a mechanism to redistribute the profits of private enterprise. The Conciliation and Arbitration Act of 1904 created mechanisms in the form of industrial tribunals, rules and regulations to enforce minimum wages through arbitration – ostensibly to engineer a system of social and economic fairness.
Minimum wages were to be based on the officially defined needs of the “working man” and not on the economic law of supply and demand. This system was enforced on Hugh McKay’s harvester manufacturing company irrespective of the affordability of the wage levels. Thus a fundamental principle of Australian political life was enunciated which became a central characteristic of the Australian political ethos – putting the principle of official minimum wage determination above the need for job creation. It took more than 70 years, as long as the lifespan of Soviet communism, before it became recognised in Australian public life that minimum wage protection imposed a burden on consumers and distorted the economy. But throughout Australia’s history of political party contest, both Labour and non-Labour parties accepted government intervention to advance individual interests. The contest focused on the nature or scope of the intervention, not the principle of intervention. In 1921 the ALP National Conference adopted in its platform “the socialisation of industry, production, distribution and exchange”. The age of nationalisation had begun – and with it a new battle between capitalism and socialism. As the Depression engulfed Australia, the economy faltered, income declined and industrial disputes increased. The arbitration system was unable to deliver wage cuts as the economy headed towards a crisis. There was no effective circuit-breaker for the upward spiral of protection, wages and inflation. When the Depression came to Australia, the country’s wage system was unable to adjust wage rates downwards to reflect falling world prices. Australia had created a system that misallocated resources, killed productivity and weakened private enterprise. The ALP under Whitlam resuscitated the belief that every social problem can be solved by government intervention and more money. Paul Kelly described it as “more naïve than idealistic” – as reflecting the high tide of Australian faith in government intervention. It coincided with the worldwide stagflation precipitated by the first OPEC oil shock. Unemployment and inflation rose dramatically.
Government spending rose above 30 percent of GDP. The Hawke-Keating led ALP victory in 1983 coincided with the Thatcher era in the UK. They quickly adopted Thatcherite reform measures and abandoned Labour Party hostility towards the market. They pragmatically set their sails to the new global tide. Thanks to John Howard’s influence in the Liberal Party, the Hawke-Keating pragmatic market-friendly reforms were given bipartisan support. What else can you do when your political opposition steals your clothes? In December 1983, the Australian dollar was floated, allowing markets to set domestic interest rates and exchange rates. Financial deregulation was followed by further deregulation in the trade and labour markets. The next step was to cut, in stages, a century of tariff protection, allowing the economy to integrate itself as far as possible with the world economy. The objective was to reduce the general tariff level to 5 percent by 1996. The ALP also initiated the other part of Thatcherite reform – the privatisation of public enterprises such as Qantas and Commonwealth Bank. The major unfinished part of the reform agenda was the labour market. This was tackled by the Howard-led Liberal Party government elected in 1996. It involved a crucially important shift away from the power of the Arbitration Court towards a market-based wages system. This area of reform now became the focus of party political contest. The ALP was organically and electorally tied with an umbilical cord to the trade unions – its traditional power base. (See Paul Kelly: 100 Years – Australian Story, op.cit., pp.98-142)

Industrial and Labour Relations

At the heart of traditional Labour thinking is the assumption that Australian society is divided by the opposing interests of workers and employers and that unions are the shock troops in the war against capitalism. The Hawke-Keating government persuaded the economically literate union leaders that instead of squabbling over the existing spoils, it made more sense to generate increased wealth for all by deregulating the economy – including the way wages and working conditions are centrally set. The Labour leaders began dismantling the old industrial relations system and the way the unions could conscript workers into strikes that had detrimental effects on their employment prospects and the national interest. The initial reforms helped improve Australian productivity and played a part in the subsequent substantial increase in national wealth. But the reform process also diminished union membership to about 15 percent in the private sector, although unions still dominated the bargaining process in the public sector. When the Howard government launched a second wave of industrial reforms, encouraging workplace-based individual contracts, union leaders feared for their futures and fought back with their last-ditch “working families” campaign during the 2007 election. The union-financed ALP campaign was staunchly supported by the overly influential Australian Broadcasting Corporation (and even the commercial media). The campaign was based on the accusation that workers’ rights were attacked and their interests “disadvantaged”. Mr. Kevin Rudd, whose family fortune was based on government-financed employment promotion schemes, served as the business-friendly face. In 2007 the ALP was able to capture the imagination of the electorate by convincing it that Work Choices was comprehensively unfair. In doing so it relied on a union-funded multi-million dollar media campaign that played on the emotions of the electorate. In the campaign the ALP was dutifully assisted by the public broadcasters, ABC and SBS. The ABC in particular carried, for months prior to the election, a relentless campaign to expose the “inequities” of Work Choices. The Australian Council of Trade Unions, with its stranglehold on the Australian Labour Party (ALP), followed up its election triumph with a carefully orchestrated campaign to increase the power of its officials by calling for a return to a world where union leaders can lead industry-wide strikes to achieve whatever objectives – material or ideological – they choose to pursue.
With the ALP dominated by retired and “transferred” union leaders and its leadership a captive of union rhetoric, only external factors – such as the economic downturn and the mismanagement of the economy – can reverse the fortunes of the Australian Labour Party. The Rudd government, driven by Deputy Prime Minister Julia Gillard (a former labour union lawyer) as Employment and Industrial Relations Minister, instructed the newly created Australian Industrial Relations Commission to rewrite the IR rules in order to re-introduce the IR award system. A return to the IR award system involved imposing specific rules and obligations on employers in specific industries concerning wages, job classifications, penalty rates, allowances, rosters and leave. The businesses under the gun of the award system are the labour-intensive parts of the economy such as retailers, tourism and hospitality, and small and medium businesses. These businesses would be facing large increases in labour costs, simply because of this policy, without regard to any changes in productivity. To avoid higher costs, they will have to cut back on jobs or, alternatively, increase their prices. The ALP policies promote an industry-wide awards system that is legally binding on all businesses without regard to local variations of business conditions. The award system regulates thousands of businesses in a “one size fits all” framework that creates inefficiencies and inflexible work practices. Any exemptions to the requirements of the system are hamstrung by costly bureaucratic constraints. It imposes a large volume of compliance requirements on the large number of small businesses employing more than 50 percent of the total workforce. It is a formula for institutionalised inflation.

Regulation of Finance

Australia has an “independent” central bank with a mandate to keep “underlying” consumer inflation in the range of 2 to 3 percent while paying attention to asset inflation as well as the state of the economy. The Reserve Bank Act says that it is the duty of the Reserve Bank board “within the limits of its powers” to ensure that the monetary and banking policy is “directed” to the greatest advantage of the people of Australia. The Act states further that the powers of the bank are exercised in such a manner as, in the opinion of the Reserve Bank board, will best contribute to the stability of the currency of Australia; the maintenance of full employment in Australia; and the economic prosperity and welfare of the people of Australia. These guidelines, though inherently contradictory and impossible to implement, have enabled the Reserve Bank of Australia to play a crucial prudential role before and during the 2008/09 economic crisis. It remains to be seen if the Reserve Bank is sufficiently equipped to handle the inflationary pressures of ballooning government spending.

The Role of the Public Service

The prominent role of the Australian public service is best described in the words of Prof. R.N. Spann, a renowned student of Australian government and public administration: “Among the institutional elites of Australian society, public servants occupy an important place. Australia has an executive-based political system, in which Prime Ministers and State Premiers play a key role. Parliaments rarely have a large supply of able members and operate fewer formal controls on administration than in most countries. Ministers have often had political skills and interests, rather than executive capacity and experience. This has helped to give senior departmental officers and the executive heads of the large statutory corporations considerable influence ...” (See R.N. Spann, Government Administration in Australia, George Allen & Unwin, 1979, p.36) Though written in 1979, this assessment applies a fortiori today. The prominence, reverence and comparative opulence bestowed upon public servants in Australia raise many questions. How responsive, accountable and efficient is the Australian bureaucratic estate? Has the dividing line between governmental and non-governmental become too vague? How vulnerable is the “independence” of key publicly funded institutions such as the Reserve Bank, the Auditor-General and public broadcasters such as ABC and SBS? Can bureaucratic pre-dominance and democratic accountability be reconciled? How extensively are senior public servants involved with policy formulation? How much discretionary power is vested in the hands of bureaucrats? There is a real danger that the role of the public bureaucracy could mutate from being assistant and servant to that of manager and director within an unsuspicious political culture. The public sector is today the power base of the trade union movement.

The Rudd Deficit

Kevin Rudd won the 2007 election as a “fiscal conservative”. Even after assuming power, he kept on talking about the crucial importance of fighting inflation, containing government expansion and the regulatory burden. When the economic crisis unfolded during the second half of 2008, Mr. Rudd’s rhetoric changed direction. In a February 2009 article in The Monthly, published under Mr. Rudd’s name and titled “The Global Financial Crisis”, it is argued that “neo-liberal” economic reformers are responsible for the woes of the world, woes that only social democrats, like himself, can fix. Instead of advocating a frugal state, Mr. Rudd started to adjust his opinions and image to what he thought would sell – a government that spends big to help “working families”. Mr. Rudd’s new strategy was presented as a spending programme to protect citizens from the caprice and cruelty of “extreme capitalism”. While changing his strategy from fiscal conservatism to deficit spending in order to assist the unemployed and fund essential infrastructure to help kick-start the economy, Mr. Rudd now denounced as heresy the articles of faith he had adhered to prior to the last election. It now became fashionable to attack the free market. Now Mr. Rudd presented himself as a social democrat who exposes the conspiracy of Australia’s “neo-liberals” who, according to Rudd, left the country financially wrecked. He preferred not to give recognition to the handy surplus which he inherited from the outgoing government – nor to the comparatively sound financial regulatory regime. With regard to the fact that Mr. Rudd’s government inherited a surplus of over $20 billion from Mr. Howard’s Coalition Government, the Rudd-camp retorted that the good years had been wasted and the surplus should have been larger.

The first “pre-emptive” stimulus package of $50 billion announced by the Rudd government in October 2008 involved pumping cash into the pockets of families, pensioners, students and farmers. The Opposition called it a “cash splash” pre-Christmas handout. But the Commonwealth government also guaranteed bank deposits to remove depositors’ uncertainty. In addition the government stepped in to guarantee the foreign loan commitments of the banks to safeguard Australia’s foreign credit rating on the strength of the country’s robust balance sheet. The second unprecedented $52 billion rescue package announced by the Rudd government formed part of the 2009/10 budget. The package included large funding for schools, defence, housing, infrastructure and further handouts to “working families”. The total package was said to amount to about 5 percent of GDP and was projected to escalate the deficit burden to an amount of $300 billion – a debt to be repaid over a period of 10 years (ceteris paribus). Apart from the uncertainties surrounding the fiscal stimuli, there were also troubling questions. No policy-maker can claim with real confidence that they know how the measures taken will play out. The downturn was caused by a global implosion following the collapse of the financial system. Financial institutions suffered huge losses on financial instruments and this, in turn, led them to turn the screws on business financing and thus the real economy. The “openness” of the Australian economy – its dependence on external financing and foreign export markets – remained a problematic issue. Australia’s economy is tightly interlocked with those of its creditors and customers. However, by virtue of its resource-based strengths, the Australian economy was well positioned to outperform most rich countries of the world.

The Aftermath of the 2008/09 Downturn

The onset of the 2008/2009 Downturn unleashed a widespread climate of apprehension. A variety of analysts, commentators and opportunists tried to find the culprits or to allocate the blame for the economic catastrophe. The Australian Prime Minister, Kevin Rudd, also jumped on the leftist bandwagon to blame “… the prevailing neo-liberal economic orthodoxy of the past 30 years” and then roped in a team of advisors (and unnamed “others with a common interest in the ideological origins of the current crisis”) to outline a strong left-leaning neo-socialist big government strategy to lead the world into a new utopia with a properly constituted and properly directed formula to achieve “… the common good, embracing both individual freedom and fairness, a project designed for the many, not just the few”. (See Kevin Rudd, “The Global Financial Crisis”, The Monthly, February 2009, p.29) The claim that the Anglo-Saxon system has failed (also called the “Washington consensus” or the “neo-liberal economic orthodoxy”) is more of a journalistic or opportunistic political statement than a serious intellectual argument. What is implied is that the policies of deregulation and privatisation have actually brought the world economy to the brink of disaster. This brings into question arguments in favour of market solutions in all fields including health, education and international trade. In reality overall lowering of barriers to trade has delivered progress on a dramatic scale over the past three decades. Hundreds of millions of people have been lifted out of abject poverty around the world. The attack on deregulation is also ill-advised unless specific areas of regulatory action are discussed. No responsible person would deny the need for regulating the world of finance (particularly in the USA and the UK) because it has always been prone to panics, crashes and bubbles since the early days of international banking. 
Governments have always been involved in the regulation of financing: prescribing capital cover, monitoring risky strategies, punishing scams, etc. But in recent years financial innovation outpaced the rule-setters. Derivatives, such as credit-default swaps, flowed undetected into the international financial circuit. The imbalances caused by China’s decision to hold down its exchange rate sent a wash of capital into the American market. The crisis was as much caused by policy mistakes as by Wall Street’s excesses. The way out seems to be more a matter of better regulation than more regulation. In order to deal with the high Rudd deficit and debt levels, the Rudd government announced a new round of taxes on the resources sector in May 2010: the Resources Super Profits Tax (RSPT). It meant that Australia would impose the world’s highest tax on mining projects, with an effective tax rate of around 55 percent. This tax proposal immediately raised the spectre of a “sovereign risk” threat over a strategically important industry that carried the country through the turbulent waters of the GFC. Because the resources sector comprises a large slice of the Australian economy, a decline in mining stock values and in the present value of potential future expansions inevitably exerts a big impact on other investments – including the massive superannuation industry. A special tax on one industry also raises concern in other industries such as manufacturing and banking – including higher charges for funding and upward pressures on interest rates. Prof. McKibbin of the Australian National University, a Reserve Bank Board member, described this tax as a “badly designed” attempt to repair the damage done to the national accounts by a stimulus package that was created in panic and without proper consultation. Particularly problematic was the interaction of the proposed rent tax and the existing company tax – as well as the unquantifiable promise by the government to fund 40 percent of any losses. Particularly controversial was the implied argument that a “fair” rate of return can be set by government fiat, rather than the market, for an industry that is subject to the ups and downs of international trading. The leftist Rudd government’s approach opened a Pandora’s box of furious contention about governments creaming off “excess” profits and of what constitutes a “fair share” for Australians of the mineral wealth of the country. When support for the ALP and Prime Minister Rudd started plummeting in the opinion polls in June 2010, the trade unions and party factions secretly took the initiative to dump Kevin Rudd as ALP leader.
On 24 June, Kevin Rudd stood down in favour of Deputy Prime Minister Julia Gillard, who was promptly sworn in as Australia’s first female Prime Minister. In her first press conference, Prime Minister Gillard confessed that the Rudd government had “lost its way”. She promised to consult more inclusively in future policy making and to renegotiate the RSPT.

Appraisal

In his latest tome The March of the Patriots – The Struggle for Modern Australia (2009), Paul Kelly, the doyen of Australian commentators, tells the inside story of the Keating and Howard roles in transforming Australia into a successful nation in the globalised age. He argues that these two leaders, though unrelenting political rivals, altered the nation’s direction and redefined the economic, social, cultural and foreign policy agendas of their parties. Both worked towards a model defined by free trade, competitiveness, surplus budgets, an independent central bank, an enterprise-based business culture, an inclusive “Australian” culture, an egalitarian ethic and a strong economy. Kelly states that the legacy handed to Kevin Rudd in November 2007, on the eve of the global financial crisis, was a Keating-Howard product. After generously handing out compliments to all Prime Ministers, Kelly states that the 1991-2007 era saw the growth of a largely bipartisan Australian strategy for success in the globalised world. The proof was Australia’s “… superior position to the rest of the developed world”. He described it as a stunning validation of the Australian model. There can be no doubt that the period since the early 1990s was particularly beneficial for Australia. The end of the Cold War, with the implosion of the Soviet Union, saw the demise of communism and the triumph of economic liberalism and free enterprise. The successful re-emergence of China and India demoralised the Left worldwide and gave impetus to the idea that market forces promoted the public interest. These ideas were politically rekindled by Margaret Thatcher and Ronald Reagan and embraced by Keating and Howard. With the return to power of the Labour Party in 2007, Australia again took a sharp turn to the Left. Rudd rapidly changed from a “fiscal conservative” to an interventionist statist and deficit spender – all in the name of providing a stimulus package according to IMF specifications.
According to the Australian Bureau of Statistics, Australia marginally avoided a technically defined recession. The Australian government pumped massive amounts of money into the economy to maintain consumer spending, protect employment and to foster domestic and international confidence in the economy. The scope and allocation of the stimulus money largely escaped any penetrating analysis or critical appraisal – particularly for its lack of focus on productive assets and its generous cash handouts to politically expedient low-income consumers. It is not clear what proportion of the stimulus money leaked overseas to the providers of consumer products – or reinforced institutionalised corruption and nepotism in government contracting. Subsequently, the Rudd government revived and entrenched trade union power with its retrogressive labour relations legislation – ostensibly to reward the trade unions for their electoral support. It opened the door to a cumulative spiral of cost-push wage inflation driven by wage and salary increases that bear no relationship to productivity. The impact of these changes will become evident in future years. Cause and effect evidence only becomes available after the passage of time. The underlying strength of the resource-based Australian economy, its well-regulated banks and the appropriate monetary measures taken by its Reserve Bank, enabled Australia to enter the global crisis with a tidy fiscal surplus of around $45 billion, a well-financed “Future Fund”, an enviable capacity to export highly marketable natural resources, an educated workforce and an efficient financial system. Most resource-based economies around the world weathered the global financial storms remarkably well (e.g. Canada, Brazil, South Africa, Angola, Saudi Arabia and Norway). It is the post-industrial service economies that suffered most. The Rudd government’s stimulus programme was plagued by a series of inefficiencies and corrupt practices – particularly its disastrous home insulation scheme and wasteful school facilities expenditure. The Rudd government also took the initiative to introduce large pay rises for public servants as part of its “blueprint for reform”.
Most of the politically driven spending initiatives were not backed by publicly available cost-benefit analysis. The prospects of expanded offshore oil and gas exploration raise the danger of reinforcing Australia’s drift towards becoming a one-sector economy. In several parts of the world a tsunami of resources income created a “resource curse” in the form of squandered tax proceeds, inflationary pressures and high interest rate levels. Other export sectors face the danger of being crunched by rising exchange rates and over-heated domestic demand pressures. Looking into the future, the jury is still out on Australia’s ability to recalibrate its deficit spending in order to reduce its debt levels, to finance a greater proportion of its investment in productive assets out of domestic savings, to reduce its ballooning public sector, to promote small business entrepreneurship, to contain welfare entitlements and to re-introduce labour market flexibility. A big threat facing the Australian economy is the revival of the mindset that government intervention is by definition in the public interest. Bigger government paves the road to serfdom. But the biggest threat is the endorsement of the “Chinese way” of doing things. It would imply an utter inevitability about the dominance of China in Australia’s future. In the wake of the declining sphere of influence of the western world, China’s dominance will be felt in many challenging ways in the lifestyle of Australians. The echoes of the Whitlam-Hawke era still resonate in the left-wing circles of contemporary Australian society. Historian Manning Clark provided a cogent sounding board of the prevailing sentiments of the 1970s and 1980s with his depiction of the leadership of that era.
In Clark’s view, Whitlam was the idealistic visionary, Fraser the anachronistic upper-class conservative and Hawke the consummate, charismatic, enigmatic, pragmatic, populist, prophetic fixer who rose to great heights by bringing working Australians together and by claiming to give capitalist society a human face. In today’s world, Labor leaders still project themselves as visionaries, idealists and modest reformers who can humanise capitalism and manage the affairs of the “bourgeois state” efficiently – reducing unemployment, limiting industrial disputes, protecting the environment and keeping a lid on inflation. Coalition leaders (Liberal and National) still associate themselves with time-honoured conservative themes such as restoring sanity and probity to public life, pruning government spending, enhancing efficiency and productivity, promoting a strong work ethic, espousing a limited but good government, safeguarding the public interest, reducing the number of people dependent on government for their means of existence as well as downsizing the degrading and corrupting “hand-out culture”. Although Manning Clark considered himself a “social democrat”, he showed a naively limited insight into the dangers of excessive leftist statism. But in his typically provocative style he incisively described the transformation that took hold in Australian society during the second half of the 20th century: “The revolution in the Australian way of life has occurred outside politics. The revolutions in transport and communications, and the boom in minerals, have ended the material backwardness and isolation – the prime cause of the inferiority complex, and the grovelling to the English. Being mistaken for an Englishman or an Englishwoman has gradually ceased to be the ambition of even the “comfortable classes” in Australia ...” “By the 1960s the horrors of the First World War ... (and) ... the Second World War ... had weakened belief in either a benevolent, caring God, or the capacity of human beings to build a better society. By this time the puritan morality was gradually dropped ... Nude bathing, nudity on stage and screen, the use of four letter words on stage, screen, radio, television and the printed page, the contraceptive pill, and the demand for the repeal of laws making homosexual behaviour between adults a criminal offence, were all part of this revolution.” “The decline of faith begat nihilism, and nihilism begat hedonism. It looked as though in the contest between Mammon and ‘millennial Eden’, Mammon had won. The dreams of all those who had migrated to the great south land had evaporated ... But there are signs that the children of this generation may prove wiser either than the children of God or the children of light ... The shackles of the old puritan morality have been loosened.” “Australians have liberated themselves from the fate of being second-rate Europeans.
Australians have begun to contribute to the never-ending conversation of humanity on the meaning of life, and the means of wisdom and understanding ... Now is the time for the life affirmers to show whether they have anything to say, whether they have any food for the great hungers of humanity.” (See Manning Clark, op.cit., pp.247-251) Considering that Australia’s economy constitutes only 1.5 percent of the world’s GDP, some foreign observers describe Australians as a people obsessed with punching above their weight in the international arena. Others, more jokingly, associate Aussies with large appetites for fun and games and peculiar perceptions of a “fair go”. Nonetheless, the comparative track record of today’s 22 million Australians proves that one does not get “lucky” in the many fields of human endeavour without putting in the hard yards.

References

Blainey, G. (1994) A Short History of Australia, Random House
Clark, M. (1986) A Short History of Australia, Penguin Books Australia
Ferguson, Niall (2003) Empire – How Britain Made the Modern World, London: Penguin Books
Henderson, David (1999) The Changing Fortunes of Economic Liberalism, London: Institute of Economic Affairs
Kelly, Paul (2001) 100 Years – The Australian Story, Allen & Unwin
Kelly, Paul (2009) The March of the Patriots – The Struggle for Modern Australia, Melbourne University Press
King, R. (2007) Origins – An Atlas of Human Migration, London: Marshall Editions
Spann, R.N. (1979) Government Administration in Australia, George Allen & Unwin
Wolfe, Alan (2008) The Future of Liberalism, New York: Knopf
Australian Bureau of Statistics (2008) Year Book Australia
Treasury of the Commonwealth of Australia (2010) Intergenerational Report 2010 – Australia to 2050: Future Challenges, January 2010

12 Future Political-Economic Challenges

In today’s world several megatrends are challenging the prospects of a prosperous, stable and democratic future world. These megatrends include the following:
- unsustainable population growth and the pollution resulting from that growth;
- the seemingly inexorable rise of big government;
- the unsustainable ballooning of the public sector;
- the precariously slow process of democratisation around the world;
- the rebalancing of global economic growth patterns in the face of the decline of the West and the rise of the East; and
- the risks of contagious financial crises.

The confrontation of these challenges will dominate and shape the international arena in the 21st century.

Curbing Population Growth and Pollution

Population Pressure

One irony of the environmental debate is that the pivotal role of population increase is totally overlooked. Media commentators and political leaders refuse to mention the real inconvenient truth: the gigantic environmental footprint of around 7 billion people. The human dwellers on planet earth took millions of years to reach the total of 1 billion people around 1830. Only one hundred years later, around 1930, the total doubled to 2 billion. Since 1950 the growth rate has accelerated: the population reached 3 billion by 1960, 4 billion by 1975, 5 billion by 1985 and 6 billion by 2000. It is projected to reach 7 billion by 2020 and 9.5 billion by 2050. Although the growth rate is expected to show a reduction after 2020, growth itself seems likely to continue until 2100 or perhaps beyond, even as fertility diminishes. At the close of the 20th century, the world’s population was increasing by around 10,000 every hour – 6 million extra individuals every month, more than the UK’s population every year. The growth was largely concentrated in the “developing” world, which is set to reach 80 percent of the world’s population within a decade. In the “developed” countries, populations are almost static and in some places, such as Germany, actually falling. There seems to be a clear correlation between wealth and low fertility: as incomes rise, reproduction rates tend to drop. It remains to be seen whether the planet will tolerate the population growth that appears, ceteris paribus, to be inevitable before stability is reached. More people will require more food, more energy, more water, more urban housing, more transportation and more income opportunities. The demand curve for energy and food in particular is steep and will continue to be driven by population growth and the expectations of those extra millions. The growth of humankind’s environmental footprint will be driven by population increase and rising aspirations.
It is not clear how future generations will cope with the simple existential problems of human survival – let alone how to deal with a troubled environment. Ironically, more people could mean a greater supply of human ingenuity – the only commodity likely to resolve the crisis. But that is a highly optimistic interpretation. It is not a pre-ordained outcome.
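The milestone figures above imply a clear acceleration in growth. As a rough arithmetic sketch (the milestones are the text’s; the calculation is purely illustrative), the average annual growth rate implied between each pair of milestones can be computed directly:

```python
import math

# Population milestones cited in the text: (year, billions of people).
milestones = [(1830, 1), (1930, 2), (1960, 3), (1975, 4), (1985, 5), (2000, 6)]

# Implied average (continuously compounded) growth rate between milestones.
for (y0, p0), (y1, p1) in zip(milestones, milestones[1:]):
    rate = math.log(p1 / p0) / (y1 - y0)
    print(f"{y0}-{y1}: {rate * 100:.2f}% per annum")
```

The implied rate climbs from under 0.7 percent per annum in the 19th century to over 2 percent per annum in the 1975-1985 period before tapering off, which matches the pattern of acceleration and expected slowdown described above.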

Measuring Pollutants

The 2007 report issued by the Intergovernmental Panel on Climate Change found a 90 percent certainty of anthropogenic climate warming. That finding has unleashed a worldwide controversy about the exact measurement of the impact of man-made emissions on climate change. The main focus has been directed at the planet’s “greenhouse-gas” emissions caused by electricity generators, factories, transport vehicles, forest and home fires and by livestock farming. Ironically, these activities are directly or indirectly related to the improvement of the quality of the lives of the earth’s human multitudes.

Today’s world depends heavily on energy in all facets of human life. The demand curve for energy is steep and the price elasticity of this demand is relatively low. During the period 1950 to 2005, energy consumption increased at a rate of around 3 percent per annum. The structure of this demand is as follows: biomass 10 percent, coal 25 percent, oil 35 percent, gas 20 percent, nuclear 6 percent and renewables 4 percent. Changes in this demand structure are likely to be hesitantly slow. Increases in demand are likely to be driven by hundreds of millions of Chinese peasants seeking a better life, followed by hundreds of millions of Indians. The demand curves for coal, gas and oil are likely to shift sharply to the right with a concomitant increase in carbon emission levels. The range of greenhouse pollutants includes carbon dioxide (CO₂), black carbon (soot), hydrofluorocarbons (HFCs), methane and nitrogen compounds. Carbon dioxide (CO₂) is said to be responsible for around 50 percent of the world’s greenhouse-gas emissions and is produced by the burning of fossil fuels (coal, oil, natural gas) as a source of energy by power stations. Black carbon (soot or particulate air pollution) is said to be responsible for around 20-30 percent of greenhouse-gas emissions (the second largest contributor to global warming) and is produced by poorly maintained diesel engines (in ships, trucks or cars), forest fires, households and factories that use wood, crop waste, dung or coal for cooking, heating and other energy needs. The suspended particles of black carbon are said to absorb sunlight, warming up the atmosphere and in turn the earth itself, thereby melting glaciers and sea ice. The next level of greenhouse-gas emissions is claimed to be the ozone gases, responsible for about 20-25 percent of global warming.
Ozone is a gas formed from other gases (“ozone precursors”) such as carbon monoxide (from the burning of fossil fuels), methane (from livestock), and hydrocarbons (from the burning of wood and of organic materials in industrial processes). (See Wallack, J.S. and Ramanathan, V., “The Other Climate Changers”, Foreign Affairs, Sept/Oct 2009, pp.105-107) It is claimed that carbon dioxide and other long-lasting and far-spreading greenhouse gases contribute, wherever they are emitted, to global warming everywhere. But the effects of black carbon and ozone are more confined to the specific regions where they enter the atmosphere. Ozone precursors are said to be more regionally confined than carbon dioxide. Because the effects of black carbon and ozone are mostly regional, the benefits from reducing them would accrue in large part to the areas where reductions were achieved. Black carbon emissions in China and India – two of the world’s largest emitters of pollutants such as sulphur dioxide and nitrogen oxide – could have a major detrimental effect on the Himalayan and Tibetan glaciers which feed the major water systems of some of the poorest regions of the world: the Brahmaputra River, the Indus River, the Ganges River, the Yellow River, the Yangzi River and the Mekong River. (See Wallack and Ramanathan, op.cit., pp.108-113) It appears to be a daunting task to establish the exact climate-altering effect caused by each category of greenhouse-gas emissions. It is also not clear to what extent these gases are broken down over time, or diluted by wind and rain, or remain as scattered concentrations in the atmosphere. There are still many obdurate measuring problems involved in producing an accurate emissions bookkeeping record. Who can provide a reliable, independently verified audit of the accuracy of emissions quantification? Equally complex are the measuring problems in establishing the exact causal relationships between the various emission levels and specific climate events and trends.
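The energy figures cited earlier can be given a quick arithmetic sanity check. A minimal sketch, using only the numbers in the text: the demand shares form a complete breakdown, and 3 percent per annum compounds to roughly a five-fold increase in consumption over 1950-2005:

```python
# Cited demand shares (percent of total energy consumption).
shares = {"biomass": 10, "coal": 25, "oil": 35, "gas": 20,
          "nuclear": 6, "renewables": 4}
assert sum(shares.values()) == 100  # the breakdown is complete

# At the cited ~3 percent per annum, consumption over 1950-2005
# multiplies by roughly five.
factor = 1.03 ** (2005 - 1950)
print(f"growth factor over 55 years: {factor:.2f}")  # about 5.1x
```

The steepness of the demand curve is thus visible in the compounding alone: a seemingly modest 3 percent annual rate quintuples total consumption within a working lifetime.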
The accuracy of “climate modelling” depends on a proper scientific understanding of the links between human activities, emissions and climate change. The majority of the world’s “climate scientists” have convinced themselves and also a lot of laymen (including politicians and journalists) that the earth’s climate is changing as a result of human activity in the form of excessive emissions of greenhouse gases. A minority of scientists are sceptical about these claims and argue that rising temperatures could be explained, inter alia, by natural variations in solar radiation and that longer-term evidence that modern temperatures are higher than they have been for hundreds of thousands of years is actually too flaky to be significant.

The “consensus” of the UN Intergovernmental Panel on Climate Change is that the rise in the earth’s temperature is a certainty and that it is mainly caused by human activity. But science as a discipline lives off doubt and advances by disproving or confirming accepted theories. There are no certainties in science since prevailing theories and orthodoxies must be constantly tested against evidence. When scientists stop questioning orthodoxy, mankind will have given up the quest for truth. Science doesn’t lie but scientists sometimes do. Questions of scientific validity cannot be resolved by a show of raised hands – such a show of hands already doomed poor old Galileo. Today there seems to be a wave of hysteria and irrationality afoot that stands in the way of objective analysis. In recent years, thousands of “scientists”, “economists” and “journalists” have climbed onto the climate change bandwagon, each advocating a particular interpretation of available information and a particular line of action – ranging from cooking the data, stifling dissent, denigrating contrarian arguments and concealing contradictory findings to creative spin propagating specific emissions mitigation strategies. Spin usually trumps substance and scientific rigour when there is a limited appetite for realism. On the eve of the Copenhagen Summit, The Economist, December 5th, 2009, carried a special report on climate change and the carbon economy, written by Emma Duncan under the title “Getting Warmer”: “Carbon-dioxide emissions are now 30 percent higher than they were when the UNFCCC was signed 17 years ago. Atmospheric concentrations of CO₂ equivalent (carbon dioxide and other greenhouse gases) reached 430 parts per million last year, compared with 280 ppm before the industrial revolution (sic!). At the current rate of increase they could more than treble by the end of the century, which would mean a 50 percent risk of a global temperature increase of 5°C.
To put that in context, the current average global temperature is only 5°C warmer than the last ice age (sic!). Such a rise would probably lead to fast-melting ice sheets, rising sea levels, drought, disease and collapsing agriculture in poor countries and mass immigration. But nobody really knows, and nobody wants to know.” “Some scientists think that the planet is already on an irreversible journey to dangerous warming. A few climate-change sceptics think the problem will right itself. Either may be correct. Predictions about a mechanism as complex as the climate cannot be made with any certainty. But the broad scientific consensus is that serious climate change is a danger, and this newspaper believes that, as an insurance policy against a catastrophe that may never happen, the world needs to adjust its behaviour to try to avert the threat.” “The problem is not a technological one. The human race has almost all the tools it needs to continue leading much the sort of life it has been enjoying without causing a net increase in greenhouse-gas concentrations in the atmosphere. Industrial and agricultural processes can be changed. Electricity can be produced by wind, sunlight, biomass or nuclear reactors, and cars can be powered by biofuels and electricity. Biofuel engines for aircraft still need some work before they are suitable for long-haul flights, but should be available soon.” “Nor is it a question of economics. Economists argue over the sums, but broadly agree that greenhouse-gas emissions can be curbed without flattening the world economy (sic!).” (See The Economist, Special Report, op.cit., p.4) It is clear that The Economist, like its banking shareholders and other financial conglomerates, is hedging its bets. Its strategy seems to be to take out an insurance policy in case the clamouring scientists are right.
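The concentration figures quoted from The Economist lend themselves to simple compound-growth arithmetic. A minimal sketch, assuming the 430 ppm reading refers to 2008 (an assumption for illustration; the article only says “last year”): what constant annual growth rate would treble the concentration by 2100?

```python
import math

# Quoted figures: 430 ppm CO2-equivalent, possibly trebling by 2100.
# The 92-year horizon (2008-2100) is an assumption for illustration.
years = 2100 - 2008
rate = 3 ** (1 / years) - 1  # constant rate that trebles over the horizon
print(f"implied rate: {rate * 100:.2f}% per annum")
print(f"check: {430 * (1 + rate) ** years:.0f} ppm by 2100")
```

A trebling over roughly nine decades thus implies only about 1.2 percent compound growth per annum in concentrations, which is why the quoted projection is plausible as arithmetic even while the underlying climate claims remain contested.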

Strategies to Reduce Pollutants

International summits were held over the past two decades in order to produce international agreements on the limitation of greenhouse-gas emissions: in Rio de Janeiro, Kyoto and Copenhagen. Lofty goals were set requiring all signatories to meet specific time-bound reduction targets. Despite much moral posturing and passionate campaigning by climate change believers (including organised campaigns against climate change sceptics), no leading industrial country has to date implemented comprehensive carbon reduction schemes that would lead to measurably lower carbon emissions anywhere on the planet.

The main hurdle appears to be the unavailability of credible, generally acceptable and reliable alternative energy technologies. Half of the world’s electricity still comes from coal despite much effort poured into the development of sources such as wind, water and sunlight. In developing countries such as China and India, with around 40 percent of the world’s population, the proportion of electricity generated from coal is closer to 80 percent. Nuclear power, the one technology that could conceivably replace coal as a base-load electricity generating source, has to date been vigorously opposed, particularly by left-wing campaigners and activists. The bulk of electricity in China, India and South Korea comes from burning coal. Coal is abundant and relatively cheap. The global demand for coal is expected to increase by 1.9 percent per annum until 2015, outpacing all other fossil fuels except natural gas. Indonesia is the world’s biggest exporter of coal for power plants – around 200 million tonnes. Australia ships more of the sort used in steel production. China itself is the world’s biggest producer of coal, but its own domestic demand is so large that it depends on growing imports, particularly from Indonesia and Australia. Most countries around the world, both developed and developing, are heavily dependent on a carbon-intensive lifestyle and economic growth model. Coal, oil, natural gas and wood – all of which contribute to carbon emissions – remain the world’s predominant sources of energy. Despite recent investments in alternative sources of energy such as solar, wind, hydro-electric, geothermal and nuclear power, they still account for only a small share of the world’s energy supply. Trillions have been invested in finding, developing, refining, transporting, marketing, selling and using fossil fuels. Changing the way the world produces energy and weaning the global economy from carbon dependency is bound to be costly, complicated, time-consuming and devastating.
The two main strategies proposed to “put a price on carbon emissions” are either to slap a hefty tax on emitters of carbon gases or to introduce a “cap and trade” scheme bolstered by government-sold permits to emitters and subsidies to clean energy technologies. So far no evidence has been produced to show how tax or emissions-trading-scheme (ETS) interventions, per se, will reduce the earth’s pollution levels. All protagonists of these strategies rely on the optimistic assumption that somehow new technologies will emerge if only sufficient money is allocated to development projects and if, simultaneously, carbon is priced sufficiently high. However, the danger is real that ill-conceived interventions as propagated by zealous political campaigners could have very real, dramatic and indeed devastating effects on the economies of nations. If imposed, interventions in the form of either taxes or carbon trading schemes (or both) will be accompanied by gradual price increases for electricity, transportation fuel, farm produce, manufactured goods, etc. The end-users will have to foot the bill and face subsequent inflationary pressures. An emission-reduction scheme based on taxing the emitters – who inevitably would off-load their tax burdens on the end-using public – is certainly simpler to enact, more difficult to corrupt and easier to enforce. Every end-user’s tax burden will be more visibly connected to his or her own carbon footprint. The downside is the extra money flowing into government coffers and the need to exercise effective control over the accountability, efficiency and efficacy of government spending. It would require an informed, organised and vigilant citizenry to monitor the choices and priorities of policy-makers. An emission-reduction scheme based on a cap-and-trade system is sometimes deceptively called “market-based”.
In reality, it is as interventionist as a tax-based system with the added problem that it opens a Pandora’s Box of manipulative exploitation – largely led by the banking and finance interests based in New York, Chicago and London. Carbon markets are entirely political creations. Moreover, there is little evidence that carbon pricing will induce technological breakthroughs. Potential climate change policy interventions have become a large potential source of economic rent-seeking and pork-barrel spending. Each interest group is mobilising its own set of “hired hands”, sailing under the flags of being “scientists” or “economists”. Estimates of past, current and future levels of greenhouse-gas emissions abound. Cost-benefit calculations of policy alternatives are usually based on untested assumptions and flimsy factual foundations. Costs and benefits will mostly accrue many years into the future.

The Failed Copenhagen Treaty

The “Draft Treaty” that was proposed for approval at Copenhagen in December 2009 was based on three pillars: government intervention, facilitating mechanisms to subordinate market rules and financing mechanisms. The specific interventions proposed included the following:
- the creation of around 300 additional bureaucracies to contain free-market operations;
- a tax of 2 percent of each country’s GDP;
- a world-wide cap-and-trade regime; and
- unspecified fines on governments that do not comply with the new international rules.

A national ETS can only work as part of a properly regulated and audited global system in which all the big emitters – the USA, China and India – participate. Even in such an unlikely eventuality, a global ETS would be extremely volatile and open to manipulation and fraud. Who would effectively monitor global trading in permit derivatives? Emission reduction should not be made dependent on a highly questionable ETS that lacks transparency and accountability. Although the proposed scheme failed to win support as a binding agreement based on compliance mechanisms, the campaign is by no means over. The battle is likely to be carried forward by an army of NGOs, rent-seekers, bureaucrats, misguided activists and, above all, ambitious politicians.

Practical Abatement Strategies

We know with certainty that the world’s climate changes, that it always has done so and that it always will. Over millions of years the temperature has gone up and down as ice ages have alternated with warmer inter-glacial periods. Numerous “climate scientists” have in recent years produced findings suggesting that the world’s climate has been particularly changeable during the past few decades. Britain’s Hadley Centre and the University of East Anglia have assembled data showing that the ten years to 2004 were the warmest decade since reliable measurements began in the mid-19th century. Other “signals” include the decline in the amount of sea ice in the Arctic, the melting of Greenland’s ice cap, the faltering of sea currents in the North Atlantic and possible links between increased sea-surface temperatures and the frequency of intense categories of hurricanes, typhoons and tropical storms. But the solar hypothesis – natural warming driven by the sun’s heat output – still stands: that output is known to vary over time and has not been conclusively matched to temperature changes. Moreover, the fact remains that good-quality, long-term, consistent data is not sufficiently established beyond all reasonable doubt. Too little is known about the carbon “sinking” role of the oceans and the dissipation of gases in the atmosphere. Too many climate science assertions are based on patchy information. The reported manipulation of data by some activist scientists has prejudiced the objective certainty of their assertions. Science and politics make uncomfortable bedfellows and it is also clear that the deontology of some “climate scientists” is seriously deficient – not to mention their journalistic camp followers and hired-hand “economists”. Some even speak with the certainty of religious zealots.
The real challenge facing the international community is to move from a pollution-prone, carbon-intensive world economy to low-carbon products and industrial processes. Mankind has no effective mechanism to ensure that the cost of collective abatement is equitably allocated, that the free-rider problem is curtailed, that efficient and effective domestic policies are implemented and that international agreement is reached on measurement, reporting and verification. The way to make progress is to focus on less controversial objectives that are obviously beneficial to opinion leaders around the world and achievable by way of proven technologies: the “low-hanging fruit” to reduce pollution, per se. Examples are:
- The setting of tractable targets to reduce air pollution by black carbon and ozone precursor emissions by way of the deployment of clean-energy options for households and small industries in the developing world and of emission-reduction technologies for transportation around the world. Much pollution is culturally embedded in household activities such as cooking and heating by burning coal, wood, dung, organic material, etc.
- The expansion of renewable energy (hydro, wind and solar) and nuclear power as decarbonised sources of electricity.
- Substituting coal with less carbon-intensive natural gas where possible and capturing and storing carbon dioxide from power stations by way of “carbon sequestration”.
- Expanding efforts to reduce any form of the degradation or destruction of forests. Replanting the equivalent of what is being lost is a readily available photosynthetic carbon sequestration process.
- Targeting the reduction of the carbon footprints of individuals through “emissions awareness” public campaigns encouraging people to rely more on public transport, cycling and walking and to reduce their wasteful use of water, electricity and food. The efforts of billions of people could engender highly significant results. Success breeds success.
- Longer-term pollution mitigation should be achieved over time as the major international role-players indicate their commitments to emissions-cutting initiatives such as schemes for avoiding deforestation or boosting low-carbon energy.
- It is important to acknowledge that climate change is a fact of life. Throughout the earth’s geological time frame, ice ages have been followed by warmer periods and vice versa. These changes were apparently caused by a variety of natural causes: movements of the earth’s crust, the formation of continents, topographical changes, changes in the earth’s tilt, atmospheric changes, etc. The evidence of these geological climate changes can be seen in rising and falling sea levels, historically recorded ice ages and the movements of continents. What is under dispute is the scope of current man-induced global warming. It is clear that the wrong questions will lead to misleading answers. What is not in doubt is that human-induced pollution has increased in gigantic proportions. In the past two centuries alone, the number of human polluters has increased from one billion to close to seven billion. That is an indisputable fact – as is the rise in their collective level of pollution, per se.
- The problem of pollution cannot be addressed by simply improving on proportional per capita levels. The total pollution of a country of 10 million people is vastly less than that of a country of 1,000 million people. A country with a huge population footprint inevitably contributes more to pollution than a country with a small population footprint. The impact of population pressure on pollution of whatever kind must be recognised and included as a crucial part of any abatement strategy. If global warming in the long run is in doubt, the growing pollution and pressure on natural resources caused by population pressure cannot be in doubt.
- In the final analysis, it is important to realise that doing the wrong things is infinitely worse than doing what is known to be effective on a limited scale or even doing nothing at all. Beware of false prophets, propagandists, rent-seekers and frauds.

Curtailing Big Government

The tone of Paul Johnson’s survey A History of the Modern World – From 1917 to the 1980s is set by a quote on the back cover: “Throughout these years, the power of the state to do evil expanded with awesome speed. Its power to do good grew slowly and ambiguously.” Before 1914, state sectors were generally small, though some were growing at a rapid pace. The area of actual state activity averaged between 5 and 10 percent of the Gross National Product. In 1913 total government revenue in the USA was as low as 9 percent of GNP. In Germany, with its extensive welfare provisions, it was 18 percent and in Britain it was 13 percent. In both Imperial Russia and Japan, the predominance of the state in every area of economic activity was becoming a central fact of societal life. The state owned oil fields, gold and coal mines, the bulk of the railway system and thousands of factories. Russian and Japanese industry, when not publicly owned, had an exceptionally high dependence on tariff barriers, subsidies, grants and loans or was heavily public sector controlled. Finance Ministries kept close links with banks and appointed civil servants to their boards. In both countries, the State Bank operated under the Finance Ministry, controlled savings banks and credit associations, managed the finances of the railways, financed adventures in foreign policy and acted as regulator of the whole economy. The Ministry of Trade supervised private trading syndicates, regulated prices, profits and freight-charges and placed agents on the boards of all joint-stock companies. Imperial Russia constituted a large-scale experiment in state collective capitalism and appeared to its neighbours as highly successful.

The Influence of World War I

The onset of World War I enormously increased the size as well as the capacity of the state. The qualitative and quantitative expansion of the role of the state has never since been reversed. Germany adopted the Russian modus operandi, but with improved efficiency. When Lenin inherited the Imperial Russian state-capitalist machine in 1917-18, he looked to German experience and practice for guidance: Germany was practising what was openly termed “War Socialism”. In France the corporate spirit had always been strong, so the state speedily swallowed up the independence of the private sector. Jacobin patriotic fervour soon defined independent freedom as potentially “subversive”. The liberal Anglo-Saxon democracies succumbed to similar pressures. Under Britain’s Lloyd George, the Defence of the Realm Act brought manufacturing, transport and supply under the control of corporatist war boards. When Wilson brought the USA into the War, the case for corporatist control was undisputed. The central control mechanism was the War Industries Board whose members, according to Paul Johnson, “... ran a kindergarten for 1920s interventionism and the New Deal, which in turn inspired the New Frontier and the Great Society”. World War I demonstrated the impressive speed with which the modern state could expand itself. When Lenin’s Bolsheviks seized control in Russia, the template for statist control was already in place – and the rest of the road to totalitarian Soviet Communism is history. Since Karl Marx provided no blueprint for running or ending a proletarian dictatorship, the design of totalitarian communist dictatorship was haphazardly left to the devices of Lenin, Stalin and Mao Zedong. As subsequent history has shown, they excelled in masterminding instruments of bureaucratic control and oppressive power. They were less adept at managing the output of goods and services or the process whereby wealth is created to the benefit of society.

The Influence of the Great Depression

During the Great Depression of the 1930s, massive unemployment and poverty forced governments everywhere to take on a much expanded role. Governments stepped in to create employment opportunities and to regulate and stabilise economic activities. The market system was comprehensively discredited as being unable to deliver economic growth and a decent life. After World War II, hardly anybody in Europe believed in private enterprise; those who did were a defeated minority. Capitalism was considered morally objectionable: it appealed to greed instead of idealism and promoted inequality. Yergin and Stanislaw write that after the failures of capitalism in the 1930s, the Soviet Union enjoyed an economic prestige and respect in the West that is hard to reconstruct today. Its five-year plans for industrial development, its “command-and-control” economy and its claims to full employment were all seen to constitute a great antidote to capitalism’s failures. After World War II, the Soviet economic model gained further credit from the USSR’s successful resistance against the Nazi war machine. The mood of the time was well captured by Yergin and Stanislaw in the following description: “Altogether, these things gave socialism a good name. This respect and admiration came not only from the left in Europe but also from moderates, and even from conservatives. The anguish and brutality of the Stalinist system were not yet very visible, or were not taken very seriously. The limitations and rigidity of central planning – and, ultimately, its fatal flaws, its inability to innovate – were still decades away from being evident ... The Soviet model was the rallying point for the left. It challenged and haunted social democrats, centrists, and conservatives; its imprint on thinking across the entire political spectrum could not be denied.” (Yergin, D. and Stanislaw, J., The Commanding Heights – The Battle Between Government and the Marketplace That Is Remaking the Modern World, op. cit., p.22) In Britain the 1930s had delivered mass unemployment and hardship, bitter confrontation between labour and management and the preservation of the class system. Labourites saw Britain as a nation whose capitalists had failed it: under-investing, lacking entrepreneurial drive, hoarding profits, avoiding innovation and depriving workers. Appalled by poverty, the Fabians, Beatrice and Sidney Webb, George Bernard Shaw and ultimately also Clement Attlee, were committed to reform and social justice and a growing belief in the responsibility of government to install socialism by incremental reform: in Shaw’s words, step by step towards “collectivism”, not by revolutionary upheaval. British socialists were highly impressed by the “heroic” accomplishments of communism, socialism and central planning, which seemed to make the USSR an exception to global stagnation. Others, Lloyd George in particular, were enchanted by the apparent success of Nazi national-socialism in Germany.

The Influence of World War II in Britain

World War II vastly enlarged the economic realm of government. The management of the British economy during the war provided positive proof of what government could do and demonstrated the benefits of planning. Government took over the economy and squeezed more production out of the industrial machine than its capitalist owners had done before the war. The population gallantly assisted by turning the national economy into a common cause rather than an arena of class conflict. These historical currents led to a rejection of Adam Smith, laissez faire and traditional 19th-century liberalism as an economic philosophy. The rejection rested on disbelief in the idea that the individual’s pursuit of what Adam Smith defined as “self-interest” would add up, in aggregate, to the benefit of all. Instead, the common belief was that the pursuit of self-interest led to injustice and inequality, the few benefiting from the sweat of the many. The concept of “profit” as a motive for economic progress was itself considered morally distasteful. In the final weeks of World War II, the Labour Party took power in Britain. Under Attlee the Labourites decided to make government the “protector and partner of the people and take on greater responsibility for the well-being of its citizens than ever before.” The Beveridge Report, prepared by the former head of the London School of Economics, provided the blueprint. It set out to slay the five evil giants: want, disease, ignorance, squalor and idleness. The report’s influence was said to be global in reach, changing the way not only Britain but the entire industrialised world came to view the obligations of the state vis-à-vis social welfare.
Implementing the recommendations of the Beveridge Report, the British Labour government established free medical care under a National Health Service, created new systems of pensions, promoted better education and housing and sought to deliver “full employment”. All of this added up to what the Labourites called the “welfare state”, in contrast to the “power states” of Continental dictators. It transformed Britain into the first major “welfare state”. A major component of Labour Party policy was contained in the famous Clause IV of its constitution, written by Sidney Webb. It called for “common ownership of the means of production, distribution and exchange”. In time this policy objective became known as “nationalisation” of the “commanding heights” – the latter term borrowed from Lenin. When Labour came into power in July 1945, they started their nationalisation programme of key industries: coal, iron and steel, railroads, utilities and telecommunications. The argument was that as private businesses these industries had underinvested, been inefficient and lacked scale. As nationalised firms they would be more efficient and would ensure the achievement of the national objectives of economic growth, technological innovation, full employment, justice and equality. They would be the engines of economic growth, modernisation and the redistribution of income. The nationalised enterprises became “public corporations” (along the model of the BBC), operating as business concerns under government-appointed boards and buying in the necessary brains and technical skills. The underlying co-ordination between the nationalised enterprises was to be provided by the concept of “planning” – placing the welfare of the nation before any section. After the war, Britain’s finances were in desperate shape. Much of the country’s wealth had been used to defeat the Axis, and after the war much of its overseas investments were liquidated. Food rationing remained until 1954. But the British people had acquired a welfare state: access to health care, education and old age care. About 20 percent of the nation’s workforce ended up employed by the newly nationalised industries. Clement Attlee described the British brand of socialism as “... a mixed economy developing towards socialism ... The doctrines of abundance, of full employment, and of social security require the transfer to public ownership of certain major economic forces and the planned control in the public interest of many other economic activities.” This “mixed economy” with its welfare state became the basis of the post-war “Attlee Consensus”, a model that had a profound impact around the world over the next four decades.

The French Experience

In France the expansion of the state’s role arose out of the disaster of war: collapse and humiliation, collaboration and resistance. During the 1930s, the capitalist system was discredited as “rotten”. In 1939 French per capita income was the same as in 1913. After World War II a significant part of French business was tainted by collaboration with the Nazis and the puppet Vichy regime. Across the political spectrum there was consensus on the need to expand government in the face of the apparent weakness of the market system. As head of the new provisional government, General Charles de Gaulle declared in 1945 that “the state must hold the levers of command”. The new France was to consist of three sectors: the private, the controlled and the nationalised. Industries were highly fragmented and needed consolidation. Communist-controlled unions had to be enrolled in the process of reconstruction by nationalisation of industries. Through nationalisation acts in 1945 and 1946, the French state decisively asserted its domination of the “commanding heights” by taking control of banking, electricity, gas and coal. The state also undertook punitive nationalisation of companies whose owners and managers had consorted with Vichy (e.g. Renault and certain media interests). The form of corporate governance adopted in France gave board members from communist-controlled unions inordinate influence over the newly nationalised industries. In 1947 the communist unions organised massive strikes which, in turn, brought an end to further nationalisation plans. The end result was nevertheless the emergence of a mixed economy with the state in control of some of the most critical sectors of the economy. “Planification”, the implementation of a national economic plan, also became a trademark of the expansion of the state’s power over the economy. This process was described as indicative planning – focusing, prioritising and pointing the way.
It was intended to be different from the Soviet system with its highly directive and rigid central planning – a middle way between free markets and socialism. The French planning experiment achieved success under the guidance of a remarkably talented banker-businessman, Jean Monnet. Born into a brandy family from Cognac, Monnet had travelled all over the world selling the liquor since he was a teenager. During World War I he played a key role in organising the Allied supply effort. He was later appointed deputy secretary-general of the League of Nations but soon left to tend to his family’s business interests, which he turned into international banking. After World War II, de Gaulle invited Monnet to take control of the plans to modernise and transform the French economy. The Monnet Plan’s key role was prioritising, setting investment targets and allocating investment funds, with the focus on reconstruction of basic industries (electricity, coal, rail transportation, steel, cement and agricultural machinery). He wanted action that would generate momentum for more action. He secured American aid through the Marshall Plan. He established a planning board, the Commissariat Général du Plan, as an independent commission reporting directly to the Prime Minister. The Monnet Plan did not achieve all of its objectives. But what the plan did do, according to Yergin and Stanislaw, “... at a crucial period, was to provide the discipline, direction, vision, confidence and hope for a nation that otherwise might have remained in a deep and dangerous malaise ... and it set France on the road to an economic miracle in the 1950s”. (Op. cit., p.32) Monnet, who today is also called “the Father of the European Union”, did not like plans per se. But he did use the state’s role to promote modernisation and helped create a relative consensus behind the “mixed economy” at a time when Europe was highly exposed to the Soviet drive to the West. Monnet was not ideologically committed to the concept of state involvement. For him, the state was essentially a pragmatic instrument to achieve other objectives.

The German Experience

Nazism was the culmination of cartelisation and state control over the economy. Cartels and monopolies had their origins in Germany under the Kaiser’s rule, when they were allowed to develop unchecked. This paved the way for greater concentrations of economic and political power and for totalitarian control by the Nazis. After the war Germany was set on quite a different economic path. The division of Germany between East and West discredited left-wing trends, and aid under the Marshall Plan soon gave impetus to setting Germany’s reconstruction on a different path. The “Ordoliberals” of the Freiburg School, to which Ludwig Erhard belonged, believed in an economy based on competitive market forces. Government’s responsibility was to create and maintain a framework that promoted competition and prevented cartels. They believed competition was the best way to prevent private or public concentrations of power. At the same time it constituted the best guarantee of political liberty, as well as providing a superior economic mechanism. Yet the Ordoliberals were not simplistic proponents of laissez faire. Their sense of order was captured by the word “Ordo”: a natural hierarchical form of order. They believed in a strong state and a strong social morality based on justice, traditions and morals, standards and values. Economic, social and fiscal policies outside the market sphere should balance interests and protect the weak, restrain the immoderate, cut down excesses, limit power, set the rules of the game and guard their observance. Thus the Ordoliberals found nothing inconsistent between their commitment to free markets and their support of a social safety net – a system of subsidies and transfer payments to take care of the weak and disadvantaged. They were also devoted to a stable currency.
The September 1949 election was fought by Adenauer and Erhard over the “planned economy” versus the “social market economy”. Their Christian Democratic Party gained a majority in coalition with the Free Democratic Party and started building the social market economy that was subsequently described as the “Wirtschaftswunder”. The social market economy looked in many ways like a mixed economy. Public ownership at both the federal and state (Länder) levels was relatively broad in scope: transportation systems, telephone, telegraph and postal communications, radio and television networks and utilities. But there were also crucial differences between the German and the French and British models. In France and Britain the state took control of the “commanding heights”, whereas in Germany the state created a network of organisations to enable the market to work more effectively. The economy functioned under the tripartite management of government, business and labour. Advisory boards called “Betriebsräte” consisted of representatives from all three sectors. Within a decade this system propelled Germany to the centre of the European economic order, and it has continued to function as the locomotive of the Euro-area.

The Influence of the Cold War (1946-1989)

This period was characterised by a profound competitive struggle between the Western World led by the USA and the Communist World led by the USSR and China. At stake were not only national strategic and security interests but a pervasive conflict between the socio-economic-political cultures of these super-powers. At the end of World War II the Soviet Union occupied all of Eastern Europe including East Germany. The USA assumed the core leadership of the Free World in its efforts to contain the expansion of the Communist World. A long list of military standoffs ensued: Berlin, Korea, Vietnam, Cuba, Hungary, Angola and Afghanistan. These were accompanied, in addition, by a weapons-development and space-exploration contest which required astronomical amounts of government spending. Other allied powers, such as the UK, Germany, France and Japan, also participated, but the financing brunt was carried by the US government. The expansion of the US government’s role increased the public sector’s share of GNP from 14% in 1940 to 26% in 1990. The annual budget of the US Department of Defense rose to a level that was claimed to be larger than the entire GNP of Great Britain. The rise of the public sector in the Free World stems in part also from the increase in population that has spurred the growth of huge urban complexes, and in part from a widening government involvement with education, health, welfare, communication systems and many other spheres of collective human existence. The socialist spectrum extended across Europe from Scandinavia through the Lowlands and France to Italy. The “welfare state” became the order of the day. The European consensus in the 1950s and 1960s rested on the belief that political leadership (subject to periodic elections) could run their mixed economies with a combination of tools: regulation, planning, state ownership, Keynesian fiscal management and monetary policy. The actual mix varied considerably among countries, depending upon their history and traditions. What, in fact, governments delivered was huge and inefficient bureaucracies, heavily taxed private sectors and the introduction of endemic inflation.
In most rich countries counter-cyclical policies became the norm. Keynesianism became the macroeconomic orthodoxy. By the end of the 1960s most Western economies were facing the problem of “stagflation”, i.e. stagnant growth, growing unemployment and still-rising prices. The traditional Keynesian methods of stimulating economies by government spending or increasing the money supply did not end slow growth and unemployment – they only created additional inflation, stagnation and stifling bureaucracy. The prevailing Keynesian imprint on policy changed after the emergence of supply-side economics – particularly around 1980, after Margaret Thatcher took over as Prime Minister in the UK and Ronald Reagan became President of the USA. In both countries efforts were made to constrain government spending and taxation and to curtail the role of government. The market-friendly theories of Von Hayek and Friedman restored the job- and wealth-creating role of the private sector and helped curtail the power of the trade unions, which had extorted wage rises by means of industrial action without commensurate increases in productivity. Von Hayek maintained that the Keynesian approach was based on a paradoxical error: it could cushion the impact of economic slumps, but would inevitably institutionalise inflation. In the UK, Prime Minister Thatcher not only curtailed public spending, but also introduced a privatisation programme under the banner of creating a “capital-owning democracy”. She believed that citizens should own houses, shares and a stake in society. By 1990 the supply-side Thatcher/Reagan model had created a new economic agenda around the world.

The Influence of the Boom Period (1989-2008)

The year 1989 (200 years after the start of the French Revolution) marks the collapse of Soviet Communism – the end of a flawed system which embodied the fallacy that societies do best in a vast controlled collective set-up rather than in an open society allowing them a relatively free pursuit of their best interests. Communists as well as planned social democracies believed that government bureaucrats could run an efficient egalitarian economy. The implosion of Soviet Communism marked the beginning of an opportunity to change the predominant development model of the world.

In most Free World countries, the Keynesian orthodoxy was revised and modified by the free-market theories of Friedrich von Hayek and Milton Friedman. Although most countries were still running mixed economies, the new approach brought a tilt in favour of private enterprise. Even left-wing Labour parties in New Zealand, Australia and the UK, as well as social-democratic parties such as the American Democratic Party and similar left-of-centre parties in Europe, started to revise their policies towards reducing the role of the state, maximising individual liberty and economic freedom, deregulation, reliance on the market and decentralised decision-making. Liberalism was restored to its traditional meaning – less government, not more. It is important to note that during the New Deal era of the thirties, Franklin D. Roosevelt expropriated the word “liberal” to describe his policies in order to ward off accusations of being left-wing. He declared that liberalism was “plain English for a changed concept of the duty and responsibility of government towards economic life”. Ironically, since then the concept of “liberalism” has been identified in the United States with an expansion of government’s role in the economy. In the rest of the world “liberalism” means exactly the opposite – i.e. a reduced role for the state and the maximisation of individual liberty, economic freedom, reliance on the market and decentralised decision-making. People who are not familiar with the history of political thought often confuse these totally divergent uses of the word “liberal” – not to mention the misuse of the word “conservatism”. (See Yergin and Stanislaw, op. cit., p.15) The implosion of communism discredited the tilt towards statism. The 1990s saw the emergence of rapid economic growth, employment and rising standards of living. The transformation not only spread through Western and Eastern Europe, but also penetrated to India and China.
Deng Xiaoping became the first paramount leader in China to replace Maoist strictures with pragmatic, market-friendly reform measures to establish what he called a “socialist market economy”. Over the ten-year period 1985-1995 the Chinese economy grew at an average annual rate of 9.3% – leading over 200 million people out of poverty in a decade. The first steps were taken to open the door for private ownership of property and business enterprises and to reduce the high level of public ownership. This slow process is still under way, but it is unfortunately not coupled with an expansion of civil rights: China still remains a totalitarian communist dictatorship. The market-based system that unfolded in large parts of the world over the past three decades has delivered unequalled wealth and freedom on a dramatic scale. Hundreds of millions of people have been lifted out of abject poverty – the fastest growth in global income per person in history. In common parlance the new system became known as the “Washington consensus” (based on deregulation, innovation and privatisation). Sometimes it is referred to as Anglo-Saxon capitalism (highly leveraged, lightly regulated and globally mobile), with financial services provided from New York, Chicago and London. The Western World in the 1990s saw a rapid expansion of the information-technology-driven “new economy”. The IT revolution transformed the way business was done, revealing new opportunities for growth and helping businesses to lower their costs through rising productivity. This upward trend was most visible in the USA. Since 1996 America’s non-farm productivity improved on average by 2.8% a year. Share price levels were breaking new ground with regard to traditional yardsticks of value. The proportion of adult Americans invested in the stock market increased from 21% in 1990 to 43% in 1997, with 60% of all household assets allocated to equities.
This speculative frenzy prompted Federal Reserve Chairman Alan Greenspan to refer to “irrational exuberance” – quoting Robert Shiller. The collapse of the Asian and Russian economies towards the end of 1997 triggered a slump of 20% in the US stock market. Japan, South Korea, most of East Asia, Latin America and Eastern Europe – economies that account for 40% of world output – tumbled into recession.

After the Federal Reserve orchestrated the rescue of Long-Term Capital Management, a massively leveraged hedge fund, in 1998, the US economy managed to weather the economic recession that engulfed a large part of the world by the turn of the century. In the first few years of the new millennium the Anglo-Saxon macro-economic model steamed ahead. The annual growth rate in the USA and the UK continued to outperform the rest of the free world. But, simultaneously, a debt-financed asset boom in housing and share markets ballooned unchecked. Financial markets were flooded with dodgy and obscure financial instruments that carried the seeds of disaster. After the introduction of the Euro and the admission of several new members into the European Union, growth rates slowed down. By 2004 it was estimated by The Economist that the average person in the Euro zone was still 30% poorer (in terms of GDP per person measured at purchasing-power parity) than the average American. This discrepancy was ascribed to a variety of factors: the burden of unification in Germany, high tax rates, inflexible labour markets making it difficult to fire unproductive workers and unattractive to hire new ones, shorter work hours, bloated bureaucracies and a welfare-state mentality which dampens the work ethic. However, these comparisons obscured the fact that the Euro zone was less driven by a housing- and consumer-credit bubble, income imbalances and highly unstable and obscure financial practices.

The Return of Big Government

In the wake of the global financial crisis of 2008/09, massive amounts have been pumped into most economies by huge stimulus packages – mainly to strengthen demand and to contain unemployment. A lingering consequence of these interventions is the world-wide expansion of the state’s reach: its debts, its taxes, its authority, its scope – in short, the burden of its interventions. The ratchet effect of a crisis on the reach and authority of the state is that it does not recede when the crisis passes: the state never returns to its previous limits. The world economy is likely to be weighed down by the burden of public finances for years to come. As a result of the emergency measures taken to avert economic collapse and the general inclination to introduce new interventions to combat climate change, the role of the state is growing rapidly. But the resources to support the Leviathan are limited: tax revenues are shrinking and public debt is growing. The latest round of government interventions is likely to be as wasteful and inefficient as before, achieving less than the resources squandered. Floods of money have been allocated to cash handouts, make-work projects and the hiring of extra government employees. In the depths of the crisis governments focused on the short term: on preserving jobs and electoral support rather than structural reform or investment in productive assets. Many countries now face the challenge of repairing their balance sheets.

The Public Debt Burden

Debt has many negative connotations. In Hamlet, Polonius famously advises Laertes that he should “neither a borrower nor a lender be”. Politicians routinely bemoan the national debt, which is what the government owes to its citizens – and, increasingly, to foreigners. President Eisenhower called the national debt “our children’s inherited mortgage” and accused profligate governments of robbing their grandchildren. What levels of debt and deficits are excessive? Much depends on the size and vibrancy of each economy. The Maastricht Treaty’s fiscal criteria for monetary union prescribe that total government debt should be no more than 60 percent of GDP and that budget deficits be no bigger than 3 percent. But these rules have been easily broken without effective punishment. In October 2009, The Economist published findings that total government debt in the big rich countries was expected to reach an average of 100.6 percent of GDP in 2009 and to head for 119.4 percent of GDP in 2014. These levels of debt cannot be sustained for long, especially when nervous capital markets drive up the cost of servicing the growing debt. In the UK, official deficit and debt levels are exacerbated by the fact that its banks are overextended and its private sector overindebted on top of an overweight public sector. As a percentage of GDP, its budget deficit in 2010 is forecast at 14 percent, its public debt at 80 percent and its total debt (public and private) at around 400 percent. The critical variables are the level of confidence of the buyers of government bonds, the interest rates buyers require to hold government debt and the repayment terms involved. These requirements, in turn, depend on perceptions of the relevant government’s fiscal rectitude and the economic potential of the country and its taxpayers. A country with firm growth potential, a stable political system and a convincing record of good economic management will find it easier to raise loans domestically and internationally to cover its debt requirements. The payback potential of a country depends on its projected disposable income stream. Avoiding public debt spiralling out of control requires a high growth rate, raised taxes or reduced spending. But in several countries the growth rate is sclerotic, the tax base has been eroded by the economic downturn and the expenditure budgets are committed to ballooning public salaries and jobs. Hardly any euro-zone country had reason to be optimistic about its growth prospects. In the past, extended periods of government deficit spending generated a disease that Keynes never anticipated – stagflation, i.e. stagnant growth, high unemployment and rising prices at the same time. Several rich countries already seem to suffer from some form of stagflation: their growth prospects are weak, their public spending levels are high and inflationary pressures are rising.

Debt and Deficits as Percentage of GDP

Country     Debt 2009   Debt 2014   Budget Deficit 2009   Budget Deficit 2014
USA            88.8        112.0           -12.3                  4.3
Japan         217.4        239.2            -9.0                  9.8
Germany        79.8         91.4            -2.3                  2.8
France         77.4         95.5            -5.3                  3.1
Britain        68.6         99.7           -10.0                  3.4
Average       100.6        119.7            -8.6                  4.5

(Source: "The Long Climb", a Special Report on the World Economy, The Economist, October 3rd, 2009, p.12)

By precipitating inflationary pressures, excessive public borrowing affects current trends as well as the economy that future generations will inherit. Getting inflation down requires higher interest rates and wage restraint. Servicing the national debt requires taxes to rise, which in turn damages the economy by reducing incentives to work or by causing distortions in capital and labour markets that reduce income and wealth levels. Government borrowing also tends to reduce private investment – and so reduces the capital stock that future generations inherit, leaving living standards lower than they would otherwise have been. The reason is that government absorbs the savings that would otherwise have gone into more productive investment. The impact of this “crowding out” of private capital formation depends on the productivity of government spending. What matters as much as the size of the debt is the use of the money raised by the debt and the alternative uses it displaces (the “opportunity cost”). A national debt that finances welfare payments at the expense of productive investments constitutes a major burden, whereas financing a new road, railroad or harbour development is likely to be a boon. Fast-growing economies can happily support more borrowing than slow-growing ones. Fiscal adjustments that rely on spending cuts are more sustainable and friendlier to growth than those that rely on tax hikes. Cutting public-sector wages and transfers is better than cutting public investments. Spending cuts achieved through raising the pension age and slashing farm

subsidies have the double benefit of improving public finances and boosting economic growth through raising productivity and promoting more efficient resource allocation. Most European economies have seen average growth of below 2½ percent over the past two decades. If growth declines below the level of real interest rates, debt burdens will continue to rise. The debt level becomes excessive and unsustainable when a vicious circle is set in motion: rising debts boost interest payments, which in turn require extra borrowing to service earlier debt, and so on. Governments then have only three ways to escape the debt trap: raise taxes, slash spending or let inflation rip. If those measures are exhausted, the only option remaining is default – and eventually the status of a failed state, unless debt restructuring can be arranged with creditors. Defaulting countries are usually locked out of international capital markets because creditors are forced to accept large losses of principal. When defaulting countries re-enter markets once debt restructuring is complete, their reduced credit rating leads to escalating costs of funds, not only for their governments, but also for private companies in the defaulting countries.
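The arithmetic behind this vicious circle can be sketched with the standard debt-dynamics identity, in which the debt-to-GDP ratio evolves each year by the factor (1 + r)/(1 + g), where r is the real interest rate and g the real growth rate, less any primary surplus. The following is a minimal illustration; all parameter values are hypothetical round numbers, not figures drawn from the text:

```python
def project_debt(d0, r, g, pb, years):
    """Project the debt-to-GDP ratio forward under constant parameters.

    d0: starting debt/GDP ratio; r: real interest rate; g: real growth rate;
    pb: primary balance as a share of GDP (surplus positive).
    Uses the identity d[t+1] = d[t] * (1 + r) / (1 + g) - pb.
    """
    d = d0
    path = [d]
    for _ in range(years):
        d = d * (1 + r) / (1 + g) - pb
        path.append(d)
    return path

# Growth above the real interest rate: a 100% debt ratio slowly erodes.
benign = project_debt(d0=1.00, r=0.02, g=0.03, pb=0.0, years=10)

# Growth below the real interest rate: the same ratio ratchets upward --
# the vicious circle in which interest payments force extra borrowing.
trap = project_debt(d0=1.00, r=0.04, g=0.01, pb=0.0, years=10)

print(f"r < g, after 10 years: {benign[-1]:.2%}")
print(f"r > g, after 10 years: {trap[-1]:.2%}")
```

The sketch makes the text's point concrete: once r exceeds g, the ratio rises without any new spending at all, and only a sufficiently large primary surplus (pb) can stabilise it.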

Anglo-Saxon Liberalism Under Scrutiny

The Anglo-Saxon model of market liberalisation and deregulation was an obvious casualty of the 2008 meltdown. Australia’s Prime Minister, Kevin Rudd, proclaimed the “demise of neo-liberalism, the economic orthodoxy of our time”. Yukio Hatoyama, the new Prime Minister of Japan after the 2009 election, strongly campaigned against “unrestrained market fundamentalism”. The Chinese leadership did not miss the opportunity to scoff at the “teachers having trouble of their own”. The implication was that policies of deregulation and privatisation had brought the world economy to the brink of disaster. But were opportunistic politicians and journalists justified in calling into question all arguments in favour of market solutions? Do they also question the market’s role in industrial production, health, education and international trade? Attacks on liberal market solutions are ill-advised unless specific areas of regulatory action are discussed. The recent crisis was not a generic failure of markets. It was a specific failure of financial markets. No responsible person would deny the need for regulating the world of finance, which has been prone to bubbles, panics and crashes since the early days of international finance. Governments have always been involved in the regulation of finance: monitoring risky strategies, punishing scams and prescribing capital cover. But in recent years financial innovation outpaced the rule-makers. Derivatives such as collateralised debt obligations and credit default swaps flowed undetected into the international financial circuit. The imbalances caused by China’s intervention to hold down its exchange rate (against market trends) sent a wash of capital into the American market. The crisis was as much caused by policy mistakes as by Wall Street’s market excesses. The way out seems to be more a matter of better regulation than more regulation.
Heavy regulation would not inoculate the world against future crises. What is needed is better government, not more government. If regulators could learn from this crisis, they could manage finance better in the future.

Downsizing the Public Sector

The level of spending in a government’s budget and the outcome achieved with the budgeted amount are two different things. Standing between the two is a cumbersome bureaucratic machine, often burdened by inefficiency, waste, apathy and corruption. Welfare spending may fail to reach the needy, “nation-building” spending may be squandered or misallocated and infrastructure “investment” may end up in nepotistic pork-barrels. Policy impact is not the same as policy input. Keeping down the number of public-sector employees and keeping a lid on the total running cost of the public sector is by far the biggest challenge facing the modern democratic state.


Unaffordable Public Spending

Measured per person in constant monetary units, spending on government activity has grown much faster than both the population and inflation in the Western world. Government sectors have grown faster than the private sectors – the ultimate source of taxable income – during most of the past half century. For a brief period from the mid-1990s to around 2007, several Western countries saw a small decline in the rate of increase in government spending’s share of GDP. However, since the onset of the downturn in mid-2008, government spending on stimulus packages shot up at an unprecedented rate. The problem of bureaucratic growth is exacerbated by the trend that bureaucracies grow very fast during national emergencies but do not decline to their previous levels when the emergency ends. Post-crisis expenditures simply level out on a higher plateau than pre-crisis expenditures. Thus national emergencies provide the occasion for bureaucracies to rise to successively higher plateaus. The impact of activities that cost the taxpayer billions is seldom properly measured and independently evaluated. Even the goals and objectives are seldom clearly stated, so there is no standard against which to measure the success of the spending programme. A proper impact evaluation of a policy would require a clear indication of the target, its possible “spill-over effects”, its direct costs and its indirect costs. All the benefits and costs, both immediate and future, must be measured. If benefits do not exceed the costs, the community loses twice: from the waste of diverting scarce capital from better uses and from distortions caused by the higher taxes needed to cover the costs.
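The benefit-versus-cost test described above is, in essence, a discounted-cash-flow comparison: future benefits and costs must be brought to present value before they can be weighed against each other. A generic sketch with invented figures (not an evaluation of any actual programme):

```python
def npv(cash_flows, discount_rate):
    """Net present value of a stream of yearly net benefits (year 0 first)."""
    return sum(cf / (1 + discount_rate) ** t for t, cf in enumerate(cash_flows))

# Hypothetical programme: 100m upfront cost, then 15m of net benefits
# per year for ten years.
programme = [-100] + [15] * 10

print(f"NPV at 5%:  {npv(programme, 0.05):+.1f}m")   # positive -> worthwhile
print(f"NPV at 12%: {npv(programme, 0.12):+.1f}m")   # negative -> community loses
```

The same stream of benefits passes or fails the test depending on the discount rate, which is one reason the choice of rate (reflecting the return on the displaced alternative uses of capital) is itself contested in impact evaluations.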

Ballooning Public Sector Employment

Expanding the role of government in effect means increasing the number of public service employees. Expanded government functions translate into more government interventions: more laws; more ordinances setting out rules, regulations and requirements; more institutions such as departments, state corporations, councils, commissions, institutes, tribunals, boards, agencies and offices; more bureaucrats, advisors, consultants, clerks, inspectors, officials and a host of others paid predominantly out of the public purse. It requires skilful research to quantify the total public payroll of modern governments, and considerable forensic skill to trace and count all the hangers-on, scavengers and camp followers hidden in the nooks and crannies of official statistics. The expansion of the role of government inevitably increases the running costs, the number of public-sector employees and their emoluments (salaries, superannuation and entitlements). The total running cost of the public sector (including emoluments) is by far the largest recurrent item in government spending. Increases in spending are driven by the number of people employed as public-sector workers and by changes in the salaries and other benefits paid to them. In most Western democracies, the total payrolls of public-sector workers now stand at around 60 percent of total government spending across all levels of government. In authoritarian countries the share is much higher. Accurate comparative and longitudinal statistics on the ballooning public sector are difficult to obtain. This is due to the complexity of reconciling divergent statistical categories and nomenclature, and also to the controversial implications of the results obtained. Often official statistics are totally unreliable.
Definitions vary of what is permanent and temporary, full-time and part-time, central and regional or local, civil service or para-statal, civilian or military, elected or appointed. Few journalists are equipped with the skills and motivation to undertake the tedious task of working through stacks of statistical tables. Most economists are themselves part of the corpus of public-sector workers and have limited incentives to divulge the family secrets. Politicians have even less interest in debating this issue because public-sector employees constitute a substantial voting bloc – around 20 to 30 percent of the workforce in most constitutional democracies.

Public-Sector Unionisation

In the USA, the UK, France, Germany, Scandinavia, Canada and Australia, public-sector workers account for the bulk of union membership. They outnumber privately employed union

members by a considerable margin. They also do most of the striking, since they are employed by monopoly agencies whose staff include the would-be controllers: police, soldiers, inspectors, judicial officers, etc. In the UK the traditional links between the Labour Party and the unions still prevail. The unions gave birth to the party and still provide more than 50 percent of the party’s money. When the Labour government published a proposal on reforming party financing in 2007, it recommended limiting donations from companies and businessmen – the main sources of Tory cash – but exempted the unions from the new rules! Similar structural relationships exist in many parts of the world. Public-sector unionisation and collective bargaining practices have led to glaring disparities between public- and private-sector emoluments. In the UK the median public-sector employee is better paid than his private-sector counterpart, and his pension benefits are much better. Public-sector employees are privileged with defined-benefit (DB) schemes financed out of the government’s current tax revenues. In the private sector most employees fall under contributory systems, where the employers’ and employees’ contributions of a certain percentage of salaries or wages to their pension funds determine the pensions they receive. There is a gap of around 30 percent between public- and private-sector benefit rates. Every year since 2001, public-sector workers have enjoyed bigger pay rises. In the USA no less than 37 percent of public-sector workers were unionised in 2008, nearly five times the share in the private sector. The share of unionised private-sector jobs has collapsed from 17 percent to 8 percent during the past 25 years. In 2009, for the first time, public-sector workers comprised more than half of America’s union members. The Economist reports that Democrats in particular have little incentive to anger workers who are often their electoral foot-soldiers.
Those who defy unions do so at their peril as Arnold Schwarzenegger, Governor of California, discovered to his dismay when he tried to curb the unions’ power. As a result of their powerful lobby, public-sector workers are spoiled and unfairly advantaged. Government employees in the USA earn 21 percent more than their private-sector equivalents and are 24 percent more likely to have access to health care. Only 21 percent of private workers enjoy defined-benefit (DB) pensions which guarantee retirement income based on years of service and final salary. No less than 84 percent of government employees receive DB plans. State and local governments are reported to face huge budget gaps for the fiscal years 2009 to 2011 according to the National Association of State Budget Officers. In California, the unfunded liabilities of retirement programmes for public workers are expected to exceed $100 billion through 2015. Cutting costs is politically difficult. The stranglehold of public-sector employees on society is even worse in many parts of the world: France, Scandinavia, Russia, Egypt, India, China, Japan, Australia, New Zealand, Mexico, Brazil, etc. Some reformers argue that public-sector unions should be forced to face competition by having to bid against the private sector to deliver services. Unless public-sector workers find ways to improve productivity and find more innovative ways to deliver those services, taxpayers will turn elsewhere and overturn the system. (See The Economist, “Public-sector Unions”, December 12th, 2009, pp.33-34)
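The structural difference between the two pension types discussed above can be made concrete with stylised formulas: a defined-benefit scheme pays a fraction of final salary per year of service, while a contributory (defined-contribution) pot is whatever accumulated contributions plus returns can buy as an annuity. All parameters here – the 1/60 accrual rate, the 10 percent contribution rate, the assumed return and annuity rate – are hypothetical round numbers, not figures from the text:

```python
def db_pension(final_salary, years_service, accrual=1/60):
    """Stylised defined-benefit formula: accrual rate x years x final salary."""
    return final_salary * years_service * accrual

def dc_pension(salary, years, contrib_rate=0.10, growth=0.03, annuity_rate=0.05):
    """Accumulate a contribution pot at a constant return, then annuitise it."""
    pot = 0.0
    for _ in range(years):
        pot = pot * (1 + growth) + salary * contrib_rate
    return pot * annuity_rate

salary = 50_000
print(f"DB pension after 40 years' service: {db_pension(salary, 40):,.0f}")
print(f"DC pension after 40 years' saving:  {dc_pension(salary, 40):,.0f}")
```

Under these illustrative assumptions the DB promise comes out well ahead of the DC annuity, and crucially the DB shortfall is borne by the sponsor (here, the taxpayer), whereas in a DC scheme investment risk stays with the employee.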

Public-Sector Pension Scams

For decades governments have not revealed the true cost of public-sector pension schemes to the general tax-paying public. One possible explanation is that the subject is too politically sensitive – particularly since public servants and politicians belong to similarly funded pension schemes. Hence the full bill has never been fully accounted for. The inevitable consequence is that the public sector is building up a huge future liability – on the same level as other public debt – which future taxpayers will be required to meet. In Australia the government created a well-funded “Future Fund” to cover the future liabilities of public-sector pensioners (politicians and bureaucrats). The popular perception among ordinary voters is that the “Future Fund” is a provision for “rainy day” needs. In reality it is

nothing more than a “nest egg” for public-sector employees. It does not cover the liability for the old-age pensions of ordinary citizens. In many countries old-age pension schemes face intractable problems. In most cases future liabilities have to be funded by future taxes. The problem with such a pay-as-you-go scheme is that it relies on a continuous stream of tax-based income. This resembles a Ponzi scheme, which feeds on a continuous stream of new depositors to meet the claims of the old. In the private sector a Pension Regulator can force a company to top up its contributions to its pension scheme if such a need is indicated by an actuarial evaluation of its liabilities. But the Pension Regulator cannot force the government to top up its contributions. That makes it particularly important to account properly for the cost of public-sector pensions. Proper accounting does not stop at monitoring cash-flow needs, but also involves the cost of future liabilities.
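The pay-as-you-go mechanics reduce to a single ratio: because current pensions are paid directly out of current payroll taxes, the contribution rate required of workers rises in step with the number of retirees per worker. A toy sketch, with all figures hypothetical:

```python
def payg_contribution_rate(workers, retirees, replacement_rate=0.5):
    """Payroll-tax rate at which current workers exactly fund current pensions.

    replacement_rate is the pension expressed as a fraction of the average
    wage; the required rate equals replacement_rate x (retirees / workers).
    """
    dependency_ratio = retirees / workers
    return replacement_rate * dependency_ratio

# Four workers per retiree: a modest levy suffices.
print(f"4 workers per retiree: {payg_contribution_rate(4, 1):.1%}")

# Two workers per retiree: the required rate doubles -- the Ponzi-like
# reliance on a growing contributor base that the text describes.
print(f"2 workers per retiree: {payg_contribution_rate(2, 1):.1%}")
```

This is why an ageing population, or any slowdown in the stream of new contributors, forces a PAYG scheme to raise taxes, cut the replacement rate, or accumulate exactly the kind of unfunded liability discussed above.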

Exploitive Collective Bargaining

During the Kennedy era, collective bargaining practices developed in the private sector were naively transferred to the public sector in the USA. Since then the trend has spread around the world. Today it is clear that serious questions have to be asked about collective bargaining in the public sector, with its inherent conflicts of interest and its unsustainability. In the private sector the bargaining process is subject to the discipline of the bottom line: the profitability of the enterprise in relation to the productivity of the workers. In the public sector there is no effective measure of productivity and the discipline of the bottom line is totally absent. The people who are assumed to represent the “employer” are also “employees” dependent on the taxpayer. The taxpayer has only an indirect, much delayed influence on the bargaining process and is forced to rely on very blunt instruments of control. Taxpayer interests are certainly not adequately represented. The results are obvious: disparities and cost burdens that are not sustainable. Moreover, the system is patently unfair: as well as shouldering much of the burden of their own retirement, private-sector workers pay for generous public-sector pensions via their taxes. During the economic downturn since 2008, millions of private-sector workers have lost their jobs. There are no records of public-sector sackings as a result of the downturn. Even for incompetence, sacking is a rarity. To add insult to injury, it is profitable private-sector firms that provide the taxable income base for all public-sector activities and emoluments. Unprofitable businesses go bankrupt and stop functioning.

Government Bureaucracies’ Dead Weight

The average growth rate of every major industrial country in the Western world has been on the decline for several decades. Only resource-based economies have grown. The economic sclerosis afflicting the rich Western economies manifests itself in many symptoms: unemployment rates around 10 percent; chronic budget deficits, with levels of government spending approaching – and in some cases exceeding – 50 percent of national output; social welfare systems placing an unsustainable tax burden on society; public debt levels creating debt traps in which governments must borrow merely to service interest on earlier debt; and endemic inflation built in as an integral part of Keynesian national accounts. During the 1950s and 1960s, many economists thought Keynesian deficit spending was costless. Governments thought they could fine-tune their economies out of recession. Eventually it was realised that the ultimate result of too much stimulus was higher inflation and excessive government involvement in the economy. Keynesian demand management was abandoned in favour of the “monetary approach”. The experience of recent years has demonstrated that the use of monetary policy – usually to keep interest rates low – had its costs too: not so much in consumer inflation as in rising debt levels and growing asset bubbles. The costs of the latest round of government intervention will be felt in many ways. Investors will take into account during the next boom that governments will rescue the largest banks,

slash interest rates, intervene in markets and run huge deficits. That means the moral-hazard problem will be greater. The financial wizards will know what to do. Many questions remain. What will bond markets do if central banks unload the holdings acquired during the crisis? What can creditors do to protect their investments from near-zero interest rates and floating exchange rates? The new era seems to be even more fragile than the previous one. There appears to be more scope for policy-makers to go wrong, or to be manipulated by vested interests. During the period between 1913, when the Trade Union Act was passed, and the 1980s, when Margaret Thatcher introduced minor abridgements of union privileges in the Employment Acts of 1980 and 1982, British trade unions exercised excessive political power and enjoyed excessive privileges. They eventually brought the British economy to a standstill. They changed Britain from a prosperous minimum-government state to a country where public expenditure accounted for around 60 percent of GDP in 1980. The burden of union control destroyed Britain’s growth potential in three ways:
- First, its restrictive practices inhibited the growth of productivity and discouraged investment.
- Second, it increased the pressure of wage inflation on the back of inflation-indexed collective bargaining (where the index is calculated by the public sector).
- Third, trade union demands on government had a cumulative tendency to increase the size of the public sector and government’s share of GDP.
In today’s world the trade unions are dominated by government employees, who are well placed to dominate collective bargaining in the public sector. Many of the political leaders who are supposed to represent taxpaying electorates are also, in effect, ex-employees of the government sector or ex-executives of the trade unions. The beneficiaries – public-sector employees and their families – form around 25 percent of the electorate.
This stranglehold does not bode well for the future of both representative democracies and free-enterprise economies. A self-serving bureaucracy will destroy society’s creative potential. Serious questions must be asked about the causes of the economic stagnation that has taken root in most social democracies. Is the problem “cyclical” or of a “structural” nature? Can this trend be reversed? Is the welfare state a state that cannot stop growing? Many factors make it unlikely that the countries of the West will succeed in significantly rolling back the encroaching state or will return to higher growth rates in the foreseeable future. In public debates the obvious mainspring of their malaise is persistently overlooked: the size of government.

Safeguarding Democracy

Students of political life have long debated the prospects for democracy. For optimists there is ultimately one path to modernity, and that is the path towards liberal democracy. For sceptics the victory of democracy is not pre-ordained. They question the presumption that liberal democracy possesses such intrinsic advantages that there is an inevitability in history’s march towards democratisation. Between 1960 and 1995 scores of countries made the transition to democracy, bringing widespread euphoria about democracy’s future. Subsequently, democracy has retreated again in several countries: Zimbabwe, Bangladesh, Nigeria, the Philippines, Russia, Thailand and Venezuela. American attempts to establish democracy in Iraq and Afghanistan seem to have left both countries in chaos. The growing power of China and Russia has led many observers to question the rise of democracy and to conclude that there are multiple paths to capitalist modernity and that authoritarianism is quite compatible with capitalism. These issues were recently comprehensively debated in the pages of Foreign Affairs, the bi-monthly journal of the American-based Council on Foreign Relations.

The Autocratic Revival

The historian Azar Gat (Tel Aviv University) argued in “The Return of Authoritarian Great Powers” (Foreign Affairs, July/August 2007) that China and Russia mark “... a return of economically successful authoritarian capitalist powers” and “... may represent a viable

alternative path to modernity”. This implies that there is no inevitable connection between the economic liberalisation associated with capitalism and economic globalisation, on the one hand, and political liberalisation associated with liberal democracy and limited-government constitutionalism on the other. This thesis implicitly accepts that it is capitalism, not socialism, that is the most viable economic system. But it does not imply that democracy has a competitive advantage over authoritarian systems. The supposed autocratic revival also triggered a reassessment of why earlier autocratic states failed. Gat, for example, contends that the earlier failure of authoritarian capitalist states was a product of contingent factors rather than some deep misfit between capitalism and authoritarianism. Gat argues that Nazi Germany and Imperial Japan failed as a result of insufficient territorial and population size, rather than some other essential flaw or intrinsic weakness. The autocratic revivalists claim that the combination of authoritarian political systems and capitalism in major countries such as China and Russia is not a fleeting stage of transition but a durable alternative to the Western combination of political democracy and capitalism. Hence the prospects for liberal democracy are far less bright than the liberal narrative stretching from the Enlightenment to the 1990s allows.

The Liberal Narrative

Two political scientists, Daniel Deudney (Johns Hopkins) and G. John Ikenberry (Princeton), are strong proponents of the argument that liberal capitalist democracy is the inevitable wave of the future. They argue that illiberal regime types (e.g. the Axis states and Soviet Russia), being unable to formulate successful grand strategies, are prone to making profound strategic blunders by launching implausible campaigns of aggression and underestimating their adversaries. They ascribe these blunders to living in a closed information system where ill-informed views are left unchallenged. They maintain that closed, authoritarian systems are generally plagued by corruption because of the absence of accountability structures. They maintain that there are strong reasons to believe in the generally superior adaptability of liberal democratic regimes – reconfiguring themselves in response to systemic breakdowns and emerging threats. Deudney and Ikenberry also argue that the collapse of the Soviet Union and the international communist bloc, after the prolonged Cold War period, destroyed the potential of communism and socialism to offer a fundamental alternative to liberal capitalism. Beginning in the late 1940s, responding to the crisis of economic collapse during the Great Depression in the 1930s and taking advantage of US geopolitical dominance in the wake of World War II, the United States spearheaded the creation of a set of international rules and institutions, most notably the Bretton Woods system (including the World Bank and the IMF), the UN and various security partnerships. Taken together, US hegemony and this order gave liberal democratic states a greater presence in world politics. They also provided a structure that other states could engage with and join, one that could reorient those states in a liberal direction.
The ability of the Western states to generate wealth and power seemed to prove that liberal democracy represented the sole pathway to sustained modernisation. The near-universal eagerness of peoples and states around the world to join the expanding capitalist international system gave further credibility to this liberal vision. (See Deudney, D. and Ikenberry, G.J. – “The Myth of the Autocratic Revival – Why Liberal Democracy will Prevail” in Foreign Affairs, Jan/Feb, 2009, pp.77-82)

Linkages Between Capitalism and Democracy

There are many connections between capitalism and democracy, but Deudney and Ikenberry highlight three as most important:
- First, rising levels of wealth and education create demands for political participation and accountability. The basic logic behind this link is that rising living standards are made possible because capitalism, over time, generates a socio-economic stratum (the middle class) whose interests incline it to challenge closed political decision-making.

- Second is the relationship between capitalist property systems and the rule of law. In a capitalist system, by definition, the means of production are held as private property and economic transactions occur through contracts. The functioning of capitalism requires the enforcement of contracts, the adjudication of business disputes and court systems to apply the rule of law. Independent rights in the economic sphere and the institutions they require are an intrinsic limitation on state power which, over time, is embedded in a network of wider political rights.
- Third, the economic development propelled by capitalism leads to a divergence of interests. Modern societies are characterised by a complexity of specialised activities and occupations, producing a plural society rather than a mass polity. The increasing diversity of socio-economic interests leads to demands for competitive elections between multiple parties.
(See Deudney and Ikenberry, op.cit., pp.83-84)

The Situation in China

Despite rapid rates of economic growth, China remains a very poor country with a huge population that has only recently begun to taste the fruits of capitalist modernisation. The middle class is still relatively small and political accountability is still a mirage. A deepening civic culture can only emerge in the wake of a deepening economic modernisation. The liberal narrative is not bound by a brief time-frame: it requires a deep-rooted socio-economic transformation, which took many generations in the Western world. The Chinese transformation is also likely to take many decades and may be interrupted by stops and starts and even by periodic backward steps. The pathway to change is not likely to be as quick as the economic transformation engineered by Deng Xiaoping. He ended the Maoist strictures in 1978 and replaced them with his policy of “socialism with Chinese characteristics”. Only at the 14th Communist Party Congress in 1992 did he succeed in shifting the policy focus from a “socialist planned economy” to a “socialist market economy”. Although the abusive use of state authority and the aggrandisement of government officials are tendencies in every political system, they are much harder to check in autocratic regimes. Hence corruption is inherently endemic in the Chinese and Russian single-party autocracies. It also thrives on stratified predatory systems controlled by a parasitic ruling class of apparatchiks. Top-down, closed structures of power and authority choke off information from outside sources or distort it for the purposes of political control. Looking at the overall situation in China and Russia, there is little evidence of the emergence of a stable equilibrium between market capitalism and autocracy, such that this combination should be considered a new model of modernity. China and Russia are both bureaucratic autocracies, but they are certainly more liberal and democratic now than they have ever been.
In the case of Russia, the cushion of plentiful oil and gas has delayed political liberalisation and, in the words of Deudney and Ikenberry, “... high energy prices and exports help subsidise bad government”. China, however, faces many developmental constraints, most notably overpopulation, environmental degradation and energy dependence. The problems of corruption, inequality and unaccountability will continue to drive political change in China, Russia and the rest of the world’s autocracies.

Democracy’s Prerequisites

A cautiously optimistic, revised version of modernisation theory is expressed by political scientists Ronald Inglehart (University of Michigan) and Christian Welzel (Jacobs University Bremen) in their article “How Development Leads to Democracy – What We Know About Modernization” (Foreign Affairs, March/April 2009, pp.33-48). Noting the recent retreat of democracy in many countries, the authors state that “although the outlook is never hopeless, democracy is most likely to emerge and survive when certain social and cultural conditions are in place”. This tone continues to the article’s final page, where they argue that democratic institutions will not emerge in China or Iran as long as the current regimes continue to control the security forces.

Inglehart and Welzel emphasise the crucial distinction between industrial and post-industrial societies which emerged well after 1945. They maintain that authoritarian regimes can be quite effective at promoting rapid industrialisation as long as that industrialisation depends largely on massive inputs and on marching large numbers of disciplined workers to the factories. As long as authoritarian regimes are importing technologies that were developed abroad, they can play catch-up even faster than democratic ones. Thus by 1980 the Soviet Union, with its substantially larger population, was producing more steel and electricity than the United States. But although it had a larger industrial base, it was unable to compete in the realm of high technology, which had become crucial to military power. A successful knowledge society requires open communication flows and an innovative and autonomous work force. For China to attain these will require substantial liberalisation. Inglehart and Welzel hold that economic development tends to bring about important, roughly predictable changes in society, culture and politics. They claim that earlier versions of modernisation theory need to be modified in several respects:
- Firstly, modernisation is not linear. It does not move constantly in the same direction; instead, the process reaches inflection points. Empirical evidence shows that each phase of modernisation is associated with distinctive changes in people’s worldviews. Industrialisation leads to bureaucratisation, hierarchy, centralisation of authority, secularisation and a shift away from traditional values. The rise of post-industrial society brings a further set of cultural changes that move in a different direction: instead of bureaucratisation and centralisation, the new trend is toward an increasing emphasis on individual autonomy and self-expression. This, in turn, leads to a growing emancipation from authority. Thus high levels of economic development tend to make people more tolerant and trusting, placing more emphasis on self-expression and raising expectations of participation in decision-making. This process is not deterministic, and any forecasts can only be probabilistic, since economic factors are not the only influence; leaders and nation-specific events also shape the course of events. Moreover, modernisation is not irreversible: severe economic collapse can reverse the trend.
- Secondly, history matters. Social and cultural change is path-dependent: a society’s heritage – whether shaped by Protestantism, Catholicism, Islam, Confucianism or Communism – leaves a lasting imprint on its worldview. Religion proves to be particularly resilient. Although the publics of industrialising societies are becoming richer and more educated, they are not necessarily creating a uniform global culture. Cultural heritages are remarkably enduring.
- Thirdly, modernisation is not westernisation, contrary to the earlier, ethnocentric version of the theory. The process of industrialisation began in the West, but in recent decades East Asia has recorded the world’s highest economic growth rates. The industrialising countries are not becoming like the United States.
- Fourthly, modernisation does not automatically lead to democracy. Rather, it brings about social and cultural changes that make democratisation increasingly probable in the long run. High levels of per capita GDP alone do not produce democracy, as the examples of Kuwait and the UAE – rich but not modernised – show. The emergence of post-industrial societies brings about social and cultural changes that are specifically conducive to democratisation. Knowledge societies cannot function without highly educated publics that have become increasingly accustomed to thinking for themselves. Rising levels of economic security bring growing demands for self-expression, free choice and political participation. 
Repressing mass demands for more open societies becomes increasingly costly. Thus, in its advanced stages, modernisation brings social and cultural changes that make the emergence of democratic institutions increasingly likely. Modernisation theory today holds that economic and technical development bring about a coherent set of social, cultural and political changes: pervasive shifts in people’s beliefs and motivations, the role of religion, job motivations, human fertility rates, gender roles and sexual norms. These shifts inspire growing mass demands for democratic participation and for more responsive behaviour on the part of elites, making democracy increasingly likely to emerge.

Sham Democracies During the spread of democracy in the last quarter of the 20th century, much emphasis was laid on electoral practices. There was a tendency to view any regime that held “free and fair” elections as a democracy. Many forms of sham democracy emerged. It is therefore important to distinguish between effective and ineffective democracies. The essence of democracy is that it empowers ordinary citizens. Effective democracy therefore depends not on “bills of rights” or “paper constitutions setting out civil and political rights”, but on the degree to which office-holders respect these rights. In-depth analysis is required to establish the degree of integrity of government elites and the institutions of government as well as the participatory opportunities and habits of the general public. Democracy involves much more than occasional voting. It involves the presence of a civic culture: public accountability, public responsibility, the paramountcy of the public interest, constitutional checks and balances and the rule of law. Around the world, people who are disillusioned with American liberal capitalism have an eye on China as an alternative. What they seem to ignore is that China is, in effect, ruled by the 80 million members of the Communist Party, whose basic role is to rubber-stamp the decisions of the leaders. It is a formula that achieves short-term comfort at the expense of long-term stability. Within the party’s ranks, the leadership is immune to scrutiny. This fosters corruption and leads to untested decisions. It gives supposedly elected posts to hand-picked favourites. Members of the private sector are usually marginalised. Inner-party democracy is needed to give voice to the party members. Competitive parties are needed to give voice to the people. Without competitive political parties, leadership is decided not by the ballot box but by backroom deals between party factions. 
China still has a long way to go before it would approach the status of a democracy. Intense security remains in force to control public opinion. Networking on the internet is tightly controlled and any form of dissidence is prosecuted as subversion. Even intra-party competition between factions is strictly avoided by upholding the Communist Party’s tradition of “centralism”: following the leader with no open dissent. (See “Democracy, China and the Communist Party”, The Economist, December 19th, 2009, pp.45-46)

Democracy’s Vulnerabilities Democracy is a delicate political plant. In the history of mankind the development of democracy was a slow and arduous process. It was built on the blood, sweat and tears of millions of people. Of the close to 200 countries in the world today, it is unlikely that more than 20 qualify as true democracies. Democracy – a word compounded by the Ancient Greeks from demos (the people) and kratos (authority) – means many different things to different people. There are a great many definitions, and a wide spectrum of political systems claims to be democratic. In the Western world democracy is considered as a form of government organised in accordance with the principles of popular sovereignty for each nation within its territory, political equality for all members of the community, effective popular consultation of the people on the basis of regular free elections and majority rule within the framework of transparent and established constitutional law. Government of the people, by the people, for the people; competing branches of government to provide checks and balances; the election of those in power through competitive political parties; and free speech and freedom of association are prerequisites for the establishment of democratic societies. (See Austin Ranney, The Governing of Men, Holt, Rinehart and Winston, 1966, pp.84-101) Competitive democracy is vulnerable on various fronts, but particularly from within on account of its soft underbelly. Hard choices must be made in relation to a wide range of issues: the control of special interests like public-sector unions, corruption and nepotism or patronage in government contracting and appointments, separation of religion and state, control of subversive activities, maintaining national security, vexatious or malicious litigation,

exploitation by rent seekers, the growth of the “parasite” economy, exploitation of welfare benefits, free riders, the “rule of lawyers”, the multiplication of pressure groups, the propaganda role of public broadcasters, the manipulative role of opinion polls, the ballooning public sector and the potential tyranny of the majority where society is divided between those who depend on the government and those who pay the bills.

Democracy’s Prospects In its January 16th-22nd, 2010 issue, The Economist reviewed the Freedom House survey of liberty and human rights, the report entitled “Freedom in the World 2010: Global Erosion of Freedom”. It found that declines in liberty had occurred in 2009 in no fewer than 40 countries, while gains were recorded in only 16. Taken as a whole, the findings suggest a turn for the worse. Freedom House claimed that liberal democracy is not merely suffering political reverses; it is also in intellectual retreat as Western governments and intellectuals temper their moral concerns with commercial and pragmatic considerations. Can democracy prevail by winning a genuinely open debate that political freedom works best? The corruption watchdog Transparency International reported that all but two of the 30 least corrupt countries in the world are democracies – with Singapore and Hong Kong as two semi-democratic exceptions. Autocracies occupy the highest rankings on the corruption scale. Entrenched political elites, unhindered by free and fair elections, can more easily get away with stuffing their pockets, rewarding their supporters and bribing their enemies. Although the link between political systems and economic growth is hard to prove, The Economist quotes a study published by the Council on Foreign Relations showing that between 1960 and 2001 the average annual growth rate was 2.3 percent for democracies and 1.6 percent for autocracies. It is often argued that autocracies create stability without which growth is impossible. But autocracies are not, in fact, more stable than democracies. Tito’s Yugoslavia and Saddam Hussein’s Iraq disintegrated once the straitjackets that held their systems together came off. Democracies depend on a “culture of compromise” coupled with accountability and limits on the power of the state. This enables democracies to avoid catastrophic mistakes or criminal cruelty such as occurred under the unrestrained power of Mao and Stalin. 
Democracies do not commit mass murders or permit large famines. Noisy legislatures and robust courts tend to restrain government action. Autocracies may be faster and bolder, but they are also more accident-prone. Open and accountable government tends, in the long run, to produce better policies. No group of mandarins can justifiably claim to know what is best for society. They have no “hotline” to the future. Autocracies tend to be top-heavy and surrounded by secrecy and paranoia. Alternative views or arguments find it harder to surface, and transfers of power tend to be marked by intrigue and violence. Ballot boxes do not guarantee free elections. Individual and minority rights need to be protected by accepted principles of constitutional checks and balances and open political contests. Democracy has never endured in countries with non-market economies. An overweening, meddlesome state sucks the oxygen away from the free associations and independent power centres that a free economy needs to thrive. A vibrant middle class is less susceptible to state pressure and political patronage. Democracy is also more likely to succeed in countries with a high degree of social cohesion – without strong cultural or ethnic cleavages that can easily turn political conflict into violent confrontation. Consensus on vital issues such as security, wealth creation and justice is of crucial importance. A democratic constitution cannot, by itself, guarantee good governance, but it has a good chance to prevail if its proponents show success at governing. Evil flourishes when good people do nothing. The old admonition is as relevant as ever: “The price of liberty is eternal vigilance”.
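The growth-rate gap cited above looks small, but compounded over the four decades of the study it is substantial. As a rough back-of-envelope illustration (the 41-year horizon and the assumption of steady compounding are mine, not the study’s), the cumulative difference between 2.3 and 1.6 percent annual growth can be sketched as:

```python
# Compound the average annual growth rates cited from the
# Council on Foreign Relations study over 1960-2001 (41 years).
democracy_rate = 0.023   # average annual growth, democracies
autocracy_rate = 0.016   # average annual growth, autocracies
years = 41

democracy_multiple = (1 + democracy_rate) ** years   # ~2.54x output
autocracy_multiple = (1 + autocracy_rate) ** years   # ~1.92x output
advantage = democracy_multiple / autocracy_multiple - 1

print(f"Democracies: output multiplied by {democracy_multiple:.2f}")
print(f"Autocracies: output multiplied by {autocracy_multiple:.2f}")
print(f"Cumulative advantage for democracies: {advantage:.0%}")
```

On these assumptions, a 0.7-point annual gap leaves the average democracy roughly a third richer, relative to its starting point, than the average autocracy after four decades.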


Rebalancing Global Economic Growth Patterns

For much of the past five hundred years the “West” played a predominant role in the history of the world. It served as the mainspring of scientific and technological development and the spread of the Christian religion and its associated cultural characteristics. Today the “rich world” encompasses the West European countries and the United States, Canada and Australia – all countries with per capita incomes of around $40,000 per annum and above. It represents almost 700 million people – around 10 percent of the world’s population. The “rest” of the world, the other 90 percent of the world’s population, is much poorer, with only a few outliers where per capita incomes are above $10,000 per annum. Around 70 percent of those living in the “rest” of the world have per capita incomes of less than $5,000 per annum. The few outliers such as Israel, the United Arab Emirates, Saudi Arabia, Singapore, Taiwan, Japan, South Korea and Hong Kong have been assisted by their close ties with the “West” by way of occupation, investments and trade.

Emerging-market Dynamism For many years the “rest” of the world was described as the “under-developed” world, then as the “Third World” and thereafter, euphemistically, as the “developing world”. Today it is called the “emerging world” and it seems to be entering an era of spectacular growth. Its share of global GDP (at purchasing-power parity) increased from 36 percent in 1980 to 45 percent in 2008. Emerging-market consumers have increased their share of global consumption to 34 percent. The emerging world’s economic growth rate is expected to outpace that of the “rich” world by a considerable margin in the next decade. Much depends on continued export-led growth and strong growth in domestic consumption.

Export-led Growth In the aftermath of World War II the USA provided important market opportunities for Japanese and German manufacturers. In time the Japanese and German development models were followed by France and Italy in Europe and, in Asia, by South Korea, Taiwan, Malaysia and ultimately also by China. Since 1990 China’s booming economy has steadily built its wealth on export revenues, particularly those based on Western demand. Similarly, since the 1950s, vast volumes of crude oil have been exported from the Gulf States to the USA, Europe and the Far East. Built on the strong demand in Western countries, the economic transformation of Asia must be considered one of the most phenomenal developments of the 20th century. Export was the key to Japan’s success and its development model became a template for Asia’s “tigers”. They all prospered on variations of the Japanese model of export-led growth. By emphasising exports, Asian countries replaced reliance on foreign capital with a dependence on foreign demand – particularly Western markets. In turn, Western countries have exported some of their dirty industry to the developing world: steel, cement, fridges, kettles and all the paraphernalia of modern life, the production of which used to cause pollution in developed countries. In a recent article entitled “Tamed Tigers, Distressed Dragon” (Foreign Affairs, Volume 88, No.4, 2009, pp.8-16), B.P. Klein and K.N. Cukier claim that excessive focus on exports leads to economic distortions. Corporate investment, government spending and foreign direct investment flood into the export sector at the expense of the broader domestic economy. Social goods such as public education, health care, unemployment insurance and social security have been neglected. This imbalance is said to explain why Asians save as much as they do: “self-insurance” to protect themselves. A typical household saves between 10 and 30 percent of annual income. 
National savings, including government and corporate savings, amount to as much as half of GDP. The authors claim that thrift, normally a virtue, becomes a vice, pulling money away from consumption where it could be used to improve people’s living standards and their countries’ overall economies. They also argue that lending within Asian countries is disproportionately oriented toward powerful economic and political interests such as state-controlled and family-owned enterprises and mega-conglomerates.

In China small and mid-size firms represent 70 percent of GDP, but tap only 20 percent of the country’s financial resources. The authors claim that too much power is concentrated in the hands of elites in Hong Kong, Malaysia, Singapore, the Philippines, South Korea and India. The concentration of wealth and power has contributed to weak corporate governance across the region. It has stunted the growth of Asia’s middle class which, in turn, curtails private consumption and wages as a proportion of the region’s overall GDP. In China, where exports and corporate earnings soared between 1997 and 2007, wages actually declined from 53 percent to 43 percent of GDP.

Government’s Role Klein and Cukier claim that centralised planning is suppressing entrepreneurship and more robust domestic growth in Asia. The Chinese government still owns 76 percent of the country’s wealth. It controls the banking sector and oversees state-owned enterprises that account for one-third of economic output. Likewise in India: although the government has dismantled the jumble of rules known as the “licence raj”, red tape continues to strangle business activity. Many emerging countries rely heavily on state-owned enterprises which partly resemble the European trading companies of the 16th-19th centuries, such as the Dutch East India Company and Britain’s East India Company. They borrow money from the government at subsidised rates, have ties to central and local authorities and enjoy legal privileges. These enterprises are prominent in the energy and resources sectors. The world’s 13 largest oil companies (measured by reserves) are all controlled by governments. China’s largest companies are all state-backed or hybrid companies. It is not clear whether they are accountable to the government or to the marketplace. They are subject to political meddling, are considered part of the state’s “strategic” interests and are used to oil the ruling party’s patronage machine. Foreign businesses find it hard to know whether to treat them as businesses or as arms of government.

Recycled Surpluses During the past two decades, enormous financial surpluses were realised by the major exporting countries, particularly Japan, China, South Korea, Taiwan and the oil-producing states of the Persian Gulf. These surpluses were consistently recycled back to the West in the form of portfolio investments at investment banks in Wall Street and the City of London. These financial centres were considered to be the most developed financial markets offering the best and safest returns. In most emerging countries, the local financial markets were relatively “immature” in the sense that they did not offer enough trustworthy savings vehicles to absorb the savings glut. The USA, and to a lesser extent Britain, were considered favourite destinations for global capital flows on account of their broad and liquid markets for securities. As the USA was sucking up vast amounts of savings from abroad, its own current account plunged into the red. The USA needed to borrow from abroad to pay for its deficits. A by-product of the vast trade surplus in China was the piling up of reserves of US dollars which Beijing then placed mostly in US government securities as well as in quasi-government securities such as Fannie Mae and Freddie Mac.

Sources of Global Growth The global recession sent the economies of the rich world into a tailspin, but merely caused the emerging economies to slow down somewhat. Developing countries subsequently started recovering much faster than the sclerotic “rich” world. According to the IMF, virtually all of the world’s GDP growth in 2009 (measured on a purchasing-power basis) came from developing countries. The advanced economies outside the USA were expected to be a drag on global growth in 2010. The IMF also predicted that by 2015 around three-quarters of global growth will come from China and other developing countries. In a special report on innovation in emerging markets called “The World Turned Upside Down”, written by Adrian Wooldridge (The Economist, April 17th, 2010), it is stated that developing countries are becoming the hotbeds of business innovation, reinventing systems of production and distribution and experimenting with entirely new business models.

The Economist claims that the world’s centre of economic gravity is shifting towards emerging markets. Over the past five years China’s annual growth rate has been more than 10 percent and India’s more than 8 percent. They are producing breakthroughs in everything from telecoms to car making to health care. They are redesigning products to reduce costs by wide margins. By redesigning business processes, they do things better and faster than their trade-union-dominated rivals in the West. The emerging world’s growing ability to make established products at dramatically lower costs – called “frugal innovation” – is based on the principle of redesigning products and processes to cut out unnecessary costs. Entrepreneurs in the developing world are applying the classic principles of division of labour and economies of scale to new areas. They are combining technological and business-model innovation to produce entirely new categories of services. The Chinese are using flexible networks – powered by guanxi, or personal connections – to reduce costs and increase flexibility. They rope in networks of thousands of companies operating in dozens of countries to create customised supply chains and to serve as partners in solving problems. The Chinese also excel in “guerrilla” innovation, known as shanzhai. It involves “parasitising” on existing information networks or technology through ingenious copying or modification, or by forcing technology transfer on foreign suppliers trading in China. Indians rely on their tradition of jugaad – making do with what you have and never giving up. Their “frugal” products on the market are growing rapidly: cheap mobile handsets, small cars, small fridges, low-energy stoves. They produced the business model of “contracting out”, using existing technology in imaginative ways and applying cost-cutting mass-production techniques in new and unexpected areas.

Shifting Centres of Economic Gravity As the slow-growth rich world lumbers on, emerging-market leaders are advancing faster and gobbling up investment opportunities in many parts of the world. Western consumers and governments have been on a debt-fuelled spending spree for many years. The Economist reports that American household debt rose from 65 percent of GDP in the mid-1990s to 95 percent in 2009. The American government was cutting taxes and raising public spending even before the recession struck. The British government increased public spending from 35 percent of GDP in 2000-01 to 43 percent in 2009-10. The IMF reported that the scale of write-downs on loans or securities that banks worldwide would have to make between 2007 and 2010 amounted to $2.3 trillion. Of this amount more than 95 percent of the write-downs involved the “rich” countries in the West; less than 5 percent involved emerging countries. But the IMF cautioned that mortgage delinquencies continued to rise in the USA, where almost a quarter of American borrowers owe more on their mortgages than their houses are worth. By early 2010 Western governments and households were placed under growing pressure to put their financial houses in order. Consumers started cutting back on their spending in the face of high unemployment levels and shrinking wealth. It became necessary to rediscover the connection between effort and reward and to reduce the number of people subsisting on state benefits. Historically, big financial crises have been followed by long periods of slow growth and economic malaise. Olivier Blanchard, the IMF’s chief economist, predicted that painful retrenchment in Europe could last for 20 years. As growth headed south, debt headed north. Comparative analysis of concentrations of investment wealth also shows a gradual shift from west to east. Governments in Asia and oil exporters control some US$7 trillion of financial assets – most of it in currency reserves and sovereign wealth funds. 
Some analysts predict that the total of such funds could reach US$15 trillion by 2013. That would make government-controlled funds a large force in global capital markets, with the equivalent of 41 percent of the assets of global insurance companies, 25 percent of global mutual funds and a third of the size of global pension funds. Capital-starved Western banks seem to be desperately seeking cash infusions from Eastern sovereign wealth funds and other state-controlled investors. The Economist of January 18th, 2008, published the following information about the scope of sovereign wealth

funds (US$ bn): UAE 875, Singapore GIC 330, Saudi Arabia 300, China Investment Corporation 200, and Qatar Investment Authority 50. Concentrated ownership by authoritarian governments is a serious strategic as well as an economic concern. A report by Capgemini and Merrill Lynch Global Wealth Management claims that of the total world population of approximately 7 billion, around 10 million are high net worth individuals (i.e. persons with investment assets of at least US$1 million excluding their residential homes). These high net worth individuals are distributed as follows: USA 2,866,000, Japan 1,650,000, Germany 861,000, China 477,000, Britain 448,000, France 383,000, Canada 251,000, Switzerland 222,000, Italy 179,000, Australia 174,000, Brazil 147,000 and Spain 143,000. The global average wealth of the world’s millionaires stood at US$3.88 million and their total wealth stood at US$39 trillion in 2009. The total wealth of Asia’s 3 million millionaires (largely concentrated in Hong Kong and India) surged to US$9.7 trillion in 2009, compared to the US$9.5 trillion held by Europe’s richest. (See The Australian, June 24th, 2010, p.25) In contrast with the sclerotic West, China and India, in particular, have become the world’s two biggest construction sites. Their populations are much bigger than those of all the developed countries added together and growing much faster. Asia’s population is expected to change from around 4 billion in 2010 to 5 billion in 2030; Africa’s from around 1 billion in 2010 to 1.5 billion in 2030; Europe’s from around 800 million in 2010 to 750 million in 2030; Latin America’s from around 650 million in 2010 to 800 million in 2030; and North America’s from around 350 million in 2010 to 450 million in 2030. In the emerging world hundreds of millions of people will enter middle-class levels in the coming two decades. Their economies are set to grow faster too. Brainpower, too, will be relatively abundant. 
This combination of challenges and opportunities will produce an unstoppable momentum of creativity. (See Adrian Wooldridge, “The World Turned Upside Down”, The Economist, April 17th, 2010, Special Report, pp.1-16)

Managing the Risks of Contagion

By early 2010, in the aftermath of the global financial crisis, the cumulative downward spiral in the Western world generated a circular blame game. Who or what was to blame for the malaise? Ironically, there was enough blame to go around: the financial markets for pushing dodgy instruments, the regulators for not insisting on the necessary precautions, the loose monetary policies of the authorities, the excessive spending on cheap Chinese imports, inadequate saving levels in the USA and the UK, the inadequate domestic demand in China and the inadequate funnelling of emerging-market savings into emerging-market projects. There could be little doubt that the root cause was to be found in the world of finance – particularly the use of derivatives through which finance houses in New York and London could shift risks to other institutions in the form of futures, options, forwards and swaps. The complexity of these financial instruments and the ingenuity (if not the cunning) of the derivatives traders called for serious scrutiny of the practices of the banking world: financial institutions that were “too big to fail” and financial instruments that were “too complex to understand”.

The Volatility of Global Financial Markets For many generations, from the late 17th to the mid-19th century, English and American securities markets were heavily regulated. On both sides of the Atlantic, both the authorities and the general populace were ambivalent about speculative activity. Such activity was either illegal or tightly regulated. The wider population has largely been suspicious of the power and practices of financial professionals. Rules were made against deception and price manipulation. In recent decades, financial innovation has proceeded much faster than regulatory practice. Hedge funds and private equity funds have ballooned to account for trillions of dollars’ worth of assets worldwide. They also bear responsibility for the precarious volatility of global financial markets. Complex new products that are created in one financial centre involve assets in another and are sold in a third.

Capital markets are racing ahead of the regulators, which remain rooted in their national systems. Financial firms are straddling borders, using technology that has made electronic transactions faster and cheaper – also making regulatory barriers less visible and transactions riskier. Risk is being dispersed more widely across geographical areas, financial institutions and investors. This dispersion, the argument goes, allows the financial system to absorb the stresses of the rapidly growing system. But the experience of the recent global financial crisis has shown that the dispersal of financial risk also carried the germs of a contagious global financial disaster.

Predominant Financial Hubs Although many cities like Zurich, Paris, Frankfurt, Dubai, Singapore, Tokyo, Mumbai, Hong Kong and Shanghai promote themselves as financial hubs, New York and London are by far the leaders of the pack. They score well on a package of key attributes that global financial firms are looking for: plenty of skilled people, ready access to capital, good infrastructure, attractive regulatory and tax environments and low levels of corruption. Location and the use of English, the language of global finance, are also important. In terms of these criteria, New York, London and Hong Kong are considered the world’s top three financial centres. Governments are paying much attention to wooing and keeping financial firms because of the benefits they bring with them: highly paid jobs, large tax revenues and international connections. Such cities are teeming not only with banks and exchanges, but also with legal, accountancy and public-relations firms and consultancies. New York and London are ahead of the others by a large margin. They dominate their own national markets and surrounding regions. Their success generates a “network effect” that creates a cumulative momentum for more success: air links and communications networks, sound legal systems, robust financial exchanges, multinational talent pools, attractive lifestyles and playgrounds for rich financiers, and fluency in the English language. Nearly 15 percent of New York’s workforce is employed in the financial sector. Slightly more than 15 percent of its gross output comes from the financial sector, as did more than one-third of its tax revenue in 2006. The New York Stock Exchange is by far the world’s biggest market for share trading. NASDAQ, its other big stock exchange, deals mainly in technology and start-up companies on a global scale. Together the two stock exchanges accounted for nearly 50 percent of global stock trading in 2006. 
New York also houses more hedge funds and private-equity funds than any place on earth and handles around 40 percent of the world's private-banking and wealth-management business. London's roots as a centre of commerce stretch back many centuries, to the days when the sun never set on the British Empire. In the early 1960s, London's status as a financial centre was in gradual decline, reflecting Britain's waning importance in the global economy. Then the American government helpfully imposed regulations and tax levies that encouraged investors to hold a lot of dollars offshore. These interventions enabled London to develop a lucrative offshore lending business (the Euromarket). Over the years, London built on that opportunity by welcoming foreign market-makers and by offering a regulatory structure that seemed more appealing than those on offer in Paris or Frankfurt. Favourable tax laws encouraged the global elite to spend part of their time in Britain. In 1979, when exchange controls were scrapped, the free flow of capital opened the city to broader international markets. The "Big Bang" reforms introduced by Margaret Thatcher's government in the mid-1980s modernised the City of London's financial practices and lured a host of big American banks to London. A plethora of financial regulators was replaced by a single authority, the Financial Services Authority (FSA), which oversaw all of London's financial markets. The City's old "Square Mile" expanded to Canary Wharf in the old docklands area of east London. Heathrow became the destination of more than 70 million passengers per year as London became a hub in areas such as fund management and derivatives trading. London's supporters claimed that it surpassed New York in structured finance and new stock listings. It accounted for 24 percent of the world's exports of financial services (against 40 percent for all of the USA).
It had a two-thirds share of the European Union's total foreign-exchange and derivatives trading and 42 percent of the EU's share trading. The London Stock

Exchange (LSE) claimed to be the world's "most international capital market by a considerable margin". (See Julie Sell, "Magnets for Money" – a special report on financial centres, The Economist, September 15th, 2007, pp.3-11) Although the financial centres of New York and London provide a dizzying array of financial products, they also exercise inordinate power and influence over the ebb and flow of international finance. Sometimes they have acted as a force for good but, in recent times, also as a force for manipulation and destruction.

Regulating Financial Centres Regulators, who are generally appointed by and report to national governments, are expected to keep financial practices under control. But the complexities of rapid trading, particularly across multiple borders and asset classes, are exceeding the capacity of even the most advanced regulators. Increasingly, financial institutions have choices about where they do their transactions, list their shares and keep their staff. The practice of "regulatory arbitrage" means that financial firms look for the most favourable environments in which to conduct their operations and locate their staff. Thus regulatory regimes inevitably play a major part in firms' decisions about where to base themselves. By the end of 2007, it had become commonplace to talk about the possibility of London replacing New York as the world's financial centre. In September 2007, when The Economist published its special report on financial centres, it stated that "current American financial regulation – divided among many agencies at both federal and state levels – strikes many firms as complex and confusing" ... and that there are worries in financial circles that New York "... may be losing some of their business to financial centres abroad" such as London. The report also states that the Sarbanes-Oxley Act "... passed five years ago, which imposed far tougher controls on public companies, is also often blamed for making America a less attractive place for doing business". In contrast, Britain's financial regulatory system was considered to be more "user friendly", based on "broader principles" and a "risk-based" approach. Stocks, futures, banking, insurance and over-the-counter products were grouped under a single regulator, the Financial Services Authority (FSA).
(See Julie Sell, The Economist, op.cit., pp.21-22) After the onset of the Global Financial Crisis, the UK's regulatory system came under severe scrutiny: its cosy relationship with financial institutions, based on dialogue rather than strict discipline, and its insider-trading practices embedded in the old-boy network. In June 2010, George Osborne, Chancellor in the new Lib-Con coalition government, announced that the FSA was to be recast as a subsidiary of the Bank of England. In anticipation of the EU's plans to revamp the financial regulation of banks, securities and insurance, he also announced measures targeting banks' reliance on short-term financing, e.g. overnight interbank borrowing, generally used by banks to finance speculative bets. In late 2008, when banks stopped extending short-term loans to each other, the result was the cumulative freezing of global credit markets. It became obvious that the functioning and structure of the UK's financial system required fundamental reform. Viewed from the British perspective, it also became clear that the only thing worse than relying heavily on the City of London is not having the City (and its "locusts") at all.

In Search of Better Regulation Under the caption "London risks losing its global appeal", The Economist, December 19th, 2009, pp.111-112, described some of the post-crisis woes facing British taxpayers. After London had punched above Britain's economic weight as a financial centre for several decades, commentators observed that its position was threatened on various fronts. Several polls showed that Singapore, Shanghai and Zurich could challenge the prominence of London within the next decade. According to the same report, Britain took the lion's share of EU wholesale finance in 2008. Of the total amount of €218.7 billion, the

distribution was as follows: Britain €79.4 billion; Germany €28.6 billion; France €23.5 billion; Italy €15.1 billion; Netherlands €14.2 billion; Spain €13.8 billion; and Ireland €11.8 billion. It was also stated that Britain collected the lion's share of the tax paid by the "nomadic non-doms" – the itinerant hedge-fund and equity-fund managers and overpaid bankers. According to The Economist, the British government estimated that the top one percent of all taxpayers (many of whom work in finance or related industries) would pay 24.1 percent of all income tax revenues in 2009-10. In the decade before the crisis, financial companies were paying 20-27 percent of all corporation tax receipts. Anger at bankers' and financiers' bonuses became widespread in Britain, the USA and elsewhere in the world. Even President Obama was reported to have said "I did not run for office to be helping out a bunch of fat-cat bankers on Wall Street". Britain responded with a special levy on bonus payments. In the USA the question of reforming the financial system remained bogged down in endless congressional hearings and political posturing. In both the UK and the USA, the financial system constitutes the lifeblood of the economy. It is vital that its pathology be properly diagnosed and treated.
There are no simple cures, but several remedial regulatory measures are available:
- raising the size and quality of the capital and liquidity buffers banks must carry;
- lifting accountancy and reporting standards to determine the fair value of loans on banks' books and the transparency and reliability of their accounting information;
- reducing the structural reliance of banks on short-term funding;
- removing low-quality and dodgy financial instruments from the core capital of banks;
- shrinking or breaking up financial giants that are "too big to fail" and create "moral hazards";
- barring banks from proprietary trading and from owning hedge funds and private-equity firms;
- establishing industry-financed funds and resolution regimes to deal with failing firms;
- lowering implicit government support for failing firms; and
- raising the standards and requirements for granting credit in all financial institutions.
The challenge is to keep the financial system as free as possible without giving free rein to the raconteurs and racketeers. In a leader article dated December 19th, 2009, The Economist stated that: "London now risks losing its reputation as a hub of international finance, driving away mobile capital and taxpayers at a time when the government's deficit is above 10 percent of GDP." Unfortunately, The Economist did not focus on the much more serious dilemma – the international ramifications of global financial hubs that are not subject to effective supra-national regulatory scrutiny. Under the status quo, the manipulators of the capital markets in New York and London effectively manipulate the capital markets of today's world. Perhaps more daylight on the identity, structures, assets and modus operandi of the financiers would be helpful. More transparency would restore a sense of trust and confidence.

The Complexity of Contagion Risks The very scope and reach of the integrated global markets create financial risks on an unprecedented scale. These dangers result from the inter-connection of currency markets, interest rates on bonds and stock market prices, along with the growth of ancillary markets. Contagion can sweep through the world's markets in hours – endangering the economic stability of the entire world. Over the past three years, the complex global economy changed from boom to bust conditions. Defaults by some American borrowers on their sub-prime mortgages caused highly leveraged financial institutions to flounder. When pessimism set in, a self-reinforcing downward-spiralling collapse of confidence was set in motion. As asset prices fell, people spent less, businesses postponed investments and reduced employment expansion. As liquidity and credit facilities declined, a value-destroying uncertainty took hold. Forced asset sales drove prices down and markets went into a regressive tailspin. The failure of investment bank Lehman Brothers, and the losses that spilled over to money-market operators that held its debt, prompted a global run on wholesale credit markets. As it became harder for even healthy banks to find finance, businesses were cut off from all but the shortest-term financing. So the credit freeze spilled over into the prospects of the real economy,

which in itself added to concern about the solvency of banks – which, in turn, raised the spectre of runs on banks. Central banks reacted to unblock clogged credit markets by buying commercial paper from companies or by guaranteeing debts issued by banks. Governments intervened to guarantee the security of bank deposits and, by way of fiscal and monetary policies, tried to cushion the negative effects of the economic fallout: unemployment, residential property foreclosures, tumbling stock markets and the rise of poverty. Stimulatory packages were introduced to boost demand, including "pump-priming" through government deficit spending. The monetary side involved central banks controlling the volume of the money supply and lowering interest rates. Restoring confidence required fixing the financial system and addressing market failures. As the debt levels of governments kept piling up, a new set of problems arose: perceptions of failure and expectations of sovereign debt default. Investors in government bonds demand higher interest rates if expectations change about future government solvency, intensifying an already bad fiscal crisis by driving up interest payments on new debt – thereby plunging a country into fiscal and political crisis. This scenario played out in the case of Greece in early 2010. Anxiety about Greece spilled over into concerns about the state of public finances in Portugal, Ireland, Spain, Italy and potentially also the UK. Each country suffered under some combination of big budget deficits and high public debt levels.

Reducing Deficits and Debt Levels Remedies for these macroeconomic ailments are painful and hard to implement: tax increases, wage restraints and spending freezes combined with structural reforms aimed at boosting productivity. Putting these measures in place takes time as well as political determination. Some countries, like Japan, can rely on the domestic market to finance government debt because they have the benefit of current account surpluses. Others, like Greece and Spain, have to depend on foreign buyers of their debt – covered by guarantees issued by their own governments (if persuasive enough) or by other powers (an association of states such as the EU, or the IMF). The biggest challenge in such an eventuality is convincing foreign investors – such as banks, private wealth funds, hedge funds, sovereign wealth funds, pension funds and insurance companies – that their economies can revive without further infusions of credit. Whenever investors retreat from sovereign debt, the problem intensifies. In the USA a bipartisan deficit commission has been appointed to advise the President on debt levels. The USA's projected 2010 deficit of 11 percent of GDP and net public debt of 64 percent of GDP looked good compared to those of Greece, where public debt levels are projected to rise above 150 percent of GDP by the middle of the next decade. Deficit and debt reduction require a judicious combination of cost control, tax reform and new sources of revenue which are politically palatable. The IMF predicted that the Euro-zone's net public debt would not exceed an average of 68 percent of GDP in 2010, as against the USA's 64 percent. But currently the USA still has the great advantage that it holds the world's reserve currency. Scared investors around the world seek a safe haven in American Treasuries rather than in any other currency, hence the cost of government borrowing remains relatively low.
How long this pattern will endure depends on the growth performance of the various major economies in the coming years.

The Way Forward

World experience has shown that without a prosperous economy a society is condemned to impoverishment. Economic growth is simply a prerequisite for providing the basic goods for which people strive to improve the quality of their lives: a sustainable increase in living standards that encompasses material consumption, education, health and housing. What matters is successful templates for achieving sustainable economic growth. Without growth, the only way in which one person can be made better off is by taking something

away from another person – a zero-sum game. With economic growth there can be more for everyone – everyone's lot can be improved in a variable-sum game. According to the World Bank, countries that have succeeded in achieving significant economic progress in recent decades have several features in common:
- they invested wisely in the education and training of their people and in essential physical infrastructure such as roads, railroads, harbours, power generators, water supplies, machines and tools;
- they achieved high productivity from these investments by giving leading roles to private initiative and enterprise in the form of orderly and stable markets, competition and free trade;
- they encouraged new ideas, technological innovation and efforts to achieve efficiency in the production of goods and services;
- they restricted public-sector growth and found complementary ways for government and market roles to interact in the production of goods and services;
- they nourished business entrepreneurship as the link between innovation and production: to explore economic opportunities, to take calculated risks, to improve methods of production and distribution and to marshal and manage skills and resources;
- they restricted excessive population growth, which enabled them to realise a rise in the standard of living above the subsistence level – more and better food, clothing and housing, better health and education services – breeding a keen awareness of the relationship between curbing family size and the ability to realise aspirations for a better life.

From a global perspective, there are also costs involved – "externalities" such as the pollution of the environment: smoke, garbage, noise, congestion, water pollution and landscape degradation, as well as the pollution of the atmosphere by carbon dioxide and other harmful gases. In geophysical terms, planet earth, with its inter-connected web of life, may be considered a self-regulating entity, but the prominent role taken by mankind during the past three centuries has drastically interfered with the key relationships between the various forms of life on the planet. It is essential to understand the consequences of human interference with planet earth's evolutionary life cycles. Mankind is the only species equipped with the creative intelligence to grasp the impacts of its footprints on planet earth and beyond. But finding a proper balance between economic development and ecological sustainability is still beyond the reach of the world's organised societies today: higher standards of living versus environmental degradation. Progress depends, first and foremost, on the enterprise of people. Governments can facilitate progress, but people make things happen. Government's role should be promotional, in full awareness of the dangers of creeping bureaucracy and the unproductive allocation and utilisation of resources. Regulatory intervention should always be guided by the objective of creating an enabling environment for human development: needs-driven education, market-driven training, community-directed infrastructure services and emergency-driven social and health services. An enabling environment helps people to help themselves. The proper functioning of a people-based, market-orientated, enterprise-driven economy requires a small, efficient and professional public service which can justifiably be endowed with dignity and social recognition.
It is unavoidable that societies turn to the state to relieve distress, to help solve problems and to do those necessary things which would otherwise not be done at all. But it is vital to prevent the state from becoming so beneficent that it undermines the people's will to help themselves. The predominant emerging political framework is that of a democracy based on the recognition of human and civil rights, popular sovereignty and government accountability. There is strong evidence that political checks and balances, free media and open debate on the costs and benefits of government policy tend to give a wider public a stake in the benefits of development and growth. They also increase governments' incentives to perform well. Authoritarian governments are not generally conducive to progress, but the preservation of

stability and civilised law and order is essential. The common good requires a proper balance between freedom and order. It is essential to understand the critical role of the mainspring of all progress: human creativity. Favourable natural resources may provide important platforms and opportunities for development, but without harnessing and expanding the creative capacity of mankind, there can be no sustainable progress. Mankind's creative capacity is expressed through the application of rational thought, resourceful intelligence, technological innovation, productive skills, entrepreneurial organisation and management, observant experience, progressive education and systematic perseverance. Any system of social, economic and political organisation – or cultural environment – that does not succeed in progressively harnessing and expanding the creative capacity of all of its people without compulsion, is bound to be left behind.


References

Deudney, D. and Ikenberry, G.J. (2009) "The Myth of the Autocratic Revival", Foreign Affairs, January/February 2009, pp.77-93
Duncan, E. (2009) "Getting Warmer", Special Report on Climate Change, The Economist, December 5th, 2009, pp.3-22
Ferguson, N. (2008) The Ascent of Money – A Financial History of the World, Allen Lane
Gat, A. (2009) "Democracy's Victory is not Pre-ordained", Foreign Affairs, July/August 2009, pp.150-155
Huntington, S.P. (1997) The Clash of Civilizations and the Remaking of World Order, New York: Simon & Schuster
Inglehart, R. and Welzel, C. (2009) "How Development Leads to Democracy", Foreign Affairs, March/April 2009, pp.33-48
Johnson, P. (1983) A History of the Modern World, Weidenfeld & Nicolson
Klein, B.P. and Cukier, K.N. (2009) "Tamed Tigers, Distressed Dragon", Foreign Affairs, Vol.88, No.4, pp.8-16
Kurtzman, J. (2009) "The Low-Carbon Diet", Foreign Affairs, September/October 2009, pp.114-122
Levi, M.A. (2009) "Copenhagen's Inconvenient Truth", Foreign Affairs, September/October 2009, pp.92-104
Olson, Mancur (1982) The Rise and Decline of Nations, New Haven & London: Yale University Press
Olson, Mancur (2000) Power and Prosperity – Outgrowing Communist and Capitalist Dictatorships, Basic Books
Plimer, I. (2009) Heaven and Earth – Global Warming: The Missing Science, Connor Court Publishing Pty Ltd
Ranney, A. (1966) The Governing of Men, Holt, Rinehart and Winston
Sell, J. (2007) "Magnets for Money", a Special Report on Financial Centres, The Economist, September 15th, 2007, pp.3-24
Wallack, J.S. and Ramanathan, V. (2009) "The Other Climate Changers", Foreign Affairs, September/October 2009, pp.105-122
Yergin, D. and Stanislaw, J. (1999) The Commanding Heights – The Battle Between Government and the Marketplace That Is Remaking the Modern World, New York: Simon & Schuster
The Economist (2009) "Public Sector Finances: The State's Take", November 21st, 2009, pp.78-79
The Economist (2009) "Public Sector Unions", December 12th, 2009, pp.33-34
The Economist (2010) "Crying for Freedom – Democracy's Decline", January 16th, 2010, pp.52-54
The Economist (2010) The World in 2010 – Beyond the Economic Crisis
Wooldridge, A. (2010) "The World Turned Upside Down", Special Report, The Economist, April 17th, 2010, pp.1-16